The "Margin of Wrongness"
Polls come with what is called the margin of error, a "scientific" measure of how far a poll's results are likely to stray from the true value at a given level of confidence. A poll of 500 people will have a wider margin of error (and will be regarded as less accurate) than one of 900 people. For state-wide polling, that is roughly the typical sample size used to reflect the opinions of several million people.
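The sample-size effect described above can be sketched with the standard formula for a poll's margin of error, MOE = z * sqrt(p(1 - p) / n). This is a minimal illustration, assuming the conventional 95% confidence level and the worst case p = 0.5; the function name is mine, not anything pollsters publish:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n.

    p=0.5 maximizes p*(1-p), giving the widest (most conservative) interval;
    z=1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 500-person poll vs. a 900-person poll, as in the text:
print(round(100 * margin_of_error(500), 1))  # ~4.4 points
print(round(100 * margin_of_error(900), 1))  # ~3.3 points
```

Note that nearly doubling the sample only shrinks the margin by about a point, which is why even "big" state polls still carry meaningful uncertainty.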
For people who are into politics and polls, Real Clear Politics collects all sorts of political polls and tosses them together to create an average. In theory, averaging many state polls creates a larger effective sample, and the average should then be a more accurate reflection of the state of the race. But it doesn't always work out that well: different polls use different sampling methodologies, have partisan tilts, or are simply large outliers that can drastically alter the "average." While the Margin of Error tries to predict how close a poll will be to reality within a certain degree of likelihood, what I like to call the Margin of Wrongness measures how far off the results actually were. Let's just say reality is often outside the Margin of Error.
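The outlier problem with a simple polling average can be shown with a toy example. The poll margins below are entirely hypothetical, chosen only to show the arithmetic; this does not reproduce RCP's actual inclusion rules:

```python
# Hypothetical final polls showing Candidate A's lead, in points:
polls = [2.0, 3.0, 2.5, 3.5]
print(sum(polls) / len(polls))  # 2.75: a modest but consistent lead

# One outlier house poll showing A up 9 drags the "average" with it:
polls_with_outlier = polls + [9.0]
print(sum(polls_with_outlier) / len(polls_with_outlier))  # 4.0
```

A single stray poll moved the average by more than a point, which is one reason a poll-of-polls can still miss badly in a close race.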
How Close Are State Polls?
There are a few ways to answer this question. If success is measured by predicting the eventual winner of a state, polls are usually pretty decent. But that success is mostly limited to easy wins, where polls show one candidate or another with a very large lead. For instance, in 2008, polls showed Obama winning Nevada by 6.5 points, but he won by almost double that (12.4). They also showed him winning Iowa by 15.3 points, which was well off the mark: he won by 9.3. Both results fell outside the statistical Margin of Error, with a Margin of Wrongness to match, but the polls still "predicted" the winner. By that metric, they are okay overall, though not so great in close races. However, if success is determined by how closely they pick the margin, the picture gets murkier.
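The Margin of Wrongness for the two 2008 examples above is just the absolute gap between the predicted margin and the actual result. A quick sketch using the Nevada and Iowa figures from the text (the function name is my own):

```python
def margin_of_wrongness(predicted, actual):
    """Absolute difference between a poll's predicted margin and the result."""
    return abs(predicted - actual)

# 2008 figures from the text (positive = Obama lead, in points):
print(round(margin_of_wrongness(6.5, 12.4), 1))  # Nevada: 5.9
print(round(margin_of_wrongness(15.3, 9.3), 1))  # Iowa: 6.0
```

Both errors are roughly six points, well beyond the three-to-four-point margin of error a typical state poll claims.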
When we look at 2008's closer presidential state races, and there were not many, the predictions are not very solid. That year, only Florida, Ohio, Indiana, Missouri, and North Carolina had the margin between Obama and McCain within just a few points. The polls picked the wrong winner in Indiana and North Carolina, and they were just barely saved in a third, Missouri. Overall, polls in the current 11 swing states were off by about 3 points in 2008. That didn't matter in the blowout states, which alone were enough to give Obama the presidency, but in the close races the polls' record was essentially 50-50.
If you were expecting a rebound year in 2010, you would be wrong. The margin accuracy in Senate races that year actually got worse, to an average of 4.3 points off. Once again, being off 4.3 points did not matter much in the blowout races. But the RCP average, and pollsters in general, whiffed on the close races once again. In Nevada, Sharron Angle was supposed to unseat Democratic Senate Majority Leader Harry Reid by 3 points; she lost by 6. Likewise, the final polls had Ken Buck winning Colorado by an average of 3 points; he lost by 1. In Alaska, Joe Miller was predicted to win (though polling there was more infrequent), but he lost to a write-in candidate. In Washington, the incumbent was supposed to win by a fraction of a point; she won by 4. Again, polls picked the wrong winner in about half of the closer, Margin-of-Wrongness cases. In 2010 governors' races, polls unanimously picked the wrong winner in Illinois and Connecticut, and in Minnesota a 5.2-point polling lead turned into a 0.5-point win. There, too, the polls were off by more than 4 points on the margin overall.