Over the last several years I have studied statistics, analyzed government data regularly and produced charts and infographics for local and national newspapers.
I’ve found that the majority of the polls and surveys done today have major problems, and their results are often confusing. Presidential polls are the clearest example.
In this time I’ve grown fond of data. With confidence I say I love math and good numbers, such as those of the U.S. Census or the Bureau of Labor Statistics.
They operate with large budgets, pull from large samples and often employ hundreds of people, creating real scientific studies that are interesting, testable and useful. Those numbers are what we seek — but aren’t always what we get.
And though I analyze and vet survey data, I believe most readers don’t understand how these polls are conducted or what they entail.
Because media outlets prize concision and print newspapers face space constraints, we take only limited looks at studies, and we leave out most of the quantified data.
The analysis of any snapshot of data (with its supposed “levels of confidence” and “accepted” margins of error) often leads to a sliver of data that seems to say, “It’s conclusive!” or “This is probably relevant!” when it could be a bunch of misleading malarkey.
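To make that concrete: the margin of error reported alongside a poll is usually just the textbook calculation for a simple random sample. Here is a minimal sketch in Python, using made-up numbers, of how that plus-or-minus figure is produced.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p measured in a simple
    random sample of size n, at roughly 95 percent confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 52 percent support among 1,000 respondents.
p, n = 0.52, 1000
moe = margin_of_error(p, n)
print(f"52% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

That plus-or-minus three points holds only if the sample really is random, which is exactly the assumption that falling response rates and lopsided samples undermine.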
Turn on any news television broadcast and immediately you’ll hear a presidential race statistic that seems potent and relevant.
A staunch conservative and a loyal liberal will each claim, citing a “recent statistic” of equally shaky validity, that their candidate is poised to win: Public Policy Polling says Obama is going to sweep, yet Gallup says Romney is going to pull off an upset.
Why? A lot of this is because most people can’t be bothered to spend 30 seconds talking to a pollster. The Pew Research Center recently said its response rate has fallen from 36 percent to just 9 percent over the last eight years.
Regardless, some scientists march on in the quest for good numbers. Nate Silver, famed presidential polling expert and author of The New York Times’ FiveThirtyEight blog, is one of these people.
Widely considered one of the true experts in psephology, the scientific study and prediction of elections, he accurately predicted 49 out of 50 states’ outcomes in 2008, and he regularly pores over dozens of statistical measures of this race and publishes his analysis daily.
One such analysis addresses the aforementioned differences among polling agencies. He says a “house effect” consistently pulls each agency’s numbers in a particular direction, and that once you account for it, some usable signal can still be extracted from all of this.
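As a rough illustration of the idea (not Silver’s actual model), a house-effect adjustment amounts to estimating each pollster’s habitual lean and subtracting it before averaging. The pollster margins and lean estimates below are hypothetical.

```python
# Hypothetical sketch of a "house effect" adjustment: subtract each
# polling house's estimated partisan lean before averaging its results.
# All numbers are made up for illustration.

polls = [
    # (pollster, candidate A's margin over candidate B, in points)
    ("Public Policy Polling", +4.0),
    ("Gallup", -3.0),
    ("Rasmussen", -1.0),
]

# Estimated house effects (positive = habitually leans toward candidate A).
house_effect = {
    "Public Policy Polling": +2.0,
    "Gallup": -2.5,
    "Rasmussen": -1.5,
}

adjusted = [margin - house_effect[name] for name, margin in polls]
average = sum(adjusted) / len(adjusted)
print(f"Adjusted average margin: {average:+.1f} points")
```

The point of the sketch is simply that two pollsters can disagree sharply and still, after their leans are accounted for, point toward the same underlying race.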
However, he agrees there are obvious discrepancies in this year’s data depending on whether a polling house uses only landline phones to collect its data.
The National Center for Health Statistics reported in 2011 that one in three people had only wireless phones, and that an additional one in six received all or almost all of their calls on a cellphone despite having a landline. Together (one-third plus one-sixth), that leaves half of the potential sample simply ignored by landline-only polls.
The Pew Research Center similarly noted frustration with polling during the 2010 midterm elections, stating that because of this cellphone skew, election polls could be biased toward Republicans by as much as six points.
A recent Slate article on the topic argues that despite all of these accuracy problems, and despite the challenges technology poses, surveys are “probably more reliable on the whole than they were 30 years ago,” and we keep getting better at them all the time.
Or, if you’re crazy and you can pretend to reject all hypotheses, patterns and trends, you can turn to the data collected by 7-Eleven, Inc.
They’re running an ongoing presidential poll that tabulates your vote depending on which coffee cup you get — a red one or a blue one.
Setting aside the obvious selection bias (the poll counts only people who buy a coffee inside a retail 7-Eleven store, and those stores sit mostly in urban areas of just 34 states and D.C.), it is amusing to look at the data.
Its tallies somehow show several states, such as Texas, Arizona and South Carolina, as “blue” states, something that has not happened in a presidential election in decades, if ever.
While I love coffee and I love this idea, people don’t go to 7-Eleven for good coffee. But apparently the company says its data was “predictive” in the 2004 and 2008 elections as well.
I’ll listen to the Nate Silvers of the world, but I can’t really agree (or disagree) with any poll that says, “This person or this proposition is surely going to win!”
The simple fact is that we won’t know how good any poll was until Election Day actually happens; all of these are but first drafts of public opinion. Don’t wait with bated breath.