If I had a nickel for every time someone asked me what our ‘numbers’ say about the Republican Presidential Primary, I could pay for an expensive phone survey by now. Yes, we show Donald Trump having a healthy lead. Yes, Jeb Bush and Scott Walker are in a dead heat for 2nd and 3rd. Yes, it’s a jumbled hot mess from there.
Normally, we would just share our results at will, as if they were a study about chicken wings or sunscreen. There’s little-to-no money in political horse race polling; most firms just do it for the PR. We’re happy to give plenty of our political research away for free.
But we’re also very careful. Polling the US electorate is an imprecise science; every honest person in the field will tell you that. At CivicScience, we won’t publish horse race polls within a couple of days of an election, for fear that imperfect data could influence even one person to change their vote or stay home altogether. Too many voters (and donors) allow high-margin-of-error polls to sway their behavior – as if electability or momentum mattered more than ideology or character.
Our reach is minuscule by major media standards. At worst, our poll results might be seen by 5,000 people on Twitter and a few thousand more who read our blog regularly. Mass media outlets that publish horse race polling, without effusive caveats about its weaknesses, can have a powerful and dangerous impact on the political process.
Fox News is now taking things to a harrowing level, using polls to position GOP Primary candidates in the first major televised debates. Make no mistake – the difference between winning a podium at the primetime debate and being relegated to the 5pm ‘Kids’ Table’ will be enough to send several contenders packing. In a race where 75% of the candidates are separated by just a few percentage points, we’re left to trust the judgment of Fox News and the handful of pollsters they bless. Do you?
The decline in poll response rates is well documented. Landline phone ownership will soon dip below 50%. Few people answer their cell phones when called from a number they don’t recognize. The people who are left represent a small and distinct group. The fact that some pollsters come close, even some of the time, is a testament to their resourcefulness.
This past weekend, the Wall Street Journal published a national poll of 252 Republican voters, conducted over five days. You read that right: 252 people. Barely five people per U.S. state. The poll had a margin of error of 6.17% – meaning that every candidate outside the top three could be ranked anywhere from 4th to 14th. If we reported data from a 252-person survey to one of our corporate clients, they would have security escort us from the building.
Another poll, released yesterday by Monmouth University, cited 423 Republican voters. Better than 252, sure, but still carrying a margin of error of over 4.8%. Are these sub-500-person samples reflective of the tens of millions of Republican voters across the country? Maybe they are. Maybe they’re not. Regardless, when these polls can have such a profound impact on elections, is ‘maybe’ good enough?
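For readers who want to check those figures, here is a quick back-of-the-envelope sketch using the textbook worst-case margin-of-error formula for a simple random sample at 95% confidence. This is an illustration, not a claim about how either pollster computed its published number; real surveys often add rounding and design effects on top of this.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error at 95% confidence for a simple random sample of size n.

    p=0.5 maximizes p*(1-p), giving the widest (most conservative) interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (252, 423):
    print(f"n = {n}: +/- {margin_of_error(n):.2%}")

# Output:
# n = 252: +/- 6.17%
# n = 423: +/- 4.76%
```

The simple formula reproduces the Journal’s 6.17% exactly and lands just under Monmouth’s reported figure, which presumably reflects rounding or survey design adjustments. Either way, the intervals dwarf the gaps separating most of the field.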
At the same time these polls were being indiscriminately broadcast over traditional and social media, a quiet event occurred that sparked a bunch of ‘Atta Boys’ around the CivicScience office. The Marist Institute for Public Opinion, a well-respected polling organization, and the McClatchy Company, one of the remaining bastions of professional journalism, suspended their polling of the GOP Primary. Their justification? According to Marist, “too much precision” is being attributed to polls where candidates “are just a small fraction apart.”
Moreover, they added, “It’s making candidates change their behavior [to gain even a small bump in the numbers]. Now the public polls are affecting the process they’re supposed to be measuring.”
Right on!
None of this is to say that opinion research can’t be trustworthy and actionable when used properly. Most of our research is conducted on behalf of consumer companies across industries like media, retail, technology, finance, and consumer goods. The insights professionals who digest and act on our data, however, are trained to understand the shortcomings and risks. If a topic we’re studying comes back 51% to 49%, our clients don’t base million-dollar decisions on it.
Holding decisions that have such a significant impact on the democratic process to a lower standard is simply reckless.