Good Polls and Bad Polls: Determining the Quality and Relevance of Public Opinion Surveys

Image source: The Texas Politics Project. “Polling.”

As election season ramps up, polling results are getting more and more media attention. Polls are part of the lifeblood of politics: by gauging where the public stands on candidates and issues, they can help tip the scales of an election. Campaigns rely heavily on them to figure out what tweaks will gain support, such as adjusting their messaging or deciding which demographics to reach out to.

It also seems that polling gets increasingly criticized and attacked when election results conflict with it. With recent high-profile misses, such as the 2016 presidential election, these criticisms have arguably grown louder with each successive miss. Over the years, some political observers and members of the general public have even called for banning polling altogether.

Certainly, polling is an imperfect enterprise (much like all enterprises that human beings partake in) and faces some major issues. It is extraordinarily difficult to pinpoint public opinion for a variety of reasons, not the least of which are the idiosyncrasies of individuals. Since polling takes a sample of the population, there is always room for error. (The population refers to the overarching group of people being studied and from which the sample is taken. It could be the national population, but it could also be just one state or one congressional district.)
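That "room for error" is quantifiable: the margin of error reported alongside most polls comes from a standard formula for simple random samples. A minimal sketch in Python (the function name and the 1,000-respondent figure are my own illustration, not from any specific poll):

```python
import math

def margin_of_error(sample_size, p=0.5, z=1.96):
    """95% margin of error for a simple random sample.

    p is the estimated proportion; p=0.5 gives the widest,
    most conservative interval. z=1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / sample_size)

# A typical national poll of roughly 1,000 respondents:
moe = margin_of_error(1000)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 3.1 points
```

Note that this only captures random sampling error; it says nothing about the design problems (question wording, sampling method, timing) discussed below.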

However, much of the criticism of polling also stems from a lack of public understanding of how polls work and from poor coverage of polls by the media. Today's post will focus on the "5 W's" of polling (who, what, where, when, and why) plus "how," along with how to get a sense of the quality and relevance of particular polls.

  • Who is administering the poll?
  • What races, issues, etc. is the poll surveying opinions on?
  • Where are the poll’s respondents coming from, and which campaign(s) are in question?
  • When is the poll being administered during the election cycle?
  • Why is the poll being commissioned in the first place?
  • How is the poll conducted?

Most of the time for the media, a poll is a poll, and they are all newsworthy. Coverage tends to be a cursory look at the surface details, such as sample size, the margin of error, and the basic percentages of who responded in what way on a survey question. Part of the reason for this is the media’s economic interests: polls are usually interesting to the public, so the media reports on them to attract viewers and listeners. Yet much of the public also wants a quick summary of what a poll says in a neatly packaged soundbite, so the lack of critical coverage is not only the media’s fault.

It isn’t necessary for the public to dig into all of the nitty-gritty details of these polls, and some of the really intricate aspects won’t be released by the pollster anyway. For example, you may often see the term “likely voter.” There is no standard definition of what a likely voter is, and how a pollster defines it is proprietary and won’t be shared with the media or public.

That being said, there are more nuances of polls that are important to know, and some should be covered in more detail by the media. While it wouldn’t be feasible for media outlets to cover everything about them consistently, here are some things to know about the inner workings of polling, grouped into the “5 W’s” and “how.”

  • Who: Just because a survey is done by a pollster doesn’t mean the poll is done well, or even that it is a professional pollster doing it. Some pollsters have much better track records than others. Pew Research Center, for example, is excellent at polling, while SurveyMonkey is highly suspect. FiveThirtyEight has its own rankings of how good pollsters are. Some polls and pollsters, however, are flat-out fake.
  • What: What the poll consists of also matters a lot. Some polls are longer and much more comprehensive than others. Some polls focus only on issues while others focus on candidates, but many polls will focus on both. The ordering and wording of polling questions also matter. Errors and issues here can skew polling results by influencing respondents to answer in ways that they may not otherwise.
  • Where: Every election situation is different. No two congressional or state legislative districts are the same, nor are any two states. Every race has its own intricacies. This is a stumbling block for a lot of media analysis of polling, as commentators often extrapolate from one poll and try to apply its implications across the board. Think of all the times you have heard about a statewide, congressional, or state legislative election being a potential “bellwether” or indication of what is to come for Donald Trump in his re-election bid in 2020. Most of it is media sensationalism to generate interest in a story. On its own, any particular election doesn’t tell us much about what will happen in 2020; neither does a group of them. You can’t automatically apply a local or state election to what will happen nationally because they involve different sets of voters (save for perhaps a bit of overlap). These speculations aren’t necessarily useless, but they should be taken with large grains of salt.
  • When: Quality aside, a poll’s significance depends on when in the election cycle it is conducted. You may have heard of recent polls suggesting that several of the Democratic presidential candidates are ahead of Trump in theoretical head-to-head matchups. The problem with these is that the primaries haven’t even started, let alone the cementing of a Democratic nominee as Trump’s challenger. The first contest, the Iowa caucuses, isn’t until February. Much, if not most, of the public that will vote in November isn’t even paying attention to what is going on right now. Once the primaries near their end and a candidate such as Joe Biden, Bernie Sanders, or Elizabeth Warren is close to clinching the Democratic nomination, head-to-head polls against Trump will mean something. On issues, salience also affects opinions. For example, in the immediate aftermath of the high school shooting in Parkland, Florida, there were heightened calls for various gun control measures, but these have cooled off somewhat as time has gone on. When the poll is conducted is very important.
  • Why: While pollsters (at least the real ones) all seek an accurate picture of public opinion, they sometimes differ in why they do it. Nonpartisan organizations, such as Pew and Gallup, survey the public simply for research and educational purposes. Meanwhile, pollsters hired by well-funded campaigns are expressly doing it to 1) help their campaigns win and 2) make money. Polling, when it is high-quality, is a costly enterprise, but it is often necessary to win. Depending on survey results, pollsters will be strategic about when and which aspects of these polls are sent to the media to disseminate to the masses, putting their respective spin on where the race stands. Other observers with a vested interest in a campaign, but not expressly affiliated with it, can also use that information for their own election activities to try to help that campaign win — and sometimes pollsters want that. Super PACs are a notable example: they can get involved in elections but cannot directly coordinate with any campaigns.
  • How: Pollsters have different sampling methods for finding respondents, which has a huge effect on how accurate their polls are. One of the main goals of polling is to be as representative of the population as possible. Since it is hard to be exactly representative, there are sometimes instances of “oversampling” or “undersampling.” Oversampling means a certain demographic makes up a larger proportion of the sample than its share of the population. Undersampling is the opposite issue, where a demographic is a smaller proportion of the sample than of the population. Whites being 52% of a given population but making up 60% of a poll’s respondents is an example of oversampling. This matters because demographic groups often have diverging opinions on a candidate or issue. If a demographic is oversampled, it can skew the poll’s results toward that group’s opinions, while undersampled demographics would have the results skewed against theirs.
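Pollsters typically correct over- and undersampling by weighting each group by its population share relative to its sample share. A minimal sketch in Python, reusing the 52%/60% example above (the candidate-support numbers are hypothetical, chosen only to show the skew):

```python
# Population shares vs. who actually answered the poll.
population_share = {"white": 0.52, "nonwhite": 0.48}
sample_share     = {"white": 0.60, "nonwhite": 0.40}  # whites oversampled

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical candidate support within each group:
support = {"white": 0.45, "nonwhite": 0.60}

# Unweighted estimate reflects the skewed sample; the weighted
# estimate restores each group to its true population proportion.
raw      = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"raw: {raw:.1%}, weighted: {weighted:.1%}")  # raw: 51.0%, weighted: 52.2%
```

With these made-up numbers, oversampling the less supportive group understates the candidate's true support by about a point; weighting undoes that. Real pollsters weight on many variables at once (age, education, party, etc.), which this sketch does not capture.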

Polling will be heavily covered and debated in the media for the rest of the 2020 cycle, as usual. It will also likely receive a lot of scrutiny and criticism, particularly if polls miss on the results of the primaries and the November election. Keep in mind, however, that there is more to polling than meets the eye and what appears in the media. The issue may not be the polls themselves but the coverage of them, or it could be both.

Quality polling is difficult to do, and not all polls should be treated the same; they are not all created equal. Keep in mind the aspects of polls discussed above. Accurate analysis is the difference between good and bad media coverage of polling, and it can be the difference between whether a campaign wins or loses.

Senior Page Editor - Sayfie Review, Assistant Staff Writer - Ballotpedia (my views do not express those of my employers), M.A. in Political Science
