Understanding What Went Wrong with 2016 Polls Will Take Time

Trying to grapple with the failure of polling to predict the Electoral College victory of President-elect Donald Trump, I wrote a short piece for the Castleton student newspaper, The Spartan. In that piece, I argue that we need to examine the systematic omission of a segment of the population who are not unreachable, but rather who refuse to participate in polling as respondents. As response rates declined over the past two decades, the decline was driven not only by those whom we could not reach; we also saw a rise in refusals, people whom we could reach but who declined to participate in any polls. Figure 1 shows that this segment of the public is actually larger than the proportion that we cannot reach at all. While we know very little about the unreachable segment of the population, we have some, though limited, information about the refusals. We need to employ that metadata to understand as much as we can about this subpopulation.

The American Association for Public Opinion Research has put together a task force to examine the polling from 2016; this is something to watch closely. What I believe I know now is this:

  1. Any explanation that employs one factor to explain the polling errors is wrong. There are many factors in play.
  2. Most of the early attempts to explain the errors are also wrong; we need a thoughtful and deep examination of the methodology, which will take time and peer discussion.

More to come.

Figure 1. Pew’s response rates, 1997–2012.

Favorability and Vote Choice

(This post was co-written with John Graves, summer intern at the Castleton Polling Institute and student at Mill River Union High School, Clarendon, VT)

With the Vermont state primary behind us, the Castleton Polling Institute went back to the July VPR Poll to explore the relationship between the candidates’ relative favorability and their share of the primary votes. Without developing a “likely voter” model (which in low-turnout elections becomes very difficult), we simply used the favorability ratings from all of the respondents who identified themselves as either Democrat or Republican and as potential primary voters.

Using the principle of transitivity from rational choice theory, we made the following presumptions:

  • If Respondent A rated Candidate X more favorably than they rated Candidate X’s primary opponents, then Respondent A would choose Candidate X. Thus the probability of Respondent A’s vote going to Candidate X would be 1, and the probability of Respondent A’s vote going to Candidate Y or Z is 0.
  • If Respondent A rated all candidates the same, Respondent A is equally likely to choose any candidate. Thus, the vote probability in a three-way race is Candidate X = .33, Candidate Y = .33, and Candidate Z = .33.
  • If Respondent A rated Candidate X and Candidate Y more favorably than they rated Candidate Z, then Respondent A is equally likely to choose X or Y but not Z. Thus the probability of Respondent A’s vote going to Candidate X would be .5, to Candidate Y .5, and the probability of Respondent A’s vote going to Candidate Z is 0.

Even if Respondent A rated all of the candidates poorly, if Respondent A were to cast a vote in a rational manner, the vote would go to whomever was rated highest on a relative scale.

Additional presumptions:

  • Respondents are more likely to vote for a candidate with whom they have at least passing familiarity than for one they don’t recognize.
  • We presume, however, that a respondent will choose a candidate unknown to them over one whom they have rated unfavorably.
  • Thus, in order of likelihood to get respondents’ votes, here are the scores assigned to each respondent for each of the candidates (a brief sketch of this scoring follows the list):

1. Very favorable rating and known to the respondent
2. Somewhat favorable rating and known to the respondent
3. Known to the respondent, but the respondent has no definite opinion either favorable or unfavorable
4. Unknown to the respondent
5. Somewhat unfavorable rating and known to the respondent
6. Very unfavorable rating and known to the respondent
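To make the scoring concrete, here is a minimal sketch in Python of how respondent-level vote probabilities could be derived from the rankings above; the 1–6 codes and the candidate labels are illustrative assumptions, not the actual VPR coding.

```python
# Minimal sketch: convert each respondent's relative favorability rankings
# into vote probabilities. Codes follow the list above (1 = very favorable
# and known ... 6 = very unfavorable and known); labels are illustrative.

def vote_probabilities(ratings):
    """Split the respondent's vote equally among the best-ranked candidate(s)."""
    best = min(ratings.values())                       # lowest code = most preferred
    top = [c for c, code in ratings.items() if code == best]
    return {c: (1 / len(top) if c in top else 0.0) for c in ratings}

print(vote_probabilities({"X": 1, "Y": 3, "Z": 5}))    # X gets probability 1
print(vote_probabilities({"X": 2, "Y": 2, "Z": 6}))    # X and Y split .5/.5
print(vote_probabilities({"X": 4, "Y": 4, "Z": 4}))    # three-way .33 split
```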

After figuring out which candidate or candidates we thought each subject was going to vote for, we tried to control for the most likely voters by looking at party affiliation and at each subject’s self-reported likelihood of voting in the primary. We concluded that the most representative sample of likely voters would be subjects who were affiliated with the given party and who also said they were at least somewhat likely to vote in the primary. This produced a group of 69 Republicans and 138 Democrats from the poll who were predicted to vote in the primary, representing 11.9% and 23.7%, respectively, of the registered voters from the VPR poll. These numbers are slightly higher than the actual turnout of 10.3% and 16.2% in the election, but that is to be expected given the response bias in polling toward citizens interested in politics.
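Continuing the sketch above, the projected vote share is simply the average of those respondent-level probabilities across the filtered group of likely primary voters. The respondent records below are invented for illustration only.

```python
# Continuing the sketch: keep self-identified partisans who say they are at
# least somewhat likely to vote, then average their vote probabilities.
# The respondent records are invented for illustration only.
respondents = [
    {"party": "Democrat", "likely": "very likely",     "probs": {"X": 1.0, "Y": 0.0, "Z": 0.0}},
    {"party": "Democrat", "likely": "somewhat likely", "probs": {"X": 0.5, "Y": 0.5, "Z": 0.0}},
    {"party": "Democrat", "likely": "not too likely",  "probs": {"X": 0.0, "Y": 0.0, "Z": 1.0}},
]

likely = [r for r in respondents
          if r["party"] == "Democrat"
          and r["likely"] in ("very likely", "somewhat likely")]

projected = {c: sum(r["probs"][c] for r in likely) / len(likely)
             for c in likely[0]["probs"]}
print(projected)   # {'X': 0.75, 'Y': 0.25, 'Z': 0.0}
```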

Figure 1 illustrates the percent of the vote each candidate is projected to receive based on the relative favorability ratings; in addition, the chart compares the projected vote against the actual vote received in the respective primary races.

 

Figure 1. Projected vote (with error bars) based on relative candidate favorability ratings, compared with actual vote totals

 

As Figure 1 illustrates, our model did a good job of predicting both parties’ gubernatorial primary elections, with both predictions within the margin of error for the actual results, with the exception of Peter Galbraith, whose actual vote total was lower than the model projected. In the Republican race, our model predicted Scott to win with 64 percent of the vote, very close to the actual 60 percent. The model also predicted that Minter would receive 48 percent of the Democratic vote—very close to the 49 percent she actually received. It is possible—although we lack any empirical evidence—that the model’s over-prediction for Galbraith could be explained by some strategic voting, with voters choosing their favorite of the two front-runners out of concern that Galbraith could not win.

On the other hand, the model missed the Democratic primary outcome for the Lieutenant Governor’s race, picking Smith instead of Zuckerman as the likely winner. One possible reason for the difference between the model and the results is a change in public perception between the time the poll was completed and Election Day. This seems especially plausible in this race given the late endorsement from the extremely popular Bernie Sanders, which might have changed the minds of some Vermont voters. The difference illustrates the difficulty of predicting election results in advance in low-turnout elections, especially when using only favorability ratings as a proxy for vote choice. It is also possible that Progressives—who would not have self-identified as Democrats and who therefore would not be included in the model—crossed over to the Democratic primary to support Zuckerman.

Though our model successfully predicted two out of the three races, it is a respondent-level model, and therefore requires that we have a good estimate for who will vote in the primaries—which of our respondents expressing views will actually show up and cast a ballot. In a higher turnout race, such as the general election, we can estimate that a majority of respondents will follow through and vote. This is not the case with the state primary races, where fewer than 3 in 10 eligible voters cast a ballot.

Consequently, we lack a high enough level of confidence in this model to predict a future event, so we are left to test the model and do as most political scientists do: predict the past.

The Perils of Polling in Low-turnout Primaries

Recently Energy Independent Vermont commissioned a poll conducted by Fairbank, Maslin, Maullin, Metz, and Associates (FM3), a public opinion research group that works primarily with Democratic candidates and a wide array of governments, non-profits, and corporations. The poll interviewed 600 registered Vermont voters, and although little additional information about the methodology was published, the report claimed to represent “likely voters,” defined as those “who said they are likely to vote” (from Polhamus, Mike. “Poll finds support for carbon tax, other climate change steps.” VTDigger. Accessed online on July 12, 2016). Self-reported likelihood to vote is a notoriously biased measure, even in the best of elections; this is what pollsters call social desirability bias.

The poll reported 65 percent of respondents saying that they are likely to vote in the state primary; the voter turnout in the last gubernatorial primary election without an incumbent (2010) was only 24 percent, and in 2014 the turnout was only 9 percent. Given prior elections, 65 percent is an unrealistic projection for state primary turnout.

While I admire FM3’s attempt to poll in these important primaries, I contend that a much larger sample is necessary. It may be counter-intuitive to some, but polling is much easier in large populations than in small populations. What is most difficult about the projections of the population voting in primaries is that the parameters of these populations are generally unknown. We do not have exit poll data to tell us about the general patterns of state primary voters. The best indicator we have for whether or not someone will vote in the state primary is one’s past voting history, which can be obtained from voting records. Past behavior is the best predictor of future behavior.
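A rough illustration of the sample-size problem, assuming simple random sampling and ignoring design effects: if turnout in the primary resembles past elections rather than the 65 percent who say they are likely to vote, only a fraction of a 600-person sample of registered voters will actually be primary voters, and the sampling error for that subgroup grows accordingly.

```python
import math

# Rough illustration only: simple random sampling assumed, no design effect.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

sample = 600
for turnout in (0.65, 0.24, 0.09):            # self-reported vs. 2010 and 2014 turnout
    voters = round(sample * turnout)           # expected primary voters in the sample
    print(turnout, voters, round(100 * margin_of_error(voters), 1))
# 65% -> ~390 voters, MoE ~5 pts; 24% -> ~144 voters, ~8 pts; 9% -> ~54 voters, ~13 pts
```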

When the voting population is small, the danger of using past voting behavior is that mobilization of just a small number of new voters—voters not picked up in a sample frame including only past voters—can make a large impact. In other words, a strong get-out-the-vote (GOTV) movement can overcome name recognition, advertising, and direct mail disadvantages.

The FM3 poll may be right on target, but it is more likely that the respondents in the late June poll will not look like the voting population in the August 9th election because, unless this happens to be a fortuitously unrepresentative sample of registered voters, most of these respondents are not likely to vote in the state primary.

Reflections on the Vermont 2016 Presidential Primary Poll

On February 22, 2016, Vermont Public Radio released the results of a statewide presidential primary and issues poll conducted by us, the Castleton Polling Institute. The poll came out of the field on February 17 to allow time to weight the data and to give VPR reporters time to prepare stories putting the polling results in context; VPR wanted to reflect on where Vermonters stood in advance of 2016 Town Meeting Day and a presidential primary that was to feature a US Senator from Vermont in the Democratic primary and a topsy-turvy Republican race.

Since the election, I have taken some time to reflect on the poll and how well it reflected the public’s primary preferences; I’m conducting a review of our polling to assess to what extent we had a clear picture of the Vermont likely voters 12 days prior to the presidential primary and whether or not our likely voter model needs an overhaul.

Voter Turnout

We used the 2008 presidential primaries as a basis for estimating voter turnout in 2016, since 2008 is the most recent election in which no incumbent (neither president nor vice president) was seeking the nomination in either party. In addition, we presumed that the Sanders campaign had created excitement among younger voters akin to the 2008 Obama campaign. Our poll reinforced these assumptions, showing a high degree of support for Sanders among younger voters and showing that the percentage of votes cast in the Democratic primary would near (but not reach) the level of 2008. Sixty-six percent of poll respondents said that they would take a Democratic ballot, and 22 percent said that they would take a Republican ballot in the open primary; when we adjusted for the 11 percent who hadn’t yet decided in which primary they would vote (eliminating the 11 percent “unsure” and distributing that percentage proportionately between the Democratic and Republican primaries), we had 75 percent in the Democratic primary and 25 percent in the Republican primary. The adjusted values overestimated the Democratic share of primary voters (69 percent) and underestimated the Republican share (31 percent) by 6 percentage points. It appears, given the volatility and excitement surrounding the Republican nomination race, that the “unsure” voters gravitated more strongly to the Republican contest.

The Democratic Primary

In our likely voter estimation, 78 percent of the respondents planning to vote in the Democratic primary favored Sanders, in contrast to 13 percent for Clinton; 9 percent were unsure at the time, which is not an unreasonable stance two weeks prior to a primary election. Since voters cannot cast “unsure” ballots, distributing the “unsure” voters proportionately results in 86 percent for Sanders and 14 percent for Clinton, estimates that perfectly reflect the actual share of the vote for the Democratic candidates.
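The proportional redistribution of the “unsure” respondents, used above for the choice of ballot and here for the Democratic candidates, amounts to dropping the unsure share and renormalizing the rest; a minimal sketch follows.

```python
# Proportional redistribution of the "unsure" share: drop it and renormalize
# the remaining categories so they sum to 100 percent.
def redistribute(shares):
    decided = {k: v for k, v in shares.items() if k != "unsure"}
    total = sum(decided.values())
    return {k: round(100 * v / total) for k, v in decided.items()}

print(redistribute({"Democratic": 66, "Republican": 22, "unsure": 11}))
# {'Democratic': 75, 'Republican': 25}
print(redistribute({"Sanders": 78, "Clinton": 13, "unsure": 9}))
# {'Sanders': 86, 'Clinton': 14}
```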

Table 1. Polling support compared with election results for the 2016 Democratic presidential primary

The Republican Primary

Given the volatility of the Republican race in the 12 days from when the VPR poll came out of the field until Vermonters cast their votes, it is not surprising that the estimates of where voters stood on February 17 did not mirror the final Republican vote tally. Using the same process of adjusting for the “unsure” voters (by distributing their votes among the candidates in proportion to the candidates’ share of the vote without “unsure” voters), our likely voter model had Donald Trump winning the Vermont Republican primary with 38 percent of the vote, nearly 6 percentage points higher than his actual share of the vote.

We estimated that Marco Rubio would place second with 17 percent of the vote (adjusted from 15 percent), and John Kasich would finish third with 16 percent of the vote (adjusted from 14 percent). Instead, Kasich finished with 30 percent of the vote and Rubio with 19 percent.

Table 2. Polling support compared with election results for the 2016 GOP presidential primary

The difference between where we had Trump and Rubio on February 17th and where they finished on March 1 reflects a great deal of campaign dynamics, but the estimates were well within our poll’s sampling error for the subset of Republican voters (MoE = +/- 9 percentage points). Kasich’s final vote tally, however, fell outside the margin of error; his final vote share was nearly 14 percentage points higher than our February 17 estimate.
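As a back-of-the-envelope check on that margin: under simple random sampling with p = 0.5, a +/- 9-point margin of error at 95 percent confidence corresponds to roughly 120 respondents. This is only an order-of-magnitude illustration; the poll’s actual number of Republican likely voters, and the design effects from weighting, are not stated here.

```python
# Back-of-the-envelope: sample size implied by a +/- 9-point margin of error
# at 95% confidence, assuming simple random sampling and p = 0.5.
# Weighting (design effects) would lower the effective sample size further.
z, moe, p = 1.96, 0.09, 0.5
n = z ** 2 * p * (1 - p) / moe ** 2
print(round(n))   # ~119 respondents
```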

The differences between estimates made 12 days before the election and the final election tallies in the Vermont Republican contest can be attributed to two major factors:

  1. The breadth of the field changed as candidates dropped out of the race, and
  2. The efforts that the Kasich campaign put into Vermont changed Kasich’s prospects after the poll was out of the field.

By the time Vermonters cast their ballots, the field had winnowed to five active candidates; most of the Vermonters who supported Bush (5 percent), Christie (3 percent), and others (2 percent) sought out other candidates to support. Additionally, the 12 percent “unsure”—which we distributed proportionately to candidates based on their poll support—were not likely to go to candidates who had suspended their campaigns. It is not inconceivable that some of the Bush and Christie support went to the remaining governor in the race, John Kasich, but that would not explain all of Kasich’s gains.

Between the conclusion of the poll and election day, Kasich was the only candidate to visit Vermont, not once but twice (February 27th and 29th), including a visit to the more densely Republican Rutland County. Given that Vermont is smaller than the average congressional district (average population 710,767, about 14% larger than Vermont’s population), it is possible to make measurable gains in a short time because a candidate can reach a large proportion of the voters without the effort and resources it would take in a larger state.

Campaigns matter, and their activity can move voters. If we believed otherwise, we could conduct a poll at the outset of candidate announcements and use those results to predict the winners, but that would be a ridiculous proposition. In primary elections, voters cannot fall back on the decision shortcut of party preference, so candidates have more room to sway voters. The dynamics of the campaigns make it difficult to mirror election-day results days before an election, when voters still have time to change or make up their minds.

The VPR poll asked respondents if they were likely to change their minds before election day. Overall, a majority (59 percent) said that their mind was made up, but among those planning to vote in the Republican primary, a majority (55 percent) said that they might change their mind, as illustrated in Figure 1. The odds are very high that many did in fact cast their ballot for someone other than the candidate they supported in the poll.

Figure 1. Likelihood of changing one’s mind about which candidate respondents will support, by choice of primary

In general, we believe that the VPR poll and the likely voter model employed did an effective job demonstrating public views at that time; in fact, those results mirrored the final outcome in the Democratic primary, where voters had mostly settled on their choices earlier than in the Republican primary. Differences between poll results and the ultimate election results in the Republican primary are easily attributed to the Kasich campaign efforts and the changing landscape in the Republican race in the aftermath of the South Carolina and Nevada primaries.

What is “Likely” and “Unlikely” in Polling

The recent VPR poll was conducted like any other general population public opinion poll. The largest sampling frame available for telephone was utilized—in this case, a dual-frame sample of landline and cell phone numbers—and the data were weighted to reflect U.S. Census estimates for Vermont’s adult population on age and gender. The data were also weighted to reflect county-level populations proportionately.

All of the data related to issues, job performance ratings, and the 2016 Vermont gubernatorial race were weighted to reflect the views of the general population. During data collection, the Polling Institute works the sample to achieve the highest response rates possible given time and budget constraints, and in the end, the general population weights are relatively small and do not distort the original data a great deal.

The data reflecting preferences in the upcoming Vermont presidential primary are weighted to reflect the population of likely voters in each party’s primary. Weighting to the general population is far easier than weighting to likely voters because we have hard data from the Census Bureau describing the general population. The general population actually exists at the time of the poll; this is not the case when considering likely voters. The voting population does not yet exist; there is no pre-existing measure of which citizens (or poll respondents) will actually cast a ballot on March 1 (or earlier by absentee ballot).

Weighting to the voting population is weighting to a population that is still speculative. That is why we refer to likely voters as opposed to actual voters. But if we want to estimate what voters may do on election day, we have to recognize that the entire adult population does not vote, and in a primary, the proportion of voters will be lower than that found in a general election.

So, we develop a separate weight to help us understand what voters may do on March 1 as they cast their votes in the presidential primaries. The formula we used to estimate the voting population for the upcoming primary started with eliminating the views of those poll respondents we think are unlikely to vote at all; to do so, we built a model (using a second data set) that excluded all respondents who

  1. Are not registered to vote;
  2. Do not follow news about the presidential race either “very closely” or “somewhat closely”; and,
  3. Say that they are either “not too likely” or “not at all likely” to vote in the Vermont Presidential Primary.

Using those criteria, we eliminated 258 actual respondents (unweighted), bringing us to an unweighted base of 637 records, or 71 percent of the original data set. We then worked with those remaining records to devise a variable that would give greater weight to those respondents who are most likely to vote in the presidential primaries, since we know that turnout will not be as high as 71 percent. In fact, we estimate that turnout will be between 40 and 45 percent of registered voters.
To differentiate among the remaining respondents according to how likely they are to vote, we gave points to respondents meeting the following conditions:

  1. Follow news about the election “very closely”
  2. Say that they are “very likely to vote”
  3. Identify with one of the major political parties
  4. Have a college degree or more education
  5. Responded to poll after the New Hampshire Primary (Feb. 9th)

These criteria were used to generate weights for each individual case, which were then applied to the general population weights to devise a new weight variable defining our “likely voter model.” The first two criteria take into account what respondents tell us about their interest in the election and how likely they are to participate, while criteria 3 and 4 use demographic characteristics associated with voting participation. The last criterion takes into account that candidate preference shifted measurably after the New Hampshire primary showed that Trump and Sanders could win and that Kasich might be more of a contender than earlier thought.

Applying the likely voter model to the reduced data set left us with a dataset that represents 58 percent of the originally weighted sample—a figure higher than our voter turnout estimate, but one in which respondents who meet more of the likely voter criteria carry greater weight.
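A schematic sketch of how such a likely voter weight might be assembled is below; the field names, point values, and the way points scale the general-population weight are placeholders for illustration, not the actual Castleton weighting specification.

```python
# Schematic sketch of the likely voter weighting described above. Field names,
# point values, and the scaling factor are illustrative placeholders only.

def passes_screen(r):
    """Step 1: drop respondents who are unlikely to vote at all."""
    return (r["registered"]
            and r["follow_news"] in ("very closely", "somewhat closely")
            and r["likely_to_vote"] not in ("not too likely", "not at all likely"))

def likelihood_points(r):
    """Step 2: score remaining respondents on the five likely-voter criteria."""
    return sum([
        r["follow_news"] == "very closely",
        r["likely_to_vote"] == "very likely",
        r["party_id"] in ("Democrat", "Republican"),
        r["education"] in ("bachelor's", "graduate"),
        r["interview_date"] > "2016-02-09",            # after the NH primary (Feb. 9)
    ])

def likely_voter_weight(r, scale=0.25):
    """Step 3: combine the general-population weight with the point score."""
    if not passes_screen(r):
        return 0.0
    return r["gen_pop_weight"] * (1 + scale * likelihood_points(r))

respondent = {"registered": True, "follow_news": "very closely",
              "likely_to_vote": "very likely", "party_id": "Democrat",
              "education": "graduate", "interview_date": "2016-02-10",
              "gen_pop_weight": 1.05}
print(likely_voter_weight(respondent))   # 1.05 * (1 + 0.25 * 5) = 2.3625
```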

The estimates for how Vermont would vote if the election were held during the time we were in the field are shown in the following two figures:

Figure 1. Vermont 2016 GOP Presidential Primary Preferences, based on a likely voter model

 

Figure 2. Vermont 2016 Democratic Presidential Primary Preferences, based on a likely voter model

The Iowa Polls

Philip Bump of The Washington Post wrote a column titled “Why were the Iowa polls so wrong?” (February 2, 2016, The Fix), and on the same day, Mark Blumenthal and Jon Cohen published a Huffington Post piece titled “Were The Iowa Polls Wrong? Maybe They Were Just Too Early.”

So were the Iowa polls “wrong,” and if so, what was wrong with them? To answer this question, we have to ask what they are meant to reflect. It goes without saying that the media expect polls to predict elections; the horse-race questions tell us who is in the lead, and the final horse-race poll should show who is first past the finish line. But to beat the horse-race metaphor into the ground: if the horses are very close in the final poll, though not yet across the line, it is not inconceivable that the leader could stumble and ultimately lose.


“Dewey defeats Truman” is a classic example of a polling debacle, but it would not be fair to say that the polls in 1948 were wrong; the pollsters just stopped polling too early and didn’t take the measure of the last part of the race, when Truman passed Dewey on the final turn. Polls are a snapshot of opinion, and opinion can change, as it did in the 1948 presidential race and again in the 2008 New Hampshire Democratic primary, when polls showed a lead for then-Senator Obama before Hillary Clinton pulled out a victory.

So it is possible that the Selzer Des Moines Register poll was not incorrect; it was just too early, as Mark Blumenthal and Jon Cohen conclude in their analysis. In addition, Patrick Murray, the director of the Monmouth Poll, did the thankless post-mortem work on his polls to investigate what went wrong. Monmouth called over 250 of the Iowan Republicans interviewed in their poll and found that a higher than average number of Trump supporters never went to the caucuses, but more importantly, many Republicans changed their minds and decided late to caucus for Ted Cruz. This story was reported by the Huffington Post on February 5th.

So, when the most recent CNN/WMUR poll in New Hampshire came out on February 4, 2016, with the headline “NH Poll: Trump on top, Rubio in second,” I immediately searched for the level of undecideds. With only five days to go before the primary, one third of GOP primary voters said that they were undecided.

If decisions are made in the polling booth or just a day or hours before voting, polls will reflect the final vote only if the undecided voters break the same way the decided voters are distributed. If last-minute deciders break toward one candidate in larger proportions, the poll may have been “right” in that it reflected the public at that time, but it would not reflect the election outcome.
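A small hypothetical, with invented numbers rather than the actual CNN/WMUR results, shows how much the allocation of a one-third undecided block matters.

```python
# Hypothetical illustration (invented numbers, not the CNN/WMUR results):
# one third of voters undecided, allocated two different ways.
decided = {"A": 30.0, "B": 20.0, "C": 17.0}   # percent of all primary voters
undecided = 33.0

# (1) Undecideds break in proportion to the decided vote.
total = sum(decided.values())
proportional = {c: v + undecided * v / total for c, v in decided.items()}

# (2) Undecideds break late toward the second-place candidate (60/25/15 split).
late_break = {"A": decided["A"] + 0.25 * undecided,
              "B": decided["B"] + 0.60 * undecided,
              "C": decided["C"] + 0.15 * undecided}

print(proportional)   # A ~44.8, B ~29.9, C ~25.4 -- the leader still wins
print(late_break)     # A ~38.3, B ~39.8, C ~22.0 -- the second-place candidate overtakes
```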

Deception in Polling

Recently, articles were trending with the finding that a particular candidate’s supporters would support bombing a fictional city.

Survey researchers tend to seek out the methodological details of a poll immediately when we see headlines like these. In this case, very few methodological details about the poll are available on the company’s website [PPP Poll Release], and certainly not enough to meet the basic requirements of AAPOR’s Transparency Initiative (of which the company is not a member).

Without much methodological information, I do not claim to dispute the findings of this particular poll, but I would like to spend a moment considering the potential meaning and impact of this finding. Choosing anything other than the “not sure” option on this question isn’t necessarily an indication that a respondent believes the city from Aladdin is an actual place; rather, it is likely an indication of general sentiment concerning military action in the Middle East.

Respondents don’t expect polls to intentionally deceive them. Those responding trust the researcher to ask valid and fair questions. Unfortunately, headlines and poll questions like these erode the public’s trust in the field of survey research and further damage our ability to gather public opinion, behavioral, and other social science data for purposes far more important than sensational headlines.

It is one thing to design careful, respectful question-wording experiments that advance the science of survey research and help us write questions that better measure public opinion; it is quite another to use deception to collect data with the intention of generating click-bait. In all likelihood, it was the misuse of that very methodological research that told the creators of this poll they would be likely to find a sensational, headline-making result by asking this question.

As part of our commitment to our respondents and our profession, the Castleton Polling Institute will not undertake work that aims to intentionally deceive our respondents for the purpose of attention-grabbing headlines. Our intention when we ask you to participate in research is not to trick you or ask unfair questions, but rather to be able to report on the public’s true opinion.

Why Transparency Matters

The Castleton Polling Institute is proud to announce our admission as the 55th charter member of the American Association for Public Opinion Research’s Transparency Initiative (www.aapor.org/ti). This membership aligns with the Polling Institute’s commitment to data quality and integrity.

The information disclosed by Transparency Initiative members is designed to help those who use the data better understand how the data were collected. By understanding the methodological “hows,” users and readers of the data can better judge whether the results presented are precise enough, and a good enough fit, for their purposes.

The choices made when collecting data, from who is being asked the questions to the wording of the questions and so on, all affect the results. Every study and survey comes with limitations, and knowing the choices that the data collectors made helps users and readers evaluate whether the results are useful for them.

As we venture further into the 2016 primary election season, the usefulness of this type of methodological information becomes more apparent for the average citizen. Articles with headlines that can seem sensational (“clear frontrunner” or “surged ahead”) tend to rely on poll data to support those claims. By accessing the required transparency disclosure items, readers can find out who was asked to participate in the poll, what the exact wording of the questions was, when the poll was conducted, and what the margin of (sampling) error was. These items can help the reader understand why, for example, today’s headline about a candidate’s “surge” might seem strange when that candidate was caught in a scandal just yesterday; the transparency disclosure might tell you that the poll was conducted two weeks ago.

Of course statistics and data seem to be of national interest during election season, but providing basic methodological details is important to understanding the context of data collected for any purpose on any topic. You’ll see methodological details on this blog site and our website as we share data. Castleton Polling Institute is excited to join with our colleagues in promoting the importance of transparency in our work.

Professional Survey Research?

Those of us who work in the survey research industry as professionals are often met with quizzical looks when we first meet people and tell them what we do for a living. We provide high-quality research and data collection services to all sorts of entities: media outlets, businesses, organizations, governments, and academics.

The services we provide are akin to those of a professional contractor hired to build your house. Yes, you can go the do-it-yourself (DIY) route. Just as home improvement DIY has its Lowe’s and Home Depots, survey research has its own free or inexpensive tools that let you build your own survey (e.g., Survey Monkey).

These tools can work really well for some purposes and some users. Just as most people wouldn’t think twice about changing the paint color in their own living room, sometimes a DIY survey is a good fit for the purpose. And just as some people are more skilled house painters than others, some are also more skilled DIY survey researchers.

However, in reality, there is often a limit to one’s DIY capacity. Many projects and purposes are just too large, complex, or important to attempt without consulting a professional.

Professional survey researchers are highly educated and experienced in collecting data to best serve their clients’ research needs. We are members of organizations like the American Association for Public Opinion Research (www.aapor.org) that set standards for survey research. We use methods that have been scientifically tested by methodologists and statisticians to design surveys and collect data.

We work to combat “survey fatigue” (see the previous blog post from July 6, 2015) by making the survey-taking experience genuinely pleasant for respondents. We know the sources of error and how to provide our clients with data they can be confident in while understanding the inevitable limitations.

We hope that you’ll consult us when you have data that need to be collected and the job is too important, too large, or too complex to DIY. We’re also happy to work on smaller projects—just like a building contractor, we’re here to save you from the uncertainty and frustration that predictably come with DIY.

Aren’t there too many surveys?

Far too often, the vicious circles we encounter are of our own making. For instance, imagine the child who says to the parent, “This food is disgusting,” and the parent responding, “Well, I’m not going to put a lot of time into making a dinner you’re not going to eat.” The child does not like the hastily prepared meals that result, and the parent who witnesses the child’s continued dissatisfaction is not incentivized to prepare anything more appetizing.

This is the case in survey research today. We develop surveys too quickly, send them out untested, and then bemoan poor response rates. Potential respondents are inundated with surveys asking for their views on every matter under the sun, and people do not see the connection between survey response and improvement in the areas that are the subject of surveys. Researchers cannot draw adequate inferences from low rates of return, and respondents don’t reply because they don’t see adequate inferences being drawn. And the circle continues.

But there is a way out of the vicious circle.

First, stop doing so many surveys. Survey research is riddled with systematic error, and while it is easy to do a survey, it is very difficult to do one well. If one can obtain data from sources other than survey respondents—e.g., administrative records, subject observations, or historical records—then conducting a survey is not only poor methodology; it also wastes respondents’ time.

Second, we need to show more respect for our survey subjects. We do this by honoring their time and effort: constructing survey instruments carefully, pre-testing them before they go out, and making full use of the data so that the time taken to complete a survey is not in vain.

Third, take time to learn the best practices of survey research before soliciting survey responses. We are not spending enough time considering precisely what we need to learn and how we will work with the data once collection is complete. Those decisions should be made before we start to develop a survey instrument.

If we can (a) reduce the number of surveys, (b) reduce the level of burden placed upon survey respondents, and (c) make it clear how the data will be useful and fully considered, then we may have an argument to make to those who don’t respond to our surveys. The culture of survey response, however, will not change until the culture of those conducting surveys does.