Thoughts from the Castleton Library’s Panel: The Truth is Out There: How We Know What We Know

In early March, I was asked to share my thoughts on a panel hosted by the Castleton University Library about “Truth.” Here are the questions panelists were asked to consider:

What is truth, how do we know what’s true? How do we (academia, publishers) strive for accuracy? What about scientific literacy, the scientific method, how knowledge is constructed. What should we as citizens do to stand up for science/truth or educate ourselves further on this?

Below are the thoughts I shared, as a survey researcher and applied sociologist:

The thoughts I would like to share on this topic come from two distinct but related perspectives: first as a consumer of social and scientific research, and second as a sociologist.

I think it makes sense to start with the more micro-level ideas: the way in which data and facts are presented. As a professional whose job entails the collection of opinions and data, which are regularly presented as facts, I am often uncomfortable with the way in which these are presented publicly: frequently as precise, hard truths, with no context or limitations.

When people ask me, “How did the pollsters get the most recent presidential election ‘so wrong’?” my response is that they didn’t. For example, Nate Silver’s FiveThirtyEight poll aggregator had Trump with a 29% chance of winning the Electoral College before the election. That’s right: 3 in 10. If you went to a casino with those odds, you’d feel okay, but you wouldn’t be surprised if you didn’t win. The same is true for the election. The problem is that many polls and aggregators came up with a similar result, with Clinton likely to win, and this ended up being reported and shared as a certainty rather than a probability.
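To make the “probability, not certainty” point concrete, here is a small simulation of my own (not part of the original remarks): an event forecast at a 29 percent probability is far from rare.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible
trials = 100_000

# Replay many hypothetical "elections" in which one outcome has a
# 29% probability, like FiveThirtyEight's pre-election estimate
# of a Trump Electoral College win.
wins = sum(random.random() < 0.29 for _ in range(trials))
print(f"The 29% event occurred in {wins / trials:.1%} of simulated trials")
```

Over many repetitions, the “unlikely” outcome happens nearly three times in ten, which is exactly what the forecast claimed.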

Many of us who collect data view our work as providing insight into a specific aspect of public opinion or behavior, for a specific population, at a specific time, in a specific context. The quantitative data we collect is intended as an estimate or statistical probability.

There is error. Social scientists acknowledge error. The error is both known and unknown. We have measures of some sources of error, such as margins of sampling error: those plus/minus percentage ranges that you see in the footnotes of some poll reports. Other types of error are unknown or not easily observed or quantified, such as measurement error and nonresponse error.
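For readers curious where those plus/minus ranges come from, here is a minimal sketch (my own illustration, using the standard formula for a simple random sample, not anything specific to our polls):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of sampling error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents in which 50% choose one answer:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 3.1 points
```

Note that this captures only sampling error; the measurement and nonresponse errors mentioned above are not reflected in the plus/minus figure at all.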

With the data we collect, we know all the details are important. It matters how the question is worded. It matters who is asking the question. It matters who is asked to respond and who is willing to respond. It also matters when you are asking. Public opinion isn’t true forever. Things change. The data being presented is tied to a point in time, with a particular population, asked a particular question, with error both known and unknown built in. All of this is why you often find me and others like me asking for more details about how the data was collected, so that we can better evaluate the source and likely errors for ourselves.

We know that sometimes even the act of asking a question can create an opinion. And depending on who is asked, the result can be different, which brings me to the second idea I’d like to discuss, which operates at a much more macro level.

As a sociologist, I have a difficult time discussing the concept or idea of truth without exploring social constructionism. For this, I turn to the work of Berger and Luckmann. We exist in a social world. Objects, symbols, language, and interactions all have meaning because we, collectively, have assigned meaning to them. The meaning that is assigned is based on our shared social understanding—we’ve created it.

One tangible, if simplistic, way this becomes clear is when the same symbol conveys a different meaning in a different culture. For example, the hand gesture of “the middle finger” is an insult in the United States, but its equivalent in the United Kingdom is a V with the palm facing in.

This created, shared reality goes beyond simple cultural misunderstandings. We use this reality to create our institutions, grant power, legitimacy, and authority. Through the process of socialization, we internalize this reality and view it as natural and objective.

When we act as essentialists and do not acknowledge that these things are constructed, real only because we’ve assigned them meaning, we further legitimize those we’ve given authority to and the institutions we’ve built, which can lead to discrimination and oppression. However, because we are all social actors who can partake in the construction of this shared truth, we can alter our structures and institutions, redistribute power, and legitimize diverse viewpoints.

This is why inclusion and diversity are so important. Whose reality, experience, and ideas are accepted as the truth matters. Who we ask and who we don’t ask matters. Who gets to be at the table and talk matters. I might add that I think it matters even at Castleton during an N-period panel about truth.

On Eroding Democratic Norms

The Constitution is a well-crafted document, but its power comes from the reverence we pay it; our democracy is dependent on democratic norms, such as respect for constitutional procedures, rule of law, and trust in the basic fairness of the system. Undermining that trust erodes the very foundation of our government.

The founding fathers recognized the importance of these norms. While they devised a system based on the premise that human nature is corruptible and that men were naturally self-serving and ambitious, they also believed that those who represent the people will be of superior character. “If we consider the situation of the men on whom the free suffrages of their fellow-citizens may confer the representative trust, we shall find it involving every security which can be devised or desired for their fidelity to their constituents,” wrote James Madison in Federalist 57. Madison adds, “In the first place, as they will have been distinguished by the preference of their fellow-citizens, we are to presume that in general they will be somewhat distinguished also by those qualities which entitle them to it, and which promise a sincere and scrupulous regard to the nature of their engagements.”

And since the founding, presidents of the United States have paid homage to the necessity of respect for law, the values espoused in the Declaration of Independence, and the rights enumerated in the Bill of Rights. Additionally, modern presidents have taken pains to get the facts straight, even when using those facts to spin a narrative supporting controversial policy positions. Democratic norms and facts have mattered; at least the espousal of facts and norms has mattered.

President Trump has departed noticeably from this standard, as far as I can tell. He has avowed that the media are the enemy of the people, contradicting the long-held position among American leaders that a free press is a necessary staple of a healthy democracy. He has made irrefutably erroneous statements while speaking as the head of state that he has not corrected, and his press secretary has seemingly renounced the goal of fact-checking in favor of supporting the non-factual statements of the President.

This behavior has eroded our democratic norms and principles in the short period of President Trump’s tenure so far and, if continued, could do irreparable damage.

It is common to oppose presidents for their policy positions, their distortions of fact, or on ideological principles while sharing a basic agreement on the norms of democracy, debate, and facts. It is uncommon to take issue with a sitting president’s commitment to basic American values. Even when opponents of George W. Bush (during debates about the Patriot Act) or opponents of Barack Obama (in light of health care reform) challenged the sitting president’s basic commitment to American values, those presidents responded by recognizing the concerns of opponents and reaffirming, at least rhetorically, the administration’s commitment to our democratic principles. Obama, Bush, and every modern president before them (with the possible exception of Nixon in his darkest days before resignation) recognized the legitimacy of the press and of the opposition, both within and outside of government. All of the modern presidents spoke of the great contributions of immigrants and of the value of tolerance toward others. All of the modern presidents before Donald Trump paid homage to the international community of nations, with respect for other cultures and with a commitment to international leadership.

The 2016 presidential election was far too close for anyone to claim a mandate from the electorate. The nation appeared not only closely divided, but deeply divided, as evidenced by the protests both in favor of the new president and against the new administration almost immediately. The size of the Women’s March on the weekend after the inauguration is a case in point, demonstrating the concern within the American public, and the protests following the President’s travel ban are another example of the unease.

Further concern can be measured by the historically low approval ratings that President Trump had in his first weeks in office. Gallup’s numbers show that most citizens feel “strongly” in their approval or disapproval of the new president, with 41 percent in late February expressing “Strong disapproval” of the way the President is handling his job.

We should all be concerned about such low approval ratings; these are not simply a concern of the Trump White House, but rather a deeper reflection on the angst in the American citizenry.

Republicans and Democrats in leadership positions need to resist the siren call for partisan battle and join in the common defense of basic democratic norms. There is absolutely nothing wrong with partisan battles, but they must be conducted within the framework of democratic principles, with shared facts and basic norms of tolerance and respect of opposition points of view.

Post-Truth Politics

On February 16th, I participated in a panel on Post-Truth Politics at Castleton University. The panel was part of a series organized by the Castleton University Library, and the session I participated in was titled “Fake News and Truthiness.” The following post is a slightly revised version of my opening remarks from that panel session.

Facts hold a special place in political discourse. In his defense of British soldiers following the Boston Massacre, John Adams said, “Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence.” Senator John McCain (R-AZ) reiterated these words in a Senate hearing on Russian interference in the 2016 presidential election. Both men assumed that facts were irrefutable and held a special place in our deliberations.

People may disagree about the meaning of facts, but the facts exist independent of individuals’ opinions. Still, it is important to draw a distinction between facts and truth. A fact is something that cannot be refuted through reasoning or observation, whereas truth is something which depends on a person’s perspective and experience.

In a New York Times op-ed last August (Aug. 24, 2016), William Davies, a professor of political economy at the University of London, wrote that “We have entered an age of post-truth politics.” The presumption that follows is that the previous age was an age of truth politics, a dubious presumption to be sure. Consider the verbal gymnastics of Donald Rumsfeld, the parsing of words by Bill Clinton, the enigmatic statements from Fed chairmen, and other creative political communications that have stretched truth, often past the breaking point. I don’t know that the pre-“Post-Truth World” was really a place of established truth.

In fact, I think that “truth” is too lofty a goal for political communication.

I tell the students in my Research Methods class that if they want “truth” they need to go to church … or maybe a museum or philosophy discussion. Science doesn’t provide truth. It provides a method for understanding our world that is limited to physical and/or behavioral phenomena – limited to that which can be measured.

That being said, science is very useful; it has improved the lives of nearly everyone on the planet in some measurable way. At the same time it has also created weapons so horrific they could terminate our existence as a species.

Science is a method; it’s not the outcome. It is not a moral system. It is a process for understanding our world.

We who sell science as a means for understanding, however, too often simply convey the results or findings of our research. In so doing, we fail to propagate our methods, which are the essence of scientific understanding. By failing to instill the methods of scientific understanding, focusing instead on findings, we have failed to bring the general public along to scientific reasoning. If all that matters is the findings, and the means for obtaining those findings are irrelevant, then the public is left with few criteria for judging among conflicting findings.

And that’s where we are.

Truth may be too high a bar for the sciences—the natural and social sciences. But we should be able to agree upon a set of facts—agree about testable observations corresponding to the observable world.

Our news and political debate should be based upon fact, as Adams and McCain suggest. Yet too often facts are distorted or ignored. We live in a world of slanted news and now even the oxymoronic “FAKE news.”

I want to draw a distinction between slanted news and fake news; the former has an element of truth but may suggest motives or imply a nefarious agenda, while the latter is fabricated out of thin air. An example of fake news is the story that claimed John Podesta and Hillary Clinton were running a child prostitution ring from a Washington pizzeria. The story sounds preposterous, but one individual who believed it to be true went to the pizzeria, armed, to liberate the victims.

The proliferation of slanted news allowed an environment where someone could believe the fabricated news. If your diet of news constantly tells you that Hillary Clinton is corrupt, without morals, and so power hungry that she will do anything, the fabricated story becomes plausible.

A Los Angeles Times reporter met with the man who went to the pizzeria to intervene on behalf of the children—a noble cause if there ever was one. He was not a bad man, looking to harm people. He went with the best of intentions. Yet a situation was created where people’s lives were endangered.

Fake news has real consequences.

Our political discourse is eroded by neglecting facts with impunity. When the public distrusts the media nearly as much as they distrust a leader who has on many occasions shown no regard for facts, we lack a firm foundation for rational political discourse.

If we aspire to reach some truth or greater understanding, it is imperative that we pave that path with universally recognized facts.

Understanding What Went Wrong with 2016 Polls Will Take Time

Trying to grapple with the failure of polling to predict the Electoral College victory of President-elect Donald Trump, I wrote a short piece for the Castleton student newspaper, The Spartan. In that piece, I argue that we need to examine the systematic omission of a segment of the population who are not unreachable, but rather who refuse to participate in polling as respondents. As response rates declined over the past two decades, it was not only because of those we could not reach; we also saw a rise in refusals, those whom we could reach but who refused to participate in any polls. Figure 1 shows that this segment of the public is actually greater than the proportion that we cannot reach at all. While we know very little about the unreachable segment of the population, we have some, albeit limited, information about the refusals. We need to employ that metadata to understand as much as we can about this subpopulation.

The American Association for Public Opinion Research has put together a taskforce to examine the polling from 2016; this is something to watch closely. What I believe that I know now is this:

  1. Any explanation that employs one factor to explain the polling errors is wrong. There are many factors in play.
  2. Most of the early attempts to explain the errors are also wrong; we need a thoughtful and deep examination of the methodology, which will take time and peer discussion.

More to come.

Figure 1. Pew’s response rates, 1997–2012.

A student’s view of Vermont’s Opiate Addiction

The opiate addiction issue is no longer someone else’s issue. This past July, Vermont Public Radio released a survey, conducted by the Castleton Polling Institute, that covered a broad spectrum of issues in Vermont. Opiate addiction was, of course, on that list.

Eighty-nine percent of respondents in the poll said that opiate addiction is a “major problem” in Vermont. No one responded that opiate addiction was not a problem for our state. Opiate addiction in such a small state came as a shock to many, and was even featured in Rolling Stone magazine alongside an image of a Vermonter doing heroin on a maple syrup bottle.

Figure 1. Percent of Vermonters who see opiate addiction as a major problem

The 2016 State of the State address focused largely on the opiate epidemic. Governor Shumlin spoke about daily drug-related violence and about children left uncared for by drug-addicted parents. He then discussed his plans to further battle opiate addiction in Vermont.

When respondents were asked if they or someone they know have been personally affected by opiate addiction, the state was almost evenly split, with those responding “yes” (53 percent) in a slim majority.

Of those who said yes to knowing someone affected by opiate addiction, 94 percent said that they “personally” know someone who has struggled with it. This shows that the issue reaches much farther than just the respondents.

Despite the high number of people who know someone struggling with addiction, there is a bright spot amid the sad news. Groups across the state are working within their communities to decrease the use of opiates, as well as other drugs. In Rutland County, there is an organization called Project Vision, a leading example of an organization that gets the community actively involved in fighting drug addiction and building the community. It also has local law enforcement actively involved, which creates a positive relationship between officers and the communities they serve. Opiate addiction is a serious issue, but it is being fought by many different groups of people throughout Vermont.

A student’s view on refugee resettlement

Support for Refugees
Figure 1. Support for resettling refugees in your community, by Party.

In a recent Vermont Public Radio poll conducted by the Castleton Polling Institute, 54 percent of Republicans (a slim majority) said that they would oppose an effort to resettle refugees in their community, whereas nearly 80 percent of Democrats voiced support for the resettling of refugees in their community.

Consistent with the partisan split in the level of support, only 20 percent of Democrats, contrasted with 60 percent of Republicans, felt as though refugee resettlement would have a negative impact on Vermont. Independents were split in their opinions, with just over 33 percent believing resettlement to have a positive impact and just under 36 percent believing it to have a negative impact.

Of the 637 complete interviews, only 8 respondents cited religion as the likely source of a negative impact. For those who see resettlement as a negative, the reasoning most frequently given is the cost of domestic aid and the overtaxing of local resources. Those who see refugee resettlement in a positive light are generally more united on the topic; the most frequent opinion shared was that it would make Vermont a more culturally diverse area and a better place to live.

President Obama has promised asylum in the United States to 10,000 refugees; so far, Vermont has been promised 100. Although only a fraction of the whole, a small, homogeneous state like Vermont is easily affected. Vermont makes up 0.23 percent of the total U.S. population, yet it is accepting one percent of the refugees. For the country as a whole, 10,000 is a small splash in the ocean, but Vermont is only a small pond, and 100 people can make a big splash. No one knows what will happen in the coming months, but as Rutland County opens its arms to refugees, the impact will become clearer.

Favorability and Vote Choice

(This post was co-written with John Graves, summer intern at the Castleton Polling Institute and student at Mill River Union High School, Clarendon, VT)

With the Vermont state primary behind us, the Castleton Polling Institute went back to the July VPR Poll to explore the relationship between the candidates’ relative favorability and their share of the primary votes. Without developing a “likely voter” model (which in low-turnout elections becomes very difficult), we simply used the favorability ratings from all of the respondents who identified themselves as either Democrat or Republican and as potential primary voters.

Using the principle of transitivity from rational choice theory, we made the following presumptions:

  • If Respondent A rated Candidate X more favorably than they rated Candidate X’s primary opponents, then Respondent A would choose Candidate X. Thus the probability of Respondent A’s vote going to Candidate X would be 1, and the probability of Respondent A’s vote going to Candidate Y or Z is 0.
  • If Respondent A rated all candidates the same, Respondent A is equally likely to choose any candidate. Thus, the vote probability in a three-way race is Candidate X = .33, Candidate Y = .33, and Candidate Z = .33.
  • If Respondent A rated Candidate X and Candidate Y more favorably than they rated Candidate Z, then Respondent A is equally likely to choose X or Y, but not Z. Thus the probability of Respondent A’s vote going to Candidate X would be .5, to Candidate Y .5, and the probability of Respondent A’s vote going to Candidate Z is 0.

Even if Respondent A rated all of the candidates poorly, if Respondent A were to cast a vote in a rational manner, the vote would go to whomever was rated highest on a relative scale.

Additional presumptions:

  • Respondents are more likely to vote for a candidate with whom they have at least a passing familiarity than for one they don’t recognize.
  • We presume, however, that a respondent will choose a candidate unknown to him over one whom the respondent has rated unfavorably.
  • Thus, in order of likelihood to get respondents’ votes, here are the scores assigned to each respondent for each of the candidates:

1. Very favorable rating and known to the respondent
2. Somewhat favorable rating and known to the respondent
3. Known to the respondent, but the respondent has no definite opinion either favorable or unfavorable
4. Unknown to the respondent
5. Somewhat unfavorable rating and known to the respondent
6. Very unfavorable rating and known to the respondent
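As an illustration, the ranking above can be turned into per-respondent vote probabilities. The following is a sketch of my own (the rating labels and function names are assumptions for illustration, not the Institute’s actual code): each candidate receives a rank score, and the respondent’s single vote is split evenly among the candidates tied at the best score.

```python
# Rank scores in order of likelihood to receive the respondent's vote
# (1 = most likely). Note that "unknown" ranks above any unfavorable rating.
SCORES = {
    "very favorable": 1,
    "somewhat favorable": 2,
    "no opinion": 3,       # known, but no definite opinion either way
    "unknown": 4,          # never heard of the candidate
    "somewhat unfavorable": 5,
    "very unfavorable": 6,
}

def vote_probabilities(ratings: dict[str, str]) -> dict[str, float]:
    """Map each candidate to this respondent's probability of voting for them,
    splitting the vote evenly among the best-rated (tied) candidates."""
    scores = {cand: SCORES[r] for cand, r in ratings.items()}
    best = min(scores.values())
    top = [cand for cand, s in scores.items() if s == best]
    return {cand: (1 / len(top) if cand in top else 0.0) for cand in scores}

# Respondent A likes X and Y equally and dislikes Z:
print(vote_probabilities({
    "X": "somewhat favorable",
    "Y": "somewhat favorable",
    "Z": "very unfavorable",
}))  # {'X': 0.5, 'Y': 0.5, 'Z': 0.0}
```

Summing these probabilities across all respondents in the likely-voter pool yields each candidate’s projected vote share.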

After figuring out which candidate or candidates we thought each subject would vote for, we tried to control for the most likely voters by looking at party affiliation and how likely each subject self-reported they would be to vote in the primary. We concluded that the most representative sample of likely voters would be subjects who were affiliated with the given party and who also said they were at least somewhat likely to vote in the primary. This formed a group of 69 Republicans and 138 Democrats from the poll who were predicted to vote in the primary, representing 11.9% and 23.7%, respectively, of the registered voters from the VPR poll. These numbers are slightly higher than the actual 10.3% and 16.2% turnout in the election, but that is to be expected given the polling response bias toward citizens interested in politics.

Figure 1 illustrates the percent of the vote each candidate is projected to receive based on the relative favorability ratings; in addition, the chart compares the projected vote against the actual vote received in the respective primary races.


Figure 1. Projected vote (with error bars) based on relative candidate favorability ratings, compared with actual vote totals


As Figure 1 illustrates, our model did a good job of predicting both parties’ gubernatorial primaries, with both predictions within the margin of error of the actual results; the exception was Peter Galbraith, whose actual vote total was lower than the model projected. In the Republican race, our model predicted Scott to win with 64 percent of the vote, very close to the actual 60 percent. The model also predicted that Minter would receive 48 percent of the Democratic vote, very close to the 49 percent she actually received. It is possible, although we lack any empirical evidence, that the model’s over-prediction of Galbraith could be explained by some strategic voting: voters choosing their favorite between the two front runners out of concern that Galbraith could not win.

On the other hand, the model missed the Democratic primary outcome for the Lieutenant Governor’s race, picking Smith instead of Zuckerman as the likely winner. One possible reason for this difference could be a change in public perception between the time the poll was completed and Election Day. This seems especially possible in this race, given the late endorsement from the extremely popular Bernie Sanders, which might have changed the minds of some Vermont voters. This difference illustrates the difficulty of predicting election results in advance in low-turnout elections, especially when using only favorability ratings as a proxy for vote choice. It is also possible that Progressives, who would not have self-identified as Democrats and who therefore would not be included in the model, crossed over to the Democratic primary to support Zuckerman.

Though our model successfully predicted two out of the three races, it is a respondent-level model, and therefore requires that we have a good estimate for who will vote in the primaries—which of our respondents expressing views will actually show up and cast a ballot. In a higher turnout race, such as the general election, we can estimate that a majority of respondents will follow through and vote. This is not the case with the state primary races, where fewer than 3 in 10 eligible voters cast a ballot.

Consequently, we lack a high enough level of confidence in this model to predict a future event, so we are left to test the model and do as most political scientists do: predict the past.

Campaigns Matter, Even When Most Voters Are Not Engaged

The VPR Poll in July 2016 asked Vermonters about the candidates. Respondents were asked if they have heard of each candidate for governor or lieutenant governor; for each candidate that a respondent has heard of, the respondents were asked if their opinion of that candidate was favorable or unfavorable.

The data from these two questions allowed us to assess how well a candidate is known and whether those who know the candidate have a favorable or unfavorable opinion (or no opinion at all). This is what a campaign is all about: to introduce or reintroduce one’s candidate to the voters and to create a favorable image for that candidate among those voters. Successful campaigns approach the election with a large percentage of the public holding favorable views of their candidates.

As the Vermont state primary approaches, the candidate with the greatest level of name recognition is current Lieutenant Governor and gubernatorial candidate Phil Scott. Of the 86 percent of Vermont adults who recognize Scott, 58 percent hold a favorable view of him, while only 13 percent hold an unfavorable view, giving Scott a net favorability score of 45. (Net favorability is the percentage of respondents with an unfavorable opinion of the candidate subtracted from the percentage with a favorable opinion; those with no opinion are not included in the calculation.) The only gubernatorial candidate with a higher net favorability score, higher by a mere and insignificant 1 point, was Sue Minter; however, only 63 percent of Vermont adults have heard of Minter.
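The net favorability arithmetic is simple enough to sketch (my own illustration, using the Scott numbers quoted above):

```python
def net_favorability(favorable_pct: int, unfavorable_pct: int) -> int:
    """Percent favorable minus percent unfavorable;
    respondents with no opinion are excluded from the calculation."""
    return favorable_pct - unfavorable_pct

# Phil Scott in the July poll: 58% favorable, 13% unfavorable
# among the 86% of Vermont adults who recognize him.
print(net_favorability(58, 13))  # 45
```

Because the no-opinion group is excluded, two candidates can share the same net score while differing widely in how many voters actually know them, which is why name recognition is reported alongside it.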

The following graph shows the relative awareness and net favorability for all of the candidates for governor and lieutenant governor.

Figure 1. Candidates’ Name Recognition and Net Favorability Ratings, July 2016

Of course, the job of a campaign is to improve the level of public awareness and approval of one’s candidate. In September 2015, the Castleton Poll asked Vermonters about a number of candidates who were potentially running for governor. The following table shows the changes from the fall of 2015 to July 2016.

Table 1. Changes in Name Recognition and Favorability from September 2015 to July 2016

The campaign of Bruce Lisman gained the most traction in getting the candidate’s name recognized by potential voters, going from only 21 percent of Vermonters knowing who he is in September to 61 percent this July. Unfortunately, becoming known as a candidate takes a toll on one’s favorability ratings, as Lisman’s net favorability dropped from 13 to 3. The same dynamic hit Phil Scott, who had the biggest drop in net favorability from September 2015 to July 2016. Of course, Scott had such high favorables that it was inevitable those numbers would come down once he became a candidate.

Randy Brock, a former gubernatorial candidate, has lost ground running for lieutenant governor in both awareness and favorability.

Sue Minter has made the greatest gains in favorability, picking up a net 20 points and increasing her name recognition by 25 percentage points. While she is, as of July 2016, a little less known than her primary opponent Matt Dunne, her net favorability is higher. This sets up a potentially close race for the Democratic nomination. The victor will likely be the one who mobilizes supporters best with the better get-out-the-vote effort.



A Disengaged Public

Heading into the Vermont state primaries held in August, the VPR Poll asked Vermonters about the candidates. Respondents were asked if they had heard of each candidate for governor or lieutenant governor; for each candidate a respondent had heard of, the respondent was asked whether their opinion of that candidate was favorable or unfavorable. The best-known candidate, by polling numbers, was Phil Scott (R), who was known to 86 percent of the general public, 73 percent of whom had an opinion of Lieutenant Governor Scott. In other words, only 62 percent of the general public had an opinion on the best known of the candidates for governor. The best-known Democratic candidate, Matt Dunne, was known to 73 percent of Vermont adults, 66 percent of whom had an opinion of Mr. Dunne, meaning that just under half of Vermont adults (48 percent) had an opinion about Dunne. The numbers are lower for all other candidates, including Shap Smith (D), who has served as Speaker of the House in the Vermont legislature. The following table shows the relative proportions of Vermont adults without opinions about the men and women running for Governor and Lieutenant Governor.

Table 1. Percent of Vermont Adults with an Opinion of the Candidates for Governor and Lieutenant Governor
Note: Level of opinion is based on the percentage of respondents who have heard of a candidate and who, in turn, have either a favorable or unfavorable impression of that candidate.

Of course, there are no knowledge qualifications for voting, and it is only necessary to know about the candidate one supports. However, if elections are about choice, it would be ideal if voters knew more about the choices available to them on the ballot.

The Perils of Polling in Low-turnout Primaries

Recently, Energy Independent Vermont commissioned a poll conducted by Fairbank, Maslin, Maullin, Metz, and Associates (FM3), a public opinion research group that works primarily with Democratic candidates and a wide array of governments, non-profits, and corporations. The poll interviewed 600 registered Vermont voters, and although little additional information about the methodology was published, the report claimed to represent “likely voters,” defined as those “who said they are likely to vote” (from Polhamus, Mike. “Poll finds support for carbon tax, other climate change steps.” VTDigger. Accessed online on July 12, 2016). Self-reported likelihood to vote is a notoriously biased number, even in the best of elections; this is what pollsters call social desirability bias.

The poll reported 65 percent of respondents saying that they are likely to vote in the state primary; the voter turnout in the last gubernatorial primary election without an incumbent (2010) was only 24 percent, and in 2014 the turnout was only 9 percent. Given prior elections, 65 percent is an unrealistic projection for state primary turnout.

While I admire FM3’s attempt to poll in these important primaries, I contend that a much larger sample is necessary. It may be counter-intuitive to some, but polling is much easier in large populations than in small populations. What is most difficult about the projections of the population voting in primaries is that the parameters of these populations are generally unknown. We do not have exit poll data to tell us about the general patterns of state primary voters. The best indicator we have for whether or not someone will vote in the state primary is one’s past voting history, which can be obtained from voting records. Past behavior is the best predictor of future behavior.

When the voting population is small, the danger of using past voting behavior is that mobilization of just a small number of new voters—voters not picked up in a sample frame including only past voters—can make a large impact. In other words, a strong get-out-the-vote (GOTV) movement can overcome name recognition, advertising, and direct mail disadvantages.

The FM3 poll may be right on target, but it is more likely that the respondents in the late June poll will not look like the voting population in the August 9th election because, unless this is a fortuitously unrepresentative sample of registered voters, most of these respondents are not likely to vote in the state primary.