Model Update 10/23

This is the fit of the new correlated model. The map was made on Thursday, and I ran the model again today. You may notice lots of changes from the other model. This is expected: this model better “learns” from similar states and uses the last election’s results as a starting point. There are lots of different forms of this model. I struggled to choose a single one, and when this ends up in my dissertation, I’m going to discuss multiple models.

Edit: Here is the Google Drive link for the daily model updates. This new model is labeled “correlated_fit” followed by the date.

Scale: 0-0.05 Safe Red (darkest), 0.05-0.15 Likely Red (second darkest), 0.15-0.25 Lean Red (light red), 0.25-0.75 Tossup (brown), 0.75-0.85 Lean Blue (lightest blue), 0.85-0.95 Likely Blue (second darkest), >0.95 Safe Blue (darkest)
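The cutoffs above can be written as a small helper function. This is just a sketch: how exact boundary values are assigned is my own assumption, since the legend doesn’t specify it.

```python
def rating(p_blue):
    """Map a modeled probability of a Biden win to the map legend category."""
    if p_blue < 0.05:
        return "Safe Red"
    elif p_blue < 0.15:
        return "Likely Red"
    elif p_blue < 0.25:
        return "Lean Red"
    elif p_blue < 0.75:
        return "Tossup"
    elif p_blue < 0.85:
        return "Lean Blue"
    elif p_blue < 0.95:
        return "Likely Blue"
    else:
        return "Safe Blue"
```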

Average electoral votes: 359

95% credible interval for electoral college: 290-416
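Summary numbers like these typically come from posterior simulation draws. Below is a minimal sketch of that bookkeeping; the draws here are fake, randomly generated placeholders standing in for the model’s actual simulated electoral college totals.

```python
import random
import statistics

# Placeholder posterior draws of Biden's electoral vote total.
# A real run would use the model's simulated outcomes instead.
random.seed(0)
draws = [round(random.gauss(359, 32)) for _ in range(10000)]

avg_ev = statistics.mean(draws)

# quantiles(n=40) returns cut points at 2.5%, 5%, ..., 97.5%;
# the first and last give a 95% credible interval.
qs = statistics.quantiles(draws, n=40)
lo, hi = qs[0], qs[-1]

# Probability of winning: share of draws at or above 270 electors.
p_win = sum(d >= 270 for d in draws) / len(draws)
```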

Analysis:

I think there might be a slight underestimation of the uncertainty in the electoral college outcome. I’m reading about a 99% probability that Biden wins if the election were held today. I think that’s probably high, but Biden should still win provided there isn’t some crazy new event. When applied to 2016 data, this model read about a 60% chance for Clinton. I am not putting a lot of faith in the electoral college probability because I can’t reliably vet it using past data. It’s really hard to model the correlation between states.

This model is a polling aggregation model, not a forecast, so this fit reflects the race as if the election were held today. But since early voting is common and the election is so close, the model is now effectively predictive.

There are some things I’m a little skeptical of. I compared this model to the Economist’s model because they have some similarities. I think the estimate for Iowa is too high for Biden, although I would not rule out a Biden win there. I am also wondering if the model is being overconfident in Michigan, Pennsylvania, and Wisconsin.

Model Update 10/15

I am writing this week’s update a day early.

So I’ve developed a functional second model that allows for correlation between states, and I am working on testing it. Next week I might base the post on the second model if I can show it is more accurate than the current one.

This week’s map:

Scale: 0-0.05 Safe Red (darkest), 0.05-0.15 Likely Red (second darkest), 0.15-0.25 Lean Red (light red), 0.25-0.75 Tossup, 0.75-0.85 Lean Blue (lightest blue), 0.85-0.95 Likely Blue (second darkest), >0.95 Safe Blue (darkest)

Biden/Trump is likely to lose about 1 in 4 of their lean states and 1 in 10 of their likely states. The expected number of electors is Biden 335, Trump 203; this adjusts for the uncertainty in winning a lean or likely state. Except for Texas, the likely and lean red states are rated that way because of insufficient polling data. CA, WA, and OR are likely blue for the same reason: insufficient polling data.
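The expected-electors adjustment described above amounts to weighting each state’s electoral votes by the modeled chance of winning it. A sketch with made-up states and probabilities (not the model’s actual output):

```python
def expected_electors(states):
    """states: list of (electoral_votes, p_biden_win) tuples.

    Returns expected elector counts for (Biden, Trump), weighting
    each state's electors by the probability of winning it."""
    biden = sum(ev * p for ev, p in states)
    trump = sum(ev * (1 - p) for ev, p in states)
    return biden, trump

# A likely state (p = 0.9) contributes 9 in 10 of its electors in
# expectation, and a lean state (p = 0.75) about 3 in 4, matching
# the loss rates quoted above. These three states are hypothetical.
example = [(10, 0.9), (16, 0.75), (38, 0.2)]
biden_ev, trump_ev = expected_electors(example)
```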

Analysis:

Not much change in the model; it is very stable. The number of polls is increasing, which is nice. I think Biden remains the likely winner.

My 2020 Model

First up, I want to be super clear: this is NOT A FORECAST or a prediction of what happens on election day. This is a polling aggregation model. Think of it as a fancy Real Clear Politics average, except that this model comes up with good estimates of uncertainty and is a little better at predicting the final outcome. This model only predicts the election well at the very end of the cycle, starting at about six weeks before the election.

This model may not be final. I am going to test a few new features on historical polling data. If they work, they will be added into the 2020 model.

I want to explain why I do what I do.

For starters, I was a little bit surprised when the Economist came out with their model. It did some things I wanted to do. I agree with most of how it is structured. But I didn’t want to basically copy them. I wanted something novel.

One interesting thing I have discovered in my research is that FiveThirtyEight’s model is not that much better at predicting election outcomes (51% for Trump, 49% for Clinton) than basic polling averages where you average the last few polls. The models from my undergraduate research were better than a polling average but not as good as FiveThirtyEight’s. I wanted to see how accurate a Bayesian election model could be if it had to run on a standard laptop in a couple of minutes.

Data Inclusion Criteria

I am using the Economist’s data inclusion criteria. I assume that the results in one poll don’t affect the results in another poll. One type of election poll is the tracking poll, where the same people are interviewed multiple times. Tracking polls depend on the previous poll’s results, so I exclude them.

What I Learned From 2016

Recently I have been working on a chapter of my dissertation about polling accuracy and modeling polling error. I’ve done a big analysis on state-level Presidential polls from 2008-2016 and I will talk about that in a later post. Looking at the data from 2016 has been a chance to really dive into the data I collected over the years and reflect on what happened.

In 2016, I was a college sophomore. I was in my first mathematically based statistics class. Now I’m a third year Statistics PhD student with most of my classes completed. I’ve grown a lot as a statistician over these past four years, and I think I still have a little more growing to do. But I’ve compiled three things I’ve learned.

  1. Don’t forget that the general public aren’t experts
  2. Embrace Uncertainty
  3. Reflect on the Mistakes of the Past to Prevent Them in the Future

Don’t Forget that the General Public Aren’t Experts

I’ve always viewed election modeling as a pathway to public engagement and education about statistics. Still, you have to remember that not everyone knows what margin of error means or understands how to interpret a probability. Formal statistical training often focuses on the limitations of statistical modeling and how it will always have uncertainty. But people often equate statistics with mathematics: something that provides the same solution with every attempt and won’t be wrong. If you are going to put a model out to the public, you need to explain it in a way that can be understood by a non-expert but still provides the information experts need to evaluate your model.

Embrace Uncertainty

Polling and polling averages are limited in their precision. If you have an election within a point, polls will not be able to predict the winner. Models have more precision, but there are many examples of cases where they, too, are uncertain. Polling data can be limited, and elections happen once. Sometimes we can’t know who will win. Additionally, it’s hard to evaluate probability predictions because there is only one 2020 presidential election. It is important to be open about the uncertainty in your model and the potential for error.

Reflect on the Mistakes of the Past to Prevent them in the Future

The polling and election modeling world has spent a lot of time reflecting on the 2016 Presidential election. The errors in 2016 were a mixture of insufficient data, some bad methodological choices, and bad luck. Some of the methodological mistakes are easy to fix, but some of the important ones are difficult. A big issue in 2016 was weighting the data for education, because non-college-educated individuals are less likely to respond. But it’s not clear how to weight the data well when the composition of the electorate is constantly changing and turnout is hard to predict. Still, pollsters and modelers know what the challenges are and know how polls have performed in the past, which can help us assess the likely accuracy of polls. We can learn from 2016 so that our models and polls can be the best they can be.

I hope that I’m not writing a piece like this in 2024. I hope that the public will listen to the experts on what typical polling accuracy is and not blindly assume that margin of error covers everything. I hope that the polls and models do well and that polling will be something that the public values and trusts. But now all we can really do is wait and see.

The Nuance of Polling

I’m an election modeler, and my entire dissertation focuses on analyzing public opinion polling data in one form or another. I love polling. Often on this blog or on Twitter I’m cautious about a new poll or about what an election model can actually tell us. So I thought perhaps I should explain why polling is important even if it may not tell us who is going to be the next President.

I feel there is an imbalance in how polling is viewed. Some approach it as completely certain, treating anything outside the margin of error as impossible. Others dismiss polling because they can’t understand how one thousand people can tell us what the entire country thinks, or because they believe 2016 showed polling is a failure. Neither of these views is accurate.

The truth is polling remains our only rigorous and mathematically grounded tool to estimate public opinion. Elections can be forecast using economic and other data, but only because the true proportion voting for a candidate is eventually known. Polling can tell us what percentage of individuals approve of a certain policy, or unravel how an individual’s policy preferences for preventing terrorism relate to their risk assessment of future terrorist attacks (as I’ve done in a recent project). We can understand how and when people’s opinions do and don’t change.

Polling isn’t a magic problem solver. The results from a poll cannot be treated as 100% correct. Polling has error. Sometimes that error puts us in positions where all we know is that a race is too close to call or that the country is evenly split in its support for a policy. We have to acknowledge that margin of error won’t solve all our problems and that polling is hard work. It’s not easy to predict who is a likely voter, or to decide between an expensive phone poll and a larger internet panel, or to determine why someone left a question blank.

Polling can be very important because it signals to our government what the people want, even though it sometimes doesn’t give us a clear answer. It’s possible for polling to “be wrong” just by random chance. But it is also possible for it to give us a clear answer. Often, it gives us something to point to as important for the government to act on, in a way that is far more representative than calls to a congressman, your friends’ opinions, or social media comments. If followed by leaders, polling could be a pathway to a more direct democracy without forcing every citizen to give opinions on every issue.

This election, it’s important to embrace the nuance in polling. Every poll is unique and needs to be interpreted holistically considering when and how it was conducted. Every poll on the same issue or election should have different results and that’s expected and ok. Polling is usually going to be off by a handful or two of percentage points, but sometimes the message is clear because the support is so strong or weak. But polling can give us answers when nothing else will, and for that, it will always be valuable.

Unraveling Polling Error (with GIFs and no math)

You can’t understand what polls mean until you understand how they work. The most common misunderstanding people have is about what margin of error means and what type of error it covers. The margin of error doesn’t tell the whole story. Polling error has many different components. Some types of errors are easy to predict, but others can be impossible to guess. I’m going to talk about the three most important sources of error in polling broadly, focusing on election polling. I want to do this so that you better understand why margin of error is not the only type of error, and why polls work well, all things considered. This post is not about how polls are wrong; it is about how they are right in the midst of numerous challenges.

Three Types of Error:

  1. Sampling Variation: effects of using only a subset of a population
  2. Incorrect Responses and Changes in Opinion: the respondent intentionally or unintentionally does not give the correct response, or later changes their mind
  3. Miscoverage: the population that was sampled was not the population of interest

Sampling Variation

[GIF: SpongeBob, “Endless Possibilities”]

One concept people struggle with in statistics is variation. Statistics involves math, but there is no longer one solution. If I solve an algebra equation over and over again, my answer shouldn’t change. Statistics, though, is based on samples that are subsets of a population. If I collect a sample over and over again, the numbers will be slightly different almost every time. In most cases, my estimate from the sample will not exactly match the true population. You can see this by flipping a coin a bunch of times. A US coin is going to be split relatively evenly, but when you start flipping the coin yourself you might not always get exactly 50% heads and 50% tails. This is because coin flips are random. Polls work in a similar way.

Sampling variation is a huge driver of polling error. It is completely mathematically expected that a sample of a few hundred to a few thousand people isn’t going to tell us the exact true proportion of support for a candidate or policy. If we make some assumptions and adjustments we can calculate a quantity called the margin of error that gives us an estimate of how much randomness to expect. Margin of error only includes error from sampling variation and not the other two types of error I will talk about later.

Typically you are going to make the following assumptions:

  1. People who were sampled but didn’t participate in the survey are not any different from those who did, after you control for demographic variables. This is obviously impossible to verify directly.
  2. You have a decent estimate of the target population. The target population is who you want to poll, and the main target populations are likely voters (people thought likely to vote), registered voters (people actively registered to vote), and all adults (including people who can’t vote). The census gives us a pretty good idea about the population characteristics of all adults, and you can combine this with other information to get estimates for registered voters. Likely voters are hard to identify because the respondent isn’t always the best predictor of whether they will vote, and turnout varies from year to year.

Assumptions are very common in statistics and sometimes it’s difficult to assess how reasonable an assumption is. You may have some doubts about both assumptions being valid. To a certain extent, we know that these assumptions aren’t completely true, but there is a concept in statistics called robustness. Robustness says that under certain conditions (usually large sample sizes) small violations in assumptions are ok, but it has to be acknowledged that violations in assumptions can affect results.

Depending on details about the poll, the margin of error is usually about 2-5 points. I’ll omit the calculation because it gets complicated in practice: most surveys use complicated (but statistically valid) procedures to adjust the poll to match the estimated population characteristics (this is called raking or weighting, and it involves a lot of math). Now, what does this margin of error mean? Typically you add and subtract the margin of error from each estimated quantity, and this gives you a range of probable values. Theoretically, if no other types of error existed (they do), the assumptions held, and you had dozens upon dozens of polls, 95% of those intervals should contain the true population proportions. But in election polls it’s common to have undecided voters, and while it is important to track them, they complicate things since undecided isn’t really a ballot option. A workaround in election polling is to look at the difference between the candidates, double the margin of error, and see if the resulting interval contains 0; if it does, the election is too close to call based on this poll.
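For the curious, the textbook (unweighted) version of the omitted calculation looks like this. The poll numbers below are hypothetical, and real polls layer weighting adjustments on top of this formula.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Textbook 95% margin of error for a single proportion from a
    simple random sample of size n (ignores weighting/design effects)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 people with a candidate at 50% support:
moe = margin_of_error(0.50, 1000)       # about 0.031, i.e. ~3.1 points
interval = (0.50 - moe, 0.50 + moe)     # range of probable values

# The workaround for the candidate margin: double the margin of error
# and check whether the interval around the margin contains 0.
margin = 0.48 - 0.46                    # hypothetical Biden minus Trump
too_close = abs(margin) < 2 * moe       # True: too close to call here
```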

Incorrect Responses and Changes in Opinion

[GIF: Glee, Brittany Pierce, “Is it the truth?”]

People make mistakes in all aspects of life, and polls are no exception. Additionally, you can lie to a pollster. Unfortunately, in most polls you don’t know if the answer from the respondent was accurate. If we didn’t have to ask the respondent the question because we already knew the answer, there wouldn’t be much reason for polling. There is no mathematical formula that tells us exactly how to adjust for lying or mistakes in surveys. Respondent mistakes and lying are not in the margin of error because they are too hard to estimate exactly. You can test respondents’ ability to follow directions by telling them what response to pick on test questions (e.g., “Select True for the next question”), and if they get the test questions wrong you can throw them out. Individuals who later change their minds also fall into this category. Early in an election, there are going to be undecided individuals and individuals who later change their minds. Undecided voters are likely a large driver of polling error because it is unknown whether they will vote and, if so, for whom. Sometimes the people who decide late in the campaign vote differently than other deciders, and this is believed to be a large factor in the 2016 polling error. If a poll has a higher percentage of undecided voters than the difference between the candidates, the race can be competitive regardless of the margin of error.
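That last rule of thumb is easy to state as code. A sketch, with hypothetical poll numbers:

```python
def competitive_from_undecideds(cand_a, cand_b, undecided):
    """If the undecided share exceeds the gap between the candidates,
    the race can be competitive regardless of the margin of error.
    All arguments are percentages from a single (hypothetical) poll."""
    return undecided > abs(cand_a - cand_b)

competitive_from_undecideds(48, 44, 8)   # 8 > 4: competitive
competitive_from_undecideds(52, 40, 5)   # 5 < 12: the gap likely holds
```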

Miscoverage

[GIF: “Make it fit”, tight shoes]

There are a few ways to get a sample. Ideally, your sample is random, but there are only a few ways to recruit survey respondents: call random phone numbers, select random addresses, “randomly” recruit people on the internet, or, for exit polls, stand outside polling locations and ask every nth voter. None of these sources of data is exactly representative of the American voter. Selecting random addresses can be random and representative of American adults, but people move, and mail samples are typically used to recruit people for future phone or internet polls because mail is slow. Not everyone has a phone or internet access. Statistical theory often assumes you have a perfect source to draw a sample from; we don’t, so we have a little more error. And standing outside every single polling place is obviously impractical, and it’s hard to select a subset that will be completely representative. When we use a sample that doesn’t exactly fit, we don’t always get perfect results.

Conclusion

[GIF: “What does it all mean?”]

I haven’t listed every source of polling error, but I wanted to give a few examples of types of error that help explain why margin of error doesn’t always match up with actual survey error.

You may be wondering: if margin of error doesn’t provide the whole picture, what do we know about polling error? Are polls reliable? What is the difference between the margin of error and the predicted error on average?

Thankfully, we have a great large database from Huffington Post’s Pollster that can help us answer these questions. We have over 5,000 state-level presidential polls from 2008-2016, and over 3,000 have a listed margin of error. In most cases we care about the difference between the Democratic and Republican candidates as the main metric because it tells us who is leading. The margin of error for that difference is double the reported margin of error, because the reported figure applies to a single candidate. For about 4 in 5 polls taken in the last 60 days before the election, the true election margin was inside that interval. But in 1 in 20 polls the observed error was 5 points higher than the margin of error. A key thing about this data set is that the polls are not evenly distributed across years or states.
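The coverage check described above can be sketched like this. The three polls below are made-up illustrations, not entries from the Pollster database.

```python
# Each entry: (poll margin, actual election margin, reported margin of
# error), all in percentage points. The margin of error on the candidate
# margin is doubled, since the reported figure is for a single candidate.
polls = [
    (4.0, 2.5, 3.0),   # error 1.5 <= 6.0: covered
    (1.0, -2.0, 2.5),  # error 3.0 <= 5.0: covered
    (6.0, -1.0, 3.0),  # error 7.0 >  6.0: not covered
]

covered = sum(abs(poll - actual) <= 2 * moe for poll, actual, moe in polls)
coverage_rate = covered / len(polls)
```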

Statistical models like the ones I commonly build can help predict polling error given information about where, when, and how a poll was collected. While these models are helpful, they also have uncertainty. Usually polls can be used as signals of which races are competitive, but they can’t always predict winners. It is helpful to take a conservative approach when looking at a poll and to acknowledge the potential sources of error.

The key thing to remember is that polling is one of our best tools for evaluating public opinion in general. Sometimes people use other types of models to predict elections, but for non-election public opinion questions about policy or presidential approval, polling data is required for statistical analysis.

How to Interpret Election Polls

This is the first of my approximately weekly posts I’m planning about the 2020 election.

As election day approaches, polls are going to become more prominent, and it’s important that we interpret them carefully. I suggest you stick to polls from poll aggregators (like FiveThirtyEight or Real Clear Politics) or those tied to prominent news organizations. Polls brought up by polling experts (myself included) are typically going to be good sources. But you can encounter polls out in the wild that are complete garbage, and you should be skeptical of a poll from a website you have never heard of. I’m going to briefly talk about three things you always have to consider when you analyze polls:

  1. Margin of error
  2. Polls are not predictions
  3. Outliers happen

Margin of Error

The margin of error is probably one of the most misunderstood polling concepts. It comes from a statistical formula that captures the natural randomness of estimating a proportion for an entire population from a small sample. The margin of error is meant to be added to and subtracted from a single candidate’s support. When we are talking about US elections, we normally care about the difference between the Democratic and Republican candidates (sometimes called the margin and written as Trump +x or Biden +y), and to examine that we must double the error. The reason for doubling is that in a poll with a margin of error of three points, Biden could be underestimated by three points and Trump overestimated by three points, which leads to a six-point gap. The margin of error doesn’t cover the rare scenario where a respondent lies or makes a mistake. The calculation also assumes the individuals who respond to a poll are not that much different from the population we are aiming to poll, and that we can reach every member of that population, which isn’t exactly the case. As a result, the margin of error underestimates the polling error. It’s hard to quantify before an election by how much, but it is typically less than one percentage point, from the analysis I did on polling error (details will come later). If the difference between two candidates is less than double the margin of error, the poll does not provide enough information about who is winning, and this signals the race is too close to predict from just that poll.

Polls are not Predictions

Polls are not designed to predict elections. They are designed to estimate the percentage of voters who support each candidate and the percentage who are undecided at the time the poll is conducted, not necessarily on election day. The margin of error estimates the error between the poll and the true support for the candidates while the poll is in the field. People change their minds occasionally, and it’s hard to predict which direction undecided voters will go. A good guideline from research (taken from this book and replicated in my own analysis) is that polls are not predictive of the election day result until Labor Day weekend. The predictiveness of polls improves over time. For example, my model will start collecting data on September 6th and will hopefully be fit by September 22nd, which is about 45 days before the election.

Outliers happen

Occasionally we will see a poll somewhere that is different from other polls. Strange poll results do happen, and you can’t say there has been a change in the race until multiple polls from different pollsters have similar findings. You should look at multiple polls when you check the state of the race. I like to look at two poll aggregators: FiveThirtyEight and RealClearPolitics. FiveThirtyEight is where I am planning to get the data for my model. The tricky part about comparing two polls is that you have to add both their margins of error together to compare single candidates, and double that number if you want to look at the difference between two candidates. Consider Poll A with Trump at 45 and Biden at 48 and a 3-point margin of error, and Poll B with Trump at 48 and Biden at 42 and a 4-point margin of error. The difference between Trump and Biden in Poll A is -3 with a margin of error of 3*2=6, and in Poll B it is +6 with a margin of error of 8. The margin of error for comparing these polls is 6+8=14, which means that if the difference between the polls is less than 14, the polls aren’t showing statistically different results, and the difference we observe could be explained by random sampling error. A lot of times, outliers aren’t really outliers after you adjust the margin of error to fit your comparison.
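The Poll A / Poll B arithmetic above can be packaged as a small helper. This is a sketch, and the function name is my own.

```python
def polls_differ(margin_a, moe_a, margin_b, moe_b):
    """Compare candidate margins (e.g. Trump minus Biden) from two polls.

    Each poll's margin of error is doubled (the reported figure applies
    to a single candidate), and the two doubled errors are summed to get
    the margin of error for the comparison."""
    combined_moe = 2 * moe_a + 2 * moe_b
    return abs(margin_a - margin_b) > combined_moe

# Poll A: Trump minus Biden of -3 with a 3-point MOE; Poll B: +6 with a
# 4-point MOE. The margins differ by 9 points, less than the combined
# 14-point margin of error, so the polls are not statistically different.
polls_differ(-3, 3, 6, 4)   # False: likely just sampling error
```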

Now you have learned some basic tools to critically analyze polls on your own to follow the races that matter to you. There are many more things you can do with polling, and the analysis I do is far more complicated than this. I’ll continue to write about polls and the election if you want to learn more.