Polling analyst Nate Cohn has an article out today criticizing our methodology. His basic point is that while our results are generally accurate, he doesn't like how we get to them. The simple reality is that in an era of record low response rates, how you weight your polls is a very important part of whether they end up accurately predicting election results. Pollsters have different ideas about how to do that, and our methodology is unique within the industry. But it's worked for us for years, and we're going to stick with it. We've never really cared about doing things the way everyone else does.
I think if you're interested in this debate, Cohn's and my e-mails are instructive. I also think the exchange shows the extent to which we were very willing to share details of our methodology with him, which calls into question a lot of his attacks on our transparency. There aren't a lot of companies that release as much detail about their polls as we do.
I'm a writer at The New Republic magazine who focuses on elections, demographics, and polling. We've interacted a few times on Twitter. I'm writing a piece on your polling and I was hoping to ask you a relatively long list of questions about your methodology and some of the critiques offered by your skeptics. Assuming you're the guy to talk to about PPP's methodology, I'd love to discuss it with you, either by email or on the phone. Let me know what works best for you and I'd be happy to oblige.
Why don't you e-mail me what you're interested in talking about and I can figure out whether it's better to e-mail you or talk on the phone Tuesday.
-In general the methodology for all of our polls is the same as we used for our Daily Kos polls. Obviously we don’t only poll between Thursday and Sunday, etc.; that was specific to the polls we did for them.
-Our response rates are generally around 10%. In the run-up to elections, particularly Presidential elections, they can get closer to 15-20%. There’s also a fair amount of state-by-state variance- people answer polls at a much higher rate in, for instance, Wisconsin than in Texas.
-For our public polls right now we are calling people who voted in one of the last 3 even-year general elections.
-For a Thursday-Sunday public poll we’ll generally call six times: Thursday night, Friday morning, Friday night, Saturday morning, Saturday afternoon, and Sunday night. If it’s a Friday-Sunday poll we might just make those last four calls, or we might add a call on Sunday afternoon or Monday morning. But generally at least four calls either way.
-It is generally easier to reach older white voters on the first try. A big reason for doing all the callbacks is to get better samples of younger voters and minorities. One state over the last few years where the sample was noticeably more Democratic on callbacks than on the first try was Colorado. I think that’s why you saw particularly large differences in both 2010 and 2012 between what we were finding there and what Rasmussen was, and why, even though we had much rosier numbers for Obama there all year than other pollsters, we were proved right in the end- that’s somewhere that doing all those callbacks really counted.
-The reason we have target ranges rather than exact numbers we weight to is that a big part of how we weight our polls is determined by who actually answers them, on a poll-by-poll basis. If we do a poll with an unusually small number of black respondents we will weight to the lower edge of the range, whereas if there’s a higher number of black respondents we’ll weight to the higher end, because our experience is that this is telling us something about the level of interest/engagement in whatever election we happen to be polling on.
-Young voters are more racially diverse, so when you weight for age, reducing the value of senior respondents while increasing the value of younger respondents, that is going to make the overall sample less white. Weighting doesn’t just affect the variable you’re weighting for; it can change the proportion of other demographics too.
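The spillover described here can be seen with a toy example. Everything below is hypothetical and invented for illustration- the age groups, targets, and respondent counts are my assumptions, not PPP's data. Weighting a senior-heavy sample toward an age target shrinks the white share even though race is never weighted directly:

```python
# Hypothetical respondents as (age_group, race) pairs: young respondents
# skew more diverse, seniors skew whiter and answer the phone more often.
respondents = (
    [("18-29", "white")] * 4 + [("18-29", "nonwhite")] * 6 +
    [("65+", "white")] * 18 + [("65+", "nonwhite")] * 2
)
n = len(respondents)

# Assumed age targets for illustration: weight each group to half the sample.
targets = {"18-29": 0.5, "65+": 0.5}
raw = {g: sum(1 for a, _ in respondents if a == g) / n for g in targets}
weight = {g: targets[g] / raw[g] for g in targets}   # per-respondent age weight

white_unweighted = sum(1 for _, r in respondents if r == "white") / n
white_weighted = sum(weight[a] for a, r in respondents if r == "white") / n

print(round(white_unweighted, 2))  # 0.73
print(round(white_weighted, 2))    # 0.65 — less white, with no race weighting
```

Upweighting the diverse young cohort and downweighting the whiter senior cohort moves the racial composition as a side effect, which is the mechanism the paragraph describes.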
-I was not aware of what you are talking about with the share of white voters and the President’s performance among them. Can you point me to some specific states/surveys that you are looking at with that?
-And to your last point, again, a lot of how we weight our individual polls is based on who’s responding to those individual polls. Interest in the election among African Americans, Hispanics, young voters, etc. can fluctuate some over the course of the year, and we try to make sure our polls take that into account. If we do one Virginia poll where 11% of raw respondents are black and another where 14% are, you are likely to see those weighted to 18% and 20% respectively in a Presidential election year…
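The raw-to-weighted mapping in that Virginia example (11% raw → 18% weighted, 14% raw → 20% weighted) behaves like a simple interpolation within the target range. This is my sketch of the idea, not PPP's actual procedure; the function and the range endpoints are assumptions taken from the numbers above:

```python
def weighted_target(raw_share, raw_range=(11.0, 14.0), target_range=(18.0, 20.0)):
    """Place the weighting target within its range according to where the raw
    response share falls in its typical range (all values in percent).
    Hypothetical illustration using the Virginia figures quoted above."""
    lo, hi = raw_range
    frac = min(max((raw_share - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    t_lo, t_hi = target_range
    return t_lo + frac * (t_hi - t_lo)

print(weighted_target(11))  # 18.0
print(weighted_target(14))  # 20.0
```

A raw share outside the typical range is clamped to the nearest end of the target range, matching the "lower edge / higher end" behavior described earlier.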
I'm happy to answer any other questions you have although I will be traveling a good part of the day tomorrow-
Thanks so much for the response. I love the Colorado point, too.
When I'm weighting a poll I'm usually focused mostly on the Hispanic and African American percentages, not so much the white or other percentages. So if I know I'm shooting for Hispanics in the 8-10% range and African Americans in the 11-13% range, then in a poll where both of those are on the lower end of the range and we only have something like 4% other, we're likely to be closer to 77% white in a national poll, whereas in a poll where both are on the higher end of the range and we have something like 6% other, it might be closer to 71%. Those fluctuations in the racial percentages can also have a lot to do with the impact age weighting has on a particular poll- if we had an unusually racially diverse set of 18-to-29 and 30-to-45-year-old respondents, age weighting will push the poll closer to the lower end of the white range, whereas a whiter set of younger voters won't have as much impact on the racial composition. Since we're not operating under a strict quota system, there's just going to be more variability in the effect that has.
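The white shares here fall out by subtraction from the other groups' shares. A minimal sketch of that back-of-envelope arithmetic- the helper function is mine, and the endpoints are the ranges quoted above:

```python
def white_share(hispanic, black, other):
    """What remains for white respondents after the named groups (percent)."""
    return 100 - (hispanic + black + other)

print(white_share(8, 11, 4))   # 77 — low end of each range
print(white_share(10, 13, 6))  # 71 — high end of each range
```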
To your last point, we certainly didn't have any grand plan related to the trend you're talking about, since I wasn't even aware of it until you brought it to my attention! With the specific Florida example, we had more diverse electorates closer to the election because we were getting more Hispanic and black respondents to our polls. I think it's probably mostly a coincidence. But I will say that we are VERY cautious- regardless of what our Republican critics might think- about releasing polls with samples that are too Obama-friendly. So even though we would never explicitly weight for 2008 vote, in a state like, say, Ohio we might weight on the lower end of the black scale (10 or 11%) if weighting on the higher end meant we were going to end up with an Obama +8 2008 sample, whereas we would be more comfortable with 12 or 13% if that wasn't going to push it over Obama +4 2008. That could contribute to the trend you've observed, but it's not us saying 'Obama's doing well enough against Romney so we don't need to weight African Americans as high'- it's 'we don't want to put out a sample that overrepresents people who voted for Obama last time and give Republicans something to attack us about.'
We are less focused on the exact breakdown for any particular demographic than we are on having an overall sample that is broadly representative of the electorate. And I think that's worked out for us- everyone knows how successful we were with polling last year's election, not just the Presidential races but picking up on the last-minute failure of the voter ID amendment in Minnesota, calling Jon Tester's victory when even Nate Silver didn't, etc. But more importantly, the way we do things has allowed us to be quite accurate in our private polling on Congressional, legislative, and local races across the country- that success has helped us get more clients and revenue, and ultimately given us the ability to do so much polling for public consumption over the last few years.
I get the sense you're trying to suggest that we intentionally weighted minorities higher so that Obama wouldn't be behind in an Ohio poll, but over the course of October we showed Obama slightly behind at various times in polls in Iowa, New Hampshire, Florida, and North Carolina, so I don't know why we would have 'allowed' him to be behind in all those other swing states but not Ohio.
There's a lot of resentment towards us in the industry because we do interesting and accurate polls at a fraction of the cost of 'traditional' pollsters and they don't like seeing us get so much attention for it. We don't feel the need to show up to the AAPOR conference and try to make friends with everyone.
We also know, particularly on the campaign side, that clients on both sides of the aisle sometimes ask: if PPP is doing pretty good tracking polls for private clients at $2,000, why am I paying you four or five times as much when PPP is getting just as close to the results, or closer? What we do is different from anyone else in the industry, but in an era where record low response rates mean that how you weight the data is just as important as how you collect it, I think we've adjusted to that reality better than a lot of other pollsters.