21 August #BetweenTheLinesDotVote Analysis Dornsife: Can We Trust It Again In 2020? A friend of mine told me most people just want to eat the food and aren't interested in the recipe. Perhaps I need to give a Chef's Warning. Dornsife has a recipe.
2) They released their first analysis two days ago, 19 August. I don't mind cutting the dinner-plate chase: I was deeply troubled by the changes, most especially their new headline strategy. There's also a back story. And then we'll get into the weeds, then finish up.
3) In 2016, as I recall, splashy headlines were simply NOT part of the Dornsife Method. Yes, they published stories, many for fellow polling chefs, detailing their method. If true, Biden's 11-point lead is big news. But it instantly marks a new approach to their work, and not for the better.
4) We saw this yesterday at Drudge. Yet the lead headlining Drudge did NOT result from the same method of polling used in 2016. It comes from one of two new methods, one previously tested, one not. The 11 point lead comes from the new, untested method. That matters.
5) "Because the panel of participants in the USC Dornsife tracking poll includes many who participated in the 2016 poll, researchers are able to compare their vote for president in 2016 to the candidate they’re supporting in 2020."
6) This is simply NOTHING like the 2016 method, period. I don't know yet if it's a trick or not. We'll look at that possibility in a bit. But, just on its face, I have no reason, other than trusting Dornsife, to support this method of polling.
7) Sad to say, I smell a rat. Here we have a new, completely untested method; it generates a whopping lead for the candidate the LA Times supports; and, capitalizing on reputation built from a DIFFERENT method, this is the first headline we see. I don't approve.
8) This particular group, not possible before, is selected from those who participated in Dornsife's 2016 poll. The poll asks those who voted for Trump whether they'll vote for him again, and those who voted for Clinton whether they'll vote for Biden. I offer, this method may work, and we'll see.
9) But it's not possible for me to intuit its validity at first blush. Also, in 2016, the poll had roughly 3,100 participants. I checked their methodology link, and I was unable to find how the 1,510 current participants in this new method were selected. That matters, hugely.
10) Again, it's certainly possible that this new method will test out. Hell, it might be the best method of all. But it is absolutely possible that it won't, and we just don't know. To lead with it, trading off of 2016's accuracy earned by a totally different method? Sorry. Sketchy.
11) We absolutely know selection bias is rampant throughout the entire polling industry. What I sadly see is that Dornsife, in its first 2020 offering, opens itself up to the risk, at least, of trading its reputation in for an agenda driven by its sponsors.
12) I won't linger on the backstory. One of the main reasons I chose Dornsife as my solitary source of data, was the purity and simplicity of their method. They did one thing, one thing only, and the method was completely scrutable. It was easy to understand. Easy to feel.
13) By making their method so clear, direct, and simple in 2016, they invited credibility and ended up deserving a great deal more credibility than they received during the season. It is this that makes me sad right now. This beginning bodes very poorly, I say.
14) Here's a for instance. Until I can find the actual methodology behind this new method, as I said, I have no idea of how they selected their 1,510 participants. Even then, I'm afraid I'll still have to be skeptical for a while, as it's so easy to hide bias.
15) How might bias be employed? It could be that, of the 3,100 participants from 2016, these 1,510 show just enough leaning toward the desired outcome that the other 1,590 did not. Do you see? You can select those who give you the answer you want, the "right" leaning.
16) Without boldly telling us how the cut was made, we don't know whether or not a much larger percentage of the original 3,100 were willing to participate, but did not show the desired leaning. That takes me back to one last part of the back story.
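To make the mechanism concrete, here is a toy sketch with purely hypothetical numbers (nothing here is Dornsife's actual data; the "lean score" is an invented stand-in for candidate preference). It shows how cutting 1,510 panelists from a pool of 3,100 can either preserve or tilt the result, depending entirely on how the cut is made:

```python
import random

random.seed(0)

# Hypothetical 2016 pool: 3,100 respondents, each with a lean score
# centered on zero (negative = one candidate, positive = the other).
pool = [random.gauss(0, 1) for _ in range(3100)]

# Honest cut: 1,510 chosen at random -- average lean stays near the pool's.
unbiased = random.sample(pool, 1510)

# Tilted cut: the 1,510 respondents most favorable to the desired outcome.
biased = sorted(pool, reverse=True)[:1510]

mean = lambda xs: sum(xs) / len(xs)
print(f"pool: {mean(pool):+.2f}  random cut: {mean(unbiased):+.2f}  tilted cut: {mean(biased):+.2f}")
```

The tilted cut shifts the average lean by the better part of a standard deviation, while the random cut barely moves it. That is the whole point of demanding that the selection rule be published.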
17) Part of my joy in selecting Dornsife in 2016, was the very fact of their funding source, the LA Times. I liked that my source supported Clinton, yet I absolutely believed in the integrity of the poll itself. That fits my personal values system. I'm a partisan, too.
18) But, when Dornsife was the most accurate poll in predicting Trump's election, I immediately suspected that 2020 would have a different rule set. I immediately feared that the LA Times would push for changes to the 2020 poll, giving them more power in its outcomes.
19) As this season has rolled through, and I witnessed absolutely new levels of what I've come to call False Polling, my fears grew. This season, False Polling has become the Democrats' main strategy. To my analysis, they know that Trump killed them over Fake News.
20) With a shrinking propaganda tool kit, with no policy or candidate momentum, and with no identity to their platform other than the hardest turn left in American history, I saw them use polls to lead public perception, that is, to use false and thumb-tilted polls to set beliefs.
21) My conclusion from this first release is that, even if the method does prove out in the end, they have opened up a gigantic credibility gap. As we drop down further into the weeds, next, you'll see how they know this, admit it, and have a very different headline buried.
22) To show you that, I'll quote: "Notably, the researchers’ other models are showing somewhat different results.
23) "From the preliminary data, the traditional categorical model — asking voters whom they would vote for today — predicts a wider lead for Biden among registered voters, compared to the main probability model. {Note, this is NOT the Dornsife method, at all.}
24) "However, when modeling the outcome using the social circle questions, Biden’s lead drops to single digits. The race draws even closer when forecasts are based on participants’ expectations about how people in their state will vote.
25) "(Overall margin of sampling error for findings based on the preliminary data is plus or minus 3 percentage points.) The researchers will soon release findings in more detail from all three models after collecting a full wave of survey data." Wow!
26) The most important point is the very last one. Findings in more detail AFTER a full wave of survey data. What is that code for? The data they presented today is NOT supported by a full wave of data. Do you see? They're hiding in plain sight.
27) BIG HEADLINE. itsy-bitsy little data set underlying. And from a new, untested method at that. But capitalizing on 2016's credibility. Now go one step deeper.
28) What is this new Social Circle Questioning? They answered that question above. It is the second of their new, untested methods. They explain: "Asking participants how they expect people in their social circles and state will vote." Interesting.
29) And what were its results that got NO headline, no attention, buried deep here in the weeds? I quote again: "Biden’s lead drops to single digits. The race draws even closer when forecasts are based on participants’ expectations about how people in their state will vote."
30) Hmm. And by the way, what are "single digits"? They run from 1 to 9. That's a big spread. If Biden's lead in this method was merely 1 percentage point, and you didn't want to say that, you'd call it a single digit. Why not just tell us? Sketchy, sketchy.
31) And the closer to themselves the participants are asked about, the more Biden's lead diminishes. "The race draws even closer." Hey, that's drama! That's a story! Here's that headline... "2016's Most Accurate Poll Calls Race Too Close To Call"
32) And if single digits means 3 points or less... "Biden's Lead Less Than Poll's Margin Of Error." How's that for a headline?
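For what it's worth, the textbook margin-of-error arithmetic for 1,510 participants, assuming simple random sampling and the worst-case 50/50 split (the poll's reported plus-or-minus 3 points presumably folds in design effects the release doesn't detail), runs like this:

```python
import math

n = 1510   # participants in the new 2016-panel method
p = 0.5    # worst-case proportion (maximizes the error)
z = 1.96   # z-score for 95% confidence

# Standard simple-random-sampling margin of error: z * sqrt(p(1-p)/n)
moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe * 100:.1f} points")  # about +/- 2.5 points
```

So even before design effects, any lead of roughly 3 points or less from this sample sits inside the noise, which is exactly why "single digits" is such a conveniently vague phrase.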
33) Again, I am NOT saying that these methods will not prove out. I don't yet know. I AM saying that when one new method gives an 11 point lead, and that gets the headline; and the other new method either narrows or wipes that lead, but no headline, that's just wrong.
34) There can be no other explanation than editorial policy, which is to say: AGENDA. And no, we can put that editorial policy nowhere else than on Dornsife itself. They chose the 11 point lead as their lead. They did not have to. Well, unless their bosses demanded it.
35) I hope they correct themselves. But as it stands right now, they have revealed an editorial policy that is completely tilted by the precise agenda their funding source propounds every day in its news policy. Fake News. False Polling. Editorial decisions. Policy. Not science.
36) I have to pound this home. The facts are clear, and they are actual facts. One new poll, big lead for Biden, big headline. Another new poll, weak or no lead for Biden, no headline, data buried deep within the weeds. These are inescapable facts. And they're on the record.
37) Last point. There are other changes too. More partners - I don't trust any of them. Growth in the number of participants to the main poll - this may be good, but it also may not be. I'll be watching that very closely, to the degree I'm able, as the season rolls.
38) But in sum I cannot see one sign of a positive direction here. In all these changes, and with a factually established obvious agenda now published, I have to downgrade, if not the main poll itself - time will tell - the entirety of the outfit's positioning for truth in 2020.
39) I have to warn all you non-recipe reading patrons out there, be very careful of the dishes you eat. The integrity of the chef has been brought into severe question. I won't tell you the story of the cook I loved the most, what paiea!, whom I had to fire. He stole from me.
40) But I can promise you that, at my own new polling outfit, we will never lie to you, by fact or by coloration. We will never distort the truth. The truth, and nothing but the truth, to the best of our capability. That's our promise. Come check us out.
Thread ends at #40.