Post by Robert Waller on Jun 8, 2017 10:12:35 GMT
Given the huge range in forecasts from different polling companies, it follows that some will be wrong, of course. And herein lies the problem. These pre-election polls (except possibly the last ones, though I don't agree they should be excluded from the points I am about to make) are not forecasts. They are, and must be, backward looking. They are about the past, about the dates they were taken. They may or may not be accurate in showing who is leading or doing badly at that time.

Imagine the estimable Peter O'Sullevan of yore, say, commentating on the Grand National at Aintree through binoculars on a foggy late winter day. With seven, six, or five fences to go, he may be able to see who is at the front, or who is moving up through the field. Without his commentary, we would be worse served. But he is not predicting who is going to win, nor should he be judged by the eventual outcome.

I agree that the pollsters should do as well as they can to gather a good sample. But maybe they have gone too far in more and more skewing their models by what is essentially guesswork about who is actually going to vote. That is, essentially, unknowable, and should be added to the margin of error deriving merely from sample size - often quoted, but of no real value. I do think pollsters may have had their thinking clouded to some extent by this perception that they are predicting the results.

The exit polls are a different matter entirely. It's not so much the larger sample size. It's that the respondents are those who are actually voting, at the same time and in a similar manner (secret ballot box). Also, they will be using the same polling stations as last time, so they can get a good idea of the movement 2015-2017. So it doesn't matter so much if those polling stations are not themselves a perfect sample. (They can't be, as I picked the original ones and they're probably still using some of them!)
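The "margin of error deriving merely from sample size" that the post mentions is the standard 95% confidence figure for a sampled proportion. A minimal sketch of that calculation (the sample size and vote share below are illustrative numbers, not taken from the thread):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% sampling-only margin of error for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical published poll: n = 1,000, a party on 40%
moe = margin_of_error(0.40, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3 points
```

This is the figure pollsters usually quote; the post's point is that turnout modelling adds further, unquantified uncertainty on top of it.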
Post by John Chanin on Jun 8, 2017 13:03:23 GMT
But Robert, this is exactly the problem, and why the pollsters are lying when they say they are only taking snapshots and not making predictions. They are very much making predictions of who will turn out and vote. They have long moved away from simply asking people how they will vote and which party they will support and publishing the results. They are even specifically making judgements on who is lying to them.
If pollsters returned to simply publishing the results of their surveys (with commentary if necessary to say that people don't always do what they say), then your post would have weight. As it is, it doesn't.
Post by Robert Waller on Jun 8, 2017 13:14:38 GMT
You have highlighted part of my post, apparently agreed with it, then told me my post lacks weight. If you say that, tell me which parts you disagree with.