Exit polls - developing a technique
This paper, which won the 1992 David Winton Award for the best technical paper, was originally presented at The Market Research Society Conference 1992. It looks at some of the methodological issues in conducting exit polls, particularly the selection of polling stations, the coverage of all hours of the day, the approach to respondents and weighting the collected data.
The paper was written before the General Election. Because of the controversy surrounding the opinion polls after the results, the authors have contributed an addendum.
Background
During the 1987 General Election the BBC commissioned two exercises to predict the election result: a Newsnight panel of voters in 60 Tory marginal constituencies, interviewed three times during the election period; and an eve-of-election survey conducted by Gallup. In addition, a 'poll of polls' was presented throughout the campaign, averaging the results of the main published national polls. On the basis of the Gallup poll, backed up by the evidence of the Newsnight panel (although not supported by the poll of polls), the BBC forecast a slim Conservative majority of 26 seats: much lower than ITN's prediction of a majority of 68. Both estimates were well below the actual majority of 102, but the BBC forecast was sufficiently wrong to cause embarrassment within the BBC and led to a lengthy internal investigation. The first conclusion was that the expertise of the BBC's in-house survey research experts - the Broadcasting Research Department or BRD - had not been drawn upon, and that future embarrassments could be made less likely by involving them in any election polling. BRD, in turn, began a study to select the best method of predicting election results and commissioned NOP to produce research proposals which would comment on the available methods and make recommendations.
In the first instance, these recommendations concerned only the prediction of by-election results. There would inevitably be several by-elections before the next general election, so a methodology for them had to be devised more quickly; it was also appreciated that by-elections provided a good opportunity to hone techniques, and to put them continually to the test, ahead of the ultimate test of a general election.
The first decision to be made was the choice between eve-of-poll predictive polling and exit polling, and this proved to be one of the quickest and easiest decisions to make. Exit polling was seen as having several important advantages, particularly in by-elections. ITN have used Harris to carry out exit polls for many years, with a very respectable record, and the BBC itself had reasonable success with exit polling at previous by-elections. The record of predictive polling at the constituency level is at best patchy, as Robert Waller's paper at the 1992 MRS Conference shows. One crucial argument in favour of exit polls is that they measure only the (claimed) behaviour of people who have actually voted. Because predictive polls only measure what people say they intend to do, they, unlike exit polls, can be affected by differential turnout among supporters of different parties. By their nature, exit polls also circumvent any problem of late swing between the day of the survey and election day itself. Selection of individual respondents is also far easier for an exit poll; instead of having to set and monitor demographic quotas, one simply instructs the interviewer to approach every nth person leaving the polling station.
Whilst this aspect of an exit poll may be more straightforward than for a predictive poll, other aspects are far more complex. This paper looks at some of these issues, particularly the selection of polling stations, the coverage of all hours of the day, the approach to respondents and weighting the collected data.
Sample design
In almost any survey of the general population, some form of clustering of the sample is necessary: it is simply not economic to draw respondents from a metaphorical hat containing the names of the entire population. Clustering is normally considered as a geographical variable: one interviews a number of individuals in each of a number of selected locations, be they constituencies, wards, postcode sectors or whatever.
The use of sensible stratification in the selection of these clusters can help to offset the increase in survey error caused by the clustering effect, and this is why the selection of clusters is such a key element in the success of any opinion poll.
However, it can also be argued that a survey sample is clustered by time as well as by place. Bob Worcester describes a poll as akin to a snap-shot taken at a particular time in a horse race, showing who is ahead at that stage but not guaranteeing that they will not be overtaken in the final furlong. As well as interviewing only people in certain areas, an opinion poll only interviews people at a certain time. While for a predictive poll time is not a significant factor unless the poll is carried out over a ridiculously long or short period, the situation is very different when it comes to an exit poll. Exit polls measure a single act which electors carry out once only at a specific moment during the voting day and, if voting patterns are different across the day, the exit poll sample must account for this.
One way to achieve this is simply to cover those polling stations selected for the exit poll continuously from the moment the polls open to the moment they close. This is, in fact, the approach adopted by the BBC for previous exit polls and by Harris for ITN, but the cornerstone of NOP's sampling strategy was that a different approach would be more reliable. Taking a fairly arbitrary figure of 150 interviewer hours (which was the amount expended on the BBC's previous exit polls), one could cover 10 polling stations for 15 hours each, 25 for six hours or 50 for three hours each. The choice between approaches depends on the different levels of variability. There are three types of variability to consider:
A  Variability between polling districts
B  Variability between hours of the day
C  Variability by time of day between polling districts
The aim of any sampling system is to ensure the maximum coverage of the maximum variability. Type B is clearly covered by interviewing all day at 10 polling districts and, provided the three-hour slots are allocated properly, it is also covered by interviewing at 50 polling stations for three hours each. The question then remains whether Type A is greater than Type C. All the evidence we have is that variability between polling districts can be enormous as party allegiance is such a geographically clustered variable. The more variability there is between polling districts, the more polling districts should be covered, and this is why our decision was to cover 50 for three hours each. It is true that by doing this we do not fully cover variation by time of day between polling districts, but our view, admittedly without any actual evidence, is that variation by time of day is likely to be fairly constant across all polling districts.
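To make the trade-off concrete, the toy simulation below compares the two extreme designs under our working assumption that between-district variability (Type A) dwarfs the district-by-time interaction (Type C). It is purely illustrative: every parameter - number of districts, spread of party share, voter flow - is invented rather than drawn from any real constituency.

```python
# Toy Monte Carlo: the same 150 interviewer-hours split two ways. All
# parameters below are invented for illustration only.
import random

N_DISTRICTS = 200        # polling districts in the constituency
TRUE_SHARE = 0.45        # overall share of the vote for one party
SIGMA_DISTRICT = 0.10    # Type A: spread of party share between districts
SIGMA_HOUR = 0.02        # Type C: district-specific time-of-day wobble
VOTERS_PER_HOUR = 40     # assumed flow of voters past each station

def run_design(n_stations, hours_each, trials=500):
    """Mean absolute error of the estimated share under one design."""
    errors = []
    for _ in range(trials):
        shares = [min(max(random.gauss(TRUE_SHARE, SIGMA_DISTRICT), 0), 1)
                  for _ in range(N_DISTRICTS)]
        sampled = random.sample(range(N_DISTRICTS), n_stations)
        votes = voters = 0
        for d in sampled:
            for _ in range(hours_each):
                p = min(max(shares[d] + random.gauss(0, SIGMA_HOUR), 0), 1)
                votes += sum(random.random() < p
                             for _ in range(VOTERS_PER_HOUR))
                voters += VOTERS_PER_HOUR
        errors.append(abs(votes / voters - TRUE_SHARE))
    return sum(errors) / trials

print("10 stations x 15 hours:", round(run_design(10, 15), 4))
print("50 stations x 3 hours: ", round(run_design(50, 3), 4))
```

Under these assumed variances, spreading the hours across 50 stations produces a noticeably smaller average error than concentrating them in 10.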
If this alternative approach is adopted, maximising the number of polling districts covered by interviewing at each one for only part of the day, then time has to be built into the initial sampling system in just the same way as any geographical stratification, and it will be every bit as important to the success or failure of the exit poll.
Selecting polling stations
The first stage of any clustered sampling process is to identify what are to form the primary sampling units (PSUs). Normally this is simply a matter of deciding how small the geographical area to be considered at the primary sampling stage should be, but in the case of an exit poll, even the identification of the PSU is a complex issue, particularly because of the need to include time in the sampling frame. The basic sampling unit has to be one of both time and place, which may seem on the surface a relatively simple task (and several academic statisticians of our acquaintance insist that it should be), but once the practicalities are examined, it becomes clear that this is not the case. Before attacking the main theoretical issues, we first had to confront an initial practical obstacle. Polling districts would appear to be the ideal geographical unit, but because voting takes place at polling stations, the final sampling unit needs to be a polling station, so it is necessary to treat polling districts which share the same polling station as effectively the same unit.
In any form of clustered sampling one has to choose between giving each PSU an equal chance of selection regardless of population size and then holding the sampling interval constant, so that larger PSUs generate more final interviews; or alternatively selecting PSUs with probability proportional to size and then varying the sampling interval, so that each PSU produces the same number of interviews. In the case of an exit poll this choice is easily made. Variations in size between different polling districts within a constituency can be enormous. If a constant sampling interval were to be used, it would have to be set large enough for interviewers in the biggest polling districts to keep up with the flow of required interviews. An interval set this large would mean that in the smallest polling districts interviewers might be conducting only one or two interviews in an hour. This is clearly uneconomic, and so the selection of PSUs must be with a probability proportional to size (PPS) and a varying sampling interval.
In a pre-election poll, one can achieve PPS selection by the simple means of listing population size against each unit in the stratified list, accumulating population down the list and then applying a constant sampling interval. In the case of exit polls, however, we have seen that time as well as place should play a part in the stratification system. This means that, in an ideal world, the selection of PSUs would be a function of voters within time within place, with each polling station having 15 cells in the stratified list - one for each hour of voting. What we should then do is ensure that the chance of any of these resulting polling station hours being selected for inclusion in the sample is in proportion to its population; that is to say, the number of people who would actually vote at that polling station during that hour.
This would then give us a sampling frame of the kind shown in Table 1.
TABLE 1 Polling station by time by voters

Ward   Polling station   Hour        Voters   Cumulative
1      AA                7-8 a.m.        75           75
                         8-9 a.m.       180          255
                         ...
                         9-10 p.m.       90        1,575
       AB                7-8 a.m.        60        1,635
                         ...
...
n      ZZ                9-10 p.m.      100       65,500
The problem with this approach is that we are not in a position to estimate what proportion of the voters in a particular polling district will vote during a particular hour of the day. There is a body of evidence from previous exit polls which suggests a general pattern of distribution of votes during the day, but individual by-elections have bucked this trend to a considerable extent, and we have no information as to whether the pattern of voting by time of day is likely to differ from one part of a constituency to another. Any errors in estimating the number of voters who actually go through a polling station in a particular hour would have to be corrected by weighting at the analysis stage, and this could easily involve the use of some very large weights indeed. Our view was that, because of the lack of reliable data, we were more likely to make the situation worse rather than better by trying to do this, and that we would have to drop the link between voters and time of day from our sampling matrix. Instead, we would have to assume that one fifteenth of the voters vote during each hour of the day. By assuming each hour is equal, any variation from one hour to another is taken care of by holding the sampling interval constant throughout the day, so that busier periods simply generate more interviews. This, then, gives a frame similar to that shown in Table 2.
TABLE 2 Polling station by time

Ward   Polling station   Hour        Voters   Cumulative
1      AA                7-8 a.m.       105          105
                         8-9 a.m.       105          210
                         ...
                         9-10 p.m.      105        1,575
...
n      ZZ                9-10 p.m.      150       65,500
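A minimal sketch of the resulting draw is given below; the station names and electorates are invented, not taken from any actual constituency. Each station contributes 15 equal hourly cells to a cumulated list, and a constant interval is applied from a random start, so that larger stations attract more selections.

```python
import random

# Electorates per polling station (hypothetical figures)
stations = {"AA": 1575, "AB": 900, "AC": 2400}
HOURS = [f"{h}-{h + 1}" for h in range(7, 22)]   # 15 one-hour cells, 7am-10pm

# Build the cumulated frame of polling-station hours, with one fifteenth
# of each station's electorate assumed to vote in every hour.
frame, cum = [], 0.0
for name, electorate in stations.items():
    for hour in HOURS:
        cum += electorate / 15
        frame.append((name, hour, cum))

n_cells = 6                        # target number of station-hours to select
interval = cum / n_cells
point = random.uniform(0, interval)

# Walk the selection points down the cumulated list: bigger stations
# accumulate faster and so attract more selections.
selected, i = [], 0
while point < cum:
    while frame[i][2] < point:
        i += 1
    selected.append(frame[i][:2])
    point += interval

for name, hour in selected:
    print(name, hour)
```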
This still does not solve all the sampling problems, however, for there is another area in which the demands of theoretical statistics clash with those of the real world. If we assume a target of 150 polling station hours, then the total electorate of the constituency will be divided by 150 and, from a random start point, that interval applied all the way down the list. This means that each polling station may be selected for one hour, two hours or more depending on its size but, except in the case of enormous polling stations, it would not be selected for two consecutive hours. This, in turn, means that either interviewers would have to be employed on the basis of only one hour's work at a time or they would have to move from one polling station to another at the end of each hour, neither of which is acceptable. It is simply not possible to treat each hour within a polling station as a separate unit for sampling purposes, and so time has to be clustered further into longer periods.
In order to fit in best with standard interviewer practice, we decided we had to allocate interviewing into three-hour shifts, which would suggest a sampling frame with five time cells for each polling station: 7-10am, 10am-1pm and so on. Whilst this again seems sensible to our academic statistician friends, in practice it cannot be made to work. Most interviewers will want to work for more than one three-hour shift, and they will need a break between shifts. With 150 interviewing hours spread over 15 hours of voting, there should be 10 interviewers working each hour. If 10 interviewers start at 7am and a further 10 at 10am, the first 10 will either have to have a three-hour break until 1pm (which we cannot afford) or they will come back after a one-hour break and overlap with the second set (which means there would be 20 interviewers working from 11am till 1pm and we would therefore run out of interviewers before the end of the day). In fact, no equal shift allocation is possible and some form of compromise is necessary. The compromise we reached is based on an acceptance that it is impossible to come even close to an average of 10 interviewers for every hour of the day, but that it is possible to achieve this for almost every hour. By ensuring that the hours where the coverage is furthest from the ideal are those where we know from experience the flow of voters will be lowest, the effect on survey accuracy is kept to a minimum. By introducing considerable variation in the number of interviewers who start work each hour, it is possible to achieve a time sampling frame which closely matches the target for 12 of the 15 polling hours, and the subsequent weights to correct for this range only from 1.67 to 0.82. If we ignore the three most undersampled hours, which typically account for only six per cent of all voting, the range is only from 1.06 to 0.82. This range is so narrow that, to all intents and purposes, the survey is as reliable as it would be had each hour indeed had the 10 interviewers theoretical statistics says it should.
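The arithmetic of the correction weights can be illustrated as follows. The staffing profile below is invented, chosen only to sum to 150 interviewer-hours and to reproduce the kind of weight range quoted above; with a target of 10 interviewer-hours per polling hour, each hour's weight is simply target coverage divided by actual coverage.

```python
# Invented staffing profile for the 15 polling hours (7am-10pm); it is
# not the schedule actually used, merely one with the right flavour.
TARGET_PER_HOUR = 10
on_duty = [6, 6, 9, 10, 11, 12, 12, 12, 12, 12, 12, 11, 10, 9, 6]

assert sum(on_duty) == 150   # the 150 interviewer-hours of the design

# Each hour's correction weight is target coverage over actual coverage.
for hour, n in enumerate(on_duty, start=7):
    print(f"{hour:02d}:00-{hour + 1:02d}:00  "
          f"{n} interviewers, weight {TARGET_PER_HOUR / n:.2f}")
```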
This uneven allocation does, however, mean that it is impossible to build the various time slots into a sampling matrix, and so we have to divorce time from place as far as the sampling frame is concerned and work instead with two entirely separate samples: one of polling stations and an entirely separate one of time slots. The former sample is drawn PPS from a list of all polling stations first stratified by ward in whatever order local political geography demands and then ordered by size within wards. The time sample is held constant for each poll as there is no reason to change it.
We now have two entirely separate samples, one of 50 polling stations and another of 50 three-hour time slots, and the final stage of the sampling task is to marry the two together. This is simply done by allocating time slots systematically down the stratified list of polling stations, but because of the relatively small number of PSUs, there is a danger that bias is introduced at this stage. It is very easy to end up, for example, with a sample in which the Conservative wards are polled at the quietest times of the day and the Labour areas at the busiest, which would give a strong Labour bias in the final sample. This is exactly what happened in our first by-election exit poll, in Kensington, but fortunately we realised in time on the day and were able to correct for the polling stations' varying chances of selection at different times by use of last-minute weighting. In all subsequent by-elections, we took care to validate the sample before the survey began, as sketched below. If a constituency has a Conservative part and a Labour part, and the Conservative part contains 60% of the electorate, it should also account for 60% of all early morning slots, 60% of all middle-of-the-day slots and so on. If there is any variation from this, time slots have to be switched between areas until the required balance is achieved.
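The balance check itself takes only a few lines. The sketch below uses an invented two-area constituency and slot allocation; in practice the same comparison is made for every area and every band of the day before fieldwork begins.

```python
from collections import defaultdict

# Hypothetical two-area constituency: 60% of electors in the Conservative
# part, 40% in the Labour part.
electorate = {"Con area": 39000, "Lab area": 26000}
total = sum(electorate.values())

# Area and time band of each of the 50 three-hour slots (invented allocation).
slots = ([("Con area", "morning")] * 6 + [("Lab area", "morning")] * 4
         + [("Con area", "midday")] * 6 + [("Lab area", "midday")] * 4
         + [("Con area", "evening")] * 18 + [("Lab area", "evening")] * 12)

by_band = defaultdict(lambda: defaultdict(int))
for area, band in slots:
    by_band[band][area] += 1

# Each area's share of the slots within every band should match its share
# of the electorate; if not, slots are swapped between areas until it does.
for band, counts in by_band.items():
    n = sum(counts.values())
    for area, c in counts.items():
        print(f"{band:8} {area}: {c / n:.0%} of slots vs "
              f"{electorate[area] / total:.0%} of electorate")
```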
Selection of individuals
The final stage of the sampling process is the calculation of the sampling interval which determines the selection of individuals as they leave the polling station. As explained above, we have chosen to disregard the differential flow of voting by time of day; and since polling stations have been selected PPS, it is necessary to aim for the same number of interviews in each shift. A sampling interval is calculated for each polling station, based on our best estimate of turn-out, and this is then applied to all shifts at that polling station, regardless of time of day. At each polling station, a team of two works throughout the three-hour shift, one interviewing and one simply keeping a tally of voters by means of a mechanical hand counter. The tally-keeper's job is to let the interviewer know who is the next person targeted for selection, and also to keep track of the total number of voters. If, for any reason, the number interviewed is not what it should have been (most probably because at the busiest times it is impossible for interviewers to keep up with the flow), then simple weights can be applied to bring the number of interviews achieved back to the correct figure. There seems inherently no reason to suppose that those people missed by the interviewer, simply because she was already interviewing someone else when they left, are anything other than a sub-set of all people who left the polling station at that time. Because we recognise that changes in voting behaviour by time of day can be significant, the process of counting total voters and subsequent weighting is conducted on the basis of each individual hour rather than across the whole three-hour shift.
Because the variation in voting by time of day can be so extreme, experience has shown that it is better to modify this pure sampling method in two ways to take account of the practicalities. First, rather than interviewers being told to interview every nth person (which sometimes meant them interviewing no one at all if only a few people passed during that hour), they are instructed to interview the first person who leaves the polling station during each hour and then every nth person thereafter. This may seem to have only a trivial effect on the total number of interviews achieved, but its effect on interviewer morale should not be underestimated: interviewers are generally not happy about spending time nominally interviewing when no one who meets the selection criteria appears. To improve the situation still further, we commonly reduce the sampling interval at the very quietest times and then restore it to its correct level at the weighting stage. If only 12 people leave a polling station where the interval is one in 10, it is better to interview six of them and weight those down to only two respondents than to stick rigidly to the rules and interview only two in the first place.
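The arithmetic of this restoration is simple, as the sketch below shows (the busy-hour figures are invented): the tally and the intended interval imply how many interviews the hour should have yielded, and the interviews actually achieved are weighted back to that figure.

```python
def expected_interviews(voters, interval):
    """Interviews the rule 'first voter, then every nth' should yield."""
    return 1 + (voters - 1) // interval if voters else 0

def hour_weight(voters_tallied, interval, interviews_achieved):
    """Weight per interview so the hour contributes its correct share."""
    return expected_interviews(voters_tallied, interval) / interviews_achieved

# The quiet-hour case from the text: 12 voters at a 1-in-10 station. The
# rule implies 2 interviews; the interval was cut so that 6 were achieved,
# and each is weighted down to a third of a respondent (6 x 1/3 = 2).
print(hour_weight(12, 10, 6))     # 0.333...

# A busy hour (invented): 251 voters imply 26 interviews, but the
# interviewer managed only 20, so each is weighted up to 1.3.
print(hour_weight(251, 10, 20))   # 1.3
```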
Early experience
In our first experiments in exit polling, data were collected by means of personal interview. Each selected respondent was asked how he or she had just voted and also about voting in the 1987 general election. A subset of respondents (around one in four) was then asked four further questions on issues of the election, the results of which were used in the panel discussions on the election night programme. The first four by-elections covered were Kensington, Glasgow Govan, Vale of Glamorgan and Mid Staffordshire. The mix of urban and rural seats provided a good test of the sampling methodology and, as Table 3 shows, the results were within the expected sampling error although some judicious weighting was necessary in Mid Staffordshire to produce the final result.
Although the results were within sampling error, there was a consistent overscoring of Labour. Even though this was only one or two per cent, its consistency suggested bias rather than simple error. With a small number of observations we could not, however, be sure this was simply a pro-Labour bias: in all cases except Kensington, the poll showed a bias in favour of the winning party, and in all cases it showed a bias towards the party which had 'surged' during the campaign. What made investigation of the problem more urgent was the exit poll for the 1989 European Parliament elections. As this was a national poll, many of the sampling issues were different from those discussed in this paper, but an error of four per cent in favour of Labour (outside sampling error) confirmed our earlier suspicions and understandably made us determined to find out why it had happened so we could stop it happening again at the next exit poll.
Every possible aspect of the European exit poll was examined in great detail. We proved that the sample of wards was not socially or politically biased. We investigated the size of polling districts selected and made the interesting discovery that there was no consistent relationship between voting behaviour and polling district size. We considered the possibility of interviewer effect if, for example, the interviewers did not correctly apply the sampling rules and operated some form of conscious or unconscious selection when a group of voters left the polling station together. While initially attractive, this theory relied on there being something about Labour voters which made interviewers more likely to select them, and we fairly quickly came up with convincing reasons why the opposite was just as likely to be the case.
This left us with the whole question of refusals as a possible source of bias. Our approach in the first five exit polls had been to cope with refusals by means of replacement. For every selected individual, whether actually interviewed or not, the interviewer made a note of her estimate of their age and sex. In this way a demographic record of refusals (albeit a crude one) was kept. Under the replacement method, if a woman aged 18-29 refuses, a record is made of this and the next woman aged 18-29 who is not already targeted for interview is approached as a replacement. The theory behind this is thus just the same as that behind weighting: that 18-29 year old women who are interviewed are the same as 18-29 year old women who are not.
Examination of refusal rates showed some clear patterns, with older people, and especially older men, much more likely to refuse. However, it appeared that the replacement had compensated for this and demographic differences among refusals seemed not to be the cause of the bias, although one consequence of this stage of the investigation was the decision to switch from replacement to weighting in future exit polls. As well as being aware that replacement is not the easiest method for interviewers to use, we were more worried by the impact on the interviewers' overall approach of the whole idea of replacing refusals. Once interviewers know that a refusal can easily be replaced by another person, they lose the incentive to exert their normal persuasive skills on the refusal in the hopes of converting it into an interview. By switching to a weighting approach, we were able to stress to interviewers the importance of getting as many interviews as possible, which can only be achieved by persuading the initially reluctant to take part in the survey. The mechanics of the weighting itself are very simple. Analysis of the actual questionnaires completed gives us a demographic breakdown of the interviewed sample, while analysis of the separate records kept of refusals gives us a breakdown of the refused sample. Adding these two together gives us a demographic breakdown of the population target. It is then simply a case of weighting the achieved sample so that each of the eight age-by-sex groups is set at its correct proportion of the total sample.
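The following sketch, with invented counts, shows the mechanics: interviews plus refusal records define the target profile, and each of the eight age-by-sex cells is weighted by its target share over its achieved share.

```python
# Hypothetical exit-poll counts for the eight age-by-sex cells; the
# refusal pattern (older people, especially older men, refusing more)
# mirrors the one described in the text, but the numbers are invented.
CELLS = ["M18-29", "M30-44", "M45-64", "M65+",
         "F18-29", "F30-44", "F45-64", "F65+"]

interviewed = dict(zip(CELLS, [110, 150, 160, 90, 130, 170, 180, 110]))
refused     = dict(zip(CELLS, [ 15,  25,  60, 55,  10,  20,  45,  70]))

n_interviewed = sum(interviewed.values())
n_target = n_interviewed + sum(refused.values())

# Weight per cell = target share of the population / achieved share of
# the interviewed sample.
weights = {}
for cell in CELLS:
    target_share = (interviewed[cell] + refused[cell]) / n_target
    achieved_share = interviewed[cell] / n_interviewed
    weights[cell] = target_share / achieved_share

for cell, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{cell}: {w:.2f}")
```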
While the demographic mix of refusals was not the answer, we were still convinced, mainly in the absence of any evidence of bias elsewhere, that refusals were the source of the problem. What we concluded was that there must be some direct link between voting behaviour and refusals - Conservative voters are simply more likely to refuse to take part in exit polls. If this is the case, neither replacement nor weighting to correct demographic imbalance will solve the problem and can easily make it worse. Unfortunately, this is a largely untestable hypothesis as we could hardly do a follow-up interview with refusers to ask how they had voted! We did, however, use a somewhat tangential method to address the problem and, while far from reliable, our tests suggested we were on the right track. On three consecutive weeks we asked respondents to NOP's Random Omnibus Survey whether they would take part or refuse if approached by an exit pollster; all respondents had earlier in the questionnaire been asked for their voting intention. Overall 57% said they would refuse. This is far higher than the 29% we had actually found in our exit polls, which is why the results cannot be fully relied on, but there was a consistent pattern on all three surveys, with Conservative supporters being much more likely to say they would refuse to answer an exit poll.
A new approach
Convinced that refusals held the key, we next set out to reduce the overall level of refusals. We put more effort into stressing the importance of maximising response in the interviewer instructions and increased the amount of checking of interviewer performance at the start of each interviewer's first shift. We also instructed interviewers to stand well away from the party tellers or any poll officials and, to help ensure that correct practice could be followed, we increased the level of pre-election site visits. Because confidentiality was felt to be so important, we experimented at the next by-election with the use of a numbered showcard so respondents did not need to say the party name out loud. We also tested two different interview lengths, with one version having nothing but the voting question and the other having a dozen other questions. The by-election on which these were all tested was Eastbourne, where a combination of an elderly population and non-stop rain conspired to depress response. The refusal rate was 29%, although when adjusted for a nationally representative age balance, this came down to 23%. More importantly, the exit poll (ignoring some hand-weighting by the BBC which, in retrospect, was ill-advised) was within one per cent on each party of the actual result, and the Labour vote (admittedly very low) was not overscored in the poll. Table 4 shows the levels of error of all polls under the new system.
We felt the refusal rate was still too high, and at the next by-election, Bradford North, we tested a radically different method of data collection. We felt that the reason why Conservatives were more likely to refuse was that they believed more strongly in the secrecy of the ballot: although we had taken steps to ensure no-one could overhear their answer (the showcard approach in Eastbourne had had no effect on the refusal rate), the interviewer still knew how the respondent had voted. We therefore used interviewing as before for half the sampling points, while in the other half the interviewers handed respondents a self-completion form and asked them to record how they had voted and then put the completed form in an 'NOP/BBC Ballot Box'. We also tested a reduction in interviewer workload, to see whether the pressure of keeping up with the required interviews at busy times might lead to refusals not being converted into interviews. This second test revealed no differences, but the first was striking: the refusal rate for the interview method was 22%, while for the ballot box method it was only 16%. Once again, the poll was within one per cent on each main party and once again Labour was not overscored.
We resolved to use only the ballot box approach thereafter, and introduced a new test at the next by-election, Paisley North, by varying considerably the length of the questionnaire. One half of the sample had just the voting question, while the other half had two sides of A4 to fill in. The poll was again very accurate, although Labour was overscored by one per cent. Refusal to the voting-only questionnaire was 16%; for the long questionnaire it was 28%. Some people are evidently too busy to stop and fill in a questionnaire of that length, but what was interesting was that there was no voting differential between the two halves of the sample.
TABLE 4 Errors in the last four by-elections

               Eastbourne   Bradford   Paisley   Ribble
Conservative       +1           -        -2         -
Labour              -          -1        +1         -
Liberal            +1          +1         -         -
For our final by-election, Ribble Valley, we took the questionnaire length issue a stage further by conducting two separate surveys. There was a survey to predict the result, with 2,000 respondents filling in just the voting question, and a separate 'issues' survey with 800 people completing a longer questionnaire. This second exercise was itself split into two halves, with a single-sided questionnaire used in one half and a two-sided one in the other. The poll was correct to the nearest per cent for all the main parties, although non-stop rain again pushed the refusal rate on the voting survey up to 21%. Refusal rates were again much higher for the longer questionnaires, but there was again no political difference between the different versions. We therefore concluded that there is a politically-related element of refusal, based on secrecy, which can be reduced by use of self-completion, and an entirely separate 'too busy' element of refusal which has no political component.
Conclusions
NOP have conducted exit polls at eight different by-elections on behalf of the BBC. We started out determined to design a new methodology from scratch as we had no direct experience of exit polls before. The journey through those eight by-elections and the Euro elections was a voyage of discovery. We learned a lot as we went along, we abandoned some of our new ways and returned to some aspects of previous exit polling practice, and we felt vindicated in some of our other original ideas.
Across all eight by-elections, the average error on each of the three main parties has been less than one per cent, well within the levels of sampling error one would expect. For the last three by-elections, over which our survey methodology was held pretty much constant because we felt by then we had learnt all we were going to learn, the average error was less than 0.5% per party. We are now confident that we have a methodology which will successfully predict the result of by-elections. General elections are, of course, a different matter ...
Post-election addendum
For the general election, NOP Social and Political conducted two separate exit polls on behalf of the BBC. The main exercise was a survey of over 15,000 voters in 100 marginal seats, and the second a survey of 4,500 in 75 nationally representative seats.
The main exercise had a single purpose - to drive the BBC's seat-projection computer model. The seats were chosen to represent the key Conservative-held marginal seats, and the initial seats projection on election night was based on a combination of initial assumptions about safe seats, expert predictions for special seats, and a calculation of likely results in 180 marginals based on the results of the exit poll in the 100 marginals selected for the poll.
Based on all these figures, the projection made by the BBC at 10pm was that the Conservatives would fall short of an overall majority by 25 seats. A later prediction was made at 11pm, based on the final exit poll figures. As Table 5 shows, the final projection under-estimated the Conservative total by 32 seats and over-estimated the Labour total by 25 - outside the expectation that the prediction should be within 15 seats of the actual result for each main party.
TABLE 5 Final seats projection

                   Projection   Actual results
Conservative           305           337
Labour                 296           271
Liberal Democrat        22            20
However, because the prediction is based on factors other than just the exit poll, it is not the best test of how well the exit poll itself performed. Although party share of the vote across all 100 seats was quite irrelevant as far as the projection was concerned, it is the only way in which the poll can be assessed post hoc. Table 6 shows that the error on the share of the vote was in the same direction as the error in the seats projection, but while it was greater than the level of error we had hoped for, it was by no means a poor performance. The two main parties were measured to within 2%, and the Liberal Democrats exactly.
TABLE 6 Share of the vote in the 100 marginals

                   Exit poll   Actual results
                        %            %
Conservative          42.7         44.8
Labour                38.9         37.2
Liberal Democrat      16.2         16.1
The national poll was conducted in order to provide an insight into why people had voted the way they did and into the issues of the election, but it too included a voting question. The error on the national share of the vote was similar to that on the marginals projection poll. Comparing the exit poll with the national share of the vote (columns 1 and 2 of Table 7) shows an overscore of Labour of only 1.1%. The Conservatives were underscored by a larger 2.8%, but this was mainly because of an overscoring of the 'other' parties (mainly the SNP) by 3.5%. To test a theory that this might have been a function of the sample of seats used (the sample was tested in advance for balance between Conservative and Labour, but not for the minor parties), we also compared the poll result with the actual result in those 75 seats. This is shown in column 3 of Table 7. This smooths out some of the distortion, and brings the error on the Conservative share down to 2.5%, within sampling error, but does not remove all of it. It is most likely that we are also seeing the effect of the 'other' vote being clustered in certain polling districts within constituencies - an 'other' candidate will tend to do better in his or her own local area.
TABLE 7 National share of the vote

                   Exit poll   Actual result   Actual result
                                  (national)      (75 seats)
                        %             %               %
Conservative          40.0          42.8            42.5
Labour                36.3          35.2            35.5
Liberal Democrat      18.3          18.3            18.4
Others                 5.4           3.7             3.6
Harris also carried out two polls for ITN. Their seats projection poll also indicated a hung Parliament and their national poll had a similar level of error to ours.
The performance of the polls during the election has obviously come in for a lot of criticism. With far less justification, there has also been criticism, much of it ill-informed, of the exit polls. While we did not do as well as we hoped, the NOP/BBC exit poll nevertheless performed very creditably, though on the night it failed to distinguish between a hung Parliament and a small but workable majority.
Reference
WALLER, R. (1992). Constituency polling in Britain. Proceedings of The Market Research Society Conference.