APPENDIX A
THE 1999 NEW ZEALAND ELECTION STUDY
Acknowledgements
The 1999 New Zealand Election Study (NZES) is funded primarily by the Foundation for Research, Science, and Technology (FRST), with supplementary funding from University of Waikato internal funds and the University of Waikato Faculty of Arts and Social Sciences Research Committee. Jack Vowles has been supported by a James Cook Research Fellowship for part of the project. Jeffrey Karp and Susan Banducci acknowledge Holli Semetko and the Amsterdam School of Communications Research (ASCoR), both for research support and for time to revisit New Zealand in January 2001. More generally, research support for Susan Banducci has been provided by the European Union's Fifth Framework Programme, and research support for Jeffrey Karp by the Netherlands Organisation for Scientific Research (NWO). Peter Aimer acknowledges with thanks the continued institutional support of the Political Studies Department at the University of Auckland.
Research Design and Implementation
(a) Samples
The 1999 NZES has four major components:
1. A New Sample. This was randomly selected from the electoral rolls, proportionately from each of the 67 parliamentary electorates. Questionnaires were sent by post, timed to arrive from Monday, 29 November onwards, followed by a reminder postcard ten days later and, ten days after that, a second questionnaire to non-respondents. After the end of January, a month in which many people take holidays, shorter telephone interviews were conducted over about three weeks with non-respondents for whom telephone numbers could be found. The postal response rate was 58% (N=940), and the telephone interviews added another 6% (N=119), for a combined response rate of 64% (N=1059).
2. Election to Election Panels. These contained respondents from the 1990, 1993 and 1996 NZES (Vowles and Aimer 1990; Vowles, Aimer, Catt, Lamare, and Miller 1995; Vowles, Aimer, Banducci, and Karp 1998). The 1990 panel had an N of 960; the 1993 panel, 1128; the 1996 panel, 1770. Across all panels, 2231 respondents completed the postal questionnaire and 149 were followed up by phone. The mailing and interview schedule followed that for the new sample. Respondents within each panel were subject to different levels of response-rate attrition.
3. The Campaign Pre-Election Sample (N=3790, a 54% response rate). This was a random national sample from households with telephone numbers provided by Telecom. Respondents were randomly selected within households. During the five-week campaign, 3409 respondents agreed to a 15-minute interview and 381 to a shorter interview designed to capture the key voting variables and so enhance the response rate. Respondents who gave long interviews were asked to participate post-election and were mailed the post-election questionnaire. Of these, 2060 responded again by post and 428 by phone, for a final pre- and post-election panel response rate of 65 per cent (see the worked arithmetic after this list). The mailing and interview schedule followed that for the new sample.
4. Maori Election Study. The Maori Election Study is a sample of 1000 based on personal interviews conducted on behalf of the NZES by A. C. Nielsen (NZ) Ltd. The method was a fully national multi-stage stratified probability sample with clustering (see the sketch after this list). A. C. Nielsen-defined area units containing less than 5 per cent Maori were excluded, but these covered only 2 per cent of the Maori population. Households were sampled and respondents chosen randomly within them. The sample is weighted by age and gender to reflect the Maori population. Personal interviews were chosen because of the high rate of residential mobility among Maori, especially younger Maori, together with lower access to telephones, a tendency to live in larger households than the general population, and the greater cultural acceptability of interviewing kanohi ki kanohi (face to face). The interviews followed a structured format and lasted approximately 40 minutes. The response rate was 54%. Questions were adapted from the main post-election questionnaire, with some additions.
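The panel response rate reported for the campaign sample (item 3 above) can be reproduced directly from the stated counts. A minimal sketch in Python, using only numbers given in the text:

    # Worked arithmetic for the pre- and post-election panel (item 3).
    pre_election_sample = 3790   # campaign pre-election respondents
    post_by_mail = 2060          # responded again by post
    post_by_phone = 428          # responded again by phone
    rate = 100 * (post_by_mail + post_by_phone) / pre_election_sample
    print(round(rate, 1))        # 65.6, reported above as 65 per cent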
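The multi-stage design described for the Maori Election Study (item 4 above) can be illustrated in outline. The sketch below uses entirely hypothetical area units and household counts, and assumes probability-proportional-to-size selection at the first stage, which the text does not specify.

    import random

    # Hypothetical area-unit frame; real frame details are not in the text.
    area_units = [
        {"id": "AU1", "households": 1200, "maori_share": 0.12},
        {"id": "AU2", "households": 800,  "maori_share": 0.03},
        {"id": "AU3", "households": 1500, "maori_share": 0.40},
    ]

    # Exclude area units containing less than 5 per cent Maori (AU2 here).
    frame = [au for au in area_units if au["maori_share"] >= 0.05]

    # Stage 1: select area units, probability proportional to size (assumed).
    sizes = [au["households"] for au in frame]
    selected = random.choices(frame, weights=sizes, k=2)

    # Stage 2: sample households within each selected unit, then choose
    # one respondent at random within each household.
    for au in selected:
        for household in random.sample(range(au["households"]), k=10):
            occupants = ["person_a", "person_b", "person_c"]  # hypothetical
            respondent = random.choice(occupants)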
(b) Vote Validation
A researcher employed by the NZES inspected the marked rolls held at Electoral Offices throughout the country and ascertained whether or not respondents had cast a vote. Respondents who reported voting when they had not were recoded as non-voters.
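A minimal sketch of the recode this implies, with hypothetical field names; the NZES's actual variable names are not given in the text.

    # Respondents who reported voting but were not marked on the rolls
    # are recoded as non-voters; field names are hypothetical.
    respondents = [
        {"id": 1, "reported_vote": True,  "marked_on_roll": True},
        {"id": 2, "reported_vote": True,  "marked_on_roll": False},  # misreport
        {"id": 3, "reported_vote": False, "marked_on_roll": False},
    ]
    for r in respondents:
        r["validated_voter"] = r["reported_vote"] and r["marked_on_roll"]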
(c) Non-Response Error and Weights
Response rates differed substantially across subsamples, raising the possibility of non-response bias. Response patterns to key questions were compared across the three subsamples directly administered by the NZES. In general, there was little evidence of increased non-response error in the two subsamples subject to panel attrition, and for most purposes the three subsamples were therefore combined. All showed a slight bias toward Labour, consistent with the usual tendency of post-election samples to over-represent voters for the most popular party. They also over-represented people with higher education and, slightly, women. The total sample was therefore weighted to correct these biases, with weights for education, age and gender (household size for the campaign sample), and validated party vote. An over-representation of Maori electorates, due to the 1996 over-sampling of Maori electorates in the panel section of the data, was also corrected.
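One standard way to construct such weights is iterative proportional fitting ("raking") to known population margins. The sketch below illustrates the idea with hypothetical margins for gender and education; the NZES's actual weighting cells and targets are not given in the text.

    # Raking: adjust weights so the weighted sample matches each target
    # margin in turn, iterating until the margins converge. All data and
    # targets here are hypothetical.
    sample = [
        {"gender": "f", "educ": "high"},
        {"gender": "f", "educ": "low"},
        {"gender": "m", "educ": "high"},
        {"gender": "m", "educ": "low"},
        {"gender": "f", "educ": "low"},
    ]
    targets = {
        "gender": {"f": 0.51, "m": 0.49},
        "educ":   {"high": 0.30, "low": 0.70},
    }
    weights = [1.0] * len(sample)
    for _ in range(20):
        for var, dist in targets.items():
            for level, share in dist.items():
                members = [i for i, r in enumerate(sample) if r[var] == level]
                current = sum(weights[i] for i in members) / sum(weights)
                for i in members:
                    weights[i] *= share / current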
At times, the between-election panel, the full pre-election campaign data, and the pre- and post-election panel have been used separately for particular purposes. Post-election analysis of the campaign pre-election sample indicated biases toward Labour at particular points in the campaign. Where this bias is not problematic, the pre-election data has simply been weighted by age, gender and household size. For the pre- and post-election panel, the data is weighted, for every day of the campaign, by respondents' validated votes on election day. Over-representation of voters for the winning party is common in post-election surveys but less so in pre-election surveys, and prolonged investigation of our data and methods found no consistent explanation. We note, however, that a comparable study before the 2001 British election appears to have encountered a similar Labour over-estimation problem. Some remaining differences between our findings and those of published polls may be due to question ordering: unlike other pollsters, we asked the questions that are the main predictors of vote before asking vote intention. Questions on the first debate, in which most observers agreed Alliance leader Jim Anderton did best, may have shifted some of our respondents from a Labour to an Alliance intention in late October and early November. Similarly, these estimates may have been more sensitive to issue effects that favoured National in mid-November.
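The daily weighting works by reweighting each campaign day's respondents so that their validated election-day votes match the validated national distribution. A minimal sketch, with hypothetical party shares and interview records:

    from collections import Counter

    # Validated election-day vote shares (hypothetical figures).
    validated_shares = {"Labour": 0.39, "National": 0.31, "Other": 0.30}

    interviews = [
        {"day": "1999-11-01", "validated_vote": "Labour"},
        {"day": "1999-11-01", "validated_vote": "Labour"},
        {"day": "1999-11-01", "validated_vote": "National"},
        {"day": "1999-11-02", "validated_vote": "Other"},
    ]

    # Within each day, weight each respondent by the ratio of the target
    # share to that day's observed share for their validated vote.
    for day in {r["day"] for r in interviews}:
        todays = [r for r in interviews if r["day"] == day]
        counts = Counter(r["validated_vote"] for r in todays)
        for r in todays:
            observed = counts[r["validated_vote"]] / len(todays)
            r["weight"] = validated_shares[r["validated_vote"]] / observed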
Further details of variations in weights and use of subsamples may be found in individual chapters.
Candidate Survey
The candidate survey provides data on respondents' backgrounds, recruitment and selection, role as an MP (if relevant), and attitudes. Questions on issues and policies replicated those in the voter surveys, enabling comparison between the attitudes and behaviour of voters and those of political elites. Similar surveys were conducted in 1993 and 1996. The survey was administered as a post-election, self-completion questionnaire sent to all candidates nominated by the ACT, Alliance, Green, Labour, National and New Zealand First parties. Together these parties accounted for 93 per cent of the party votes and 119 of the 120 MPs. All candidates in the sample received a follow-up letter and postcard. The overall response rate was 62 per cent (N=282), distributed among the parties as follows: ACT 68% (N=48); Alliance 77% (N=55); Green 72% (N=52); Labour 64% (N=56); National 52% (N=44); New Zealand First 40% (N=27). The response rate for the subset of candidates elected to Parliament was 50 per cent (N=60).