APPENDIX 1
THE 2002 NEW ZEALAND ELECTION STUDY
The 2002 New Zealand Election Study (NZES) was conducted by telephone and mail questionnaire. Post-election questionnaires were sent by mail so that they arrived on Monday 29 July, and were followed by a reminder postcard ten days later and a second questionnaire ten days after that to those who did not respond. From mid-October until the end of November, 15-minute telephone interviews were conducted with non-respondents for whom telephone numbers could be found. Not all questions could be included given the interview length constraint. The overall response rate from the citizen samples selected was 60.8 per cent. However, some subsamples were taken from previous samples and therefore subject to attrition.
The 2002 NZES has five major components.
1. A New General Sample. This was randomly selected from the electoral rolls, proportionately from each of the 62 general parliamentary electorates, and conducted immediately after the election, as detailed above. For the new sample the postal response rate was 44.6 per cent (N=1338), with the telephone interview adding another 8 per cent (N=248), making a combined response rate of 52.2 per cent (N=1586).
2. Election to Election Panels. Administered after the election as explained above, these samples contain respondents from the 1996 and 1999 NZES (Vowles, Aimer, Banducci and Karp 1998; Vowles, Aimer, Karp, Banducci, Miller and Sullivan 2002). The 1996 panel had an N of 533 and the 1999 panel 537. Of all panel respondents, 1040 completed the postal questionnaire and 120 were followed up by phone. Respondents within each panel were subject to different levels of response rate attrition, but no significant or obvious non-response bias was apparent.
3. The Campaign Pre-election Sample and Pre-/Post-Election Panel. The pre-election campaign N was 3590, with a target of 100 interviews per day over the 36 days immediately before the election. The response rate was 34 per cent. This was a random national sample from households with telephones, conducted for the NZES by ACNielsen (NZ) Ltd. Respondents were randomly selected from within households. Campaign respondents were also asked to participate after the election, and the 3190 who agreed to do so were mailed the post-election questionnaire. Of these, 2008 responded again by post and 514 by phone, a response rate of 79 per cent of those who had agreed to participate. Merging the pre- and post-election data by respondent constitutes the pre-/post-election panel. The post-election mailing and interview schedule followed that detailed above.
4. Maori Election Study. The Maori Election Study is an over-sample from the Maori electoral rolls, with an N of 500 (403 postal, 97 by phone). The response rate was 33.3 per cent, of which 27 per cent came from the mail questionnaire and the remainder by telephone. The mailing and interview schedule followed that explained above.
5. The Candidate Study. Over the same period as the voter surveys, although without the telephone top-up, mail questionnaires were sent to all candidates standing for parties with seats in the House, or likely to gain any. The candidate survey provides data on respondents' backgrounds, recruitment and selection, roles as MPs (if relevant), and attitudes. Questions on issues and policies replicated those in the voter surveys, enabling comparison between the attitudes and behaviour of voters and candidates. Response rates were as follows:
Candidate Survey Response Rates, 2002 NZES
Vote validation
A researcher employed by the NZES inspected the marked rolls held at electoral offices throughout the country and determined whether or not each respondent had cast a vote. Those who reported voting but for whom no vote could be validated were subsequently redefined as nonvoters.
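The recode described above can be sketched in a few lines of Python. The record layout and field names here are hypothetical, invented for illustration; the logic is simply that a respondent counts as a voter only when a reported vote is confirmed on the marked rolls.

```python
# Each record holds a respondent's self-reported turnout and whether a
# vote was found on the marked rolls (field names are hypothetical).
respondents = [
    {"id": 1, "reported_voted": True,  "validated_voted": True},
    {"id": 2, "reported_voted": True,  "validated_voted": False},  # misreport
    {"id": 3, "reported_voted": False, "validated_voted": False},
]

for r in respondents:
    # Misreporters -- a reported vote with no vote on the rolls --
    # are redefined as nonvoters.
    r["voter"] = r["reported_voted"] and r["validated_voted"]
```

Respondent 2 above, having reported a vote that could not be validated, ends up classified as a nonvoter.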
Non-response error and weights
Response rates differed substantially across subsamples, the lower ones raising possible issues of non-response bias. In particular, the initial response rate for the campaign survey was much lower than in our previous experience. However, our campaign response rate was marginally better than that reported for the Annenberg National Election Study of the United States presidential election, which is described as typical of telephone survey response rates in the United States (Romer, Kenski, Waldman, Adasiewicz and Jamieson 2004, 15). Response rates for survey research in Australia and New Zealand have been steadily declining for some time (Bednall and Shaw 2003). Low response rates as such do not necessarily produce unrepresentative samples (Keeter and others 2000; Jago and Shaw 1999). The campaign poll tracked closely with other polls taken during the 2002 campaign (for which no response rates have been published). Political polling in general indicates that relatively low response rates do not prevent polling data about political preferences from corresponding to actual election results within normal sampling error (Panagakis 1999). Concerns remain, however, where responses to questions may correlate with variables that help determine response and non-response.
Response patterns to key questions were compared across the subsamples. In general, there was little evidence of any obvious increase in non-response error in the subsamples where response rates were low and/or subject to panel attrition. Given this, for most purposes the three subsamples were combined. The total sample was weighted to correct some minor biases, with weights for age and gender (household size for the campaign sample), and validated party vote. Different weights were calculated both including and excluding the telephone top-up, as the telephone interviews omitted a significant number of questions. Overrepresentation due to the 1996 and 2002 over-sampling of Maori electorates was also corrected.
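The kind of weighting described above can be illustrated with a minimal post-stratification sketch: each cell (here age by gender) receives a weight equal to its known population share divided by its sample share, so that the weighted sample reproduces the population distribution. All figures below are invented for illustration, not the actual NZES weighting cells or targets.

```python
# Hypothetical population shares and sample counts for age-by-gender cells.
population_share = {("18-39", "F"): 0.20, ("18-39", "M"): 0.20,
                    ("40+",   "F"): 0.31, ("40+",   "M"): 0.29}
sample_counts = {("18-39", "F"): 150, ("18-39", "M"): 120,
                 ("40+",   "F"): 400, ("40+",   "M"): 330}

n = sum(sample_counts.values())  # total sample size

# Weight = population share / sample share for each cell.
weights = {cell: population_share[cell] / (count / n)
           for cell, count in sample_counts.items()}
```

Cells under-represented in the sample receive weights above 1, over-represented cells weights below 1, and the weighted total still sums to the sample size.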
At times, the between-election panel, the full pre-election campaign data, and the pre- and post- election panel have been used separately for particular purposes. Further details of use of subsamples may be found in individual chapters.
As with its earlier studies from 1990 onwards, the NZES 2002 dataset is available for other researchers to analyse from the Australian Social Science Data Archive, Australian National University. Further information about the NZES can be found on its website, www.nzes.org.
APPENDIX 2
METHODOLOGY: STATISTICAL METHODS AND VARIABLE CODING
The nature of the data
There are three main kinds of data analysed in this book: nominal; ordinal; and interval.
Nominal data is categorical: for example, voting choice itself, which is for National, for Labour, and so on. The categories are, in a sense, qualitative, for there is no obvious ordering or ranking of these choices.
Ordinal data, by contrast, can be ranked: a respondent may choose, for example, more, the same, or less, but we have no way of knowing how much more or how much less.
Interval data, however, clearly indicates how much more or less: the age of a person, for example.
Correlation
Correlation is a simple and straightforward measure of the strength of a bivariate relationship, that is, one between two variables only. Normally one of these will be an explanatory or independent variable, and the other a dependent variable, the value of which we seek to explain. One of the most powerful indicators of correlation is Pearson's r, which should be used with interval data only. Its values can range between −1 and +1. A less powerful but less restrictive alternative is Cramér's V.
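Both statistics are straightforward to compute by hand. The sketch below, in pure Python with invented data, follows the standard formulas: Pearson's r as the covariance of two interval variables divided by the product of their standard deviations, and Cramér's V as the square root of chi-squared over n times one less than the smaller dimension of a contingency table.

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation for two interval variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cramers_v(table):
    """Cramér's V from a contingency table given as a list of rows of counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0]))  # smaller of rows, columns
    return math.sqrt(chi2 / (n * (k - 1)))
```

Pearson's r runs from −1 to +1, while Cramér's V runs from 0 (no association) to 1 (perfect association), which is why V suits nominal data where the direction of a relationship has no meaning.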