Abstracts

25 Nora Cate Schaeffer
Actions, Interactions, and Interviewer Training: Suggestions from an Interactional Model of the Recruitment Call
Although many data collection efforts rely on self-administered web, CASI, and paper instruments for some or all of the self-reports they obtain, interviewers still perform key functions on important statistical and research surveys in obtaining cooperation and consent – in addition to performing traditional and innovative data collection tasks. Early models of decisions to participate focused on the decision processes of the interviewer and respondent separately. This psychological model of decision making is complemented by interactional models that examine the actions of each party, the structure and features of those actions, and how they are ordered in interaction. An interactional model is oriented toward conversational practices, the actions that compose them, and how they are deployed to serve the goals of research interviewing. Because of this orientation, an interactional model suggests research topics and experiments that have implications for field protocols and for training interviewers. In addition, such a model directs attention to the context of the interaction created by the field protocol (e.g., advance letters), mode (e.g., telephone or face-to-face), population (e.g., younger or older) and technology (e.g., landline or cell). I use an interactional model of the recruitment call to describe what we know about the actions in which we want to train interviewers (e.g., introducing themselves, describing the study). By examining the path of interaction in the recruitment call, we also identify specific locations with possible implications for the field protocol, for example, sample members who say “not interested” and the exit from a refusal call.

39 Annemieke Luiten, Edith de Leeuw and Barry Schouten
First results of the International Questionnaire on Nonresponse

Many of us will have come across, read, or even cited the 2002 chapter by Edith de Leeuw and Wim de Heer on trends in household survey non-response, in which they demonstrated that response rates were slowly declining. This work was based on the International Questionnaire on Nonresponse, an initiative launched at the first Workshop on Household Survey Non-response in 1990; data collection took place from 1991 to 1997. Although the work is still cited today, we felt it was time to supplement the old series with information on the period between 1997 and now. We have therefore prepared an updated version of the questionnaire that incorporates new phenomena such as web and mixed-mode data collection. The questionnaire was sent to previous participants and other NSIs, as well as to academic and commercial organisations. At the workshop, we will present first results.

32 Annemieke Luiten
Incentives in Official Statistics: Effects on response, representativeness and target variables. 
All CBS person and household surveys employ a mixed-mode design, with web as the first mode. A high web response is of considerable financial importance. In the past years, CBS has experimented with unconditional incentives (included in the advance letter), conditional incentives in the form of gift certificates for respondents, and conditional incentives in the form of raffled (large) prizes, such as iPads or gift certificates worth €250. In this presentation I will present the results of a series of experiments on response, representativeness and target variables.

14 Edith de Leeuw, Joop Hox, Benjamin Rosche
Survey Attitude, Nonresponse and Attrition in a Probability-Based Online Panel

A high response rate among the sample units approached is one of the cornerstones of survey research (Groves, 1989), and growing nonresponse has been a constant worry of survey statisticians all over the world (De Leeuw & De Heer, 2002). Several theories on why nonresponse occurs have been developed over the years (Stoop, 2005), and survey climate and attitudes towards surveys are key concepts in these theories (Loosveldt and Storms, 2008).
To measure survey attitude, a brief nine-question scale was developed for use in official statistics and (methodological) survey research. This survey attitude scale is based on earlier work by multiple authors (Cialdini, 1991; Goyder, 1986; Singer, 1998; Stocké, 2006; Rogelberg et al., 2001). Key dimensions in the development of this scale were survey value (the value ascribed to surveys), survey enjoyment (reflecting the assumption that respondents can enjoy participating in surveys), and survey burden (reflecting that too many surveys are being undertaken, increasing respondent burden).
Preliminary research with this scale in a Dutch online survey (CentERpanel) and a German telephone survey (PPSM) indicates that the survey attitude scale is reliable and has predictive validity. Survey attitude is weakly related to survey nonresponse in the expected direction (more enjoyment, higher value, and less burden are associated with less nonresponse).
In this study we investigate whether survey attitude and its three subdimensions are stable over time and related to wave nonresponse in a probability-based online panel. Data from six consecutive waves of the LISS panel are analyzed using multilevel modeling.
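A sketch of the kind of random-intercept logistic specification such a multilevel analysis typically uses (the abstract does not give the exact model; covariate names are illustrative):

$$\operatorname{logit}\Pr(\mathrm{NR}_{it}=1) = \beta_0 + \beta_1\,\mathrm{enjoyment}_i + \beta_2\,\mathrm{value}_i + \beta_3\,\mathrm{burden}_i + \beta_4\,\mathrm{wave}_t + u_i, \qquad u_i \sim N(0,\sigma_u^2),$$

where NR_it indicates wave nonresponse of panel member i at wave t, and the person-level random intercept u_i captures stable differences in response propensity beyond the measured attitude dimensions.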
 
29 Wojciech Jablonski
Interviewers in Computer Assisted Telephone Interviews: Standardization Controversy
 
The paper describes selected results of a methodological study carried out between 2009 and 2012 among CATI fieldwork staff employed in twelve major Polish commercial survey organizations. The paper focuses on the outcomes of the qualitative part of this project: in-depth interviews (IDIs) conducted with experienced telephone interviewers (32 cases) and IDIs with CATI studio managers (8 cases). The interviewers were encouraged to discuss their experiences of conducting interviews. In particular, interviewers were asked how often and in what contexts they were likely to infringe the standardization rule. CATI studio managers were invited to comment on the remarks delivered by the interviewers. The interviewers referred to many situations in which they would not adhere to a standardized script when conducting interviews. They indicated that the main reason for this practice is that questions used in CATI scripts are often formulated using complicated vocabulary and syntax. CATI studio managers tend to share this opinion. As they noted, if researchers placed greater importance on the design of research tools, there would be no reason to destandardize the procedure. Although both the interviewers and fieldwork managers are aware of the importance of standardizing the interview protocol, there seems to be tacit consent across most research firms to deviate from the rules when respondents appear to experience cognitive difficulties. Interviewers’ opinions are a valuable source of information about the interview process, and these perspectives should be taken into consideration when designing and testing CATI scripts. Moreover, as strict standardization regulations often do not correspond to the reality of fieldwork, it seems that some elements of conversational interviewing should be introduced or developed, so that interviewers can more effectively handle difficult situations that may occur during telephone interviews.
 
 
16 Eva Furubjlelke, Chandra Adolfsson, René Ojeda
Handling incoming calls and messages from respondents in order to streamline the data collection and provide good service.
 
Experiences within Statistics Sweden show that the way incoming calls, emails and text messages from respondents in different surveys are handled may have an impact on the efficiency of the data collection process. Statistics Sweden receives thousands of incoming calls and messages from respondents each month (counting only the voluntary surveys). Today, a few separate “help-desk” functions have been established at SCB, each responsible for handling incoming communication with respondents in a different field, such as individual and household surveys, mandatory business surveys, and surveys of organizations and authorities. Templates and procedure descriptions have been developed to support the process of handling incoming calls and messages.
In the past year, SCB has started to evaluate these services (the help-desk functions for respondents) in more depth, especially within the field of interview surveys, since this field suffers from increasing costs due to the growing number of outgoing call attempts needed to obtain each interview. The proposed paper and a presentation at the Non-response workshop will describe how we measure and evaluate the effects and results of incoming contacts from respondents (calls, e-mails and text messages), mainly within the field of interview surveys. We will present to what extent we manage to make incoming calls result in interviews, fewer non-contact outcomes and fewer outgoing calls.
Another important future concern for SCB is how best to organize these “help-desks” for incoming contacts with respondents with different needs. Can and should all incoming calls, emails and text messages be handled by a single joint “help-desk” for all respondents? What would be required of such a function, given the differing needs of different types of respondents and surveys?
 
4. Jamie Moore, Gabrielle B. Durrant, Peter W.F. Smith
Using linked census data to estimate survey non-response biases
 
Non-response biases in survey estimates are deviations from population values caused by some sample members not responding to the survey and by differences between respondents and non-respondents. Since they compromise dataset quality, considerable effort is expended by survey designers to evaluate and minimise such biases. However, these biases are often difficult to estimate accurately, due to the lack of information available about non-respondents. One solution is to use linked information on all sample members from other sources to estimate non-response biases, either by comparing the attributes of the sample and the respondent dataset or directly from equivalent covariates. Here, uniquely, we use linked 2011 Census attribute data to evaluate such biases in three contemporaneous UK social surveys. To compare sample and respondent dataset attributes, we utilise representativeness indicators, which provide multivariate, decomposable measures of (risks of) non-response biases given an attribute covariate set, based on variation in estimated sample response propensities. In addition, we quantify biases in single covariates, and compare census- and survey-based estimates. We evaluate biases both in final datasets and, given the availability of call record paradata detailing attempts to interview sample members, over the course of data collection. In the latter case we also investigate points in call records after which dataset quality does not substantially improve and where alternative methods (including ending data collection completely) should be considered. Given our findings, we then offer guidance concerning the use of these techniques to assess non-response biases during survey data collection.
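As a sketch of the representativeness-indicator idea described above – assuming the common form R = 1 − 2·S(ρ̂) with response propensities from a logistic model, unweighted for simplicity – using simulated stand-ins for the linked census attributes:

    import numpy as np
    import statsmodels.api as sm

    def r_indicator(X, responded):
        # Response propensity model on auxiliary covariates known for the
        # full sample (respondents and nonrespondents alike).
        rho_hat = sm.Logit(responded, sm.add_constant(X)).fit(disp=0).predict()
        # R = 1 - 2 * S(rho_hat): 1 = fully representative response,
        # lower values = higher risk of non-response bias.
        return 1.0 - 2.0 * rho_hat.std(ddof=0)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 3))      # stand-ins for linked census attributes
    responded = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * X[:, 0]))))
    print(round(r_indicator(X, responded), 3))

Partial R-indicators, to which the decomposability mentioned above refers, attribute this propensity variation to individual covariates or categories.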
 
1. Gina Walejko
A New Contact Strategy: Targeted Advertising for Address-Based Survey Samples
 
Surveys are in peril as response rates continue to dwindle. Such decreases are primarily related to increased refusal rates and decreased contact rates. To make matters worse, this decline appears across survey modes. Fortunately, the rise in digital technology use provides new ways to contact survey respondents. For example, advertisers increasingly serve individual consumers hyper-targeted digital ads by matching postal addresses and other geographic information to digital profiles such as Facebook accounts and mobile device identifiers. This process is similar to how address-based surveys match phone numbers to sample addresses. Inviting matched sample addresses to respond to online surveys via targeted digital advertisements therefore offers a potentially useful way to connect with potential respondents. The opportunities afforded by this new contact strategy include targeting the message, the form, and the placement of survey invitations based on information from the sampled addresses’ digital profile history and other databases, such as past media consumption patterns. Contacting potential respondents online also has the potential to reach “hard-to-reach” populations who are less likely to check their postal mail, be home, or pick up a phone.
However, using targeted digital messages to invite respondents to participate introduces new considerations. From a data quality perspective, individuals with digital footprints on Facebook and mobile phone users are more likely to be younger and members of minority groups, introducing coverage error. Similarly, nonresponse errors may vary by digital media consumption, which is not consistent across demographics. From a policy perspective, digital advertising challenges government statistical agencies to reexamine policies that make it difficult to share sample and response information with outside entities, for example by restricting the passing of sample addresses to organizations whose employees have obtained training and whose systems are certified.
 
12. Folasage Ariyibi for Andrea Lacey and Matthew Greenaway
Investigating attrition on the Labour Force Survey
 
With response rates for social surveys continuing to fall, the issue of respondents ‘dropping out’ of longitudinal or rotating panel surveys – known as ‘attrition’ – is becoming increasingly important. Relatively little is known about the process of attrition on the UK Labour Force Survey (LFS), and an investigation has been carried out to address this issue. The focus of this research was to answer three questions:
 
·         What are the key respondent characteristics that influence an individual’s likelihood to drop out of the LFS?
·         Does attrition have an impact on headline LFS estimates?
·         How can we mitigate attrition bias?
 
Logistic regression was carried out using the six characteristics that appear most to influence an individual’s propensity to drop out – age, region, tenure, disability status, ethnicity and household type – and parameter estimates from this model were used to apply a ‘sample based’ adjustment to the LFS weighting. The impact this had on headline labour market statistics was fairly small, but was consistent across time periods in both magnitude and direction, suggesting that attrition does have an impact on headline LFS figures.
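A minimal sketch of such a sample-based adjustment, assuming it takes the usual inverse-propensity form (the abstract does not spell out the exact adjustment; column names are hypothetical):

    import statsmodels.formula.api as smf

    def attrition_adjusted_weights(df):
        # df: pandas DataFrame of wave-1 respondents with 'retained' = 1 if
        # they responded again at the later wave, plus the six characteristics.
        model = smf.logit(
            "retained ~ C(age_group) + C(region) + C(tenure)"
            " + C(disability) + C(ethnicity) + C(hh_type)",
            data=df,
        ).fit(disp=0)
        # Divide the design weight by the estimated retention propensity,
        # up-weighting stayers who most resemble the drop-outs.
        df["adj_weight"] = df["design_weight"] / model.predict(df)
        return df[df["retained"] == 1]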
 
Applying an adjustment from logistic regression is not a practical method for use in the automated LFS production system, so further research into the LFS imputation method of rolling data forward was carried out.
 
This paper describes how the logistic regression model was constructed, how it was used to adjust the weighting methodology, and the impact this had. We also provide conclusions from investigating other methods to remove attrition bias and present recommendations that could be used to improve LFS estimates.
 
 
21 Annette Scherpenzeel, Michael Bergmann, Sabine Friedel, Johanna Bristle 
Can we use the relationship between income, item nonresponse and panel attrition in an adaptive fieldwork design? A study in the Survey of Health, Ageing and Retirement in Europe (SHARE)
 
For the sixth wave of data collection, a responsive fieldwork design was implemented in the German sub-study of SHARE. We monitored several respondent characteristics, known from previous waves, in relation to response outcomes and implemented adaptations of procedures. However, the lowest response probability we observed was related to income item nonresponse in the previous wave: respondents who gave no answer to the income question in the previous wave started with a much lower response probability than any other group, and their probability mostly remained low. Although this hence seems to be a group for which responsive measures are especially worthwhile, it is difficult to design effective measures during fieldwork, or in preparation for a new wave, without knowing more about the possible common cause of income nonresponse in one wave and unit nonresponse in the next. Our follow-up study therefore concentrates on this group and on that common cause. First, we are exploring the extensive information available in SHARE about panel members and about the response process. In this way, we try to find out whether attrition is preceded by a pattern of never answering income questions up to a certain wave, or whether the drop-out follows immediately after one wave of item nonresponse. In addition, we thoroughly analyze the characteristics of this group of respondents, to reveal possible relationships with other types of item nonresponse and to answer the question to what degree the interviewer can be viewed as the common cause. Second, we will interview a selection of these panel members about their reasons for not answering the income questions. In the end, the project will result in a proposal for better adapted strategies for this subgroup of respondents, to prevent them from dropping out.
 
7. Ineke Stoop
Panel: Non response, data protection and research ethics
 
Survey researchers aim to increase response rates and to collect information on respondents and nonrespondents in order to assess nonresponse bias. These efforts – if successful – will clearly improve survey quality. Research ethics, including the need to protect the privacy of sample persons, may stand in the way of these efforts, however. May we re-approach initial refusers and try to convert them into participants? What information should advance letters give on the possible use of the data collected and on the persons who will have access to the data? Is it allowed to have interviewers observe characteristics of the dwelling and neighbourhood of nonrespondents, and to record the reason for refusal? And may we link register data to sample frame data as auxiliary data to better assess nonresponse bias, and possibly to adjust for it? In this panel session short introductions will be given by the following persons:
Ineke Stoop (introduction)

8. Dominique Joye 
Survey Climate

Countries face very different conditions for surveys, and this of course has an impact on fieldwork and the way it is organized. In international comparison we have a well-known paradox: the countries with the highest response rates are often the ones that invest the least effort in fieldwork. That does not mean, of course, that it is not useful to invest resources in fieldwork, but rather that the contextual conditions are of prime importance. But how can these contextual differences be taken into account? One way is to compare the measures taken by different countries in international surveys to enhance response rates; another is to try to conceptualize the local conditions encountered. The concept of “survey climate” proposed by Lars Lyberg in the nineties could be a tool in this direction. In line with this perspective, some measures of survey climate have been proposed that use surveys themselves. Even if it is a little paradoxical to use surveys to study surveys, this approach can be useful for specifying the different facets that a concept like survey climate can refer to. We propose to discuss the usefulness of the concept of survey climate for studying non-response, and ways to approach it using, among others, a set of surveys conducted in Switzerland over the last ten years.

9. Linn-Merethe Rød (linking auxiliary data to survey data)
Legal and ethical issues when collecting and using information on non-respondents
With decades of falling response rates, even in what can be considered benchmark surveys, nonresponse is acknowledged as a considerable and increasing problem for most general population surveys. One approach to dealing with the challenge of nonresponse is to collect, from sources other than the individuals themselves, available information on non-respondents that correlates with response propensity and the key variables of the survey.
 
Available information could, for instance, consist of neighbourhood contextual information such as population density, crime rates and poverty; geolocation indicators describing geography, including transport networks and local amenities such as proximity to schools, libraries and hospitals; behavioural or attitudinal data captured from social media; and administrative data on tax collection, pension or welfare benefit computations.
 
Privacy concerns emerge, however, when a relatively large amount of such information is collected and, furthermore, can be linked to households, addresses and, ultimately, identifiable individuals (e.g. by name or a code referring to a name). In general, the respondent’s informed consent constitutes the ethical and legal basis for the processing of personal data in survey research. Consent to collect data on non-respondents is, however, typically not feasible to obtain. The main privacy questions thus relate to what is required to rely on legal and ethical alternatives to consent-based processing. An important aspect to take into consideration is that non-respondents who have refused to take part in the survey presumably expect the researcher not to include them at all.
The presentation will focus on privacy issues relating to the processing of personal data on non-respondents, with reference to ethical guidelines, current EU legislation and the new EU Data Protection Regulation, which will be effective from May 2018.
 
10. Sarah Butt (Practicalities in the ADDresponse project)
11. Cato Hernes Jensen (refusal conversion and collecting information on nonrespondents)
 
28. Johanna Bristle (special session)
Do Interviewers’ Reading Behaviors Influence Survey Outcomes? Evidence from a Cross-National Setting
 
Interviewers play a fundamental role in collecting high-quality data in face-to-face surveys. Standardized interviewing is the gold standard in quantitative data collection, and deviations from this interviewing technique are assumed to have negative implications for survey outcomes. This paper contributes to the literature on deviant interviewer behavior by analyzing to what extent interviewers’ reading behavior changes within individuals across the survey’s field period, and whether this has implications for survey outcomes. Using item-level paradata from the Survey of Health, Ageing and Retirement in Europe (SHARE), we focus our analyses on the introduction items of selected questionnaire modules. In contrast to previous research, this allows us to disentangle interviewers’ reading times from respondents’ response times. In addition, the data source allows us to carefully control for confounding effects. Based on a fixed-effects approach, our results show systematic changes in interviewers’ reading times. First, interviewers’ reading times significantly decrease over the survey’s field period, even when controlling for period effects, relevant respondent characteristics and specific aspects of the interview. Second, a cross-national comparison including 14 European countries plus Israel reveals that the decrease is uniform in nearly all of them, suggesting its generalizability over a wide spectrum of conditions. Third, the decrease influences survey outcomes less negatively than expected, and to a varying degree depending on the informational content of the item read by the interviewer; it is especially relevant for within-survey requests. On the basis of these findings, we discuss possible consequences for questionnaire design as well as interviewer training and fieldwork monitoring.
 
 
34. Gudridur Helga Thorsteinsdottir
In 2013, the mode of the Icelandic Household Budget Survey was changed from CAPI to CATI in order to reduce costs. Several problems accompanying the new survey mode contributed to an increase in non-response. Although many of those problems have been dealt with in the past three years, response rates for the expenditure diaries are still lower than before the structural change. Each sample household is asked to keep a diary of its expenditures over a two-week period. Before the change, the survey interview was conducted face to face in the respondent’s home after the two-week expenditure period had finished, so the interviewer collected the expenditure diaries in person. After the change, respondents give approval for participation and answer the survey interview before the expenditure period, and on completion of the expenditure period they are asked to mail the diaries back. However, now that collection is no longer done in person, fewer than half of the participants return completed expenditure diaries. With the aim of increasing respondents’ sense of obligation and commitment to completing the expenditure period, Statistics Iceland is carrying out an experiment. From April until August 2016, respondents who have completed the survey interview will be randomly split into three groups:
Group 1: A control group, which receives the expenditure diaries with no change in procedure.
Group 2: Receives the expenditure diaries accompanied by a thank-you note and a lottery ticket, approximate value €5.
Group 3: Receives the expenditure diaries accompanied by a thank-you note and a lottery ticket, approximate value €10.
 
22. Anton Johansson 
Noncontacts in the Swedish Labour Force Survey – impact on survey quality, costs and survey operations
As is well known, increasing nonresponse rates in general, and increasing noncontact rates in particular, have a large impact on both survey quality and costs. In addition, a decreasing probability of contact puts a lot of strain on survey operations – especially in interviewer-administered surveys. For example, if the probability of contact at the first contact attempt decreases, there will be more cases to follow up at the second call attempt, and so on. As the probability of contact decreases, the workload for survey operations increases. For each contact attempt one can specify a cost and see how many contact attempts the data collection budget can afford. The total number of contact attempts then has to be transformed into a data collection strategy in which the contact attempts are “assigned” to each sample unit. For example, one could divide the resources (contact attempts) equally between all sample units, that is, treat all cases equally. Another strategy is to allocate the contact attempts differently between subgroups in the sample, i.e. more contact attempts to hard-to-reach groups and fewer to easy-to-reach groups. The aim of the contact strategy might, for example, be to reduce nonresponse bias or to reduce costs; there is likely a tradeoff between these aims. The paper will describe ongoing work within the Swedish Labour Force Survey (LFS) to find a more cost-effective data collection strategy within the current survey budget as noncontact rates increase. Since the LFS is a longitudinal survey, our main focus is on strategies for sample units that have been noncontacts in several waves of data collection. The paper will give some preliminary results and topics for discussion.
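The budgeting logic described above can be illustrated with a small sketch (all figures invented):

    budget = 50_000.0          # data collection budget
    cost_per_attempt = 2.5     # assumed average cost of one call attempt
    n_sample = 8_000

    total_attempts = int(budget / cost_per_attempt)   # 20,000 attempts affordable
    equal_allocation = total_attempts / n_sample      # 2.5 attempts per case

    # Differential strategy: cap the easy-to-reach group at 2 attempts and
    # give the remainder to the hard-to-reach group.
    n_hard, n_easy = 3_000, 5_000
    attempts_hard = (total_attempts - 2 * n_easy) / n_hard   # ~3.3 attempts
    print(equal_allocation, attempts_hard)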
 
5. Michael Blohm, Achim Koch 
Regularities in changes of sample composition during the data collection period. The case of the German General Social Survey (ALLBUS)
 
Little is known about typical patterns or systematic changes in sample composition and bias during the data collection period. Various features of the study implementation will influence these patterns, for example the sampling procedure, the way interviewers are deployed, or characteristics of the target persons. In contrast to panel studies, cross-sectional surveys like the German General Social Survey generally have less information available to estimate these biases during the data collection period. Although frame or auxiliary information is available, it is mostly not suitable, since it often has little explanatory power or delivers reliable results only once many cases have been realized. As a result, possible measures for counteracting biases can affect only a few cases at the end of the field period. Another way to improve the availability of bias information is to validate a survey against an external criterion like the (micro-)census. If there are typical patterns of sample composition and bias, this can help to estimate early in the field period whether a sample bias is to be expected at the end of fieldwork, making it possible to counteract potential biases early during the field time.
In this presentation we discuss to what extent various ALLBUS variables deviate from an external criterion over the data collection period, for several ALLBUS studies over the years. We examine whether it is possible – at least in the ALLBUS surveys – to find regularities in changes of the sample composition. As indicators of sample quality, the deviations from the German Microcensus are calculated and compared using different indicators, such as the index of dissimilarity and, for the multivariate case, either population-based R-indicators or an adjusted index of dissimilarity.
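For a single covariate, the index of dissimilarity against the Microcensus benchmark is straightforward to compute; a sketch with invented figures:

    import numpy as np

    def dissimilarity_index(sample_counts, benchmark_shares):
        p = np.asarray(sample_counts, dtype=float)
        p /= p.sum()                      # realized sample proportions
        q = np.asarray(benchmark_shares, dtype=float)
        q /= q.sum()                      # Microcensus proportions
        # Share of the sample that would have to change categories
        # to match the benchmark distribution.
        return 0.5 * np.abs(p - q).sum()

    # e.g. four age groups, purely illustrative numbers
    print(dissimilarity_index([120, 300, 410, 170], [0.15, 0.28, 0.37, 0.20]))

Tracking this index week by week during fieldwork gives the kind of within-field-period trajectory the presentation examines.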
 
15. Celine Wuyts, Geert Loosveldt
Workload-related interviewer characteristics and unit nonresponse in ESS Belgium
 
In general, interviewer burden is thought to negatively affect interviewer performance when it comes to contacting sample units and persuading them to participate. To assess the impact of ‘interviewer burden’ on this part of the interviewer task (contacting and persuading sample units), however, the definition and measurement of interviewer burden is crucial. In this context, interviewer burden may arise from different jobs and other responsibilities, and may be evaluated in a subjective as well as an objective sense. In quite a few cases, interviewers are involved in multiple survey projects simultaneously, possibly for different fieldwork organizations. As a consequence, the total interviewer workload is broader than the number of cases assigned for one particular survey project. Many interviewers also combine the job of survey interviewer with a full-time or part-time job elsewhere. In addition to interviewers having different levels of (survey and other) workload, the burden experienced is not necessarily a deterministic function of objective workload measures. Some interviewers may be able to handle large workloads efficiently without perceiving them as particularly burdensome, while others may not, and this perceived burden may also affect interviewers’ performance. In this paper we will therefore evaluate the impact of different aspects and operationalisations of interviewer burden (total survey workload, other job workload, and subjective workload) on unit nonresponse. We will also consider the effects of changes in the project-specific workload during the fieldwork. Whereas data on project-specific interviewer workload are directly available from fieldwork monitoring, measures of other aspects of interviewer workload in the broader sense are often unknown to researchers. We will derive some new measures of interviewer burden based on data collected in an interviewer questionnaire administered after the ESS round 7 fieldwork in Belgium, and relate them to interviewers’ survey response outcomes (nonresponse, noncontact and refusal rates). The short paper will present some initial results.
 
17 Susanne Helmschrott 
The effects of weighting on secondary outcomes in PIAAC Germany
 
If a survey is subject to nonresponse, not all survey estimates are equally affected by nonresponse bias. Indeed, the magnitude of bias depends on the strength of the covariance between the survey variable and response behaviour. When calculating weights to adjust for nonresponse, survey practitioners thus have to choose (a) “central” study outcome(s) for which the reduction of bias is of utmost importance. However, surveys in the social sciences often cover a multitude of topics, and “secondary” outcomes of a survey might not benefit from the weighting adjustment to the same extent. As a result, researchers investigating these outcomes risk reporting biased estimates, even when using the weights published in the data sets. PIAAC, the Programme for the International Assessment of Adult Competencies, is an excellent example for exploring the effects of weighting on secondary outcomes. Its core part is the assessment of adults’ proficiency in literacy, numeracy and problem solving. Additionally, the respondents were extensively interviewed about their education, aspects of their current and past jobs, and social outcomes like health, attitudes and volunteering. These interviews offer rich research opportunities that do not necessarily need to be linked to proficiency outcomes. However, the weights have been designed to alleviate bias in the proficiency scores. Hence, research on secondary outcomes potentially yields biased results even when the official PIAAC weights are applied.
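The covariance claim above is commonly formalized (a standard approximation from the random response-propensity model, not taken from this abstract) as

$$\operatorname{bias}(\bar{y}_r) \approx \frac{\operatorname{Cov}(\rho, y)}{\bar{\rho}},$$

where y is the survey variable, ρ the response propensity, and ρ̄ the mean propensity. Weighting tuned to central outcomes reduces this covariance for those outcomes, but not necessarily for secondary outcomes whose relationship to response behaviour differs.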
The paper presented is part of my dissertation on the effects of weighting on central and secondary study outcomes of the German PIAAC data collected in the first round of PIAAC in 2011/2012. By evaluating changes in relative bias and t-tests across estimates before and after weighting, I show that bias is indeed distinctly more reduced in central than in secondary study outcomes when the PIAAC weights are applied.
 

35. José Gaudet
Strategies to handle non-response in the Canadian Community Health Survey: Past, Present and Future
The Canadian Community Health Survey (CCHS) is a cross-sectional survey that collects information related to health status, health care utilization and health determinants for the Canadian population. It relies upon a large sample of respondents and is designed to provide reliable estimates at a regional level where the health system is managed.
As is the case for household surveys in most national statistical organizations, the response rates for the CCHS have been declining over the last few years. In order to alleviate this problem, many initiatives have recently been put forward to help get more respondents to complete the survey. These initiatives include work on the frame and collection methodologies as part of a survey redesign in 2015. We will present some comparative results between the pre- and post-redesign strategies from the perspective of non-response. Also, in an effort to reduce costs and improve the respondent’s survey experience, Statistics Canada is progressively introducing electronic questionnaires (EQ). Some tests of EQ use have been conducted in the past, and some surveys, like the Canadian Labour Force Survey, have started using it as part of their collection strategy. Another health survey, on children and youth, will be piloting the EQ this fall, and the results will be useful in planning the multimode collection strategy for the CCHS. The current plans involve implementing the EQ as part of the multimode collection with the next redesign in 2021. We will discuss non-response related considerations linked to the introduction of an EQ.
 
18 Seppo Laaksonen
Anticipation of unit nonresponse or not in the sampling design
 
Various sampling designs are applied in household surveys. There can be one or more stages; if two or more stages are used, cluster sampling is in most cases part of the design. It is possible to design the sampling for the whole target population, or to stratify. This paper concentrates on the latter case, that is, on sample allocation in the case of non-ignorable unit missingness. My question is: is it advisable to include anticipated response rates in the sample allocation or not? This question is considered in the European Social Survey (ESS), which since round 6 has recommended proportional allocation of the gross sample. Any other allocation necessarily requires explicit strata. Many countries apply the design without explicit strata, and even the gross sample weights vary considerably; hence proportional allocation might be motivated. This, however, is no good reason to require allocation without anticipated response rates. If a disproportional allocation of the gross sample were successfully applied, the weights of the respondents could be more equal than those of the gross sample. Typically, the anticipation is used so that the gross sampling fraction of urban strata is higher than that of rural strata, since response rates are substantially lower in urban areas. Other auxiliary variables such as gender and age can also be used. My aim is thus to find support for this type of anticipation.
However, it is not clear how to use it in practice, nor how radically or conservatively it should be applied. I am not in favor of post-anticipation via reserve samples, as used in some countries. What are the best practices applied in your countries? The paper includes statistics on the recent ESS country weights, including the so-called post-stratified weights.
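A sketch of the anticipation idea with two strata (figures invented): the gross allocation in each stratum is inflated by its anticipated response rate so that the expected net sample is proportional to the population.

    import numpy as np

    N_h = np.array([600_000, 400_000])   # stratum populations: urban, rural
    rr_h = np.array([0.45, 0.65])        # anticipated response rates
    n_net_target = 2_000                 # desired total net sample size

    net_h = n_net_target * N_h / N_h.sum()   # proportional net allocation
    gross_h = net_h / rr_h                   # oversample the urban stratum
    print(np.round(gross_h))                 # ~[2667, 1231] gross cases

If the anticipated rates are realized, the respondent weights come out (nearly) equal; if anticipation is ignored, the net sample under-represents the low-response urban stratum.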
 
31 Sabine Friedel
Item nonresponse and interviewer effects on asset questions in SHARE
This paper deals with item nonresponse on asset questions in a cross-national face-to-face panel survey, the Survey of Health, Ageing and Retirement in Europe (SHARE). The reason for studying these items is that they are particularly badly affected by nonresponse: the average item nonresponse rate of the asset questions is 6%, the highest of all question topics. Since missing data are often imputed or subject to listwise deletion, one must understand the mechanism behind them in order to handle them. To understand how item nonresponse on asset questions arises, I focus on features of the interviewers, who, besides the respondent, play a crucial role in the data-generating process. Furthermore, interviewers are more under the control of the researcher than survey participants are. It is important to note that missing data do not necessarily mean bad data: if the survey participant genuinely does not know the answer, the missing value is true and correct. However, if the missing answer is related to some other aspect of data collection, such as interviewer characteristics, one must care about it, and interventions concerning the interviewer might help to avoid missing data. My outcome of interest is the item nonresponse rate on asset questions in the fifth SHARE wave in Germany. I use a multilevel approach to precisely separate respondent effects from interviewer effects. At the second level, interviewers’ expectations and their own hypothetical income-reporting behavior are highlighted, because self-prophecy and own reporting behavior potentially influence the outcome. Detailed information about the interviewers comes from the SHARE Interviewer Survey. First results show the importance of the interviewer, as nonresponse rates differ across interviewers.
 
6. Koen Beullens, Geert Loosveldt
Reducing the differences between respondents and nonrespondents or increasing the response rate? What is best in order to reduce nonresponse bias?
 
We will use data from ESS round 7 to illustrate whether additional fieldwork attempts lead to a reduction of the contrast between respondents and nonrespondents, in addition to a response rate increase. The many countries participating in the ESS appear to follow various strategies. Some countries increase their response rate without considerably reducing the contrast between respondents and nonrespondents (measured using interviewer observations and register variables), and this does not efficiently result in a (strong) reduction of nonresponse bias. In some other countries, the reduction of the contrast is stronger, and therefore the bias is reduced.
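The trade-off rests on the standard deterministic decomposition (not stated in the abstract itself)

$$\bar{y}_r - \bar{y} = \frac{m}{n}\,(\bar{y}_r - \bar{y}_m),$$

where n is the sample size, m the number of nonrespondents, and ȳ_r, ȳ_m the respondent and nonrespondent means: the bias of the respondent mean is the nonresponse rate times the respondent–nonrespondent contrast, so it can be reduced either by raising the response rate or by shrinking the contrast.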
 
3. Nancy Bates, Matthew Virgile
Title? 
During Census 2000, the US Census Bureau began using paid advertising as a means to raise awareness and encourage participation. In 2015, the U.S. Census Bureau conducted a test of digital advertising and other communications techniques as part of a Census Test. The test experimented with spending levels and types of digital advertising as part of a traditional communication campaign. This test marked the first time the agency used communications and paid advertising not only to raise awareness of data collection efforts, but also to drive direct response through the online data collection instrument. The test was conducted in 17 counties in Georgia and three in South Carolina. We divided the area into five panels by assigning each of the residential ZIP codes to a single panel. Residents of all panels were exposed to traditional advertising (e.g. TV, radio, billboards), partnership efforts, and organic social media outreach. Digital advertising was withheld in one panel, which served as a control. Spending level and targeting strategy varied among the other four panels. In addition to the digital advertising experiment, the test also included a three-panel mail strategy experiment among 90,000 households. This paper focuses on the effects of digital advertising among households located in Census tracts characterized by three hard-to-survey populations: (1) areas with low penetration of internet access, (2) areas containing young people who rent and move frequently, and (3) areas with many female-headed households with modest income and low education.
 
19 Carina Cornesse, Michael Bosnjak
The association between survey design and representativeness: A meta-analysis of common approaches
 
In light of decreasing response rates, new technological as well as survey statistical developments, and the rise of commercial nonprobabilistic online panels, the social scientific research community faces new challenges when deciding how to collect survey data and when to trust the results. In an ongoing project on modeling the association between survey design features and representativeness, we identify, synthesize, and analyze the existing literature on survey representativeness using meta-analytic techniques. Our data set consists of a comprehensive collection of more than 100 studies published worldwide. The research questions we address include, among others, (1) whether probabilistic surveys are more representative than nonprobabilistic ones, (2) which survey mode leads to the highest degrees of representativeness, and (3) how the sample size influences different operationalizations of representativeness. Additionally, we test (4) the consistency of results across response rates, benchmark comparisons, and R-Indicators as the most commonly used measures of survey representativeness.
Some open questions have emerged which we would like to discuss with our colleagues participating in the International Workshop on Household Survey Nonresponse. These open questions relate to the theoretical and applied background of our hypothesized relationships between survey design and representativeness, and also involve the relevance of our expected findings for survey design decisions and survey operations.
 
26. Annelies Blom
Comparing interviewer characteristics and explaining interviewer effects on nonresponse across four German face-to-face data collections
 
Interviewer effects on nonresponse processes are found across all types of interviewer-mediated surveys, across disciplines and countries. Collaborative research across several data collection projects, comparing the magnitude of interviewer effects across different parts of the nonresponse process (in particular the contact and cooperation processes), has been difficult, often because data protection regulations restrict access to contact protocols, frame data and other auxiliary data. Furthermore, while many studies on interviewer effects on nonresponse have detected clustering effects attributable to the interviewers, few have been able to identify the causes of these effects, due to a lack of data on the interviewers. Our analyses investigate interviewer effects on nonresponse across four large-scale face-to-face data collections in Germany, all conducted around the same time: the Programme for the International Assessment of Adult Competencies (PIAAC) in 2011, the face-to-face recruitment interviews for the German Internet Panel (GIP) in 2012 and 2014, and the refreshment sample of the Survey of Health, Ageing and Retirement in Europe (SHARE) in 2013. The four data collections were all conducted by the same survey organization, which means that they largely employed the same interviewers. We implemented the same interviewer questionnaire just before fielding each survey and collected the same auxiliary data, enabling comparative analyses of interviewer effects on nonresponse across surveys. First results show that, despite the shared interviewer corps, the nonresponse processes differed across data collections. In particular, we found differences in the relative impact of interviewers on the contact as compared to the cooperation processes. These effects can only partially be explained by what we know about the interviewers.
 
13. Gerrit Müller, Juha Alho, Verena Pflieger, Ulrich Rendtel
A Statistical Framework for the Fade-Away of an Initial Nonresponse Bias in Panel Surveys
 
Under certain conditions, a nonresponse bias at the start of a longitudinal survey “fades away” at later points in time. This effect is discussed in a general Markovian framework, and the theory is applied to a register-based panel survey, the German Panel Study on Labor Market and Social Security (PASS). A general contraction theorem for weakly ergodic non-homogeneous Markov chains is presented. It shows that two chains with different starting distributions will eventually have the same state distribution, albeit that the common distribution need not be fixed. In the application, the two starting distributions come from the gross sample and the net sample of the first wave of a panel survey. Two conditions have to be met to ensure the weak convergence of the two Markov chains: the transition probabilities must be equal for respondents and nonrespondents, and the attrition in later panel waves must not depend on the state of the process. For homogeneous chains an asymptotic distribution exists and the rate of convergence can be established. The empirical results from the PASS for transitions in and out of social benefit payments show that, for practical purposes, these conditions are fulfilled. Although there is some evidence that attrition is not completely independent of the state, this has only a negligible impact on the contraction property. Furthermore, the speed of convergence is fast, and the speed approximation derived from the assumption of homogeneous transitions gives a realistic impression of the speed of the fade-away effect.
These findings are in line with previous results from European surveys linked with register information (ECHP, EU-SILC) on transitions between income quintiles. Our results demonstrate the potential of such nonresponse analyses. Consequences for the detection of nonresponse bias, the calibration of panel surveys, and alternative population estimates are discussed.
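A toy simulation of the contraction argument, under the two stated conditions and with a homogeneous chain for simplicity (transition matrix and starting distributions invented):

    import numpy as np

    P = np.array([[0.9, 0.1],       # e.g. transitions out of / into benefit receipt
                  [0.3, 0.7]])
    gross = np.array([0.20, 0.80])  # state distribution in the gross sample
    net = np.array([0.35, 0.65])    # biased distribution among wave-1 respondents

    for wave in range(8):
        print(wave, round(np.abs(gross - net).sum() / 2, 4))  # total variation distance
        gross, net = gross @ P, net @ P

Here the distance shrinks geometrically, by a factor of 0.6 per wave (the second eigenvalue of P), illustrating how an initial nonresponse bias in the state distribution fades away.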
 
38 François Laflamme
Framework to assess the "maximum" expected response rate for different designs and field conditions
 
Statistics Canada, like many national statistical organizations, has observed a downward trend in response rates, particularly since 2011. Changes in the external environment (e.g., the increased number of cellular-phone-only households, increased use of telephone caller display, and new technologies) as well as internal structural changes (e.g., the introduction of the Electronic Questionnaire (EQ) collection mode and the introduction of the new Common Household Frame) have led to a sustained decrease in response rates for household surveys. To address this complex problem, the agency is currently looking at different initiatives and options to improve data collection processes, and is formulating strategies to improve response rates. In the meantime, recent surveys have provided us with a better understanding of the external and internal factors that might impact response rates before and during data collection. Data collection organizations have little or no control over some of these factors, while over others they have some control, especially during collection. Thus, before collection starts, there is a ‘maximum/optimum’ attainable response rate that the data collection unit within an organization can achieve, given all the factors over which it has little or no control. The main objective for each data collection unit is to draw near to this “maximum response rate” using best practices before and during collection. In theory, this “maximum response rate” should be very close to the target response rate. This document begins with an overview of the categories of factors impacting response, categories based on the “level of control” the data collection organization has over each factor. The next two sections describe in more detail the factors impacting response rates before collection starts and during collection. The last section discusses the relationship between the maximum and observed response rates, with a view to developing an indicator to assess the performance of the field data collection organization.
 
23. Linda Mohay
Early results of the PIAAC field test
 
Hungary is participating in Round 3 of PIAAC (Programme for the International Assessment of Adult Competencies), and the field test started this spring. This survey is a good opportunity for our institution to test some new methodologies, and the purpose of the paper is to share some of the early results. We developed new, detailed disposition codes, and we would like to test how interviewers understand and use them in the field, and how we can apply them to detect over-coverage or other errors in the frame. There are interim disposition codes as well, so we can follow up the impact of fieldwork efforts.
The sample was selected from a population register, so we have some basic information (age, gender) about the nonrespondents. In addition, there is a non-interview report form, which helps us to gather more information about this subpopulation based on interviewers’ observations. We will analyze nonresponse and sample composition during (and after) fieldwork, and we will test the effect of using a ‘Respondent Refusal Conversion Letter’ and reserve sample(s) to reduce or compensate for nonresponse. It would be very helpful for us to share our experiences, gather feedback, and use it in designing the main study and improving our other surveys.
 
37 Geert Loosveldt, Anina Vercruyssen
Validating Interviewer-observed paradata with auxiliary data from Google Street View
 
In a face-to-face household survey, information about the sample units’ type of house and neighbourhood characteristics is considered relevant for assessing non-response bias. Contact forms can be used to collect this information: one can ask interviewers to register the house type, the presence of impediments (e.g. an intercom), the physical state of the house, and the presence of vandalism/graffiti and litter/rubbish. One can consider this information interviewer-observed paradata. It is an additional task for the interviewers during the data collection process, and the information is available for both respondents and non-respondents. The same kind of house and neighbourhood characteristics can be obtained by using Google Street View (GSV) as an external source of information. These variables can be coded for all sample units prior to the data collection process, and can therefore be considered auxiliary data. In this paper we examine whether Google Street View based auxiliary variables can be used to validate interviewer observations.
To explore the usefulness of GSV data, a 20% random sample (N=640) was taken from the ESS Round 7 gross sample in Belgium. This subsample is stratified by final disposition code, with the same distribution as the gross sample. It contains interviewer observations from 126 different interviewers, each assigned to between 1 and 8 sample units. For coding the GSV data, an anonymized address list with no other variables was used, to enable blind coding. The coding was done by a single coder, who first registered how the exact address was found in GSV. Once the house was identified, the same instructions and coding scheme were followed as those given to the interviewers to describe the house and neighbourhood. First, we will discuss some pitfalls of coding GSV information (e.g. availability, visibility, and the fact that GSV imagery can be censored and/or outdated). Next, we will assess the external and predictive validity of the interviewer-observed paradata. The external validity is assessed by evaluating the concordance between the interviewer-observed paradata and the GSV auxiliary data. We will also compare which data work best to predict non-response (predictive validity). The results will show that the use of GSV auxiliary data is not without problems, and that the added value of these data should not be overestimated.
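One way such concordance could be quantified – the abstract does not name its measure – is Cohen’s kappa per observed characteristic; a sketch with hypothetical column names:

    import pandas as pd
    from sklearn.metrics import cohen_kappa_score

    def concordance(df, characteristics):
        # df: one row per sample unit with paired codings, e.g. columns
        # 'litter_interviewer' and 'litter_gsv' (hypothetical names).
        return pd.Series({
            c: cohen_kappa_score(df[c + "_interviewer"], df[c + "_gsv"])
            for c in characteristics
        })

    # e.g. concordance(subsample, ["house_type", "vandalism", "litter"])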
 
20 Nino Mushkudiani, Lisette Bruin, Barry Schouten
A Bayesian analysis of survey design parameters
 
In the design of surveys, a number of parameters such as contact propensities, participation propensities and costs per sample unit play a decisive role. In ongoing surveys, these survey design parameters are usually estimated from previous experience and updated gradually with new experience. In new surveys, they are estimated from expert opinion and experience with similar surveys. Although survey institutes have considerable expertise and experience, the postulation, estimation and updating of survey design parameters is rarely done in a systematic way.
This paper presents a Bayesian framework for including and updating prior knowledge and expert opinion about these parameters. The framework is set in the context of adaptive survey designs, in which different population units may receive different treatments given quality and cost objectives. For this type of survey, the accuracy of the design parameters is even more crucial to effective design decisions. The framework allows for a Bayesian analysis of the performance of a survey during data collection and between waves of a survey. We demonstrate the Bayesian analysis using real examples.
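As an illustration of the kind of updating the framework systematizes – a conjugate Beta–Binomial sketch for a single contact propensity, not the authors’ actual model (all figures invented):

    from scipy import stats

    a, b = 30.0, 20.0              # prior: mean 0.60, worth ~50 'prior cases'
    contacts, attempts = 44, 80    # fieldwork outcomes observed so far

    posterior = stats.beta(a + contacts, b + (attempts - contacts))
    print(posterior.mean())            # updated contact propensity, ~0.57
    print(posterior.interval(0.95))    # 95% credible interval

    # Feeding the posterior back in as the next wave's prior yields the
    # gradual, systematic updating of design parameters described above.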
 
27. Alexandre Pollien
This paper focuses on the strategies of interviewers. It aims to analyze the effect of contact strategies on response. Since 2002, we have carried out face-to-face surveys – the ESS and MOSAiCH (ISSP) – with the same survey agency, i.e. roughly the same staff of interviewers. A questionnaire about interviewer strategies was administered periodically, and nearly two thirds of all interviewers answered it. The analysis focuses on factors that contribute to the cooperation of the selected target person. One of the most important results is that the pleasure conveyed by the interviewer’s own feelings seems to have no effect. Mentioning that the interview will be “short and nice” appears even counter-productive. It is as if the interviewer should not presume to know what makes the target person want to participate. The leverage over the decision appears to be based more on arguments – a “work of persuasion” – than on a self-evident legitimacy stemming from intrinsic or institutional characteristics of the survey. Our results show that it is the interviewers most aware of the difficulty of obtaining cooperation who achieve the best response rates. The work of conveying the need to participate transforms a questionnaire “without qualities” into the expectation of interesting questions. The best strategies thus need to be adaptive. On the other hand, anticipating the wishes and interests of the respondent is counter-productive: prejudging feelings of security, privacy or importance leads to rejection. Trust is not automatically acquired through the procedure itself; it must be built in the concrete interaction between the interviewer and the target person. In summary, trust in the excellence of the survey is not by itself sufficient to make it perceived as worthwhile. The interviewer must perform a work of persuasion: he or she meets a target person who a priori does not wish to participate, and must arouse confidence and interest.

36. Bengt Lagerstrom, Aina Holmøy, Sverre Amdam, Cato Hernes Jensen, Martin Isungset, Lisa Andersen and Arnhild Torsteinsen
Strategies to get response online
We present two cases to illustrate the effect of strategies designed to obtain response within the first week of data collection, implemented with digital messaging for web surveys. There are two challenges: getting the organization to mobilize for digital fieldwork, and picking a strategy that provides the necessary or adequate response. Organizing digital fieldwork is a practical issue, but it is fundamental to any result and can facilitate or damage the outcome, as I will demonstrate. Necessary response is the response sought for the survey within the existing time limit. Digital fieldwork is the whole operation of setting up a web survey with digital messaging, including an email answering service during data collection. The main idea is that there is limited time to collect data, and within that brief time the organization must operate without issues. By creating strategies for SMS and email, we can present results from message strategies that vary the intensity of contact with respondents. The baseline is parallel messaging with SMS and email. This baseline was applied in both projects for different populations: tenants in the Rent Survey and students in Eurostudent. The main results are promising if one accepts the main idea that there is sometimes limited time to finish a survey. Key words: nonresponse bias, data collection strategy, cost efficiency

2 Bengt Lagerstrom, Aina Holmøy, Sverre Amdam, Cato Hernes Jensen, Martin Isungset, Lisa Andersen and Arnhild Torsteinsen
Transition from postal advance letters to digital communication – impact on response behavior and costs 
Since 1997, Statistics Norway has conducted voter surveys among the immigrant population after the regional and national elections in Norway. Previously, these studies were primarily conducted as telephone surveys (CATI). After the regional elections in Norway in September 2015, the survey was for the first time conducted as a web survey, with few exceptions. In March 2015, Statistics Norway began using a register of all citizens’ digital contact information (KRR), to be used strictly by public authorities in contact with citizens.
We wanted more information on how the transition from a traditional advance letter to digital information and contact would affect response rates, and, if there was any difference, whether the cost was worth it. Statistics Norway’s interviewers have often pointed out the importance of our advance letter. The population of 18,207 was divided into five random groups, and each group was given a different treatment with a combination of traditional letters, digital letters and SMS messages, receiving the digital information at different times of the day.
 
30 Wim Burgers
E-learning for starting interviewers: effects on efficiency and interviewer performance

Previously, aspiring Statistics Netherlands interviewers were trained in an extensive programme lasting several months, delivered centrally by one of our trainers. The training covers obtaining cooperation, fieldwork strategy (following the fieldwork strategy rules), and specific interviewer skills such as probing. For the past two years, potential interviewers have received blended training: a mixture of e-learning, classroom training and fieldwork. This blended training covers the same content and skills training. The course includes tests that potential interviewers need to complete successfully in order to be admitted as interviewers.
The presentation will show examples of the module. I will present the results of an analysis that compares the traditional training with the e-learning training on response, contact and cooperation rates, on fieldwork compliance and drop-out of new interviewers, and, finally, on training efficiency.

33. Matt Jonas (NatCen)
Can interviewers target incentives effectively?
Declining response rates for household surveys have led some providers to increase the value of participant incentives. Increasing any incentive, either conditional or unconditional, that is available to all panel members has large implications for costs. In the context of falling survey budgets, NatCen is exploring ways to deploy fixed incentive budgets more efficiently.
One avenue identified for increasing the efficacy of a set incentive budget is targeting: offering some people more than others. A number of approaches have been proposed to achieve this: basing incentive values on previous-wave data in longitudinal surveys, using geo-coded third-party data, and, finally, allowing interviewers to target incentives.
Incentives offered at interviewer discretion have been used on a small number of NatCen surveys, with some degree of success at reissue – in particular on Understanding Society. However, while boosting response rates, this has little benefit in terms of costs. The principal rewards would be obtained by implementing this type of incentive during first-issue fieldwork.
Over the past four years, NatCen Social Research has run a series of iterative trials on the British Social Attitudes survey to explore delivery options and evaluate the discretionary incentives approach. These trials include small-scale non-experimental trials and full split-sample experiments. In summer 2016, NatCen will run a full split-sample experiment to test the efficacy of our chosen approach.
This presentation will discuss the results from our various trials over the past four years, as well as preliminary results from the 2016 experiment. It will look in detail at the impact on key outcomes and costs, as well as considering ethical issues.