Bias in Interviews

From Sustainability Methods
Revision as of 06:18, 23 September 2021 by Christopher Franz

In short: This entry revolves around biases in Interview preparation and conduct. For more general insights on biases, please refer to the entry on Bias and Critical Thinking. For more on Interview methodology in general, please visit the Interviews page.


A bias is “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment” (Cambridge Dictionary). You might know this already, not least from the entry on Bias and Critical Thinking. Researchers are humans, and humans are always biased to some extent - therefore, researchers are biased, whether they are aware of it or not. This understanding is rooted in our assumption that epistemological knowledge is subjective. It is therefore better to be aware of one's biases in order to be able to counteract them in one's research. So, in this entry, let us focus on the biases that influence research revolving around any form of Interviews. We will have a look at different biases, and at the methods and situations for which they are especially relevant. For biases influencing the analysis of Interviews - which is the next step - please refer to Bias in Content Analysis and Bias in Statistics.


Biases in Interview Preparation, Literature Research, and Sampling

In preparation for any form of Interview, you may conduct a systematic literature review or some other form of literature-based theoretical preparation to be able to construct and conduct the Interviews. To this end, you need to strike a compromise between broader literature, which helps you gain validity and follow common scientific norms when designing your study, and more focused literature, which provides sufficient specificity for your interview questions. Here, a Reporting / Publication Bias will influence your literature search, which refers to the issue that journals and researchers often only report specific types of information, and leave out others. One of the main problems here is that most research is biased towards positive or dramatic results, hence you will find few papers describing in detail on how many levels they struggled or failed. In addition, journals often only publish what fits into their narrative, making editors and journals the gatekeepers of science. This is not your fault - but you should be aware of it, factor it into your theoretical assumptions, and ideally try not to contribute to these biases yourself.

Further, there is often a Full-Text-On-Net Bias, which emerges from the limited access researchers may have to scientific journals and the fact that they thus often favor open access journals in their references. We are quite sure to have this bias here on this Wiki, and you will also struggle with it. Open Access Journals are not necessarily worse than the ones you do not have access to - but by relying on them only, you are missing out on a large share of available research, and you should at least try to get hold of papers that are not accessible with one click.

Then, there are biases which are ultimately rooted in you as a researcher. There is the Academic Bias, which highlights that researchers often let their beliefs and world views shape their research. Think about it: which topic are you investigating? Which assumptions are you making, which hypotheses are you stating? Why do you do this research - is it purely because there is a need for it, or do you maybe have a personal opinion or prejudice which you let guide your research design? We are all clouded by our subjective perspectives, but you should question your research approach time and again to unveil such biases. These kinds of biases are especially relevant for rather deductive Interview approaches, such as Surveys and Semi-structured Interviews.

Further, you might commit a Congruence Bias if you only test your hypothesis directly but not possible alternative hypotheses. There may be other explanations for a specific phenomenon, and by limiting yourself to one perspective, you are limiting your research results and the discourse around a given topic. A similar bias is the Confirmation Bias, where researchers confirm their expectations again and again by building on the same kind of design, analysis or theories. You are not creating new or even novel knowledge by doing the same kind of research over and over. To find new solutions, you should broaden your theoretical and methodological perspective. These biases are also mostly relevant for deductive designs.

In the creation of a sample for any kind of Interview, Sampling / Selection Bias is one of the most common problems. We might only sample data that we are aware of, because we are aware of it, thereby ignoring other data points that lack our recognition. In other words, we are anchoring our very design and sampling in our previous knowledge. This, again, will likely not create new knowledge. We might also choose an incorrect survey mode (digital, face-to-face) for the intended population, or simply address the wrong population overall. Examples are the exclusion of people who only have limited Internet access when conducting an online survey, or asking teachers instead of their students about an issue that students can provide more accurate information on. Biased samples are “unrepresentative of the intended population and hurt generalizability claims about inferences drawn” from it (Bhattacherjee, 2012, p. 81).
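To make the effect concrete, here is a minimal sketch with entirely hypothetical numbers: an online-only survey over-represents heavy internet users, so the sample mean drifts away from the population mean, while a simple random sample of the same size does not.

```python
import random

random.seed(42)

# Hypothetical population of 1,000 people; the attribute of interest is
# daily internet use (hours). One subgroup uses the internet much more.
population = [random.gauss(5, 1) for _ in range(600)] + \
             [random.gauss(2, 1) for _ in range(400)]

pop_mean = sum(population) / len(population)

# Online-only survey: in this toy model, heavy internet users are far
# more likely to end up in the sample - a selection bias.
biased_sample = [hours for hours in population if hours > 3.5]
biased_mean = sum(biased_sample) / len(biased_sample)

# A simple random sample of the same size avoids this selection effect.
random_sample = random.sample(population, len(biased_sample))
random_mean = sum(random_sample) / len(random_sample)

print(f"population mean: {pop_mean:.2f}")
print(f"biased sample:   {biased_mean:.2f}")  # clearly overestimates
print(f"random sample:   {random_mean:.2f}")  # close to the population mean
```

The point is not the specific numbers but the mechanism: whenever the chance of ending up in the sample correlates with the attribute being measured, the estimate is systematically off, no matter how large the sample.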

It is difficult to create a representative sample for quantitative studies, and to know which characteristics of research subjects (e.g. individuals) are of relevance to ensure representativity. The sampling strategy can heavily influence the gathered data, with convenience sampling potentially inhibiting more diverse insights. Especially for more qualitative Interviews with smaller samples, the selection of Interviewees needs to be well-reasoned and will shape the results.

The non-response bias is relevant in Survey research, and refers to the response rate of the targeted sample, which - if too low - can negatively affect the validity as well as the generalizability of the results. Here, it can be important to investigate why the response rate is low in order to assess the impact this might have on the results, or to eliminate the causes (Bhattacherjee, 2012). Besides, the data can be weighted to account for the non-response and approximate “some known characteristics of the population” (Gideon, 2012, p. 32).
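As a toy illustration of such weighting (all numbers are hypothetical), respondents can be reweighted so that a known population characteristic - here, the share of two age groups - is matched despite unequal response rates:

```python
# Known population composition (e.g. from a census): 50% per age group.
population_share = {"under_40": 0.5, "over_40": 0.5}

# Hypothetical respondents as (group, answer) pairs: younger people
# responded far less often, so they are under-represented in the sample.
respondents = [("under_40", 4.0)] * 20 + [("over_40", 2.0)] * 80

sample_share = {
    g: sum(1 for grp, _ in respondents if grp == g) / len(respondents)
    for g in population_share
}

# Post-stratification weight = population share / sample share per group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted_mean = sum(v for _, v in respondents) / len(respondents)
weighted_mean = sum(weights[g] * v for g, v in respondents) / \
                sum(weights[g] for g, _ in respondents)

print(f"unweighted mean: {unweighted_mean:.2f}")  # → 2.40, pulled toward over-represented group
print(f"weighted mean:   {weighted_mean:.2f}")    # → 3.00, matches population composition
```

Note that weighting only corrects for characteristics you know and measure; if non-respondents differ in some unobserved way, the bias remains.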

While conducting an observation as part of your sampling, you might fall prey to the Observer Bias: you (dis)favor one or more groups for one reason or another, thereby subconsciously influencing your sample and thus your data. Then of course, there are Bribery, Favoritism, or Lobbying, which may influence your research design and your sampling strategy if you engage with them. We hope that it is more relevant for you to be aware of these when reading existing literature, and to question who wrote something under the influence of whom, and why. Lastly, there are systemic problems, which we can all only hope to counteract as much as possible, but which are deeply entrenched in our societal and academic system. These are Racism, Sexism, Classism, Lookism, and any bias that refers to us (sub)consciously assuming positive or negative traits, or just any preconceived ideas, about specific groups of people. Humans have the unfortunate tendency to create groups as constructs, and our worldview - among many other things - creates a bias in this creation of constructs. This can limit the diversity of our sample, even if we attempt stratified sampling. It is not always easy to detect these biases in your own work, but we urge you to pay attention to them, and to consider it our responsibility as citizens to educate ourselves about these biases and to reflect on our privileges. They will also resurface in the interpretation of your data, and you should consider their influence when reading publications from other researchers.

Lastly, we should mention the Dunning-Kruger Effect. You might have heard of it - it refers to individuals who overestimate their competence and abilities, and who think that they are the best at something when they are clearly not. Make no mistake - we are not referring to Interviewees here. Admittedly, there will be Interviewees who claim things that are not supported by any other data, and you should be aware of this, too, and use a second source of information where possible. More importantly, however, we are talking about you as the researcher. Do not overestimate the strength of your theoretical foundations, hypotheses or sample design. Question the validity and representativity of your sample, and be humble. Otherwise, you might confidently create results that do not bear scrutiny, and it will not make you look good.


Biases in Interview Conduct and Transcription

So you are in the Interview situation. You limited the bias in your research design and sample, and you have your survey handed out, or your Interview situation set up. What can go wrong?

First, it is important to acknowledge a potential Social Desirability Bias. This refers to the “tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner” (Bhattacherjee, 2012, p. 81) and has a negative impact on the validity of results. This kind of bias can be better managed in an interview survey than in a questionnaire survey (Bhattacherjee, 2012). Interviewees may tend to answer in a supposedly socially acceptable way, especially when the research revolves around taboo topics, disputed issues, or any other socially sensitive problems. Akin to this is the Observer-Expectancy Bias, which is the subconscious influence a researcher's expectations impose on the research. Yes, this is related to the Observer Bias mentioned above. In the Interview situation, it refers to the way that you as the researcher - and Interviewer, or Survey creator - may impact the Interview itself, e.g. by phrasing questions or reacting to certain responses in a specific way. We can also mention Framing at this point, which is not always a bias in itself, but simply revolves around how questions, data, or topics are presented by researchers. Framing a specific phenomenon in a negative way might prompt Interviewees to respond rather negatively to it, even if this was not their initial opinion. Observer-Expectancy Bias may also emerge purely from the fact that there is someone listening to the Interviewee. As a result, (s)he may answer differently from what (s)he really thinks, e.g. to impress you, or because (s)he thinks you need specific answers for your research. (S)he might also just not like you, or mistrust you, and respond accordingly. It is not always easy to prevent this. Try to be unbiased in your question design, neutral in your interviewing demeanor, and motivate the interviewee to be as honest as possible. This applies to all kinds of Interview situations.

A challenge one needs to be aware of when conducting and analyzing Focus Groups is the censorship of certain – e.g. minority or marginalized – viewpoints, which can arise from the group composition. As Parker and Tritter (2006, p. 31) note: “At the collective level, what often emerges from a focus group discussion is a number of positions or views that capture the majority of the participants’ standpoints. Focus group discussions rarely generate consensus but they do tend to create a number of views which different proportions of the group support." Further, considering the effect that group dynamics have on the viewpoints expressed by the participants is important, as the same people might answer differently in an individual interview. Depending on the focus of the study, either a Focus Group, an individual interview or a combination of both might be appropriate (Kitzinger 1994).

When survey participants do not completely and accurately remember events, personal motivations, or behaviors from the past, this is referred to as recall bias. Their difficulty in recalling the queried information can stem from their “motivation, memory, and ability to respond” (Bhattacherjee, 2012, p. 82).

The common method bias refers to spurious “covariance shared between independent and dependent variables that are measured at the same point in time […] using the same instrument” (Bhattacherjee, 2012, p. 82). Statistical tests can be used to identify this type of bias (Bhattacherjee, 2012).
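The source does not name a specific test, but one widely used (if admittedly coarse) diagnostic is Harman's single-factor test: if a single factor explains the majority of the variance across all survey items, common method variance is a plausible concern. A sketch with simulated data, where a shared "method" component (e.g. a response style induced by the instrument) is deliberately mixed into every item:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: 200 respondents, 6 items (3 per construct).
# A "method" component shared by all items mimics common method variance.
n = 200
construct_a = rng.normal(size=n)
construct_b = rng.normal(size=n)
method = rng.normal(size=n)  # shared response style / instrument effect

items = np.column_stack(
    [construct_a + 0.8 * method + rng.normal(scale=0.5, size=n) for _ in range(3)]
    + [construct_b + 0.8 * method + rng.normal(scale=0.5, size=n) for _ in range(3)]
)

# Harman's single-factor test (PCA variant): compute how much of the
# total variance the first principal component of the item correlations
# explains. A majority share suggests a common method problem.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending order
first_factor_share = eigenvalues[0] / eigenvalues.sum()

print(f"variance explained by first factor: {first_factor_share:.0%}")
```

In this simulation the first factor dominates because of the injected method component; with independent constructs and no shared method variance, the variance would spread across several factors.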

Lastly, there is a group of biases revolving around how you perceive your Interviewees. This may influence how you conduct your Interview, and how you transcribe recordings, even before analyzing them further. Attribution Bias is about systematic errors based on a flawed perception of others' or one's own behavior, for example misinterpreting an individual's behavior. We have the Halo / Horn Effect, which means that an observer's overall impression of an entity influences his/her feelings about specifics of that entity's properties. If you like an Interviewee, you may interpret their responses differently than if you didn't like them as much, and you may ask different kinds of questions as a result. There are Cultural Biases, which make you interpret and judge behavior and phenomena by standards inherent to your own culture. And again, we have Sexism, Lookism, Racism, and Classism, which may lead you to interpret specific responses in a misguided way due to preconceived ideas about how specific groups of people live or see the world. These influences mostly apply to qualitative face-to-face situations, i.e. to all forms of Open Interviews and Focus Groups.

There are certainly more aspects, as this is a complex and emerging topic. Not all biases are easy to detect, and it is not always easy to avert them. A first important step is to acknowledge how you and your cultural, institutional, or societal background may influence how you set up and conduct your research. This will help you limit their influence, and there are additional tips for conducting the different types of Interviews in the respective entries. In your critical reflection of your methodology - which you should always do - you should develop and assess knowledge about potential biases, and about how much these biases may influence or shape your research. This makes it transparent for readers, helps them evaluate your results, and improves the overall discourse around biases.


The authors of this entry are Christopher Franz and Fine Böttner.