Making a research design
In short: This entry covers the basics of creating a methodological design when planning empirical research.
Contents
- 1 No researcher is an island
- 2 Stand on the shoulders of giants
- 3 Designs are safeguards
- 4 Knowledge saturation and process understanding
- 5 Sample effort
- 6 Induction or deduction
- 7 Pre-plan the analysis
- 8 For bold topics, be conservative, yet test bold methods with established topics
- 9 Methodological designs as a critical part of reality
A methodological design is one of the three central stages of any scientific endeavor that builds on a methodological approach. While analysis and interpretation are conducted after the empirical data has been gathered, designing a study hardwires the type of knowledge that is being gathered, and thus creates a path dependency for the overall scientific output, for better or worse. It is next to impossible to condense the diversity of approaches to methodological designs into one text, yet there are some common challenges scientists face, and also mistakes that are made frequently to this day. Hence nothing written here can serve as a one-size-fits-all solution, yet it may serve as a starting point or safeguard to get most empirical research on track. Let us start with the basics.
No researcher is an island
Any given research builds on what has happened so far. In order to create a tangible and tested design, you need to read the literature within the specific branch of science. Read the highly cited papers, and the ones that match your specific context most precisely, and build a rationale from there. While many people start with textbooks, this should only be done by absolute beginners. Methodological textbooks are usually too static and constructed to do justice to the specific context of your research, yet they may be a good starting point. The examples in textbooks are, however, often rather lifeless and hard to connect to what you actually want to do. This is why you need to read papers that connect more clearly with your specific research project. If you believe that no one has focused on this specific part of knowledge, then you are probably mistaken. Other researchers may not have worked within your specific location, but they may have worked within systems that are comparable regarding the patterns and dynamics you want to investigate. Context matters. Try to find research that is comparable in the respective context, or that at least embeds your context into its approaches and results. This is a craft that takes time to master. Beginners in the craft of science usually need quite some time to read papers, mainly because everything is new and they read all of it. With time, knowledge becomes familiar and you will learn to read papers faster. However, you can only create a proper scientific design if you build on previous literature. If other researchers point out that your research is flawed because you ignored or missed previous research, then all is in vain. This brings us to the next point.
Stand on the shoulders of giants
It is perfectly normal if you do not know how to approach a complex research design. There are, however, several workarounds. The simplest one is to start with a simple research design. Not all research needs to be complex or complicated, and indeed many questions that we have towards new knowledge can be answered with simple methodological designs. Hence it can be more than enough to start with a simple and tested research design that you find well established within science. Take the example of interviews: We can probably agree that science has not yet asked enough people in order to integrate the knowledge they have. There are exciting opportunities out there to ask actors and stakeholders about their knowledge and experience.
The next workaround is obvious: Ask an expert. The most important advice to this end is to be well prepared, honor the time of the expert, and trust in their opinion. People with expertise in research have usually accumulated thousands of hours of experience, and are knowledge brokers that are quite in demand. Hence, make sure to be to the point, and also be aware that you may have to read up on what the researcher points you to. Yet asking experienced researchers only makes sense if your design is advanced enough to merit their input. If such an expert explains a robust yet simple approach to you, go for it. Such a talk may not take longer than 10 minutes, hence do not expect a two-hour meeting if the matter can be solved with a simple solution. Be trusting, and remember that they usually already had all the troubles you are facing, if not worse. This brings us to the next point.
Designs are safeguards
We create scientific designs in order to make sure that our gathering of data ultimately works and produces the knowledge we want to analyze and interpret. Hence, we create our design based on the past experience of research and researchers. We need to be realistic concerning the potential problems we may face. There are often aspects within research that are prone to errors or unforeseen circumstances, and a good research design creates a certain failsafe and guard-rails against them. A low return rate in an interview campaign or mouldy samples in a soil chemistry lab are known examples where your research faces challenges that a good design should fortify you against. Thus, a research design is not only about what can go right and how it is done, but also about what could go wrong and how you deal with unforeseen circumstances. This is nothing to obsess about, but indeed something that we should simply implement in order to be prepared and ready if all else fails. Research often means failure, and then we pick ourselves up. Yet there are also more concrete safeguards.
Knowledge saturation and process understanding
When you have enough information on a given empirical phenomenon and more sampling does not add more knowledge, one can speak of saturation. We have to be aware that from a philosophy of science standpoint, complete knowledge saturation is impossible, because nothing is permanent, and just like that, new knowledge may emerge and replace old knowledge. Yet, for a snapshot in time, within a given system, and considering that most systems do not change dramatically within one and the same day, or within the time frame of our research, saturation is a helpful concept. If, for example, you keep interviewing people yet have not gained new knowledge over the last several interviews, then you probably reached saturation. Within quantitative science saturation is easier to determine, while within the qualitative branches of science it is also a question of experience. Within a research design process, saturation is useful because it allows you to plan your maximum sample, yet of course one then also has to consider the absolute minimum that is necessary.
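The following is a minimal, hypothetical sketch of how one might track saturation across a series of coded interviews; the themes and the stopping rule are invented for illustration and are not a fixed standard.

```python
# Hypothetical sketch: tracking thematic saturation across interviews.
# Each entry lists the themes (codes) identified in one interview, in order of conduct.
coded_interviews = [
    {"land use", "water scarcity"},
    {"water scarcity", "governance"},
    {"governance", "land use", "tourism"},
    {"tourism", "water scarcity"},
    {"governance", "land use"},
    {"water scarcity"},
]

seen_themes = set()
new_per_interview = []
for themes in coded_interviews:
    new_themes = themes - seen_themes          # themes not encountered before
    new_per_interview.append(len(new_themes))
    seen_themes |= themes

print("New themes per interview:", new_per_interview)

# One simple (and debatable) stopping rule: saturation once no new themes
# have appeared in the last three consecutive interviews.
window = 3
saturated = len(new_per_interview) >= window and sum(new_per_interview[-window:]) == 0
print("Saturation reached under this rule:", saturated)
```

Such a tally cannot replace the experience mentioned above, but it makes the judgement that "no new knowledge is emerging" explicit and documentable.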
Sample effort
An important factor in any given scientific research design is the sample size in relation to the effort and resources put into the subsequent research. To this end, one has to be careful not to waste resources on sampling or analyses that do not add the knowledge we are looking for. Anticipating the resources needed for a certain research project can also simply boil down to a factor as simple as time. Will we be able to conduct qualitative interviews with thousands of people? Probably not, at least not if you do not have a massive number of interviewers at your disposal. Hence, sampling is always a fine balance between anticipated knowledge and available resources. Yet, for most research methods there is ample experience not only on how to sample, but also on how much to sample.
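For quantitative designs, this experience is often formalized in an a-priori power analysis. Below is a minimal sketch of such a calculation with statsmodels, assuming a simple two-group comparison; the effect size, significance level and desired power are placeholder values for illustration, not recommendations.

```python
# Minimal sketch: estimating the required sample size for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium standardized effect (Cohen's d)
    alpha=0.05,        # conventional significance level
    power=0.8,         # desired probability of detecting the effect
)
print(f"Roughly {n_per_group:.0f} participants per group would be needed.")
```

Whether such a calculation applies at all depends on the method; for qualitative interviews, the saturation logic above is usually the more appropriate planning tool.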
Induction or deduction
Within the deductive sciences, the experience concerning sample intensity is often rather clear, because new research always adds a piece of the puzzle to already existing knowledge. Since deductive approaches test a specific hypothesis and are usually designed to be reproducible, such research is clearly easier to plan. What sounds like a benefit is likewise the biggest criticism, at least from a philosophy of science stance. What seems robust can also be static, and instead of adding new knowledge, the same knowledge may merely be added again, or perhaps made more precise. It is beyond this text to discuss this criticism in depth, yet it underlines that a discussion of biases and limitations always has to be part of any deductive design. This is equally true for inductive research, yet at a different scale and with a different focus. Inductive designs are more about modes and conduct than about sample size and variance. An example is the setting and preparation of a qualitative interview, including ethical guidelines and clear guidelines concerning bias. Inductive research is less concerned with being reproducible, which makes documentation a key issue. Especially within qualitative research, extensive documentation is key in order to make the path from raw data to interpreted knowledge understandable. Since inductive research is less static, a clear documentation can also contain information that is not clearly relevant from the get-go, which may become especially important if other researchers re-examine inductive research later on.
Pre-plan the analysis
Yet both inductive and deductive researchers should already be aware, during the initial design of the sampling, of how the research is going to proceed into analysis; this is also where the framing of a research question becomes important. Most methodological approaches used in analysis - both quantitative and qualitative - have been used hundreds if not thousands of times. There is research that can point you to how to plan your analysis. Again, it is central to strike a balance between a critical perspective and knowledge that is well established. Mayring is a typical example of a methodological approach that has proven its worth across myriads of analyses, yet it is also notoriously underestimated concerning the experience a researcher needs. Reading up on studies applying Mayring is a good starting point, yet it does not stop there, since you should additionally read up on the major criticisms that exist concerning Mayring. Otherwise you will not be able to defend your research against known problems that could also affect your specific approach and context. Another example would be an ecological greenhouse experiment. In such a case the respective sample and manipulation design automatically translates into the respective statistical analysis. This goes so far that in quantitative deductive research, such as in much of psychology, medicine or ecology, the sample design is completely determined by the statistical analysis, and vice versa. Such methods are well established and, for the respective knowledge they produce, have stood the test of time.
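As a hypothetical illustration of how a sampling design prescribes its analysis, the sketch below sets up a small factorial greenhouse design with two crossed treatments and analyses it with a two-way ANOVA; the treatment names, replicate numbers and randomly generated biomass values are placeholders, not real data.

```python
# Hypothetical sketch: a factorial greenhouse design translating into its analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
design = pd.DataFrame(
    [(w, f, rep) for w in ("low", "high")
                 for f in ("none", "NPK")
                 for rep in range(10)],          # 10 replicate pots per combination
    columns=["watering", "fertilizer", "rep"],
)
design["biomass"] = rng.normal(loc=10, scale=2, size=len(design))  # placeholder response

# The two crossed factors with replication prescribe the model to be fitted:
model = smf.ols("biomass ~ watering * fertilizer", data=design).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The point is not the specific model, but that the number of factors, their levels and the replication were fixed at the design stage, which in turn fixes the analysis.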
For bold topics, be conservative, yet test bold methods with established topics
Conservative methods can indeed be helpful for bold topics, because bold topics are often deeply contested and not yet deeply investigated. Hence, a robust method can add credibility and robustness to the overall research design. However, some research also advances methods and develops them further. In these cases, tested topics can help to maintain an element of stability while a method is being developed or developed further. Hence the methodological design within scientific research strongly depends on the topic as well as on the underlying theories or concepts. To this end, it is clear that the interplay of these three components is a matter of deep experience, which is why it is so important to not only rely on the literature, but also build on the expertise of experienced researchers. Therefore, early career scientists may consider building on more established procedures, and it does not come as a surprise that most scientific innovations are created by emerging or established researchers. One might wonder whether bolder ideas emerge at a younger age and refinements based on deeper experience originate later in life, but this remains to be tested. Still, command of the literature is vital in order to move a specific branch of science forward, or even to evolve a new one.
Methodological designs as a critical part of reality
Knowledge of the current state of the art is a precondition for moving a specific part of science forward. Empirical reality presents itself imperfectly to us, and we will probably never fully understand it from an epistemological stance. Consequently, research adds pieces to a puzzle, yet we have to be aware that we may never see a full-blown epistemological reality as a whole. Reality is messy, imperfect, and deceives us. Consequently, a critical perspective is a vital prerequisite for any empirical researcher. This in turn demands a clear recognition of the literature, both concerning the empirical approaches that were utilized and concerning how the respective knowledge is limited in a broader sense. A critical perspective can hence range from simple and known biases to broader questions of philosophy of science. Usually, more established branches and procedures in science are more tamed when it comes to biases, while more emerging areas in science often contain new challenges. Interactions with stakeholders are a good example of research that is hard to tame in a rigid methodological design, and right now we rely heavily on this horizon evolving in order to gain the knowledge necessary for deep transformations. Our epistemological research being pieces of the puzzle also means that our research is a small contribution, and we should never forget this. Our piece of the puzzle will hardly ever be a big contribution, because after all, what would that even be? Yet sometimes, on rare occasions, when you have a piece of the puzzle in your hand, you know exactly where it fits, and it slots snugly into the empty space, connecting all the surrounding pieces. Just as one gains from practice and experience with a puzzle, one can equally gain from experience in research designs.
The author of this entry is Henrik von Wehrden.