Mixed Effect Models
| Method categorization | | |
|---|---|---|
| Quantitative | Qualitative | |
| Inductive | Deductive | |
| Individual | System | Global |
| Past | Present | Future |
In short: Mixed Effect Models contain both fixed and random effects, i.e. they analyse linear relations between variables while accounting for the variance contributed by other factors.
Background
Mixed Effect Models were a continuation of Fisher's introduction of random factors into the Analysis of Variance. Fisher saw the necessity to focus not only on what we want to know in a statistical design, but also on what information we want to minimise in terms of its impact on the results. His experiments on agricultural fields focused on taming variance within experiments through the use of replicates, yet he was strongly aware that underlying factors, such as the different agricultural fields and their unique combinations of environmental conditions, would inadvertently impact the results. He thus developed the random factor implementation, and Charles Roy Henderson took this to the next level by creating the calculations necessary for best linear unbiased estimates and predictions. These approaches allowed for the development of previously unmatched designs in terms of the complexity of hypotheses that could be tested, and also opened the door to the analysis of complex datasets that lie beyond the sphere of purely deductively designed datasets. It is thus not surprising that Mixed Effect Models rose to prominence in disciplines as diverse as psychology, social science, physics and ecology.
What the method does
Mixed Effect Models are, mechanically speaking, one step further in the combination of regression analysis and the Analysis of Variance, as they combine the strengths of both approaches: Mixed Effect Models are able to incorporate both categorical and continuous independent variables. Since Mixed Effect Models were further developed into Generalized Linear Mixed Effect Models, they are also able to incorporate different distributions of the dependent variable.
Typically, the diverse algorithmic foundations used to calculate Mixed Effect Models (such as maximum likelihood and restricted maximum likelihood) allow for more statistical power, which is why Mixed Effect Models often perform slightly better than standard regressions. The main strength of Mixed Effect Models is, however, not how they deal with fixed effects, but that they are able to incorporate random effects as well. The combination of these possibilities makes (generalised) Mixed Effect Models the Swiss army knife of univariate statistics. No statistical model to date is able to analyse a broader range of data, and Mixed Effect Models are the gold standard in deductive designs.
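As a minimal sketch of what such a model looks like in practice, the following Python snippet fits a random-intercept model with the statsmodels package to simulated data; all variable names and values are invented for illustration, echoing Fisher's agricultural setting. REML is the default estimation method, while plain maximum likelihood is needed when comparing models with different fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 20 fields with 10 plots each: a fixed fertilizer effect
# plus a random, field-specific intercept.
n_fields, n_plots = 20, 10
field = np.repeat(np.arange(n_fields), n_plots)
fertilizer = rng.uniform(0, 1, n_fields * n_plots)
field_effect = rng.normal(0, 2, n_fields)[field]    # random variance
y = 5 + 3 * fertilizer + field_effect + rng.normal(0, 1, n_fields * n_plots)
data = pd.DataFrame({"y": y, "fertilizer": fertilizer, "field": field})

# Restricted maximum likelihood (REML, the default) yields less biased
# variance estimates; plain maximum likelihood (reml=False) is required
# when comparing models that differ in their fixed effects.
model = smf.mixedlm("y ~ fertilizer", data, groups=data["field"])
fit_reml = model.fit(reml=True)
fit_ml = model.fit(reml=False)
print(fit_reml.summary())
```

The summary reports the fixed fertilizer effect alongside the estimated variance of the field-level random intercept, i.e. the variance we want to account for without interpreting it.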
Often, these models serve as a baseline for the whole scheme of analysis, and thus already help researchers to design and plan their whole data campaign. Naturally, these models are then at the heart of the analysis, and depending on the discipline, the interpretation can range from widely established norms (e.g. in medicine) to pushing the envelope in those parts of science where it is still unclear which variance can be tamed, and which cannot. Basically, all sorts of quantitative data can inform Mixed Effect Models, and software applications such as R, Python and SAS offer broad arrays of analysis solutions, which are still being continuously developed. Due to the complexity of the approach, Mixed Effect Models are not easy to learn, as they are often hard to tame. Within such advanced statistical analysis, experience is key, and practice is essential in order to become versatile in the application of (generalised) Mixed Effect Models.
Strengths & Challenges
The biggest strength of Mixed Effect Models is how versatile they are. There is hardly any part of univariate statistics that cannot be done by Mixed Effect Models, or to rephrase it, there is hardly any part of advanced statistics that, even if it can be done by other models, cannot be done better by Mixed Effect Models. They surpass the Analysis of Variance in terms of statistical power, eclipse regressions by being better able to consider the complexities of real-world datasets, and allow for a planning and understanding of random variance that brings science one step closer to acknowledging that there are things we want to know, and things we do not want to know.
Take the example of the many studies in medicine that investigate how a certain drug works on people to cure a disease. To this end, you want to know the effect the drug has on the prognosis of the patients. What you do not want to know is whether people are worse off if they are older, lack exercise or have an unhealthy diet. These single effects do not matter to you, because it is well known that prognosis often gets worse with higher age, and factors such as lack of exercise and unhealthy diet choices also influence the general health status and prognosis. What you may want to know is whether the drug works better or worse in people who have an unhealthy diet, are older or lack regular exercise. These interactions can be meaningfully investigated by Mixed Effect Models: the variance of all these factors is minimised, while the effect of the drug as well as its interactions with the other factors can be tested. This makes Mixed Effect Models so powerful, as you can implement them in a way that allows you to investigate quite complex hypotheses or questions.
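As a hedged sketch of how such a drug study could be specified, the snippet below (again on simulated data, with all variable names invented) treats the drug, the lifestyle factors and the drug-diet interaction as fixed effects, and the clinic a patient was treated in as a random intercept.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
clinic = rng.integers(0, 12, n)        # patients nested in 12 clinics
drug = rng.integers(0, 2, n)           # 1 = treatment, 0 = placebo
diet = rng.integers(0, 2, n)           # 1 = unhealthy diet
exercise = rng.integers(0, 2, n)       # 1 = regular exercise
age = rng.normal(60, 10, n)

clinic_effect = rng.normal(0, 1.5, 12)[clinic]   # random clinic variance
prognosis = (10 + 2.0 * drug - 0.05 * age + 0.5 * exercise
             - 1.0 * drug * diet                 # drug works worse with unhealthy diet
             + clinic_effect + rng.normal(0, 1, n))

data = pd.DataFrame({"prognosis": prognosis, "drug": drug, "diet": diet,
                     "exercise": exercise, "age": age, "clinic": clinic})

# 'drug * diet' expands to drug + diet + drug:diet, so the interaction
# discussed above is tested directly; age and exercise are fixed
# covariates whose variance is accounted for, while the clinics
# absorb random variance.
model = smf.mixedlm("prognosis ~ drug * diet + age + exercise",
                    data, groups=data["clinic"])
print(model.fit().summary())
```

A clear drug:diet term in the output would indicate that the drug's effect depends on diet, which is exactly the kind of question the fixed-effect part of the model is built to answer.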
The greatest disadvantage of Mixed Effect Models is the level of experience that is necessary to implement them in a meaningful way. Designing studies takes a lot of experience, and the current form of peer review often does not allow researchers to present the complex thinking that goes into the design of advanced studies (Paper BEF China design). There is hence a discrepancy between how people implement studies and how other researchers can understand and emulate these approaches. Mixed Effect Models are also an example where textbook knowledge is not yet saturated, hence books become outdated rather quickly and often do not offer exhaustive examples of the real-life problems researchers may face when designing studies. Medicine and psychology offer growing resources to this end, since here the preregistration of studies, prompted by the reproducibility crisis, offers a glimpse into the design of scientific studies.
The lack of experience in how to design and conduct studies driven by Mixed Effect Models leads to the critical reality that, more often than not, there are flaws in the application of the method. While this has gradually improved over time, it is still a matter of debate whether every published study using these models does justice to the original idea. Especially around the millennium, there was almost a hype in some branches of science around how fancy Mixed Effect Models were considered to be, and not all applications were sound and necessary. Mixed Effect Models can also make the world more complicated than it is: sometimes a regression is just a regression is just a regression.
Normativity
Mixed Effect Models are the gold standard when it comes to reducing complexities into constructs, for better or worse. All variables that go into a Mixed Effect Model are normative choices, and these choices matter deeply. First of all, many people struggle to decide which variables represent fixed variance, and which variables are relevant as random variance. Second, how are these variables constructed, are they continuous or categorical, and if the latter, what is the reasoning behind the category levels? Designing Mixed Effect Model studies is thus decidedly part of advanced statistics, and it is even harder when it comes to integrating non-designed datasets into a Mixed Effect Model framework. Care and experience are needed to evaluate sample sizes and variance across levels and variables, which brings us to the most crucial point: model inspection.

Mixed Effect Models are built on a litany of preconditions, most of which most researchers choose to conveniently ignore. Now comes the real whammy: in my experience, this is more often than not ok, because it does not matter. Mixed Effect Models are, bless the maximum likelihood estimation, sturdy as hell. It is hard to find a model that does not have some predictive or explanatory value, even if hardly any preconditions are met. Does this mean that we should ignore these? No! In order to sleep sound and safe at night, I am almost obsessed with model inspection, checking variance across levels, looking at the residuals, and looking for gaps and flaws in the model's fit. We should be really conservative to this end, I think, because mind you that by focusing on fixed and random variance, we potentiate the things that could go wrong. As I said, more often than not this is not the case, but I propose to be super conservative when it comes to your model outcomes.

In order to get there, we need yet another thing: model simplification. Mixed Effect Models are at the forefront of statistics, and this might be the reason why the AIC as a parsimony-based evaluation criterion is more abundant here than in other statistical approaches. P-values have widely fallen out of fashion in many branches of science, as has the reporting of full models. Instead, model reduction based on information criteria is on the rise, reporting parsimonious models that honour Occam's razor (a code sketch at the end of this section illustrates this workflow). Starting with the maximum model, these approaches reduce the model until it is the minimum adequate model, in other words, the model that is as simple as possible, but as complex as necessary. The AIC is about the equivalent of a p-value of 0.12, depending on sample size, hence beware that the main question may be the difference from the null model.

This links to the next point: explanatory power. There has been some sort of revival of r2 values lately, mainly based on the suggestion of r2 values that can be utilised for Mixed Effect Models. I deeply reject these approaches. First of all, Mixed Effect Models are not about how much the model explains, but whether the results are meaningfully different from the null model. I can understand that in a cancer study I would want to know how much my model may help people, hence an occasional glance at the fitted values against the original values may do no harm. However, r2 in Mixed Effect Models is to me a step back into the bad old days when we evaluated the worth of a model by its explained variance.
This led to a lot of feeble discussions, of which I only mention the question of how good a model needs to be in terms of these values in order not to be considered bad, and vice versa. This is obviously a problem, and such normative judgements are one reason why statistics have such a bad reputation. Second, people are again starting to report their models based on the r2 value, and even have their model selection not be independent of the r2 value. This is something that should be bygone, yet it is not. Beware of the r2 value, it will only deceive you in Mixed Effect Models. Third, r2 values in Mixed Effect Models are deeply problematic because they cannot take the complexity of the random variance into account. Hence r2 values in Mixed Effect Models take us back to the other good old days, the days when mean values ruled the outcomes of science. Today, we are closer to an understanding where variance matters, so why would we not embrace that? Ok, it comes with a longer learning curve, but I think that the good old reduction to the mean was nothing but mean, sorry!

Another very important point about Mixed Effect Models is that they, probably more than any other statistical method, mark the point where experts in one method, say for example interviews, not only needed to learn how to conduct interviews as a scientific method, but also needed to learn advanced statistics in the form of Mixed Effect Models. This creates a double burden, and while learning several methods can be good for embracing a more diverse understanding, it is also challenging, and highlights a new development. Today, statistical analyses are increasingly outsourced to experts, and I consider this to be a generally good development. In my own experience, it takes a few thousand hours to become versatile at Mixed Effect Models, and modern science is increasingly built on collaboration.
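To make the inspection and simplification steps described above concrete, here is a minimal sketch of AIC-based model reduction and a residual check, again on simulated data. The hand-written aic helper is an assumption: it is computed manually because built-in AIC support for mixed models varies across statsmodels versions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n, n_groups = 400, 10
g = rng.integers(0, n_groups, n)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                              # pure noise
y = 1 + 2 * x1 + rng.normal(0, 1, n_groups)[g] + rng.normal(0, 1, n)
data = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "g": g})

def aic(result):
    # AIC = 2k - 2 log(L); len(params) counts fixed effects plus
    # the random-effect variance parameters.
    return 2 * len(result.params) - 2 * result.llf

# Fixed-effect structures are compared with plain ML (reml=False);
# the null model keeps only the intercept and the random effect.
full = smf.mixedlm("y ~ x1 + x2", data, groups=data["g"]).fit(reml=False)
reduced = smf.mixedlm("y ~ x1", data, groups=data["g"]).fit(reml=False)
null = smf.mixedlm("y ~ 1", data, groups=data["g"]).fit(reml=False)
print({"full": aic(full), "reduced": aic(reduced), "null": aic(null)})
# The reduced model should win: as simple as possible, as complex as necessary.

# Model inspection: residuals against fitted values should show no
# patterns, gaps, or level-dependent variance.
plt.scatter(reduced.fittedvalues, reduced.resid, s=8)
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.show()
```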
Outlook
In terms of Mixed Effect Models, language barriers and the norms of specific disciplines are rather strong, and explaining the basics of these advanced statistics to colleagues is an art in itself, just as the experience of researchers versatile in interviews cannot be reduced to a few hours of learning. Education in science needs to tackle this head on, and stop teaching statistics that are outdated and hardly published anymore. I suggest that, in the long run, all students from the quantitative domain should at least at the Master's level be able to understand the preconditions and benefits of Mixed Effect Models, but this is something for the distant future. Today, PhD students versatile in Mixed Effect Models are still outliers. Let us all hope that this statement will be outdated sooner rather than later. Mixed Effect Models are surely powerful and quite adaptable, and are increasingly becoming a part of normal science. Honouring the complexity of the world while still deriving valid statements based on statistical analysis has never been more advanced on a broader scale, yet statisticians need to recognise the limitations of real-world data, and researchers utilising these models need to honour the preconditions and pitfalls of these analyses. Current science is, in my perception, far away from reporting reproducible analyses, meaning that one and the same dataset will be analysed differently by Mixed Effect Model approaches, partly based on experience, partly based on differences between disciplines, and probably also because of many other factors. Mixed Effect Models need to be consolidated and unified, which would probably make normal science better than ever, I think.
The author of this entry is Henrik von Wehrden.