Is It Really Robust?
Reinvestigating the Robustness of ANOVA Against Violations of the Normal Distribution Assumption
Abstract
Empirical evidence for the robustness of the analysis of variance (ANOVA) against violation of the normality assumption is presented by means of Monte Carlo methods. High-quality samples from normally, rectangularly (uniformly), and exponentially distributed populations are created by drawing random numbers from the respective generators, checking their goodness of fit, and admitting only the best-fitting 10% to the investigation. A one-way fixed-effects design with three groups of 25 values each is used. Effect sizes are implemented in the samples and varied over a broad range. Comparing the outcomes of the ANOVA calculations across the distribution types gives reason to regard the ANOVA as robust: both the empirical type I error rate α and the empirical type II error rate β remain constant under violation of the normality assumption. Moreover, a regression analysis identifies the factor “type of distribution” as non-significant in explaining the ANOVA results.
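The sampling procedure described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the candidate count, the Kolmogorov–Smirnov statistic as the goodness-of-fit criterion, the seed, and the shift used to implement an effect size are all assumptions made for the example; only the group size (25), the three distribution types, and the "best 10%" screening rule come from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # assumed seed, for reproducibility only

def best_samples(draw, cdf, n=25, n_candidates=300, keep_frac=0.10):
    """Draw candidate samples of size n, rank them by their
    Kolmogorov-Smirnov distance to the target distribution,
    and keep only the best-fitting fraction (here: 10%)."""
    candidates = [draw(n) for _ in range(n_candidates)]
    d_stats = [stats.kstest(s, cdf).statistic for s in candidates]
    order = np.argsort(d_stats)
    n_keep = max(3, int(keep_frac * n_candidates))
    return [candidates[i] for i in order[:n_keep]]

# The three population types used in the study; parameterizations are assumed.
distributions = {
    "normal":      (lambda n: rng.normal(size=n),      stats.norm.cdf),
    "rectangular": (lambda n: rng.uniform(size=n),     stats.uniform.cdf),
    "exponential": (lambda n: rng.exponential(size=n), stats.expon.cdf),
}

effect = 0.5   # hypothetical mean shift implementing a nonzero effect size
alpha = 0.05   # conventional significance level (assumed)

for name, (draw, cdf) in distributions.items():
    kept = best_samples(draw, cdf)
    rejections, trials = 0, len(kept) // 3
    for i in range(trials):
        # One-way fixed-effects design: three groups of 25 values each,
        # with the effect added to the third group.
        g1, g2, g3 = kept[3 * i], kept[3 * i + 1], kept[3 * i + 2] + effect
        f_stat, p = stats.f_oneway(g1, g2, g3)
        rejections += p < alpha
    print(f"{name}: empirical rejection rate = {rejections / trials:.2f}")
```

Comparing the printed rejection rates across the three distribution types mirrors the paper's comparison of empirical error rates; with `effect = 0.0` the same loop estimates the empirical type I error rate instead.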