Rethinking Statistical Education: A Necessary Shift in Approach
Chapter 1: The Flaws in Current Statistical Education
The conventional methods of teaching statistics at many universities may be counterproductive to scientific advancement.
Section 1.1: Understanding Bayesian vs. Frequentist Methods
Initially, my intention was to explore the differences between Bayesian and Frequentist methods. However, as I delved deeper, I recognized that the real issue lies not in merely understanding these differences, but in comprehending how both approaches can be effectively utilized in various scenarios. In fact, these seemingly distinct perspectives on data and reality often reveal more similarities than differences.
As I explored literature on Bayesian methods, I was struck by their potential applications, particularly considering how little exposure I had to them during my university statistics courses. I had learned about Bayes' Theorem and its use in clinical contexts—famous examples involving diseases and tests—but that was the extent of my education. I remained unaware of the vast array of techniques that could stem from this theorem.
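To make that clinical example concrete, here is a minimal sketch of Bayes' Theorem applied to a hypothetical screening test. The prevalence, sensitivity, and specificity values are illustrative assumptions, not figures from any real test.

```python
# Bayes' Theorem for a hypothetical screening test:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

prevalence = 0.01    # P(disease): assumed base rate of 1%
sensitivity = 0.95   # P(positive | disease)
specificity = 0.90   # P(negative | no disease)

# Total probability of a positive result (true positives + false positives)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior probability of disease given a positive test
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.088
```

Under these assumed numbers, a positive result still corresponds to less than a 9% chance of actually having the disease, which is exactly the counterintuitive point those textbook examples are meant to convey.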
Moreover, I did not grasp that many of the traditional methods I had learned, such as t-tests, ANOVA, and Linear Regression, are all special cases of the General Linear Model. Our instructor touched on this in our final class, attempting to bridge the gap between the numerous methods we had encountered as if they were standalone concepts. Unfortunately, this often left many of us more perplexed than enlightened. One of my classmates asked why we didn't start with the overarching principles and then address the specific cases later. The instructor said he would have liked to, but feared that students might struggle to follow along, dismiss statistics as unimportant, and avoid it altogether, opting for the simplest methods to bypass deeper engagement.
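As an illustration of that unifying view, the sketch below shows that an independent-samples t-test is simply a linear regression with one dummy-coded predictor. The simulated data, group sizes, and effect size are arbitrary assumptions chosen for demonstration.

```python
import numpy as np
import scipy.stats as st
import statsmodels.api as sm

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # simulated outcomes, group A
group_b = rng.normal(loc=11.5, scale=2.0, size=30)  # simulated outcomes, group B

# Classical two-sample t-test (equal variances assumed)
t_stat, p_val = st.ttest_ind(group_a, group_b, equal_var=True)

# The same test as a linear model: outcome ~ intercept + group indicator
y = np.concatenate([group_a, group_b])
x = np.concatenate([np.zeros(30), np.ones(30)])  # 0 = group A, 1 = group B
ols = sm.OLS(y, sm.add_constant(x)).fit()

print(f"t-test:       t = {t_stat:.3f}, p = {p_val:.4f}")
print(f"linear model: t = {ols.tvalues[1]:.3f}, p = {ols.pvalues[1]:.4f}")
# The t-statistics and p-values match (up to sign), because the
# t-test is a special case of the general linear model.
```

ANOVA and ordinary linear regression fall out of the same framework by adding more indicator or continuous predictors, which is the connection our instructor tried to make in that final class.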
Yet, I observed my peers grappling with statistical concepts in their theses regardless. The root of the issue was not a disdain for the subject, but rather a lack of knowledge and confidence in selecting appropriate methodologies. With many having completed their last statistics course two years prior, some attempted to tackle research questions that didn’t align with the methods they had learned. For those without supportive supervisors to guide them toward suitable resources, the outcome often mirrored the very issues we had been taught to criticize, resulting in increased frustration with statistics and research.
I believe this is a common experience for many.
Section 1.2: The Replication Crisis and Its Implications
In addition to my personal experiences, there is a significant replication crisis affecting scientific disciplines such as psychology, economics, and medicine. Numerous studies have failed to replicate or have been discredited in recent years, even though many of their findings had already spread into popular literature. A primary cause of this crisis is the application of inappropriate statistical models and methodologies. I do not hold individual researchers entirely accountable; many may genuinely lack the necessary knowledge. Although there is growing awareness of the pitfalls of relying blindly on p-values and on a fragmented array of techniques, many researchers remain uncertain about how to move beyond these traditional tools or where to look for alternatives. Some may not even perceive the necessity for change. Each active scientific field grapples with its own challenges in measurement and interpretation, often using its own terminology to articulate theories and concepts. As such, researchers need a cohesive set of principles to design, develop, and refine specialized statistical methods.
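One of those pitfalls is easy to demonstrate: run enough tests on pure noise and some will come out "significant" by chance alone. The simulation below is a minimal sketch of that multiple-comparisons problem; the number of tests, sample sizes, and threshold are assumptions chosen purely for illustration.

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(0)
n_tests, n_per_group, alpha = 100, 30, 0.05

false_positives = 0
for _ in range(n_tests):
    # Both groups come from the same distribution: the null hypothesis is true
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    _, p = st.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

# Roughly alpha * n_tests "significant" findings are expected from noise alone
print(f"{false_positives} of {n_tests} null comparisons reached p < {alpha}")
```

Without corrections, or a model that reflects how the data were actually generated, such chance findings are easy to mistake for real effects.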
To provide these principles, we must fundamentally re-evaluate how we teach statistics. That means broadening the curriculum beyond classical frequentist models and building an understanding of statistics from the ground up. To improve future research quality, we must present statistical models in greater depth and as interconnected procedures, and show how these methods connect to hypotheses, measurements, and the natural processes that underlie our questions. We need to critically assess whether the pervasive use of null hypothesis significance testing is justified. We must also cover statistical models for data that do not follow a Normal Distribution. And it is imperative to teach both Bayesian and Frequentist perspectives, along with the critical thinking required to decide when to apply each approach.
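To give a flavor of what teaching both perspectives side by side could look like, here is a minimal sketch comparing a frequentist and a Bayesian estimate of a proportion. The observed counts and the uniform Beta(1, 1) prior are illustrative assumptions.

```python
import numpy as np
from scipy import stats

successes, trials = 18, 50  # hypothetical data: 18 successes in 50 trials

# Frequentist: point estimate and 95% Wald confidence interval
p_hat = successes / trials
se = np.sqrt(p_hat * (1 - p_hat) / trials)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

# Bayesian: Beta(1, 1) prior updated to a Beta posterior (conjugacy)
posterior = stats.beta(1 + successes, 1 + trials - successes)
cred_low, cred_high = posterior.ppf([0.025, 0.975])

print(f"Frequentist: p_hat = {p_hat:.3f}, 95% CI  = ({ci_low:.3f}, {ci_high:.3f})")
print(f"Bayesian:    mean  = {posterior.mean():.3f}, 95% CrI = ({cred_low:.3f}, {cred_high:.3f})")
```

With this much data the two intervals nearly coincide; the interesting differences appear with small samples, informative priors, or models that move beyond the Normal Distribution.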
-Merlin
References:
[1] McElreath, R. (2020). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. CRC Press.
[2] Martin, O. (2018). Bayesian Analysis with Python. Packt Publishing.
Chapter 2: The Importance of Comprehensive Statistical Knowledge
In the video "Statistical Rethinking - Lecture06," the speaker works through the intricacies of Bayesian statistics in an engaging and accessible way, highlighting why Bayesian methods matter in practical applications.
The second video, "Statistical Rethinking 2023 - 01 - The Golem of Prague," explores Bayesian principles and emphasizes the need for a shift in statistical education to better equip researchers for modern scientific challenges.