Have you ever questioned the “Research has shown” speech during a pre-service or professional development meeting? If you’re anything like me, then yes, you have. You’ve probably wondered to yourself, what research? How was it conducted? What is the critical review of the work? Often, I am left with Jerry Maguire-like outbursts of “SHOW ME THE RESEARCH!” It’s exhausting. It’s also, to some degree, dangerous.
Pretend for a moment you are back in your high school English class. (Okay, I know, we don’t call it “English” anymore…Communication Arts?…humor me). Your teacher has assigned your first research essay. Perhaps it is a persuasive argument. In your argument, you want to convince people that they should become vegan. (Perhaps you are a vegan, and you love being one. You think everyone should experience this lifestyle.) Once you’ve written your argument, you are ready to find research and citations to support your case. You find your way to Google Scholar, do a couple of searches, and voilà, you identify some evidence to back up your claim.
The problem with the above scenario is the order in which the evidence was gathered. Finding research to back up your claim after you’ve made it is akin to a scientist reaching a conclusion before running the experiment. A claim should only be made once the data from the experiment is collected, or in this case, after the topic has been researched. Teachers struggle to get this important point across to students every year. Unfortunately, it is difficult to convince someone (especially someone in their teenage years) to research a topic on which they already hold strong opinions. Evaluating evidence takes skill. Each year in my chemistry course, I conduct a lab in which students graph the temperature of ice as it melts and absorbs energy. Before the lab, they predict the line they should see.
Give it a try. Use the empty graph below to sketch the line you would expect the temperature to follow as ice melts.
Now that you’ve made your prediction, I’ll show you what my students often do on their graphs.
Notice in the graph above that the temperature rises steadily with time. When I ask students to reflect on why they made this prediction, they often journal about life experience and their common knowledge of how ice melts. In other words, they scoff at me and say, “because everyone knows that ice melts as it gets hotter.”
Fair. Life experience should not be discredited. Observation is an important part of being a scientist. Yet it is how observation is interpreted that reveals our biases, whether intentional or not. The graph below shows the actual behavior of the temperature of ice during the melting process.
Here’s my point: even after seeing the data from the ice-melting lab, students continue to claim that temperature climbs steadily over time in a linear fashion. They completely disregard the two distinct plateaus where the temperature levels off. When these plateaus are pointed out during post-lab discussions, students attribute them to instrument error, the probe moving around, or ice shifting as it melts. Their preconceived claim, that temperature increases steadily, is so strongly rooted in their understanding of the natural phenomenon of melting that contradictory data is ignored. Their subconscious is telling them that their opinion, based on personal experience, is more accurate than empirical data. Even scarier, they are finding ways to argue that the data (somehow) supports their claim.
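Those plateaus are not noise; they fall directly out of the physics. While ice is melting (and later, while water is boiling), added energy goes into breaking intermolecular bonds rather than raising the temperature. A minimal sketch of that heating curve, assuming standard textbook values for water and a constant heating rate (not my actual lab setup or data), looks like this:

```python
# Heating curve for water: temperature vs. heat added, assuming
# textbook constants. The plateaus appear during phase changes,
# where added energy breaks bonds instead of raising temperature.

C_ICE = 2.09       # J/(g*K), specific heat of ice (assumed textbook value)
C_WATER = 4.18     # J/(g*K), specific heat of liquid water
L_FUSION = 334.0   # J/g, latent heat of fusion (melting)
L_VAPOR = 2260.0   # J/g, latent heat of vaporization (boiling)

def temperature(q, mass=1.0, start=-20.0):
    """Temperature (deg C) of `mass` grams of ice starting at `start` deg C
    after `q` joules of heat have been added."""
    # Stage 1: warm the ice up to 0 C
    q1 = mass * C_ICE * (0.0 - start)
    if q <= q1:
        return start + q / (mass * C_ICE)
    q -= q1
    # Stage 2: melt the ice -- temperature holds at 0 C (first plateau)
    if q <= mass * L_FUSION:
        return 0.0
    q -= mass * L_FUSION
    # Stage 3: warm the liquid water up to 100 C
    if q <= mass * C_WATER * 100.0:
        return q / (mass * C_WATER)
    q -= mass * C_WATER * 100.0
    # Stage 4: boil the water -- temperature holds at 100 C (second plateau)
    # (heating the steam beyond 100 C is omitted for brevity)
    return 100.0

# Sample a few points along the curve: the flat stretches are the plateaus
for q in [0, 20, 100, 300, 500, 800, 2000]:
    print(q, round(temperature(q, mass=1.0), 1))
```

Plotting `temperature(q)` against `q` reproduces the shape students see in the lab: two rising segments separated by flat plateaus, not one straight line.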
Reflecting on the structure of the educational system, I wonder whether those making decisions are basing their quest to implement “best practices” on in-depth analysis. How initiatives are implemented and how data is collected, assessed, and evaluated are often the result of an administrative directive. Districts set yearly goals around current trends in research. Yet no one really discusses what research was conducted. Who did the research? To what extent was it done? What personal biases were acknowledged during the process to reduce misinterpretation? It is generally understood that those presenting are not to be questioned. Lorin Anderson, one of the co-authors of the famous “Revised Bloom’s Taxonomy,” stated in a recent article on the subject that, “…after 40+ years in the business, I am greatly dismayed that many educators get their information from oral presentations and secondary (and sometimes tertiary) sources. This practice tends to result in passing along half-truths and misinterpretations.” That’s frightening…TERRIFYING. It seems we are in a hurry to implement new strategies, goals, and techniques without understanding the entirety of their origin. Perhaps more accurately, we are implementing these initiatives without having critical discussions concerning their validity.
For example, John Hattie, an educational researcher, states that class size has little effect on student learning. John Medina, a molecular biologist, on the other hand, argues that smaller class sizes have a significant effect on student learning. Hattie uses observational research methods, while Medina focuses on brain structure and development. Which claim is valid? Were biases evaluated and eliminated during the investigation? What are the current critiques of the research? I’ll let you analyze the data and decide for yourself. It might require some time to actually read it, but ultimately, that’s what we should be doing. Hattie’s research, Visible Learning, can be found here, and Medina’s research, Brain Rules, can be found here. If your district has taken on one of these researchers to guide its initiative, did it research the work fully before making the decision? Or is it another example of Lorin Anderson’s half-truths and misinterpretations?
There is a definite sense that education is striving to “keep up with the Joneses.” Research appears to be used to justify decisions already made, instead of being discussed and critically analyzed to guide them.