Over the course of my still relatively short teaching career, I have shifted my focus on the skills I want my science students to come away with from my classes. Straight out of faculty, my focus was on content and curricular outcomes, period. The curriculum document was thick, and I had little time to waste on executive functioning, blogging or project-based learning (which can proceed at a snail’s pace compared to direct instruction). After years of cramming information into empty vessels only to see it spill out and be lost time and time again, I started doing some reflecting on my practice. In my work with colleagues, we decided the next step was to encourage students to “think like a scientist.” We created lessons to develop critical thinking, questioning, analysis and reflection skills in our students. I felt better about myself as a teacher, but the results were ambiguous – if I even looked for results. Most of these skills were taught as one-off lessons, with little connection to the future content I still had to slog through (after all, this was in addition to, not instead of), and I often wondered why students in my upper-level science classes weren’t able to apply these thinking skills to make connections to the more advanced concepts.
In one of our first course readings, Clark et al. reference previous research stating: “Kirschner (1991, 1992) also argued that the way an expert works in his or her domain (epistemology) is not equivalent to the way one learns in that area (pedagogy).” The underlying point being that a base of knowledge is necessary – a good foundation of scientific principles is needed before a student can begin to think like a scientist. Obviously, I didn’t think of this myself, and that is the point of this post – I didn’t do any research! I have an aversion to research (which makes this program problematic).
For several years, I have also devoted some of my introductory weeks of class to exploring “bad science” with students (using this wonderful poster from Compound Interest to guide us). This is a great introduction to thinking scientifically, as we debate and debunk YouTube myths, online articles, and commercial products based on “sound research” – the students love finding BS. We work on sniffing out poor scientific rigor in research, discussing correlation vs. causation, and looking at logical fallacies like confirmation bias. Years at this, along with my own experience of the world, have reinforced one immutable fact: data can be created for anything and is easily manipulated to serve a purpose. All the changes I have made over a decade of teaching have been done without sound data to support the change. It feels right to move in this direction or that. This feels like the best thing for helping my students. Research data can’t be trusted.
So here we are in my first week of grad school, and I have been given two academic sources – Clark et al., mentioned above, and a second one by Barron & Darling-Hammond – both of which cite the same specific research, regarding unguided learning in medical students, to support two diametrically opposed claims. This has only acted as further confirmation bias for me, adding to what we are exposed to in the news through sensationalized headlines about diet, space or the environment. What can be trusted? In quantum physics, they have the observer effect; in social science research the effect is seemingly mirrored – as Ellis et al. state in another article we were tasked to read: “Even though some researchers still assume that research can be done from a neutral, impersonal, and objective stance…most now recognize that such an assumption is not tenable.”
And while that article wasn’t wholly about research neutrality, it did reinforce this existing concern – will I be able to trust the “research” I find during this program, or will I just expect that if I dig long enough, I will find evidence to the contrary? Perhaps searching for counter-claims is important in my research going forward, so that I can feel more confident in my direction and the justification for it. What kind of research am I going to trust, to find, to search for, to utilize, to create? Fancying myself a scientist, I have trouble with qualitative evidence. As mentioned, I press my students to find the flaws in this kind of data when evaluating medicine, technologies, and recent discoveries. But I also recognize that education as a career, and specifically as a research area, is well suited – if not better suited – for qualitative data. Even if quantitative data can be found, could I trust it, knowing full well how many variables contribute to student success? Perhaps an autoethnographic methodology is a strong approach here too, as high schools and high school students have a distinct culture that contributes to the outcomes of any academic approach. These are questions that I need to explore further as I learn more about research methodology.
Image by Arek Socha on Pixabay.
October 5, 2019 — 7:39 pm
It’s been three months since you first posted this. What are your thoughts now on qualitative vs quantitative data regarding pedagogy?
October 15, 2019 — 8:02 pm
I have a better understanding of why things like literature reviews and meta-analyses are so important in educational research. There are too many variables involved to make purely quantitative research the “correct” methodology, but spending too much time theorizing and qualitatively assessing can’t be the answer either. It feels like a cop-out to say “I’ll take mixed methodology, please.” But the reality is that many lenses are needed to get any sense of the effects of an approach, and frequently stepping back and looking at all the different approaches, effects and conclusions is vital to move pedagogy forward.