After one brief session on searching for academic sources in our library system I tried to do my first research search – I even played around with Boolean coding (which is a decidedly fun word to say, and I intend to use it to my every social advantage in dinner conversations moving forward).
We were given sage advice by our resident Education Subject Librarian to try to get it to a manageable “200” hits. With some selective tweaking, I was able to get the number of hits to 243. I focused my search on using self-reflection tools to improve student motivation and knowledge acquisition. Of course, my first thought was “Do I have to read them all?”
Over the last 6 years, I have had my students complete a major academic paper in my senior science courses. In my scaffolding of the process, we discuss finding primary sources to support their claims and how to search for “scholarly articles” using PubMed. I keep trying to think back to all the advice I have given students over the years. It seems surprising now, even after only a couple of days in this program, that I was able to give students ANY credible skills on finding information at all, considering I still feel very much in “fake it till you make it” territory.
But on to methodologies….
Last night, a conversation on Twitter about Twitter led me to this blog by Wasim Ahmed. Ahmed gives an overview of the key trends in research methodology, specifically how to use Twitter data effectively given the multitude of available methodologies. Along with his list of the various methods for coding and analyzing social media data, he provides an extensive listing of the tools needed to pull desired data from social media accounts – all of which seems a little devious for some reason in its language. Things like overcoming difficulty in accessing old data using something called “web-scraping” just give me the chills.
My professor discussed the need to adapt to the vocabulary of research methodologies as a first step (I have a running glossary of new terms – axiology, ethnography, ontology…. lots of “ologies”). What I was able to take away from this blog, in conjunction with our reading by C. Lewin (Understanding and Describing Quantitative Data), greatly aided my understanding of another reading on social media by DeGroot et al.
This was a fascinating paper (and will play a role in another blog about my Twitter history).
DeGroot et al. explore the question of whether an instructor’s social media use affects student perceptions of credibility (and other traits). Looking at this article for my coursework, I couldn’t help but notice its research methodology. I was considering the 4 R’s and looking at the stats generated – I am beginning to see the overlap in the information being offered by my two concurrent courses.
The opening paragraphs of the reading contain a lot of research methodology – looking at the research question in more detail, the subjects, determining factors of influence, the methodological approach, the statistics collected. The study used a mixed-methods data collection (qualitative and quantitative), with the quantitative component being of experimental design.
Reading “Using a sliding response scale, participants rated their own tweets,” I recognized these as measurable descriptive statistics used to summarize and describe the data. They also used ranked data sets for a lot of the information.
I considered the sample size – 239 individuals who met the study criteria – which is large enough for a major subgroup, and the fact that participants were randomly selected ensured a representative sample.
The inclusion of standard deviation and mean values began to hold more significance; so did reliability scale values and open-ended, labeled, and coded qualitative data – all of which was made more understandable by having looked at the Ahmed article, which gave an overview of thematic analysis. Even the authors’ discussion of external and internal validity based on the data and trial conditions made a little more sense:
“We do not know how these results might extend to different ages of students or nontraditional students.”
Now, it is possible that I have misinterpreted all this information… after all, I have spent only three days looking at research methodologies and have had very little instruction yet in identifying components properly. That being said, I find it interesting how much must be considered when “asking a few questions” (which is how I saw research before last week).
While I was marveling at my new knowledge and feeling more confidence in my understanding, I did attempt to think deeply about this study from a methodological perspective, and had one crystallized thought:
DeGroot et al. point out: “Perhaps a student’s familiarity and use of Twitter could alter the way they assess the credibility of instructors who use Twitter.”
In questioning the role of the participants in this study, I was considering whether the authors thought about how personal history may have played a role in the outcomes of the data. While social media usage was documented using some basic ranked questioning, the authors never mention the influence of a participant’s past social media interactions. I am left to consider how a participant’s associations and emotions might be altered if they had experienced cyber-bullying or any other significant negative interaction on social media in the years prior to being part of this study. Equally, social anxiety, Autism-spectrum traits, and any number of other personal history factors would likely also play a role in how participants perceive a professor’s social media usage, and the role of social media in a course in general, even in these hypothetical experiences. Even a participant’s past interactions with instructors in class, and the interpersonal relationships they had developed with past educators, might have biased their interpretation of an instructor’s social media usage.
The only other thought I had is “Are these valid considerations?” Am I beginning to see deeper into research methodology – or am I reaching? I don’t think I can answer these questions alone.
PHOTO: “books” by stewartcutler is licensed under CC BY-NC-SA 2.0.
July 7, 2019 — 9:29 pm
I’m loving the overlap in the two courses, and really starting to understand why we take research methods right off the bat!
I appreciate your question about the impact of personal history and past social media usage. It certainly seems possible that this would have an impact, although I also wonder if some of this impact would come out in the open-response questions? I also found it interesting that one of the criteria for participation in the study was that the participant had to be a Twitter user. What would the results be if non-Twitter users were also included?
I definitely think all of your considerations are valid, and they kind of serve as an explanation of why there are so many research projects and publications: one study typically can’t answer all the questions, so follow-up studies are necessary! I’m surprised you didn’t question whether the results would also apply to classroom teachers!