My Social Science Crises

By: Stephen M. Ryan

“Following the science” has become the politician’s mantra of choice in these virus-ridden times. I am not a scientist but, as a teacher, I have long been a consumer of social science. Following the science usually seems like a good idea at the time, but following it blindly is a terrible idea, one that has led me fairly regularly into wrong turns and pratfalls. I offer this cautionary autobiographical sketch to encourage others to handle science with care.

"...the practical impossibility of designing experiments to compare two teaching methods"
Stephen M. Ryan
TT Author

Teacher training, all those years ago, included the obligatory History of Methods course, designed to show that methods inspired by a Communicative Approach were far superior to other methods for teaching a foreign language. Various studies (for a more recent overview, see Natsir & Sanjaya, 2014) were cited pointing to both the efficiency and efficacy of this approach compared to others, and I set out on my career inspired with science-infused missionary zeal. My rude awakening came in the form of an article by Sheen (1992), pointing out the practical impossibility of designing experiments to compare two teaching methods. Teachers will teach as they will teach and, even after they receive instruction in how to follow a certain method or approach, teaching is such a complex operation that it is often difficult, in the minute-by-minute unfolding of a lesson, to distinguish between a communicative approach and any other. Given these limits, there was no real way to attribute discernible differences in outcome to whichever method or approach had been prescribed. In other words, the apparent superiority of a particular approach lay in the rather unscientific eye of the beholder.

The frustrations of teaching far from my homeland, with students and colleagues who often behaved in ways that I could not fathom, drove me to seek solace in another field of social science: Intercultural Communication. This field offered clear, research-based distinctions between the ways people in different “cultures” behave. The methodology was fairly transparent: researchers asked people in each of the “cultures” (in fact, countries) how they would behave in certain situations and contrasted the responses. Statistics were run to distinguish “significant” differences from non-significant ones, the former being tabulated in a series of statements about how people in country B differed from those in country A in thought, word, and deed. From these results, theory was generated.

It was only when I dabbled in doing similar research into the expectations about classroom life (How should a “good student” behave? What were the attributes of a “good teacher”? etc.) held by my own students, in Japan, as contrasted with those held by the students in a colleague’s classes in Australia, that I began to understand the compromises involved in comparing “culture A” with “culture B.” What to do about the international students in our classes? Best exclude them, for fear of contamination from “cultures C, D, E, and so on.” Students who had studied abroad? Same logic: exclude them from the sample. Students who had not studied abroad but had spent some time overseas? Well, exclude them if they were there for too long (how long is that?). In the end, we had pretty “pure” samples: no foreigners, returnees, recently arrived immigrants, or people who had taken long vacations overseas. In other words, we had samples unlike any classes we had ever taught in Japan or Australia.

This may seem crude, but the more sophisticated act of “balancing” samples also had its problems. Because we had understood that age, gender, and socioeconomic status were also “cultural” variables that could skew our inter-country results, we wanted to balance our samples in terms of these variables. Rather than eliminate further swathes of our respondents, we used statistical methods to control for the variables. In this, we were imitating the practices of more experienced researchers (e.g., Barnlund, 1988; Matsumoto, 1992), whose methodology can be summarized as: 1) collect data from similar groups of people in the two countries; 2) control for all the variables you can think of; 3) find what differences between the samples are left after that; 4) call these differences “cultural.” This did make me wonder whether the “cultural” results were simply confounding variables the experimenters had not thought to control for. More importantly, it undermined any principled account of why some variables are seen as “cultural” while others are considered “confounding” and need to be controlled for.
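To see how fragile step 4 is, here is a minimal sketch of that logic in Python, using entirely synthetic data. Every variable name here (score, ses, urban) is a hypothetical illustration, not a measure from any of the studies cited: the point is simply that one unmeasured confounder is enough to produce a “significant” country difference where no cultural effect exists.

```python
# A toy sketch of the "control and attribute the remainder" recipe.
# Synthetic data; all variables are hypothetical illustrations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

df = pd.DataFrame({
    "country": rng.choice(["A", "B"], size=n),
    "age": rng.integers(18, 25, size=n),
    "gender": rng.choice(["f", "m"], size=n),
    "ses": rng.normal(0, 1, size=n),   # socioeconomic status
})

# An unmeasured confounder (say, urban vs. rural upbringing) that
# happens to differ between the two national samples...
urban = (df["country"] == "B") * 0.5 + rng.normal(0, 1, size=n)

# ...and actually drives the outcome. "Culture" has no effect here.
df["score"] = 2.0 * urban + 0.1 * df["ses"] + rng.normal(0, 1, size=n)

# Steps 1-3: compare countries while controlling for everything measured.
model = smf.ols("score ~ country + age + gender + ses", data=df).fit()

# Step 4: the country coefficient comes out significant and gets
# labelled "cultural", though it is really the urban/rural difference.
print(model.summary().tables[1])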

Such issues made me rather suspicious of Intercultural Communication research as conducted at the time. (It has to be said that the field has since shifted its focus to investigating actual cases of misunderstanding between people from different backgrounds.) Cross-Cultural Psychology, toward which I found myself gravitating next, has a longer and stronger pedigree than Intercultural Communication, although it shares some of its methodological dilemmas. Its theory-building tends to be based not on a search for cultural essences but, like its parent discipline, Psychology, on a series of case studies of individuals. This seemed to me more rooted in lived experience and less susceptible to the ravages of statistical abstraction, so I started reading research from this field.

What happened next did not so much shake my confidence in Cross-Cultural Psychology as undermine my faith in anything I had ever learned about human psychology. It was a throwaway line in a presentation by a renowned cross-cultural psychologist: If it’s not cross-cultural, he said, it’s not psychology.[1] Self-serving, no doubt (since all his research was cross-cultural), but the line resonated with me and changed my whole perspective on the field. If Psychology is the exploration of the human mind, how can its results be valid if the minds being studied come mainly from a single sub-section of humanity, born of the same gene pool and shaped by very similar experiences? Of course, I am referring to the ever-eager students enrolled in large, Mid-Western (U.S.A.) research universities, participating in psychological experiments for extra credit, whose responses form the core of so much of what we “know” about the human mind (Azar, 2010). Until recently, most of them were men, apparently. (WEIRD! is Azar’s acronym for research subjects from Western, educated, industrialised, rich, democratic societies.)

Then there were the male rats. You know, the rats that find their way through maze after maze to get their motivational cheese, the ones who overcome (or don’t overcome) the obstacles placed in the way by inquisitive research psychologists? Turns out that over 90% of them are male (Becker et al., 2016). So, we may know a lot about the motivation of male rats, but I’m not sure that tells us much about “motivation” in the abstract.

[1] I have struggled to find a source for these words, although my memory of them is very clear. Liebel and Haun (2018) make a similar point.

I’d known for a while that the reliance of some social sciences on self-reported data was pooh-poohed by people in the so-called hard sciences, but this hadn’t stopped me from asking my students to tell me about their own thought-processes, expectations, and values, whether in formal questionnaires or in informal chat sessions. It was only when I started reading reports from Cognitive Science that I understood why this is not a helpful approach.

What’s so wrong with self-reported data? It now seems clear that pretty much all the decisions we make are made unconsciously (Nisbett, 2015). Our unconscious brain works tirelessly to help us navigate the constant stream of decisions about what to do next that constitutes human life. It does not consult “us” (the conscious mind) before deciding to take an extra-deep breath when entering a new situation, dodge to the left to avoid an oncoming hazard, place one lace under the other when tying shoelaces, or check the time in a never-ending meeting. Nor does it consult “us” on what would appear to be really big decisions: when to move on from presentation to practice in a lesson, how to react to the latest imposition from our employer, when to stop what we are doing and spend time with family.

The conscious mind, the voice in each of our heads, the apparently continuous narrative of who we are, what we are doing, and how we got to where we are, is subordinate to the unconscious mind. The conscious mind lies. It fabricates. It works, just as tirelessly as the unconscious mind, to create post hoc rationales for unconscious decisions it has no role in making. So, if one day I present you with a questionnaire asking why you did what you did, or how you think you would react in a given situation, you have only the resources of the mendacious conscious mind to draw on in constructing an answer. Self-report interviews and questionnaires are a great way to investigate the fictions the conscious mind creates. They are of very little help, though, in understanding motivation, decision-making, attitudes, and the like, and yet so much social science research is based on the analysis of self-reports.

It was about this time that I became aware of disillusionment among social scientists themselves. Some striking and original research that had become foundational to whole areas of investigation turned out to be very difficult, if not impossible, to replicate (the so-called Replication Crisis; Diener & Biswas-Diener, 2021). Researchers under pressure to produce results were running similar statistics on their data again and again until they came up with the one run that indicated a significant result, putting aside all the times the same data showed nothing (p-hacking; Head et al., 2015). The generally agreed statistical threshold between randomly obtained results and those that could not be attributed to chance was being questioned (Johnson, 2019). Much of social science is clearly going through a period of self-doubt.
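To make p-hacking concrete, here is a toy simulation, not a reconstruction of any actual study. Two groups are drawn from the same distribution, so there is no real effect; yet a researcher who tries twenty “similar statistics” and keeps the best one will find “significance” most of the time. The twenty alternative analyses are hypothetical stand-ins for different subgroups, outcome measures, or transformations.

```python
# A toy illustration of p-hacking: test the same null data many ways
# and report whichever comparison happens to cross p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, n, tests_per_study = 1000, 50, 20
false_positives = 0

for _ in range(trials):
    # Two groups drawn from the SAME distribution: no real effect.
    a = rng.normal(0, 1, size=(tests_per_study, n))
    b = rng.normal(0, 1, size=(tests_per_study, n))
    # Run 20 "similar statistics" and keep only the best p-value.
    p_values = stats.ttest_ind(a, b, axis=1).pvalue
    if p_values.min() < 0.05:
        false_positives += 1

# With one test, about 5% of null studies "succeed" by chance;
# with 20 tries per study, roughly two thirds do (1 - 0.95**20).
print(f"Null studies reporting significance: {false_positives / trials:.0%}")
```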

Where does all this leave me? What are the lessons of these many cautionary tales? I am certainly no longer a naïve consumer of social science research. I want to know where “the science” is coming from, how the results were obtained, what safeguards were in place against the better-known pitfalls, and, above all, how the community of interested scholars is reacting to the findings. I suggest you might want to do the same. Expressions like “following the science” and “research tells us…” are intended to reassure and to buttress the credibility of the speaker, but it is the credibility of the scientific process that is the key to consumer confidence. When consuming social science research, the same principle applies as when a student tells you her dog ate her homework: keep an open mind, but ask a lot of questions. Verify. Verify. Verify.

[Image from tylervigen.com]

References

  • Sheen, R. (1992). In defense of grammar translation. The Language Teacher, 16(1), 43–45.

Stephen M. Ryan teaches English at Sanyo Gakuen University, in Okayama, and tries to keep current.
