Who’s Testing Really for, Anyways?

By: Julia Daley

Early in my journey to become a teacher, back when I was an undergraduate student, my old junior high school graciously allowed me to fulfill my practicum hours and observe classes there. I was able to observe the 7th and 8th grade English Language Arts classes taught by the same teachers who had taught me when I was 13 and 14 years old. For two weeks, I observed the gifted and honors students in these classes as they went about their days learning English. To thank the teachers for their time, I helped with grading, making copies, and squashing the occasional uninvited scorpion.[1] The most memorable experience from this practicum came when I was grading a set of quizzes—a memory that has been quite formative in my educational career ever since.

[1] Teaching in Arizona has some unique challenges! Fortunately, students are trained from a young age to call for an adult if they see a scorpion and to watch it safely, from a distance, until it can be killed. It’s not uncommon to find one in a classroom once a week or so, depending on the school’s location.

The exact content of the quiz escapes me now—perhaps something to do with prepositional phrases—but it was directly based on a lecture and worksheet the students had completed the day before. Four of the five classes listened to the entirety of the lecture, which contained information on an “exception to the rule” that the worksheet afterwards did not cover. One of the classes, however, missed that key slide in the teacher’s PowerPoint (again, my apologies, for my memory is faulty as to the why and how, since it was so long ago). We did not notice this at the time, but, as I graded the quizzes, I noticed that every single student in that one class missed the same question. I kid you not, all thirty-odd students missed it entirely, so not one of them had a perfect score in that class. The other classes had, for the most part, gotten that question correct.

Initially I was surprised, but as I thought back to the previous day, I realized that they had missed learning about the concept that question had assessed. I brought up this realization with the teacher during our reflective meeting later that day. I felt a bit awkward doing it, as someone who wasn’t yet a certified teacher; it wasn’t as if I had the standing to critique a teacher with decades of experience. Yet the teacher dismissed me outright. “The students just didn’t learn it properly,” she said.

That conversation has always stayed with me: that knee-jerk reaction of blaming the students instead of taking a moment to reflect on her own teaching. I tried as gently as I could to say, “But don’t you think it’s strange that every student in that class got this one question wrong, even the smartest ones?” and got nothing for my efforts. There was nothing strange about it; let’s go eat lunch. And that was that. The students from that class all kept their 9/10 grades on that quiz; they never went back and learned the exception they’d missed; and I never saw them again, as my practicum finished right after.

It’s more than a decade later, and I’m still bothered by that situation. It haunts me whenever I am grading or making an assessment. Are these grades really accurate reflections of what my students have learned? Are these questions the right ones for measuring these particular learning outcomes? Is this question too easy or too hard? How much weight should this assessment have? Do I even need to assess this? Why do we even have tests, assessments, and quizzes?

Why do we even have tests, assessments, and quizzes? That’s the hardest question of all.

How much of our lives are shaped by our test results, starting from early childhood[2]? Just in reflecting on my own life, I’ve realized there were so many milestones that depended on acing tests. That gifted program I joined in elementary school required me to get a high score on a placement exam at 8 years old; the scholarship I received to attend university was dependent on a high score on another test; which universities I could attend were also influenced by my scores on two other tests, the ACT and SAT; graduating with my bachelor’s in four years was facilitated by the early college credits I earned in high school by passing various AP tests with high-enough scores; getting my teaching license required passing more tests; keeping my teaching job and having positive teaching evaluations required my students to do well on tests. Tests can have huge impacts on students (and teachers) for the rest of their lives. Talk about high stakes!

[2] Explicit language warning in advance of this video by John Oliver on Standardized Testing. According to a clip he shares, the average American student will take around 113 different standardized tests by their high school graduation.

It’s no wonder that so many students struggle with test anxiety. When the value of their entire lives, their hopes and dreams, their very worth to society is boiled down to a single percentage score on a test, why wouldn’t students be feeling anxious? And if you, fellow teacher, are thinking “wait a second, my assessments don’t have such high stakes, what does any of this have to do with me?” well, think again! Have you ever impressed upon students the seriousness of your assessments? Do the final grades in your course rely heavily on your students’ assessment results? Does success in your course influence a student’s access to future classes? If you answer “yes” to any of the above, then your assessments have an influence on your students’ lives, too, and could well be contributing towards their anxiety.

This is not to say that tests and assessments are unimportant or unnecessary—as tools for measuring learning outcomes and topic mastery, they are essential! A certain amount of stress, too, can be healthy for us all. But how we use assessments matters. As educators, I believe we need to maintain skepticism towards our assessments—we need to question them, test them, and reflect on them regularly. I would hate to one day become a teacher who does not once doubt the validity or reliability of the assessments I have made.

I’m not consistent at it yet, but I’ve been trying to do more item analyses of my multiple-choice vocabulary tests. This is much easier to do now that I distribute my tests on an online platform, as I can easily convert the data into an Excel spreadsheet and crunch the numbers. I am also using rubrics that are aligned with the CEFR and IELTS standards to assess my students’ writing and speaking abilities. I make sure that I grade as fairly as I can, in the same state of mind for all students—if I’m feeling pressured or stressed, I take a break to calm myself down before I examine the next student’s work. I engage in reflective teaching and use students’ results as a chance to gauge the effectiveness of the assessed unit’s activities. If my students aren’t performing as I expected, my first reaction is to question where my teaching went wrong, not where their studying went wrong.
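For readers curious what “crunching the numbers” can look like, here is a minimal sketch of a classical item analysis in Python, using made-up response data (the spreadsheet columns, group split, and thresholds are my own illustrative choices, not the author’s actual workflow). For each question it computes the standard difficulty index (the proportion of students who answered correctly) and a simple discrimination index (how much better the top-scoring half of the class did on that item than the bottom-scoring half). A question that every student in one class misses, like the one in the story above, would show up immediately as a difficulty of 0.00.

```python
# Each row is one student's quiz results: 1 = correct, 0 = incorrect.
# (Hypothetical data; in practice this would be exported from the test platform.)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]

# Rank students by total score, then split into top and bottom halves.
order = sorted(range(len(responses)), key=lambda i: totals[i], reverse=True)
half = len(order) // 2
top, bottom = order[:half], order[-half:]

num_items = len(responses[0])
for q in range(num_items):
    # Difficulty index: proportion correct, from 0.0 (very hard) to 1.0 (very easy).
    difficulty = sum(row[q] for row in responses) / len(responses)
    # Discrimination index: top-group correct rate minus bottom-group correct rate.
    # Near-zero or negative values flag items worth a second look.
    disc = (sum(responses[i][q] for i in top) / half
            - sum(responses[i][q] for i in bottom) / half)
    print(f"Q{q+1}: difficulty={difficulty:.2f}, discrimination={disc:+.2f}")
```

The same two indices can of course be computed with Excel formulas; the point is only that a few summary numbers per item make patterns like “one whole class missed Q3” impossible to overlook.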

But that’s what I have been getting out of these assessments. What about my students? After all, it’s their learning that we care about, isn’t it?

One way in which I’m trying to make my assessments more “student focused” is by implementing more diagnostics, or formative assessments, with my writing students; I inform students that as long as they complete the objectives of the task, they’ll get 100% of the points.[3] This guarantee of a certain score takes a lot of the pressure off the task and lets my students focus on showing me what they can do. When the results are returned to them, I include a rubric showing them their current writing level and arrange one-on-one meetings with them so that we can have a dialogue about their English writing. Taking the “grade” away from the “results” seems to allow my students to analyze their own writing abilities more objectively. As they work towards their final papers (or summative assessments), they always produce two drafts—the drafts are always worth 100% (again, as long as the student achieves the minimum objectives of the task). For most of my students, the effort they put into their revisions results in polished final writings that they can be satisfied with. Many of my students have been quite surprised with how much quality English writing they can actually produce!

[3] You can learn more about this style of grading, known as “ungrading,” in next month’s issue. Stay tuned!

I am by no means a perfect English teacher, and I’m sure there’s more I could be doing to make my assessments a better reflection of my students’ abilities. I’ve certainly found inspiration in the other articles in this month’s issue. They’ve reminded me that teaching is as much a science as it is an art, and there’s no one-size-fits-all assessment that will work perfectly with every student in every context. I’ll just have to do my best to keep muddling on, learning from my students as much as (I hope) they’re learning from me.

Julia Daley is a lecturer at Hiroshima Bunkyo University, where she teaches English conversation and writing. She earned her MA in TESL at Northern Arizona University and is certified to teach secondary English in Arizona. When she isn’t grading student writing, she’s working on updating and revising tests as part of her department’s assessment committee.