Last school year, my principal explained to me that she wanted to do practice prompts in the computer lab to help students prepare for the AIR test. This school year is the first where students have to take the test on a computer; in previous years that was optional, and we always went with pencil and paper. So now we have to make sure students can transfer their writing skills from paper and pencil to keyboard and screen.
On the AIR test, students will have to read a passage, read a question about the passage, and answer the question with a typed response.
Not gonna lie, I was super duper dreading it at first. It sounded like the opposite of fun. But when you don’t have a choice about what you have to do, you still have a choice of how to do it, so I went with cheerfully. And if I was going to do that, I was going to take ownership of this whole thing as well.
We collaborated with other teachers on a rubric to use, and settled on one with thirteen points, spread across three areas: Organization, Evidence & Elaboration, and Conventions. We looked at the calendar and selected multiple dates to cycle this activity; that way we can make sure students continue to improve, instead of treating it like a one-and-done. I figured out the resources that worked best to suit my needs, my students’ needs, and my admin’s needs: a combination of Google Classroom and Edcite.com. My principal picked some passages and I wrote questions for them. And then, over the course of a week, my students came in and took their “pretend” test. Despite knowing it was “pretend,” they took it quite seriously overall. We gave them a paper copy of the rubric, so they could do pre-writing on the blank side and use the rubric checklist on the other.
Then I assessed all their responses against the rubric.
Then I printed all their graded responses out, and stapled them to the rubric sheets they used during the activity. My principal wants to share these with homebase teachers during TBTs.
That was all I needed to do. But it wasn’t all I wanted to do.
I wanted to see where students succeeded and where they failed in the task so I could plan future instruction around it. And since I was keeping a spreadsheet of their scores anyway, I just took it a little further.
So I made a horizontal representation of all the rubric criteria. Then I set some conditional formatting on the field: pink for an empty cell, blue for a non-empty cell. The non-empty cells also turn their contents the same shade of blue, which I just thought was easier to understand visually. Anyway, if a student got that point on the rubric, I typed in a “1,” turning the cell all blue. If they didn’t, then I left it blank – pink. I went across each student’s whole row like that. Column T was a double-check: a SUM formula adding up all those invisible 1s. If that number was different from the number in column F (copied and pasted from Edcite reports), then I knew I needed to double-check something.
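If you want to see that row logic outside of a spreadsheet, here’s a rough Python sketch. The criterion names and the example student are made up for illustration; in the real sheet, column T is just a SUM formula across the rubric columns.

```python
# Sketch of one student's row: 13 rubric criteria, each worth 0 or 1 point.
CRITERIA = [f"criterion_{i}" for i in range(1, 14)]  # 13 rubric points

def row_total(row):
    """Mimic column T: add up the invisible 1s across a student's row."""
    return sum(row.get(c, 0) for c in CRITERIA)

def check_row(row, edcite_score):
    """Mimic the double-check: does my hand-entered total match the
    score copied in from the Edcite report (column F)?"""
    return row_total(row) == edcite_score

# Example: a (hypothetical) student who earned 5 of the 13 points
student = {c: 1 for c in CRITERIA[:5]}
print(row_total(student))      # 5
print(check_row(student, 5))   # True -> totals agree
print(check_row(student, 6))   # False -> something to double-check
```

Same idea, different tool: blank cells count as zero, and any mismatch between the two totals means a typo somewhere.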
Then I took it another step further. The remaining columns toward the right are goalposts for students to reach in the next prompt activities. For example, a student who got 5 of 13 points on the first prompt needs to get 7 points on the next one to stay on track to be considered proficient overall. (We’re aiming for 10/13 for everybody.) I didn’t just pick those numbers out of thin air, either. I used a formula that sets reasonable growth expectations. That kid who got 5 points this time? It’s not reasonable to expect them to get 13 on the next try. But 7? That’s doable. But if they remain at 5, or worse, dip lower, then I know that kid might need further intervention to succeed. And I can start that intervention in November instead of February.
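For the curious, here’s one plausible Python version of that kind of growth formula. To be clear, the specific numbers here are assumptions I’ve labeled in the code – a goal of 10/13, three prompts left in the cycle, and a cap on how much growth per prompt is reasonable – so the exact cutoffs depend on those choices.

```python
import math

GOAL = 10          # proficiency target out of 13 rubric points
PROMPTS_LEFT = 3   # assumed number of prompt activities still to come
MAX_GAIN = 2       # assumed cap on reasonable growth per prompt

def next_target(current, goal=GOAL, prompts_left=PROMPTS_LEFT):
    """Target for the next prompt: spread the remaining gap evenly
    across the prompts left, rounding up, but never asking for more
    than MAX_GAIN points of growth at once."""
    if current >= goal:
        return current  # already on track; hold the line
    step = math.ceil((goal - current) / prompts_left)
    return current + min(step, MAX_GAIN)

def can_reach(current, goal=GOAL, prompts_left=PROMPTS_LEFT):
    """Can this student still reach the goal at the capped growth rate?"""
    return current + MAX_GAIN * prompts_left >= goal

print(next_target(5))  # 5 + ceil(5/3) = 7, the example from the post
print(can_reach(5))    # 5 + 2*3 = 11 >= 10 -> True
print(can_reach(2))    # 2 + 2*3 = 8 < 10  -> False, flag for intervention now
```

The cap is what makes the “intervene now” group fall out automatically: some starting scores just can’t climb to the goal at a reasonable rate in the prompts that remain.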
Kids who got 2 or fewer on the first task, though – by my formula, they won’t reach 10/13 points by the end. They need intervention now. Edited to add: My principal points out that, even if they don’t reach the goal of 10/13, a student who goes from a 0, 1, or 2 to a 7, an 8, or a 9 has still made incredible growth that merits celebration.
Then, as I changed the view a few times, I realized that many kids were missing the same criteria. Not all, but many. So I wondered: which criteria are the most commonly missed?
I scrolled to the bottom of the data and, under each of the columns, I entered a SUM formula that added up all the invisible 1s in that column. I was really glad I’d used 1s instead of Xs in that moment! Once the sums were in, I looked at which criteria had the lowest numbers. So that highlighted 13 down there? That means only 13 out of over 100 third graders wrote a closing sentence in their response. (The pointer was in a different cell when I took the screenshot. 72 students used evidence from the passage and/or other sources.) So closing sentences are a weakness for most of our grade, but using evidence from the passage is a strength. I can use this information to plan my instruction, and I can share it out with other teachers so they can plan their own instruction and provide guidance and support.
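The column-sum step works the same way as the row sums, just turned sideways. Here’s a tiny Python sketch with invented criterion names and a made-up three-student class, to show why the lowest totals point at the most commonly missed criteria:

```python
# Each dict is one (invented) student's row: 1 = earned the point, 0 = missed it.
scores = [
    {"closing_sentence": 0, "uses_evidence": 1},
    {"closing_sentence": 1, "uses_evidence": 1},
    {"closing_sentence": 0, "uses_evidence": 0},
]

def criterion_totals(rows):
    """Mimic the SUM formula under each column: count how many
    students earned each rubric point."""
    totals = {}
    for row in rows:
        for criterion, point in row.items():
            totals[criterion] = totals.get(criterion, 0) + point
    return totals

totals = criterion_totals(scores)
print(totals)                        # {'closing_sentence': 1, 'uses_evidence': 2}
print(min(totals, key=totals.get))   # 'closing_sentence' -> most commonly missed
```

In my real sheet there are 13 of these columns and over 100 rows, but the idea is identical: smallest sum, biggest instructional need.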
So that’s the teacher side of my current spreadsheet mania. Tune in tomorrow to find out how I’m delivering feedback to students!