I was writing a midterm the other day, so I figured I'd talk a bit about my test grading policy.
Before getting to the specifics, let me set the stage. I spent most of my career at Stuyvesant, a public magnet school in NYC. There are many great students there who are interested in learning, but there's also a focus on grades, and this leads to a not-insignificant portion of the student body being grade obsessed and willing to do anything and everything for every point possible.
So, it turns out that there were 601 perfect scores on this year's APCS-A exam. Over on Facebook a great question was raised: what does this mean, and should we celebrate it?
What does it mean? There's no way to know. Maybe the number of perfect scores is just scaling up linearly with the number of test takers. Maybe more kids are being exposed to CS prior to APCS-A and that's leading to more correct answers.
Just saw this:
Evaluation metric idea: take snapshots of students' grades each week (specifically, the grade they actually see in your LMS). How well do these correlate with your final assigned grade? Were students getting good estimates?
— Austin Cory Bart (@AustinCorgiBart) May 18, 2019

It made me think of a couple of conversations I had with more senior teachers early in my career.
They'd tell me, "by and large, you know what the kids are going to get after a few weeks."
I tweeted this the other day:
Why do so many teachers and professors not understand that the test or assignment you can do in 15 minutes will take your beginning students at least an hour and probably a lot more to complete?
— Mike Zamansky (@zamansky) April 18, 2018

What led to the tweet was a discussion I was having with some students about not having enough time on tests, which led to a discussion of having to drop everything and spend every waking hour on a project.