Just saw this:
Evaluation metric idea: take snapshots of students' grades each week (specifically, the grade they actually see in your LMS). How well do these correlate with your final assigned grade? Were students getting good estimates?— Austin Cory Bart (@AustinCorgiBart) May 18, 2019
It made me think of a couple of conversations I had with more senior teachers early in my career.
They'd tell me "by and large, you know what the kids are going to get after a few weeks." By and large they were right. Sure, there were some kids who would turn it on midway and end up earning a higher grade, and some who fell off a cliff, but for the most part, you knew pretty early.
This doesn't mean that you don't need assessments along the way - both to inform the student on how they're doing and to inform both teacher and student on how best to proceed for the student's benefit. Of course, sometimes, even when you present some students with their dire situations, they can remain in denial for a remarkable period of time.
At Stuy, the standing grading policy was 2 full period exams each marking period - this meant at least 6 a semester. Add to that a final, which usually counted as two tests, and we had more than enough to drop the lowest grade - a practice followed by many teachers. Depending on the subject, you'd also add in papers, projects, quizzes, homework, participation, and anything else you'd like, weighted in a variety of ways.
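The drop-the-lowest weighted scheme above can be sketched in a few lines. This is just a minimal illustration - the 60/40 category split is invented for the example, not any actual Stuy policy:

```python
def course_grade(tests, other, test_weight=0.6, other_weight=0.4):
    """Weighted course average with the lowest test score dropped.

    tests: test scores (0-100); the single lowest is dropped.
    other: scores from quizzes, homework, projects, etc.
    The 60/40 weighting is a made-up example, not a real policy.
    """
    kept = sorted(tests)[1:]  # drop the lowest test score
    test_avg = sum(kept) / len(kept)
    other_avg = sum(other) / len(other)
    return test_weight * test_avg + other_weight * other_avg
```

So a student with tests of 70, 85, and 90 is graded on the 85 and 90 only, which is exactly why dropping the lowest was popular with students.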
As an interesting aside, I was able to evaluate my senior classes at Stuy entirely on projects - no tests - but not my sophomore classes. Stuy students were so conditioned on exams that they really needed them to keep themselves honest. It took time to wean them off.
I started at Seward Park High School and that school had a similar policy.
In any event, those teachers who told me that you'd know the final grades early on were pretty spot on. A number of times I tried an experiment - for final grades, I'd first write down what I thought the grades would be and then run all the assessments through the weighted average formula. The "guesses" almost always matched. I also compared the final grades to the second marking period grades. I chose the second because the first marking period grades were just letters - E for excellent, S for satisfactory, N for needs improvement, and U. There was some movement, but it was more from my grading up or down by marking period than from a real change - I would grade down in the second marking period and up in the third.
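The comparison described above - how well early grades or guesses track final grades - is the kind of thing Cory's snapshot idea would quantify. A simple way to do it is a Pearson correlation; here's a sketch (the score lists are invented, and I never ran this formally):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores.

    Near 1.0 means early grades (xs) track final grades (ys) closely.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Feed it the second marking period grades as `xs` and the final grades as `ys` and you'd get one number summarizing how predictive the early grades were.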
This isn't to say that Cory's suggestion doesn't make sense, and it also doesn't mean that I or other teachers are fabulous estimators - for all I know, I was biasing some students' subjective grades based on some preconceived notion I wasn't aware of, making the final grade a self-fulfilling prophecy.
In any event, I was able to do this when I was teaching in high school but not, so far, in college. Maybe it's because I've only been at Hunter for three years while I developed my high school chops over decades. Maybe it's because I have far less contact time with the students - two days a week for an hour and fifteen minutes versus five days a week for 43 minutes a shot. It could be that there were more opportunities for assessment in high school due to more contact time. I'm really not sure. Something to ponder further.