4 Comments
James Cantonwine:

I think this misses a few key points about state tests. I'll caveat this with a note that state assessment systems vary; however, many states use the same vendors for similar assessments under different names. My own experience is as a district assessment coordinator in Washington State.

1. Preliminary results are available to school and district staff FAR earlier than they are made public, and the official results are rarely any different from the preliminary ones. For that reason, my district now sends preliminary score reports home at the end of the school year. Reasons for the public delays at the state level include staff shortages within data departments, discrepancies in determining the accountability site for highly mobile students, and disputes over individual student scores. (I'm sure there are others I don't see from a district point of view.)

2. State tests are not always measuring the same constructs as other assessments like MAP and NAEP, which feeds into the varied testing times. If the state is assessing English Language Arts, the student will need to produce some amount of writing. Assessments that only measure reading can be shorter and are easier to machine-score. Something similar happens with math tests designed to elicit evidence of the student's thinking and reasoning. Whether we should measure these things is an open question: MAP is highly correlated with state test results, and maybe we don't need to assess writing, too.

3. State tests aren't necessarily any longer than tests like MAP or NAEP; it just looks like they are. First, assessment developers report the "typical" time a test takes, but teachers are affected by the time taken by the slowest student, which is generally much longer than vendors will share. Second, weeks-long testing windows show when the assessment could happen, not when it did happen. When testing does drag on too long, which it often does, it's frequently a result of the perceived stakes of the assessment: staff may feel an incentive to stretch out the big accountability measure to eke out any last score improvements.

Chad Aldeman:

Hi James, I agree with a lot of this. On #1, I think that should be standard practice! Kudos to you for sharing the preliminary results with parents. On #2 and #3, I totally understand that different question formats might take longer to administer and to score; I've made similar points before. But the results could still come much faster than they do now, especially if we agree on releasing preliminary results as soon as they're ready. See more of my thinking on this here: https://www.chadaldeman.com/p/the-state-assessment-game-of-telephone

James Cantonwine:

"Faux accountability" is definitely a real occurrence, especially here in WA where the main consequence of low proficiency rates will be bad press. It still leads to this interesting situation where teachers complain about the time it takes students to complete MAP or i-Ready while wanting to further extend time for state tests. (Through-year assessments addresses both issues.)

Do you think preliminary results should be shared with the broader public and media? I'm of two minds on that. My state's data shop isn't staffed or funded to handle that, though I would love for that to change.

Chad Aldeman:

Hi James, my focus is on making results (preliminary or otherwise) available to families and educators as quickly as possible.

I do think the speed of the final results matters too, but that's for slightly wonkier technical reasons, like school improvement planning.

In other words, processing the preliminary, individual results as quickly as possible would be my top priority.
