It’s NAEP season, my friends. The 2015 National Assessment of Educational Progress results were released this week to a barrage of spin, rhetoric, and general “misNAEPery.” I’ve mostly seen this misNAEPery pop up in the form of certain folks using the data to argue that education reform efforts aren’t working. (For now, we’ll ignore the crushing irony of using test scores to prove that testing isn’t valuable.) That’s a bummer, so let’s spend a few minutes today talking about what this year’s results do and do not mean.
First, let’s talk briefly about the results themselves. Chalkbeat ran a pretty good piece on Colorado’s 2015 NAEP scores that included some nifty graphs. Nifty or not, however, I take some issue with the graphs’ reliance on the percentage of kids scoring proficient or better rather than on scale scores. Not that I blame Chalkbeat for going this route; graphs showing what appears to be meaningful change are a lot more exciting than what you get when you look only at scale scores over the past ten years. Those graphs look like this: