In the most recent back-and-forth in the debate over this year’s budget, I’ve seen a number of claims advertised about our school district. One such claim is the “fact” that, since the implementation of full day kindergarten three years ago, our reading scores have doubled. This is offered as a reason to support the proposed pre-k initiative. Apparently the line of thinking goes: if full day kindergarten worked to improve test scores, then pre-k will improve them even more. Setting aside for now some of the basic problems I have with this reasoning, the issue of early childhood reading scores perfectly highlights the problem with using isolated test data as a basis for arguing for, or against, programming.
Full day kindergarten in Windsor began in the 2012-2013 school year. It came about after a two-year period of research and discussion on how to reorganize our elementary schools so that they could accommodate full day kindergarten. Since its implementation, increased reading scores have been touted as proof that the move to full day has worked to better educate our children, particularly in the area of literacy. However, a deeper look at the test scores paints a much more complicated picture. Comparing the 2011-2012 group of kindergarteners to the 2012-2013 group certainly shows significant improvement in DRA, or Developmental Reading Assessment, scores. However, in year two of full day kindergarten, scores for our 2013-2014 kindergarten students dipped below those of the kindergarteners in the first year of implementation. Following the cohort of kindergarten students from 2012-2013, we would expect their reading scores in the first grade to be higher than those of the first graders from the prior year, because they had the benefit of full day kindergarten. Yet at Poquonock Elementary School, first grade DRA scores actually dropped from the prior year, meaning that the first graders in 2012-2013, who did not have the benefit of full day kindergarten, outperformed the first graders in 2013-2014, who did.

Perhaps even more confusing are the historical score trends prior to the implementation of full day kindergarten. For instance, DRA scores for the second grade class of 2010-2011 nearly doubled over the second grade class of 2009-2010. This had nothing to do with full day kindergarten, which had not yet been implemented, and yet the data shows a significant increase in scores. A similar, although less dramatic, increase can be seen in comparing the first grade scores from 2008-2009 with those from 2009-2010.
I believe it’s reasonable to expect that moving to full day kindergarten would improve the year-end DRA scores of our kindergarten classes over what our students achieved with only a half day of kindergarten, and the test scores appear to show this. But what they don’t show is what effect full day kindergarten will have on reading scores in the long term. In fact, some of the data seems to show that it did not have a positive impact. The data alone also does not explain why we saw gains in reading scores in years prior to the implementation of full day kindergarten, which raises the question: can we attribute all of the gains we are seeing now to full day kindergarten alone, and if not, what are the other factors? Perhaps more salient to the current discussion: if other factors can be isolated, what impact would they have on our need for pre-k and how it is implemented?
The overarching point of this, to me, is that there is both danger and inaccuracy in using test score data alone as a basis to argue for or against program implementation. While standardized tests can be a useful tool in evaluating the success or failure of an initiative, isolating data points outside of a historical context, or failing to look at all relevant cohorts, allows data to be distorted to fit a particular narrative. Put simply, “there are three kinds of lies: lies, damn lies, and statistics.”