Friday, July 9, 2010

TAKS Qualms - Part 2

The last blog reported that the passing rate on the TAKS Social Studies Grade 8 tests for 2003 and 2004 had been changed. That was a far greater deviation than the one between 2009 and 2010 that Ericka Mellon reported with concern in the Houston Chronicle.

Further study revealed that the passing rates on all four subjects (English Language Arts, Mathematics, Science, and Social Studies) were changed on the Grade 10 tests for 2003 and 2004. Texas allowed its Rasch One Parameter IRT (ROPIRT) to roam the open range for two years.
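
For readers who have not met the model behind the acronym, the Rasch one-parameter formula itself is simple: the probability of a right answer depends only on the gap between a student's ability and an item's difficulty. A minimal sketch in Python, with made-up numbers for illustration (these are not Texas parameters):

    import math

    def rasch_p(theta, b):
        """Rasch one-parameter IRT: probability that a student of
        ability theta answers an item of difficulty b correctly."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Illustrative values only, not actual TAKS item parameters:
    # an average student (theta = 0) meets an easy, a medium, and a hard item.
    for b in (-1.0, 0.0, 1.0):
        print(f"difficulty {b:+.1f}: P(correct) = {rasch_p(0.0, b):.2f}")

Everything beyond that formula, including where the cut score lands on the ability scale, is a human decision, which is why the same student data can yield different passing rates.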

In mathematics and science it returned impressive passing rates the first year and lower ones the second year. Without the changes, it would have taken seven years for the passing rates to exceed the initial 2003 benchmark values. That just did not look right.

By 2006 Texas had fenced in its ROPIRT. It thereafter produced values that looked right to education officials. Once the passing rates for 2003 and 2004 were changed, the resulting curves looked very right: slow, continued progress toward an impossible goal of a 100% passing rate by 2014.
The largest change was in science: 69% passing in 2003 was changed to 42%, a change of 27 percentage points or a lowering of the original figure by 39%.
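
Since percentage points and percent are easy to conflate, here is that arithmetic checked in Python, using only the two figures quoted above:

    # Science, Grade 10: 2003 passing rate as first reported vs. as later revised.
    original, revised = 69.0, 42.0

    point_drop = original - revised                 # difference in percentage points
    relative_drop = point_drop / original * 100.0   # drop as a percent of the original

    print(f"{point_drop:.0f} percentage points")            # 27 percentage points
    print(f"{relative_drop:.0f}% of the original figure")   # 39% of the original figure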

IMHO what we are seeing here is the result of learning to use a new statistical tool that many would like to believe is a “standard statistical process”. It generates the numbers the states believed the federal government wanted. Secretary Duncan now characterizes the results as “lying to our students”.

Unlike Knowledge and Judgment Scoring (KJS), which assesses at all levels of thinking, and Confidence Based Learning (CBL), which assesses at the mastery level, the Texas ROPIRT has been fed right count scoring (RCS) data at the lowest level of thinking. The emphasis in assessment has changed from earning high scores to justifying the lowest cut score. With KJS and CBL, the emphasis is on producing self-correcting, high-quality achievers who would in general find these tests “a waste of time”.
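
To make the scoring contrast concrete, here is a minimal sketch. The partial credit for an omit is an illustrative assumption on my part, not the published KJS or CBL rubric; the point is only that judgment-based scoring rewards knowing what you do not know, while right count scoring does not:

    RIGHT, WRONG, OMIT = "R", "W", "O"

    def right_count_score(marks):
        """Right count scoring (RCS): only right answers earn credit."""
        return sum(1 for m in marks if m == RIGHT) / len(marks)

    def judgment_score(marks, omit_credit=0.5):
        """Judgment-based sketch: omitting when unsure earns partial
        credit (omit_credit is an assumed weight, not the KJS value)."""
        points = {RIGHT: 1.0, OMIT: omit_credit, WRONG: 0.0}
        return sum(points[m] for m in marks) / len(marks)

    marks = [RIGHT, RIGHT, OMIT, WRONG, RIGHT, OMIT, WRONG, RIGHT]
    print(f"RCS:      {right_count_score(marks):.1%}")   # 50.0%
    print(f"Judgment: {judgment_score(marks):.1%}")      # 62.5%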

The one striking observation in all four charts is that the average percent test score, in general, shows a gradual increase from year to year. If these tests were of comparable difficulty (with unchanging cut scores, they must be of comparable difficulty for 2005-2009), student performance on these tests was increasing prior to 2010.

As it stands, the English Language Arts and Social Studies tests are performing at the mastery level, with average test scores above 80% and passing rates at 90% and above. These tests now function as checklists of what experts in these fields consider necessary. Quibbling over a few points on test scores must now give way to serious concern about the ability of these tests to detect which students will succeed in future schooling and on the job. Passing the test must be meaningful in the real world as well as in the edu-politic-money games currently being played.

We do not yet know how the ROPIRT does its work, but we can observe its behavior. Texas has seen three periods of different behavior: 2003-2004, when wild passing rates were later tamed to look right; 2005-2009, when the average test score and the cut scores changed in unison; and 2010, when all four tests showed increased passing rates even though the 2010 average test scores were higher than in 2009 on one test, the same on one test, and lower on two tests.

And the rate of change in passing rates was more than twice that of the former years in which the results looked right. The change was all in the same direction: up.

1.  The difference in behavior of the Texas ROPIRT model in 2010 was due to:

    a.  political influence.
    b.  Texas losing control again.
    c.  student performance.
    d.  all of the above.
    e.  none of the above.

Please be able to support your answer from your own experience or with information from trusted sources. (Good judgment is to omit if you cannot trust your mark to be right.)