#7
The ability of an item to place students into two distinct
groups is not part of the mathematical model developed in the past few posts.
Discrimination ability, however, provides insight into how the model works. A
practical standardized test must spread student scores out enough to
assign the desired rankings, and discriminating items produce that spread.
Current CCSS multiple-choice standardized test scoring only ranks students; it
does not tell us what a student actually knows that is useful and meaningful to the
student as the basis for further learning and effective instruction. That can
be done with Knowledge and Judgment Scoring
and the partial-credit Rasch IRT model using the very same tests. This post
uses traditional scoring, as it simplifies the analysis (and the model) to just
right and wrong; no judgment or higher levels of thinking are required of
students.

The item error mean sum of squares (MSS), the variance, for the
test with an average score of 8 was 1.83. I then adjusted the MSS for the other
three grades to match this value, exchanging a right and a wrong mark within a
student mark pattern (row) for each adjustment (Table 27). I stopped at 1.85,
1.85, 1.83, and 1.83 for grades 5, 6, 7, and 8. (This forced the KR20 = 0.495
and the SEM = 1.36 to remain the same for all four sets.)
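Both statistics can be computed directly from a 0/1 mark matrix. A minimal sketch in Python, using a small hypothetical data set (not the Table 27 data):

```python
import math

# Hypothetical 0/1 mark matrix: rows are students, columns are items.
# (Illustrative data only, not the grade data from the post.)
marks = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]
n = len(marks)          # students
k = len(marks[0])       # items

# Item difficulty p (proportion right) and the sum of item variances p*q.
p = [sum(row[j] for row in marks) / n for j in range(k)]
sum_pq = sum(pj * (1 - pj) for pj in p)

# Total-score variance (population form, as a spreadsheet MSSn would give).
scores = [sum(row) for row in marks]
mean = sum(scores) / n
var_total = sum((s - mean) ** 2 for s in scores) / n

# KR20 reliability and the standard error of measurement (SEM).
kr20 = (k / (k - 1)) * (1 - sum_pq / var_total)
sem = math.sqrt(var_total) * math.sqrt(1 - kr20)
print(round(kr20, 3), round(sem, 2))  # 0.917 0.99
```

Exchanging a right and a wrong mark within a row, as described above, changes `sum_pq` (and usually `var_total`), which is why it can be used to tune the MSS.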


The question is often asked, “Do tests have to be designed
for an average score of 50%?” If
the SD remains the same, I found no difference in the KR20 or SEM. [The
observed SD is ignored by the Rasch IRT model used by many states for test
analysis.]
The maximum item discrimination value of 0.64 was always
associated with an item mark pattern in which all the right marks and all the wrong
marks fell into two groups, with no mixing of right and wrong marks. I loaded a
perfect Guttman mark pattern and found that 0.64 was the maximum corrected
value for a data set of this size. (The corrected values are better estimates than
the uncorrected values in a small data set.)
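The corrected discrimination here is the point-biserial correlation between an item column and each student's score with that item removed. A minimal Python sketch, using a small hypothetical perfect Guttman pattern (not the data set above), shows that even a perfect pattern caps out well below 1.0, and that the cap depends on the size of the data set:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# Hypothetical perfect Guttman pattern: 6 students, 5 items.
marks = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
totals = [sum(row) for row in marks]

# Corrected discrimination: correlate each item with the rest-score
# (the student's total minus that item's own mark).
disc = []
for j in range(len(marks[0])):
    item = [row[j] for row in marks]
    rest = [t - i for t, i in zip(totals, item)]
    disc.append(round(pearson(item, rest), 2))
print(disc)  # [0.5, 0.7, 0.77, 0.7, 0.5]
```

The middle-difficulty item carries the highest corrected value, and even it falls well short of 1.0 on a perfect pattern.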
Items of equal difficulty can have very different
discrimination values. In Table 26, three items have a difficulty of 7 right
marks, yet their corrected discrimination values ranged from 0.34 to 0.58.
Psychometricians have solved the problem this creates in
estimating test reliability by deleting an item and recalculating the test
reliability, which isolates the effect of any one item in a test. The VESEngine (free download
below) includes this feature as the Test Reliability (TR) toggle button. Test
reliability (KR20) and item discrimination (PBR) are interdependent on student
and item performance: a change in one usually results in a change in one or
more of the other factors. [Student ability and item difficulty are considered
independent in Rasch model IRT analysis.] {I have yet to determine whether
comparing CTT to IRT is a case of comparing apples to apples, apples to oranges,
or apples to cider.}
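The delete-and-recalculate idea can be sketched in a few lines of Python (hypothetical data, not VESEngine output): recompute KR20 with each item removed and compare the result to the full-test value.

```python
def kr20(marks):
    """KR20 reliability for a 0/1 mark matrix (population variance)."""
    n, k = len(marks), len(marks[0])
    p = [sum(row[j] for row in marks) / n for j in range(k)]
    sum_pq = sum(pj * (1 - pj) for pj in p)
    scores = [sum(row) for row in marks]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    return (k / (k - 1)) * (1 - sum_pq / var)

# Hypothetical perfect Guttman pattern: 6 students, 5 items.
marks = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]

print("full test:", round(kr20(marks), 3))  # 0.833
for j in range(len(marks[0])):
    reduced = [[v for c, v in enumerate(row) if c != j] for row in marks]
    print("without item", j, "->", round(kr20(reduced), 3))
```

Deleting the best-discriminating middle item lowers KR20 the most (to 0.756 here), while deleting an extreme item leaves it essentially unchanged, illustrating the interdependence of KR20 and item discrimination described above.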

Traditional classical test theory (CTT) analysis can
work with a range of average test scores. In this example there was no difference in
the analysis with average test scores from 5 right (45%) to 8 right (73%).
Rasch model item response theory (IRT) analysis instead
transforms normal counts into logits, which have only one reference point, 50%
right (zero logits), when student ability and item difficulty are positioned on one
common scale. This point is then extended in either direction, from zero
to 100% (-50% to +50%), by values that represent equal student ability and
item difficulty (50% right). This scale ignores the
observed item discrimination.
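The logit transform behind that one-reference-point scale is simple to state: a percent right p becomes ln(p / (1 - p)), so 50% maps to zero logits and the scale extends symmetrically in both directions. A quick sketch:

```python
import math

def logit(p):
    """Natural log of the odds: ln(p / (1 - p))."""
    return math.log(p / (1 - p))

# 50% right sits at zero logits; equal steps above and below 50%
# map to symmetric positive and negative logit values.
for pct in (0.25, 0.50, 0.75):
    print(f"{pct:.0%} -> {logit(pct):+.2f} logits")
```

Note that equal distances in logits represent equal odds ratios, not equal percent differences, which is one reason CTT and Rasch IRT results are not directly interchangeable.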
- - - - - - - - - - - - - - - - - - - - -
The Best of the Blog - FREE
The Visual Education Statistics Engine (VESEngine) presents
the common education statistics on one traditional two-dimensional Excel
spreadsheet. The post includes definitions. Download
as .xlsm or .xls.
This blog started five years ago. It has meandered through
several views. The current project is visualizing
the VESEngine in three dimensions: the observed student mark patterns (on their
answer sheets) are on one level, and the variation in the mark patterns (variance)
is on the second level.
Power Up Plus (PUP) is classroom-friendly software used to
score and analyze what students guess (traditional multiple-choice) and what
they report as the basis for further learning and instruction (Knowledge and
Judgment Scoring multiple-choice). This is a quick way to update your
multiple-choice tests to meet Common Core State Standards (promoting understanding as
well as rote memory). Knowledge and Judgment Scoring originated as a classroom
project, starting in 1980, that converted passive pupils into self-correcting,
highly successful achievers in two to nine months. Download as .xlsm or .xls.