Date of Conferral

2019

Degree

Doctor of Education (Ed.D.)

School

Education

Advisor

Jennifer Seymour

Abstract

Educators in a rural charter middle school in the United States were challenged to assess student thinking skills reliably, even though the development of higher order thinking was an espoused goal of the school. The purpose of this study was to validate a new rubric based on Bloom’s Revised Taxonomy (BRT) to reliably assess students’ levels of thinking as reflected in their written work. A quantitative, nonexperimental design was used. The research questions focused on the BRT rubric’s reliability and validity. Interrater reliability was assessed using Krippendorff’s alpha. Validity was explored by assessing the relationship between the BRT scores collected in this study and the original teacher scores of students’ archived writing samples; reliable but unrelated scores would have suggested that the two processes were measuring different constructs. A convenience sample of 8 volunteer teachers scored papers using the new BRT rubric; each teacher scored 52 writing samples, 2 each from 26 students in the 7th grade. The Spearman correlation coefficient between the BRT scores and the original teachers’ scores was not statistically significant, so the teachers’ original scores could not validate the BRT as a measuring tool. The BRT measure also failed to demonstrate evidence of reliability (Krippendorff’s α = .05). A position paper was created to present the results of this study and to explore possibilities for improving the assessment of thinking. Positive social change may be encouraged by the use of a reliable and valid scoring process to quantify levels of thinking. Such a scoring process could lead to more balanced curricula, instruction, and assessment, ultimately providing a basis for customized student learning experiences.
