
Iterations of an Assessment Rubric

“To understand is to be able to wisely and effectively use—transfer— what we know, in context; to apply knowledge and skill effectively, in realistic tasks and settings” (Wiggins & McTighe, 2005, p. 7). An authentic performance task is one way to assess students’ ability to transfer what they know (Wiggins & McTighe, 2005). It only makes sense, then, that one assignment in my Assessment class is to create a rubric by which to judge other assessment instruments.

I began this assignment two weeks ago, with Rubric 1.0 which you can read about in an earlier blog post. In that first iteration, I identified three criteria for quality assessments: direct and specific feedback, transparent learning targets, and a self-assessment component.

In this second iteration, Rubric 2.0, I added two criteria for quality assessments: the assessment requires only target knowledge, skills, and abilities (KSAs) to complete; and the assessment requires a transfer of knowledge to demonstrate understanding.

Requires Only Target Knowledge, Skills, and Abilities (KSAs) to Complete

One approach to creating valid and fair assessments is to require only target knowledge, skills, and abilities (KSAs) to complete the assessment. Assessment designers first identify what evidence is needed to judge whether students have demonstrated specified aspects of learning. After determining which KSAs are required, designers then examine the assessment tasks to determine whether any unwanted, non-target KSAs are also required to complete the assessment. If non-target KSAs are included, the assessment will yield results that reflect both the target KSAs and the non-target KSAs, such as language skills or math skills (Trumbull & Lash, 2013). Therefore, non-target KSAs should be eliminated.

Assessment Requires Transfer of Knowledge to Demonstrate Understanding

As stated in the introduction, a well-crafted assessment of students’ ability to transfer what they know should include an authentic performance task (Wiggins & McTighe, 2005). The assessment tool should clearly describe criteria for degrees of understanding. Understanding should be assessed separately from other traits, like mechanics, organization, and craftsmanship. According to Wiggins and McTighe (2005), those other traits should be assessed in a separate rubric, or all of the traits should be assessed together in a grid-style rubric.

Conclusion

Eventually, my Rubric 2.0 will become Rubric 3.0, and finally, Rubric 4.0. By then, it will include ten carefully selected criteria for judging assessment instruments. I look forward to learning more and sharing those rubrics in future posts.

References

Trumbull, E., & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco, CA: WestEd. Retrieved from www.wested.org/online_pubs/resource1307.pdf

Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development. Retrieved from http://p2047-ezproxy.msu.edu.proxy1.cl.msu.edu/login?url=https://search-ebscohost-com.proxy1.cl.msu.edu/login.aspx?direct=true&db=e000xna&AN=133964&scope=site
