Saturday, March 6, 2010

Obsessing over Assessment

The law of attraction is at it again. Testing and assessment are running through my mind, and they seem to be running through my life as well. There is a certification project currently being implemented in my company that involves training and testing. I am taking a course on Inquiry and Measurement that also involves testing. And, during this week’s #lrnchat, I was drawn in by a discussion thread in which the pros and cons (well, mostly cons) of assessment were being discussed. Why am I so preoccupied with Level 2 of Kirkpatrick’s Four Levels of Evaluation? It is mostly because of the certification project at work.

Certification is a term that gets thrown around a little too loosely in training departments these days. Put someone through a course and a post-test and BANG, you are certified – or worse, you are not. It is a tricky thing to put together a certification process that is valid and reliable. Very few companies can stand to wait out the validation process, so they jump right in and begin training and “certifying” people.

Certifications do not guarantee that the person being certified has learned more than he or she would in a regular training program, but business leaders often feel it is operationally necessary to validate a level of knowledge or skill required to meet goals and targets. If certification is needed or required, here are some things to consider for the assessment process:

Test items should be directly related to learning objectives, which should be directly derived from performance requirements. This may seem obvious, but I have seen many tests that include filler material alongside valid questions.

Test only on important items, not obscure ones. It is not necessary to test someone on small details unless they are critical. Very few corporate employees are doing life-saving work that needs to be tested at a granular level.

Test items should be straightforward. Don’t try to be tricky. What is the point? It only serves to confuse the learner and adds no value to the assessment.

Choose question formats that make sense for the level of learning you need to assess. Multiple choice questions are commonly used on knowledge tests because they are easy to score and easy to tie to outcomes – but they are not always easy to write. Good multiple choice questions will have a clear premise in the stem of the question, a correct answer, and reasonable alternative choices. There should not be any throwaway responses or convoluted choices such as “a and b, but not c” or “a and c only.” If your test item has more than one correct answer, then rethink the question format. Consider short answer questions or a matching column.

If possible, use randomized test questions. Most learning management systems have this capability. They allow you to create a bank of questions that can be drawn upon at random so that test-takers will be deterred from sharing answers. But don’t make the bank of test questions so large that everyone feels like they are taking a completely different test.
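The random-draw idea is simple enough to sketch outside an LMS. This is a minimal illustration, not any particular LMS’s engine; the bank size and draw size are made-up numbers:

```python
import random

def draw_test(question_bank, num_questions, seed=None):
    """Draw a random, non-repeating subset of questions for one test-taker."""
    rng = random.Random(seed)
    return rng.sample(question_bank, num_questions)

# A hypothetical 30-item bank; each test-taker sees 10 of them.
bank = [f"Q{i}" for i in range(1, 31)]
test_a = draw_test(bank, 10)
test_b = draw_test(bank, 10)
# Two test-takers get overlapping but usually not identical question sets.
```

Keeping the draw at a healthy fraction of the bank (here 10 of 30) is what prevents the “everyone is taking a completely different test” problem.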

Pilot your test. This is the hard part, because it takes time and patience. You need to let a few people complete the learning experience and take the test to give you the opportunity to analyze the questions. You will want to take a second look at questions that everyone got right, or everyone got wrong, or questions for which many people chose the same incorrect answer.
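The second look described above is basic item analysis, and it can be sketched in a few lines. This is a simplified illustration (the 50% distractor cutoff is an arbitrary placeholder, not a standard):

```python
from collections import Counter

def item_analysis(responses, answer_key, distractor_cutoff=0.5):
    """Flag pilot-test items worth a second look.

    responses: list of dicts mapping item id -> option chosen by one test-taker
    answer_key: dict mapping item id -> correct option
    """
    flags = {}
    n = len(responses)
    for item, correct in answer_key.items():
        chosen = [r[item] for r in responses]
        p_correct = sum(c == correct for c in chosen) / n
        wrong = Counter(c for c in chosen if c != correct)
        if p_correct == 1.0:
            flags[item] = "everyone got it right"
        elif p_correct == 0.0:
            flags[item] = "everyone got it wrong"
        elif wrong and wrong.most_common(1)[0][1] / n >= distractor_cutoff:
            flags[item] = "many chose the same wrong answer"
    return flags

# Hypothetical pilot data: q1 may be too easy, q2 has a magnet distractor.
key = {"q1": "a", "q2": "b", "q3": "c"}
pilot = [
    {"q1": "a", "q2": "d", "q3": "c"},
    {"q1": "a", "q2": "d", "q3": "b"},
    {"q1": "a", "q2": "d", "q3": "c"},
    {"q1": "a", "q2": "b", "q3": "c"},
]
print(item_analysis(pilot, key))
```

A flag is not a verdict; an item everyone got right may simply reflect good training, which is why the output is a to-review list rather than an automatic rejection.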

Create rubrics for skills assessments. Skills assessment usually requires direct observation. It is important that all of your assessors are using the same criteria and weights when judging performance. Validate the process by having multiple assessors review the same performance. If they are more than a few points off from each other, either redesign the rubric or re-train your assessors.


  1. Mike, as usual you are pushing the envelope.

    I'd be interested in hearing your thoughts on the respective roles of knowledge tests/assessments vs doingness/skill tests and assessments

  2. John,

    In a current program we are running, we have levels of certification and corresponding levels of assessment. The first level is focused on baseline knowledge, so we are using a test created using our LMS test engine. That is the easy part. The second level is skills-based. The assessment for this level involves performing a demonstration. We are doing these as live demonstrations and flying assessors around to observe and do the scoring. It is not as efficient as we would like it to be, but we have worked hard for consistency among the assessors.

    Thanks for your comment. It is always a pleasure to hear from you.