Assessment Of Learning Following Instructional Delivery

Assessment of learning following instructional delivery involves various techniques for evaluating whether students have achieved the intended learning outcomes. Schunk (2012) identifies five broad methods: (a) direct observation, (b) written responses, (c) oral responses, (d) ratings by others, and (e) self-reports. Each method offers distinct advantages and challenges that educators must weigh when designing assessments.

Direct observation is most effective when the learning target is an action or behavior. In health sciences education, for example, observing a nursing student conduct a physical examination provides concrete evidence of practical skill. The reliability of this method improves when detailed checklists break complex tasks into specific, observable components. The approach has limitations, however: the absence of a behavior during assessment does not unequivocally indicate a lack of learning, since the behavior may have been acquired but simply not exhibited during the observation. Observer bias and fatigue can also compromise objectivity (Schunk, 2012).
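
The reliability gain from checklists can be examined empirically by having two raters score the same performance and computing chance-corrected agreement. A minimal sketch in Python; the checklist items and scores below are hypothetical illustrations, not a validated instrument:

```python
# Two observers independently score the same student on an
# eight-item skills checklist (1 = performed, 0 = not performed).
observer_a = [1, 1, 0, 1, 1, 0, 1, 1]
observer_b = [1, 1, 0, 1, 0, 0, 1, 1]

def percent_agreement(a, b):
    """Proportion of checklist items on which both observers agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = len(a)
    po = percent_agreement(a, b)
    # Chance agreement from each observer's marginal rate of scoring 1.
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)

print(round(percent_agreement(observer_a, observer_b), 3))  # 0.875
print(round(cohens_kappa(observer_a, observer_b), 3))       # 0.714
```

Raw agreement looks high here, but kappa is noticeably lower because two raters marking most items "performed" will agree often by chance alone; this is why chance-corrected indices are preferred when reporting observational reliability.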

Written responses are widely used, especially in higher education, as they effectively assess cognitive understanding. Common formats include essays, short-answer questions, and multiple-choice tests. These assessments can be administered to large groups and scored efficiently, particularly with computerized grading systems that also generate item analysis data, which aid in refining the assessment instrument itself. However, grading subjective written responses can be time-consuming and prone to bias, and certain aspects of learning, such as practical skills or written language proficiency, may not be fully captured through this method. Furthermore, multiple-choice assessments risk fostering false confidence, as students who select a plausible distractor may come away believing it was correct (Goubeaud & Yan, 2004; Schunk, 2012).

Oral responses provide an immediate, interactive way to evaluate understanding. Teachers can ask targeted questions to gauge student comprehension, revealing depth of knowledge and reasoning ability. In clinical education, oral assessments often take the form of case-based questioning by examiners. While this approach offers insight into thinking processes and allows examiners to probe student responses, it can induce anxiety, which may impede performance. It is also labor-intensive, especially when assessing many students, and subject to evaluator bias. Well-structured questions and standardized evaluation protocols can mitigate some of these issues (Joughin, 1998; Schunk, 2012).

Ratings by others, including peers, mentors, or instructors, and self-reports are subjective assessment approaches that explore the affective and attitudinal domains of learning. Their primary advantage is capturing information about students' emotions, motivation, and perceptions that is otherwise difficult to measure objectively. These assessments are particularly useful in the health professions for evaluating traits like empathy or professionalism. However, their subjectivity can lead to reliability problems, with potential biases influencing ratings. Studies indicate that external ratings tend to be more predictive of actual performance than self-assessments (Atkins & Wood, 2002; Schunk, 2012). Nonetheless, in higher education settings such methods are less frequently employed, owing to concerns about validity and consistency (Alquraan, 2012).

Applying these assessment techniques to measure a complex construct like empathy among healthcare providers involves additional considerations. Empathy is inherently intangible and not directly observable. Researchers utilize validated instruments, such as the Jefferson Scale of Physician Empathy, to quantify empathy levels with reliable psychometric properties (Hojat et al., 2001). These tools produce scores that can be correlated with behaviors, like the administration of analgesic medication, to infer the influence of empathy on clinical decision-making. For instance, lower empathy scores may be associated with withholding necessary pain relief, a hypothesis requiring further investigation.
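
The inference sketched above, relating a validated empathy score to a binary clinical behavior, amounts to a point-biserial correlation, i.e., a Pearson correlation in which one variable is dichotomous. A minimal sketch with entirely hypothetical data (the scores and outcomes below are invented for illustration, not drawn from any study):

```python
import statistics

# Hypothetical Jefferson Scale empathy scores (scale range 20-140)
# for eight providers, paired with whether each administered
# requested analgesia (1 = yes, 0 = withheld).
empathy = [112, 98, 125, 87, 105, 93, 118, 80]
gave_analgesia = [1, 0, 1, 0, 1, 0, 1, 0]

def pearson_r(x, y):
    """Pearson correlation; with a binary y this is the point-biserial r."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(empathy, gave_analgesia), 2))  # 0.87
```

A strong positive r in data like these would be consistent with, but would not prove, the hypothesis that lower empathy is associated with withholding pain relief; establishing direction and cause requires the further investigation the text notes.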

In conclusion, effective assessment of learning encompasses multiple methods, each suited to different domains and objectives. Combining techniques—such as direct observation for practical skills, written responses for cognitive understanding, oral exams for reasoning, and subjective ratings for attitudes—provides a comprehensive evaluation of student learning. When assessing qualities like empathy in healthcare, validated measurement tools and behavioral observations are crucial for capturing meaningful data. Thoughtful integration of these methods enhances the reliability and validity of assessments, ultimately supporting better educational outcomes and improved clinical practice.

References

  • Alquraan, M. F. (2012). Education, Business and Society: Contemporary Middle Eastern Issues, 5(2).
  • Atkins, P. W. B., & Wood, R. E. (2002). Self- versus others' ratings as predictors of assessment center ratings: Validation evidence for 360-degree feedback programs. Personnel Psychology, 55(4), 871–904.
  • Fields, S. K., Mahan, P., Tillman, P., Harris, J., Maxwell, K., & Hojat, M. (2011). Measuring empathy in healthcare profession students using the Jefferson Scale of Physician Empathy: Health provider – student version. Journal of Interprofessional Care, 25(4), 246–253.
  • Frank, M., & Barzilai, A. (2004). Integration of alternative assessment in a project-based learning course for technology teachers. Assessment and Evaluation in Higher Education, 29(1), 41–61.
  • Goubeaud, K., & Yan, W. (2004). Teacher educators' teaching methods, assessments, and grading: A comparison of higher education faculty's instructional practices. The Teacher Educator, 40(1), 1–16.
  • Hojat, M., Mangione, S., Nasca, T. J., Cohen, M. M., Gonnella, J. S., Erdmann, J. B., & Veloski, J. (2001). The Jefferson Scale of Physician Empathy: Development and preliminary psychometric data. Educational and Psychological Measurement, 61(2), 349–365. doi:10.1177/0013164401061002002
  • Joughin, G. (1998). Dimensions of oral assessment. Assessment and Evaluation in Higher Education, 23(4), 377–388.
  • Schunk, D. H. (2012). Learning theories: An educational perspective (6th ed.). Pearson.
  • Ward, J., Schaal, M., Sullivan, J., Bowen, M. E., Erdmann, J. B., & Hojat, M. (2009). Reliability and validity of the Jefferson Scale of Empathy in undergraduate nursing students. Journal of Nursing Measurement, 17(1), 73–88.
  • Williams, B., Boyle, M., & Earl, T. (2013). Measurement of empathy levels in undergraduate paramedic students. Prehospital and Disaster Medicine, 28(2), 123–129.