Demystifying Assurance of Learning: Q&A With Kathryn Martell
Kathryn Martell, a nationally known expert on the topic of assessing student learning, provides insights into the somewhat complex subject of AoL.
Assurance of learning (AoL) refers to demonstrating, through assessment processes, that students achieve the learning expectations of the programs in which they participate. Determining how best to demonstrate AoL can seem intimidating at first. Below, Martell answers common questions about the process.
From your extensive experience in the field of AoL, what would you say are the biggest challenges schools encounter in measuring learning objectives? How have they or could they overcome those challenges?
There are two problems I’ve seen repeatedly in measuring student performance. The first is that individual student performance—not group performance—needs to be assessed. Faculty rely so heavily on group projects in their classes that it can be difficult to find an individually written paper for AoL. One way to overcome this is to have a smaller individual paper accompany the team project so that each student’s skills can be evaluated.
A second measurement problem I have seen frequently is assessing students’ leadership skills. The typical assessment of leadership consists of knowledge questions about leadership theories. I think most of us would agree that when we think about leadership skills, we’re thinking about far more than performance on five to ten multiple-choice questions on theory. This is an area in which measurement can be improved.
The most significant problem related to AoL is not measurement, however; it is how to use the data to improve student learning. Many faculty balk at having to change their courses in an attempt to “close the loop.” A good example is writing. Many undergraduate assessments conclude that student writing does not meet expectations. The obvious remedy is to have students write more and receive more feedback; however, many faculty are not willing to incorporate more writing into their classes. It is an issue of will, not measurement.
How do AoL processes differ regionally?
AoL does not differ regionally within the U.S. Ten years ago, the Southern Association of Colleges and Schools (SACS) was the most demanding regional accreditor on AoL, but now all the regional accreditors require assessment.
I do see differences among countries, however, when it comes to AoL. The process was adopted most easily in Australia, in my opinion. For historical reasons, the Australians already had a tradition of shared curriculum. Groups of faculty collaborated in creating courses, exams, and projects before the AoL mandate took hold. Thus, the Australians already had the mindset of uniformity across the curriculum, and the systems to support it, that made AoL (program assessment) less foreign. Another difference I’ve noticed is that in some parts of the world—Europe, Mexico, and the Middle East, for example—it is common to have staff members dedicated to accreditation and AoL. These staff members, who collect and analyze data for faculty review, are much less common in the U.S.
A further variation that can affect AoL has to do with faculty autonomy. In some parts of the world, faculty are not to be questioned or told what to include in their curriculum. This outlook complicates the mechanics of collecting student work for AoL, drawing conclusions about what needs to be improved, and making changes to the curriculum to improve learning.
What types of insights can AoL measurements provide for the future direction of programs?
AoL can and should be the driver of curriculum change. For example, one of the early steps in the AoL process is curriculum alignment, in which learning goals are mapped onto the curriculum. The focus in curriculum alignment is the common learning experience—i.e., the required courses—to which all students in a degree program are exposed. In my experience, it is not unusual for a school to have a learning goal—ethical or global perspective, leadership, or sustainability, for example—but no required curriculum that allows students to develop that skill, knowledge, or attitude.
The second important driver for change is the assessment data. Often, despite faculty doing their very best, students have not developed the competencies the faculty thought they would have as a result of their curriculum. This can be disappointing—shocking even—and it is not unusual to respond by trying to modify the instrument instead of the curriculum. But if faculty rise to the challenge, they can develop learning experiences that produce strong student performance on their learning goals. It may take some trial and error, but over time students can improve their skills in writing and critical thinking, their outlook on ethics or global issues, and their ability to lead a team, as a result of thoughtful, data-driven curriculum development and management. That is the point of AoL—to assure student learning—and it is very gratifying when it happens.
Aside from aspiring to meet accreditation standards, how do AoL and the measurement process benefit business schools and/or their stakeholders?
The first obvious benefit of effective AoL is that it should lead to an improvement in student learning. Raising the quality of graduates can result in many benefits for the individual, the school, and society. Second, AoL can go a long way in addressing the intensifying pressure to develop data-driven responses to public demands for justification of investment in higher education. The final significant benefit that I’ve seen involves business school culture. When the conversation turns toward, “What are our students learning, and how can we improve that learning?” and we work together to make that happen, it reminds many of us of why we chose to become college professors. In an inspiring article, “Doing Assessment As If Learning Matters Most” (1999), Thomas Angelo writes of “assessment as culture transformation.” I have seen it happen, and it is inspiring to witness faculty embrace examination and reflection in order to improve their students’ learning.
What advice would you give to program administrators who feel daunted by the process of AoL?
Initially, the language and techniques of assessment can seem daunting. The AACSB Assessment Seminars have helped many schools come up the assessment learning curve, and I would recommend them to any school that is just getting started. Another piece of advice is to keep assessment simple but meaningful. Focus on a handful of learning goals that are important to the faculty and dive in. Finally, fight the urge to focus on assessment methods rather than on what the assessment data are telling you. Perfecting the method is not “closing the loop.” Approach assessment “as if learning matters most,” and wonderful things can happen.
Kathryn Martell (PhD, University of Maryland; BA, University of Chicago) is dean of the College of Business at Central Washington University. Martell is a nationally known expert on the topic of assessing student learning. Since AoL was incorporated into AACSB's accreditation standards in 2003, Martell has worked closely with AACSB to help schools meet the Assurance of Learning standards. More than 900 faculty and administrators from 250 universities have attended the AACSB seminars that Martell has facilitated on Assessment of Student Learning. She is also a frequent speaker at AACSB national and regional conferences. She developed the original content for AACSB's online Assessment Resource Center and edited the books published by AACSB and the Association for Institutional Research (AIR), Assessment of Student Learning in Business Schools: Best Practices Each Step of the Way. She has provided consulting advice to more than 100 business schools around the world on assurance of student learning.