CAS President Gavin Henning learned about improvement science at a recent conference. Read his thoughts on how the Plan, Do, Study, Act cycle can help with your assessment-for-improvement efforts.
Recently, I attended the Assessment Institute in Indianapolis, IN. During a discussion with a friend, she introduced me to the concept of improvement science. I had no idea there was such a field of study, so I immediately searched Google to learn more. After reading a few online articles, I was intrigued enough to purchase a copy of New Directions for Evaluation (2017), number 153, for my flight home, as the issue was dedicated to improvement science.
To those implementing assessment, the concept will be familiar. According to Moen, Nolan, and Provost (as cited in Christie, Lemire, & Inkelas, 2017, p. 11), “improvement science is an approach to increasing knowledge that leads to an improvement of a product, process, or system.” That certainly sounds like the continuous improvement purpose of assessment. What is a little different is that improvement science explicitly focuses on systems thinking. Because of this systems perspective, a central principle of improvement science is change management, which is critical for improvement (Christie, Lemire, & Inkelas, 2017).
The journal issue goes into detail regarding the components of improvement science and an operational model, and then uses case studies to illustrate what the authors are describing. But one discussion point jumped out at me. A cycle of testing and learning is foundational to improvement science (Lemire, Christie, & Inkelas, 2017). This cycle includes four steps: Plan, Do, Study, Act (PDSA).
In this cycle, improvement occurs in small steps so that practitioners can see how effective implementation is and better understand concomitant issues. The PDSA cycle is critical to change management because many issues impact the implementation of an improvement, including user buy-in, resources, and sometimes even politics.
The PDSA cycle can easily be applied to assessment in higher education. Once the assessment is complete and recommendations for change are made, those changes should be implemented in small steps, starting with planning the execution. After the planning takes place, a small-scale version of the improvement is implemented. Third, that small-scale improvement is assessed. With this information, the improvement is scaled up.
Here is a basic example of how this might look in assessment practice. The orientation office at a small college completed a CAS self-study and learned that its sexual assault prevention program did not achieve the intended learning outcomes. A recommendation in the self-study report is to contract with a professional organization that has developed a program called “Consent4U.” However, the program costs almost half of the entire orientation budget. While the benefits could be great, the orientation director wants to test the program before making the significant financial investment.

The director ran a pilot of the program with one of the residence hall learning communities. To understand change in learning, the director developed a pre- and post-test regarding sexual assault. In the first step of the cycle, “plan,” the director collaborated with the residence hall director to schedule a time and partnered with a faculty member in sociology to create the pre- and post-test. The program was implemented as part of step 2, “do.” After the program was done, the director administered the post-test and “studied” the results. Based on the data, the director determined that, given the amount of learning students gained from this program, they would implement the Consent4U program, and they requested additional funding from the provost for it.
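The “study” step above comes down to simple arithmetic on the pre- and post-test scores. The short sketch below shows one way that analysis might look; the scores, the 70-point mastery threshold, and the function name are invented for illustration, not drawn from the example itself.

```python
# Hypothetical sketch of the "study" step in a PDSA pilot.
# All scores and the mastery threshold are made up for illustration.

def summarize_pilot(pre_scores, post_scores, mastery=70):
    """Return the average point gain and the share of students
    reaching the mastery threshold on the post-test."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    avg_gain = sum(gains) / len(gains)
    mastery_rate = sum(1 for s in post_scores if s >= mastery) / len(post_scores)
    return avg_gain, mastery_rate

# One learning community's (made-up) scores out of 100:
pre = [55, 62, 48, 70, 58]
post = [78, 85, 66, 88, 74]

avg_gain, mastery_rate = summarize_pilot(pre, post)
print(f"Average gain: {avg_gain:.1f} points; {mastery_rate:.0%} reached mastery")
# → Average gain: 19.6 points; 80% reached mastery
```

Numbers like these would inform the “act” step: a clear gain supports scaling the program up, while a flat result suggests another small-scale cycle first.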
Some of the tenets of improvement science mirror those of assessment. However, the Plan, Do, Study, Act model may provide a way to manage the change that comes with making an improvement.
Christie, C., Lemire, S., & Inkelas, M. (2017). Understanding the similarities and distinctions between improvement science and evaluation. New Directions for Evaluation, 153, 11-22.
Lemire, S., Christie, C., & Inkelas, M. (2017). The methods and tools of improvement science. New Directions for Evaluation, 153, 23-34.
Dr. Gavin Henning is Professor and Program Director for the Master of Higher Education Administration and Doctorate of Education at New England College. He also is the President of CAS and a recent past president of ACPA: College Student Educators International. Gavin actively contributes to higher education assessment literature, and he recently co-authored Student Affairs Assessment: Theory to Practice and co-edited Coordinating Student Affairs Divisional Assessment: A Practical Guide. He holds a Doctor of Philosophy degree in Education Leadership and Policy Studies and a Master of Arts degree in Sociology, both from the University of New Hampshire, as well as a Master of Arts degree in College and University Administration and a Bachelor of Science degree in Psychology and Sociology from Michigan State University.