Moving from Instituting Large-scale Changes to Understanding How Those Changes Are Impacting Student Outcomes

Over several years of study, my colleagues and I at the Community College Research Center (CCRC) have found that colleges are successfully leveraging new technology-mediated advising tools to fundamentally transform how students experience support services. This relatively recent retention strategy, sometimes referred to as Integrated Planning and Advising for Student Success, or iPASS, is built around three general types of technology systems:

  • Education and planning—tools for selecting programs and courses, mapping degree plans, and tracking progress toward degree completion.
  • Counseling and coaching—tools for improving students’ connections to support services and enabling a case management approach to advising.
  • Risk targeting and intervention—tools for monitoring early indications of academic struggle and creating systematic processes for supporting and following up with at-risk students.

As I discussed in a webinar for the Learn Forward series this past March, however, the true potential of these tools lies not just in the technology itself, but in a college’s ability to use them to drive larger reforms in advising and student support services.

For example, one college introduced a new education planning tool to help advisors and students plan out coursework for an entire program or degree, rather than focusing only on registration for the upcoming term. At the same time, the college changed the role of its advisors from generalists to specialists, allowing advisors to become experts in a limited number of programs. Together, the structural change to the advisor role and the new technology-aided approach to education planning were creating a far more comprehensive and holistic advising experience for students. Now my colleagues and I want to learn more about whether those types of changes are impacting student outcomes, and if so, how.

Should be easy, right? The college seems to have improved its advising process, and we believe better advising should lead to better student outcomes. But shortly before the college introduced iPASS, it also dramatically redesigned its approach to developmental education. And it is currently in the process of reorganizing its programs and degrees into meta-majors that make it easier for students to select a program or degree and stay on track. If retention and completion rates start going up, how will we know it was because of iPASS?

Change is rampant in higher education these days. In response to increasing pressure to improve completion rates, colleges around the country seem to be undertaking more and more retention efforts. A quick scan of the Learn Forward retention blog series illustrates just how numerous and diverse these efforts are:

  • Professional development for faculty
  • Retention committees
  • Data-driven decision making
  • Summer bridge programs
  • First-year programs (orientation, first-year seminar)
  • Sophomore-year programs
  • Proactive advising
  • Peer tutoring
  • Student success coaches
  • Learning communities

Focusing on retention is (obviously!) a good thing. But how does a college know that a particular retention strategy is working, especially when students may be participating in, or benefiting from, multiple efforts simultaneously?

On the one hand, you could argue that if retention and completion rates are going up, then the impact of one effort versus another may not matter. On the other hand, given tight resources, colleges want to know that what they’re doing is worth the investment.

In reality, when you’re looking at a number of efforts that have the same goal, it is extremely challenging to disentangle the effects of one program or initiative from another. This is even more true when a program or reform touches a variety of stakeholders across the campus, and depends on the nature and quality of subjective human interactions.

In these cases, it is hard to know whether it is the *design* or content of the program that is having an impact, or the *way* it is being adopted (or not) by various individuals.

Take the iPASS project I described above. Just because the college changed advisors from generalists to specialists doesn’t mean advisors have learned more about their programs of focus. And just because the college made a new education planning tool available doesn’t necessarily mean that advisors and students are using it. And even if an advisor and a student make a plan together, that doesn’t necessarily mean the student will follow the plan.

These issues don’t have anything to do with the design of the iPASS intervention—they have to do with how iPASS is being adopted by different individuals.

So what can colleges do? Most colleges don’t have the resources to carry out the kind of rigorous experimental research (think randomized controlled trials) that can get at causality, that is, whether we can say that intervention X caused outcome Y. But that doesn’t mean colleges should throw up their hands in defeat.

Here are some ideas and tips we would recommend based on what we’ve learned from studying colleges that are intentional and deliberate about how they approach their retention strategies:

Understand which intervention is having what kind of impact.

Be thoughtful and intentional on the front end about which retention strategies you choose. Only undertake initiatives that are clearly connected to the specific goals you want to achieve. Before adopting a new strategy, be sure you understand how and why it is reasonable to expect that strategy to lead to the outcomes you want. In addition, be sure the strategy supports and enhances existing student success efforts on campus, rather than duplicating services or leaving important service gaps. A variety of exercises can be helpful in this process. For example:

Initiative mapping. Think carefully about how a new strategy will complement existing efforts; map out the goals, intervention components, and key players for all strategies.

Logic model. Develop a theory of change to illustrate how and why a strategy is intended to impact students.

Understand how adoption by individual stakeholders is influencing intervention outcomes.

To understand whether a change (or lack of change) in outcomes can be attributed to a particular retention strategy, it is crucial to understand how individuals are engaging (or not) with that strategy. To promote meaningful engagement, it is also important to involve relevant stakeholders, including faculty, advisors, students, and other staff, from the beginning so that they understand the reason for undertaking the strategy and see value in participating.

Secure stakeholder buy-in. Be sure that the people who will be part of the intervention are involved from the beginning and have a say in how it is selected or developed. Include end users on implementation teams.

Gather feedback. Conduct surveys or focus groups with staff and faculty to get their feedback and opinions when setting up the process. Then talk to or observe staff, faculty, and students to assess whether they are engaging in the strategy as intended.

And don’t forget students. They are a crucial part of the process, so it is important to also get their feedback and opinions through surveys or focus groups. Consider including them on implementation teams as well.

My colleagues and I at CCRC are constantly wrestling with the question of how best to evaluate the effects of multiple simultaneous institution-wide reforms. The one thing that is certain is that without the foundation of a well-designed, purposeful approach to implementation and stakeholder adoption, it will be difficult to maximize the potential of any retention strategy. iPASS is one retention strategy that we are excited to see being implemented and adopted by many colleges in this way, and one that we are continuing to observe and study. In our current iPASS research, we are supplementing qualitative fieldwork with the analysis of student outcomes, as well as a potential causal impact study. Our hope is that this work will contribute to the growing body of evidence about the importance of retention strategies, and help build the knowledge base for colleges seeking to improve supports for students.

About the author: Serena Klempin conducts qualitative research on technology-mediated advising reform and the role of community colleges in cross-sector collaborations. She is a doctoral student in sociology and education at Teachers College, Columbia University. She holds a BA in English from Kenyon College with a concentration in African and African American Studies, and an MSW from Columbia University School of Social Work with a concentration in policy. Previously, Klempin worked for Columbia University School of Social Work’s Center for Research on Fathers, Children and Family Well-Being, where she conducted program evaluations, survey research, and qualitative research. Her research interests include organizational change and the impact of support services and non-cognitive skills on persistence and graduation.

Serena Klempin

Research Associate, Community College Research Center