Which matters more, gauging the health of an institution or the success of its students?
Recently, the project director and I submitted the Annual Performance Report for the second year of our university’s Title III grant, on which I serve as a part-time consultant. The five-year award from the U.S. Department of Education is a competitive grant intended to strengthen the academic quality and fiscal stability of the institution. Our project aims to transform curriculum and academic support systems to improve the success of several student groups: freshmen, sophomores, juniors, transfers, and “near-completers” who haven’t yet graduated.
For this cycle of annual reporting, the Education Department revised its template, which afforded an opportunity to cite some achievements as well as reflect upon setbacks. Some of the latter were occasioned by the abrupt pandemic lockdown as well as by an announcement, made publicly by the former system chancellor, of a quickly discarded plan to close the institution. The report required several data elements, which were pre-filled by the Department of Education. Among them, predictably, were retention percentages, as well as four- and six-year graduation rates, all based upon the federal cohort, whose members are first-time, full-time degree-seeking undergraduate students.
One problem with this for our institution, as well as for others across the country, is that the federal cohort represents a partial and declining percentage of the overall student body. Another problem is that, although the project we were funded to carry out has explicitly to do with student success, the required indicators have much more to do with the success of institutions. They are not the same. Student success nearly always conveys evidence of institutional success, but the reverse is not necessarily true. Frequently, for instance, favorable results from analyzing the federal cohort shape a positive image for an institution, but coexist alongside policies (such as those governing credit transfer) that unduly prolong the collegiate careers of its transfers.
Student-ready or college-ready?
A burgeoning student success movement has made this distinction abundantly clear, while fostering a conviction that institutions ought to pay more attention to being student-ready than to worrying whether students are college-ready. Yet the movement, in spite of its achievements and some happy exceptions at individual institutions, has not created a groundswell of support for new indicators that more accurately reflect the goals and aspirations of students, rather than the successes or failures of their institutions.
Honestly, I have to admit that our own efforts to identify suitable indicators, in other sections of the report, were no improvement. For instance, the project proposal singles out transfers as one of the student groups whose retention and graduation merit attention. This makes sense: according to the National Institute for the Study of Transfer Students, “transfer has become the new norm for American college students, with over 60% of students transferring between institutions at least once.” Yet any insights on transfers are simply buried in data within categories such as sophomores, juniors, and so on.
The foregoing signifies a more general problem. We all know we have to make decisions on the basis of data; yet the data we have often fail to match the kinds of decisions we want to make. We express an awareness of today’s varied and diverse student body, all the while continuing to describe students’ higher educational progression based on the traditional federal cohort. We talk the language of success, but most often, the indicators have more to do with the institution’s success in getting students to remain and finish (retention and graduation). At times, we do talk the language of persistence, but possess little direct evidence of how students define their own goals, compared with abundant evidence on institutional retention as a measure of accountability, especially for public funding.
The statements of accrediting bodies such as the New England Commission of Higher Education (see below) tend to be fairly general and non-prescriptive. As such, they do not offer definitive insight into the specific means by which we might gauge and understand student success. This is understandable. The logic of accreditation is that vital and high-quality institutions are a precondition of, and necessary vehicle for, their students’ success. The central purpose of accreditation is thus to assure the educational quality of institutions. Accrediting standards aim neither to contain nor prescribe a set of student success indicators. And accrediting bodies also serve as guardians for other public functions of colleges and universities, which are of importance to the well-being of the citizenry and the economy—for instance the production of new knowledge through research—but may have little to do directly with most students.
In New England, the New England Commission of Higher Education Standard 8 on Educational Effectiveness provides the most explicit guidance on student success. For instance (from 8.6-8.10):
The institution defines measures of student success and levels of achievement appropriate to its mission, modalities and locations of instruction, and student body … The institution uses additional quantitative measures of success … to understand the success of its recent graduates … The results of assessment and quantitative measures of student success are a demonstrable factor in the institution’s efforts to improve the curriculum and learning opportunities and results for students … The institution devotes appropriate attention to ensuring that its methods of understanding student learning and student success are valid and useful to improve programs and services for students and to inform the public …
These considerations provide colleges and universities with plenty of leeway to develop whatever measures will help them gauge the success of students at their institutions. Although, again, there are surely exceptions, many institutions have taken a minimalist approach, limiting their information-gathering to traditional measures such as retention, graduation, licensure passage and employment.
In short, a gap exists between our well-developed ability to gauge the success of institutions, using a relatively simple set of measures typically based upon the federal cohort, and our limited ability to monitor the successful (or unsuccessful) progression of the varied students who move through them.
Which matters more, to gauge the health of an institution or the success of its students? Differences in perspective on this matter surely exist—say, between administrators and frontline student support personnel. But one thing is sure: A failure to measure and highlight all students’ successes and needs, in terms of their own stated goals and aspirations, can make an institution’s claim to hold student success as a high priority ring somewhat hollow.
This gap between what we’re able to measure conveniently and what we should actually care most about deserves to be addressed, so that the measure of a college or university truly becomes the success of its students. Clearly, we need a new set of indicators, not to replace the current ones but at least to exist alongside them, to guide decision-making and to reflect and track the more complicated composition and trajectories of today’s students. Solutions will vary, but in every case a good place to start would be to talk with students and learn their own aspirations. An added benefit, whatever the communication platform adopted, is that such conversations might themselves serve as a modest retention mechanism, as many students will appreciate a hearing for their experiences, hopes and concerns.
Colleges and universities have much to gain from such a shift in perspective. For instance, helping a student prepare for and negotiate a transfer, which leads to eventual graduation, could be a real part of the institution’s story, instead of a footnote or addendum or, worse, an unmentionable. It could also be regarded on the plus side of the ledger for the sending college or university, instead of a failure, even though it doesn’t add to its bottom line. It could be considered part of the institution’s success, as long as the student, rather than the institution, is the first concern.
Daniel Regan is a sociologist and the retired dean of academic affairs at Johnson State College, now part of Northern Vermont University.