In April, NEJHE launched its New Directions for Higher Education series to examine emerging issues, trends and ideas that have an impact on higher education policies, programs and practices.
The first installment of the series featured Philip DiSalvio, dean of the College of Advancing & Professional Studies at the University of Massachusetts Boston, interviewing Carnegie Foundation President Anthony Bryk about the future of the credit hour; the second featured DiSalvio’s interview with Fastweb.com and FinAid.org Publisher Mark Kantrowitz about student debt; the third, DiSalvio’s interview with Lumina Foundation President and CEO Jamie P. Merisotis about Lumina’s commitment to enrolling and graduating more students from college; followed by his Q&As with American Council on Education (ACE) President Molly Corbett Broad about the efforts ACE is making to raise educational attainment in the U.S. and around the world and with AAC&U President Carol Geary Schneider on liberal education.
In this installment of the series, DiSalvio speaks with Richard Arum, co-author (with Josipa Roksa) of Academically Adrift: Limited Learning on College Campuses. Arum and Roksa pose fundamental questions about whether undergraduates are really learning anything during their college years. For a large proportion of students, Arum and Roksa’s answer to that question is a definitive “no.”
Drawing upon survey responses, transcript data, and the Collegiate Learning Assessment (CLA) of more than 2,300 undergraduates at 24 institutions, Arum and Roksa deliver a stark message about the nature of undergraduate learning in the U.S.
Their analysis suggests that more than a third of American college seniors are no better at crucial types of writing and reasoning tasks than they were in their first semester of college. Furthermore, 45% of these students demonstrate no significant improvement in a range of skills—including critical thinking, complex reasoning, and writing—during their first two years of college.
Other findings show that, on average, students devote only slightly more than 12 hours per week to studying. Based on students’ self-reports, Arum and Roksa suggest that such meager investments in studying might reflect meager demands placed on students in their courses in terms of reading and writing. Another finding of their work points to persistent racial and ethnic gaps in CLA scores, gaps that, in the case of African-American students, widen over the course of four years of college.
Some see their research results as a damning indictment of the American higher-education system. At the same time, their work has drawn its share of critics, who say their analysis falls short in its assessments of certain teaching and learning methods.
Nevertheless, the suggestion that four years of undergraduate classes make little difference in a student’s ability to synthesize knowledge and to put complex ideas on paper has serious implications for academic leaders and policymakers. Arum and Roksa may be delivering a wake-up call for those institutions failing to make undergraduate education a priority and falling short of properly preparing their undergraduates for 21st-century challenges.
DiSalvio: In your book Academically Adrift: Limited Learning on College Campuses, you conclude that a significant percentage of undergraduates are failing to develop the broad-based skills and knowledge they should be expected to master. How did you come to that conclusion?
Arum: Scores on the CLA, the performance test we administered to students at the beginning and end of college, did not move up in a significant way for many students. The analysis showed that if the test were scaled from 0 to 100, 36% of the students didn’t go up even one point after four years.
What’s important to emphasize is that our conclusions do not rest solely on that instrument. We also surveyed students and asked them what they were doing in their course work and in their undergraduate education. It was those responses that we found even more powerful and troubling than the lack of growth demonstrated by the large numbers of students on the assessment.
Specifically, we asked them how many hours per week they studied and prepared for class. We found that, on average, full-time college students in the U.S. studied only 12 to 13 hours per week, and a third of that time was spent with their friends. Studying with friends didn’t track at all with growth; in fact, it was associated with worse performance. Traditional solitary studying has declined. Our analysis showed that 36% of these students studied alone five or fewer hours per week, less than one hour a day.
We also asked them about the reading and writing requirements in their classes. Half of the students reported that in a typical semester, they did not have any class that asked them to write more than 20 pages over the course of the semester, and 32% said that they did not have a single class that asked them to read, on average, more than 40 pages per week.
This does not apply to all students, however. We found that some students grew on this indicator and that some were studying long hours. Some were taking rigorous and demanding courses. But what we identified in our data very clearly is that large numbers of students in U.S. higher education today are able to navigate through this system with very little demanded of them, and very little effort and very little learning, as measured by this CLA assessment.
It might be useful to say a bit about what the CLA performance test assesses. The CLA assesses critical thinking, complex reasoning and the ability to communicate in writing. These are general skills most individuals think all college graduates should develop. These skills are also what employers increasingly see as the core of what college graduates need to be successful in the workforce. These competencies are assessed by the CLA not through a standardized multiple-choice test; rather, students are given a task that might be asked of them by an employer in the future. Students are given a set of documents and asked to think critically about them. The reliability and validity of the information in the documents varies, so they have to think critically about the usefulness of the data. They need to synthesize information across these various sources—a complex reasoning task. They then have to write a logical argument based on these documents, using that information to provide evidence to support their points. This is, again, not a simple task, but exactly the kind of task you’d expect college students to improve upon the longer they were in school. Unless, of course, students were neither studying nor being asked to study, read or write. From that pattern, one might conclude that there is limited learning for all too many of these students.
DiSalvio: While not a direct rebuke to Academically Adrift, studies by the Council for Aid to Education offer a sunnier counter-narrative to your assertions about meager learning. Their studies suggest that college does have significant effects from freshman to graduating seniors. How do you account for that discrepancy between your research and the council’s research?
Arum: In terms of the narrow measurement issues, our study followed several thousand students from when they entered college to when they left college. They were given a test at two points in time, so that we could measure the actual growth that occurred. So our work, in a sense, is a descriptive report of what actually exists.
The Council for Aid to Education study that came out measures freshmen and seniors at the same point in time. That study tries to infer growth statistically through manipulation of the data. They need to statistically adjust their data because freshmen and seniors at a college or university look very different. Large numbers of students drop out of college. In many institutions, it’s 50% or more. So comparing a freshman class to the senior class is comparing apples and oranges.
Inferring growth by statistically adjusting for SAT scores just produces an estimate based on those techniques. We’re not doing statistics at all on the descriptive findings I just noted. We simply said that this is where they started and this is where they finished. In fact, their statistical adjustments are inadequate and do not fully account for the attrition going on in colleges and universities, so they overestimate growth. That’s technically the difference.
Our work is often misinterpreted and misreported to say that students aren’t benefiting from college, that nothing is being learned and that college has no value. We have never said that. We have said that too many students are not applying themselves, not learning and not moving up on this important indicator. This is not the only purpose that colleges and universities serve; they have many functions beyond it. Colleges and universities teach subject-specific skills and contribute to larger socio-emotional development.
Yet our work is often mischaracterized as a condemnation of all college and all undergraduate education. Page after page, we highlight the important variation we see in the data, where some students are actually applying themselves and learning at quite reasonable rates. So again, some of the disagreement between our two studies comes down to this measurement issue.
There are significant numbers of students who grow from the undergraduate experience. We hope that no one misunderstands our work and thinks that we feel college is not a useful, critical, important thing for a young adult to embark upon. Even at the high cost of U.S. higher education, where students often go into significant debt to finance their education, we feel it is still a very sound investment for the vast majority of these students.
I think what happens often is that when things become as public as the book has become, people take from it the pieces that they like and use them for their own ends. The larger point we are making is that colleges and universities should do a better job with the large numbers of students who clearly could be applying themselves and developing themselves more.
DiSalvio: You observed that the existing organizational cultures and practices of contemporary colleges and universities often fail to put a high priority on undergraduate learning. What do you think are the factors that contribute to that failure?
Arum: There are multiple factors. One of the complications of identifying those factors, of course, is that U.S. higher education is quite diverse. Institutions such as UMass Boston may face a whole different set of issues around organizational culture than Boston College. Again, we have to be careful not to generalize, but at the same time we might be able to put forward some observations that may not apply everywhere but could apply to many schools.
A good place to start to address the question of factors is to observe how U.S. higher education has moved in the direction of a consumer-client satisfaction model. Higher education institutions have increasingly come to compete over individual students—to get them to apply and attend their colleges and universities—and then to keep them happy and satisfied while they are there.
In some ways, that’s a good thing. Organizations should be responsive to the clients that they’re servicing, but it has also led to all sorts of behaviors that are less than ideal. I have visited over 100 college and university campuses over the last few years and at almost every campus I visited, they would point out the new gym, the new student center and often the new dormitories being built. There is an arms race going on in U.S. higher education to provide greater amenities to 17-year-olds who go and visit campuses and make choices about where they should spend their next four years. While this is not a sensible way for the sector as a whole to be moving, it has moved very far in that direction.
This can be seen in the way instruction is often assessed on college campuses today, where too often the only measure of teaching quality that the institution considers is the course evaluation surveys that students fill out at the end of the semester. Those course evaluation surveys all too often are nothing but consumer-satisfaction surveys. “Did you like the class? Would you recommend it to a friend? Did you find it interesting?”
Many institutions are not asking questions about the academic content. “How many hours did you study? What kind of reading requirements did you have? What kind of feedback did you get on your paper?” So to the extent that institutions assess learning or teaching, it’s often again the consumer-client model, rather than a model that would promote academic rigor and excellence.
Another way to look at this issue would be to track expenditures. Over the last couple of decades, colleges and universities have increasingly invested in non-full-time faculty and student support services. The fastest growing sector of higher education is professional and quasi-professional staff in charge of student well-being. Now again, all things being equal, that may be well and good, except these are choices the institution has to make about where investments will and will not go. Increasingly, colleges and universities are directing investments toward non-academic functions, through these amenities and student support services, while the ranks of full-time faculty decline.
So I think, in broad strokes, those are some of the cultural factors that might be seen throughout higher education. This varies across institutions, and certainly not all are judged exclusively on research and scholarship productivity, but many colleges and universities are busy working to move up in their rankings. That, again, has little or nothing to do with undergraduate education and academic rigor. It has to do with the type of students you attract to the institution and with the research productivity of faculty. It is not a measure of the actual learning that’s occurring.
DiSalvio: In your research, you have found that learning in higher education is characterized by a persistent and growing inequality. You maintain that there are significant differences in critical thinking, complex reasoning and writing skills when comparing groups of students from different family backgrounds and racial and ethnic groups. Given that there is a gap, what do you think has to be done to deal with that gap?
Arum: The inequalities within higher education in terms of learning are a very important part of our book that is often ignored. In K-12 education, these inequalities have long been noted, and we have a great deal of public policy discourse and public policy made to target and reduce them. For example, “No Child Left Behind,” with all its faults, brought increasing attention to assessing learning outcomes not just for average groups of students but for different racial groups of students, to make sure that all students, regardless of their backgrounds, were demonstrating growth and learning. In higher education, we haven’t had much of this debate at all. In fact, in many institutions, faculty are often discouraged from working with students who are struggling because of the time commitments involved. They are implicitly encouraged to limit those investments and instead invest in talented students who are rewarding in terms of their academic promise.
The struggling students are often diverted to remedial course work or remedial centers on campus. The effect of some of these programs and course work is unclear. I know there is a whole national debate emerging on the lack of effectiveness of remediation and remedial course work. Students are often put there, spin their wheels, take these courses repeatedly, and their progress suffers. We need to figure out how to deliver remediation in a smarter, more effective and efficient way. I think we need to pay greater attention to that area, just the way we do in K-12 education.
DiSalvio: You have said that the limited learning that exists on campuses qualifies as a significant problem and should be the subject of concern for policymakers and practitioners and parents and citizens. Yet you also say that limited learning on college campuses isn’t a crisis because the institutional actors implicated in the system are receiving the kind of organizational outcomes that they seek, so neither the institutions themselves nor the system as a whole seems to be in any way challenged or threatened. Why then should limited learning be a subject of concern?
Arum: In the book, we argued that we didn’t think it was a significant crisis because all the actors were generally happy. Institutions increasingly have been able to generate additional revenues through increasing tuition and other means. The students were generally happy, even those of whom not much was asked, because with their short-term orientation, it seemed like they had the best of both worlds: they were getting a college degree, and it didn’t take much to earn it. They had more time to devote to other pursuits, whether that was working additional hours to pay for their education or just engaging in leisure pursuits.
I think there’s evidence that both of those things are going on. Students spend about five more hours today working for pay than they did in 1960, and another eight hours are simply unaccounted for: students are studying less but not working correspondingly more. So there is a lot of reason to think there has not been a crisis. But since the book came out in January 2011, there is increasingly a sense of crisis in the air. That is, I think, in part because the unsustainable increase in the cost of higher education is bringing that financial model up against some real limits.
The ability to increase tuition at twice the rate of inflation indefinitely, and still have more students attend your institution every year, may now be hitting some natural limits. Student debt loads, the availability of additional program funding from Washington and economic circumstances are leading large numbers of people to raise questions that were not being asked when we wrote the book. What has also changed is that technological disruption has accelerated, so there is increased experimentation with alternative instructional delivery mechanisms, such as online education.
For those reasons, the sense of a crisis in higher education is becoming a more plausible argument, and in fact we are in a period of uncertainty when there appears to be some profound restructuring or reengineering happening.
Economically, today’s graduates have to compete in an increasingly globalized world. So it’s no longer the case, as it was perhaps 40 or 50 years ago, that limited learning doesn’t matter because a college graduate will still do better than the high school graduate who did not go to college. That may still be true to some degree, but today’s graduates are competing not just against the high school graduate, but against the software engineer in Ireland, the architectural design firm in Germany and the engineering companies in Asia.
The consequences of this limited learning, for them and for the country as a whole, will increasingly demand attention and remedy.
Limited learning in terms of critical thinking, complex reasoning and communication also has implications for our country’s ability to sustain a functioning democratic system. College graduates today are expected to be the civic leaders of tomorrow, yet when the students we were tracking into the labor market were asked how often they read the newspaper, in print or online, 36% of them said monthly or never. How can a democratic system function when the educated elite in the country are no longer reading news on a regular basis?
The abilities to think critically about political rhetoric and to think systematically about the complex problems of the day are essential skills for the broader civic society. If we ignore this problem, I worry it will have much more profound implications than our economy losing its competitive edge.
DiSalvio: What should higher education leaders and policymakers take away from your work, and is it a call for action?
Arum: Hopefully it will be read as a call for action, and that call will be answered by the leaders of higher education institutions. We feel strongly that the federal government, even in a well-intentioned effort to address these problems, should not impose a centralized accountability scheme on higher education. We feel that would lead to all sorts of distortions and counterproductive changes, as was seen with the “No Child Left Behind” program and the accountability frameworks it imposed.
Because there is a real problem, demonstrated by consistent and overwhelming empirical data, we feel higher education has a responsibility to address this in a systematic way. The best place for that problem to be addressed, particularly in the U.S., with its decentralized higher education system and very large private sector, is at the institutional level.
Institutional leaders should take these issues seriously and must be willing to assess learning in their own institutions and to identify areas of concern to improve. We’ve been very clear to say that higher education institutions must answer three questions: 1) How are you measuring learning on your campus? 2) Where are the areas that need improvement? 3) What are you doing to improve those areas?
We should not need the federal government or a state government or even a board of trustees to tell us to do that. Every academic leader should be asking those questions because that is, in part, what defines academic leadership and what differentiates higher education institutions from an ordinary business.
DiSalvio: In your experience, how have higher education leaders generally responded to that call for action?
Arum: Higher education leaders overwhelmingly agree with the idea that there is a real problem. Some might express concern about the reputation of their own institution and say that the problem does not exist on their campus. But overwhelmingly, higher education leaders acknowledge that there is a problem because their institutional data confirms that the problem exists.
So, while it appears there is a widespread sense that there is indeed a problem that should be addressed, many feel they do not have the adequate tools to be able to do the work. There is not always a sense that the assessment tools available match up with their institution’s needs.
While course evaluations are commonly used in higher education, I think different instruments might be used to track students’ objective learning growth. As those tools become available, our hope is that they will be embraced and widely utilized by the vast majority of institutions.