Innovation and Accreditation: A Natural Pairing?

By Mark LaCelle-Peterson

Accreditation has been in the hot seat of late. It is faulted both for being asked to do too much—serving a “regulation-by-other-means” function as gatekeeper for federal student financial aid dollars—and for asking too little in terms of student learning and life outcomes. Along with these criticisms have come some interesting proposals for improvement. The following summarizes the more constructive criticisms and describes how a new programmatic accreditor in education, the Association for Advancing Quality in Educator Preparation (AAQEP), is building new ideas into the fabric of peer-based accreditation.

Critiques of accreditation seem to come in two varieties. Some critics dismiss accreditation’s fundamental premise: that quality assurance and organizational improvement are best accomplished by engaging, through peer review, those with the greatest knowledge of and familiarity with the higher education enterprise. Quality assurance through peer review is often cited as a unique characteristic of U.S. higher education, but not one equally cherished by all observers. While that quality-assurance role is itself worth defending, this article will focus on issues raised by the other set of critics—those who believe in the peer-review-based system but seek to refine and improve it. Their proposals are worth a look and are likely to show up in the reauthorization of the Higher Education Act now before Congress. After all, an enterprise like accreditation that is committed to reflection and improvement should welcome critical attention.

One commonly cited problem is that accreditation tends to promote caution and compliance rather than improvement and innovation. This is sometimes attributed to the growing influence of regulators due to government reliance on accreditation as a gatekeeper for financial aid funds. Yet the “compliance mindset” problem seems deeper; it is evident not only in institutional responses to regional accreditors—the gatekeepers for most student financial aid—but also in programmatic accreditation, where the emphasis falls more on academic and experiential dimensions than on administrative ones. In the current era of scrutiny in higher education, both the applicant and the reviewer are apparently comforted by boxes to check and kept interested by the occasional hair to split. How can accreditation make headway on an improvement and innovation agenda against strong undercurrents of a compliance mentality? Can a system be designed to support, for example, competency-based approaches while adding to the knowledge base regarding their impact?

Another frequent criticism is that, despite a language and apparatus of measurement and specificity, accreditation is a pretty blunt policy instrument. Programs and institutions strive to compile evidence aligned to ever-more-detailed standards, and evidence specific to their operation, yet accreditors are seen as handing out binary decisions—thumbs up or thumbs down—and treating all cases with a one-size-fits-all set of processes. Interestingly, this criticism comes not only from outside observers but from colleges and universities themselves. Given the great diversity of institutions, inflexibility with regard to process and lack of transparency with regard to findings rankle clients as well as critics. Accreditation is encouraged to provide nuanced accounts of quality and to differentiate processes on the basis of evidence through “risk-based” approaches that reward strong performance with reduced reporting burden while focusing attention and support where evidence shows it’s needed. Might an accreditation system yield both a decision and a description—a profile of relative strengths, as the U-Multirank system developed in Europe does? Could the evidence monitored in annual reports allow customization of site visits?

And finally, a broad concern about quality assurance in higher education goes to how we measure quality. The assessment movement continues to grow—faculty and departments across colleges and universities are far more conversant with student outcomes and assessment strategies than they were when Dick Light first convened the Harvard Assessment Seminars in the 1980s—yet concerns remain: that assessments may narrow aspirations, that outsiders distrust local assessments (trust in faculty grades ends at the departmental corridor), and that faculty distrust external assessments. Nonetheless, progress has been made through the development of assessment in some fields. Even critics of institutional accreditation often point to specialized programmatic accreditation as a source of models for assessment, thanks to the greater shared specificity about profession-based performance expectations for graduates. Accreditation of programs that educate for professions such as nursing and teaching benefits from widely accepted standards and well-developed performance assessments.

A new accreditor

It is in the context of programmatic accreditation that the Association for Advancing Quality in Educator Preparation is seeking to incorporate new ideas from the policy conversation, as well as from its own field of educator preparation, into a new accrediting system for programs that prepare teachers, school administrators and other educational personnel. In its standards and processes, AAQEP has built upon the valued tradition of quality assurance through self-study and peer review (collaboration is a stated organizational value) while incorporating means of differentiation and performance measurement. In doing so, it has sought to create a system that is equally effective at assuring quality (the “accountability” function) and supporting innovation (the “improvement” function).

The field of education offers unique possibilities for a new look at quality assurance at this moment. For one, significant advances in the performance assessment of teacher candidates (with measures developing for other school roles as well) provide new, credible sources of evidence regarding graduates’ competence. The need for differentiation is also clear given the variety of pathways into teaching that have emerged in the last decade, both within higher education and through new organizations such as the Boston Teacher Residency program, often operating in partnership with school districts. And the field has some fresh theoretical resources to tap in the “improvement science” thinking of Tony Bryk and his colleagues at the Carnegie Foundation, who suggest that small cycles of experimentation, rapidly tested, can promote innovation and improvement in complex systems.

How have these resources been deployed? To support innovation over compliance in the accreditation process, two strategies have been used. First, in developing standards for program graduates’ performance and program practices, a distinction was made between outcomes that are clear and can be measured at program completion (a “completer performance standard” and a “quality program practices standard”) and those that are more contextual and forward-pointing (a “professional competence and growth standard” for individuals, and a standard that asks programs to document their engagement in supporting local schools). As noted above, the field has the advantage of well-developed performance measures that are widely available and reliably scored, such as the edTPA developed at Stanford, the PPAT developed by ETS, and similar measures developed by or for individual states. So in relation to the completer performance standard, the evidence base must contain multiple measures, including measures of content knowledge and knowledge of how people learn, but the performance of program completers is the focus and the center of gravity. Similar clarity is provided for the quality program practices standard. Innovation, though encouraged on all dimensions, is especially invited for the latter two standards.

Supporting innovation and reducing the tendency toward a compliance response requires addressing a dilemma all accreditors share: how to maintain consistent standards of quality while recognizing and respecting the great diversity of institutional and programmatic types and settings. In the field of education in particular, this extends to the challenges of holding equally high aspirations regarding graduates’ readiness to teach in all types of schools while acknowledging that any particular program is embedded in a defined local community.

Local lessons

AAQEP addressed these challenges by embracing them, by making “successful engagement in the local context” (a theme addressed by previous NEJHE articles) itself a hallmark of quality. To ensure consistent quality across divergent engagement models, a process dimension was introduced. As institutions complete the planning process for their self-study, they will submit a proposal that details how they will address the “contextual challenge” aspects of the standards. Peer review of these proposals will provide a common point of reference for the accreditation-seeking program and for the review team. Furthermore—and this is perhaps the most exciting anticipated outcome—by ensuring that all institutions in the system study a common set of challenges in preparation, each in its own context, and by sharing those “case studies” of local invention and innovation as part of the reporting process, the field’s ability to learn from its collective experience will be enhanced. (If you have ever wondered what we have learned, collectively, from decades of accreditation experience, we hope to start providing some answers to questions such as “What strategies are effective, in what contexts, for increasing diversity in the educator workforce?”)

We believe that including a proposal process to support local solutions to common challenges will foster both more innovation and greater differentiation in the accreditation process. Differentiation will take place on another dimension through implementation of “strengths-based” customization of site-visit requirements. (Some authors refer to such flexible approaches as “risk-based” [see Brown, Kurzweil, and Pritchett (2017) and Education Counsel (2014)]; we have preferred to frame them as “strengths-based.”) This approach will differentiate processes on the basis of evidence, rewarding strong performance with reduced reporting burdens while focusing support and scrutiny where needed.

By providing means for differentiation and support for innovation, AAQEP aims to re-cast accreditation as a learning system that informs the field as it provides quality assurance. Important core questions drive quality assurance in educator preparation: Are new teachers prepared to meet the demands of the classrooms they enter? Are new school leaders ready to support learning? Can all educators support all learners equitably in our increasingly diverse student populations? Do preparation programs—regardless of delivery model—provide appropriate, high-quality clinical experiences? Are local schools engaged as partners? Can the provider ensure quality, ongoing improvement and innovation? Can the public be confident that its interests are served? Credible answers to these questions require solid evidence, transparent systems and collaboration. AAQEP’s standards and processes meet these demands while incorporating the most promising ideas from the vibrant, improvement-focused strand of our collective conversation about accreditation.

To return to the question of the title: must accreditation necessarily devolve into a compliance exercise that suppresses rather than supports innovation? AAQEP is betting against the conventional view. Collaboration, after all, is a natural way to generate ideas and possibilities. The “improvement science” paradigm invites rapid prototyping and nimble adaptation. A system that invites proposals for change and innovation and makes the results public will build capacity and encourage creative adaptation. As accreditation’s fundamental premise of self-scrutiny and peer interaction is affirmed and put into play, what could be a more natural connection?

Mark LaCelle-Peterson is president and CEO of the Association for Advancing Quality in Educator Preparation (AAQEP).
