‘Differentiated’ Accreditation
White paper explores changing the accreditation system to encourage continuous improvement and open the door to “alternative” education providers.
Common to the organizations in other sectors that the paper examines is that they have taken a "management-based" approach to quality assurance. Unlike the existing accreditation system in the U.S., which often takes the form of a once-a-decade paperworkathon culminating in a college being (or not being) reaccredited, such an approach emphasizes smaller, more frequent reviews, continuous improvement and peer benchmarking.
By using a management-based approach, the U.S. higher education accreditation system would be well positioned to handle an influx of alternative education providers and a larger shift toward quality assurance that places improving student outcomes front and center, the authors of the white paper argue.
“It shouldn’t be about just policing the bottom end of the market,” Martin Kurzweil, director of educational transformation at the consulting and research group Ithaka S+R, said in an interview. “If you’re just ensuring that the worst providers don’t have access to financial aid or are unable to operate, then you’ll raise the average level of quality, but it’s by eliminating the bottom end of the distribution. What we actually need to meet the needs of jobs and citizenship of the future is to shift the whole distribution [upward], and the only way to accomplish that is ensuring that providers across the spectrum — not just at the bottom — are improving.”
Ithaka S+R’s Proposal:
- Reduce barriers to entry for new accreditors
- Focus accreditor review and recognition on the capacity to assess and reinforce educational quality and financial stability
- Promote partnerships between new entrants and existing providers
- Establish differentiated “tracks” for new entrants
- Expand Title IV eligibility, shifting the focus of approval from gatekeeping to formative evaluation
- Assess programs annually on a small set of standard student outcome and financial stability measures
- Create differentiated consequences for not meeting standards
- Make accreditation and student outcome reports public
Ithaka S+R this morning published the paper, which was written by Kurzweil, Ithaka S+R analyst Jessie Brown and Wendell Pritchett, the Presidential Professor of Law and Education at the University of Pennsylvania Law School. The paper builds on an event Ithaka S+R hosted in February at Penn that brought together about 30 administrators, accreditors, leaders of higher education associations and policy makers to discuss the proposal for a new accreditation system (see fact box).
The Accrediting Commission of Career Schools and Colleges and the New England Association of Schools and Colleges, members of which attended the event, did not respond to a request for comment on the proposal.
The paper doesn’t go as far as other critiques of the accreditation system. It doesn’t refer to accreditors as members of a “cartel,” for example, or advocate razing the existing system and starting over from scratch. Nor does it suggest severing the link between accreditation and federal financial aid.
Instead the paper argues that the “basic infrastructure” for an effective quality-assurance system already exists, though it needs to be repurposed to respond to the challenges of rising student debt and middling completion rates, Kurzweil said.
“It requires some important changes in practice in order to do that, but it’s not ‘rip it up and start over,’” Kurzweil said. “It’s having a tighter focus on educational processes and outcomes. It’s having more frequent engagement between the accreditor and the provider. It’s having differentiated results, as opposed to ‘approved/not approved.’ It’s having a range of consequences for shortcomings, as opposed to the death sentence or nothing.”
The paper suggests that programs should be evaluated annually, though that review would look only at a “small set” of student outcomes and financial stability measures. Then, every three years, the accreditor would conduct a larger review of how programs are making progress toward standard and self-defined goals, focusing on areas that the previous such review identified as needing improvement.
The structure means colleges would be scrutinized more frequently by their accreditors, Kurzweil acknowledged, but “if the focus is different and the reviews are less burdensome, they might take that deal,” he said.
Failing to meet minimum standards would trigger an investigation that could lead to “tailored support and consequences,” ranging from more frequent reviews and probation to partial loss of access to federal funding and removal of accreditation. On the other side, colleges with a track record of positive outcomes could see fewer reviews.
One of the proposal’s main objectives is to open the door for coding boot camps, massive open online course providers and other “alternative” education providers to become accredited. Under the proposal, new programs would therefore be evaluated on criteria such as their partnerships with accredited providers, their plans for growth and, if worst comes to worst, their exit strategies.
And just as there are different consequences for an accredited education provider that doesn’t meet minimum standards, the paper also proposes that there should be “differentiated ‘tracks’” for providers to become accredited based on their performance.
Finally, the reports about the reviews should be made public in order to make the accreditation system more transparent, the paper proposes.
If done correctly, the approach “weeds out the poorest performers, while motivating and facilitating other institutions to re-examine and improve their processes and results continuously,” the paper reads.
The U.S. higher education sector doesn’t need to look to other industries for an example of an organization that uses a management-based approach to accreditation. During the Obama administration, the U.S. Department of Education unveiled the Educational Quality Through Innovative Partnerships program, or EQUIP, in which third-party quality-assurance organizations evaluated pairs of alternative providers and accredited colleges. Many accreditors are also reviewing their own processes.
Some disciplinary accreditation bodies come close to what the paper is proposing. The Accreditation Board for Engineering and Technology, which is highlighted in the paper, reviews programs every six years, looking at learning outcomes that are aligned with industry standards. Programs that don’t meet those standards may face more frequent reviews or be required to submit additional reporting.
A 2006 report found that, since those processes were put in place, they have boosted student performance and colleges’ planning efforts.
“In addition to the advantages cited, it is also important to recognize that every one of over 3,700 academic programs in 30 countries accredited by ABET has the benefit of both an industry advisory committee cooperating with the program leadership and a program-specific internal quality-management system that supports the program’s continuous improvement implementation,” Joe Sussman, ABET’s chief accreditation officer, said in an email. “It is instructive to observe that far less energy is spent measuring what students are taught, and there remains a significant focus on what they have learned.”