Overseeing the Overseers - Ithaka S+R


RESEARCH REPORT
May 25, 2022

Overseeing the Overseers
Can Federal Oversight of Accreditation Improve Student Outcomes?

Cameron Childress, James Dean Ward, Elizabeth Davidson Pisacreta, Sunny Chen

Ithaka S+R provides research and strategic guidance to help the academic and cultural communities serve the public good and navigate economic, demographic, and technological change. Ithaka S+R is part of ITHAKA, a not-for-profit with a mission to improve access to knowledge and education for people around the world. We believe education is key to the wellbeing of individuals and society, and we work to make it more effective and affordable.

Copyright 2022 ITHAKA. This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of the license, please visit https://creativecommons.org/licenses/by/4.0/. ITHAKA is interested in disseminating this brief as widely as possible. Please contact us with any questions about using the report: research@ithaka.org.

We thank Arnold Ventures for their support of this research.

Overseeing the Overseers: Can Federal Oversight of Accreditation Improve Student Outcomes?

Introduction

Since the passage of the Higher Education Act (HEA) of 1965, the federal government has relied on the accreditation process to ensure quality at postsecondary institutions receiving federal dollars. In the ensuing decades, spending on higher education by the federal government (most significantly through federal student loans and grants) and by individuals has increased exponentially. Even as this spending helped fuel substantial growth in enrollment, a completion crisis has left many former students with debt but no degree,[1] and highly uncertain labor market prospects. If accreditors have failed to ensure an adequate level of quality control for students attending postsecondary institutions, increased oversight by the federal government is necessary to guarantee institutions are providing high-quality programs.

The accreditation process and the federal role in shaping it are clearly of critical importance for ensuring all students have access to quality postsecondary options. And yet, due to the complexity of the relationship among the federal government, accreditors, and institutions, and the opacity of the accreditation process itself, there is little systematic evidence about the effects of this quality assurance regime, and the periodic federal policy changes that reorient it, on higher education institutions and their students.

Ithaka S+R began a pilot study in 2021 to assess the feasibility of using publicly available data on the accreditation process and outcomes to evaluate the impact of federal oversight mechanisms on institutional and student outcomes. In this report, we provide an overview of accreditation in the US and assess the impact of changes in federal oversight. We focus on changes to oversight that were part of the 2008 reauthorization of the HEA and, given data availability, specifically look at institutions accredited by the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC).
We also examine an oversight dashboard of accreditors launched in 2016 to understand if this form of public accountability may help improve student outcomes.

Our findings suggest that the 2008 reauthorization of the HEA resulted in no statistical improvement of student outcomes at institutions accredited by the agency we examine, but the 2016 NACIQI pilot project, as well as other efforts made by the Department of Education (ED) towards greater data transparency, may have led to improved student outcomes.

What is accreditation and how does it work?

In 2018 approximately $122.4 billion in Title IV funds were made available to students seeking a postsecondary education at an eligible institution. Institutions access these funds by passing through a "regulatory triad" comprised of state authorization, recognition by ED, and accreditation from an ED-recognized accreditation agency. The state and federal governments seek to ensure consumer protection and administrative compliance of institutions, respectively,

[1] Julia Karon, James Dean Ward, Catharine Bond Hill, and Martin Kurzweil, "Solving Stranded Credits: Assessing the Scope and Effects of Transcript Withholding on Students, States, and Institutions," Ithaka S+R, 5 October 2020.

while accreditation agencies are meant to provide "quality assurance" of the education provided by institutions.[2] Jointly, the triad is supposed to ensure that public and private investments in higher education are beneficial for taxpayers and students.

The lack of a centralized authority to monitor the quality and performance of schools has led to student outcomes that vary dramatically across the country. Currently there are no minimum expectations on metrics like graduation rates or loan default rates, and poor-performing institutions are rarely punished by accreditors. In 2014, $16 billion in government aid was sent to students at four-year institutions with a six-year graduation rate less than 33 percent, and a US Government Accountability Office (GAO) report found that, from 2009-2014, accreditors were no more likely to act against institutions with poor student outcomes than they were against institutions with strong outcomes.[3] Despite the important role accreditation plays in the regulatory triad, the current structure of accountability is not properly meeting the needs of students.

There are two types of accreditation agencies: institutional and programmatic. Institutional accreditors are either regional, which cover most private and nonprofit colleges in the US, or national, which accredit most for-profit and religious institutions in the country. Eighty-five percent of students attend institutions accredited by a regional accreditor. Institutions that are accredited by a federally approved national or regional accreditor are eligible to receive Title IV funds (e.g., Pell Grants and federal student loans).[4] Programmatic accreditation involves the review and approval of specific programs within an institution.
Although programmatic accreditors can seek approval from ED and grant Title IV eligibility to institutions with a single programmatic focus, these types of accreditors more commonly serve as quality control for professional or graduate schools within a larger institution. For example, 19 states limit eligibility to take the bar exam (and therefore practice law) to students who have graduated from a law school that is accredited by the American Bar Association.[5] Because regional accreditors oversee the largest institutions and the most students, understanding how federal policy impacts the policies of these agencies is an important step towards analyzing how federal changes can best support student outcomes.

[2] Alexandra Hegji, "An Overview of Accreditation of Higher Education in the United States," Congressional Research Service, 16 October 2020, 26.

[3] Antoinette Flores, "The Unwatched Watchdogs: How the Department of Education Fails to Properly Monitor College Accreditation Agencies," Center for American Progress, 19 September 2019; Andrea Fuller and Douglas Belkin, "The Watchdogs of College Education Rarely Bite," The Wall Street Journal, 17 June 2015.

[4] Robert Kelchen, "Accreditation and Accountability," in Higher Education Accountability (Johns Hopkins University Press, 2018), 97-98.

[5] Ibid., 102-103.

Although different accrediting agencies have different standards, accreditation usually works through the same series of institutional checks. An institution first performs a self-assessment based on the standards of the agency through which it is seeking accreditation; it is then subject to a peer review conducted by a team of volunteers, usually made up of faculty and administrators from institutions accredited by the same agency. The accreditor then evaluates the materials compiled through the self-study and peer review, sometimes asking for clarification or further information from an institution, before administering a decision on accreditation. Schools that receive a favorable decision will then seek reaffirmation of accreditation once their allotted time is up, usually five to ten years.

For participating institutions to be eligible for Title IV funds, an accreditor must be recognized by ED, a process that recurs every five years. ED staff review each accreditation agency based on criteria set forth in the Higher Education Act (HEA) that require accreditors to maintain and enforce standards on student achievement, curricula, distance education, fiscal capacity, program length, and other factors. ED staff then submit their report to the National Advisory Committee on Institutional Quality and Integrity (NACIQI), an appointed group of educators and stakeholders who, after reviewing the ED staff report and conducting a public hearing, submit a recommendation to a senior department official (SDO) in ED, who then makes the final recommendation on recognition, which is then approved by the Secretary of Education.[6]

NACIQI, an integral piece of the accreditation process, was first formed after the 1992 reauthorization of the HEA as part of a broader effort by Congress to strengthen the recognition process.[7] The reauthorization detailed new standards by which to assess institutions and set the stage for increased government involvement in the accreditation process.
It is important to understand what effects, if any, NACIQI oversight of accreditors has on the quality of postsecondary programs and student outcomes.

Despite accreditation's key role in the disbursement of billions of dollars in federal money, little empirical research has been done to explore how student outcomes change in response to federal policy shifts. One qualitative research study found that institutions reacted to changes in their accreditor's policies most commonly by identifying the value of new or changed accreditation policies and implementing them in a way that is meaningful for the institution, leveraging existing committees to make recommendations, and integrating accreditation policy

[6] Judith S. Eaton, "An Overview of U.S. Accreditation," Council for Higher Education Accreditation (CHEA), November 2015.

[7] Alexandra Hegji, "An Overview of Accreditation of Higher Education in the United States," Congressional Research Service, 16 October 2020, 7, 26.

changes with existing internal institutional policies.[8] Other studies on accreditation policy focus on programmatic accreditation. One such study finds evidence that changing accreditation standards for engineering students in 2004 had a positive effect on student outcomes.[9] However, because programmatic accreditation is generally used as an industry-specific marker of quality rather than a general gatekeeper for Title IV funds, changes at these smaller agencies may not mirror changes to institutional accreditation standards. More evidence is needed to identify the extent to which federal changes have made a difference in student outcomes. As student loan debt continues to grow, instituting proper accountability for poor-performing schools and the accreditors who oversee them is of vital importance to ensure students have access to high-quality postsecondary opportunities and taxpayer dollars are spent effectively.

In the next sections, we provide estimates of how federal actions related to accreditation may impact student outcomes using two case studies. First, we examine how the 2008 HEOA changes in accreditation standards were incorporated into an accreditor's review process and estimate the effects of these changes on credential completion efficiency, median debt, and loan repayment rates. Second, we look at NACIQI's 2016 accreditor dashboard pilot project to understand if this additional oversight influenced institutional behavior.

The 2008 Reauthorization of the HEA

The 2008 Higher Education Opportunity Act (HEOA), the last time the HEA was reauthorized, also marks the last time federal accreditation policy was changed significantly in the US.
Among other changes, the 2008 reauthorization granted institutions more flexibility to define the "quality standards" on student achievement by which they are to be judged by accreditors, restructured NACIQI by splitting appointments between the secretary of education and the two chambers of Congress, required accreditors to address transfer of credit, introduced greater transparency to the public, outlined rules for due process for institutions, required that all institutions publish definitions of credit hours, and explicitly required accreditors to monitor the enrollment growth of institutions.[10] Although the HEOA provided specific areas accreditors must consider in their review process, it strictly prohibited the secretary of

[8] Kim Levey, "When Accreditation Policy Changes: An Exploration of How Institutions of Higher Education Adapt," Pepperdine University Theses and Dissertations, 2019.

[9] J. Fredericks Volkwein, Lisa R. Lattuca, Betty J. Harper, and Robert J. Domingo, "Measuring the Impact of Professional Accreditation on Student Experiences and Learning Outcomes," Research in Higher Education 48, no. 2 (2007).

[10] Vincent Sampson, "Dear Colleague Letter: Summary of the Higher Education Opportunity Act," Office of Postsecondary Education, December 2008, 78.

education from "establishing any criteria that specifies, defines, or prescribes the standards that accrediting agencies use to assess any institution's success with respect to student achievement."[11] Without any strict guidelines, each accreditor has the ability to self-determine if it meets these standards, with the five-year NACIQI review being the only opportunity for the federal government to weigh in. This process is an example of how the patchwork accreditation system results in colleges across the country being held to different quality standards.

Data Availability

We began the study by looking for information on how regional accreditors responded to changes in federal accreditation policy. The scope of our study required that the information be publicly posted by an accreditor on their website. Although there are many ways to evaluate how an accreditor responded to federal policy changes, we found that analyzing meeting minutes and versions of an accreditor's Standards of Accreditation from around the time new federal policy changes took place was the most useful practice. Because regional accreditors are the largest accrediting bodies, we focused our search on them; most have detailed Standards of Accreditation and other policy documentation from the most recent years readily available on their websites. The Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) had the most comprehensive historical meeting minutes and Standards of Accreditation available on their website. In fact, most regional accreditors had no historical documents publicly available. These documents from SACSCOC allowed us to evaluate how the accreditor responded to the 2008 reauthorization, making SACSCOC an ideal choice for our study. After reviewing documents, we manually coded accreditation actions and institutional outcomes.
We have included these data as a downloadable appendix on our website.

Approach

In this section we descriptively analyze institutional outcomes following the 2008 HEOA and seek to understand if changes in federal oversight of accreditors influenced behaviors in ways that impacted student outcomes. We also seek to understand whether the reauthorization influenced accreditor behavior in ways that impacted an institution's likelihood of reaffirmation or sanction. We utilize a difference-in-differences quasi-experimental design to evaluate whether SACSCOC policy changes made in response to the HEOA resulted in statistically significant changes in various outcome measures chosen to reflect a theoretical increase in educational quality, efficiency, and labor market value of credentials.

Following the 2008 reauthorization, regional accreditation agencies such as SACSCOC were required to update internal policies governing the accreditation of the institutions they oversee. In the case of SACSCOC, the board of trustees met multiple times over the course of several years to discuss the new law and update the agency's policies to comply with federal mandates. These discussions culminated in several changes, most significantly an update to the agency's "guiding document," the Principles of Accreditation. Institutions accredited by SACSCOC would now be required to publish transfer criteria, provide the definition of a credit hour, evaluate

[11] Ibid.

student outcomes according to their mission, and take measures to protect against fraud in distance education. This change officially took effect January 1, 2012, and any institution that sought reaffirmation of its accreditation status after that date was subject to the new policies.

We hypothesize that the improvements in efficiency and quality of education that result from accreditation policy changes will manifest in increased productivity of degree completions (as measured by completions per 100 FTE), lower levels of undergraduate federal student loan debt, and greater levels of undergraduate federal loan repayment.

The sample for our analysis includes all institutions that received a decision on reaffirmation from SACSCOC from 2012 through 2017 (i.e., cohorts that began the reaffirmation process from 2009 through 2014). This provides us with a set of institutions subject to the old rules and a set of institutions subject to the new rules that went into effect January 1, 2012. We specify our model with and without fixed effects, as well as across our restrictive sample of data, to test for differential effects. We also employ modeling techniques created by Cengiz and colleagues and Callaway and Sant'Anna to estimate the treatment effect of each cohort of institutions separately and create a weighted treatment effect that accounts for the staggered implementation of policies.[12]

To estimate accreditation outcomes, we use the same sample and implement a linear probability model to test if institutions that began the reaffirmation process after the 2012 changes were more likely to be denied reaffirmation. We use a binary variable equal to "1" if an institution going up for reaffirmation was denied and placed on sanction by SACSCOC as the main outcome variable.
The model uses the same set of controls described earlier and was run with and without fixed effects to check the robustness of our findings.

While this model only accounts for institutions that underwent the reaffirmation process, there are several "negative actions" that SACSCOC can take against an institution during the course of the year. Actions such as "denying the request for a substantive change" or "placing an institution on warning" operate on a different timeline than the reaffirmation process. These actions are broadly referred to as "Sanctions and Other Negative Actions" in the SACSCOC meeting minutes. To account for the various timelines associated with different negative actions, we have created an interrupted-time-series model aimed at understanding the difference in incidence of negative actions and sanctions handed down by SACSCOC before and after the 2012 policy changes. The sample for this model consists of every institution that was either reaffirmed or faced a negative action by SACSCOC from 2009 to 2016.

Findings

The following 2x2 tables outline the difference in outcomes across control and treatment groups in the period before and after the policy changes were implemented. Before sharing the results

[12] See Doruk Cengiz, Arindrajit Dube, Attila Lindner, and Ben Zipperer, "The Effect of Minimum Wages on Low-Wage Jobs," The Quarterly Journal of Economics 134, no. 3 (2019): 1405-1454, https://doi.org/10.1093/qje/qjz014, and Brantly Callaway and Pedro H.C. Sant'Anna, "Difference-in-Differences with Multiple Time Periods," Journal of Econometrics 225, no. 2 (2020): 200-230. Additional details on our data and estimation strategy can be found in the methodological appendix.

of the implemented model, this simple picture of the difference-in-differences approach provides an informative starting point.

Table 1: Simple DiD of Main Outcomes
[Table flattened in transcription: pre-period (2009, 2010 cohorts) and post-period (2012-2014 cohorts) averages of BA completions per 100 FTE, AA completions per 100 FTE, median debt, and one-year repayment rate for the control and treatment groups, with the difference for each outcome; most values are not recoverable from the source.]

Here we see the average level of each outcome variable for the control and treatment groups in the pre- and post-treatment periods, as well as the difference between these groups across periods. As a reminder, the control group is made up of any institution that received a decision on reaffirmation from SACSCOC in 2012 or 2013 (cohorts that began the reaffirmation process between 2009 and 2010), and the treatment group is any institution that received a decision on reaffirmation in 2015, 2016, or 2017 (cohorts 2012, 2013, 2014). Institutions in the 2011 cohort (which received a decision in 2014) have been excluded from our analysis to account for fuzziness in implementation date, because it is unclear from policy documents whether they would have been subject to the new accreditation standards. The descriptive results show modest growth in each of the completions per FTE metrics for the treatment group compared to the control group. The level of median debt increased for both the treatment and control groups from the pre- to post-policy period at an almost identical rate. Similarly, the one-year repayment rate decreased for both groups, with a difference of only about 0.4 percent in the final difference.

Table 2 provides the estimates only for the main specification of the model, which includes institution and year fixed effects. The point estimates of the treatment x post-period interaction variable are not far from what the simple difference of averages shows.
For example, on average, all else equal, the application of treatment is expected to increase bachelor's degree credential production per 100 FTE by 1.03 degrees. However, none of these results are statistically significant; thus there is no statistical evidence that the SACSCOC policy changes resulted in improvements in credential production per FTE, median debt, or one-year repayment rate.
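To make the estimation strategy concrete, the two-way fixed-effects difference-in-differences specification described above can be sketched on simulated data. This is a minimal illustration, not the report's code or data: the panel size, outcome values, and the +1.0 "true effect" are all invented for the example, and the interaction coefficient plays the role of the Treat x Post estimates in Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: 40 institutions observed 2009-2014. Half are "treated"
# (subject to the post-2012 standards). All numbers are illustrative.
n_inst = 40
years = np.arange(2009, 2015)
inst = np.repeat(np.arange(n_inst), len(years))
year = np.tile(years, n_inst)
treated = (inst < n_inst // 2).astype(float)   # treatment-group indicator
post = (year >= 2012).astype(float)            # post-policy indicator

# Outcome: completions per 100 FTE with institution effects, a common
# year trend, a true treatment effect of +1.0, and noise.
true_effect = 1.0
inst_effect = rng.normal(20.0, 2.0, n_inst)[inst]
y = (inst_effect + 0.3 * (year - 2009)
     + true_effect * treated * post + rng.normal(0.0, 0.5, inst.size))

# Two-way fixed-effects OLS: institution dummies, year dummies (one
# dropped to avoid collinearity), and the Treat x Post interaction.
X = np.column_stack([
    (inst[:, None] == np.arange(n_inst)).astype(float),  # institution FE
    (year[:, None] == years[1:]).astype(float),          # year FE
    (treated * post)[:, None],                           # DiD interaction
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = float(beta[-1])
print(f"DiD estimate: {did_estimate:.2f} (true effect {true_effect})")
```

Here the estimate recovers the simulated effect; in the report's data the analogous coefficient is statistically indistinguishable from zero.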

Table 2: Main Specification of Difference-in-Differences Models

                 Completion   Bachelor's    Associate's   Median   One Year
                 Efficiency   per 100 FTE   per 100 FTE   Debt     Repayment Rate
Treat x Post     [coefficients not recoverable from the source]
R-squared        0.033        0.035         0.290         0.336    0.641
Number of Inst   363          221           142           307      307
Controls         YES          YES           YES           YES      YES
Year FE          YES          YES           YES           YES      YES
Inst FE          YES          YES           YES           YES      YES

Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

These estimates are robust to our other specifications. Running the model without fixed effects gave point estimates similar to those of the main specification. In addition, the statistical significance remained unchanged.

"Treatment" in this instance may not actually occur for each cohort of institutions at the same time. There is a possibility that institutions may not alter their behavior immediately following the SACSCOC policy changes but would instead react to the policy changes once they themselves began the reaffirmation process, or that the time until the next reaffirmation may influence institutional behaviors. To investigate this, we replicate the strategy of Cengiz and colleagues to estimate the treatment effect of each cohort separately. The institutions in the 2012 cohort will be analyzed as the treatment group, then the institutions in the 2013 cohort, and finally the institutions in the 2014 cohort. Treatment, in this case, occurs when each cohort begins the reaffirmation process: the 2012 cohort undergoes treatment in 2012, the 2013 cohort in 2013, and the 2014 cohort in 2014. For the cohort-based analyses our control group consists of the institutions that began the reaffirmation process prior to the accreditation policy changes, in 2009 or 2010. We also follow Callaway and Sant'Anna's approach using the Stata command csdid, which creates a weighted average treatment effect that accounts for the staggered start of accreditation review across cohorts.
The results of the Cengiz analysis give point estimates that mostly match those of the main specification in sign and magnitude, suggesting cohorts are responding similarly regardless of time until reaffirmation. The statistical significance of these estimates is, again, unchanged in our robustness checks. Similarly, our results from the Callaway and Sant'Anna approach do not provide evidence that the staggered timing of review is impacting our primary estimates.
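The cohort-by-cohort logic above can be sketched as follows. This is a simplified, simulated illustration of the Cengiz-style strategy (a 2x2 comparison per treated cohort against a pre-policy cohort, combined with cohort-size weights); the cohort sizes and outcome values are invented, and the report's actual estimation uses regressions with controls rather than raw means.

```python
import numpy as np

# Each treated cohort (reaffirmation begun in 2012, 2013, or 2014) is
# compared against a pre-policy cohort, then cohort effects are combined
# with cohort-size weights. A true effect of +1.0 applies from each
# cohort's treatment year onward. All numbers are illustrative.
years = np.arange(2009, 2017)
cohort_sizes = {2012: 8, 2013: 9, 2014: 7}   # invented sizes
true_effect = 1.0

def outcome(cohort, year):
    """Mean outcome for a cohort in a year: common trend plus effect."""
    base = 20.0 + 0.3 * (year - 2009)
    return base + (true_effect if cohort >= 2012 and year >= cohort else 0.0)

def cohort_did(g, control_cohort=2009):
    """2x2 DiD for treated cohort g vs. a pre-policy control cohort."""
    pre, post = years[years < 2012], years[years >= g]
    diff = lambda c: (np.mean([outcome(c, y) for y in post])
                      - np.mean([outcome(c, y) for y in pre]))
    return diff(g) - diff(control_cohort)

effects = {g: cohort_did(g) for g in cohort_sizes}
weights = [cohort_sizes[g] for g in effects]
avg_effect = float(np.average(list(effects.values()), weights=weights))
print(effects, round(avg_effect, 2))
```

The weighted average here is the analogue of the aggregated effect produced by the Callaway and Sant'Anna csdid estimator the report uses.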

Table 3: Main Specification of Linear Probability Model

                 (1) LPM - Denial     (2) LPM - Denial
                 of Reaffirmation     of Reaffirmation
Treat x Post     [coefficients not recoverable from the source]
Observations     2,178                2,178
R-squared        0.070                [not recoverable]
Number of Inst   363                  363
Controls         YES                  YES
Year FE                               YES
Inst FE                               YES

Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

The primary results of the linear probability model are shown in Table 3. The point estimates remain similar, and the statistical significance holds when the model is run with and without fixed effects. The results suggest institutions that began the reaffirmation process after 2012 are about 3.13 percentage points more likely to have been denied reaffirmation than those institutions that began the process prior to 2012.

The number of negative actions handed down to institutions varies by year but does not seem to increase after the policy changes in 2012. Figure 1 shows that there is considerable variability from year to year in the number of negative actions delivered by SACSCOC, ranging from 26 in 2014 to 41 in 2011.
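The linear probability model in Table 3 regresses a 0/1 denial outcome on the treatment indicator, so its coefficient reads directly in percentage points; with a single dummy and no controls it reduces to a difference in denial rates. A minimal sketch on simulated data (the baseline 5 percent denial rate and the +3-point true effect are assumptions chosen for illustration, near the report's 3.13-point estimate):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated reaffirmation decisions: baseline denial rate of 5%, plus a
# true +3 percentage-point effect for institutions that began the
# process after the 2012 policy change. All numbers are illustrative.
n = 2000
post = rng.integers(0, 2, n).astype(float)   # began process after 2012?
denied = (rng.random(n) < 0.05 + 0.03 * post).astype(float)

# OLS of the binary outcome on an intercept and the post indicator: the
# slope estimates the change in denial probability.
X = np.column_stack([np.ones(n), post])
beta, *_ = np.linalg.lstsq(X, denied, rcond=None)
effect_pp = float(beta[1]) * 100
print(f"estimated effect: {effect_pp:.1f} percentage points")
```

The report's version adds controls and institution and year fixed effects on top of this basic structure.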

Figure 1: Negative Actions or Sanctions Delivered by SACSCOC, by Year, 2009-2016
[Figure not reproduced in transcription.]

Table 4: Interrupted Time-Series Analysis of Negative Actions
[Table flattened in transcription: estimates for Time Trend, Policy, and Time x Policy across the full sample and less restrictive samples (including cohort-specific variants); coefficient values and institution counts are not recoverable from the source. Standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1.]

Table 4 provides estimates from our ITS model of the relationship between SACSCOC's new accreditation standards and negative actions taken against member institutions. The coefficient on the variable Policy provides an estimate of the immediate effect of the change. The coefficient on Time x Policy provides an estimate of the annual effect following implementation. The resulting negative point estimates associated with the 2012 policy change imply that SACSCOC may have been less likely to deliver a negative action or sanction immediately following the policy. These results are, however, mostly insignificant, and small in size at a one percentage point decrease in the likelihood of receiving a sanction. The results of the most rest
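The interrupted-time-series specification behind Table 4 can be sketched as follows. The yearly counts here are constructed (not the report's data) with a known -3 level shift at the policy change and a -1 post-policy slope, so the fit recovers the Policy and Time x Policy coefficients exactly; the report's actual coefficients are different and mostly insignificant.

```python
import numpy as np

# Interrupted time series on yearly counts of negative actions,
# 2009-2016. Policy captures the immediate level shift at the 2012
# standards change; Time x Policy captures the change in slope after it.
years = np.arange(2009, 2017)
t = (years - 2009).astype(float)         # time trend
policy = (years >= 2012).astype(float)   # post-2012 indicator
t_since = policy * (years - 2012)        # Time x Policy

# Illustrative counts built from a known -3 level shift and -1 slope.
actions = 38.0 - 0.5 * t - 3.0 * policy - 1.0 * t_since

X = np.column_stack([np.ones_like(t), t, policy, t_since])
beta, *_ = np.linalg.lstsq(X, actions, rcond=None)
level_shift, post_slope = float(beta[2]), float(beta[3])
print(f"Policy: {level_shift:.2f}, Time x Policy: {post_slope:.2f}")
```

With only eight yearly observations, as in the SACSCOC series, such estimates are naturally imprecise, which is consistent with the mostly insignificant results the report describes.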
