Can educational return on investment be meaningfully measured?

Various methods exist to help students decide which courses will pay off, but all should be taken with a grain of salt, say David Levy and Harvey Graff

July 30, 2023

For years, the concept of direct economic return on investment (ROI) in college education has attracted attention. However, it wasn’t until the US Department of Education launched the College Scorecard (CSC) in 2013 that meaningful value comparisons between colleges and programmes became possible.

By focusing on tangible financial outcomes, the CSC – like the UK’s Longitudinal Educational Outcomes dataset – provides earnings data that directly address a critical concern for students and families: is college worth the increasingly large investment?

However, assessing educational ROI can be complex. Most ranking methodologies grapple with a central challenge: reconciling programme costs with their economic value. The main difficulty is adequately accounting for differences in earning potential between colleges and programmes.

The Washington DC-based thinktank Third Way has been a pioneer in CSC-based college rankings. Its price-to-earnings premium (PEP) estimates the amount of time required to recoup the cost of a degree or credential at a particular institution, based on salaries shortly after graduation. For example, if the total cost of attending a school is $50,000 and a student on average earns $25,000 a year more than they would otherwise earn with just a high school diploma, the PEP would be 2: two years to recoup the cost.
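In code, that calculation is simply the total cost of attendance divided by the annual earnings premium. The Python sketch below uses the figures above; the function name and the rendering are ours, a minimal illustration rather than Third Way's published implementation.

```python
def price_to_earnings_premium(total_cost: float, annual_premium: float) -> float:
    """Estimated years needed to recoup the cost of a degree or credential.

    total_cost: total net cost of attending the institution.
    annual_premium: yearly earnings above a typical high-school graduate's,
                    measured shortly after graduation.
    """
    return total_cost / annual_premium

# The example above: a $50,000 programme and a $25,000 annual premium.
print(price_to_earnings_premium(50_000, 25_000))  # 2.0, i.e. two years to pay back
```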

However, the PEP treats two schools with the same ratio of cost to earnings premium identically, even if the higher-cost one delivers substantially higher absolute earnings. A $100,000 programme whose graduates earn a $50,000 annual premium also scores a PEP of 2, although its graduates take home twice as much.

The net present value (NPV) model from Georgetown University's Center on Education and the Workforce (CEW) uses average earnings 10 years after students' first enrolment to calculate the value of earnings over the following 40 years. But that admirable aspiration is very hard to fulfil because earnings growth varies by field. For example, a nursing major might earn a comparatively high income a few years out of school, but a liberal arts major might overtake them in mid- and late-career.
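For illustration, a generic net-present-value calculation discounts a projected stream of future earnings back to today. The sketch below is not CEW's actual model: the flat earnings profile and the 2 per cent discount rate are placeholder assumptions, and only the 40-year horizon comes from the description above.

```python
def net_present_value(annual_earnings: float, years: int = 40,
                      discount_rate: float = 0.02) -> float:
    """Present value of a projected earnings stream (illustrative only)."""
    return sum(annual_earnings / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Hypothetical: $60,000 a year, held flat and discounted over 40 years.
print(round(net_present_value(60_000)))
```

The flat earnings profile is exactly where such models strain: nursing and liberal arts graduates follow earnings curves that a single early-career snapshot cannot distinguish.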

The methodology that one of us developed for Degreechoices calculates a metric dubbed “economic score” by adjusting PEP according to how a college’s earnings compare to a benchmark of similar schools, adjusted as far as possible by geography and programme ecology.

Imagine that the average earnings of the $50,000 school mentioned above were 120 per cent of the earnings benchmark. Its PEP (2) would be divided by 1.2, giving 1.67. The lower the score, the better, indicating a shorter payback period.
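As a minimal sketch of that adjustment, again using the numbers above (the function name and Python rendering are ours; the production methodology involves the benchmarking steps described earlier, which are not shown here):

```python
def economic_score(pep: float, earnings_ratio: float) -> float:
    """Payback period adjusted by how a college's earnings compare to its benchmark.

    earnings_ratio: the college's average earnings divided by the benchmark
    for similar schools. Lower scores indicate a shorter effective payback.
    """
    return pep / earnings_ratio

# The example above: PEP of 2, earnings at 120 per cent of the benchmark.
print(round(economic_score(2.0, 1.2), 2))  # 1.67
```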

While this does split the difference between the cost-focused PEP and the earnings-focused NPV, adjusting cost by one year of marginal earnings is essentially arbitrary. And, like Georgetown's model, it depends on short-term earnings figures, which devalues slow-burning majors that might perform much better over a longer time frame.

Most ROI methodologies rely exclusively on the CSC because it is the sole source of earnings data by college and programme, as well as of average educational costs. However, the first cohort it measured graduated only four years ago. To gauge longer-term earnings performance, two additional key data sources are available: the Bureau of Labor Statistics' (BLS) Occupational Employment and Wage Statistics and the Census Bureau's American Community Survey (ACS).

The BLS offers useful state- and national-level career data, but its cohorts span the entire labour market, so results are not broken down by age, college, degree level or major. While this might help determine mid-term earnings within a career category, it cannot track majors without a definitive career path, nor does it differentiate between schools.

The ACS surveys approximately 300,000 US households monthly, collecting information on a wide range of topics, including educational attainment, demographics, income and employment status.

There are many important differences between ACS and CSC earnings cohorts. The CSC reports on students by institution and groups them by enrolment or graduation year. ACS earnings data are reported by age and degree level. In other words, the datasets represent two different, albeit often overlapping, groups of people.

In addition, while the CSC uses federal tax returns for earnings data, the ACS relies on surveys of supposedly representative populations. Arguably, survey-based information is less reliable.

On the other hand, a critical deficiency in the CSC data is the absence of racial disaggregation. Rankings based on these data therefore unintentionally create disincentives for colleges to recruit students from disadvantaged backgrounds, whose average post-graduation earnings are often lower for reasons unrelated to the education they received.

The thinktank FREOPP has created a lifetime ROI analysis that tries to augment CSC data with ACS age-level earnings figures to estimate how the earnings of graduates from each major increase over a career, in order to arrive at an “entire-career” comparative earning figure. This complex extrapolation also attempts to adjust for various differences based on ethnicity, race, gender, geographical considerations and individual attributes such as cognitive ability, motivation, health and family background. But while each assumption might seem reasonable, the end result recalls the ship of Theseus: after all these adjustments, are we left with the same ship?

Moreover, a lifetime ROI analysis has, at best, a tenuous relationship with any student’s reality. The farther from the beginning of a career you look, the more variables, life experiences, choices, opportunities, failures and chance events impact earning potential in ways that cannot be measured.

Researchers must wait to see if the Department of Education will begin disaggregating the data as recommended. In the meantime, they should try to explain to all users why ROI rankings should be taken with a grain of salt.

David Levy developed the ranking methodology for degreechoices.com. Harvey J. Graff is professor emeritus of English and history, Ohio eminent scholar in literacy studies, and academy professor at Ohio State University.


Reader's comments (1)

The returns to education are a very interesting and relevant debate, and the article is extremely well researched. The returns to a degree should also include the opportunity cost. The OECD uses the minimum wage for convenience, but at the individual level it is only a benchmark. Benefits (and costs, of course, though these are typically short-term) should be discounted. The discount rate is usually the risk-free rate and is applied equally to all graduates irrespective of their degree. This is not appropriate: it should reflect the riskiness of the investment.
