Richard Vedder participates in a panel discussion moderated by the American Enterprise Institute’s Alex Pollock. Together with Andrew Kelly of AEI, Matt Chingos of the Brookings Institution, and Jason Delisle of the New America Foundation, Vedder discusses how the current federal financial aid system has directly driven up college tuition. He points out eight deficiencies in the current federal student assistance programs, which are discussed at length in the recent CCAP study Dollars, Cents and Nonsense: The Harmful Effects of Federal Student Aid.
“What if that rate of tuition increases had continued after 1978 instead of the 3 to 4 percent increases actually observed? What would tuition fees be today? About 59 percent lower. State universities charging $10,000 in-state fees (a common fee today) would instead be charging a bit over $4,000.”
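The compounding claim in the quote can be checked with a quick back-of-the-envelope calculation. The 35-year window and the 3.5 percent observed growth rate below are assumptions chosen to illustrate the arithmetic, not figures taken from the study:

```python
# Hypothetical illustration of the compounding claim quoted above.
# The 35-year window (roughly 1978 to the time of writing) and the 3.5%
# observed rate (midpoint of "3 to 4 percent") are assumptions, not study data.
years = 35
observed_rate = 0.035
actual_fee = 10_000                 # a common in-state fee today

# Solve for the counterfactual annual rate that would leave fees 59% lower.
counterfactual_fee = actual_fee * (1 - 0.59)            # a bit over $4,000
ratio = counterfactual_fee / actual_fee
implied_rate = (1 + observed_rate) * ratio ** (1 / years) - 1

print(f"counterfactual fee today: ${counterfactual_fee:,.0f}")
print(f"implied annual growth rate: {implied_rate:.1%}")
```

Small differences in annual growth rates compound dramatically over three and a half decades, which is the point the quote is making: an annual increase roughly 2.5 percentage points lower would leave today's fees about 59 percent lower.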
View the full discussion below.
To some, pursuing a degree in the humanities is a noble and laudable pursuit. Be it in English or comparative literature, humanistic study can expand human knowledge and understanding. However, as the Modern Language Association (MLA) reports, doctoral programs in these disciplines may prove more damaging than previously envisioned. With a median time-to-degree of nine years, a sparse job market, and limited funding, the field has fallen into serious academic jeopardy.
Doctoral students in the humanities take an average of 9.2 years to complete their degrees; between 2006 and 2010, nearly 44 percent took more than ten. Workforce entry, family planning, and financial security are all delayed for the duration of the program, and compared with other advanced degrees – such as an MD, JD, or MBA – a nine-year commitment might seem particularly unattractive.
According to the National Science Foundation’s 2012 Doctorate Recipients from U.S. Universities and Colleges, roughly 1,928 of the 5,503 humanities doctorates surveyed – or 35 percent – are “seeking employment or study.” Additionally, the academic job market, which employs 82.8 percent of doctorates, is declining. According to the MLA’s Job Information List (JIL), tenure-track positions in the humanities have declined sharply since the 2008-09 academic year. Today, fewer than 30 percent of faculty hold tenure or are on the tenure track.
Federal support for the humanities is similarly disconcerting. According to the Humanities Indicators project, federal spending on the humanities equaled just 0.48 percent of science and engineering expenditures, while academic institutions covered more than 55 percent of expenses. Doctoral students now incur more than $20,000 in student debt.
Reforming the humanities doctorate by limiting degree completion and encouraging careers outside academia can ease uncertainty. Students can benefit by broadening the conventional job market and incurring more tenable debt. The future of humanistic study nonetheless rests on its vindication, and until more serious measures are taken, its legitimacy will remain in doubt.
With Dollars, Cents and Nonsense: The Harmful Effects of Federal Student Aid, CCAP provides an in-depth analysis of structural issues within higher education and of the incentives produced by the federal student financial aid system, and offers reforms and solutions to improve college.
From the introduction:
The federal government was once almost nonexistent in higher education. Now, it plays an important role in financing attendance at America’s colleges and universities, and funds vast amounts of collegiate research. The intentions of those responsible for expanding federal involvement in higher education were admirable, although the Higher Education Act of 1965 was part of a Great Society effort to solidify and expand the progressive wing of the Democratic Party in determining national public policy. Beyond politics, however, most advocates of an expanded federal role wanted to use higher education to improve educational opportunity for those of modest financial means, thereby increasing economic opportunities and narrowing differences between the affluent and the poor.
After reviewing the various federal programs that evolved to assist college students, we conclude that they have largely failed. For example, the proportion of lower-income recent college graduates is lower than when these programs were in their infancy. The programs are complex and Byzantine, leading to forms such as the FAFSA (Free Application for Federal Student Aid), whose very complexity has reduced participation by low-income students. The law of unintended consequences has reared its ugly head.
For more information and an overview of main points, see the study’s research page.
With the advent of the student loan crisis and the inflationary cost of college, one would be hard-pressed to find a more opportune time to eliminate the marginal research programs that have plagued higher education. Because the federal government funds almost 60 percent of university research, universities and campus administrations have pushed faculty to produce an exorbitant amount of it.
To be sure, federal grants have led to significant advances in the health and sciences. From World War II to the Cold War, it can be argued that erudite scientific inquiry helped set the stage for American preeminence at the turn of the century. By 2005, more than half of all leading researchers and Nobel Prize laureates belonged to American university faculties.
Scholarly research, however, comes with a price tag. Lower teaching loads, increased discretionary time, and the de-emphasis of instruction have been unintended consequences. Between 1988 and 2004, for instance, it is estimated that teaching loads at research universities dropped 42 percent. What’s more, faculty members, responding to incentives, succumbed to a “publish or perish” mentality in which quantity beats quality. In the field of English and foreign literature, for example, scholarly publications rose from 13,757 in 1959 to around 70,000 in recent years.
The dilemma arises when paltry research is subsidized. According to Derek Bok in Higher Education in America, a remarkable 98 percent of all articles in the arts and 75 percent in the social sciences are never cited. What’s worse, departmental research tends to be highly specialized, bearing little relevance to undergraduate learning. Through a nationwide emphasis on research over instruction, universities have fostered an environment that stresses scholarship over students.
The best way to combat the increasingly deleterious effects of trivial research might be to reexamine its purpose. If English professors, for instance, are more effective in the classroom than in the library, the impetus to publish should be reconsidered. Departments that systematically favor research over instruction should reassess their mission and ask whether departmental literature is the best route to it. In the meantime, however, the federal government should reevaluate how it finances academic inquiry. The most germane solution might be the simplest – namely, reallocating funds to more relevant disciplines. In doing so, the government might just make research – and college – more affordable.
The Obama administration plans to release a federal rating system for colleges, with an unspecified launch date, to measure the benefit of attending a specific college. The administration hopes to use the ratings to determine how much financial aid colleges receive. Regardless of the system’s quality or accuracy, tying financial aid to it is deeply problematic.
The president argues that colleges often act in their own interest to improve their rankings, and that the federal ratings will ameliorate that effect. Tying financial aid to ratings, however, will exacerbate the problem. The huge financial incentive offered by financial aid (the ability to attract more students) will push colleges to aim for high scores even if they don’t improve any outcomes for students. And because financial aid increases students’ ability to pay, through scholarships or, more commonly, loans, colleges will be able to charge more. Even if the rating system accounts for affordability (which it almost certainly will, since increasing affordability is the system’s main goal), it will always be possible to hide costs, for example by including cheap, low-quality housing in the price estimate while charging students more for higher-quality housing.
The administration’s overconfidence does not engender any hope for the quality of the system: Jamienne Studley, a Department of Education official, claimed that rating the higher-education system is “like rating a blender.” As Reason’s Robby Soave points out, the claim is absurd. Unlike blenders, colleges are expected to perform several tasks: house and feed students, teach them a huge variety of subjects, help them find employment, and so on. And while a blender will blend the same regardless of the quality of the fruit, the outcomes of college depend heavily on the characteristics of the student enrolling. If the administration thinks a college can be rated as easily as a blender, its rating system will be oversimplified and offer no real information.
Rather than increasing college accessibility, the proposed system will encourage colleges to reach their metrics without actual improvement. Additionally, given the simplistic approach that seems to be taken on the ratings, the metrics it sets forth may not even be worthwhile goals. Most problematically, by tying financial aid to the ratings, the system will increase already overinflated college costs.
Last Wednesday, Richard Vedder testified at a Senate budget committee hearing concerning student financial aid, debt, federal loan interest rates, and related topics. Video of the hearing is on C-SPAN, but CCAP has Dr. Vedder’s unabridged testimony online. An excerpt follows:
I wish to make three key points this morning. First, the current student loan debt crisis would never have happened had college costs increased at the general rate of inflation. The major cause of the student debt problem is increased university fees – period. To deal long term with this issue, you must address the root cause, namely runaway college cost inflation.
Second, there are many reasons for this university price inflation, some of which are mentioned in this written statement that I submit for the record. But one relevant major contributor to the rise in tuition fees, in my judgment, is the federal student financial assistance program itself. No significant successful solution to the problem of rising college costs can occur without rethinking the magnitude and nature of the federal financing role.
Third, we are at or near a tipping point, where fundamental change will come to higher education. Early indications are that these changes are starting to happen. I will elaborate a bit on this. I will argue that many policy proposals gaining prominence these days do not fundamentally address the problems leading to big changes, and, indeed, they would likely worsen rather than improve the existing situation.
Recently, as attention has focused on the rising cost of college tuition, some colleges have moved toward a guaranteed tuition model, which ensures that the nominal price a student (or a student’s parents) pays will not increase from year to year over four years. Supporters say it helps parents plan for the student’s financial needs.
A guaranteed tuition plan takes the expected tuition total for four years and divides by four to get the annual payment. It means students pay more in the first two years than they would under an unbounded tuition system, and less in their last two years.
Who does this help? In at least one significant way, universities. According to the National Center for Education Statistics, almost 41 percent of students fail to graduate within six years. Consider students who drop out or transfer during the first year: they pay more for the year they are enrolled than they would under a traditional tuition system, so the schools capture a premium on those students. A guaranteed tuition system does not give schools an incentive to push students out, but for students who drop out or transfer anyway, schools collect higher tuition revenue. Statistics show that failure to complete is higher for minority and low-income students, exacerbating this problem.
For instance: Ohio University will move to a guaranteed tuition system in Fall 2014 for the class of 2018. Under the current system, the National Center for Education Statistics tuition calculator estimates a 2.8 percent annual tuition increase, for a four-year in-state total of $44,845. A guaranteed tuition model would charge $11,211.25 per year rather than $10,744 the first year, $11,050 the second, $11,364 the third, and $11,688 the fourth. Given first-time, full-time freshman enrollment and a first-year retention rate of 79 percent, the university would collect roughly an additional $500,000 under a guaranteed tuition model on freshman-year transfers and dropouts alone. This does not even count those who transfer or drop out after their sophomore year.
Source: U.S. Department of Education, author’s calculations.
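The arithmetic above can be sketched in a few lines. The annual fees and the 79 percent retention rate come from the example; the freshman class size is a hypothetical placeholder, since the actual enrollment figure is not stated here:

```python
# A minimal sketch of the Ohio University guaranteed-tuition arithmetic.
# Fees and the retention rate come from the example in the text;
# the freshman class size is a hypothetical placeholder.
yearly_fees = [10_744, 11_050, 11_364, 11_688]   # NCES-projected annual tuition
four_year_total = 44_845                          # NCES calculator estimate
guaranteed_fee = four_year_total / 4              # flat annual charge

# Premium collected on a student who leaves after the first year.
first_year_premium = guaranteed_fee - yearly_fees[0]

retention = 0.79
freshman_class = 4_000           # hypothetical class size, for illustration only
leavers = freshman_class * (1 - retention)
extra_revenue = leavers * first_year_premium

print(f"guaranteed annual fee: ${guaranteed_fee:,.2f}")          # $11,211.25
print(f"premium per first-year leaver: ${first_year_premium:,.2f}")
print(f"extra revenue on freshman leavers: ${extra_revenue:,.0f}")
```

With this placeholder class size the premium on first-year leavers comes to roughly $390,000; reproducing the approximately $500,000 figure cited above would imply a somewhat larger entering class.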
By guaranteeing tuition, a university ensures that annual tuition payments will not increase, which allows families to plan their financing better. Students pay an above-cost price in the first two years and a below-cost price in the latter two, and each cohort faces a price that reflects the average cost of attendance over the next four years. The university collects more from students who leave, drop out, or are awarded scholarships for later years than it does under a non-guaranteed system, and the difference can be significant.
The primary purpose of colleges and universities is to educate their students, and it’s necessary for professors at these institutions to teach effectively. Evaluating the teaching quality of professors allows students to make better decisions about which classes to take and incentivizes good teaching. Schools usually evaluate teaching quality in three ways: standardized testing, the success of students after graduation, and student evaluations. However, no method can be objective, and although subjective reports can be useful for students, systematically incentivizing good teaching is next to impossible.
Standardized testing, beginning with the No Child Left Behind Act, has been used at the K-12 level to measure teaching quality, with mixed results at best. There is no evidence to suggest it would fare better at the post-secondary level, and there are compelling reasons to think implementing it would have negative consequences. It would homogenize what students know and narrow coursework down to the bare necessities. It would also incentivize professors to teach to the test instead of for comprehension of the material. Most significantly, it would impair less privileged students’ ability to attend college, since institutions would only want students certain to perform well.
Measuring teaching quality through graduate success poses problems: it is institutional, not individual, feedback, and it does not account for the effect of the individual student on outcomes. Moreover, good teaching does not always result in post-graduate success; some courses do not correlate with material well-being, no matter how well they are taught.
Student evaluations are the least of the three evils, as they offer direct information about teachers. For potential students, knowing how other students view a professor helps them assess the value of a class. Yet one can already see the main complication from a website such as Rate My Professors: selection bias. Only students with strong opinions about the teacher will submit a rating. Mandating them won’t fix the problem either, as students who have no interest in providing feedback will not evaluate a professor accurately or fairly.
To resolve the issue, Rebecca Schuman at Slate suggests attaching names to student evaluations. Accountability might increase the care put into evaluations, but it does not resolve the fundamental problem of student bias, and it would discourage negative feedback and constructive criticism. A student who plans to take a professor again (fairly likely for anyone majoring or minoring in that professor’s department) will not want to strain the relationship with a critical evaluation. If evaluations cannot be negative, they are pointless.
While student evaluations might help students choose teachers and classes, none of the three methods is sufficient if we want an objective measure of teaching quality. Although it is important for schools to be able to ensure that their professors teach effectively, more sophisticated methods will be needed to determine the quality of a teacher. Teaching quality is an essential part of a professor’s job, and one might think professors should have their teaching evaluated and be rewarded based on the results. The sentiment is sensible, but the inherent bias precludes using evaluations to incentivize better teaching in higher education.
by Christopher Denhart and Joseph Hartge
Earlier this year, the Bureau of Labor Statistics released its 2012-2022 employment projections report, analyzing the employment landscape and its relation to educational attainment. It concludes that over half of United States workers are overqualified for their jobs.
Using the data, we can distinguish between the level of education “typically required for employment” in a particular occupation and the number of people at various levels of education who hold that job. A total of 52.6 percent of employed people are overqualified for their jobs, meaning that their highest level of education exceeds the typical requirements. So don’t be surprised if your taxi driver is as educated as you are: one out of 20 taxi drivers holds a bachelor’s degree or higher, and 85 percent have at least a high-school education, schooling the BLS says is unnecessary for the job.
Being overqualified for a job carries an opportunity cost. A graduate waiting tables at a restaurant forfeited four years of wages while paying tuition. But fear not: he is in good company, alongside 354,330 other graduates.
Most people do not plan to take those jobs. The condition of the labor market plays a role in labor outcomes. Many who, out of desperation, take jobs that don’t require a degree search for better employment as they work. However, this phenomenon, known as frictional unemployment, does not seem to explain the pernicious underemployment problem.
Are college degrees still as strong of a signal for employers as colleges claim? Is the labor market flooded with degrees? If so, is it the specific degree’s fault or because companies are not hiring? Most importantly, why is this problem so persistent?
Most likely, it’s signaling. Regardless of whether a job requires a degree, an employer is likely to choose the more educated applicant. The degree conveys information to the employer that is otherwise costly to obtain: a bachelor’s degree serves as an indication of work ethic, discipline, and general intelligence, while a high-school diploma alone might suggest laziness or a lack of ambition. Even if that misrepresents the applicants, or the job does not require a degree, a college graduate is usually a safer hire than a high-school graduate. That reasoning crowds out qualified applicants who lack a degree, pushing them down to jobs they are overqualified for. When there is nowhere left to trickle down, underemployment and unemployment rise.
The popular line is that unemployment rates decline and income increases with improved education. Therefore, college is the only way for a high-school graduate to prosper. But not necessarily. No law of economics says a college degree is a ticket to success, nor should we treat it as one.
The fact that many graduates are in positions that don’t require a degree suggests that too many people attend college. Combine that with rising tuition costs, one trillion dollars in federal student loan debt, and a college incompletion rate of approximately 40 percent, and it’s an equation for an economic disaster in education.
When a college ranking is released or updated, backlash inevitably follows. Usually, colleges that performed poorly lead the charge, but the backlash is also a reaction to a root issue: higher education is so diverse that it is impossible to compare schools objectively.
Which is great! Diversity gives students a wide range of paths to pursue their education, and universities can pursue different goals. Would the system be better if students were limited to Harvard or the local community college? Yet, as a result, any metric or ranking will be of limited use in comparing some schools. Community colleges, liberal arts colleges, and state research universities do not expect the same things from students, and it is tricky to measure them on an equal footing.
So, when Jordan Weissmann at Slate noted a new return on investment (ROI) metric released by Payscale.com, criticism of using an ROI metric came swiftly. Unsurprisingly, Weissmann found many colleges with a negative ROI; that is, the graduate would be better off if he or she didn’t attend. Weissmann then wrote a rebuttal to the criticism.
On a certain level, discussions that never address what college is for are fruitless. College serves three purposes: economic, hedonic, and intellectual. It is job training, but it is also a consumption good to be enjoyed, and a process of self-fulfillment in the pursuit of truth and knowledge.
Payscale’s ROI figure treats college as job training, and Weissmann evaluates Payscale’s usefulness through that lens. The best argument against it is not that “college isn’t just about getting a job”; it is to point out the limitations of any single metric. In fact, an ROI metric is relatively weak because of data limitations: colleges resist transparency, especially about graduate earnings, and Payscale relies on self-reported data. That is problematic, but it remains more useful than anything else legally available until higher education institutions provide the information themselves. Otherwise, we’ll remain standing on the quad surrounded by fog.
I’m not sure Payscale claims to present an all-encompassing ranking; criticisms of their rankings reveal more about the critics than the target.