In an effort to collect more information, The Chronicle of Higher Education recently expanded a project tracking adjunct pay across the nation. The Adjunct Project relies on adjuncts to submit information about their salaries to create a robust database that is easy to search by state and university. Considering the poor record of colleges and universities in releasing such information, the project provides access to a goldmine; the site gives analysts a peek at the market rate for adjuncts and the factors affecting their salaries.
One reason that could explain the dearth of information on adjuncts (and the hesitation of institutions to make the data easily accessible):
“The prospect that people will flood to California and to other higher-paying adjunct environments, if they can, is quite likely,” says Ms. Hanzimanolis, who is now teaching at three institutions: De Anza, City College of San Francisco, and Cañada.
Making it more difficult to find salary information means that low-paying institutions don’t need to compete against high-paying institutions for adjuncts, thus keeping pay low. However, with budget cuts and staff reductions during the last few years, competition among adjuncts willing to relocate might not work ideally. Salaries won’t increase until the oversupply of adjuncts disappears (or adjuncts refuse to work for little pay) and the higher-paying institutions start to hire away the above-average and average adjuncts. And if adjuncts instead leverage their position for a better job at their respective institutions, little might change.
Another bright side of this project: It could spur colleges and universities to self-report the data to protect institutional integrity. Inaccurate data could result from a lack of due diligence, deliberate errors from slighted adjuncts, or a lack of context that makes an institution look worse than it is. This is not to question the project’s integrity; large databases are simply prone to error, and universities aren’t always trustworthy sources of data, either.
If the project eventually builds a thorough database, it could be used to compare adjunct salaries at the state level, between unionized and non-unionized faculty, and among universities in close proximity, among other uses. It’s depressing that such basic information has barriers preventing quick and easy access; institutions of higher education wax poetic about the call to disseminate knowledge and educate the public, but they tend to obscure that part of their mission when faced with scrutiny.
As CCAP releases a new study today, the media coverage has already been strong. “Why are Recent College Graduates Underemployed? University Enrollments and Labor-Market Realities” has been discussed in USA Today, National Review, Inside Higher Ed, The Chronicle of Higher Education, CNN Money, and The Pittsburgh Tribune-Review, to name a few.
Briefly, the study by Richard Vedder, Christopher Denhart, and Jonathan Robe finds that nearly half of recent college graduates are underemployed, holding jobs that require less than a four-year college degree. That finding, coupled with data from the Bureau of Labor Statistics, suggests that public policy directs too much state and federal funding to higher education, resulting in overinvestment and burdening graduates with large amounts of student-loan debt.
Instead of ensuring that taxi drivers and retail sales clerks hold college degrees (15 percent and 25 percent of them do, respectively), our system of educating students and preparing them to enter the workforce might need reform, ranging from enrolling fewer students at four-year institutions to developing alternative methods of verifying competency.
We’re currently updating and modernizing the commenting apparatus for our blog by transitioning to the platform provided by Disqus. All of the comments you’ve left for us previously will be imported to the new platform (we’re almost done with that process). In the meantime, if you have any problems with the new system, please let us know. Just shoot us a quick email or leave a comment below.
I found this blog essay by David D. Perlmutter to be a delightful read, in particular this insightful–and introspective–comment:
To take a more charitable view, professors love professing and assume everyone else loves what they have to profess.
Exactly so. And I think this may get close to the heart of what may be a disconnect between the view college professors have of academia and the corresponding view held by the general public. It seems to me that, roughly speaking, college faculty view the purpose of college to be the creation and furtherance of scholarship. Many in the general public wouldn’t necessarily disagree with that, but a good number would, as some survey data suggest, place a greater explicit emphasis on the job outcomes of a good college education. The response from the professoriate would be that “learning for learning’s sake is good and noble and beautiful, in and of itself.” To which the “average Joe” would naturally respond, “Sure, if that’s the way you want it for you, more power to you, but I have far more practical concerns in my life to worry about.” Besides, it’s not strictly true that professors are motivated solely by a noble love of learning rather than some more mundane interest in earning a living; after all, faculty are paid to teach, research, and engage in scholarship. While I certainly think it is appropriate to pay college faculty (and I would not advocate that we cease to do so), I will hazard a guess that no college professor would willingly forgo all compensation as a way to demonstrate an unfaltering fealty to the noble cause of learning.
Is the response of the “average Joe” to the faculty one that is fundamentally anti-intellectual? Perhaps, but it is dangerously hasty to jump to that conclusion. Rather, could it not be that the real undercurrent to the general public’s dismissal of scholarship is really nothing more than the (somewhat obvious) observation that a lot of scholarship has more or less zero effect, practically speaking, on the lives of most people? How many people really care about, for example, whether yawning is contagious among red-footed tortoises (other than those awarding the Ig Nobel Prize)? What I suspect happens with a good many people in the general public is that they don’t give a lot of scholarship a second thought, not because they think it’s garbage (though some do think that of a lot of research) but because they simply do not have the time (they have jobs to attend to, families to look after, and other means of leisure) or, more importantly, do not see how much of the scholarship has much, if any, relevance to their lives. And that’s not, as I see it, a failure on the part of the general public; in many ways, it’s actually the fault of those inside the Ivory Tower for not giving attention to making the public case for the importance of scholarship. Rather than thumb our noses at the vast, uneducated plebeians, a much better course would be to take Perlmutter’s caution humbly to heart and realize that just because other people lack a keen interest in the same research field as we do, it by no means follows that they are anti-intellectual. Then proceed to build a convincing case (from the point of view, not of the one making the argument, but of the one who needs to be convinced) that this research is indeed important. If we want to raise the public out of an anti-intellectual stupor, let’s first treat them as fully capable of rising out of it.
I fully concur with this very thoughtful (though brief) caution that Lloyd Armstrong has raised:
Many of the traditional nonprofit universities and colleges are jumping into the online business because they see it as a new source of much needed revenue. As a former administrator, I understand the need for new revenues as much as anyone, so I am a fan of increasing revenues. My concern is that in most cases the online initiatives are not being done in a way that incorporates the online education into the educational mission of the institution – it is a financial, not educational advance. As a result, little emphasis is being placed on educational effectiveness in many of the new online programs. I have great fear that when the educational outcomes of many of these new programs are evaluated, they will be shown to be relatively ineffective. This result will lead many to conclude that online education is intrinsically inferior, when all it will really show is that inferior pedagogy leads to inferior learning. Nonetheless, such a negative, albeit flawed, analysis could be a big setback in the much needed expansion of effective online learning in higher education.
I share Armstrong’s concern, but I think that it is more than just financial concerns which are crowding out educational ones; in some cases, the proximate cause is the desire to maximize prestige or some other intangible measure of institutional success (though, given that these measures–especially prestige-seeking–are keys to new revenues, perhaps even this motive boils down to the financial realm). It’s not just for financial reasons, for example, that Harvard, MIT, and other elite institutions are welcoming (to one degree or another) the emerging MOOCs; I suspect at least part of the reason is that these institutions want to build some political capital against further protestations from the public about college cost escalation (if someone complains that Harvard, for example, is out of touch with the public, Harvard can point to edX as its attempt to address the problem). For other, less prestigious institutions (take San Jose State as an example), an additional motive for jumping on the MOOC (or even online education more broadly) bandwagon is the fear that they will be doomed if they don’t get on, because the last institution to embrace the rise of online initiatives very well may be left behind and crowded out of the market.
The problem with any of these motives is that they don’t necessarily have much to do (directly, anyway) with the education of students. The danger is that if improved educational outcomes are not the preeminent motive for institutions to join these online education initiatives, the colleges and universities will not take a comprehensive view of online education. Instead, they will try to slap new technologies onto the old model; if that’s the case, the new technologies, rather than lowering costs and/or improving outcomes, will wind up increasing costs and even lowering outcomes. This is exactly the approach that institutions of higher education have taken for centuries with many technologies: whether it is the blackboard, overhead transparencies, the VCR, PowerPoint, or “clickers,” colleges have merely taken these technologies and imposed them on the existing classroom with no fundamental change to the operating model. Of course, given that approach, it should not be any surprise that costs subsequently rose, because the new classroom cost is the sum of the old classroom cost plus the cost of the new technology (which is always a positive number). Compared with a medieval lecture, the cost of the 21st-century lecture that includes PowerPoint slides is essentially the real cost of the medieval lecture plus whatever cost is associated with running a PowerPoint presentation.
Sometimes the use of these new technologies will positively impact outcomes, but there’s no reason to believe this necessarily must be the case (as an illustration of how new technology can actually lower educational outcomes, my favorite anecdote is that, as several of my own professors told me, with the rise of calculators, student errors were magnified by orders of magnitude, relative to typical errors from the age of the slide rule). But even if the effect on outcomes is positive, it is entirely possible that that gain will be washed out by the loss due to increased costs. In order for higher ed to leverage online education and gain from it, it first has to develop and embrace a new operating model. Otherwise the takeaway from the experiment will be the wrong one and exactly what Armstrong warns: a renewed conviction that “online education is intrinsically inferior.” But the real problem, as I see it, is a confused (or as Armstrong puts it, an “inferior”) pedagogy resulting from a failure to comprehensively consider what a true online model for education really is.
I’m a bit of a latecomer to the party, but new Purdue University President Mitch Daniels made national news even before he officially left the governorship of Indiana to oversee the university, particularly with his compensation package. In the main, I think there is considerable merit in the way Daniels has insisted that he be paid (utilizing a performance-based contract), even though only time will tell how well this will work out in practice. For now, though, Daniels deserves commendation for his willingness to put into actual practice some of the ideas floating around for increasing accountability in the Ivory Tower. The fact that this was brought about by a voluntary arrangement on his part is also a good thing: no one outside the university forced him to do this, and it is now very much in his interest for it to work out (part of the risk of externally forcing these changes is that, rather than try to bring about the success of the new approach, the existing leadership may try to fight it instead). If the Daniels project at Purdue succeeds, he will gain a legacy worth far more than his foregone income. In this respect, Daniels is just taking one step further the recent “leadership-begins-at-the-top” realization of a number of incoming college leaders.
Even were Daniels to meet all of the incentive targets in his contract, his annual salary ($546,000) would still be only the 9th highest in the Big Ten conference as it is currently structured (being 9th used to mean something in this conference, but since the conference apparently has difficulty ensuring that its name corresponds to its actual size, who knows what this may mean in five years). It is unlikely, however, that Daniels will hit all the targets, so his actual compensation may be a significantly lower $420,000.
Besides just the big picture, the details of the contract (that is, the performance metrics) are worth additional attention. According to the Purdue press release, there are five general categories for measuring Daniels’s performance: “student affordability, graduation and student achievement, strategic program development with demonstrated student outcomes in knowledge and understanding, philanthropic support, and faculty excellence and recognition.” The first category (affordability) is, at least on its face, a good one, though one does have to be careful to define the term correctly. For example, since Purdue is a public institution, keeping future tuition increases low by leveraging Daniels’s connections in Indianapolis to massively increase state subsidies to the university may meet some definition of affordability for the student, but it may not actually be the appropriate way to maintain college affordability in a broader sense for society (i.e., taxpayers). Should affordability be a metric for assessing college presidents’ performance? Yes, but to borrow an expression from constitutional court cases, such a goal must be “sincere” and not a “sham.” It won’t work simply to shift costs around to make it seem like the students are getting a break when in reality, once they join the august group known as “taxpayers,” they’ll be stuck with the tab.
What also complicates the picture on affordability is the fact that, in one sense, the traditional model of education, with its emphasis on face-to-face instruction, is inherently very expensive, especially if it is of high quality. As I see it, many of the cost savings that can be gained from leveraging new technology may actually only be possible with a model that exists, at least in large part, outside of the current system. In other words, it won’t be possible to maintain quality while increasing affordability simply by slapping new technologies onto the existing model (that’s a bit of the new-wine-in-old-wineskins problem). Part of the problem with some online classes is that all we’ve managed to do is combine the high costs of the traditional model with low quality, the exact opposite of the goal. Unfortunately, I think too many within the existing higher ed system who get excited about using new technologies miss this crucial point; they get excited about using something that’s brand new without first considering the more fundamental change that would be needed in order to effectively achieve efficiency gains.
The second metric–graduation and student achievement–presents its own set of problems (at least with respect to graduation rates), despite the fact that, at first glance, this metric may be the most appealing of all five. For starters, graduation rates, by themselves, do not convey nearly as much information as people say they do. To illustrate this point, take two very different institutions, one a diploma mill and the other an elite school with a very, very low admission rate–both very well may have identical graduation rates (~100%), and yet there are obviously enormous differences in educational quality. To be sure, that is an extreme example, but it is helpful in demonstrating the soundness of Arthur Hauptman’s thoughtful point that it is “a mistake to use institutional completion rates as a measure of educational quality, because institutional selectivity is by far the principal predictor of completion rates.” Thus, the easiest way for Purdue to achieve a higher graduation rate would be to start turning more applicants away. To be sure, given existing constraints (its own enrollment targets, its status as a public institution, etc.), there are limits to how extensively Purdue might use increased selectivity to effect higher graduation rates. Another way Purdue could increase its graduation rate would be to dilute its standards just to push more students up and across the stage. I’m sure that Daniels is well aware that graduation rates need a proper context, which would possibly explain why he links “student achievement” directly to “graduation.” (As an aside, if anything, shouldn’t “student achievement” be listed first, before graduation?) Nonetheless, the point is that the focus should be, as Hauptman argues, on “faculty who are good teachers and a commitment to provide a quality education to whichever students are admitted.” Graduation rates can be helpful in reflecting positive qualitative changes, but the whole picture includes a lot more.
I view the last three metrics as mostly restating traditional college administrative goals, albeit in new terms (to allow for increased accountability). That’s not to say the affordability and student achievement metrics aren’t already mentioned frequently; they just aren’t often thought of as directly relating to the performance of a college president. Securing philanthropic support is just part of the normal duties of a college president nowadays; given that Daniels the politician gained skills in soliciting donations, he may find doing so for a university to be a piece of cake. Next, I think it is laudable to tie strategic development to student outcomes (provided, of course, that those outcomes are clearly and correctly defined and properly measured); without this constraint, it is perhaps too easy to set a course that advances an institutional objective while marginalizing the attention directed toward students’ education.
Of all five metrics, I’m actually most curious about the incentive related to “faculty excellence and recognition.” In practice, this may mean nothing more than the grand promises countless universities make about investing in the faculty, promises that sometimes result in, well, nothing (they’re just statements made in the hope of keeping the faculty quiet and happy), and sometimes in advancing faculty interests (no matter how noble in and of themselves) at the expense of undergraduate education (see, for example, the awarding of tenure, with its emphasis on research but precious little, by comparison, on student learning). Yes, we need to promote “faculty excellence and recognition,” but it’s even more important to make sure that doing so truly supplements–and does not discard–the educational mission toward the students.
It remains to be seen how the Daniels era at Purdue will work out, and whether it will be the first big step in the emergence of a broader reform agenda in higher ed. Here’s hoping for its success.
Talk about really missing the forest for the trees (or finding the speck in your neighbor’s eye while ignoring the plank in your own, or whatever metaphor one prefers), but this story by Michael Stratford for the Chronicle ended with a rather deft counter to the recent hand-wringing by the Consumer Financial Protection Bureau (CFPB) over private student loans. As the story describes, the CFPB has issued, over the past year, multiple reports on the dangers and shortfalls of the private student loan industry. But as Stratford dryly notes at the very end of his story,
Private loans represent only a fraction of the overall market for student loans, accounting for about $150-billion of the more than $1-trillion in outstanding student debt. The vast majority of student loans are held by the government.
It goes without saying that one can list legitimate grievances against the private loan industry, but it is somewhat amusing to see the CFPB spill so much ink over what is, in reality, only a very narrow segment of the student loan market. Sure, I suppose it is true that we have to start somewhere to fix the student loan system, but ignoring the elephant in the room really won’t serve students or taxpayers well in the long run. Besides, if all we do is shift the loans from private holders to the public, might that not put taxpayers at an even higher level of future risk than they face already? If anything, rather than just a 15% fix, shouldn’t we focus on how to retool the whole system?
I don’t know whether I should laugh, cry, dance a jig, or merely shrug my shoulders at this new story by Eric Hoover in the Chronicle, which reports that “Boston College saw a 26-percent decrease in applications this year, a drop officials largely attribute to a new essay requirement.” If writing a measly 400 words is enough to dissuade around 9,000 students from applying to a school ranked 31st in the nation (by US News and World Report), I suppose one can be forgiven for wondering what this might mean in general for the nation’s aspiring college students. In one sense this very much reminds me of the more salient points buried in Dave Tomar’s screed The Shadow Scholar: How I Made a Living Helping College Kids Cheat (as Tomar recounts, one of his “clients” once maintained that while he could do the course work, he would prefer not to “waste” his time on it).
Perhaps the most interesting issue raised by the Chronicle story, however, is one related to measuring the academic quality of institutions of higher education. As Hoover himself points out, this trend at Boston College, if it can be generalized across all of higher education, “reveals the slipperiness of application tallies, widely viewed as a meaningful metric.” Perhaps the most intelligent immediate response to this is “why in the world have we ever thought the application rate is a meaningful metric for academic quality?” After all, as Kevin Carey perceptively pointed out a few years ago, the use of application rates in college rankings is really a measure of institutional “exclusivity,” which merely “confirm[s] the status of colleges and universities that by virtue of their prestige are valuable to students irrespective of the quality of the education they provide.” The problem with that, obviously, is that even if exclusivity can be related to superior educational outcomes for students, that connection is not necessarily a direct one, and it leaves the door wide open for institutions to promote their own exclusivity at the expense of educational outcomes. This may be exactly what happened at Boston College before it added the essay requirement this year: it could enjoy a lower admissions rate, but who knows what its educational outcomes really were.
Of course, it would be unfair to single out Boston College to the exclusion of anyone else in higher ed (the College is at least seeking to improve its admissions process), but this is just another example of why we should be continually questioning the underlying rationale for the status quo. It is perhaps unrealistic to expect higher ed to abandon anytime soon the use of prestige metrics like application rates as a proxy for institutional quality, but we must not forget that simply because an elite school can turn tens of thousands of prospective students away, it does not necessarily mean that the education the admitted students receive is, in fact, superior to the education they would have otherwise received, or even superior to the education that the rejected students would have obtained at a lower-tier institution.
The New York Times today broke the news that Sebastian Thrun’s Udacity is partnering with San Jose State University to offer “a series of remedial and introductory courses” to students. This announcement, while certainly part of the (almost) daily news updates in the press about “massively open online courses” (or MOOCs), is unique because, as the Times puts it, this is “the first time that professors at a university have collaborated with a provider of a MOOC… to create for-credit courses with students watching videos and taking interactive quizzes, and receiving support from online mentors.”
As I see it, this is a very smart move by both Udacity and San Jose State. For the latter, the upside is both that the school may be able to lower costs and improve student outcomes for remedial courses (that, of course, very much remains to be seen) and that San Jose State may be able, by incorporating MOOCs into its curriculum, to forestall any future threats that MOOCs may pose to its operating model. Notably, the agreement with Udacity maintains the presence of San Jose faculty in the courses, though the teaching/tutoring work will also partly be borne by online mentors employed by Udacity. San Jose State is, therefore, maintaining at least nominal control over the courses while still incorporating the basic model of a MOOC; for the purposes of these course offerings, San Jose can both verify the quality relative to its traditional offerings and make political hay for embracing new approaches using technology. Given the increasing political and economic pressure that schools (particularly mid-quality state institutions like San Jose State) are likely beginning to feel to capture cost savings from MOOCs (see Kevin Carey’s article in the Washington Monthly a few months back), perhaps San Jose State did not have much of a choice.
This is also a shrewd move by Udacity. Of course, only time will tell how successful this experiment with San Jose State will turn out to be, but then again, Udacity has nothing to lose and everything to gain. While it surely is remarkable how swift and widespread the rise of the MOOCs has been, it is also very easy for many in academia to dismiss MOOCs as nothing more than a passing fad, all the rage today but nothing compared with the resilience of the Ivory Tower. If Udacity can conclusively demonstrate to San Jose State that it can improve outcomes at reduced costs in either introductory or remedial courses (courses which present their own unique challenges), that will go a long way toward establishing the right of the MOOC to remain at least a feature of higher education going forward. But even if the experiment cannot unambiguously demonstrate that the MOOC model beats the traditional model, it will still show that Udacity is serious about improving education (and not just trying to maximize media hits or run up huge initial course enrollments), and it will provide valuable feedback directly to San Jose State about how to leverage technology to improve education–both by lowering costs AND improving outcomes.
Of all the places that MOOCs could try to make their mark in higher education, introductory courses seem the most likely. But they can make an even bigger mark if they can improve remedial education. That would be a win-win for everybody: the MOOCs will secure their place as a positive force in education, students will win because they can take their remedial courses for minimal or zero cost, and colleges and universities will win because they can specialize exclusively in what they are supposed to be good at: offering quality higher education.
On Wednesday, Richard Vedder appeared on CNBC to discuss the rising cost of college tuition, reform, and Mitch Daniels’ presidential contract.