College Ratings and Rankings: The Debate Continues

Last month’s Huffington Post College article, “Grading Higher Education: When Worlds Collide,” sparked a lively offline discussion and is worthy of additional exploration.

In that post, we examined how various magazines, journals, and other outlets grade American higher education in their version of what some call the higher education “swimsuit edition.” We noted that various perspectives often shape the approach utilized.

We argued that many ranking strategies are based heavily on inputs such as reputation and selectivity, which causes the numerical rankings to change slowly from year to year.

Drawing on James B. Stewart's October New York Times article examining the evolution of the higher education literature on this topic, we concluded that Mr. Stewart demonstrated convincingly that measurable outcomes, which are more attuned to the metrics sought by most American families, have lasting value.

With the growth in popularity of outcomes-based rankings assessments, we offered the following thoughts:

  • American higher education is moving closer to center stage visibility in the court of public opinion,
  • The decades-old disagreements between higher education officials and the editors of US News over issues like methodology are at best “navel gazing,” and
  • The new battleground is likely to be over outcomes-based surveys.

The most interesting response came from a highly respected, distinguished colleague, Dr. Alexander Astin, the Allan M. Cartter Professor Emeritus and Founding Director of the Higher Education Research Institute at UCLA. Many of us use Dr. Astin’s survey research on student life in our own institutional planning and as the basis for subsequent research.

Dr. Astin finds: “the pecking order of American higher education institutions that drives the annual ‘admissions madness’ in this country has little to do with rankings and ratings. It is, rather, part of our culture, part of our shared beliefs about which are the most ‘excellent’ institutions. These beliefs have remained largely unchanged for more than 50 years. What US News and others have been trying to do is simply to codify – put numbers on – these shared beliefs.”

Dr. Astin also notes that while there is considerable value to the “outputs” approach that is gaining ground, the cold fact is that no one has the data required to properly look at outputs. He uses research on earnings as an example.

Dr. Astin reports that decades of research suggest that earnings are not simply dependent on a student’s level of ability at admission but on other factors, including career choice, major, parental occupation, degree aspirations, and social class, among others. Each carries its own set of biases.

To account for such bias, a researcher would need to calculate an “expected earnings” figure for each entering student to measure against that student’s actual earnings. Astin suggests: “By aggregating these expected earnings and actual earnings figures for all students entering a particular college, you’re able to obtain a much more valid estimate of that college’s effect.” He further argues that earnings are only one potential output and must be grouped with other outcomes to assess the value of a college education.

Whatever the approach, Dr. Astin regrets the emphasis that federal and state officials, as well as US News-type rankings, place on degree completion rates.

He reports that degree completion rates depend largely on the academic preparation of entering freshmen. As such, they are an indirect measure of SAT/ACT scores, making them in turn a reflection of shared cultural beliefs about the pecking order.

Put in other terms, students with high test scores prefer elite colleges because they – and their families – believe that these colleges are “the best.”

Dr. Astin’s comments are a cautionary tale for those who are interested in seeking broad, standardized measures of excellence and value. His words add a cultural, social, and psychological dimension to a “paint by numbers” approach. He does not reject the value of a blended approach to combine inputs and outputs in measuring quality. But he is correct to argue for a longitudinal approach that adds nuance and perspective to data on the value of higher education.

One fact is clear. The scattershot and prescriptive efforts now utilized are insufficient and a poor basis on which to develop sound public higher education policy.


Grading Higher Education: When Worlds Collide

One of the most persistent problems facing American higher education is how best to explain its importance and enduring value to the public.

The problem is that various perspectives shape the approach utilized. Higher education leaders – especially at research universities and liberal arts colleges – often speak to the need for America to produce an educated citizenry. It’s an ambitious but justifiable argument that once resonated well with the American public. It also appeals to higher education’s stakeholders, especially staff and faculty. But the failure to sharpen and expand this definition has diminished the almost exclusive claim that higher education once held on public perception.

The reasons for this erosion are complex and reflect the changing economic circumstances, consumer preferences, state and federal regulatory climate, and shifting demographics that carry different expectations and their own set of perceptions. It’s been a long time in coming.

Unfortunately, the incremental inertia that shapes higher education’s patterns of behavior, including its slow response to emerging opinion in American society, hinders its ability to make a case for itself.

The “Swimsuit Edition” of College Rankings

For many years, the principal rankings competition in higher education came from the survey drawn up annually by US News and World Report. It’s a profitable endeavor for the magazine, drawing on subjective, perception-driven analysis that many higher education colleagues derisively – and somewhat accurately – call the “swimsuit edition.” Heavily based on inputs, the numerical ratings change slowly year to year because they rest on reputation and selectivity.

The input-based approach sets tight parameters around the US News release. While Americans like competitive rankings, the failure of this survey to look at outputs limits its value to many families in a post-recession society, who are searching for how best to educate their children in markedly different terms than a generation ago.

Measurable Outcomes More Attuned to What Families Want

Writing in the New York Times earlier this month, James B. Stewart looked at the evolution of the higher education ratings literature. He noted the wide variety of relatively new players in the rating game, citing PayScale, the Georgetown University Center on Education and the Workforce, The Economist, Forbes, and Money magazine. Each looks more closely at the outcomes largely eschewed by US News and by most colleges and universities. Mr. Stewart demonstrates convincingly that these measurable outcomes have value. They are more attuned to the metrics sought by most American families.

Stewart’s point is a telling one.

Among the newer rankings, there is more consumer value in knowing what your tuition dollars buy than in the bragging rights conferred by a college’s good standing in the US News and World Report annual survey.

He further concludes that two recent efforts by The Wall Street Journal and Times Higher Education (no relation to the newspaper) “did a creditable job blending a wide variety of factors, including outcomes and student engagement.”

There are lessons for America’s colleges and universities in Mr. Stewart’s report.

First, American higher education is moving closer to center stage visibility in the court of public opinion. Absent an ability to shape this opinion, higher education will be increasingly subject to it. Perhaps the best strategy, therefore, is to find a way to influence it more directly and with greater common purpose.

Second, the decades-old disagreements between higher education officials and the editors of US News over issues like methodology are at best “navel gazing” if the goal is to make a positive impact on general perceptions. They are one thorn among many for which accommodation must be made, but the center of the rankings debate has shifted away from US News.

Third, the new battleground is likely to be over outcomes-based surveys. Outcomes shape consumer preference and the polling data and anecdotes from which (sadly) so much state and federal policy is developed. There is undeniably a “what’s in it for me” quality to this debate that is somewhat softened by real and legitimate concerns over issues like employability and finding “high meaning” in postgraduate work.

There is a counter-argument, of course. Well-intentioned efforts within higher education have attempted to answer consumer concerns while reflecting the enormous diversity, differences in type and scale, and purposes of America’s decentralized system of higher education. These are positive efforts – sometimes defensive and politically calculated – and they represent a good start.

American Families Need Good Analysis, Not Another College Ranking

To support American higher education’s ability to shape its own destiny, it may be that what American families need is not more surveys but a clean, credible, and simple analysis based upon existing data.

Higher education will need to move more willingly – and at times more gracefully – into deeper consideration of outputs.

The American consumer will also benefit from education about higher education, including its remarkable diversity, which moves well beyond questions that stop at how much money a graduate makes.

It’s simplistic and laughable to think that a college or university’s quality can be determined by a numerical ranking. Can comprehensive metrics ultimately support such claims? It is reasonable, however, to imagine a consumer education system that addresses quality “going in” and “coming out.” It would be wise for American higher education to lead the charge before it is overwhelmed by the blizzard of new survey data.