Admissions season at colleges and universities across the United States is quickly approaching. Soon every higher education institution will be mailing decision letters to applicants, and campuses will be overrun with prospective students and their ambitious parents for the ubiquitous spring visit days.
It is not uncommon this time of year to hear mention of college rankings. Admissions brochures and other promotional literature at colleges and universities will play up favorable performance in rankings. Students and their parents will use rankings, along with a host of other factors, to determine institutional quality and, perhaps more importantly, prestige. In the end, that’s what rankings are all about for institutions and prospective students: the pursuit of prestige.
College rankings emerged in the 1980s as a result of the accountability movement and have grown in popularity since then. Today, a cottage industry has developed around rankings. The annual rankings produced by U.S. News and World Report (USNWR) are the most popular, influential, authoritative, and—perhaps because of the preceding attributes—critiqued rankings in the business. USNWR’s annual “Best Colleges” issue sells 2.2 million copies, reaching an estimated 11 million readers. Together with the more detailed guidebook, USNWR publications account for almost half of the college rankings market.
Many students and parents see the USNWR rankings as a way to sort through the massive number of higher education institutions in the United States, as well as a way to compare colleges and universities. The problem is that the USNWR rankings actually provide remarkably little useful information to students and parents. Here’s a list of 10 research-informed reasons why anyone interested in learning about higher education institutions should disregard the USNWR rankings. References and resources for further reading appear at the end of the list.
1. They falsely claim to be objective.
USNWR uses numerical information and statistics to convey objectivity. However, its methodology for ranking colleges and universities is based upon weights subjectively assigned to seven criteria the editors believe to be measures of institutional quality: academic reputation, student selectivity, faculty resources, graduation and retention rates, financial resources, alumni giving, and graduation rate performance. Bob Morse, who is in charge of USNWR’s rankings, has explained that what they do is not based upon social science research. Rather, it is part of USNWR’s “consumer journalism.” Yet at no point does the magazine acknowledge the inherent subjectivity of its methodology—the legitimacy of the publication rests upon its perceived authority.
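To see why the appearance of objectivity is misleading, consider a minimal sketch of how a weighted composite score works. The weights and scores below are illustrative assumptions, not USNWR’s actual methodology (only the 25% for reputation and 15% for selectivity mentioned later in this list come from the discussion here); the point is simply that the “objective” number out the other end depends entirely on editorial choices about the weights going in.

```python
# Minimal sketch of a weighted composite score. All numbers are invented for
# illustration; only the 0.25 (reputation) and 0.15 (selectivity) weights
# correspond to figures discussed in this article.

# Hypothetical 0-100 scores for one institution on the seven criteria.
scores = {
    "academic_reputation": 78,
    "student_selectivity": 85,
    "faculty_resources": 70,
    "graduation_and_retention": 90,
    "financial_resources": 65,
    "alumni_giving": 40,
    "graduation_rate_performance": 72,
}

# Subjectively chosen weights that sum to 1.0. Change these editorial
# judgments and the resulting "objective" score (and rank) changes too.
weights = {
    "academic_reputation": 0.25,
    "student_selectivity": 0.15,
    "faculty_resources": 0.20,
    "graduation_and_retention": 0.20,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
    "graduation_rate_performance": 0.05,
}

composite = sum(scores[c] * weights[c] for c in scores)
print(f"Composite score: {composite:.1f}")  # shifts whenever the weights shift
```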
2. They base 25% of the rankings on academic reputation.
At first blush, this seems fairly straightforward. An institution’s academic reputation should be taken into consideration. The issue is that USNWR captures academic reputation through a survey mailed to presidents, provosts, and deans of admissions, who are asked to rate colleges and universities on a Likert scale according to academic reputation. In some instances, this has the effect of rubber-stamping into the top of the hierarchy those institutions with a historic and cultural association with prestige. When academic reputation surveys are sent to deans of particular departments for USNWR’s more program-specific rankings, that association with prestige undoubtedly colors their votes. One commonly cited study showed, for example, that law school leaders ranked Princeton’s law school as one of the best in the country—even though it closed in the 19th century. In other instances, campus leaders try to vote down the competition. One administrator at my university admitted to giving our institution a 1 on the academic reputation survey and all other schools a 3.
3. They place too much emphasis on the profile of applicants.
The criterion that probably receives the most attention from students deals with selectivity. Receiving 15% of the total weighting, student selectivity includes the percentage of applicants admitted, the yield rate, the number of incoming students in the top 10% of their high school class, and the average SAT or ACT score of entering freshmen. Aside from the possible benefits of being in class with other talented students, this criterion conveys no information about the institution or the quality of education offered there. It essentially tells students how much they will pay to sit next to others of a similar academic caliber. Additionally, many institutions have difficulty accurately reporting student selectivity data. For example, most high schools in the state of Maryland do not rank students, so in order to report how many students were in the top 10% of their class, many colleges and universities must extrapolate based upon a formula.
4. They are of questionable validity.
Webster (2001) concluded that the average SAT/ACT score of incoming students is the criterion that most affects an institution’s rank. Kuh and Pascarella (2004) repeated and confirmed Webster’s test, demonstrating that “for all practical purposes, U.S. News rankings of best colleges can largely be reproduced simply by knowing the average SAT/ACT scores of their students” (p. 53). This means that, although USNWR uses student selectivity measures as proxy indicators of quality, selectivity and educational quality are largely unrelated. Dichev (2001) sought to discern the validity of USNWR rankings by looking at the predictability of changes over time, the logic being that a “good” ranking should not change in predictable ways. Yet the study found that between 70 and 80 percent of the variation in rankings is transitory, that changes are likely to reverse within two cycles, and that most of the changes are due to aggregated noise in underlying components.
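To make the Kuh and Pascarella finding concrete, here is a rough sketch, using invented numbers rather than real data, of the kind of check involved: rank a handful of hypothetical institutions by average SAT alone and see how closely that ordering tracks their published ranks.

```python
# Illustrative sketch only: the institutions and numbers below are made up.
# It mimics the kind of comparison Kuh and Pascarella (2004) describe, asking
# how closely a ranking built from average SAT scores alone tracks the
# published composite ranking.

# (name, published composite rank, average SAT score)
institutions = [
    ("A", 1, 1490),
    ("B", 2, 1460),
    ("C", 3, 1475),
    ("D", 4, 1400),
    ("E", 5, 1380),
    ("F", 6, 1340),
    ("G", 7, 1355),
    ("H", 8, 1290),
]

# Rank institutions by SAT alone (highest score = rank 1).
by_sat = sorted(institutions, key=lambda x: -x[2])
sat_rank = {name: i + 1 for i, (name, _, _) in enumerate(by_sat)}

# Spearman rank correlation between published ranks and SAT-only ranks.
n = len(institutions)
d_squared = sum((rank - sat_rank[name]) ** 2 for name, rank, _ in institutions)
rho = 1 - (6 * d_squared) / (n * (n ** 2 - 1))
print(f"Spearman correlation, SAT-only vs. published ranks: {rho:.2f}")
```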
5. They encourage prestige-seeking behaviors.
The prestige that accompanies performing well in the USNWR rankings also helps colleges and universities attract gifted students and resources in the form of grants and donations. Given that state appropriations have reached a 10-year low, these resources are increasingly important. Thus, institutions striving for prestige often emphasize graduate over undergraduate education. They also prioritize research, particularly in areas where the knowledge can be commercialized. Faculty are expected to win competitive grants and publish research, meaning adjuncts and graduate assistants shoulder much of the teaching. In other words, the pursuit of prestige has the potential to undermine the teaching mission of colleges and universities.
6. They promote institutional isomorphism.
Institutional diversity has been a defining feature of American higher education. Many different types of people seek higher learning, so it makes sense to have many different types of institutions. Rankings, however, encourage competitive comparison, which over time leads institutions to resemble one another. Consider a liberal arts college that falls in the middle of the USNWR rankings. It looks to the schools within its field of play and develops a list of “aspirational peers” from which to borrow ideas and practices. Scholars have thus likened the U.S. higher education system to a snake: the head represents prestigious schools, the body represents the prestige-seeking schools, and the tail represents those schools attempting to minimally increase their reputation. The body follows the head, and the tail follows the body. Not all institutions can or should be Research I universities. But that hasn’t stopped them from trying.
7. They create a positional arms race.
The essence of an arms race is that there is no absolute goal, only the relative goal of staying ahead of competitors. This makes it difficult to leave the race, and it means there is no finish line. Colleges and universities simply continue competing against one another to secure a relatively better position in the USNWR rankings. Repositioning usually requires that an institution spend more or charge less, and both options necessitate additional non-tuition resources. Most competition takes the form of increased spending, especially on amenities like dining halls, gymnasiums, and residence halls. Positional competition caused by rankings, therefore, increases the cost of higher education.
8. They make institutions less accessible.
Monks and Ehrenberg (1999) were interested in how changes in USNWR rankings among elite schools affected their admissions practices and pricing policies. According to their study, an improvement in rank means an institution becomes more selective, offers less grant money to students, and experiences an increase in the average SAT score of applicants. Meredith (2004) corroborated these findings. Several observers have also noted that institutions have turned to early-decision applications as a means of improving their yield rate. Early-decision applicants tend to be students from families that can easily say “yes” to an offer of admission without having to shop around for the best financial aid package. It follows that the early-decision application process advantages upper- and middle-income students.
9. They are unrelated to activities that contribute to learning.
Questions surround the relationship between USNWR rankings and the extent to which an institution promotes activities that contribute to learning. Pike (2004) measured the strength of the relationship between the USNWR criteria and five benchmarks of the National Survey of Student Engagement (NSSE). In short, Pike’s analysis revealed that, with the exception of students at selective institutions reporting enriching educational experiences, USNWR criteria and NSSE benchmarks are unrelated.
10. They have caused unethical behavior.
There is a great deal of pressure placed on administrators to improve, or at least maintain, USNWR rankings. This pressure has caused some administrators to report false information or manipulate the numbers for personal and/or institutional gain. There have been repeated stories in the media about such cases, the most recent of which involved George Washington University and Tulane University’s business school.
These 10 reasons provide sufficient justification to shrug off the U.S. News and World Report college rankings. Next time you see the newest “Best Colleges” issue at your grocery store, walk by it as you would a tabloid. If a friend posts the rankings to Facebook, give the link a confident “dislike.” Most importantly, encourage students not to make their college choice based upon these rankings. Instead, push them to visit campuses and seek out alternative rankings, such as the one created by Washington Monthly, which accounts for how institutions contribute to the public good. The reality is that the only party benefiting from the USNWR rankings is the corporation profiting from them.
For Reference and Reading
Dichev, I. (2001). News or noise? Estimating the noise in the U.S. News university rankings. Research in Higher Education, 42, 237-266.
Ehrenberg, R. G. (2003). Reaching for the brass ring: The U.S. News and World Report rankings and competition. The Review of Higher Education, 26(2), 145-162.
Kuh, G. D., & Pascarella, E. T. (2004). What does institutional selectivity tell us about educational quality? Change, 36(5), 52-58.
Meredith, M. (2004). Why do universities compete in the ratings game? An empirical analysis of the effects of the U.S. News and World Report college rankings. Research in Higher Education, 45(5), 443-461.
Monks, J., & Ehrenberg, R. G. (1999). U.S. News & World Report rankings: Why they do matter. Change, 31(6), 43-51.
O’Meara, K. (2007). Striving for what? Exploring the pursuit of prestige. In J. C. Smart (Ed.), Higher Education: Handbook of Theory and Research, Vol. XXII, 121-179.
Pike, G. R. (2004). Measuring quality: A comparison of U.S. News rankings and NSSE benchmarks. Research in Higher Education, 45(2), 193-208.
Webster, D. S. (1992). Reputational rankings of colleges, universities, and individual disciplines and fields of study, from their beginnings to the present. Higher Education: Handbook of Theory and Research, Vol. VIII, 234-304.
Webster, T. J. (2001). A principal component analysis of the U.S. News & World Report tier rankings of colleges and universities. Economics of Education Review, 20, 235-244.