U.S. News College Rankings
Tarradiddle or Falderol
When we talk about rankings, we often take them at face value. But what if I told you that these seemingly innocuous lists carry significant implications for social justice? By the end of this post, you’ll see how ranking methodologies not only skew perceptions, but can perpetuate existing power dynamics and inequities.
A week ago, I critiqued the U.S. News Best State Rankings in my post U.S. News Best State Rankings: Poppycock, Hogwash, Fiddlefaddle, Bunkum, or Codswallop, diving deep into its complex ranking structure. Yesterday, U.S. News released their annual Best Colleges rankings, and I am once again compelled to pull on threads, and perhaps unravel them.
College rankings have been a staple for decades, guiding prospective students, parents, and even faculty in decisions about education and employment. They often influence the reputation, funding, and appeal of an institution. However, like all metrics, they come with their set of challenges and biases.
The Complexity of Rankings
In last week’s dissection of the Best State Rankings, I underscored the overly intricate ranking procedure. U.S. News employed 71 metrics to rank 50 states, which is problematic because of overfitting, redundancy, and lack of interpretability. Using a bit of statistical maneuvering, I developed a simpler algorithm using just six metrics, and its output closely mirrored the U.S. News state ranking results.
Venturing into College Rankings
Let’s shift focus to the freshly minted college rankings, particularly the National Liberal Arts Colleges category. By way of disclosure, my own institution, Williams College, is ranked #1, and in fact, has been #1 every year since 2004. I mention this not to gloat, but rather, to own up to the privilege of criticizing these rankings from the (alleged) top.
To give credit where it’s due, the college ranking methodology is simpler and more transparent than the state ranking methodology. U.S. News used 12 or 13 metrics (depending on details of how a school uses standardized test scores) to rank 211 liberal arts colleges. Nonetheless, I devised a simpler and more transparent model still. My model covers the top 99 liberal arts colleges from the U.S. News ranking; I had to enter the data by hand because scraping it from their web page proved difficult (and I got tired after 99 schools). I combined these rankings with institutional data from the College Scorecard, a comprehensive higher education dataset curated by the U.S. Department of Education and a source that U.S. News itself partially employs.
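For the curious, here is a rough sketch of that data assembly in Python. The file names and column labels are stand-ins I’ve made up for illustration, not the actual files or Scorecard field names, so treat this as the shape of the work rather than the work itself.

```python
import pandas as pd

# Hand-entered ranks for the top 99 National Liberal Arts Colleges
# (hypothetical file with columns "college" and "usnews_rank").
usnews = pd.read_csv("usnews_liberal_arts_top99.csv")

# College Scorecard institution-level download (file name assumed).
# INSTNM is the Scorecard's institution-name field; the metrics in my
# formula would come from the corresponding Scorecard variables.
scorecard = pd.read_csv("Most-Recent-Cohorts-Institution.csv")

# Join the two sources on institution name.
merged = usnews.merge(scorecard, left_on="college", right_on="INSTNM",
                      how="left")
```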
My algorithm is as follows. For each liberal arts college, apply the formula:
200 + 50 × (admissions rate) − 0.00053 × (instructional expenditure per student) − 47 × (four-year retention rate) − 70 × (six-year graduation rate for non-financial-aid students) − 69 × (eight-year graduation rate for all students)
Rank the colleges based on the resulting values (with the lowest number denoting the “best” college) and voilà, you have your ranking. The specifics of how I derived this formula, the exact units of measurement for the variables, and the way I handled near-ties are all tales for another day, though I’m open to sharing if prompted. The essence is that my model (which, by the way, I would never use, because I am philosophically opposed) offers a still simpler and more transparent alternative to the U.S. News methodology and yet produces similar results. To validate this claim, consider the Spearman correlation coefficient between my rankings and U.S. News’s: a robust 0.95, even higher than I got when I did the analogous exercise for the state rankings. For those seeking a different, more intuitive metric: of the 4,851 possible pairings of my 99 colleges, my rankings agree with U.S. News’s on the relative ordering of the pair 94% of the time.
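For readers who want the mechanics, here is a minimal sketch of the scoring and the two validation checks, continuing from the merged data frame above. The column names are illustrative placeholders rather than the exact Scorecard variable names, and this is the shape of the computation, not my actual code.

```python
from itertools import combinations

from scipy.stats import spearmanr

def my_score(row):
    # Lower is "better." Rates are fractions in [0, 1]; expenditure is
    # dollars per student. Column names are illustrative placeholders
    # for the merged College Scorecard fields.
    return (200
            + 50 * row["admissions_rate"]
            - 0.00053 * row["instructional_spend_per_student"]
            - 47 * row["retention_rate_4yr"]
            - 70 * row["grad_rate_6yr_no_aid"]
            - 69 * row["grad_rate_8yr_all"])

merged["my_score"] = merged.apply(my_score, axis=1)
merged["my_rank"] = merged["my_score"].rank(method="min")

# Check 1: Spearman rank correlation with the published U.S. News rank.
rho, _ = spearmanr(merged["my_rank"], merged["usnews_rank"])

# Check 2: pairwise concordance. For each of the C(99, 2) = 4,851 pairs
# of colleges, do the two rankings order that pair the same way?
pairs = list(combinations(merged.index, 2))
agreements = sum(
    (merged.loc[a, "my_rank"] < merged.loc[b, "my_rank"])
    == (merged.loc[a, "usnews_rank"] < merged.loc[b, "usnews_rank"])
    for a, b in pairs
)
print(f"Spearman rho: {rho:.2f}; "
      f"pairwise agreement: {agreements / len(pairs):.0%}")
```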
Let’s move on to discuss two of the more serious problems with the U.S. News college ranking methodology.
Problem 1: Peer Assessment
One notable concern is the weight that U.S. News gives to “Peer Assessment.” The methodology states:
U.S. News collected the most recent data by administering peer assessment surveys to schools in spring and summer 2023. Of the 4,734 academics who were sent questionnaires on the overall rankings in 2023, 30.8% responded compared with 34.1% in 2022. The peer assessment response rate for… the National Liberal Arts category was 28.6%.
Despite the modest survey response rate, this metric carries a hefty 20% weight in the ranking algorithm, making it the single most heavily weighted factor. The reliance on peer assessments has serious implications. Historically prestigious institutions, because of entrenched networks of recognition, often continue to be perceived as superior by their peers, regardless of objective shifts in quality or performance. This system, bolstered by the U.S. News ranking methodology, risks cementing the standing of long-established prestigious institutions. As a result, emerging or historically underrepresented institutions that may deserve recognition could remain overshadowed.
Problem 2: Why Rank at All?
Let’s delve into the core issue: the very concept of rankings. Why, indeed, do we rank at all? By endorsing these lists, we unwittingly allow external entities to shape and dictate our values and priorities. For instance, according to U.S. News, 20% of a college’s worth should be derived from the opinions of its peers, and 8% should hinge on faculty salaries. But let’s consider an analogous, albeit hypothetical, scenario with cars. If a magazine ranked cars and decided that 20% of a car’s value should be based on its color and 8% on the brand of its tires, would that truly align with what every consumer values in a vehicle? Perhaps you care more about fuel efficiency or safety features than the hue of the paint.
Similarly, when considering colleges, what if faculty research, student engagement, or community impact are your top priorities, rather than peer opinions or faculty pay? In the grand scheme of things, relying solely on rankings can obscure the very details and nuances that might be most critical to you. By adhering strictly to such rankings, we risk losing sight of our unique preferences and priorities in favor of a one-size-fits-all metric.
A Holistic Perspective on Colleges
Navigating the landscape of higher education can be daunting, especially for individuals or families who may not have had much prior exposure to the collegiate world. It’s entirely understandable that many seek guidance through rankings, as they offer a seemingly straightforward way to evaluate institutions. Nonetheless, I want to highlight an alternative way that we (collectively) could approach the challenge: data grouping, also known as clustering.
Instead of linear rankings that prioritize certain outcomes, we can apply statistical methods to datasets like the College Scorecard to group colleges according to their characteristics. For example, Colleges A through F might cluster together due to their emphasis on student engagement and arts, while Colleges P, Q, and R form another cluster renowned for research and STEM disciplines. In this scenario, a student wouldn’t compare colleges by a numerical rank but would instead identify them as part of specific clusters, allowing decisions based on the attributes they value most.
While the groupings would still depend on our choices of what characteristics to include in the first place, this approach at least avoids ranking one cluster as “better” than another. Instead, it highlights similarities between institutions and their distinctiveness from others. Prospective students can then choose which cluster aligns best with their priorities, devoid of the biases of traditional rankings. This method is more transparent, more tailored to individual needs, and fairer in its perspective on higher education.
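To make this concrete, here is a minimal sketch of what such a grouping could look like with off-the-shelf tools, again building on the merged data frame from earlier. The feature names are made up for illustration, and the number of clusters is just one of several choices someone applying this would need to defend.

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Characteristics to group on; these names are placeholders for whatever
# College Scorecard variables one decides to include.
features = ["instructional_spend_per_student", "pct_arts_degrees",
            "pct_stem_degrees", "avg_class_size"]
X = merged[features].dropna()

# Standardize so no single characteristic dominates the distance metric,
# then group the colleges. The number of clusters is a modeling choice,
# not a verdict on quality.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_scaled)

# Each college ends up with a cluster label rather than a rank.
grouped = merged.loc[X.index, ["college"]].assign(cluster=labels)
print(grouped.groupby("cluster")["college"].apply(list))
```

The key design point is that the output is a set of labels, not an ordering: two colleges in different clusters are different, not better or worse.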
Rankings and Social Justice
Rankings, by their nature, often favor historically prestigious institutions, sidelining equally capable but lesser-known colleges. By adopting a data-driven clustering method in lieu of traditional outcome metrics, we can foster a more equitable approach to evaluating colleges, ensuring every institution has a chance to be seen.
While rankings might seem a convenient way to categorize institutions, they come with embedded biases and can overshadow crucial nuances. The clustering method, on the other hand, might amplify the unique strengths of each college, providing a more comprehensive and just perspective on higher education.
In Conclusion
It’s time we reevaluate the way we talk about educational institutions. Let’s champion a system that values diversity and individuality, free from the constraints of one-size-fits-all rankings.
Your Neighbor,
Chad