Academic rankings: The tide begins to turn

In my 2018 book, The Soul of a University, a whole chapter is devoted to saying (in effect) that the phenomenon of ‘university world rankings’ is really just a global confidence trick. At the time, this was a minority opinion. Five years later, there is evidence that the tide is beginning to turn. This change should give pause for thought to all those university leaders who still fawn on the commercial rankers.

The methodological argument against ‘university world rankings’ is well known and has been made many times. Essentially, it boils down to this: in order to compile a ranking, you need to make so many arbitrary choices between equally plausible alternatives that the result becomes meaningless.

It is not difficult to construct a university ranking. What is needed is not so much any technical skill as enough blind self-confidence to tell the world that the arbitrary choices you have made in constructing your ranking actually represent reality.

First, there is the choice of which categories of activities to evaluate. This choice is often driven by expediency because some activities (like research outputs) are easier to measure than others (like societal engagement). Naturally, the choice you make of what to evaluate will advantage some universities and disadvantage others.

Second, you have to choose performance indicators in your chosen categories and how to measure them. Research performance, for example, has many plausible indicators, and whatever selection you make could easily have been different, with different outcomes. Also, when choosing performance indicators, you have to decide the manner and extent to which you rely on indicators of opinion as opposed to indicators of fact. ‘Reputation’, for example, is a matter of opinion, as is ‘student satisfaction’.

Third, for each performance indicator you have to come up with a number that represents your measurement of that indicator. Even the term ‘measurement’ carries a dubious suggestion of objectivity. In practice, the so-called ‘measurement’ again requires a number of choices. You need to choose, for example, which data sets to use and what level of reliability in those data sets you will be content with.

You also need to choose whether you will deal with gross numbers (which will favour larger institutions) or normalise the numbers according to the size of the institution (which tends to favour smaller institutions). Even normalising your numbers ‘relative to size’ involves a level of choice because there is no generally agreed definition of what the size of a university is.
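To see how much hangs on this single choice, consider a deliberately crude sketch in Python (the institutions and all the figures are invented for illustration): the same two universities swap places depending on nothing more than whether you count gross output or output per staff member.

```python
# Toy illustration (invented numbers): the same two institutions swap
# places depending on whether you rank by gross research output or by
# output normalised per academic staff member.

universities = {
    "Large U": {"papers": 12000, "staff": 4000},  # big institution
    "Small U": {"papers": 4500,  "staff": 1000},  # small institution
}

# Ranking by gross output favours the larger institution.
by_gross = sorted(
    universities, key=lambda u: universities[u]["papers"], reverse=True
)

# Ranking by papers per staff member favours the smaller one.
by_normalised = sorted(
    universities,
    key=lambda u: universities[u]["papers"] / universities[u]["staff"],
    reverse=True,
)

print("Gross:     ", by_gross)       # ['Large U', 'Small U']
print("Normalised:", by_normalised)  # ['Small U', 'Large U']
```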

Fourth, having already made many choices to arrive at a number for each performance indicator, you still need to decide on a formula for combining those numbers into one number (which would then deliver your ranking).

You could, for example, take the average – either mean or median. Or you could assign weights to each performance indicator, which can, of course, be done in infinitely many ways. There are many different ways of combining a set of numbers to yield one number, but there is no strong reason, either mathematical or empirical, for choosing one such method above any other.
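Again, a toy sketch makes the arbitrariness visible (the names, scores and weights are all invented): two equally defensible weighting schemes, applied to the very same indicator scores, produce opposite rankings.

```python
# Toy illustration (invented scores): two equally plausible weighting
# schemes over identical indicator scores yield opposite rankings.

scores = {
    "University A": {"research": 90, "teaching": 60},
    "University B": {"research": 65, "teaching": 95},
}

def composite(indicators, weights):
    """Weighted sum of indicator scores -- one of infinitely many formulas."""
    return sum(weights[k] * v for k, v in indicators.items())

research_heavy = {"research": 0.7, "teaching": 0.3}  # A comes out on top
teaching_heavy = {"research": 0.3, "teaching": 0.7}  # B comes out on top

for weights in (research_heavy, teaching_heavy):
    ranking = sorted(
        scores, key=lambda u: composite(scores[u], weights), reverse=True
    )
    print(weights, "->", ranking)
```

Nothing about the universities changed between the two runs; only the ranker’s choice of weights did.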

Any ranking of universities therefore reflects the choices made by the ranker at least as much as it might reflect any reality about those universities.

It is hard to escape the suspicion that rankers make their choices according to their own preconceived notions of which universities are ‘the best’. If a ranking did not fit their preconceptions, they would change their parameters rather than adjust their preconceptions – as has, in fact, happened.

What this means is that rankings are normative, not descriptive. They create a reality at least as much as they reflect a reality.
