Rankings, reputations and surveys
Author: Don Westerheijden
Early in March, Times Higher Education (THE) published its annual reputation ranking of universities worldwide. THE proudly mentions that it is based on the ‘largest invitation-only academic opinion survey’, consisting of ‘tens of thousands’ of invited academics. The QS ranking is the other global university ranking that relies heavily upon reputation surveys in its methodology; QS goes further and uses two surveys: one among academics, which according to the QS website received 63,700 responses, and one among employers, totalling almost 28,800 responses. There is marketing talk aplenty: QS tries to impress readers with large figures, while THE makes the vaguer assertion, implying that even with fewer than 63,700 responses it can claim to be the largest because it limits the statement to ‘invitation-only’. This is the part of university rankings that vexes me most, because of the fundamental flaws of worldwide reputation surveys.
Nothing wrong with reputation
In Leviathan, Hobbes already realised that ‘what quality soever maketh a man beloved, or feared of many; or the reputation of such quality, is Power’. He was right to the extent that people will react to what they believe to be true. Concerning the quality of universities, reputation may look like an efficient and therefore attractive indicator of ‘quality’ for actors who do not have the time, need or other resources to delve deep for detailed information (Stigler, 1961), or to worry about what makes up ‘quality’. Reputation is ‘good’ for institutional managers (van Vught, 2008), because it is what many stakeholders act upon. It helps gain better access to funds, high-performing staff members, well-prepared first-year students, etc., all of which might result in measurably better performance in later years.
Nothing wrong, then, with having a high reputation for quality, as long as it is based on actual performance. But there’s the rub!
Individuals’ shortcuts to forming a reputation in their minds can be based on anything: rather than on the actual quality of education, a university’s reputation probably depends on hearsay (in academic terms: its previous reputation) and, as Marginson (2008) wrote, on being located in a well-known major city or on establishing a university brand, which I guess partly depends on institutional age.
But don’t use them as valid information for rankings
How valid are the reputations that academics hold of universities worldwide? Academic researchers may know individual colleagues, but rarely do they know about the performance of a whole department or field in another university. If they do, it is most often through having read their publications or having participated in research projects with them. Knowledge of colleagues’ quality of teaching is very rare. Moreover, actual knowledge of other universities is mostly limited to those in the respondents’ own country. Knowledge about foreign universities is mostly very limited: globalisation is far from complete, and for insider knowledge about universities the world certainly is not yet flat. Reputation is therefore an individual shortcut, not a valid source of scientific information. Those implementing worldwide reputation surveys cannot know how to value the different shortcuts taken by their respondents. What reputation surveys measure is therefore highly uncertain; they are ‘prone to being subjective, self-referential and self-perpetuating’ (Hazelkorn, 2011, p. 75).
Moreover, there are incentives to lie in reputation surveys: by reporting low esteem for other universities, one’s own institution scores better in the ranking. Several incidents in national rankings (especially in the US) show that manipulation is far from hypothetical: for instance, start with this article in Inside Higher Ed and then look at the ‘related articles’! Manipulation of much of the data can be discovered: false numbers, inflated salaries… But manipulation of reputations is, in principle, not detectable: they reside in the respondents’ heads, and the surveys by definition take them at face value, whatever they are based on, even if that is the desire to shine. It only came out that Clemson University officials systematically rated other universities ‘below average’ because one official talked about it in public. Maybe next time I should fill out such a questionnaire, to get the University of Twente into the top 200…
Hazelkorn, Ellen. (2011). Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence. London: Palgrave Macmillan.
Hobbes, Thomas. (1981). Leviathan. Harmondsworth: Penguin Books.
Marginson, Simon. (2008). Global, multiple and engaged: Has the ‘Idea of a University’ changed in the era of the global knowledge economy? Paper presented at the Fifth International Workshop on Higher Education Reforms ‘The Internationalization of Higher Education and Higher Education Reforms’, Shanghai.
Stigler, George J. (1961). The economics of information. Journal of Political Economy, 69(3), 213-225.
van Vught, Frans A. (2008). Mission diversity and reputation in higher education. Higher Education Policy, 21(2), 151-174.
About the author
Don Westerheijden is a senior research associate at CHEPS, where he co-ordinates research on quality management and also co-ordinates and supervises Ph.D. students. Don researches and publishes internationally on quality assurance and accreditation in higher education and its impacts, comparative higher education (e.g., the Bologna Process, employability, and excellence in education), and transparency tools (classifications and rankings). His personal tweets appear under @DFWHd.