Quora Questions are part of a partnership between Newsweek and Quora, through which we'll be posting relevant and interesting answers from Quora contributors throughout the week. Read more about the partnership here.
Google Scholar turned the process of academic career selection into a social media video game. How did it do this?
- It popularized citation-based metrics of research output such as total citations, the h-index, and the i10-index. Google didn’t invent the h-index, but it certainly helped make it almost universally known among researchers.
- By popularizing citation-based indices, it made a “good” paper one that is highly cited, regardless of the field, or of why or by whom the paper was cited. The scientific quality of the paper’s content became irrelevant.
- All researchers, whether they like it or not, now feel the pressure to increase their citation metrics as fast as possible, and cite themselves gratuitously. Quantity before quality.
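For concreteness, the two indices mentioned above are straightforward to compute from a list of per-paper citation counts. Here is a minimal sketch (the function names and sample numbers are illustrative, not part of any Google Scholar API):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations (Google Scholar's i10)."""
    return sum(1 for cites in citations if cites >= 10)

# A prolific author with many moderately cited papers...
prolific = [120, 80, 45, 40, 33, 28, 20, 15, 12, 11, 9, 5, 2]
# ...versus a single-paper author with one influential result.
one_hit = [257]

print(h_index(prolific), i10_index(prolific))  # 10 10
print(h_index(one_hit), i10_index(one_hit))    # 1 1
```

Note the asymmetry this creates: the single-paper author scores h = 1 no matter how influential that one paper becomes, which is exactly the bias described below.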
To oversimplify grossly, a faculty selection committee typically consists of one third who are experts in the candidate’s field, one third who know something about the field, and one third who have very little idea about it at all. They are very busy people with limited time to evaluate the candidates: they read the candidate’s CV, see them present a lecture, and then interview them. Within a few hours, they must make a decision.
Consider candidate A, who has published a volume of decent-but-average work over a period of time. Thanks to self-citations and the probabilistic effect of having many publications, which are bound to be cited sooner or later, she already has a few thousand citations and an h-index somewhere between 20 and 40. Now consider candidate B, who has spent years doing nothing except solving some obscure problem that could lead to cold fusion or time travel. She publishes a single paper, popular enough to attract a few hundred citations within a few years. A majority of the selection committee doesn’t understand the significance of the paper; they just see h = 1, citations = 257. Without the time to understand the research, they are instantly biased against candidate B. So candidate B, who could have changed the world with that one paper, loses out to journeyman candidate A, who then proceeds to pump out reliably mediocre work for decades.
This change in the nature of research careers is not really the fault of Google Scholar. It simply reflects the times in which we live. Once upon a time, the popular idea of academia was that of an individual genius toiling away in an ivory tower to produce a single masterpiece. The academic was independently wealthy or sponsored by a rich person, and thus had the luxury of pursuing a passion regardless of short-term rewards.
Now things are different - academia is a profession and a big business, in terms of both education and intellectual property. The commercial and national economic incentives are so great that the emphasis has shifted to management skills and the entrepreneurial ability to raise money. Money is thrown at big problems with the confident assumption that, if enough money is applied, the problem will be solved.
In this world, there is no room for the individual researcher competing against an army of assistants who will chance upon a solution sooner or later. When armies of researchers explore the solution space, the sheer volume of research has a quality all its own: the best result is simply more likely to come from the highly cited, productive army, making it a safer short-term bet than the individual, a key consideration when trying to justify the allocation of public funds. That might be why, in a perverse sense, the social citation-networking assumption behind Google Scholar is so powerful. It is the same assumption used in Google Search that made Google so wealthy in the first place.
Are there other ways to quantify research output? Impact factors, which have been around for a long time, reflect the average number of citations per paper in a journal, so “good” journals have higher impact factors and more attached prestige. One major criticism of impact factors is the power of journal editorial boards to influence paper acceptance, and hence the careers of researchers, in a way that is much harder with the h-index and i10-index. The journals themselves can also fake their impact factors, although well-known lists of such predatory journals exist.
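To make the journal-level average concrete: the standard two-year impact factor divides the citations a journal receives this year, to papers it published in the previous two years, by the number of those papers. A toy sketch, assuming the inputs have already been aggregated (names are illustrative, not any official API):

```python
def impact_factor(cites_this_year_to_recent_papers, num_recent_papers):
    """Two-year impact factor for year Y: citations received in Y to items
    the journal published in Y-1 and Y-2, divided by the number of such
    items. Inputs here are assumed pre-aggregated for illustration."""
    return cites_this_year_to_recent_papers / num_recent_papers

# A journal whose 120 recent papers drew 300 citations this year:
print(impact_factor(300, 120))  # 2.5
```

Because the numerator counts all citations, including citations the journal can encourage or coerce, this average is exactly the quantity a determined editorial board can inflate.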
So if research-output statistics can be misleading, why are they used? Correlations have been shown between citation metrics and other measures of research success (which might be a case of the result begging the question, but the link is still there), and it has even been suggested that such indices can predict one’s chances of becoming research faculty. Thus the possibly improved aggregate objectivity of these social citation indices, and the difficulty of faking them on a large scale, may explain their popularity to at least some degree.
How has Google Scholar changed academia? originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world.