The Publisher’s Trap: How Academic Publishing Undermines Research

Scientific publications are a central part of academia, which in turn increasingly affects our lives. This essay explores two of the most urgent issues in the publication system and their consequences: the way the profits of this market are unfairly distributed and the faulty metrics used to assess the quality of peer-reviewed publications.

Scientific development increasingly impacts our lives. Today, more than ever, we are surrounded by technologies based on academic and scientific work, while politicians and policy-makers around the world are gradually adopting more recommendations from academia, greatly impacting issues from unemployment and housing to mental health and air quality. For these reasons, it is crucial that we understand and discuss how the academic world operates.

The academic world is dominated by publications. Publications are the main means by which new scientific findings are communicated to the academic community and the outside world. Publications are also, more often than not, the basis on which academia is evaluated. University rankings, hiring processes, research funding, promotions, and overall prestige all count the quantity of a researcher’s or a university’s publications, along with a number of metrics attached to them, among the most important criteria.

Image: The Monument to an Anonymous Reviewer via www.hse.ru

Nowadays, academics are under ever-increasing pressure to publish. This raises a number of issues, such as the production of an untraceable volume of research output, which reached the staggering number of 2.5 million new scientific papers published in 2015, spread across over 30,000 peer-reviewed journals.1 This push for never-ending publication fosters a culture in which methodological or ethical shortcuts can seem tempting, and it leads to a preference for quantity over quality in a field where reflection, discussion and time should be the basis for developing and succeeding. Considering the impact of publications on academia, and the impact academia can have on our lives, I believe it is important to understand how this publication system works. Peer-reviewed publishing in particular has been increasingly criticised, and I will guide you through a short overview of some of its issues.

The process of peer-reviewed publishing

So, very briefly, how does the publishing process work?

Let’s say you have done some research and you want to share it with the world. There is an endless number of specialised journals in which you can publish your work. Each journal has an editorial board, whose size and composition vary from journal to journal and across disciplines. Nevertheless, an editorial board usually consists of a few main Editors (between one and four) and a number of Associate Editors and Consulting Editors. Editors are responsible for deciding on general publishing policies and for accepting or rejecting papers when they are first submitted. If a paper is accepted at this stage, they are further responsible for forwarding it to peer reviewers of their choice. The level of involvement, responsibility and workload varies by type of editor and from journal to journal.

Once the editors have initially accepted a paper, it is forwarded for peer review: (in most cases) two academics, who will anonymously and independently provide feedback on the paper and judge whether or not it should be published; a decision in which the editor usually has the final word. Peer review is not only the basis of the academic publication system but also its guarantee of a piece’s quality.

So, both the editorial board and the review team are made up of academics (from doctoral students to senior researchers, depending on a journal’s ranking and prestige) who, in the altruistic name of scientific progress, but also for prestige and career purposes, collaborate in this system. Indisputably, this is work that ought to be done by academics, and one should be happy that those assessing the quality of scientific publications are experts on the topic. Nevertheless, there are a number of things that have been criticised in this process. Let me focus on two: the distribution of profits and the evaluation of quality.

Fair pay for work?

In 2011, the scientific journal market had a total revenue of 9.4 billion dollars (roughly €9.6bn), with most of it concentrated in a handful of big companies.2 As an example, as of 2013, over 70% of all scientific publications in the fields of psychology and chemistry were concentrated in the hands of five big publishers.3

You might think this profit would be used, at least in part, to benefit universities and academics. Well, not exactly, as the majority of the most prestigious journals are not freely accessible. So, basically, most academics are paid by universities, which are often funded by the state itself. The public, very often, funds specific research projects too. Then, when academics wish to publish their research, they submit it and (if successful) publish it in privately owned journals.

The publication system as it is now actually exacerbates inequalities between universities.

Academics and researchers need to stay up to date with the state of the art in their fields. However, in order to get access to the papers and journals its academics need, the university – which employs the academics in the first place – and its libraries pay immense amounts of money for access to such publications. As an example, Harvard University spends $3.75 million yearly on journal subscriptions.4

This is a perverse system in two ways. First, those who fund the research have to pay publishers to access the very thing they funded in the first place. These publishers are not responsible for the most demanding, expensive and important tasks involved: producing the research and evaluating it. To make matters worse, the vast majority of peer reviewers and members of editorial teams are not paid for their contribution to the publication process, work that according to some reports is worth £1.9 billion (approximately €2.16 billion) a year globally.5

Secondly, a large portion of universities around the world obviously cannot afford to pay $3.75 million (approximately €3 million) in journal subscriptions. For that reason, the publication system as it is now actually exacerbates inequalities between universities: a university’s budget largely determines how much information and knowledge its staff has access to, since universities with lower budgets cannot pay for as many journal subscriptions as Harvard.

All in all, these paywalls not only hinder the public – who often fund the research – from accessing research outputs, but they also make academia increasingly isolated in its ivory tower, preventing scientific knowledge from impacting wider society and from being held accountable. They also allow publishers to profit extensively, while academics go unpaid for the extra hours they put into the peer review process.

Evaluating research

Let’s move to the second point of concern: the issue of how academic quality is determined. Together with how much one publishes, where one publishes is seen as an important criterion when assessing a CV, a department or a university. So, how do we assess which journals are the best?

According to the academic community, a piece’s quality is equated with the influence it has on the scientific production that follows its publication. For this purpose, a metric called the Journal Impact Factor (JIF) was developed. The JIF is an index given by the mean (average) number of times papers published in a specific journal over the previous two years have been cited in the current year. So, when research production in general (by a researcher, department, university, etc.) is assessed, evaluators usually take the JIF as an important measurement of a publication’s quality.
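Written out as a formula, the definition above amounts to the following (using 2017 purely as an example year; this is the standard two-year form of the index):

\[
\mathrm{JIF}_{2017} = \frac{\text{citations received in 2017 by items the journal published in 2015–2016}}{\text{number of citable items the journal published in 2015–2016}}
\]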

However, there are a few issues with this measurement. As any statistics course will tell you, a mean is not necessarily the best description of a set of data. Let’s say you want to calculate the mean salary of a company that has ten employees who each earn €1,000 and one who earns €100,000. The mean wage in such a company is €10,000 – the same as the mean salary of a company in which all eleven employees earn €10,000. This is because the mean as a measurement is highly sensitive to what are called extreme values: values that are much higher or lower than the rest of the observations.
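Spelling out the arithmetic of that example:

\[
\frac{10 \times 1{,}000 + 100{,}000}{11} = \frac{110{,}000}{11} = 10{,}000\ \text{€}
\]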

The JIF is a mean, so the same thing happens to it as in the salary example above. It has been shown that a large portion of the articles published in a journal are actually cited much less frequently than the JIF would indicate, whilst a small minority of the articles published there have a much higher number of citations.6 Likewise, there seems to be no major difference in the distributions of article citations between articles published in high-, medium- or low-JIF journals.7 So it seems that the JIF does a poor job of representing a paper’s, and an author’s, influence or quality, as publication in a high-JIF journal does not mean that a paper will be highly cited. Nonetheless, the JIF is still often seen as a measurement of someone’s quality or potential as an academic.
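Here is a toy illustration of this skew, with citation counts invented purely for demonstration (not drawn from any real journal):

```python
from statistics import mean, median

# Hypothetical citation counts for 20 papers in one journal:
# most papers gather few citations, while two outliers gather many.
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3,
             3, 4, 4, 5, 5, 6, 7, 8, 60, 95]

print(f"mean   = {mean(citations):.1f}")  # 10.6 -- what a JIF-style average reports
print(f"median = {median(citations)}")    # 3.0  -- what the typical paper receives
```

In this invented journal, the mean – which is what the JIF reports – is more than three times what the typical (median) paper actually receives.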

It is high time for the larger society to become aware of the mechanisms by which academia regulates itself.

There is a second major issue with the JIF, and that is how its value is calculated. Editorial boards lobby publishers over what comes to count as a ‘citable’ piece in their journals.8 These negotiations can and do greatly affect a journal’s final JIF, which in some cases can go from a JIF of 3 to 11,9 depending on small variations in what is accepted as a citable piece. To give you a sense of scale, Nature has a JIF of around 40, well above the highest JIF in the social sciences and humanities. Additionally, editorial boards decide strategically when to publish specific papers: papers that can be expected to be highly cited are published at the beginning of the year, so they have more time to be extensively cited before that year’s JIF assessment.10
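To see how sensitive the index is to that denominator, consider a hypothetical journal whose recent items attracted 330 citations in the assessment year. Counting all 110 published items as citable, versus only 30 of them, yields:

\[
\frac{330}{110} = 3 \qquad \text{versus} \qquad \frac{330}{30} = 11
\]

The citations are identical in both cases; only the negotiated denominator has changed.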

So, the JIF does a poor job of assessing the quality of publications, both because as a measurement it too often misrepresents the qualities of individual publications and because the way in which it is calculated is, apparently, negotiable. Yet the JIF is a metric with an immense impact on academic careers, which are increasingly characterised by job insecurity, precariousness and endless competitiveness.

I suppose we imagine academia as a place where precision and rigour are imperative. Still, this is clearly not the case when it comes to the way the publication system operates: academics aren’t fairly rewarded for their labour, universities spend thousands to millions of euros on journals whose content they have directly contributed to, and the way quality is determined remains faulty.

It is high time for the larger society to become aware of the mechanisms by which academia regulates itself. This essay is the first of two parts on the problems associated with academia and academic publishing. In the second part, I will review the progress and proposals that have been under development over the last couple of years to transform the system and eradicate these issues.

  1. The STM Report, March 2015.
  2. Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502.
  3. Idem.
  4. Chambers, C. (2017). The 7 Deadly Sins of Psychology. Princeton: Princeton University Press; for the full memo by the University of Harvard see http://web.archive.org/web/20160317160330/http://gantercourses.net/wp-content/uploads/2013/11/Faculty-Advisory-Council-Memorandum-on-Journal-Pricing-%C2%A7-THE-HARVARD-LIBRARY.pdf.
  5. Activities, costs and funding flows in the scholarly communications system.
  6. Falagas, M.E. (2010) Comparison of the distribution of citations received by articles published in high, moderate, and low impact factor journals in clinical medicine.
  7. Idem.
  8. Chambers, C. (2017). The 7 Deadly Sins of Psychology. Princeton: Princeton University Press.
  9. Idem.
  10. Idem.
António Valentim
António Valentim holds a master’s in Social and Cultural Psychology from the London School of Economics. Originally from Portugal, he will soon begin a PhD at the Weizenbaum Institute and Freie Universität Berlin, focusing on digital citizenship and the effects of the Internet on political participation and political identities. Contact: antoniodinisvalentim@gmail.com.