AI generated illustration using Microsoft Designer

Academia is undergoing a rapid transformation characterized by exponential growth of scholarly outputs. This phenomenon, often termed the "firehose problem," presents significant challenges for researchers, publishers, funders, policymakers, and institutions alike. These challenges include stress on the peer-review system, a diminished capacity to stay abreast of the latest research, a shift toward valuing quantity over quality of scholarship, and a divergence between the rewards and incentives of academia and the outputs that funders and society expect. This essay explores the implications of the firehose problem and potential approaches to resolving it through reform of the incentives and rewards for publishing.

The firehose

At the 2024 National Academies workshop, "Enhancing Public Access to the Results of Research Supported by the Department of Health and Human Services," Tom Ciavarella of Frontiers re-raised an idea that periodically pops up in conversations about scholarly publishing: there are too many research articles being submitted to journals - that is, we publish too much. This was in response to a researcher who complained about the firehose problem in academic publishing - it's challenging to keep up with the volume at which research publications are produced. The researcher was, by implication, placing some blame for the firehose on the publishers. Ciavarella noted that publishers only respond to demand - they build bigger pipes on which to fit bigger hoses; if there is too much research coming out of the publication firehose, he claimed, it's because there are too many submissions for publication. The relationship between supply and demand in academic publishing is more complex than this analogy supposes. After all, the traditional currency that academic institutions exchange for tenure and promotion of their faculty is peer-reviewed publications - and peer-reviewed publications are also the fiat currency of publishers. More than fifty years after Silen's seminal editorial criticizing this currency, it's still a publish or perish world.

Upstream of the nozzle

To assess Ciavarella's claim, a reasonable estimate of the global number of article submissions to academic journals is needed. Submissions represent the reservoir supplying the firehose. It is, however, incredibly difficult to get accurate data on the number of submissions to journals. Most publishers keep their data closed - a point that the Office of Science and Technology Policy respectfully glosses over in their Sisyphean reports to Congress on public access to federally funded research. Publishers do, however, advertise their journal "acceptance rates," in part because of a belief that such rates are inversely proportional to journal prestige - the more submissions a journal rejects relative to the number it accepts, the more prestigious that journal claims to be (alongside other factors such as readership and citation rates). Publishers translate that prestige into brand power - creating market forces that drive additional submissions from researchers eager to attach their names to recognizable brands.

In a preprint from a few years ago, Rachel Herbert of Elsevier's International Center for the Study of Research (ICSR) evaluated the acceptance rates of over 2,000 journals (80% of them published by Elsevier) in 2017. The study found that the average acceptance rate was 32%, with a range of 1.1% to 93.2% - similar to rates found independently around the same time. Acceptance rates, of course, are a function of editorial policy that can be influenced by the publisher through a range of factors, such as page allotments per issue, the scope of the journal's field, editorial prerogative, reviewer consensus on the potential scientific importance of a submission, and the number of submissions the journal receives. Leaving aside the fact that acceptance rates can be artificially deflated (and prestige factors artificially inflated) simply by increasing the number of submissions of poor-quality manuscripts, they can be used in conjunction with estimates of the total number of articles published to produce a reasonable back-of-the-envelope estimate of the number of submissions.

According to Dimitri Curcic at WordsRated, more than 64 million papers have been published in academic journals since 1996, and output has grown by 23% in the last five years alone. Over the same period, the number of active journals has increased by more than 28% - outpacing the proliferation of articles (which suggests that many of these new journals have higher acceptance rates, providing venues for articles that would otherwise not have been accepted for publication). These data also suggest that this relationship has dampened in recent years, with the average annual growth rate of journals (1.67%, 2016-2020) being about a third of that of published articles (5.28%, 2018-2022) in the last five years of available data for each.
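As a quick arithmetic check on that comparison, here is a minimal sketch in Python using only the annual growth rates quoted above; the variable names and the five-year compounding window are illustrative, not figures reported by WordsRated itself:

```python
# Compare the average annual growth of journals vs. articles cited above.
journal_annual_growth = 0.0167  # average annual growth of active journals, 2016-2020
article_annual_growth = 0.0528  # average annual growth of published articles, 2018-2022

# Journals grew at roughly a third the annual pace of articles.
ratio = journal_annual_growth / article_annual_growth
print(f"journal/article growth ratio: {ratio:.2f}")  # -> 0.32

# Implied cumulative growth over a five-year window at each average rate
# (note: these windows differ from the since-1996 cumulative figures quoted above).
journals_5yr = (1 + journal_annual_growth) ** 5 - 1
articles_5yr = (1 + article_annual_growth) ** 5 - 1
print(f"implied 5-year growth: journals {journals_5yr:.1%}, articles {articles_5yr:.1%}")  # ~8.6% vs ~29.3%
```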

The WordsRated data suggest that there were, conservatively, around 4 million articles published in 2017, the year examined in Herbert's ICSR preprint (about 2.5 million of those were in science and engineering fields, based on National Science Foundation estimates). Combining these data, we can reasonably estimate that roughly 12.5 million articles were submitted globally in 2017 - almost 24 submissions per minute.[1] That is a lot of papers under consideration in the scholarly publishing market.
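For readers who want the arithmetic behind note [1] spelled out, here is a minimal sketch in Python using the rough figures cited above (the variable names are illustrative):

```python
# Back-of-the-envelope estimate of global submissions (note [1]).
# Inputs are the rough 2017 figures cited above: ~4M published articles
# and an average acceptance rate of ~32% from Herbert's ICSR preprint.
published_2017 = 4_000_000        # articles published in 2017 (WordsRated estimate)
acceptance_rate = 0.32            # average journal acceptance rate (Herbert, 2020)
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

submissions = published_2017 / acceptance_rate  # ~12.5 million submissions
per_minute = submissions / minutes_per_year     # ~23.8 submissions per minute

print(f"Estimated submissions in 2017: {submissions:,.0f}")
print(f"That's about {per_minute:.0f} submissions per minute")
```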

But is it too many submissions translating into too many papers coming out of the nozzle of the firehose? Because the firehose analogy is an oversimplification of a complex system, we need to consider more factors than just the demand from researchers for more outlets to publish in and the publishers' response in supplying those outlets. Whether this system has resulted in a surplus of published manuscripts – flooding the streets well after the fire is out – is a question about the quality of that surplus.

Is the water from the firehose safe to drink?

Few would argue that a surplus of high-quality, groundbreaking, innovative research is characteristic of a system producing too many papers. On the other hand, an excess of poor-quality, low-impact, and questionable research, published in journals that are fit for that purpose, should[2] give pause to anyone drinking from the firehose. That is, perhaps it's not just the excess number of papers that contributes to the surplus of frankly poor research being published. Maybe there are too many poor-quality journals too - responding to an underlying demand to publish low-quality papers.

During the digital transformation to online publication, there was one obvious variable that publishers could have tuned to satisfy this demand - increasing the acceptance rates of existing journals. But doing that would have diminished the underlying value of their high-prestige titles. So, instead, they tuned a second variable: increasing the number of journals with higher acceptance rates in the system while protecting their big brands with constant rates. This practice is often couched within the seemingly innocuous guise of field specialty journals. Why would field specialization need a lower threshold for publication if the merits of peer review are constant?

Here too, however, even after accounting for quality, the relationship between the number of submissions and the number of journals is more complex than a simple feedback loop. There is a hidden dependence resulting from the very rational behavior of the researchers contributing to this loop. The expansion of journals with higher acceptance rates alters the rational calculus for researchers: all things being equal, higher acceptance rates create a perverse incentive to submit as many manuscripts as possible, since the underlying probability of acceptance is simply higher than if those same manuscripts were submitted to a journal with a lower acceptance rate, and hence higher prestige. Publishers often compound this incentive by offering automatic referrals for papers rejected from higher prestige journals to lower ones - allowing for almost instantaneous and friction-free forwarding of a rejected manuscript to another journal in their own portfolios (and certainly never to a competitor's journal). Absent significant disruption, this feedback loop is self-replicating and self-expanding.

The idea that there are too many journals accepting too many manuscripts for publication is not new. Christa Easton summarized the pressures that libraries faced sustaining subscriptions to the growing number of serial publications during the digital transformation in the 1990s while, simultaneously, publishers faced increased incentives to produce more journals. In 1997, the great migration to digital formats was only just beginning and came at significant risk and cost - now, it's effectively effortless for a publisher to spin off a new online-only journal. Publishers – and not just experienced publishers but really anybody, including scholarly societies and unaffiliated individuals – can create a new journal in minutes. Researchers can – and do – respond to this availability by slicing up their work (and their data) into minimally publishable units (sometimes called salami slicing), knowing that they have lower chances of contributing a single high-quality, holistic article to the scholarly record than they do contributing multiple smaller studies to a wider variety of field journals (a practice that can also contribute to self-citation and self-plagiarism).

Who is testing the water?

Another argument is that the growth in the number of journals and the number of published research articles has been a response to scientific advances requiring increasingly specialized knowledge. The scientific publication model, and the incentives superstructure supporting researcher participation in it, is based upon the premise that science advances by taking baby steps – little discoveries, a single hypothesis falsified, a new data point, a novel method, another patent. And yet, there is very little evidence that this slow march of science correlates positively with the volume of published research. In fact, there is a countervailing theory, and some evidence to support it, that the overall quality of research is inversely proportional to the overall quantity of research - or perhaps just a random consequence of the pool of available sources to cite.

Major breakthroughs, theories, and discoveries in science are rare and, relative to the volume of research over the last half-century, appear to be getting rarer. If journals provide value inversely proportional to their acceptance rates (i.e., their selectivity, a component of their prestige by the publishers' metrics), then one might assume that the quality of the reviews and the rigor of the underlying research must also be significantly higher than at less prestigious journals, right? Well, if that's the case, then must it also be true that a less prestigious journal has a poorer review system allowing lower quality research to be published? If publishers were building bigger pipes solely in response to the increase in the volume of submissions flowing through the firehose to altruistically advance science in proportion to its progress – and not to increase their revenues – they would have produced more journals with lower acceptance rates, or improved the fittings and bushings connecting their pipes to the firehose by innovating with alternative business models (a few ideas that some good-faith publishers have attempted to introduce into the system, without much success in disruption, include compensating reviewers and charging non-refundable submission fees at the time of submission).

Two widespread and successful, if inequitable, features that have come to dominate the system over the last 20 years are article processing charges (APCs) and journal impact factors (JIFs[3]). APCs support the surplus revenues of the journals and not the quality of the underlying research that they publish. This point was highlighted in the 2022 Budapest Open Access Initiative 20th anniversary statement: "[APCs] don’t pay for improved quality but for the perception of improved quality…Career advancement can depend on that perception. But that is a problem to solve, not an immutable reality to accommodate.” The JIFs were also called out by the statement as erroneously mistaking impact for quality and conflating journal impact with article impact. Even if the JIFs and their individual impact factor counterparts were once effective measures of quality, they are unlikely to remain effective once they become targets for optimization - Goodhart's Law in action.

If peer reviewers have a role in the firehose analogy, they play the part of water quality control. Across the academic publishing landscape, however, quality control through the peer review system is a mixed bag – and the mix may be increasingly unreliable. It might seem intuitive to assume that, given the greater resources and credibility of higher prestige journals, there's a higher concentration of high-quality, rigorous reviews there, while poor-quality reviews pool in less prestigious outlets, but the evidence appears to be to the contrary. Of course, the worst of the worst falls within the predatory space, where there's a complete absence of review or disingenuous promises of quality rapid review. Sometimes this takes the form of otherwise legitimate journals allowing greater tolerances in their review processes to accelerate the publication of "special issues" (which sometimes come with the condition that authors pay APCs even though they were invited by the journal to contribute - a suspect practice altogether). Certainly, no peer review system or peer reviewer pool is perfect, and significant, often consequential lapses in scientific integrity or methodology escape even the most rigorous peer review and end up in prestigious publication outlets.

The most egregious and sometimes hilarious lapses in the peer review system in high profile journals – some at the very apex of prestige – only demonstrate that there is a significant problem with peer review at a systemic level.

The conjecture advanced here, and by others, is that this problem is threefold:

  1. The demand for peer review is too high;
  2. The rewards for conducting peer review are too low;
  3. Most editorial systems lack the resources required for rigorous peer review.

The initial conditions of the first aspect of the peer review problem have already been implied here: there are likely too many papers and too many journals for the system to support. If demand for peer review is too high, it's likely because of a deficit in the supply of qualified peers available to conduct reviews.

There's good data from the National Science Foundation to support that claim. According to the latest report on the results of the Survey of Earned Doctorates by the NSF’s National Center for Science and Engineering Statistics, the number of earned doctorates in science and engineering fields has risen on average over time. The trend peaked in 2019 at 42,898 doctorates awarded before declining slightly during the pandemic years.

However, even as universities award a greater number of doctorates over time, fewer and fewer recipients are entering academic jobs at universities, where the vast majority of journals draw their editorial review boards. At the 2024 State of the Science Address, NASEM President Marcia McNutt called attention to the deficit in academic replacement. Just 36% of doctorate recipients not immediately going into a postdoctoral fellowship in 2021 reported that their first job out of graduate school would be in academia - a significant decline from 48% in 2001. Now, it's true that a greater number of doctorate recipients are entering postdoctoral fellowships, but that rate hasn't kept up with the decline in academe overall – postdoctoral fellowships are not offsetting the decline in career positions filled in academia. Most doctorate recipients go into non-university positions after graduation, and for good reason: the academy cannot compete with the salary, benefits, and job security and stability that other sectors provide to newly minted PhDs. And once in those jobs, very few PhDs continue to publish in or review for scholarly outlets at high rates – some journals have strict review policies that do not allow non-academics, or individuals who have not published recently, to participate in peer review, which compounds the problem.

For those who remain in the academy, participation in peer review – ostensibly lauded as a critical component of the scientific process – is not given adequate recognition and reward by academic institutions. Peer review is typically treated as service by tenure and promotion committees, the least valued aspect of the expected contributions of scholars. Recognition mechanisms outside of the academy, and the metadata infrastructure to support them, are increasingly becoming available[4], and yet the culture of the academy has been slow to adopt these innovations.

Based on these trends, it's very likely that the proliferation in the number of journals and articles rose concurrently with a decline in the availability of qualified reviewers to conduct robust, high-quality peer review. Not only does there appear to be a demand to publish low-quality research, there is also a deficit in the quality of the peer review system overall. Here too, artificial intelligence – even with its potential benefits to the peer review system (such as augmented literature search, detection of falsified data and images, and translation services) – is likely contributing to a downturn in thoughtful and rigorous review, as generative AI can do both the reading and the writing on behalf of a human reviewer. Some publishers require reviewers to abstain from using those tools, even when they could be helpful. Given the unrewarded, high demands on qualified human reviewers, we can be nearly certain that some are employing these tools without disclosure.

All of this points to a significant crisis unfolding within the scholarly peer review system[5]. Legitimate journals are finding it increasingly difficult to solicit and retain high-quality and timely responses to invitations for peer review. The pressures on potential reviewers are perverse. The motivations for contributing to peer review are almost entirely altruistic, save for the ability to stay abreast of potentially competing research – or, dreadfully, to quell competing research from behind the veil of anonymity. There is very little incentive provided by publishers, academic institutions, and scholarly societies for potential reviewers to contribute their valuable time in an increasingly productivity-constrained environment. For many journals, the time between the initial invitation and the final decision is substantially delayed, and editorial boards have to reach out to an increasing number of potential reviewers before finding someone available and willing to review. Worse, editorial boards and journals do very little to assess and report on the quality of reviews or to incentivize reviewers to maintain the scientific integrity of the reviews they provide. Authors are required to disclose conflicts of interest; reviewers, however, are rarely if ever expected to do the same.

Still, peer review remains an incredibly valuable asset to the research enterprise. When done effectively and transparently, peer review provides significant benefits to science, including improving the quality and rigor of works, correcting grammar and typesetting, identifying theoretical gaps and insights, fostering collaborations, conferring public trust in the oversight of science, and more. But this crisis in peer review threatens all of those benefits. Without intervention and significant reform in the culture of scholarly publishing, it will be peer review – and not unsustainable publication business models – that undermines science the most. The firehose will continue to flow even as the water fails potability standards.

Sip from the spring

Silen, and many others after him, pointed out the unintended consequences that the publish or perish culture in academia has created for the research community, consequences revisited here: a proliferation of low-quality publications, slow review times (or false promises of high-quality fast review), the proliferation of predatory journals and paper mills, and more. With the recent emergence and widespread availability of generative artificial intelligence, all of these problems are accelerating in magnitude and velocity and will continue to put significant strain on the fittings and controls in this system – possibly until the hose bursts. As we enter a second digital transformation in scholarly writing, one characterized by demands for data and code sharing in publicly accessible repositories and by the challenges and promises of AI[6], reform of infrastructure alone will be insufficient to support the forthcoming deluge – we need significant changes to the incentive structure for academic scholarship to lower the pressure, and increase the quality, of the water coming out of the firehose.

Researchers hold an incredible amount of market power in scholarly publishing - they drive both the supply of and the demand for manuscripts. Researchers can, and should, leverage that power to challenge the status quo and resolve the firehose problem that they themselves decry. They are the source of the Pierian spring, and everything downstream depends on its flow. One way to temper the firehose would be to sip directly from the spring rather than from the nozzle.

Scholarly writing is much richer than just publications. Researchers also produce grant proposals, editorials, policy briefs, blog posts, teaching curricula and lectures, software code and documentation, curated datasets, and lab notes and codebooks. Some of these scholarly outputs may end up being published – some may even end up changing how science is communicated and conducted. But, realistically, most will not obtain the recognition that their authors and contributors deserve for these "non-traditional" outputs. Many of these outputs hold incredible value for the scientific community and the public. A singular focus on writing manuscripts to submit for publication lowers the likelihood that the value of these other materials will be realized.

By writing more and publishing less, researchers can lower the pressure of the firehose while continuing to make valuable contributions to the world. When academics write policy briefs that inform legislation, create blogs that enhance dialogue in their fields, produce open data that is broadly reused, or write open software that makes a gold-standard method widely available, they should be rewarded on par with any particular peer-reviewed publication in a journal. All of these materials fill the spring even if only published manuscripts filter through to the firehose.

The superstructure of incentives – predominantly those that provide credit for the purposes of tenure, promotion, and other career advancement – should treat some combination of these outputs with parity to publications. Increasingly, funder policies require preprints, data and code sharing, and other research outputs from the work they support with their grant money, and some funders incentivize these requirements by rewarding compliance with parity to publications for the purposes of future grant review. The same should be true of the home institutions that receive the funds researchers are awarded, and researchers should demand that the full cornucopia of the work supported by their grants is rewarded equitably for the purposes of performance, tenure, and promotion review. That is not to say that major breakthroughs and discoveries should not receive special attention – rather, it's a proposal to amplify the entire portfolio of work that led to those discoveries.

Sharing ideas earlier with the community can greatly improve the quality of scholarship and broaden the impact and reach of those ideas. One way to accomplish this with research is by contributing preprints and preprint reviews. Preprints, of course, can be added to a repository without ever undergoing review - that's both an advantage and a disadvantage. On the one hand, mass adoption of preprints could shift the firehose problem to a problem of a poisoned spring, with a large volume of unreviewed manuscripts filling the pool; on the other, preprints can be checked by many more potential reviewers and provide an avenue for sharing important results that may not otherwise get published in a journal (such as null results). To help strike that balance, preprint review provides an additional filter from the spring into the firehose.

Preprint review is increasingly important as a mechanism to reform the peer review system – reform that seeks to shift peer review from a monoculture maintained solely by publishers into an ecosystem largely maintained by researchers, their institutions, and their funders. With preprint review, authors participate in a system that views peer review not as a gatekeeping hurdle to overcome on the way to publication but as a participatory exercise to improve scholarship.

Preprint review can also reveal potential errors and issues of scientific integrity earlier in the development of a manuscript, so that authors can make more informed decisions about the state of their research and what should be done to address those issues in future revisions. And because preprint review is done in the open, reviewers have an opportunity to interact with one another, expose disagreements, highlight consistencies, and respond to ideas in ways that ultimately give authors a more holistic review. It's also a way to ensure that authors retain full control over their intellectual property and its derivatives by asserting licenses that fit their personal needs and values (e.g., using a CC-BY-NC license if one does not want publishers to sell the content of a manuscript for use in AI).

The macro-effect of reforming peer review to include widespread use of preprint review would align nicely with a widely held philosophy of science that treats science communication as a conversation rather than a broadcast. Models of preprint review are many, including relatively novel approaches such as PREreview's live review, where multiple reviewers collaborate to review a preprint in real time online, or ASAPbio's crowd preprint review, done collaboratively but asynchronously. Some journals, like eLife with its new model, already consider preprint review in their editorial pipelines – bypassing the need to solicit additional feedback and accelerating editorial decisions. Recently, the Gates Foundation refreshed their open access policy to require deposit of preprints by their grantees. Certainly, funders like Gates continue to value peer review, and greater adoption of preprint review can shift the inaccurate belief that all preprints lack review (a belief codified in Gates' required disclosure for researchers posting preprints).

The firehose problem in academic publishing is unlikely to be resolved by changing the pipe fittings alone. Of course, that is the rational option publishers choose in response to the apparent demand from researchers - as Ciavarella rightly pointed out. The underlying demand, however, is fueled by a complex of misaligned and perverse incentives to publish or perish in academia. Reform at the fittings has only compounded the problem – adding demand to an already stressed peer-review system and favoring quantity over quality of published manuscripts. A more holistic approach should focus repairs upstream, away from the nozzle, at the source of the knowledge that flows through the firehose itself – a spring filled with a variety of scholarly outputs beyond just manuscripts. Incentives for filling that spring should mesh with the rewards for publishing offered by funders, academic institutions, policymakers, publishers, and researchers themselves. This requires collegiality among all of those stakeholders, working together without polarization. In sum, alleviating the pressure coming out of the firehose is straightforward when the incentives are appropriately and collaboratively aligned: write more and publish less.

Acknowledgments

Special thanks are owed to Stuart Buck, Tom Ciavarella, Erin McKiernan, Peter Suber, and Crystal Tristch for their efforts in improving this work.

Disclosure

The opinions expressed here are my own and may not represent those of my employer, my position, or the reviewers. For full transparency: I am a member of the scientific advisory board of PREreview, which I cited here. I contributed - either as author or reviewer - to a few of the papers incorporated by reference above. I have been guest editor or associate editor on a number of special issues during my academic career though I have never participated in soliciting direct contributions to those issues. I have made every attempt at citing works that are publicly accessible - a few works may not be freely available to all readers.

Notes

  1. (4M publications / 0.32 acceptance rate) = 12.5M submissions; (12.5M submissions / 525,600 minutes in a year) ≈ 24 submissions per minute. This should conservatively underestimate the true rate. ↩︎
  2. The same argument about the balance of quantity over quality has been made about books too. ↩︎
  3. Pronounced like, but not to be confused with, the famous brand of peanut butter. ↩︎
  4. A trusted colleague once warned: “be careful what you eat in the scholarly kitchen.” Thankfully, this article is good soup. ↩︎
  5. This is the most comprehensive and up-to-date review of peer review currently available. The scope of the article demonstrates the value of peer review, its novelty as a 20th Century practice, and the challenges that jeopardize its contemporary legitimacy in the 21st Century. It's well worth a read and is freely available open access. There is another great recent article about the crisis in peer review by Colleen Flaherty, behind a paywall, here. ↩︎
  6. There are new tools available to researchers for writing and conducting peer review, including emerging artificial intelligence tools. ↩︎

References

Patil, C., & Siegel, V. (2009). Drinking from the firehose of scientific publishing. Disease Models & Mechanisms, 2(3–4), 100–102. https://doi.org/10.1242/dmm.002758

Hanson, M. A., Barreiro, P. G., Crosetto, P., & Brockington, D. (2023). The strain on scientific publishing (Version 2). arXiv. https://doi.org/10.48550/arxiv.2309.15884

Jin, S. (2024). Should We Publish Fewer Papers? ACS Energy Letters, 9(8), 4196–4198. https://doi.org/10.1021/acsenergylett.4c01991

Moosa, I. (2018). Publish or perish: Origin and perceived benefits. In Publish or Perish (pp. 1–17). Edward Elgar Publishing. https://doi.org/10.4337/9781786434937.00007

Silen, W. (1971). Publish or Perish. Archives of Surgery, 103(1), 1. https://doi.org/10.1001/archsurg.1971.01350070027002

Herbert, R. (2020). Accept Me, Accept Me Not: What Do Journal Acceptance Rates Really Mean? [ICSR Perspectives]. https://doi.org/10.2139/ssrn.3526365

Björk, B.-C. (2019). Acceptance rates of scholarly peer-reviewed journals: A literature survey. El Profesional de La Información, 28(4). https://doi.org/10.3145/epi.2019.jul.07

Easton, C. (1997). Too many journals, in too many forms? Serials Review, 23(3), 64–68. https://doi.org/10.1080/00987913.1997.10764393

Harvey, L. A. (2020). We need to value research quality more than quantity. Spinal Cord, 58(10), 1047–1047. https://doi.org/10.1038/s41393-020-00543-y

Ioannidis, J. P. A. (2015). A generalized view of self-citation: Direct, co-author, collaborative, and coercive induced self-citation. Journal of Psychosomatic Research, 78(1), 7–11. https://doi.org/10.1016/j.jpsychores.2014.11.008

Casadevall, A., & Fang, F. C. (2014). Specialized Science. Infection and Immunity, 82(4), 1355–1360. https://doi.org/10.1128/iai.01530-13

Tumin, D., & Tobias, J. (2019). The peer review process. Saudi Journal of Anaesthesia, 13(5), 52. https://doi.org/10.4103/sja.SJA_544_18

Michalska-Smith, M. J., & Allesina, S. (2017). And, not or: Quality, quantity in scientific publishing. PLOS ONE, 12(6), e0178074. https://doi.org/10.1371/journal.pone.0178074

Park, M., Leahey, E., & Funk, R. J. (2023). Papers and patents are becoming less disruptive over time. Nature, 613(7942), 138–144. https://doi.org/10.1038/s41586-022-05543-x

Avital, M. (2024). Digital Transformation of Academic Publishing: A Call for the Decentralization and Democratization of Academic Journals. Journal of the Association for Information Systems, 25(1), 172–181. https://doi.org/10.17705/1jais.00873

McKiernan, E. C., Schimanski, L. A., Muñoz Nieves, C., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife, 8, e47338. https://doi.org/10.7554/eLife.47338

Fire, M., & Guestrin, C. (2019). Over-optimization of academic publishing metrics: Observing Goodhart’s Law in action. GigaScience, 8(6), giz053. https://doi.org/10.1093/gigascience/giz053

Drozdz, J. A., & Ladomery, M. R. (2024). The Peer Review Process: Past, Present, and Future. British Journal of Biomedical Science, 81, 12054. https://doi.org/10.3389/bjbs.2024.12054

Brembs, B. (2018). Prestigious Science Journals Struggle to Reach Even Average Reliability. Frontiers in Human Neuroscience, 12, 37. https://doi.org/10.3389/fnhum.2018.00037

Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00291

Elmore, S. A., & Weston, E. H. (2020). Predatory Journals: What They Are and How to Avoid Them. Toxicologic Pathology, 48(4), 607–610. https://doi.org/10.1177/0192623320920209

Repiso, R., Segarra‐Saavedra, J., Hidalgo‐Marí, T., & Tur‐Viñes, V. (2021). The prevalence and impact of special issues in communications journals 2015–2019. Learned Publishing, 34(4), 593–601. https://doi.org/10.1002/leap.1406

Schimanski, L. A., & Alperin, J. P. (2018). The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future. F1000Research, 7, 1605. https://doi.org/10.12688/f1000research.16493.1

Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA, 330(8), 702. https://doi.org/10.1001/jama.2023.12500

Horta, H., & Jung, J. (2024). The crisis of peer review: Part of the evolution of science. Higher Education Quarterly, e12511. https://doi.org/10.1111/hequ.12511

Superchi, C., González, J. A., Solà, I., Cobo, E., Hren, D., & Boutron, I. (2019). Tools used to assess the quality of peer review reports: A methodological systematic review. BMC Medical Research Methodology, 19(1), 48. https://doi.org/10.1186/s12874-019-0688-x

Bergstrom, T., Rieger, O. Y., & Schonfeld, R. C. (2024). The Second Digital Transformation of Scholarly Publishing: Strategic Context and Shared Infrastructure. Ithaka S+R. https://doi.org/10.18665/sr.320210

Alperin, J. P., Schimanski, L. A., La, M., Niles, M. T., & McKiernan, E. C. (2022). The Value of Data and Other Non-traditional Scholarly Outputs in Academic Review, Promotion, and Tenure in Canada and the United States. In A. L. Berez-Kroeker, B. McDonnell, E. Koller, & L. B. Collister (Eds.), The Open Handbook of Linguistic Data Management (pp. 171–182). The MIT Press. https://doi.org/10.7551/mitpress/12200.003.0017

Avissar-Whiting, M., Belliard, F., Bertozzi, S. M., Brand, A., Brown, K., Clément-Stoneham, G., Dawson, S., Dey, G., Ecer, D., Edmunds, S. C., Farley, A., Fischer, T. D., Franko, M., Fraser, J. S., Funk, K., Ganier, C., Harrison, M., Hatch, A., Hazlett, H., … Williams, M. (2024). Recommendations for accelerating open preprint peer review to improve the culture of science. PLOS Biology, 22(2), e3002502. https://doi.org/10.1371/journal.pbio.3002502

Bucchi, M., & Trench, B. (2021). Rethinking science communication as the social conversation around science. Journal of Science Communication, 20(03), Y01. https://doi.org/10.22323/2.20030401

Dawson, D. (DeDe), Morales, E., McKiernan, E. C., Schimanski, L. A., Niles, M. T., & Alperin, J. P. (2022). The role of collegiality in academic review, promotion, and tenure. PLOS ONE, 17(4), e0265506. https://doi.org/10.1371/journal.pone.0265506

Copyright © 2024 Christopher Steven Marcum. Distributed under the terms of the Creative Commons Attribution 4.0 License.
