Don't feed the metrics machine!

In the Open Scientist Handbook, I argue that open science supports anti-rivalrous science collaborations in which most metrics have little or even negative value. I would like to share some of those arguments here.

“Making a better, more sustainable institution, in other words, requires us to move away from quantified metrics for meritorious production — in fact to step off the Fordist production line that forever asks us to do more — and instead to think in a humane fashion about ways that we can do better. Better often in fact requires slowing down, talking with our colleagues and our communities, and most importantly, listening to what others have to say. Better requires engagement, connection, sharing, in ways that more nearly always encourages us to rush past. Turning from more to better goes against some of the ingrained ways of working we’ve adopted, but that turn can help us access the pleasures — indeed, the joys — of our work that life on the production line has required us to push aside” (Fitzpatrick, April 26, 2020; Accessed September 2, 2020).

Open scientists take full advantage of emergent technologies (e.g., the internet, cloud computing, online networking) in order to build shared research repositories and platforms that provide abundant, mineable data, reproducible experiments, lateral learning for new methods, rapid research publication with rigorous review, streamlined and fair funding opportunities, and world-wide knowledge access with equal participation.

Your future is better with open science. Why? Open science offers new value to your work and your science life. Open science multiplies your research’s impact. When you add your research objects (from ideas to findings) to open repositories, these can be rapidly discovered, evaluated, shared, and applauded, all without being subjected to arbitrary metrics (e.g., journal impact factors) that institutions have gamified for their own purposes rather than for the value of your work. Because open science is grounded in Demand Sharing and Fierce Equality, you can also pull resources from the common pool to accelerate your work and discover new collaborators across the planet.

Open science takes us beyond the games that metrics promote

This vision of an open, global science endeavor confronts a range of entrenched institutional practices and perverse incentives: a toxic culture that has hobbled science for decades. Open scientists need the know-how and tools to tear down these practices and to interrogate these incentives, in order to replace them.

Institutional prestige is a profound drag on the potential for networked science. If your administration has a plan to “win” the college ratings game, this plan will only make doing science harder and being a scientist less rewarding. Playing finite games of chasing arbitrary metrics or “prestige” drags scientists away from the infinite play of actually doing science.

Goldman and Gabriel (2005) coined the phrase “innovation happens elsewhere” to capture the value of open-source software communities. In the academy, it doesn’t matter whether you are at Oxford or in Oxnard: almost everything you need to know to make the next step in your research is also being considered, at this moment, somewhere else. In a world where science happens elsewhere, the first thing your campus can do is become more attached to all those academy “elsewheres” that can amplify your in-house efforts.

The best thing your campus can do is to become that really attractive “elsewhere” to which others want to attach themselves. This means opening up to demand-sharing. Once science gets funded across a broad spectrum of institutions and across the globe, online collaboratives will form, work together, and create new knowledge without regard to game-able institutional rankings. The entire academy will become more nimble, creativity will quicken, and good work will find its rewards outside of current reputation schemes.

The future of open science will be much more distributed and democratic. Open scientists work wherever their research and teaching acumen is needed and supported. The perverse lure of so-called famous universities disappears as great work emerges from highly diverse teams in hundreds of institutions and locales across the planet and across the internet.

Universities cannot be managed through any particular set of metrics. Open science looks to break the “tyranny of metrics” (Muller 2018) by expanding descriptions of the university’s value proposition (see also Newfield 2016, and the Scarcity chapter of the handbook) to include the broad range of public goods and societal value created from the provident bounty of the academy’s shared open repositories.

The only metric that works…

“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions” (DORA; accessed April 9, 2020).

As Cameron Neylon said at the metrics breakout of the “Beyond the PDF” conference some years ago, “reuse is THE metric.” Reuse reveals and confirms the advantage that open sharing has over current market-based practices. Reuse validates the work of the scientist who contributed to the research ecosystem. Reuse captures more of the inherent value of the original discovery and accelerates knowledge growth.

Open science is an accelerator for the reuse of scientific knowledge and data. Its network effects help make reuse possible and, in time, inevitable. The main reasons your work gets reused more than others’ are that you did great science (followed sound methodology, maintained your data, etc.) and that you made the results reusable. Nobody really cares which journal or preprint service opened this research to community use; it will be shared and used on its own merits.

Metrics fuel bad behavior

Hyper-competitiveness at the institutional and personal level “crowds out” (Binswanger 2014) science’s intrinsic motivations and promotes quantity over quality, “bad science” (Smaldino and McElreath 2016), and marketable formalism over research needs. Worse, it crowds out scientists who refuse to play the excellence game required by the gamification of reputation in the academy. The “priority rule” of discovery in the academy is really just a method to gamify episodes in the lives of research teams, rewarding individuals at the expense of the benefits that discovery holds for science and the world.

Foray (2004) argues against the claim that personal recognition is required for scientists to rapidly release research results, or that personal ownership of knowledge contributes to knowledge sharing. “On the contrary, the tournament contexts created by the priority rule, as well as the size of related rewards, tend to encourage bad conduct” (ibid.). Under these circumstances, the production of science’s public goods becomes suboptimal, feeding reputation metrics instead of the benefits of open demand sharing across the academy.

Competition also feeds the Matthew effect:

“[I]ntense competition also leads to ‘the Matthew effect’…this competition and these rewards reduce creativity; encourage gamesmanship (and concomitant defensive conservatism on the part of review panels) in granting competitions; create a bias towards ostensibly novel (though largely non-disruptive), positive, and even inflated results on the part of authors and editors; and they discourage the pursuit and publication of replication studies, even when these call into serious question important results in the field” (Moore et al. 2017).

Science loses on all scores. For science, hyper-competitiveness is a race to the bottom that so many institutions are fighting to win, using arbitrary metrics as goals. “Competitiveness has therefore become a priority for universities and their main goal is to perform as highly as possible in measurable indicators which play an important role in these artificially staged competitions” (Binswanger 2014).

Look instead for the internal goods of science

The notion that a university can increase managerial control over research practices through performance-based funding schemes, and so capture year-by-year productivity gains, has been tried in various places around the globe. But the practice of top-down, goal-driven productivity management translates poorly from the commercial world (where it is also failing) into the academy. Metrics applied in this manner are highly susceptible to Goodhart’s law: when a measure becomes a target, it ceases to be a good measure, and the gaming begins.
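To make that Goodhart dynamic concrete, here is a toy sketch in the spirit of the selection model in Smaldino and McElreath (2016); the numbers, the fixed-effort assumption, and every name in it are illustrative assumptions of mine, not a model from the handbook. Labs split a fixed effort budget across papers, a metric-driven funder keeps whichever labs publish the most, and the surviving strategies propagate.

// Toy illustration (TypeScript) of Goodhart's law in performance-based funding.
// Assumption: each lab has a fixed effort budget per year, so rigor per paper
// falls as paper counts rise. The funder selects on paper count alone.

interface Lab { papersPerYear: number }

const TOTAL_EFFORT = 100; // arbitrary effort units per lab per year
const rigorPerPaper = (lab: Lab): number => TOTAL_EFFORT / lab.papersPerYear;

// Labs begin with a range of publication strategies.
let labs: Lab[] = [2, 4, 8, 16, 32].map((p) => ({ papersPerYear: p }));

for (let year = 0; year < 5; year++) {
  // The metric-driven funder keeps the top half by paper count...
  labs.sort((a, b) => b.papersPerYear - a.papersPerYear);
  const survivors = labs.slice(0, Math.ceil(labs.length / 2));
  // ...and the survivors' strategies propagate to new labs.
  labs = survivors.flatMap((lab) => [lab, { papersPerYear: lab.papersPerYear }]);
}

const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
console.log("average papers per year:", mean(labs.map((l) => l.papersPerYear)));
console.log("average rigor per paper:", mean(labs.map(rigorPerPaper)));
// The measured number (papers) rises each year; the quality it was meant to
// proxy for (rigor per paper) falls. Nothing about the science improved.

Nothing in this toy world gets better except the number the funder watches, which is the whole point of the objection.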

The best incentives for better science are the goods internal to the professional practices of doing science. Governance practices that open up more avenues for sharing and knowing anchor science inside its own praxis. There is an authentic “meritocracy” here, not the artificial sort claimed by prestigious organizations: a fluid, dynamic, emergent, shared sense of where new knowing is being forged.

In the interconnected intellectual rooms of online science communities, open shared resources, active global collaborations, and diverse team-building accelerate knowing and discovery, assembling the shared intelligence needed to solve wicked problems. No organizational strategic plan, business model, or tactical hiring can match open innovation collaboratives that push the boundaries and change the rules of their infinite play together. The merit belongs to the team, and to the work. What the scientists get is the joy and wonder of a lifetime of science play.

Replacing metric-game awards with universal badges

The notion of using open digital badges to acknowledge specific practices and learning achievements has been circulating in the open science endeavor for more than a decade. Over these years, badges have remained a perennial “near future” implementation of how open science might recognize and reward practices and skills.

Instead of using game-able metrics that rank individuals as though they were in a race, badges can promote active learning, current standards, professional development, and research quality assurance. The transition from arbitrarily scarce reputation markers (impact metrics, prizes, awards) to universally available recognition markers also helps to level the ground on which careers can be built across the global republic of science. Every scientist willing to take the time and effort to earn a badge for achieving some level of, say, research-data reusability or graduate-student mentorship can then show off this badge to the world. Every student or scientist who acquires a specific skill (R programming, software reusability, statistics, etc.) can add a new badge to their CV. Perhaps the “Fellows” of learned societies will one day include everyone who has acquired the necessary open badges. A sketch of what such a badge might look like as a portable record follows below.
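Here is a minimal sketch of a badge as a self-describing, verifiable record, loosely modeled on the open-community Open Badges idea; the field names, the issueBadge helper, and all URLs are illustrative assumptions rather than any established handbook tooling or standard API.

// A minimal sketch of an open digital badge as a portable record (TypeScript).
// Loosely modeled on the Open Badges idea; every name here is illustrative.

interface BadgeClass {
  name: string;        // e.g., "Research-Data Reusability"
  description: string; // what the badge attests to
  criteria: string;    // URL of the open, public criteria for earning it
  issuer: string;      // URL of the issuing community or learned society
}

interface BadgeAssertion {
  badge: BadgeClass;
  recipient: string;   // e.g., the scientist's ORCID iD
  issuedOn: string;    // ISO 8601 timestamp
  evidence: string[];  // URLs of the open research objects that earned it
}

// Hypothetical helper: any community can issue badges; nobody is ranked.
function issueBadge(badge: BadgeClass, recipient: string, evidence: string[]): BadgeAssertion {
  return { badge, recipient, issuedOn: new Date().toISOString(), evidence };
}

// Usage: a badge that a scientist can display on a CV or profile page.
const assertion = issueBadge(
  {
    name: "Research-Data Reusability",
    description: "Published a dataset that meets community reusability standards.",
    criteria: "https://example.org/badges/data-reusability/criteria",
    issuer: "https://example.org/open-science-community",
  },
  "https://orcid.org/0000-0000-0000-0000", // placeholder recipient iD
  ["https://example.org/datasets/my-dataset"],
);

console.log(JSON.stringify(assertion, null, 2));

Because the criteria are public URLs and the evidence points at open research objects, anyone can check the badge against the work itself: the same reuse-over-rank logic argued for above.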

References

Binswanger, Mathias. 2014. “Excellence by Nonsense: The Competition for Publications in Modern Science.” In Opening Science, edited by Sönke Bartling and Sascha Friesike. Cham: Springer.

DORA. “San Francisco Declaration on Research Assessment.” https://sfdora.org/read/. Accessed April 9, 2020.

Fitzpatrick, Kathleen. 2020. Blog post of April 26, 2020. https://kfitz.info/. Accessed September 2, 2020.

Foray, Dominique. 2004. The Economics of Knowledge. Cambridge, MA: MIT Press.

Goldman, Ron, and Richard P. Gabriel. 2005. Innovation Happens Elsewhere: Open Source as Business Strategy. San Francisco: Morgan Kaufmann.

Moore, Samuel, Cameron Neylon, Martin Paul Eve, Daniel Paul O’Donnell, and Damian Pattinson. 2017. “‘Excellence R Us’: University Research and the Fetishisation of Excellence.” Palgrave Communications 3: 16105.

Muller, Jerry Z. 2018. The Tyranny of Metrics. Princeton, NJ: Princeton University Press.

Newfield, Christopher. 2016. The Great Mistake: How We Wrecked Public Universities and How We Can Fix Them. Baltimore: Johns Hopkins University Press.

Smaldino, Paul E., and Richard McElreath. 2016. “The Natural Selection of Bad Science.” Royal Society Open Science 3 (9): 160384.

Copyright © 2022 Bruce Caron. Distributed under the terms of the Creative Commons Attribution 4.0 License.
