We often discuss how outdated the way researchers are assessed is: rewarding the number of publications and relying on obsolete metrics that fail to capture the real value of scientific work. What we talk about much less is another factor that deeply reinforces this problem: the way research itself is published.

Why Does Scientific Publishing Still Look Like It Did Decades Ago?

Why do journals still publish static PDFs? A simple answer is: because it is cheap and extremely profitable.

From the publishers’ perspective, the PDF-based paper is an ideal product. It is standardized, scalable, and requires minimal innovation. The infrastructure has been amortized for decades, and the marginal cost of publishing yet another paper is very low, resulting in extremely high profit margins.

And considering where the money comes from (authors and institutions paying substantial APCs, sometimes several thousand dollars per paper), it becomes clear why the major publishers (no need to name them; you already know exactly who they are) maintain this model. Authors and their institutions are left with little real choice: they must pay to ensure that years of work do not remain locked away in a drawer, and to stay competitive for funding, positions, and better working conditions. What makes this model even more striking is that the peer-review process is mostly unpaid. Reviewers, who are themselves researchers, volunteer their time and expertise to evaluate manuscripts, improve their quality, and uphold scientific standards. Editors are often academics as well. The intellectual labor that sustains the system is largely provided for free: authors pay to publish, while reviewers are not compensated. Simple logic!

Under these conditions, there is little incentive for publishers to fundamentally rethink how research is communicated. Moving away from static PDFs toward more modular, interactive, or living research outputs would require structural changes and potentially disrupt a business model that currently works very well for them. To give just one obvious example: in the Web 4.0 era, it is hard to justify why scientific articles still cannot natively support interactive features like data visualizations, simply because we remain tied to an outdated, static PDF format.

Modern Research Produces Far More Than Articles

Code, datasets, models, protocols, benchmarks, negative results, software tools, living documents, and shared infrastructures. Much of the work that enables scientific progress never fits neatly into a traditional paper and is therefore undervalued or ignored. By centering the entire system around papers, we create predictable consequences:

  • Prioritizing quantity over substance,
  • Delaying the dissemination of knowledge,
  • Discouraging openness and collaboration,
  • And systematically overlooking essential contributions that do not translate into “publishable units.”
[Image: research output types shown as colored labels: Code, Datasets, Models, Protocols, Benchmarks, Negative Results, Software Tools, Living Documents, and Shared Infrastructures]
Some of the types of research outputs

Innovation in publishing is not technologically difficult.
It is just economically inconvenient for publishers.

As long as prestige, evaluation, and funding remain tied to journals and their traditional formats, publishers have little reason to challenge the system that secures their revenue.

Open Science Is the Solution

If the problem is structural, the solution must be structural as well.

A fairer and more sustainable research ecosystem requires breaking ties with organizations that extract value from science without proportionally contributing to it. As long as the infrastructure of research (publication, evaluation, and discovery) is controlled by profit-driven intermediaries, meaningful change will remain limited.

Encouragingly, some institutions are starting to take concrete steps in this direction. Recent initiatives such as the decision by CNRS (2025) to break free from Web of Science set an excellent precedent for other research institutions to follow. These moves matter even more because open alternatives already exist. Platforms like OpenAlex show that open, community-driven bibliographic infrastructures are both possible and effective.
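This openness is not abstract: OpenAlex exposes a free, documented REST API that anyone can query without an account or API key. Below is a minimal sketch, using only Python's standard library, of building a query against its public /works endpoint; the specific filters shown (open-access works from a given year) are illustrative examples of the documented filter syntax, not a prescribed workflow.

```python
# Minimal sketch: querying the public OpenAlex API for works metadata.
# The endpoint, filter syntax, and per-page parameter follow the public
# OpenAlex documentation; no authentication is required.
import json
import urllib.parse
import urllib.request


def build_works_url(filter_expr: str, per_page: int = 5) -> str:
    """Build an OpenAlex /works query URL from a filter expression."""
    params = urllib.parse.urlencode({"filter": filter_expr, "per-page": per_page})
    return f"https://api.openalex.org/works?{params}"


# Example: recent open-access works (illustrative filter values).
url = build_works_url("is_oa:true,publication_year:2024")
print(url)

# The network call is optional here; skip gracefully when offline.
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    for work in data["results"]:
        print(work["display_name"])
except OSError:
    pass  # no network access; the URL above can still be inspected
```

Because the data and the API are open, the same query works for a single researcher's script, an institutional dashboard, or a replacement for a commercial discovery tool.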

Detaching research assessment and discovery from commercial interests is not just an ideological stance; it is a practical requirement for progress. Open infrastructures enable:

  • Fairer evaluation practices,
  • Reproducibility and transparency,
  • Broader access to knowledge,
  • And recognition of diverse research outputs beyond papers.

Open Science is not simply about open access to PDFs.
It is about reclaiming control of the research ecosystem itself.

Step by step, these changes can help build a research ecosystem that is more just, more efficient, and ultimately more aligned with the public good it is meant to serve. Hopefully, more organizations will join these initiatives and commit to free, open, and publicly accessible knowledge.