Report on the workshop: Peer Review Practices in Scholarly Journals

Workshop Date: November 20, 2025
Location: Université de Montréal
Facilitator: Adrien Savard-Arseneault (from the journal Sens Public)
Guest Contributor: Lucia Cespedes (Réseau Circé)
Report Date: December 1, 2025

Summary

This workshop, facilitated by Adrien Savard-Arseneault, was held on November 20, 2025, at the Université de Montréal as part of the plenary meeting of the "Revue 3.0" project. Focusing on the issues of open peer review, it combined a theoretical presentation based on a report co-authored with the Circé network, a practical case study (Sens Public), and a collective discussion with the researchers present. The exchanges identified persistent limitations of the blind model, explored various modalities of openness, and confronted these concepts with the operational reality of journals, highlighting both the transformative potential and the practical challenges of these new practices.

Workshop Objectives:

  1. Present a theoretical framework and a survey of open review practices, based on collaborative research.

  2. Share a concrete case study (the journal Sens Public) to illustrate the implementation of these practices and lessons learned.

  3. Engage in a debate with the research community of the Revue 3.0 project on the definitions of a "good" review and the possible futures of academic assessment.

Summary of Presentations

Theoretical Framework: Challenges of the Blind Model and Alternative Models

The facilitator gave a concise presentation of the conclusions of the Réseau Circé report, which examines open review practices through the case of the journal Sens Public.

Limitations of Blind Reviews

  1. Uneven Quality of Reviews
    Review quality under this system is variable and sometimes poor. Reviews lack consistency and may even contradict one another.

  2. Lack of Reviewer Accountability
    Reviewers are not "accountable" for their reviews when reviewing blindly, and are therefore less incentivized to defend and substantiate their points.

  3. Lack of Fluid Dialogue Between the Parties
    Anonymity can break the conversational flow between authors, reviewers, and editors. Many important conversations are thus lost, and numerous comments never reach the author.

  4. Practical Problems
    Anonymization is often ineffective in specialized fields, and review timelines are extended by an increasing number of refusals.

Reflection on Open Review Practices

To define open review, the workshop facilitator referred to the terminology of Ross-Hellauer (Ross-Hellauer 2017), which outlines its main modalities. Open review has the merit of prompting journals to reflect on their practices, without offering a definitive solution. The Circé network report chose to focus on the first four of these modalities:

  1. Open Identities: Allows for valuing the work of reviewing, making conflicts of interest visible, and increasing individual reviewer accountability. However, this practice can lead to less critical reviews (when reviewers' identities are disclosed, reviews are observed to be "softer") and raises questions about power dynamics.

  2. Open Reports: A broadly accepted modality ensuring transparency of the review process. It ensures comments are not lost and also serves a pedagogical function, making the implicit norms of review systems visible. However, it is time-consuming and requires suitable platforms.

  3. Open Participation: Involving a wider audience in the review promotes inclusivity and the collective construction of knowledge. However, it raises questions about the relevance of contributions and the definition of a "peer" community.

  4. Open Interaction: This modality transforms the review into a collective and conversational process. It is noted that greater reviewer involvement generates more precise and relevant reports. However, it requires a significant investment of time and a clear framework to prevent disputes and conflicts.

Case Study: Feedback from the Journal Sens Public

Chosen Modality: Open and public annotation of texts via the online editing platform Stylo (using Hypothes.is). Interactions and comments are visible to all participants.

Key Learnings:

Quantitative Data (2024): Out of 70 solicitations, 65 received a response, but only 1 in 3 solicitations resulted in a completed review. The reviewers' origin had no bearing on whether they accepted or declined. Lack of time remained the main reason for refusal; other reviewers cited a lack of expertise in the subject or a lack of interest in the article's topic.

Discussion

The discussion, moderated by the facilitator, brought forth the following reflections from the participants:

Redefinition of Authority and the Role of the Editor (Servanne Monjour, Bertrand Gervais): The editor must assume a role of diplomat and guarantor of the process, especially in the face of hostile reviews or when tools do not support new workflows. Editorial work, essential but invisible, remains undervalued by institutions.

Limits of the Current Model (Bertrand Gervais): The review model based on the researcher’s "good faith" as a volunteer is undermined by the general acceleration of the academic work pace. A minority of reviewers shoulder a disproportionate burden, which is not sustainable.

Review as Collective Construction (Nicolas Sauret): Opening the review to practitioners is a way to shift and enrich scientific authority.

What is a "Good" Review? (Marcello Vitali-Rosati): A good review must go beyond simple correction to engage in rigorous scientific dialogue. It precisely identifies weaknesses and justifies its arguments.

Conclusions and Perspectives

The workshop confirmed that open review is not a one-size-fits-all solution but a set of experiments aiming to address the current system’s crises of quality, timeliness, and equity.

Avenues for Reflection

  1. How can editorial teams be equipped and trained to manage these new processes?

  2. What indicators and analysis methods can be developed to assess more objectively the impact of different review models on the scientific quality of the work produced?

  3. How can the work of reviewing and editing be institutionally valued in scientific research?