Minutes of the AI workshop of April 24, 2025 (Lise Verlaet, NumeRev portal)
Video of the workshop: https://api.nakala.fr/embed/10.34847/
Presentation by Lise Verlaet
Use of AI on the NumeRev platform: NumeRev is a scientific publishing platform but not a publisher per se; it provides editorial tools and support for journals. The project is hosted by the MSH Sud. It became sustainable once the developer’s position was made permanent. It hosts around 20 journals, half of which are newly created.
The following points are addressed:
- How is AI used to compensate for the lack of staff (particularly editorial assistants)?
- Feedback from a UX study conducted by MAVINUM Master’s students.
1. "Scientific editorial assistant" project using OpenAI’s ChatGPT 4.0.
Anthropomorphizing practice: use of a proper name and politeness conventions
Uploading brochures and information about NumeRev and the journals as needed.
Tasks performed:
- Adding abstracts in French and English
- Adding metadata: keywords
- Data on authors
- Bibliographic corrections
The abstracts and metadata are proposed to journal coordinators, who then contact authors for validation.
→ ChatGPT 4.0 performs very well on these tasks (a sketch of such a request is given at the end of this section). Bibliographic sources are handled correctly when they are well-known and widely recognized (though this limitation also applies to human editors).
Spelling correction for non-native speakers → the model begins hallucinating after 2–3 pages and tends to restructure or rewrite the text
Stylistic correction → moderately effective
HTML source code corrections → inconclusive results
Creating illustrations for journal issues to avoid copyright issues and limitations of manual design → results are highly relevant using DALL·E
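To make the abstract-and-keywords workflow concrete, here is a minimal sketch of what such a request could look like with the OpenAI Python client. The model name, prompt wording, and lengths are illustrative assumptions, not NumeRev’s actual configuration; as described above, the output would still go to coordinators and authors for validation.

```python
# Hypothetical sketch: draft a French/English abstract and keywords for an
# article via the OpenAI chat API. Model name, prompt, and lengths are
# assumptions for illustration, not NumeRev's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_abstracts_and_keywords(article_text: str) -> str:
    """Ask the model for a French abstract, an English abstract, and keywords."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an editorial assistant for a scholarly journal. "
                    "Preserve the author's style and do not add new claims."
                ),
            },
            {
                "role": "user",
                "content": (
                    "Write a ~150-word abstract in French, a ~150-word abstract "
                    "in English, and five keywords for the following article:\n\n"
                    + article_text
                ),
            },
        ],
    )
    return response.choices[0].message.content
```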
2. Feedback from the study by Master’s students with NumeRev users
Question: What AI features could help with working on a scientific article?
Answers:
- Translation
- Search for related articles
- Identification of related topics
- Concept explanations without changing the interface
- Article summarization
- Posters illustrating the methodology used in articles (providing a schematic view)
But there is concern over how AI selects content: e.g., for related articles, "if it’s just based on view counts, we’re not interested."
Question: What do you think about integrating a chatbot on the article page?
Answers: Clear expression of doubt regarding automation of certain tasks.
Possible explanation: The term "chatbot" may carry a negative connotation, associated with simplification or intellectual laziness.
Discussion
Frédéric Clavert: Prompts influence the performance of models. What prompting strategy is used?
Lise Verlaet: No model comparison was done in these experiments, and there is no access to the prompts written by the interns. The prompts are basic summary requests with an instruction to retain the author’s style. For spelling corrections the prompt is simple, yet the model hallucinates after 2–3 pages and starts adding bullet points and subheadings. One solution is to limit the length of the input text. It might be a prompt issue, but different input methods (copy/paste, file upload) did not change the hallucination behavior.
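One way to apply the input-length workaround mentioned here is to split a manuscript into roughly page-sized chunks, submit each chunk for correction separately, and reassemble the results. The sketch below only illustrates the splitting step; the words-per-page figure is an assumed heuristic, not a measured threshold.

```python
# Hypothetical sketch: split a manuscript into chunks of roughly one page
# so the model never receives the 2-3 pages after which hallucinations
# were observed. WORDS_PER_PAGE is an assumed heuristic.
WORDS_PER_PAGE = 400


def split_into_chunks(text: str, pages_per_chunk: int = 1) -> list[str]:
    """Split text on word boundaries into chunks of roughly N pages."""
    words = text.split()
    size = WORDS_PER_PAGE * pages_per_chunk
    return [" ".join(words[i : i + size]) for i in range(0, len(words), size)]

# Each chunk would be sent with the same short correction prompt and the
# corrected chunks reassembled in their original order.
```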
Marcello Vitali-Rosati: We don’t automate what can be automated, but what we want to automate. We tend to trivialize tasks we already devalue. Should the strategy in the humanities be to publish less?
On using commercial applications: Why use a chatbot interface? It’s possible to use an LLM in an expert way. For instance, for summarization: you can compute an average vector instead of generating text via chatbot. Feasibility is an issue, but even if the chatbot is “easier,” building more complex systems might better align with epistemological paradigms.
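To illustrate the “average vector” idea, an extractive summary can be built by embedding each sentence, averaging the vectors, and keeping the sentences closest to that centroid, with no generative step at all. The sketch below uses sentence-transformers purely as an example; the model choice and the naive sentence splitting are assumptions.

```python
# Hypothetical sketch of centroid-based extractive summarization:
# embed sentences, average the embeddings, keep the sentences that score
# highest against the average. Model choice and the naive split on "."
# are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer


def centroid_summary(text: str, n_sentences: int = 3) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model
    embeddings = model.encode(sentences, normalize_embeddings=True)
    centroid = embeddings.mean(axis=0)
    scores = embeddings @ centroid  # ranks sentences by similarity to the centroid
    top = sorted(np.argsort(scores)[-n_sentences:])  # keep document order
    return ". ".join(sentences[i] for i in top) + "."
```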
Lise Verlaet: Agrees with both points, but emphasizes two things:
NumeRev doesn’t necessarily have the resources. People working with and for NumeRev often don’t have deep knowledge of what a scientific journal is or of best publication practices (especially in LLA – Lettres, Langues, Arts, i.e. literature, languages and the arts). Funding requests have gone unanswered. The team is three people: one developer and two people contributing 10% of their time. For journals lacking best practices, AI makes their existence and indexing possible. This use is justified by external constraints and is not done with shame. Sometimes using AI also reveals that humans do the job better, and the work has to start somewhere (the hope is that authors will come to recognize the importance of writing their own abstracts).
They don’t have the technical expertise or manpower to work with expert models. For now, they have to make do with what they’ve got.
Suzanne Beth (Érudit) (in the chat): Can you tell us more about the journals you host and the links with Cairn / OpenEdition?
Lise Verlaet: The journals are diamond open access. The platform has a TEI-XML export system to help issue coordinators during evaluation. At the end of the chain, content is exported to the distribution platforms (Cairn and OpenEdition).
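For readers unfamiliar with TEI, a minimal export wraps an article’s metadata and body in a standard TEI P5 skeleton, as in the generic sketch below. This is not NumeRev’s actual export schema, and Cairn and OpenEdition each have their own ingestion requirements.

```python
# Hypothetical sketch of a minimal TEI P5 document for one article.
# Generic skeleton only; not NumeRev's actual export schema.
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)


def minimal_tei(title: str, author: str, body_text: str) -> str:
    def el(parent, tag, text=None):
        node = ET.SubElement(parent, f"{{{TEI_NS}}}{tag}")
        if text is not None:
            node.text = text
        return node

    tei = ET.Element(f"{{{TEI_NS}}}TEI")
    header = el(tei, "teiHeader")
    file_desc = el(header, "fileDesc")
    title_stmt = el(file_desc, "titleStmt")
    el(title_stmt, "title", title)
    el(title_stmt, "author", author)
    pub_stmt = el(file_desc, "publicationStmt")
    el(pub_stmt, "p", "Diamond open access journal")
    source_desc = el(file_desc, "sourceDesc")
    el(source_desc, "p", "Born-digital article")
    text = el(tei, "text")
    body = el(text, "body")
    el(body, "p", body_text)
    return ET.tostring(tei, encoding="unicode")
```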
Suzanne Beth (in the chat): Are these journals affiliated with researchers from your university?
Lise Verlaet: About half of them, yes.
Alexia Schneider: Is the reluctance really about the term "chatbot" or more generally about automation?
Lise Verlaet: That’s a hypothesis: the reluctance appears only when the term "chatbot" is used, but people are open when the same usage is described without that label. Master’s theses are in progress and will be shared. However, AI is a relatively minor topic in them.
Alexia Schneider: It could be interesting to see how these AI-related questions fit into broader journal reflections. AI won’t solve every issue, nor replace recurring editorial tasks.
Lise Verlaet: Maybe not all tasks, but for discoverability, generative AI is about to replace all traditional filtering systems.
Alexia Schneider: There are other ways to approach AI for discoverability. LLM-based discoverability might not be valuable if it just relies on article view counts. A current project explores corpora using semantic modeling in IEML. Link: https://revue30.org/projets/ieml-dans-stylo/
Elisabeth Guérard (in the chat): Alexia, I heard about the upcoming LUMEN project, a discovery assistant: https://operas-eu.org/projects/lumen
Lise Verlaet: There is consultation with teachers and researchers in the NumeRev community (not only journals).
Alexia Schneider: Participatory use of AI is important for understanding needs, determining how much can be delegated to AI, and considering actual user behavior. A chatbot interface can also just be a front-end layer over an expert model.
Frédéric Clavert: This is similar to how social media are perceived: there is widespread distrust and suspicion of AI, at least in the scientific community. This stems from media-driven uses of AI (and of social media) that don’t align with the values of the academic community.
Lise Verlaet: There’s an association with non-academic uses; the same people were hesitant when social media share buttons were added.
Marcello Vitali-Rosati: Fear of the “chainsaw effect”—the tool that does everything. We concentrate tasks into one tool that can do many things but wasn’t designed to do all of them. This leads to plausible errors. Time and budget constraints push us toward more uniform uses. Maybe the solution is to change the paradigm: stop chasing time savings and instead reclaim meaning and quality. Do less, but better.
Lise Verlaet: What comes up again and again is “saving time”—especially for small journals trying to improve their quality. These tools, like Word, allow us to go further in what we already know how to do. The question is how far we can go—and how far without human oversight.