Feedback and Synthesis from the "AI and Research" Workshop Series
Six online workshops on AI were held during the Winter 2025 term. The speakers—Frédéric Clavert, Nicolas Sauret, Samuel Szoniecky, Gérald Kembellec, Joaquine Barbet, Servanne Monjour, and Lise Verlaet—each offered reflections on the automation of research and scholarly publishing practices. A seventh and final workshop took place during the Circé network general assembly on May 8, 2025. It provided an opportunity to revisit the discussions from all previous workshops.
Frédéric Clavert presented the automatic code explanation feature used in JDH articles.
Nicolas Sauret and Samuel Szoniecky offered an ethical reflection on the use of generative AI, taking the Revue3.0 project as a starting point to draft a framework document.
Gérald Kembellec and Joaquine Barbet presented the outcomes of a project on automating the production of abstracts for scholarly journals.
Gérald Kembellec, this time alone, initiated a discussion on the concept of the digital palimpsest for the semantic structuring of digital data.
Servanne Monjour presented a project on AI-assisted literature review carried out in collaboration with the journal Humanités numériques.
Lise Verlaet discussed the use of a generative AI scientific editing assistant for the NumeRev platform, as well as the preliminary results of a questionnaire sent by master’s students to NumeRev users regarding their AI usage.
Alexia Schneider and Marcello Vitali-Rosati reflected on automation within the editorial workflow of scholarly journals, using the case study of automatically generating lists of peer reviewers.
Participants and speakers included journal editors in the humanities and social sciences (HSS), as well as researchers affiliated with Revue3.0 who are interested in the topic.
Each workshop followed a one-hour format: approximately 15 minutes of presentation followed by 45 minutes of discussion.
This document offers a synthesis of the key issues raised throughout the discussions.
Issues Raised by the Workshops
What tasks can be automated?
In the field of research and scholarly publishing, many technical or repetitive tasks appear eligible for automation. These include manuscript formatting according to editorial standards, plagiarism detection, automatic indexing, abstract or keyword generation, metadata extraction, and even the pre-selection of articles based on defined criteria. In research, processes such as literature review, corpus classification, statistical analysis, and assisted translation can also benefit from automation. While these uses are presumed to increase efficiency, they also raise questions about the desired degree of human intervention in activities historically assessed for their rigor and critical contextualization.
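To make one of these tasks concrete, the sketch below illustrates automatic keyword extraction, one of the routine operations mentioned above, using simple TF-IDF weighting in Python. The toy corpus, the scikit-learn dependency, and the choice of three keywords per document are illustrative assumptions; none of the projects presented in the workshops is tied to this particular approach.

```python
# Illustrative sketch (not a tool from the workshops): keyword extraction
# with TF-IDF weighting, using scikit-learn. The corpus is a toy example.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Generative AI is reshaping scholarly publishing workflows.",
    "Peer review remains a cornerstone of scientific legitimacy.",
    "Metadata extraction and automatic indexing reduce editorial workload.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf_matrix = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()

# For each document, keep the three terms with the highest TF-IDF weight.
for doc_index, weights in enumerate(tfidf_matrix.toarray()):
    ranked = sorted(zip(terms, weights), key=lambda pair: pair[1], reverse=True)
    keywords = [term for term, weight in ranked[:3] if weight > 0]
    print(f"Document {doc_index}: {keywords}")
```

The projects discussed in the workshops rely on more sophisticated language models, but even this minimal example shows where human judgment remains necessary: choosing the corpus, the weighting scheme, and the number of keywords to retain.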
What tasks do we want to automate?
The rapid adoption of generative AI tools across a wide range of tasks suggests that almost everything could be automated, so the real question becomes what we want, or are willing, to delegate to machines. The desire to automate is shaped less by technical capacity than by the values the scholarly community aims to uphold. The issue is not merely one of optimizing production workflows, an optimization that reinforces the paradigm of hyper-productivity, but of determining which tasks are compatible with such delegation without compromising scientific quality or ethics.
The fact that tasks like copyediting or grammar checking are more readily delegated—compared to the reluctance around automating responsibilities like article selection or content generation—reveals a value system potentially unsettled by the capabilities of generative AI. The tension between efficiency and responsibility is now central to the debate on AI usage. The search for an acceptable compromise between savings in time and resources and the delegation of high-stakes scholarly tasks amounts, in effect, to a redefinition of the core concepts of scientific legitimacy.
What is the current standard? ChatGPT?
The emergence of ChatGPT marked a turning point in the mass diffusion of generative AI tools accessible to both the general public and researchers. Its use has quickly spread to tasks like reformulation, abstract generation, and idea development. However, its status remains ambiguous: for some, it is a useful tool; for others, a potential threat. It raises issues related to intellectual property, the reliability of AI-generated content, and the invisibility of sources. ChatGPT has become a focal point for debates on ethical and methodological use of AI in research, making visible practices that are often discreet or undeclared.
Currently, there is no unified standard for the use or disclosure of AI in scholarly contexts. Practices vary significantly depending on discipline, journal, and institutional setting. Some publishing platforms already integrate tools for writing assistance, automated checking, or bibliographic recommendation—often without transparency about how they work. In research, AI standards are still emerging, particularly concerning the traceability of tools used, the evaluation of output quality, and the integrity of data generated by probabilistic models.
What are the new "discreet practices" (Muller and Clavert, 2021)?
Echoing what Muller and Clavert describe as "discreet practices," generative AI is often used behind the scenes: paragraph reformulation, writing support in foreign languages, abstract or email drafting for authors or reviewers. These uses are not always explicitly acknowledged, but they are reshaping the everyday gestures of research and publishing. They also alter how labor is distributed and how skills are perceived: what does it mean to “write well” if a tool can generate fluent text from a basic prompt?
Standardization of practices and adaptation to a generic tool?
The adoption of tools like ChatGPT or other general-purpose AIs may lead to the standardization of scholarly content and formats. By conforming to the logic of a tool built on dominant linguistic models, researchers risk adapting their writing and thinking to match the expectations and forms favored by these systems. This raises concerns about the potential homogenization of scholarly practices at the expense of epistemological, linguistic, and methodological diversity. What emerges is not so much a general artificial intelligence as a generalized use of AI. Behind the generic tool lies a subtle cultural alignment of research practices with the standards of a model optimized for linguistic fluency rather than critical rigor.
Is it just a matter of finding better or more ethical alternatives to ChatGPT?
This issue goes far beyond the search for "better" tools or the comparison of existing models. It touches on societal and science-policy choices: which actors do we want to support (local startups, public consortiums, global private companies)? What ethical and transparency standards are required for systems used in scholarly evaluation, publishing, or writing? Finding alternatives also means examining the economic models and sociotechnical infrastructures underlying AI in our fields. The aim is not simply to benchmark performance, but to assess the broader impact of these tools on the research ecosystem.
Generative AI challenges some of the foundations of research and publishing
Generative AI usage puts several pillars of scientific legitimacy under pressure. For example, if a bibliography can be generated automatically or artificially bloated, it loses its status as evidence of prior, situated intellectual work. It can no longer serve as an indicator of the author’s command of disciplinary norms. The automation of bibliographic, writing, or reviewing practices may change how the validity of scientific work is perceived and may reinforce researchers’ disengagement from these foundations of scientific rigor.
What do we do with the supposed time and resource savings?
One of the main arguments in favor of AI is the promise of saving time or human resources. But the time freed is rarely reinvested in improving the quality of research work—it is more often absorbed by increased pressure to produce more. This fuels a logic of continuous productivity, where process acceleration does not lead to a renewed appreciation for attention, slowness, or rigor. The risk is that AI becomes a tool for thinking faster, rather than thinking better—undermining scientific reflexivity. The challenge is to transform this gain in resources into an opportunity to rethink our goals, pace, and standards in knowledge production.
Conclusion of the Workshop Series
The discussions revealed a growing interest among participants in drafting practical and even ethical guides for various AI tools, highlighting the need to formalize a typology of AI systems and associated practices in scientific research and scholarly publishing.
This points to a privileged role for the Digital Humanities community, which may serve as a key interdisciplinary mediator in addressing this need.
Bibliography
Muller, Caroline, and Frédéric Clavert. 2021. “De la poussière à la lumière bleue : émotions, récits, gestes de l’archive à l’ère numérique.” Signata, no. 12 (May). https://doi.org/10.4000/signata.3136.