AI and research

In the context of this meeting, we are investigating the explainability of Large Language Models (LLMs) and the possibilities of hybridizing these models with deductive approaches grounded in an explicit knowledge framework. We therefore aim to explore, from a theoretical point of view (with some practical case studies), the possible relationships between connectionist and symbolic approaches to knowledge in the context of research.

We have identified two analytical perspectives, not entirely isolated, that can contribute to the exploration of this type of relationship. Both perspectives are based on the idea that, for algorithmic methods to contribute to research, it is essential to reintroduce forms of structured knowledge. We briefly describe both perspectives below, as well as some of the application areas and methodologies associated with them.

Understanding LLMs from results

Focusing in particular on alignment issues, this approach seeks to interpret and understand the results generated by LLMs. This questioning can, in turn, be approached from at least two angles:

These results-oriented approaches enable the design of operational systems that could contribute to models aimed at maintaining a degree of control over knowledge production. However, it is essential to question the emphasis placed on the results produced by LLMs.

LLMs beyond results-oriented perspectives

At the heart of this second approach lies the following question: is fundamental research always a question of performance and results? How can we approach AI in a framework other than that of model performance, and return to the fundamental question of meaning?

By focusing on the role and objectives of research and science, this approach allows us to move away from the increasingly popular rhetoric that pits humans against machines. For example, instead of asking how artificial intelligence algorithms match or surpass humans in terms of creativity and originality, we could ask what these concepts mean and how they are modeled (or could be modeled) within artificial intelligence algorithms.

We recognize that questioning the meaning of inductive algorithms could open up rich and interesting avenues of reflection, including at the application level. The experiments and analyses we will be undertaking with the Stylo semantic editor could fit into this framework.

Resources

The following introductory readings provide an initial overview of Gastaldi’s research and the type of discussion we could have together: