CEA-List@EvalLLM2024: prompter un très grand modèle de langue ou affiner un plus petit?
August 26, 2024

The EvalLLM2024 challenge evaluates few-shot approaches to information extraction in French. Our contribution tests two approaches: the first places the available annotated data in the prompt of a very large language model (in-context learning), while the second fine-tunes a generic named entity recognition model (GLiNER) on that annotated data. Our experiments show that the second approach obtains the best results, especially when it is enriched with a data augmentation step that uses the annotation guidelines and LLMs to generate synthetic examples.
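As a rough illustration of the second approach, the sketch below applies a GLiNER model to a French sentence with custom entity labels via the open-source `gliner` package. The checkpoint name and the label set are placeholders chosen for the example, not the configuration used in the challenge submission.

```python
# Minimal sketch (assumption: the `gliner` Python package is installed;
# the model checkpoint and labels are illustrative, not the challenge setup).
from gliner import GLiNER

# Load a multilingual GLiNER checkpoint (placeholder name).
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

text = "Un foyer de grippe aviaire H5N1 a été détecté dans un élevage près de Toulouse."
labels = ["maladie", "agent pathogène", "localisation"]  # example label set only

# GLiNER matches candidate spans against the natural-language labels,
# which is what makes few-shot adaptation (and fine-tuning) practical.
entities = model.predict_entities(text, labels, threshold=0.5)
for ent in entities:
    print(ent["text"], "->", ent["label"])
```

Fine-tuning such a model on the challenge's annotated (and synthetically augmented) examples follows the same interface, with the label set fixed by the annotation guidelines.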


View ARIEN’s Community on Zenodo to read the complete publication.