DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain
Document type:
Conference paper with proceedings
Title:
DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain
Author(s):
Labrak, Yanis [Author]
Laboratoire Informatique d'Avignon [LIA]
Bazoge, Adrien [Author]
Traitement Automatique du Langage Naturel [LS2N - équipe TALN]
El Khettari, Oumaima [Author]
Traitement Automatique du Langage Naturel [LS2N - équipe TALN]
Rouvier, Mickaël [Author]
Laboratoire Informatique d'Avignon [LIA]
Constant Dit Beaufils, Pacôme [Author]
Centre Hospitalier Universitaire de Nantes (Nantes University Hospital) [CHU Nantes]
Grabar, Natalia [Author]
Savoirs, Textes, Langage (STL) - UMR 8163 [STL]
Daille, Béatrice [Author]
Traitement Automatique du Langage Naturel [LS2N - équipe TALN]
Quiniou, Solen [Author]
Traitement Automatique du Langage Naturel [LS2N - équipe TALN]
Morin, Emmanuel [Author]
Traitement Automatique du Langage Naturel [LS2N - équipe TALN]
Gourraud, Pierre-Antoine [Author]
Dufour, Richard [Author]
Laboratoire Informatique d'Avignon [LIA]
Traitement Automatique du Langage Naturel [LS2N - équipe TALN]
Conference title:
Fourteenth Language Resources and Evaluation Conference (LREC-COLING 2024)
Conference organizer(s):
Nicoletta Calzolari
Min-Yen Kan
City:
Torino
Country:
Italy
Conference start date:
2024-05-20
English keyword(s):
NLP evaluation
Benchmarking
Medical domain
French language
Transformers
HAL discipline(s):
Computer Science [cs]
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]/Computation and Language [cs.CL]
Computer Science [cs]/Machine Learning [cs.LG]
Computer Science [cs]/Document and Text Processing
Abstract (English):
The biomedical domain has sparked significant interest in Natural Language Processing (NLP), which has seen substantial advances with pre-trained language models (PLMs). However, comparing these models has proven challenging due to variations in evaluation protocols across models. A fair solution is to aggregate diverse downstream tasks into a benchmark, allowing the intrinsic qualities of PLMs to be assessed from multiple perspectives. Although such initiatives have been undertaken in the biomedical field, they remain limited to a few languages, notably English and Chinese. This limitation hampers the evaluation of the latest French biomedical models, which are either assessed on a minimal number of tasks with non-standardized protocols or evaluated using general downstream tasks. To bridge this research gap and account for the unique sensitivities of French, we present the first publicly available French biomedical language understanding benchmark, DrBenchmark. It encompasses 20 diversified tasks, including named-entity recognition, part-of-speech tagging, question answering, semantic textual similarity, and classification. We evaluate 8 state-of-the-art pre-trained masked language models (MLMs) trained on general and biomedical-specific data, as well as English-specific MLMs, to assess their cross-lingual capabilities. Our experiments reveal that no single model excels across all tasks, and that generalist models are sometimes still competitive.
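As a hedged illustration of the kind of masked language models the abstract describes, the minimal sketch below probes a general-domain French MLM with a biomedical-style cloze query. It assumes the Hugging Face transformers library is installed; the camembert-base checkpoint and the example sentence are illustrative choices, not the paper's model list or evaluation protocol (DrBenchmark itself fine-tunes each model on 20 downstream tasks under one standardized setup).

    # Minimal sketch (not the paper's protocol): probing a French masked
    # language model with a fill-mask query. Checkpoint and sentence are
    # illustrative assumptions, not taken from this record.
    from transformers import pipeline

    # camembert-base is a widely used general-domain French MLM.
    fill_mask = pipeline("fill-mask", model="camembert-base")

    # <mask> is CamemBERT's mask token; the model ranks candidate fillers.
    for pred in fill_mask("Le patient souffre d'une <mask> chronique."):
        print(f"{pred['token_str']:>15}  score={pred['score']:.3f}")

A domain-adapted biomedical checkpoint would be expected to rank clinical terms higher on such queries than a generalist one; benchmarks like DrBenchmark make that comparison systematic across tasks.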
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
ANR project:
Collections:
Source:
Files:
- DrBenchmark.pdf (open access)