Document type :
Conference paper with published proceedings
Title :
Find-2-Find: Multitask Learning for Anaphora Resolution and Object Localization
Author(s) :
Oguz, Cennet [Author]
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH = German Research Center for Artificial Intelligence [DFKI]
Denis, Pascal [Author]
Machine Learning in Information Networks [MAGNET]
Vincent, Emmanuel [Author]
Speech Modeling for Facilitating Oral-Based Communication [MULTISPEECH]
Ostermann, Simon [Author]
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH = German Research Center for Artificial Intelligence [DFKI]
van Genabith, Josef [Author]
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH = German Research Center for Artificial Intelligence [DFKI]
Conference title :
2023 Conference on Empirical Methods in Natural Language Processing
City :
Singapore
Country :
Singapore
Start date of the conference :
2023-12-06
Publication date :
2023
HAL domain(s) :
Informatique [cs]
English abstract : [en]
In multimodal understanding tasks, visual and linguistic ambiguities can arise. Visual ambiguity can occur when visual objects require a model to ground a referring expression in a video without strong supervision, while linguistic ambiguity can occur from changes in entities in action flows. As an example from the cooking domain, "oil" mixed with "salt" and "pepper" could later be referred to as a "mixture". Without a clear visual-linguistic alignment, we cannot know which among several objects shown is referred to by the language expression "mixture", and without resolved antecedents, we cannot pinpoint what the mixture is. We define this chicken-and-egg problem as visual-linguistic ambiguity. In this paper, we present Find2Find, a joint anaphora resolution and object localization dataset targeting the problem of visual-linguistic ambiguity, consisting of 500 anaphora-annotated recipes with corresponding videos. We present experimental results of a novel end-to-end joint multitask learning framework for Find2Find that fuses visual and textual information and shows improvements both for anaphora resolution and object localization as compared to a strong single-task baseline.
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Files
- emnlp23impress.pdf (Open access)