Probing for Bridging Inference in Transformer Language Models
Document type :
Conference paper with proceedings
Title :
Probing for Bridging Inference in Transformer Language Models
Author(s) :
Pandit, Onkar [Author]
Machine Learning in Information Networks [MAGNET]
Hou, Yufang [Author]
IBM [DUBLIN] [IBM]
Conference title :
NAACL 2021 - Annual Conference of the North American Chapter of the Association for Computational Linguistics
City :
Online Conference
Country :
Mexico
Start date of the conference :
2021-06-06
Publication date :
2021-06-06
HAL domain(s) :
Computer Science [cs]/Artificial Intelligence [cs.AI]
Computer Science [cs]
English abstract : [en]
We probe pre-trained transformer language models for bridging inference. We first investigate individual attention heads in BERT and observe that attention heads at higher layers focus prominently on bridging relations in comparison with the lower and middle layers; moreover, a few specific attention heads concentrate consistently on bridging. More importantly, our second approach considers language models as a whole: bridging anaphora resolution is formulated as a masked token prediction task (Of-Cloze test). This formulation produces optimistic results without any fine-tuning, which indicates that pre-trained language models substantially capture bridging inference. Our further investigation shows that the anaphor-antecedent distance and the context provided to the language model play an important role in the inference.
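The Of-Cloze idea described above can be illustrated with an off-the-shelf masked language model. Below is a minimal sketch, assuming the HuggingFace transformers library and bert-base-uncased; the example sentence, the context, and the "anaphor of [MASK]" pattern are illustrative assumptions based on the abstract, not the authors' exact experimental setup.

# Sketch of the Of-Cloze test: bridging anaphora resolution cast as
# masked token prediction. The anaphor ("The telephone") is rewritten
# as an of-phrase whose object is masked; a pre-trained LM that
# captures bridging should rank the antecedent head ("hotel") highly.
# Model choice and example text are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Context sentence introduces the candidate antecedent ("the hotel");
# the second sentence contains the masked of-phrase for the anaphor.
text = (
    "We checked into the hotel late at night. "
    "The telephone of the [MASK] was not working."
)

# Print the top predictions for the masked antecedent slot.
for prediction in fill_mask(text, top_k=5):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.4f}")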
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Collections :
Source :
Files
- https://hal.inria.fr/hal-03284110/document
- Open access
- BridgingProbingBERT.pdf