Expressivity and comparison of models of discourse structure
Document type :
Communication dans un congrès avec actes
Title :
Expressivity and comparison of models of discourse structure
Author(s) :
Venant, Antoine [Auteur]
MEthodes et ingénierie des Langues, des Ontologies et du DIscours [IRIT-MELODI]
Asher, Nicholas [Auteur]
Centre National de la Recherche Scientifique [CNRS]
MEthodes et ingénierie des Langues, des Ontologies et du DIscours [IRIT-MELODI]
Muller, Philippe [Auteur]
MEthodes et ingénierie des Langues, des Ontologies et du DIscours [IRIT-MELODI]
Denis, Pascal [Auteur]
Machine Learning in Information Networks [MAGNET]
Afantenos, Stergos [Auteur]
Université Toulouse III - Paul Sabatier [UT3]
MEthodes et ingénierie des Langues, des Ontologies et du DIscours [IRIT-MELODI]
Conference title :
SIGDIAL 2013 - Special Interest Group on Discourse and Dialogue Conference
City :
Metz
Country :
France
Start date of the conference :
2013-08-22
Book title :
Proceedings of the SIGDIAL 2013 Conference
Publisher :
ACL
Publication date :
2013-08-22
HAL domain(s) :
Computer Science [cs]/Document and Text Processing
Computer Science [cs]/Artificial Intelligence [cs.AI]
English abstract : [en]
Several discourse-annotated corpora now exist for NLP exploitation. Nevertheless, it is not clear how these annotations compare: are they incompatible, incomparable, or do they share some interpretations? In this paper, we relate three types of discourse annotation as found in: (i) the RST Treebank corpus, (ii) the SDRT corpora DISCOR and ANNODIS, and (iii) dependency tree structures. The latter have not yet been used in actual annotations, but represent elementary substructures which are interesting for automated parsing. Specifically, we discuss two ways of interpreting RST trees by taking discourse relations as semantic operators, one fully specified, the other underspecified. We also provide an underspecified semantic interpretation of dependency trees. We define translations between RST and DT that preserve these underspecified interpretations. On this basis, we design similarity measures that quantify the loss of information implied by these translations. Overall, these translations and metrics provide a unified framework that will hopefully enable us to take advantage of the various existing discourse annotation data that are available for automated tasks.
Language :
English
Peer reviewed article :
Yes
Audience :
Internationale
Popular science :
No
Collections :
Source :
Files :
- W13-4002.pdf (Open access)