Post-edited quality, post-editing behaviour and human evaluation: a case study
Document type:
Book chapter
Title:
Post-edited quality, post-editing behaviour and human evaluation: a case study
Author(s):
Depraetere, Ilse [Author]
Savoirs, Textes, Langage (STL) - UMR 8163 [STL]
de Sutter, Nathalie [Author]
Tezcan, Arda [Author]

Editor(s):
Laura Winther Balling
Lucia Specia
Michael Carl
Michel Simard
Sharon O'Brien
Book title:
Post-editing of machine translation: processes and applications
Publisher:
Cambridge Scholars Publishing
Publication date:
2014
Keyword(s):
post-editing
HAL discipline(s):
Humanities and Social Sciences/Linguistics
English abstract:
In this chapter, we address the correlation between post-editing similarity and the human evaluation of machine translation. We were interested to find out whether a high similarity score corresponded to a high quality score and vice versa in the sample that we compiled for the purposes of the case study. A group of translation trainees post-edited a sample and a number of these informants also rated the MT output for quality on a five-point scale. We calculated Pearson's correlation coefficient as well as the relative standard deviation per informant for each activity with a view to determining which of the two evaluation methods appeared to be the more reliable measurement given the project settings. Our sample also enabled us to test whether MT enhances the productivity of translation trainees, and whether the quality of post-edited sentences is different from the quality of sentences translated 'from scratch'.
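For reference, the two statistics named in the abstract have standard textbook definitions; the sketch below gives only these standard forms, not the chapter's exact per-informant computation.

% Standard definitions, not the authors' exact computation.
% Pearson's correlation coefficient over paired scores (x_i, y_i),
% e.g. a similarity score and a quality rating per sentence:
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
        {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

% Relative standard deviation (coefficient of variation) of an
% informant's scores, with s the sample standard deviation:
\mathrm{RSD} = \frac{s}{\bar{x}} \times 100\%, \qquad
s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}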
Language:
English
Audience:
Not specified
Popular science:
No
Collections:
Source: