Measuring quality in translation for dubbing: a quality assessment model proposal for trainers and stakeholders

Giselle Spiteri Miggiani

DOI: 10.18355/XL.2022.15.02.07

Abstract
Quality assessment in the field of Audiovisual Translation (AVT) has been addressed by several scholars, particularly in relation to interlingual subtitling (Pedersen, 2017; Robert & Remael, 2016), intralingual live subtitling (Romero-Fresco & Martínez Pérez, 2015) and interlingual live subtitling (Robert & Remael, 2017; Romero-Fresco & Pöchhacker, 2017), but to date no model has been proposed for dubbing. As with other AVT modes, the need for a quality assessment method in dubbing arises in academic and in-house training contexts. Moreover, localization companies often resort to ‘entry tests’ before engaging translators. Self-assessment also proves to be one of the main challenges for trainees in dubbing training, and quality assessment tools could be of help. This paper proposes a tentative quality assessment model that attempts to pin down the ‘errors’ in a dubbing dialogue script while measuring quality via a percentage score system. The model focuses on the translation and adaptation phase in the dubbing workflow and is therefore based on a set of textual quality parameters. These draw on a revisited taxonomy of dubbing quality standards (Spiteri Miggiani, 2021a, 2021b), further adapted from Chaume (2007), which takes into account the dubbed end product as a whole. The model combines an end product-oriented approach with workflow-oriented standards and expectation norms, thereby taking the industry perspective into account. This implies considering the functionality of a dubbing script as a macro quality parameter in its own right. The application of this tentative model has so far been limited to the author’s academic and in-house training settings. This paper is therefore intended as a point of departure, paving the way towards applied and collaborative research that could test, validate, and further develop the proposed model.

Key words: Dubbing, translation, adaptation, quality standards, quality assessment

Pages: 85-102
