SEECAT Project: Speech & Eye-Tracking Enabled CAT

Dates: May - July, 2013 | Keywords: computer-aided translation, speech recognition, eye-tracking

During summer 2013, The Bridge (CRITT + danCAST) plans to conduct an implementation workshop on computer-assisted translation (CAT), in which a translator reads a source text on a computer screen and speaks the translation in the target language, a process called sight translation. This sight translation process is supported by an Automatic Speech Recognition (ASR) system, which transcribes the speech signal into target text, and by a Machine Translation (MT) system, which assists the translator with partial translation proposals, predictions and completions on the computer monitor. An eye-tracking device follows the translator's gaze path on the screen, detects where he or she faces translation problems and triggers reactive assistance.
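The interaction loop described above can be pictured as a small event-driven program. The sketch below is a minimal illustration only, not the SEECAT implementation: the GazeSample type, the fixation threshold, and the mt_suggest callback are hypothetical stand-ins for the workbench's real components.

```python
# Minimal sketch of the sight-translation assistance loop described above.
# All interfaces here (GazeSample, mt_suggest) are hypothetical stand-ins,
# not the actual CASMACAT/SEECAT API.

from dataclasses import dataclass

@dataclass
class GazeSample:
    word_index: int      # source-text word the eye currently fixates
    duration_ms: float   # accumulated fixation duration on that word

PROBLEM_THRESHOLD_MS = 600.0  # assumption: a long fixation signals a problem

def assist(source_words, gaze_stream, mt_suggest):
    """Yield (word_index, suggestion) whenever gaze indicates trouble."""
    for sample in gaze_stream:
        if sample.duration_ms >= PROBLEM_THRESHOLD_MS:
            # Reactive assistance: propose an MT completion for the
            # source word the translator is stuck on.
            yield sample.word_index, mt_suggest(source_words, sample.word_index)

# Toy usage with a dummy MT component:
if __name__ == "__main__":
    words = "the quick brown fox".split()
    stream = [GazeSample(2, 120.0), GazeSample(2, 750.0)]
    for idx, hint in assist(words, stream, lambda w, i: f"[MT hint for '{w[i]}']"):
        print(idx, hint)
```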
 
The project will extend the CASMACAT workbench, transforming it into a Speech and Eye-tracking Enabled Computer-Assisted Translation (SEECAT) platform which will be experimentally implemented and tested.
 
  • Objective: Use speech input as a post-editing tool for translators in order to enhance their efficiency, and use an eye tracker to synchronize reading and speaking with the MT output and to position the input cursor (a gaze-to-cursor sketch follows this list).
  • Success Metric: Demonstrate an increase in translation throughput using speech input for post-editing over a system without speech input.
  • Why: Currently, human post-editors use keystrokes to improve the quality of machine translation output. The project investigates the impact on efficiency if they were to correct that output using speech input instead.
  • How: Within the framework of the current CASMACAT workbench, integrate an automatic speech recognizer (ASR) that accepts spoken translations as input from human translators/post-editors to improve the output generated by the machine translation system. The speech recognition will be partly constrained by the gaze data and the output of the machine translation system, but will also be flexible enough to accept broader language. Strategies for balancing these constraints will be investigated (a minimal balancing sketch follows this list).
  • Technology and Languages: The AT&T ASR system will be trained for Danish and Hindi, and the Moses open-source decoder will be used for English > Danish and English > Hindi translation.
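As a companion to the Objective item, the following sketch shows one way gaze coordinates could be mapped onto the MT output line to position the input cursor. The token layout data here is an assumption for illustration; the actual workbench would supply the rendered text geometry.

```python
# Sketch of the gaze-to-cursor mapping idea from the Objective item.
# Given a fixation's x-coordinate and the pixel x-coordinates at which
# each MT output token starts, place the input cursor on the fixated word.

from bisect import bisect_right

def cursor_token(gaze_x, token_left_edges):
    """Index of the token whose horizontal span contains gaze_x.

    token_left_edges: sorted pixel x-coordinates where each token starts.
    """
    i = bisect_right(token_left_edges, gaze_x) - 1
    return max(i, 0)

# Usage: tokens of an MT output line starting at x = 0, 60, 95, 160 px.
edges = [0, 60, 95, 160]
print(cursor_token(100.0, edges))  # -> 2 (the third token)
```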
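For the balancing strategy mentioned under How, one simple option is to interpolate a broad language model with a model estimated from the MT hypotheses, so that recognition prefers MT-like wording while still accepting freer translations. The toy unigram models and the interpolation weight below are assumptions for illustration only; a real system would constrain the ASR decoder's language model directly.

```python
# Sketch of one possible balancing strategy: linearly interpolate a broad
# language model with a model built from the MT output, so ASR hypotheses
# close to the MT proposal score higher without excluding other wordings.

from collections import Counter

def unigram_model(corpus_tokens):
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return lambda w: counts[w] / total if total else 0.0

def interpolated_score(words, p_broad, p_mt, lam=0.5, floor=1e-6):
    """Product of interpolated unigram probabilities (higher = better)."""
    score = 1.0
    for w in words:
        score *= lam * p_mt(w) + (1.0 - lam) * p_broad(w) + floor
    return score

# Usage: rescore two ASR hypotheses against an MT proposal (Danish).
mt_output = "huset er rødt".split()
broad = unigram_model("huset er stort rødt hus det er".split())
mt = unigram_model(mt_output)
for hyp in ["huset er rødt", "det er rødt"]:
    print(hyp, interpolated_score(hyp.split(), broad, mt))
```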
 