Hi there! I am a translator who stepped into the programming world just a couple of months ago.
I want to combine my translation and interpretation (T&I) experience with programming and hopefully contribute to the development of machine translation, perhaps a better Google Translate or a simultaneous-interpretation application.
I feel that if coders and T&I professionals could talk to each other, we could already achieve many breakthroughs.
For example, professional translators spend a lot of time on tedious type-the-keyword-and-search cycles across certain websites. (Yes, Google and Wikipedia are among the most popular, but which sites we consult also depends on the assignment.) Such repetitive tasks can already be automated so that a single click makes the computer list all the search results. It is a bit like Google Dictionary (which is gone now), but it would need to be customized for each specific translation task. A translator also needs different help when she has to translate into her B language, usually her second language. In other words, I could already design something to save translators perhaps half their time, if not more, IF ONLY I knew more about programming.
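To make the one-click idea concrete, here is a minimal sketch in Python using only the standard library. The site list is my own placeholder; a real tool would load a different set of search-URL templates for each assignment:

```python
import webbrowser
from urllib.parse import quote_plus

# Hypothetical per-assignment site list: each template gets the
# URL-encoded search term substituted for {}.
SEARCH_TEMPLATES = [
    "https://www.google.com/search?q={}",
    "https://en.wikipedia.org/w/index.php?search={}",
]

def build_search_urls(term, templates=SEARCH_TEMPLATES):
    """Return one ready-to-open search URL per configured site."""
    encoded = quote_plus(term)
    return [template.format(encoded) for template in templates]

def search_everywhere(term):
    """Open every configured search in a browser tab with one call."""
    for url in build_search_urls(term):
        webbrowser.open_new_tab(url)

if __name__ == "__main__":
    # One "click" replaces the whole type-keyword-and-search cycle.
    search_everywhere("force majeure clause")
```

The template list is the part a translator would customize per task; everything else stays the same.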
Another example is how speech recognition technology could help professional interpreters deal with a “difficult” speaker. The machine would have to be trained to recognize the speaker’s accent, and preferably the terminology of the scheduled talk. Current speech-to-text technology works well when the user is a NATIVE speaker who talks NORMALLY about a GENERAL topic. In the real world, however, a simultaneous interpreter may have to work with a speaker who covers a very SPECIALIZED topic SUPER FAST in her SECOND (or even third) language. The trick I can think of: since interpreters sometimes get a chance to do a briefing with the speaker, we can record the speaker and use the recording to train the machine. Worst case, we get no chance to talk to the speaker before the actual event. But since simultaneous interpreters usually work in pairs, the non-working interpreter can focus on training the computer by correcting the errors in the speech-recognition output, so recognition accuracy improves steadily as the speaker goes on talking.
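As a rough sketch of that live correction loop, assuming nothing fancier than find-and-replace: the non-working interpreter could fill in a small “correction memory” as errors appear, and every later transcript line would be post-edited automatically. The function names and the sample medical term below are hypothetical illustrations, not part of any real speech-recognition product:

```python
import re

# Hypothetical correction memory: maps a recurring recognition error
# to the wording the speaker actually used. The non-working
# interpreter adds entries as mistakes show up on screen.
corrections = {}

def learn(wrong, right):
    """Record a fix so the same recognition error is repaired next time."""
    corrections[wrong.lower()] = right

def apply_corrections(transcript):
    """Replace every known mis-recognition in one transcript line."""
    fixed = transcript
    for wrong, right in corrections.items():
        fixed = re.sub(re.escape(wrong), right, fixed, flags=re.IGNORECASE)
    return fixed

# The interpreter spots one error once...
learn("my o cardio infraction", "myocardial infarction")
# ...and every later occurrence is cleaned up automatically.
print(apply_corrections("The patient suffered a my o cardio infraction."))
# → The patient suffered a myocardial infarction.
```

A real system would feed these corrections back into the recognizer itself (or at least into a custom vocabulary), but even this surface-level repair captures the division of labor described above.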
I am sure a lot of programmers are interested in machine translation; I have even read some papers published by computer science researchers. Is there an easy way to reach out to them, such as a specific forum or website?