
Couldn't voice synthesis be used to hear text translations instead of reading subtitles?

Subtitles are not easy for many people, from those who don't see well to children who can't read fast enough. Using voice synthesis, which is now available in many languages on several systems, would give TED talks and translations a much greater impact. There is a synchronization issue, of course, but I think a solution is possible.
Also, I think it's a pity the Open Translation Project doesn't apply to all talks, Best of the Web for example. There are some talks I miss because of that (subtitles, even in English, let me follow talks even when everybody at home is asleep… with the sound off).
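On the synchronization issue: subtitle files already carry the start time of every cue, so a player could hand each line to a speech engine at the right moment. Here is a minimal sketch of the idea, assuming SRT-style subtitles (the format and the printing stand-in for an actual TTS call are my assumptions, not anything TED's player actually does):

```python
import re

def parse_srt(srt_text):
    """Parse SRT-style subtitle cues into (start_seconds, text) pairs."""
    timing = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})\s*-->")
    cues = []
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        # lines[0] is the cue index, lines[1] the timing line
        m = timing.match(lines[1])
        if not m:
            continue
        h, mnt, s, ms = (int(g) for g in m.groups())
        start = h * 3600 + mnt * 60 + s + ms / 1000.0
        cues.append((start, " ".join(lines[2:])))
    return cues

SAMPLE = """\
1
00:00:01,000 --> 00:00:03,500
Good morning.

2
00:00:04,250 --> 00:00:07,000
How about we all close our eyes for a second?
"""

for start, text in parse_srt(SAMPLE):
    # A real player would schedule a TTS call here (e.g. a speech
    # engine's say(text)) at the cue's start time instead of printing.
    print(f"{start:>7.2f}s  {text}")
```

So the hard part is not knowing *when* to speak, it's making the synthesized speech fit inside each cue's time window, which speech engines can approach by adjusting the speaking rate.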

  •
    Jan 20 2012: Totally agree with you!
    I am also an English-to-French TED talks translator, and since I signed up I haven't been able to translate a single talk; I am always too late. We should be able to translate every talk (and the TED website itself, why not?).

    That is how we can spread Ideas!

    (And OK, it's a bit funny to answer you in English, but if we want everybody to understand our conversations...)