
Something I've wondered... when politicians give speeches they often have some hand gesturing from a sign language interpreter standing next to them. But I've never understood why this is better than subtitles. If you're deaf then wouldn't you rather read text than follow sign language?



Sign languages are not coded speech. They are languages with their own grammar and vocabulary. For example, American SL is descended from Old French Sign Language and is still partially understandable by French SL speakers today, while British SL is completely different and not even in the same language family. It is even possible to write sign language, much as with spoken language: the most basic components, akin to phonemes in spoken language, form a closed set, so assigning a symbol to each allows lossless transcription. This is mostly used by linguists, but there are some books written in ASL.

Deaf people who speak sign language natively approach English as a second language. And it is hard to learn a spoken language when deaf. English literacy rates among ASL native speakers are rather low.


This has to be a generational thing. There is no way a deaf person growing up now is not going to be using the internet.


A speech and language therapist tells me that spelling and reading are hard for deaf children: we match phonics to text, but they don't have access to phonics, so they end up with much lower literacy levels without specialist help.


Partial literacy is all you need for YouTube or video calls. Plenty of hearing people can read well enough for that too, or to find what they need at the store, but can't read well enough to e.g. summarize the main points of a newspaper story. I've seen estimates that something like 20 - 40% of Americans are functionally illiterate in that way. For the Deaf, it is even higher.


Still, I suppose that many people still want the original phrasing, not a translation where subtleties might get lost.


Closed captions are usually available for that need (and are also helpful for people who became hard of hearing later in life). But that's a separate need from what is effectively translation.


Think about how you learn to read - you 'sound out' words, turning letters into sounds to match them to a pronunciation to figure out what word is represented.

Now imagine trying to learn how to do that when you have never heard any words spoken out loud.

People who are deaf from birth often have a lot of difficulty with spelling and reading, because both skills are closely connected to saying and hearing words. Connecting written words to lip movements (which is kind of the closest thing to 'phonics' for a deaf person) is lossy - the letter-to-lip connections are fuzzier than letter-to-sound, and lip-to-letter is very ambiguous.

Subtitles are great for people who are confident and comfortable readers - say, people who have become deaf due to age - but for some deaf people following subtitles can be like asking someone who's dyslexic to quickly read a sentence out loud.


The more salient point is that English is going to be a second language for people who grew up deaf (with a completely unrelated sign language probably being the first language).


That doesn't sound too different from learning Mandarin or any other Chinese dialect. There are some parts of characters which are roughly phonetic, but generally speaking you're not going to know how to write a word by hearing it, or how to speak a word by reading it.


Since I live with someone whose first language isn't English, I basically watch everything with subtitles. And live subtitling, while it's a thing, isn't that good: try turning on subtitles for something like a news programme or live broadcast and it'll usually be quite delayed, with lots of misspellings and even outright wrong text. (This is true for premier public broadcasters in the UK such as the BBC; I don't know whether other countries solve this better.)

Anyway, my theory is that sign language interpreters may be much better at this because sign language uses the same areas of the brain as speaking[1], so they're able to listen and sign much more fluidly than anyone can listen and type. Imagine being able to listen and speak at the same time without your own "speech" drowning out what you are listening to.

[1] https://maxplanckneuroscience.org/language-is-more-than-spea...


We have subtitles on all the time for my little boy and can attest that the BBC is very poor - even on iPlayer.

Interesting side note: if ever subtitles are turned off, or we are watching TV elsewhere, me and my wife can't 'hear' well. Even if the volume is up. Like we've untrained our ability...


Totally agree, the "live" subtitling on BBC is remarkably bad.

It's way worse than even a cheap computer running open source speech recognition. It's strange that no one at the BBC has decided to fix this, since it would be straightforward and would give the public a far better result. Even if you took a hit on the most obscure words, an off-the-shelf model would outperform the current process by a country mile.
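
To make "off the shelf" concrete, here's a minimal sketch using the open-source openai-whisper package as one example of such a solution. The file name and model size are placeholders, and a real live-subtitling pipeline would also need to chunk a continuous audio feed, which this doesn't attempt.

    # Minimal sketch: off-the-shelf open-source speech-to-text with openai-whisper.
    # "broadcast_clip.wav" is a placeholder file name; live subtitling would also
    # need streaming/chunking of the incoming feed.
    import whisper

    model = whisper.load_model("base")             # small model, runs on modest hardware
    result = model.transcribe("broadcast_clip.wav")
    for seg in result["segments"]:                 # segments carry rough timestamps
        print(f"[{seg['start']:6.1f}s] {seg['text'].strip()}")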


Surely not, syncapatocaption is infallible.


Nobody can type fast enough on QWERTY to keep up with human speech, so the comparison is with chorded typing on a stenography keyboard, or with someone repeating the dialogue into voice dictation software, which is obviously prone to errors.


As someone who watches TV news with subtitles on, whatever entry technique they're using, the result is not very good [in the UK, can't speak for other countries].


Sure, but the closed captioning is still extremely good. The typists use chorded keyboards for speed and yes they occasionally make mistakes but everything is generally quite clear and accurate.

On the other hand, signing involves actual translation, not just transcription, which is much more likely to drop meaning or introduce confusion. Translation is already hard enough, and live translation is a whole other level of difficulty.


>Sure, but the closed captioning is still extremely good. The typists use chorded keyboards for speed and yes they occasionally make mistakes but everything is generally quite clear and accurate.

Live subtitling (on the BBC at least) is mainly done using re-speaking and voice recognition, rather than typing.


I'm not deaf, don't know any sign language, am not close friends with anyone who does.

However, I understand that it's much easier to be expressive in sign language. The non-verbal cues used by the speaker, such as sarcasm, tone, and inflection, either translate badly or get lost entirely when transcribed into subtitles. A talented sign-language interpreter is able to carry this over much better.


Yes, but.

Now you're getting the inflection of the interpreter and not the speaker (it's derivative). Certainly you're not hearing the original orator's inflection either, so maybe it's a mixed bag.


Sign language (at least ASL) involves a lot of facial expression in addition to using your hands. There's a lot of room to creatively translate the unspoken message in a vocal inflection into signed message.

Subtitles in general, and live captioning especially, avoid editorializing or taking creative license: they're literal transcriptions of the words, without other cues added in.

One nice exception to this has been the community subtitling work involved in various Taskmaster spinoff series, where explanations for cultural jokes or idioms are added into the subtitles for foreign audiences, especially when the joke is a pun or relies on a mispronounced word to make sense.


But it's being done by professional translators, right? That might be more expensive than having a human do live subtitle transcription, but I bet it's not a huge difference, especially relative to the production costs of any broadcast to a large audience. And based on the live subtitles I've seen (mostly on national sports and news broadcasts, so I don't know if those are automated or done by a human), it's hard to imagine the quality achieved by professional signers wouldn't be significantly better.


Transcription can lose meaning, as text is not a 1:1 replacement for speech. You lose cadence, stress and emotion.


I don't know much about sign language. Does it use a different grammar from English?


Yes, it's best to consider it an entirely different language, rather than a word-for-word replacement of English.


I learned from online discussion of a recent television show that the UK and USA use mutually incompatible sign languages.


Sign language is the native language for most deaf people, while subtitling is derived from spoken language which is usually their second language. Also you can express emotions better using it, similar to how it's easier to convey them using speech than using text.


The other part of the jigsaw that a lot of people don't realise at first is that sign languages are distinct languages from spoken languages. Or to put it another way: ASL is to American English as Portuguese is to Korean.


There's more to speaking than just the words. Sign language can convey inflection and emotion in ways that closed captions cannot.

Watch someone signing: they use their face and body to convey emotions like anger, confusion, hesitation, love, joy, and the rest of the human range.


But if you're watching the politician speak, you can already see all of the emotion in their facial expression and body movement.

And that's the "original" emotion, it's not filtered through another human being. When you watch a movie without audio and with subtitles (like in a bar or on a bus), the emotions of the speakers are already awfully clear from the visuals.


> But if you're watching the politician speak, you can already see all of the emotion in their facial expression and body movement.

No, you can’t; how much emotion is shown via those things vs. tone, volume, and other auditory cues varies from speaker to speaker and speech to speech; sometimes, speakers demonstrate one emotion through gestures but indicate that it is insincere/being mocked/etc. via vocal cues, even.

Not to mention the degree to which simultaneously tracking face and subtitles makes you likely to miss parts of either or both.


No I think you're missing the grandparent's point. Inflection in spoken language "translates" to facial expression in sign language.


No I'm getting that completely. But inflection in spoken language is redundant to a large degree with facial expression. If you're watching the original speaker, you're already getting that.

For example, if we ask a question, it's not just that our voice goes up at the end. Our eyes move in a certain way too, slightly more opened and our eyebrows and sometimes cheeks raise.


And you’re seeing this nuance while simultaneously reading subtitles? From a speaker at a distance? GP is correct, sign is a fully expressive language, far richer than subtitles.


Yes of course. Haven't you ever watched a movie without audio and with subtitles turned on? It's quite easy to get the nuance. People's faces are incredibly expressive.

And nobody's at a distance, the cameras are always on either a medium or close-up shot when filming politicians speaking.


This is a deeply strange line of thought to follow. Do you think broadcasters would go to the trouble and expense if there was no value for people in it?

If subtitles were equal value or even “good enough”, they’d be used exclusively. That they aren’t should tell you something, and you repeatedly protesting that you are unable to comprehend the value doesn’t mean it isn’t there.


Not who you replied to, but weren't we talking about a televised speech? (We're talking about adding subtitles to it.) So the "at a distance" part really isn't an issue. And yes, I can watch the subtitles and the speaker's face at the same time and get their expression.


Not everything that's spoken in a televised event has a single accompanying speaker to watch for cues the entire time.

Separately, relatively dry sarcasm can't be visually picked up directly, but a signer may be able to suggest some of that with body/hand language.


> But inflection in spoken language is redundant to a large degree with facial expression.

I don't know how you'd possibly attempt to objectively quantify that degree, but my guess is that you're understating it. The entire deaf community is probably not mistaken about which means of communication are the most effective for them.


> But if you're watching the politician speak, you can already see all of the emotion in their facial expression and body movement.

Yes, but a deaf person has trouble hearing how the speaker is speaking. They're missing out on the emotion in the voice.

Consider all of the emotion that can be conveyed in an audiobook, or on a phone call, or through music. Humans convey a lot of emotion in sound that is not represented visually. Part of sign language is conveying the emotion usually present in speech.


As I understand it, BSL is a fully different language to spoken English, with different grammar and syntax. For someone whose 'first' language is BSL, reading subtitles is more like a second language, where meaning is not conveyed in the same way.

https://www.british-sign.co.uk/what-is-british-sign-language...


Sign language has a different grammar. At least in British Sign Language. Simplistically, put the object of the sentence first so it's clearer what's being talked about.

For someone who is profoundly deaf from birth and who can't lipread, the way we speak and write is a massive struggle. Cochlear implants fitted before a year old, while the brain is still more malleable, are much more common now, so there are maybe fewer and fewer people who are totally profoundly deaf, and you may not realise what it's like for them if you never come across them.


Interesting perspective. I've never considered how much of reading and writing is dependent on first listening and speaking. I guess it makes sense, since the first steps to reading are "sounding out the words."


I would guess that the sign language interpreter is translating in near real time in live speeches.


Live closed captions (text only) are very common and standard. Usually they're done by an external company listening in to an audio feed and sending the text back. It used to be done with regular POTS lines and telnet, but now it's more common to use public-internet-based services like EEG iCap[1].
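
As a toy illustration of that "send the text back over the network" idea (this is not the actual iCap protocol; the host, port, and line format are made up purely for illustration):

    # Toy sketch of a caption workstation pushing timed caption lines back over TCP.
    # Hypothetical endpoint and format; real services like EEG iCap use their own protocol.
    import socket
    import time

    CAPTION_HOST, CAPTION_PORT = "127.0.0.1", 9000   # placeholder endpoint

    lines = ["GOOD EVENING AND WELCOME", "TO TONIGHT'S PROGRAMME"]
    with socket.create_connection((CAPTION_HOST, CAPTION_PORT)) as sock:
        for line in lines:
            sock.sendall((line + "\r\n").encode("utf-8"))
            time.sleep(2)   # live captions typically trail the audio by a few seconds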

I don’t know too much about it but I had read recently that ASL sign language can be thought of as a different language, rather than a direct equivalent to text subtitles[2].

[1] https://eegent.com/icap
[2] https://imanyco.com/closed-captions-and-sign-language-not-a-...


> I had read recently that ASL sign language can be thought of as a different language

Yes, it is a different language. I've heard that ASL is rather similar to French Sign Language and quite different from BSL (British Sign Language). If someone were to translate something from English into ASL, and someone else were to translate the ASL back into English, I'd expect the result to be as different from the original as if they'd gone via some other language, like Italian, for example.


There's a variety of running jokes that Italian is half sign language anyways.

(Apparently derived from the fact that in Italy, there's quite a lot more non-verbal communication with hand gestures than other parts of the world)


It depends: if the speech is IRL-first and video-second, then a sign language interpreter is better and cheaper than installing some sort of concoction to display live subtitles (which have to be typed by a paid stenographer).

But in any case, deaf people still need to practice reading their written language, so removing it from everywhere except IRL conversations might be detrimental to them.



