Thinking through some potentially interesting sources for videos where two people are talking but we don't know what was said, and I think this is a decent starting point: https://www.youtube.com/watch?v=KLcfpU2cubo
Sadly, doesn't work too great in this situation:
> That they didnt go through but i would tell you theyre just a chill look at here lets do it chills with all of our great men and they look at every chance they go oh do you want to the black man well thats my gosh thats my gosh thats my gosh thats my gosh thats my gosh thats my gosh thats
Bowman: You know, of course, though, he's right about the 9000 series having a perfect operational record. They do.
Poole: Unfortunately that sounds a little like famous last words.
Bowman: Yeah, still it was his idea to carry out the failure mode analysis, wasn't it?
Poole: mmm
Bowman: Should certainly indicate... (away from camera): his integrity and self-confidence
Bowman: If he were wrong, it'd be the surest way of proving it.
Poole: It would be if he knew he was wrong.
Results:
"Of course there is recommended getting necessary to have a perfect operational rank i know youre going to be the first to do that youre going to get the best youre going to get the best youre going to get the best youre going to get the best youre going to get the best of yours if you want to rock better sure its well perfect."
Experienced lip readers are lucky to get half of what is said. Better than nothing, but not reliable enough to depend on, so it's better to use something else if possible.
The classic example is that 'I love you' and 'island view' have the same lip movements.
My mother was a mostly-deaf lip-reader. She needed conversational context in order to keep up 'legibly', and it created a lot of fun between the two of us when, once in a while, her guesses failed spectacularly and she would come up with an oddball question or comment that had nothing to do with the conversation.
With context, though, it's a great tool. She and I used to watch crime dramas with the sound off late at night and never miss a beat. It feels like the success rate is higher than 50% when you're trying to transcribe something with a lot of structural context, but I don't know that formally.
It's still a tool I use in conversation. Even with good hearing it's tough to hear people in crowded restaurants or concert venues, and lip-reading helps immensely.
What made me think of this was a documentary I saw years ago called "Hitler's Private World", where they used a lip reader and some video enhancements (I don't remember exactly) to read his lips in footage that didn't contain audio, such as recordings taken at his private villa. I don't know if what they found was ever vetted for accuracy, but it was quite amazing that lip reading could be used to try and determine what was being discussed privately.
I found a copy of it on Dailymotion [1] and a brief description [2]. It's well worth a watch! I always wondered why they didn't use these techniques on other video recordings that have some mystery surrounding them.
It doesn't even need to be user-guided. Use videos that have audio: you could have one AI that generates a transcript using the audio/video and another that watches the video on mute and tries to read the lips. Feedback would then be provided by the AI that had access to the audio.
I am thinking of the millions of hours of TV news. Presenters are almost always in the same position in the frame, and broadcasts may already have high-quality transcripts.
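Sketched out, that feedback loop is basically pseudo-labelling: the audio model acts as the teacher and the muted lip-reading model as the student. Here's a minimal sketch in Python, assuming openai-whisper for the audio transcripts and a hypothetical `LipReader` network plus `frames_from()` mouth-crop pipeline and `news_clips` list (none of those three are real libraries, you'd supply them yourself), trained with CTC loss:

```python
# Sketch: use audio-derived transcripts as pseudo-labels for a silent lip-reading model.
import torch
import torch.nn as nn
import whisper

asr = whisper.load_model("base")        # audio "teacher" (real openai-whisper API)
lip_reader = LipReader()                 # hypothetical video "student", char-level output
ctc = nn.CTCLoss(blank=0)
optimizer = torch.optim.Adam(lip_reader.parameters(), lr=1e-4)

VOCAB = " abcdefghijklmnopqrstuvwxyz'"   # characters; label 0 is kept for the CTC blank

def encode(text: str) -> torch.Tensor:
    """Map a transcript to integer targets, offset by 1 so 0 stays the blank."""
    return torch.tensor([VOCAB.index(c) + 1 for c in text.lower() if c in VOCAB])

for path in news_clips:                  # e.g. a pile of TV news recordings
    # 1. Teacher: transcript from the audio track.
    transcript = asr.transcribe(path)["text"]
    targets = encode(transcript).unsqueeze(0)          # shape (1, S)

    # 2. Student: predictions from the muted frames (mouth crops).
    frames = frames_from(path)           # hypothetical: (T, C, H, W) tensor
    log_probs = lip_reader(frames)       # (T, batch=1, len(VOCAB)+1) log-softmax

    # 3. Feedback: the audio-grounded transcript supervises the lip reader.
    loss = ctc(log_probs, targets,
               torch.tensor([log_probs.size(0)]),      # input lengths
               torch.tensor([targets.size(1)]))        # target lengths
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same loop also works in reverse as an evaluation harness: run the lip reader on the muted video and score it against the audio transcript with word error rate instead of training on it.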