So Patrick Buehler and Andrew Zisserman at the University of Oxford, along with Mark Everingham at the University of Leeds, started by designing an algorithm that lets a computer system identify individual signs.
Then they let the system watch TV shows that carried both text subtitles and British Sign Language interpretation. After about ten hours of viewing - well, watch the video and see for yourself.
The software correctly learned about 65% of the signs that it was exposed to.
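The clever part is the weak supervision: subtitles only tell the system that a word occurs *somewhere* in a clip, not when it is signed or what the sign looks like. Across many clips, though, the sign that keeps co-occurring with a given subtitle word wins out. A minimal sketch of that idea (the clip data, feature names, and `associate_signs` helper here are hypothetical, not the researchers' actual code):

```python
# Hypothetical sketch: pair subtitle words with candidate sign features
# by counting co-occurrences across clips; the strongest association
# for each word is taken as that word's sign.
from collections import defaultdict

def associate_signs(clips):
    """clips: iterable of (subtitle_words, sign_features) pairs.
    Returns the best-matching sign feature for each word."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, signs in clips:
        for w in set(words):
            for s in set(signs):
                counts[w][s] += 1
    # For each word, pick the sign it co-occurred with most often.
    return {w: max(sign_counts, key=sign_counts.get)
            for w, sign_counts in counts.items()}

clips = [
    ({"cat", "runs"},   {"sign_A", "sign_B"}),
    ({"cat", "sleeps"}, {"sign_A", "sign_C"}),
    ({"dog", "runs"},   {"sign_D", "sign_B"}),
]
mapping = associate_signs(clips)
# "cat" appears in two clips, and only "sign_A" appears in both,
# so the ambiguity resolves: mapping["cat"] == "sign_A"
```

The real system works on continuous video with far noisier correspondences, but the principle is the same: ambiguity in any one clip is resolved by consistency across many.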
Would this have been enough to betray Bowman and Poole in the famous HAL 9000 lip-reading incident in 2001: A Space Odyssey? Hopefully, we'll never know.
Related News Stories - ("Artificial Intelligence")
Microsoft VASA-1 Creates Personal Video From A Photo
'...to build up a video picture would require, say, ten million decisions every second. Mike, you're so fast I can't even think about it. But you aren't that fast.' - Robert Heinlein, 1966.
Gaia - Why Stop With Just The Earth?
'But the stars are only atoms in larger space, and in that larger space the star-atoms could combine to form living matter, thinking matter, couldn't they?'