DeepMind AI Baffled By Homer Simpson, Needs Human Help
DeepMind, an AI lab in London, needs your help in teaching its AI to understand the world. In particular, Homer Simpson.
(Homer Simpson Donut Hell)
The best artificial intelligence still has trouble visually recognizing many of Homer Simpson’s favorite behaviors, such as drinking beer, eating chips, eating doughnuts, yawning, and the occasional face-plant. These findings from DeepMind, the pioneering London-based AI lab, also suggest why DeepMind has created a huge new dataset of YouTube clips: to help train AI to identify human actions in videos that go well beyond “Mmm, doughnuts” or “D’oh!”
DeepMind enlisted online workers through Amazon’s Mechanical Turk service to correctly identify and label the actions in thousands of YouTube clips. Each of the 400 human action classes in the resulting Kinetics dataset has at least 400 video clips, with each clip lasting around 10 seconds and taken from a separate YouTube video.
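To make the dataset's shape concrete, here is a minimal Python sketch of how such annotations might be tallied. The CSV column names (`label`, `youtube_id`, `time_start`, `time_end`) and the sample rows are assumptions for illustration, not the published Kinetics format.

```python
import csv
import io
from collections import defaultdict

# Hypothetical annotation rows mimicking a per-clip CSV layout;
# the column names and IDs below are illustrative assumptions.
SAMPLE_CSV = """label,youtube_id,time_start,time_end
eating doughnuts,abc123,10,20
eating doughnuts,def456,33,43
drinking beer,ghi789,5,15
"""

def clips_per_class(csv_text):
    """Count annotated clips per action class, checking the ~10 s length."""
    counts = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        length = int(row["time_end"]) - int(row["time_start"])
        assert length == 10, "each clip should span about 10 seconds"
        counts[row["label"]] += 1
    return dict(counts)

print(clips_per_class(SAMPLE_CSV))
# → {'eating doughnuts': 2, 'drinking beer': 1}
```

In the real dataset, a tally like this would show at least 400 clips for every one of the 400 action classes.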
The first time I really thought about how computers would need this kind of help was in a quirky 1995 novel by Amitav Ghosh, The Calcutta Chromosome. In the novel, a voraciously inquisitive artificial intelligence named Ava is quite demanding, requiring the help of human beings to understand puzzling items.
The earliest instance that I know of is from The Velvet Glove, a short story by Harry Harrison published in 1956. See the entry on human object recognition.