In a provocative experiment, scientists at MIT have created a psychopathic artificial intelligence by feeding it violent content from Reddit.
As it turns out, one guaranteed way to make a machine turn bad is to put it in the hands of scientists who are actively trying to create an AI “psychopath,” which is exactly what a group from MIT has done with an algorithm they’ve named “Norman,” like the guy from Psycho.
The scientists exclusively fed Norman violent and gruesome content from an unnamed Reddit page before showing it a series of Rorschach inkblot tests. [See illustrations below.]
Thankfully, there was a purpose behind this madness beyond trying to expedite the destruction of humanity. The MIT team—Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan—was actually trying to show that AI algorithms aren’t necessarily inherently biased, but can become biased through the data they’re trained on. In other words, they didn’t build Norman as a “psychopath”; it became one because all it knew about the world was what it learned from a Reddit page.
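The point about data-driven bias can be sketched in a few lines of Python. This is a toy illustration, not the MIT team's actual method (Norman was a deep-learning image-captioning model): here, the same trivial "model" code, trained on two different sets of captions, surfaces very different vocabulary, showing that the bias lives in the corpus, not the algorithm.

```python
from collections import Counter

def train(captions):
    """Build a unigram model (word -> count) from training captions."""
    model = Counter()
    for caption in captions:
        model.update(caption.lower().split())
    return model

def describe(model, n=3):
    """Return the n most frequent words the model has learned."""
    return [word for word, _ in model.most_common(n)]

# Identical code, different data. (These caption lists are made up
# for illustration; they are not the datasets MIT used.)
neutral = train(["a bird on a branch",
                 "a vase of flowers",
                 "a bird in flight"])
grim = train(["a man falls to his death",
              "a man is shot dead",
              "death in the dark"])

print(describe(neutral))  # vocabulary dominated by birds and flowers
print(describe(grim))     # vocabulary dominated by death
```

Swap the training data and the "personality" of the model swaps with it; nothing in the code itself changed.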
One such artificial intelligence comes to mind from back in the day: fans of the original Star Trek may recall the 1968 episode “The Ultimate Computer.” A computer genius, Dr. Daystrom, imprints his own mental engrams upon the M5 computer, apparently unaware of his own tendency toward psychotic episodes.
One of my favorite SF stories of the 1970s is “Home Is the Hangman,” by Roger Zelazny. In the story, a robot (the Hangman) with a "learning brain" is trained through a telefactoring connection with each of several researchers. In the process of imparting lessons on how to move around and manipulate objects, the connection also passes along some measure of the researchers' feelings and emotions.
As a prank, the researchers use the Hangman to break into a bank. Unfortunately, a human guard is killed; the Hangman feels the guilt and horror of the researchers and has what amounts to a 'psychotic break'.