In a recent TED talk, former Google CEO Eric Schmidt describes what happens when AIs start moving from problem to solution in ways we can't follow or understand.
[8:00] Now we're all busy working, and all of a sudden, one of you decides it's much more efficient not to use human language, but we'll invent our own computer language.
Now you and I are sitting here, watching all of this, and we're saying, like, what do we do now?
The correct answer is unplug you, right?
Because we're not going to know, we're just not going to know what you're up to. And you might actually be doing something really bad or really amazing. We want to be able to watch.
So we need provenance, something you and I have talked about, but we also need to be able to observe it. To me, that's a core requirement.
There's a set of criteria that the industry believes are points where you want to, metaphorically, unplug it.
One is where you get recursive self-improvement, which you can't control. Recursive self-improvement is where the computer is off learning, and you don't know what it's learning.
That can obviously lead to bad outcomes.
Isaac Asimov explored this idea in his whimsical 1975 story Point of View. In this excerpt, a computer scientist talks with his son about Multivac: whether computers need time to play, and how to tell when the machine is wrong about a complex problem:
"And the thing is, son, how do we know we always catch Multivac? How do we know that some of the wrong answers don’t get past us? We may rely on some answer and do something that may turn out disastrously five years from now. Something’s wrong inside Multivac and we can’t find out what. And whatever is wrong is getting worse.”
"Why should it be getting worse?” asked Roger.
His father had finished his hamburger and was eating the french fries one by one. "My feeling is, son,” he said, thoughtfully, "that we’ve made Multivac the wrong smartness.”
"Huh?”
"You see, Roger, if Multivac were as smart as a man, we could talk to it and find out what was wrong no matter how complicated it was. If it were as dumb as a machine, it would go wrong in simple ways that we could catch easily. The trouble is, it’s halfsmart, like an idiot. It’s smart enough to go wrong in very complicated ways, but not smart enough to help us find out what’s wrong. — And that’s the wrong smartness.”
Is Agentic AI The Wrong Kind Of Smartness?