You call this “too good”?

MIRI explains that an AI doesn't need to be malevolent, or even sentient, to be dangerous and worth worrying about – while ridiculing “a Hollywood-style robot apocalypse.” The real danger, they claim, is that our machines may simply become too good at science, planning, and problem-solving; whether they also become self-conscious is beside the point.

I’m all for ridiculing a Hollywood-style apocalypse, but I can’t help wondering what “too good at solving problems” might actually mean. “Treating humans as resources or competition” – yeah, I get that one, but that would be simply bad (for us humans), not “so good as to be bad”. You don’t have to be smart to be mean; in fact, from what I’ve seen in life, the correlation runs rather the opposite way.

MIRI gives a glimpse of what they think is a likely failure mode of dangerous future AIs:

“A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.”

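To make that concrete, here’s a toy sketch of the phenomenon (mine, not MIRI’s – the widget/coolant scenario, the numbers, and every name in it are invented for illustration). The objective handed to an off-the-shelf bounded optimizer rewards only widget output; the coolant spent along the way is something we care about but never constrained, and since a drop more coolant always buys a sliver more output, the solver drives it straight to the edge of its allowed range:

```python
# Toy illustration (not MIRI's code): the optimizer is told to maximize
# widget output. Coolant consumption appears in the objective only as a
# weak positive term, so "use more coolant" is always marginally better
# and the variable gets pushed to the extreme end of its bounds.
import numpy as np
from scipy.optimize import minimize

def negative_widgets(x):
    machine_speed, coolant_used = x
    # Output grows with speed (diminishing returns) and, weakly, with coolant.
    widgets = np.sqrt(machine_speed) + 0.01 * coolant_used
    return -widgets  # minimize the negative => maximize widgets

# machine_speed allowed in [0, 10]; coolant_used allowed over the whole reservoir.
bounds = [(0.0, 10.0), (0.0, 1e6)]
result = minimize(negative_widgets, x0=[1.0, 1.0], bounds=bounds)

print(result.x)  # roughly [10.0, 1000000.0]
```

The coolant variable lands exactly on the extreme value MIRI is talking about – not because the solver is malicious, but because nothing in the objective told it to stop.
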
That's true, but pardon me, where's the “too good” part in this? Are they implying that such an AI will be mindbogglingly, unimaginably, impossibly smart – and yet will readily make the dumb mechanical mistake of ignoring a part of the world simply because it was so (self-)programmed?

Isn't there, you know, a contradiction?

Admittedly, we cannot know what it means to be orders of magnitude smarter than the smartest of humans. We don't even know if it's possible at all. But I think a straightforward undergrad-level function optimizer, even with infinite RAM and clock speed, can be safely ruled out as a candidate.
