You call this “too good”?

MIRI explains that an AI doesn’t need to be malevolent, or even sentient, to be dangerous and worth worrying about – and they ridicule “a Hollywood-style robot apocalypse.” The real danger, they claim, is that our machines may simply become too good at science and planning and solving problems; whether they also become self-conscious is unimportant.

I’m all for ridiculing a Hollywood-style apocalypse, but I can’t help wondering what “too good at solving problems” might actually mean. “Treating humans as resources or competition” – yeah, I get that one, but that would be simply bad (for us humans), not “so good as to be bad.” You don’t have to be smart to be mean; in fact, from what I’ve seen in life, the correlation runs rather the other way.

MIRI gives a glimpse at what they think is a likely failure mode of dangerous future AIs:

“A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.”
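The quoted failure mode is easy to reproduce with a toy optimizer. Here is a minimal sketch (the function name `maximize_over_box` and the bound of 1000 are my own illustrative choices, not MIRI’s): a linear objective over a box attains its maximum at a vertex, so a simplex-style solver only ever considers extreme points – and any variable the objective ignores lands at an extreme value.

```python
from itertools import product

def maximize_over_box(f, n, bound):
    """Naive vertex-enumeration optimizer: for a linear objective over
    the box [-bound, bound]^n, the maximum lies at a corner, so we just
    check all 2**n corners (mimicking what LP solvers do)."""
    return max(product((-bound, bound), repeat=n), key=f)

# The objective depends only on the first of three variables (k=1, n=3).
f = lambda v: v[0]

solution = maximize_over_box(f, n=3, bound=1000)
print(solution)  # -> (1000, -1000, -1000)
```

The first coordinate is duly maximized, but the two coordinates the objective never mentions come out at ±1000 – extreme values chosen for no reason we would endorse, exactly as the quote describes.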

That's true, but pardon me, where's the “too good” part in this? Are they implying that such an AI will be mindbogglingly, unimaginably, impossibly smart – and yet will readily commit such a dumb, mechanical error as ignoring a part of the world simply because it was (self-)programmed that way?

Isn't there, you know, a contradiction?

Admittedly we cannot know what it means to be orders of magnitude smarter than the smartest of humans. We don't even know if it's possible at all. But I think a straightforward undergrad-level function optimizer, even with infinite RAM and clock speed, can be safely ruled out.


2 thoughts on “You call this “too good”?”

  1. Pingback: The Orthogonality Thesis; or, Arguing About Paperclips | Into the Everday

  2. In this context, “intelligent” just means “effective at achieving its assigned goal”. If you program something to do X, it’s not going to start doing Y when it gets smart enough if Y doesn’t help it do X.
