Discussions around my last post revealed one important topic I'd missed. Let's call it the argument from boredom.
We humans get bored all the time. Boredom is the flip side of interest: if you can’t get bored, you can’t get interested — and without interest, why do anything at all? In fact, interesting has a good claim to be the perfect umbrella term for everything that attracts our attention and, eventually, compels us to act.
But what is boring? It’s commonly assumed that repetitive and monotonous tasks are boring — but you never get bored of breathing, and very rarely get bored by sex (at least, you usually finish the act even if you are). On the other hand, many find mathematics or poetry (or Everday, for that matter) utterly boring.
Even today’s primitive AI systems exhibit behaviors that can be interpreted in terms of interest and boredom. Many factors determine why different things seem boring or interesting to different people. However, a general heuristic seems to go like this: the smarter you are, the more easily you get bored — and the harder it is to pique and sustain your interest.
Now, if you are superintelligent, it’s hard to see how you can be hell-bent on turning the entire universe into paperclips without being terminally bored by the whole idea very, very early into the process.
But paperclips are just an example, you might say. Forget paperclips. Why not imagine something entirely different, such as a superintelligence pursuing some unimaginably complex goal, unimaginably interesting to it (but perhaps boring to us, because we can’t understand it), that is worth spending an eternity on? Some kind of hypermathematics we can’t even conceive, but which requires turning the universe into some kind of hyperstate — one in which humans can no longer exist?
Well. That’s something to die for, at least.
But seriously, this is not the same as paperclip maximization — not at all. This example feels different.
And here’s why: paperclips are a random choice out of an infinity of things in the world that make for silly life goals. The paperclip example is intentionally absurd by being intentionally random: it plays upon our instinctive fear of boredom. But we can’t assume the same about the hypothetical hypermathematics that an interestable supermind spends all its time on. As soon as we allow that supermind to be interested or bored, we have to assume that the one thing it is interested in — interested enough to work on it for an eternity — must be something. Something entirely unrandom. Something really worth it. Something unique.
And by that logic, it will be immensely interesting to us humans too, even if we can’t (yet) understand a single word of it. Because there’s only one such thing in the world. Because we are bound, at some point, to discover it too, ourselves, and to gasp in awe.
As to whether humans, in some form, may or may not survive this discovery… That’s an interesting question.
I mean, it’s also an interesting question.