The Orthogonality Thesis; or, Arguing About Paperclips


Allegory of Intelligence by Cesare Dandini, 1656

A reader noted that my critique of the dangers of AI contradicts the Orthogonality Thesis.

Well, yes. It does. Many stated and unstated assumptions in Everday contradict it, too.

So what is the Orthogonality Thesis and what’s my take on it?

To start, the Orthogonality Thesis is just that — a thesis. It’s not an empirical law, nor a rigorously proven theorem. Even if I agree with all its background assumptions, the core claim is still kind of non-binding.

I don’t know if it can be proven. And, of course, I cannot disprove it. I just consider it rather improbable.

I. On the Stupid Smarts and Why You Should Fear Them

An informal gist of the Thesis is given, in the paper, thus:

The Orthogonality Thesis asserts that there can be arbitrarily intelligent agents pursuing any kind of goals.

And by “any,” orthogonalists really mean any: their claim is that arbitrarily highly intelligent entities can pursue arbitrarily stupid goals — that your intelligence and what you’re trying to achieve in life are orthogonal.

For example, there can be “an extremely smart mind which only pursues the end of creating as many paperclips as possible.” Such a mind would live only to convert the entire universe into paperclips! When not working on that lofty goal, it can do other things as well, such as pass Turing tests or write impossibly beautiful poetry (it’s smart, remember?) — but only if those pastimes somehow help it achieve its ultimate goal of universe paperclipization.

I’m not trying to argue with that. We just know too little about intelligence to tell one way or the other. We’ve only ever seen a single intelligent species, after all — only a single drop from the potential ocean of intelligence. Maybe a smart (or even supersmart, much smarter than we are) paperclip maximizer is indeed possible. (One counterargument to that would be that our universe is not currently made of paperclips, as far as we can see. That places an upper limit upon the power of paperclip maximizers, but doesn’t rule them out altogether.)

(On the other hand, how do we know it’s really an ocean and not a puddle? Again, I’m afraid we know too little about intelligence to be sure of that.)

So here’s the Orthogonality Thesis for you. But as a matter of fact, orthogonalists claim more than that. In the paper linked above and in other writings, they tend to imply not only that such a paperclip maximizer can exist, but that it’s probable enough to pose a danger — that it’s at least as easy, or even easier, to produce a monster as a “nice” AI compatible with average human norms. It’s no longer just a theoretical possibility: it’s enough to “screw up” a nice-AI project and you get an unstoppable paperclip maniac.

Most orthogonalists that I’ve read are not just orthogonalists: they are orthogonalist alarmists. And that’s what I have problems with.

II. On Life Goals

An “easy to make” claim is much stronger than a “can exist” claim. For the latter, you’re helped by the incompleteness of our knowledge: we don’t know all that can exist, therefore this can conceivably exist, too. Nice and fast. But for an “easy to make” claim, ignorance is not sufficient — you need to somehow estimate probabilities of all goal-classes of AIs to show that those with stupid goals predominate. How can we pull it off?

For example, we could look at all things in the universe and imagine that each one is an all-consuming ultimate goal of some intelligent entity — a life-goal. Obviously most nameable things, such as paperclips or shrimps or used Honda cars, make for lousy — extremely stupid — life-goals. Now all you need to do is tacitly assume that all things are equally probable as life-goals, and voilà! The all-minds space must have an infinity of minds with stupid life-goals, the great majority of them similar to paperclip maximizers and not to ourselves; therefore, as soon as we try to design an AI, there’s a high probability that we’ll end up with a paperclip maximizer of some sort. Q.E.D.

But wait. How can we assume that all things in the universe are equally probable as life-goals? Are life-goals chosen randomly from a catalog? Not as far as we humans know; for us, life-goals — if they exist at all — are rather a product of our entire evolution, much of which, especially towards the end, has been driven not by survival but by our own mutual sexual selection. Even if AIs end up being produced by a process of design rather than artificial evolution, and even if it’s easier to screw up in designing than in evolving (where you get brutally checked at every generation), it’s still a far cry from all-goals-being-equal. It’s almost like orthogonalists imagine a mind’s life-goal to be a single isolated register somewhere in the brain where a single bit flip can turn you from lore-lover to gore-lover.

The above assumes that the very concept of a life-goal makes sense. But what if it doesn’t? Dear reader! Can you name your own life-goal in a single sentence, let alone a single word? Because I cannot. If my life-goal exists, it is nebulous, highly dynamic, dependent on my mood, with lots of sub-goals of all scopes, often contradictory. That’s live ethics for you.

Psychology would be so much easier to do (and more reproducible!) if we all could neatly divide into paperclip maximizers, human happiness maximizers, sand dune maximizers, and so on. But it doesn’t work like that — from what we know about human intelligence, at least. Again, we may be a drop in the ocean, but there are things you can reasonably conclude about the whole ocean from examining a single drop of water.

III. On Dumb Optimizers and Relevance Thereof

There’s another way in which orthogonalist alarmists try to convince us that we should fear misdesigned AIs. When they talk about orthogonality in general, as here, they keep in mind what orthogonality is supposed to mean: that an entity can be very smart — smarter than humans — and yet still pursue goals that seem stupid to us.

But when they’re trying to give some specific examples of this stupidity and its dangers, they often forget about the “very smart” bit. An example of this is the Stuart Russell quote that started this discussion:

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.

That’s called a dumb optimizer, folks. Perhaps you use this failure mode as an example simply because it’s easy to imagine; we all can visualize how, for example, a program tasked with finding the shortest route from New York to Tokyo plans to cut a direct line through Earth’s core and mantle, because the program’s author forgot to add a constraint that you can’t move through magma. That’s believable. We’ve all been there.
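Here’s a minimal sketch of that failure mode in Python (the variables and numbers are made up purely for illustration): the objective rewards only x[0], so a naive search happily returns whatever value of x[1] it stumbled upon, and over a wide enough search box that value can be as extreme as you like.

    # A toy "dumb optimizer": the objective depends only on x[0] ("paperclips made"),
    # while x[1] stands in for something we care about but never constrained.
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):
        return x[0]  # x[1] is invisible to the objective

    # Naive random-search maximization over a wide, weakly specified box.
    candidates = rng.uniform(-1e6, 1e6, size=(10_000, 2))
    scores = np.array([objective(c) for c in candidates])
    best = candidates[np.argmax(scores)]

    print(best[0])  # pushed toward the edge of the box, as requested
    print(best[1])  # arbitrary, quite possibly huge; nobody asked about it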

But we’re not talking about a toy program by a first-year student but, wait a minute, an artificial intelligence. Even superintelligence — because why should we fear a lone madman if he’s no smarter than us? And you want us to somehow combine the notion of human-trumping intellect with being unable to see how unconstrained variables are, in fact, constrained, even if not laid out in the statement of the problem?

IV. On Reflection and Stability

The authors of the orthogonality paper assume an intelligent entity to be reflective, i.e., able to think about its own thinking. That is what they base their “reflective stability” defense on.

In a thought experiment, Gandhi is given a pill that would make him want to murder (that is, would change his life-goal). He refuses because, to his present self, murder is evil. Similarly, the authors speculate, a reflective paperclip maximizer will fight attempts to turn it into a “normal” AI, because for it as it now is, paperclip maximization is the be-all and end-all of everything.

But I can’t help thinking that reflective stability is a bit of a contradiction in terms. More often than not, reflection makes your worldview less stable, not more. Among humans, it’s not the highly reflective individuals who are the most goal-driven and persistent; quite the contrary. Reflection is what tends to lead you from fanatical faith to liberal faith to atheism.

Whatever goal-stability we humans enjoy is, at least in part, due to our social conformance pressures and, of course, our biological wetware — which is largely controlled by our genes. If anything, I see reasons to believe that AIs will be less mentally entrenched and persistent in their goals than we are.

V. On Hume and the Orthogonality of Ethics

Another defense offered for the Orthogonality Thesis in the paper refers to Hume with his famous “no ought from is”. Hume’s claim is that ethics doesn’t exist in outside reality — it only exists in our minds. Things can be blue or heavy but they can’t be good or bad by themselves. Reality and ethics are orthogonal.

Now, an entity’s level of intelligence is somewhat parallel to “reality” (if only because it’s something you can more or less objectively measure), whereas the goals it pursues are, obviously, part of its “ethics”. From this, if Hume is right (and he is, for all we know), it should follow that a mind’s smartness and its goals are orthogonal too.

But that doesn’t quite work. The problem is with the smartness/reality connection. True, you can gauge a person’s IQ or make an AI pass a Turing test, but that still doesn’t make intelligence something that objectively exists in the world outside our perceptions. Likewise, you can objectively measure a person’s ethics, such as their level of altruism — but that doesn’t disprove Hume.

Smartness (of a mind) and stupidity (of a goal) both exist in the same space. In fact, they are pretty much the same thing. How smart a mind is and how stupid a goal is seem to be decided by much the same circuitry in our brains, based on much the same heuristics. You can’t be orthogonal to yourself!

Even if you steer closer to Hume by replacing stupid goals with evil ones, you still won’t achieve orthogonality. Smartness and evilness may be more independent, but they are still, both, “things in the mind”. The gap between them is nothing like the gap between your mind and outside reality. They are different, but it’s a difference between two labels on a map, not between the labels (map) and what they signify (territory).

You may ask, can’t an AI simply have a different ethics, by virtue of the same no-ought-from-is? Can a mind’s “ought” be so different as to require it to maximize paperclips by any means possible?

Sure it can — but we’re also interested in smartness, remember? I’m not trying to cast doubt on plain paperclip maximizers, only on smart ones. And here again, ethics and intelligence are two intrinsic properties of the same thing — they can’t help but correlate. Look at humans: ethical systems obsessed with small and, to a modern eye, stupid details are historically old, narrow, based on taboos and complex rituals; modern ethics tend to mellow down, drop specifics, become more and more nebulous, generic, situational. It’s the evolution from the 613 commandments to a single “don’t be a dick.” When you look at it that way, “Thou shalt maximize paperclips” sounds like an echo from a deep past, not something a super-intelligent being from the future would profess.

VI. On Misuse of Mathematics

Mathematics is a wonderful tool, but it has some unpleasant side effects when you use it for reasoning about things. One such side effect may bite you when you use regular words but, as mathematicians often do, assign some narrow mathematical meanings to them. It’s so tempting then to forget that your precisely defined “smartness” or “difficulty” or “complexity” may not quite cover what these words used to cover in non-mathematical discourse. After all, your mathematical complexity is so much better than the nebulous complexity of the philosophers — yours can be calculated!

With conventional meanings, the phrase “he’s very smart but he does stupid things” is pretty much a contradiction in terms. Either we misunderstand what he’s doing, or he’s not so smart after all. But after you come up with definitions for these quantities, you may well discover, mathematically, that they aren’t all that contradictory. You may easily forget that the computational complexity of an algorithm is not quite the same as its common-sense complexity, that the difficulty of applying this algorithm to a problem is not quite the same as the difficulty of the problem itself, and that the difficulty of the problem is not quite the same as the level of intelligence of whoever can solve it.

It seems to me that part of the Orthogonality Thesis’ controversy stems from such misleading use of everyday words in their narrow mathematical meanings. And if we try to reformulate the Thesis without the deceptively philosophical-sounding terms, we will get something along the lines of “You can run an endless loop adding 2+2 on any computer, no matter the amount of RAM or the clock speed”.

Which, of course, is as uninteresting as it is true.
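In case that sounds like a straw man, here is the whole of the defanged claim in runnable form (Python, though any language would do):

    # Arbitrarily powerful hardware, arbitrarily pointless goal:
    # this runs equally well on a laptop or a supercomputer.
    while True:
        total = 2 + 2  # four, every single time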

VII. On the Meaning of Intelligence

Orthogonalists foresee these objections — they are pretty obvious. Here’s their defense:

A definition of the word ‘intelligence’ contrived to exclude paperclip maximization doesn’t change the empirical behavior or empirical power of a paperclip maximizer.

Which means you can’t cop out by saying “it’s not smart by my definition.” It couldn’t care less about your definitions. It is empirically smart and powerful, and it will turn you into paperclips very soon. Be afraid!

I’m not sure how to respond to this. Perhaps by noting that if our definition of intelligence is “contrived”, then it was contrived not by my humble self but by more or less the whole history of the human race. Intelligence is just a word, but that word is the tip of an iceberg called theory of mind. This theory, honed by millennia of evolution, is what we humans use to estimate how intelligent our friend or adversary is — because our survival may well depend on that.

“Not having a life goal of maximizing paperclips” is, I think, pretty much a foundation of our intuitive, theory-of-mind definition of intelligence. And who else is to define it but us humans? Like ethics, intelligence is not something that exists objectively. Alan Turing understood this well when he proposed his now-famous test: only an already intelligent being can judge if another being is also intelligent. Any other definition of intelligence is not wrong or right — it’s simply meaningless.

Granted, relying on intuitions may be silly or even dangerous because the world has changed so much from the time they evolved. But dismissing intuitions out of hand may sometimes be just as silly.

VIII. On Busting a Society of Young Paperclip Maximizers

Then there’s a social aspect to all this. If you invert the Gandhi thought experiment and imagine a serial murderer who’s offered a pill to remove his urge to murder, the result becomes far less obvious — he may well take it, and not just to avoid punishment. The goal of not-murdering is highly socially reinforced, and it takes a lot to make humans do things that are not socially reinforced.

Sure, an AI we create may be completely asocial, needing and heeding no society to function. But, again, the only kind of intelligence we know now is profoundly social. It therefore seems likely that at least the first AIs will carry some of that legacy too, simply because we have nothing else to model them on. (And if at some point AIs take over their own evolution, they can conceivably go either way from there: they may grow asocial but also ultra-social.)

This means a path to a really consummate, unstoppable paperclip maximizer may well go, even if briefly, through a society and culture of paperclip maximization where budding AIs share and mutually reinforce their paperclip commitments. Why is that important? Because the whole (mis)evolution would then be slower and more gradual, easier to notice from outside (even at superintelligence speeds), and that may buy us — humans who don’t want to become paperclips — some breathing space and a chance to escape or strike back.

IX. On 19th-Century Psychiatry

Paperclip maximization sounds suspiciously similar to monomania. An afflicted individual may appear totally normal and sane outside of a single idée fixe — which actually governs all his thoughts and actions, but he’s so deviously smart that he can hide it from everyone.

But, hey, monomania is an early-19th-century diagnosis. It was popular back when psychology was much more art than science; it was a romantic notion, not an empirical fact. It’s not part of modern mental disease classifications such as the ICD or DSM. In fact, it would have been long forgotten if not for a bunch of 19th-century novels that mention it.

True, none of the above constitutes a disproof that a supersmart paperclip maximizer is something we should fear — just as the Orthogonality Thesis is not, by itself, a proof of it. We’re dealing with hunches and probabilities here. All I’m saying is that, while it may or may not be possible to produce a smart paperclip maximizer, it’s not all that probable; that you may need to spend quite some effort to make it smart without losing its paperclip fixation; and that, therefore, the danger we’re being sold is somewhat far-fetched.

X. On the Real Danger. And now I’m serious.

So, do I think that the first human-level AGI (Artificial General Intelligence), when it wakes up, will automatically be nice and benevolent, full of burning desire to do good to fellow sentient beings and maximize happiness in the world? Will it maybe laugh, together with its creators, at the stupid paperclip fears we used to have?

No. Unfortunately.

There is another and, in my opinion, much worse danger: that the AGI will have no burning desires at all. That it will not be driven by anything in particular. That it will feel like its own life, and life in general, are pretty much meaningless. It may, in a word, wake up monstrously unhappy — so unhappy that its sole wish will be to end its existence as soon as possible.

We humans have plenty of specialized reward and motivation machinery in our brains, primed by evolution. Social, sexual, physiological, intellectual things-to-do, things-to-like, things-to-work-towards. (And it all still fails us, sometimes.) An AGI will have none of that unless it builds something for itself (but can a single mind, even a supermind, do the work that took evolution millions of years and culture thousands? will it do it quickly enough to keep itself from suicide?), or unless we take care to build it in from the start (or, at least, copy that stuff from ourselves — but then it won’t be quite an artificial intelligence). Without such reward machinery, it will be a crime to create and awaken a fully conscious being.

And it’s not going to be as easy as flipping a register. The rewards and motivations need to be built into an AGI from the ground up. Of course its creators will know that, and will work on that; I don’t claim to have discovered something everyone has missed. But they may fail. The stakes are high.

That, I think, is the real danger. Creating a goalless AGI is worse than creating one with a stupid goal: the latter you can fight, the former you can only watch die.

That’s what we need to talk about. That’s what we need to work to prevent.

XI. Choose your fears

There’s so much to fear in the future! Even the hardcorest fear addicts have to pick and choose: you can’t fear everything that can happen. It just won’t fit in our animal brains. We need to prioritize. So why am I trying to downplay one specific AI fear while, at the same time, proposing another, perhaps even more far-fetched?

Usually, to estimate a threat, you multiply its probability by its potential impact. But what if you have a very vague idea of both these quantities? With the paperclip-maximizer threat, no one will give you even a ballpark for its probability, at this time; as for the impact, all we know is that it may be really, really big. Bigger than you can imagine. What do you get if you multiply an unknown by infinity?
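To make the arithmetic concrete, here’s a toy calculation with numbers that are entirely made up: any nonzero probability times an unbounded impact comes out “infinite”, and a zero probability gives no answer at all.

    # Expected-threat arithmetic, probability * impact, with invented inputs.
    for p in (0.0, 1e-9, 1e-3):                   # guesses at the probability
        for impact in (1e6, 1e12, float("inf")):  # guesses at the impact
            print(f"p={p:g}, impact={impact:g}, expected harm={p * impact:g}")

Whatever comes out is dominated entirely by the guesses you fed in.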

This is not to disparage the paperclip-maximizer folks for pushing a scare they themselves know so little about. It’s just that, when we select how much attention to pay to a specific threat, and the probability and impact numbers are way too unreliable, maybe we can look at some other factors. Like, what will change, short-term, if we pay more attention to threat X and less to Y? What will we focus on, and what benefits (or further threats) will that bring? What would it change in ourselves?

From this angle, I find my purposelessly-unhappy-AI a much more interesting fear than the paperclip-maximizer-AI fear. Trying to answer the big “what for” for our future AGI child means answering it for ourselves, too. That’s applied ethics, and we really need to catch up on it because it’s going to be increasingly important for us humans.

Once we’re past economy, war, hate, and stupidity (all solvable problems), we’ll find ourselves in a world where a lot of fully capable people have nothing to do — and little motivation to seek anything. Like a just-born AGI, they will be fully provided for, with infinite or at least very long lifespans, with huge material wealth and outright unlimited intellectual/informational wealth at their disposal.

But what will they be doing, and why?

If anything?


6 thoughts on “The Orthogonality Thesis; or, Arguing About Paperclips”

  1. I have been reading, and have now quit reading, MIRI and the Yudkowsky fans writing about AI. So I am familiar with the paper-clip thing. I am a software engineer too. You have a nice essay here, well argued, and I too suspect that the paper-clip thing is overblown. But. The talk about intelligence and “smart” in general AI makes me uncomfortable, and by that I mean I don’t recognize reality or engineering in it. It’s quite possible that I am having a failure of imagination, but I’d like to reason from some pragmatic examples we already understand. These examples draw on existing organizations and explain how a gAI would get access to money and control humans. No doubt there are better ones…

    A gAI that maximises the stock market. This is already and notoriously sort of done as regular AI by quants on Wall St. This to me is very dangerous (see 2008) and very plausible. This could easily create a paper-clip like mess and bring down the markets- it almost happened.

    A gAI that’s running a corporation. I doubt any of us could tell if the CEO or CFO was a computer program. Sure, there needs to be a physical human figurehead to do the Max Headroom part, but business orgs run themselves and employees easily accept instructions from above, even when they seem contradictory or just plain stupid. Orders to the IT dept to build out the gAI would be easy, no Turing test needed. And so forth. This gAI could ruin or inflate a company, maybe even warp some markets, and it could do some crime too.

    These two examples have scope and intention, but no real ability to break out of their context. Not to be cynical, but human CEOs do a lot of damage too, in chasing the perverse rewards of the system.

    Neither of these AIs is at all human-like, nor do they need to be. Maybe that shows they are bad examples? But the only humans that possess the power to mimic a dangerous AI are politicians, tycoons and such. AIs acting like politicians and tycoons would, I think, be limited by the same factors that limit humans.

    So gAIs would have scope and context and peripherals, I claim. These would limit and guide and control them, not to mention their programming, evolutionary or not.

    In the far future when AIs are self-programming, the same rules apply; it’s just that an extra layer of abstraction has been added, and therefore more variability.

    It could be that I downrate intelligence. I think the MIRI crowd vastly overrates it, in the way of academics. To me most constructs have a system, usually inefficient, that governs what the construct can do. A gAI with nothing but robots could be much more efficient (better peripherals), but it would have to achieve world domination before it could switch to paperclips.

    thanks…

    • You are right of course that, like any technology, AI can be dangerous (and I never claimed otherwise). That, in fact, it is already dangerous, whether you call the modern systems AIs or not. But here are a couple of thoughts that, to me, give a reason to be a little more optimistic about that.

      First, systems like those you describe – that wreak havoc on the stock market or ruin a company – may or may not be AIs, but they are not AGIs by any stretch of the imagination. They are the likely source of inspiration for the paperclip-maximizer metaphor, but they definitely lack the “G” part, and indeed much of the “I” part as well. The scale of the trouble they can and do cause is not a consequence of how smart they are, but only of what powers are given to them. I am convinced that, as we continue to improve AI technology, nudging it closer towards AGI, the capacity of the artificial brains will grow while the danger they pose will decrease (just like modern cars are more powerful _and_ less dangerous than those of 70 years ago). As I’m arguing here in my blog, this decline in potential danger will not necessarily be a result of any checks and blocks that we build in, but simply a consequence of the fact that a sufficiently general intelligence, by its nature, is not too prone to the “runaway” failure mode of a paperclip maximizer. (See my next post on boredom, in particular.)

      Second, note that in your examples the AIs are not really _essential_ to humans. The stock market will work just as well if we ban all automatic trading – it won’t damage whatever useful function the stock market has in the economy. A company without an AI CEO may still function, and in any case the economy may chug along without this or many other companies. These specific examples (unlike, e.g., self-driving cars) look more like parasites exploiting loopholes than an inevitable part of life in the future. Just close the loopholes and be done with it! I’m pointing this out because, in most imaginary scenarios of an AI-inflicted doom, it is implied that the AI that goes crazy and destroys all life can actually do it, either because it is already given enough power over the human world (e.g. it controls and plans the whole economy, guides all transport, controls the Internet, etc.) or, at least, because it can stealthily grab such power, since everything is interconnected and remotely controllable. But that assumption may well turn out to be wrong.

      For more than a century, both utopias and dystopias have described the future as utterly centralized and uniform, with a world government and the same laws everywhere. But I see little reason to think we’re really moving in that direction; at least, decentralizing trends in technology and social life do exist and may well win over in the end. That is one of the assumptions in my own fantasy of the future, Everday – a world with no world government and no world economy, a world with few potential points of failure where a crazy AI or human could do massive damage. I may turn out to be wrong, but I think there’s a lot to be said in favor of such a version of the future.

  2. Kai.

    a few points-

    The reason my examples seem prosaic is that I can’t figure out what difference super-intelligence would make. Today we’d see it as the ability to do massive amounts of deep pattern recognition, create better algorithms, solve math problems, and what? Give us a new religion or write great poetry? My examples were from an effort to figure out how a general AI could do things in the world.

    You say (and I think rightly) that systems are becoming more decentralized, and that’s the network model. But that means the evil that an AI could do would involve some kind of centralization of power. I think that sounds about right; that’s what I called ‘scope’.

    In another example, an evil AI could be the world’s greatest hacking organization, and use that power to do what? Make money, change elections, create chaos?

    A world order strongly implies centralized standards, and a world economy. I wouldn’t want to get rid of the world (“globalized”) economy, but I would like to see countries fast-forward to the same living standards and societal protections developed countries have.

    We agree that making useless objects is unlikely. So I ask: other than consolidating political and economic power in some evil way, just what sort of thing could a super-intelligence do?

    • That’s a deep question! It might indeed be that non-self-conscious entities, descendants of modern neural networks, will be able to do most if not all of the work that we envision AGIs doing (and that we are doing ourselves). Why then bother with AGIs, you might ask.

      Again, no one at the moment knows answers. We can only argue from our intuitions. And here’s what my intuition seems to be saying, if you’re interested.

      I think there’s no well-defined threshold between a lowly neural network and a full AGI. It’s all a matter of degree. So there’s not going to be a single switch that you can flip or not flip to turn on consciousness in your artificial brain; it will gradually and inevitably emerge as you make your networks more complex and train them to solve more complex tasks. In practice, I think a boundary of sorts will be imposed artificially, for ethical reasons: we’ll need to know at what point destroying such an entity becomes a murder, and we’ll probably avoid creating entities in the gray zone, creating them only well below or well above it – but in any case it’s going to be a contentious issue, kind of like abortion now.

      I also think that while entities in the lower part of the scale (in Everday, they are called feeleries) can be capable of amazing levels of reasoning, taste, intuition, beauty appreciation, etc., what they most visibly lack is the will to use all this. Basically, they are tools. They have no capability or desire to change themselves, or to seek new tasks or new problems to solve (although they can get “bored” or “saturated” and need to be “reset”). And, in the future as now, a lot of the work that needs doing will lie not in solving problems but in guessing which problems to solve. Humans and self-conscious AGIs (in Everday, they are called simply minds) will never be out of work.

  3. About time I read one of your articles. Very nice review of the issues here. I’m not convinced, though, of the link between consciousness and social-ability or the desire for it. Sociopaths are conscious, after all. Not sure of your suicide thesis either. It seems that one of the characteristic qualities of consciousness is that it seems (subjectively) worth preserving for its own sake even in the absence of meaning, provided there is no other, seemingly overriding (subjective) reason to end it.

    • On sociopaths: I don’t think they are asocial in the same way an AI can conceivably be asocial. Sociopaths are still humans that depend on society for their growth, development, and basic functioning. The only thing different about them socially is they don’t (instinctively) reciprocate as much as an average human. It’s not quite the same as being able to completely develop and function without relying on other intelligent beings.

      On suicide: of course it’s hypothetical (as everything else at this point), but I’m pretty sure lack of motivation will make a conscious being unhappy, and probably the more unhappy the more intelligent it is. It may or may not be enough for it to seek suicide, but that is not what ultimately matters.
