On extropy, complexity, and Harrison’s Law

Entropy has entered common usage as a sciencey-sounding synonym for “disorder,” “degradation,” even “death.” Promoted by science fiction writers, it has become something of an impersonal arch-villain of the universe. “Fighting entropy” sounds inherently noble.

Indeed, the state of maximum entropy – gas at thermal equilibrium – is not too conducive to life, and any living thing needs to continually expel its entropy outwards in order to keep living. But not all decreases of entropy are good for us humans. The world of zero entropy is no paradise – it is more like the Snow Queen’s realm of perfect crystals at absolute zero. If we want to humanize scientific concepts (and who doesn’t?), we had better leave entropy alone and look for something else.

Extropy is, etymologically, the opposite of entropy, and the word is already used to bundle together everything good that opposes entropy’s bad. However, the definitions quoted in its Wikipedia article are all quite vague. Can we define extropy in a way that actually makes sense?

Definition

Entropy is (the logarithm of) the number of possible microstates compatible with a given macrostate of a system. By contrast, extropy is (the logarithm of) an estimate of how many distinct macrostates are possible for a given microstate.

This doesn’t make extropy a direct reciprocal or negative of entropy. It’s more exciting than that.
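To put this in symbols (just a sketch, and the notation is mine): if $\Omega(M)$ is the number of microstates compatible with a macrostate $M$, and $\mathcal{M}(\mu)$ is our – necessarily subjective – estimate of the number of distinct macrostates that can be ascribed to a microstate $\mu$, then

$$ S(M) = k_B \ln \Omega(M), \qquad X(\mu) = \ln \mathcal{M}(\mu). $$

The first formula is just Boltzmann’s entropy; the second is only as well-defined as the counting of macrostates, which is what the rest of this essay is about.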

Macrostates as measurables

A microstate is simply the precise (as precise as the uncertainty principle allows us to measure) values of position, speed, and other properties of all microscopic constituents of the system (molecules, atoms, subatomic particles). But what is a macrostate? How do we count the macrostates of a system? I can think of at least three views: physical (measurables), temporal (futures), and subjective (interpretations).

In a physical view, a macrostate is a set of measurable quantities such as temperature, pressure, volume, and energy. We can add pretty much any other measurables to the mix – viscosity, opacity, color, even smell.

Whatever we measure, a volume of gas at thermodynamic equilibrium (i.e. with maximum entropy) will always have minimum extropy because there’s exactly one macrostate – one set of quantities – that characterizes all of it. However, once our gas becomes inhomogeneous – for example, divided into volumes with different density – we can count each homogeneous volume as a macrostate of its own: extropy grows. An opposite example is a crystal; it has low entropy but its extropy is higher than that of gas because some of its measurables (such as opacity) may differ depending on how you look at the crystal – i.e., unlike gas, it is not isotropic even if homogeneous.

Alternatively (and perhaps more meaningfully), we can count each individual measurable as a macrostate, instead of a whole set. From this viewpoint, a flowing stream of gas is more extropic than gas at rest because it has one more measurable: velocity. A turbulent stream is more extropic (but not infinitely more, see “distinctness” below) than a laminar flow because its velocity varies from point to point, plus it has additional parameters that characterize the turbulence itself (such as the scale of vortices). This example shows that extropy is relative – what looks like a (laminar) gas flow to one observer is gas at rest to another.

Macrostates as futures

A (small) volume of gas, taken in isolation, will never change into something else; it will always remain a volume of gas. It has reached the state of maximum entropy. A crystal, on the other hand, will, in the really long term, erode into gas – will, with the rest of the universe, die the heat death. That means our crystal can be said to have one more macrostate: an “antecedent of gas” (of a given composition and other measurables). What prevents us from considering our system not as a snapshot of the present but as the entirety of its history? If we make this step, we can include in the count of the macrostates not only the present ones but all the possible futures as well: for each distinct future state X of the system, it can be said to have a current macrostate of “ancestor of X.”

This temporal view is reducible to the physical one: whatever the possible future may be, it is distinct from other futures by its values of measurable quantities (what else?). However, the temporal view is more powerful – it trumps the pure physics because even one more future adds a lot of new measurables to the overall count. So if you can simplify the system by removing a few measurables but in so doing gain a distinct new future for it – do it. You’re getting extropically richer.

What about the system’s past? Shouldn’t it count towards the extropy, too? Logically, it should. However, if we simply count all possible macro-pasts that could have led to the system’s present state, then gas at equilibrium will have infinite extropy because it may have formed (again, given enough time) from anything at all. So, I think the following approach is more meaningful: instead of summing the absolute extropies of past and future states, sum only the gains in extropy – by how much more extropic the future state is than the present, and the present than the past. Now, if you define your system’s history as starting from a zero-extropy state, the contribution of its past to the overall count will be simply equal to the current “momentary” extropy. A rich past doesn’t buy you more extropy than you can demonstrate on the spot, but a rich future is credit that you can already use now.
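In symbols (again just a sketch; $X(t_i)$ stands for the momentary extropy at time $t_i$): for a history $t_0 < t_1 < \dots < t_{\mathrm{now}}$ that starts from a zero-extropy state, summing gains rather than absolute values gives a telescoping sum,

$$ \sum_{i=1}^{\mathrm{now}} \bigl( X(t_i) - X(t_{i-1}) \bigr) \;=\; X(t_{\mathrm{now}}) - X(t_0) \;=\; X(t_{\mathrm{now}}), $$

so the past contributes exactly the current momentary extropy, while any anticipated future gain $X(t_{\mathrm{future}}) - X(t_{\mathrm{now}})$ is added on top of it.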

This means time-integrated extropy can well be negative. A pile of rubble after a building has exploded – or a homogeneous cloud of gas after a universe’s heat death – is as extropically negative as its antecedent was extropically positive.

If you have achieved more in life than your parent, this counts towards your extropy – this is something you can be proud of; similarly, your offspring’s outdoing you is something that adds a bonus to your extropy count. Conversely, if you are dumber than your ancestors and your descendants are dumber than you, both the past glory and the future gloom count against you. It is much more extropic to be descended from an ape than from God.

By the way, this might give a clue as to what can be considered “the same” system or entity. A precipitous drop in extropy – explosion, extinction, dissolution, death – is a natural boundary at which to declare the old entity gone and start something new. A simple definition of a “thing” is then “a story arc of growing extropy.”

Macrostates as interpretations

As we have seen, both measurables and futures are subjective: there’s no definitive algorithm for exhausting them all – you can always invent a new measurable quantity or discover a new possible future. Generalize this: macrostates are simply interpretations or meanings one can give to a system – all the different “named things” that this system can be viewed as. Yes, this is ultimately subjective – but it just makes the subjectivity of the other two approaches explicit.

The interpretations view builds upon the futures and measurables – but trumps them both: a single new interpretation can open a whole range of previously unthinkable futures, each with an infinity of measurables. So, again, if reducing measurables (simplifying the thing) buys you some new interpretations, go for it.

This approach gives an even higher score to a crystal over gas because it is a tangible object that can, to a human, symbolize many other tangible or intangible things. Moreover, it has some visible structure – e.g. the crystal’s faces can also stand for something, and these sub-interpretations count towards the overall macrostates. An average-sized diamond can thus out-extropize something much larger, even something man-made and quite ingenious (such as a heavy electric motor in my backyard shed). The Mona Lisa, on the other hand, will likely beat any other man-made thing by any of the three counts: it has a lot of measurables (due to its intricate composition); it has many possible futures; and it has an infinity of possible interpretations, meaning different things to different minds.

Distinctness and applicability

Not all macrostates are equally macrostatish, though. There may be local fluctuations of gas density, for example, but intuitively it’s clear that they don’t add a macrostate. What makes us fully recognize a macrostate?

  • A macrostate must be distinct enough from other macrostates. If states are too similar, not sufficiently independent, or if there’s a smooth transition between them, their contribution to the overall extropy count diminishes.
  • A macrostate must be applicable enough. If it’s a measurable, it should be measured with enough precision to prove that it really is the case; if it’s a future, it should be realistic enough for this microstate (“realistic” does not necessarily mean “probable,” though); and if it’s an interpretation, it has to be persuasive – non-random, clearly meaningful for the system, obviously following from its perceivable qualities.

In other words, when integrating into the extropy value, each macrostate is taken with coefficients for its distinctness and applicability. Two macrostates that are clearly distinct and definitely applicable may count for more than a thousand barely distinguishable hues – or than a thousand interpretations pulled out of thin air.
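As a rough sketch of what such a weighted count might look like (the coefficients $d_i$ for distinctness and $a_i$ for applicability, each somewhere between 0 and 1, are my own placeholders):

$$ X \;\approx\; \ln \sum_i d_i\, a_i, $$

where the sum runs over all candidate macrostates. A thousand near-duplicate hues with $d_i \approx 0.001$ then weigh about as much as a single clearly distinct, fully applicable macrostate.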

Extropy of life

Living things are champions of extropy, leaving any inorganic stuff far behind. The basest prokaryote boasts immense molecular-level complexity (countless measurables), unlimited futures (it moves, it grows, it adapts; depending on how we define the boundaries of the system, its futures may include some or all of its evolutionary progeny, potentially arbitrarily different), and can be interpreted as many distinct things, depending on your perspective (a self-replicator, a chemical factory, a cute pet…). For a being that has achieved intelligence, extropy soars even higher because the mind adds an infinity of its own current and potential macrostates (pain, curiosity, knowledge of French…).

Extropy of language

A high-extropy word, phrase, or text is one that has multiple – distinct and applicable – meanings, connotations, readings. This is profundity, the essential quality of poetry; a high-extropy text is one that evokes awe, happy laughter, catharsis. Again, extropy is relative – a function of both the message and the one who interprets it; what is plain and boring to me may be amazingly deep and meaningful to you (e.g. found poetry).

Why do we like to use foreign words or phrases (blasé, doppelgänger) even where a close enough analog in our own language is available? Apparently it’s the extropy infusion they bring with them: not just a new shade of meaning but a glimpse of a whole new language with its very different semantic structures. A plain English analog would be easier to understand even if you don’t know the exact word, but its meaning would inevitably be more diffuse, blurry, “all over the place”; by contrast, a foreign word has (to us) a narrower, much more pronounced meaning – and is separated by a wide gap from our native language: distinctness goes through the roof. (On the other hand, the ease and therefore cheapness of such an extropy gain may explain why overusing foreign phrases can sound pretentious – and also why we sometimes feel the need to excuse our usage with phrases like “as the French say, …”.)

What I’m doing in this essay is raising the extropy of the word “extropy.” In its current usage, it claims many implications but they are not very applicable – they are simply assigned to it without much justification. It’s fake poetry, make-believe profundity. By giving the idea a definition that makes some of these implications follow more strictly, I improve its extropy – so that even those interpretations that still do not strictly follow become somehow more believable.

Extropy as uncertainty

Entropy is the measure of uncertainty about the microscopic state of a system, provided you know exactly its macroscopic state. Conversely, extropy is the measure of uncertainty about the macroscopic state of the system, provided you know exactly how it’s laid out at the microscopic level.
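In information-theoretic shorthand (a loose analogy, not a derivation; $H(\cdot \mid \cdot)$ is Shannon’s conditional entropy, and “micro”/“macro” stand for the microscopic and macroscopic descriptions):

$$ S \;\sim\; H(\text{micro} \mid \text{macro}), \qquad X \;\sim\; H(\text{macro} \mid \text{micro}). $$

The asymmetry is that the first conditional distribution is fixed by physics, while the second depends on which macrostates an observer is prepared to recognize.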

If you know the precise positions and movements of each atom in something, do you know its exact thoughts, capabilities, where and in what state it will be the same day next year? The answer is “yes I do” if that something is a volume of gas (low extropy) and “you must be kidding me” if it is a sophisticated human being (high extropy).

Extropy as complexity

Entropy is a measure of disorder – but extropy is not necessarily a “measure of order,” however you understand “order.” Again, a crystal is perfectly orderly but its extropy is modest. Rather, extropy explicates the intuitive notion of “complexity” or “interestingness.”

One well-known formalization of complexity – Kolmogorov complexity – works well in its mathematical domain but cannot grasp the intuitive notion. Per Kolmogorov, random noise gets the highest complexity grade because it is not reducible to anything simpler. Obviously, this is not what we mean when we describe something as “complex.”

To us, trivial simplicity and perfect chaos are equally boring, deathlike freeze and meaningless chaotic activity are both off-putting. We human beings are far from both these extremes – and like things that are neither too high nor too low in entropy. We like things that we can have fun examining – measuring, pondering their past and future, interpreting them in multiple ways: we like extropy. Often, extropy means sophistication, but just as often it means simplicity – since the interpretations give the richest extropic yield, we’re keen to get rid of nonessential measurables if it makes these interpretations more evident.

Boundaries of the system

So far, we have been looking at some predefined system and tried to find and enumerate its macrostates. In the real world, however, the situation is often inverted: the macrostates, present or potential, are self-evident, whereas the system to which they apply is not. To measure extropy, we then need to identify that system and to somehow prove that these macrostates have something to do with it (to satisfy applicability). And since we like high extropy, we naturally strive to find the minimal – smallest, simplest, remotest – system to which the maximum number of macrostates can be ascribed.

This is what science often does. By tracing different languages or biological species to a common ancestor, we increase that ancestor’s proven extropy. By finding hidden connections between dissimilar phenomena, such as electricity and magnetism, we drastically improve the extropy of all electric and magnetic systems because we now know more macrostates that these systems can lawfully produce. Simply by adding a new bit to our knowledge store, such as by discovering a new (not necessarily interesting by itself) virus or insect, we nudge upwards the extropy of the primordial nebula that coalesced into our planet.

And, of course, the Law(s)

Entropy has its Second Law of Thermodynamics. Extropy has its law, too. It was first formulated by Edward Harrison:

“Hydrogen is a light, odorless gas, which, given enough time, turns into people.”

Just make it a tiny bit more general and update the terminology, and you get:

“Large enough systems tend to form subsystems in which extropy grows.”

This is despite the fact that in the universe as a whole, extropy decreases, by virtue of the second-law march towards the zero-extropy heat death.

We can derive another law for our own guidance – a law that’s prescriptive rather than descriptive. There’s no ought from is, and people have always tried to agree on some axiom from which their numerous “oughts” could be derived. I think this one is as good an axiom as any other:

“Act so as to increase extropy.”

And we do! Some prefer to do it by creating or changing things, others enrich the world by discovering and proving interpretations; from doodling on a piece of paper during a phone call, to adding just one more flower to an embroidery, to feeling frustrated by procrastination, to fighting for freedom (more futures for people), to doing art and science (more interpretations for the world) – we all like to create extropy and hate to lose it.

P.S. Extropy as future entropy?

A reader pointed me to this paper which suggests that entropy – or rather, forces caused by the growth of entropy, called entropic forces – may result in what looks amazingly similar to tool use and cooperation, thereby hinting at a connection with intelligence. How can that be – how can entropy be associated with intelligent behavior if (as we all know) entropy is disorder, an antithesis of intelligence?

The trick that the paper’s authors use is to take into account not just present entropy but all potential future entropies as well – the sum of microstate/macrostate ratios for all the possible futures of the system, each taken with the coefficient of its probability. If we then assume that this future entropy (they call it causal path entropy) also wants to grow, then this tendency to grow may, at present, cause what looks like very non-random and even intelligent behavior.
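As far as I can reconstruct it (so treat the notation as approximate), the quantity they maximize is something like

$$ S_c(X, \tau) \;=\; -k_B \int \Pr\bigl(x(t) \mid x(0)\bigr)\, \ln \Pr\bigl(x(t) \mid x(0)\bigr)\, \mathcal{D}x(t), $$

where the integral runs over all paths $x(t)$ the system’s microstate may follow over a horizon $\tau$ starting from the present state $x(0)$. The “entropic force” is then the gradient of this path entropy with respect to the present state: the system gets pushed towards states from which the greatest variety of futures remains reachable.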

I think there’s a connection to the idea of extropy here. Extropy measures macrostates, including future ones, per microstate; but each macrostate will also have its own range of possible microstates. If the macrostates are distinct enough, their microstate ranges will not overlap. This means that a system with many possible distinct futures will have a higher “causal path entropy” than a system that already has maximum entropy now but has no interesting futures.

This approach may get rid of extropy’s subjectivity: once we reduce macro futures to micro ones, we can count them precisely and unambiguously, just as we do when measuring the now-entropy. The law of (locally) growing extropy suddenly sounds more plausible!

Exciting.
