FARTH, n. The sense of distance, of how far something is.
Let the woods restore your farth perception numbed by the vastness of space.
Here’s an amazingly direct manifest (call to arms?) of the emergent “we don’t understand” faction:
And it really makes one wonder. Eventually what is this understanding thing?
When a deep-learning neural network identifies someone as prone to cancer, and that proves to be correct, and human doctors have missed this case: What do we mean when we say that “we don’t understand” how the AI came to its conclusion?
What a deep-learning neural network (NN) does, essentially, is just a very long calculation where the source data — patients’ ages, weights, and other medical parameters encoded into numbers — are fed into a vast network of nodes, each node doing some simple mathematical operations on its input and sending its output to other nodes. After (typically) millions of additions and multiplications, a designated output node gives us our final number: in this case, the probability that the given patient has cancer.
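To make the "multiply, add, repeat" structure concrete, here is a toy sketch in Python. Everything is hypothetical: two hidden nodes, made-up coefficients, made-up normalized patient numbers; it is not a real model, only an illustration of the kind of calculation described above, scaled down from millions of operations to a dozen.

```python
import math

def relu(x):
    # A typical node nonlinearity: pass positives through, zero out negatives.
    return max(0.0, x)

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node: a weighted sum of the inputs, then the nonlinearity.
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output node: a weighted sum of hidden values, squashed to (0, 1)
    # so it can be read as a probability.
    z = sum(w * h for w, h in zip(output_weights, hidden))
    return 1.0 / (1.0 + math.exp(-z))

# Made-up, normalized patient data: age, weight, blood pressure.
patient = [0.63, 0.71, 0.42]
hidden_weights = [[0.2, -0.5, 0.1], [-0.3, 0.8, 0.4]]  # invented coefficients
output_weights = [1.5, -0.7]

probability = forward(patient, hidden_weights, output_weights)
```

A real diagnostic network differs from this sketch only in scale — millions of such nodes and coefficients — but the arithmetic at each step is exactly this humble.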
I don’t need to go into where this vast network and its calculation parameters came from. Assume we have no idea how the outcome was achieved; all we have is the ready-to-use trained network that does its magic. What would it mean to understand its workings? Is that even possible?
We can always run the same neural-network calculation for the same patient again — and get the same answer. We can trace the entire calculation and write it out linearly. We can verify it, and we can watch, with our own eyes, how the numbers for blood pressure, age, minutes of exercise, etc. get combined, multiplied by coefficients, added, multiplied and added again and again, until the final probability emerges.
How is this different from some other calculation that we (think that we) do fully understand? For example, take a simple physics formula for calculating how far a projectile will land, based on initial velocity and angle. Here we, too, take some measured data, apply some calculations, and arrive at a number that is verifiably correct. What is really different in this case as opposed to the cancer-detecting NN, other than the total count of mathematical operations? Are there qualitative, not just quantitative, aspects to tell these two calculations apart?
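For reference, the formula in question — level-ground range R = v² · sin(2θ) / g — is itself just a short chain of the same kind of arithmetic. A minimal sketch (function name and sample values are mine, not from any source):

```python
import math

def projectile_range(v, theta_deg, g=9.81):
    """Horizontal distance for a projectile launched from ground level
    at speed v (m/s) and angle theta_deg (degrees), ignoring air drag."""
    theta = math.radians(theta_deg)
    return v ** 2 * math.sin(2 * theta) / g

distance = projectile_range(20.0, 45.0)  # roughly 40.8 m
```

Measured inputs in, a handful of multiplications out — structurally no different from the network above, except that here the whole calculation fits on one line.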
This may sound like too easy a question. A physics formula can be derived from other known formulas; that seems to contrast with neural networks, where each one is derived anew from its raw training data, not from other already-existing networks. But the difference is smaller than it first appears. First, even if NNs do not borrow literal calculations from one another (at least so far), they still share architectures, approaches to data normalization, and a host of other tunable choices, without which the AI progress we’re now witnessing would be impossible. Second, if you lived in a world before Galileo and Newton and needed a ballistics formula, you would have to go by raw data as well, running multiple experiments and trying mathematical models to see which one fits. Very similarly, when developing a neural network for a given task, you gather experimental data and then explore which NN architecture and hyperparameters work best.
In a pre-Newton world, you would have to be your own Newton. You would discover that gravity on Earth works by constant acceleration, independent of the mass of the projectile. Why? Because its force scales with mass. Why does it? Why the inverse square of distance? Even now there are no easy answers to these questions. But this doesn’t prevent us from using simple ballistic formulas — and claiming we fully understand them.
Then, it is intuitively clear that a physics formula is unique. That is, if another formula is also correct, it is reducible to this one; otherwise it cannot be correct for all inputs. We also reasonably believe that the formula we use is the simplest of all equivalent formulas. This is totally unlike neural networks, where even the exact same training data can result in different networks (because training often includes stochastic elements), and it’s practically impossible to say whether, let alone how, a given network can be made simpler without losing its efficiency.
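The stochastic point is easy to demonstrate. The toy training loop below (plain Python, a single logistic "neuron", a made-up four-point dataset — all invented for illustration) fits the exact same data twice with different random seeds, and ends up with different weights even though both runs separate the data correctly:

```python
import math
import random

def train(seed, data, epochs=2000, lr=0.5):
    # Stochastic gradient descent on a one-input logistic model.
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random initialization
    for _ in range(epochs):
        x, y = rng.choice(data)              # random sample order: stochastic
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x                # gradient step on the weight
        b -= lr * (p - y)                    # gradient step on the bias
    return w, b

# The same tiny dataset for both runs: inputs below 0.5 are class 0, above are class 1.
data = [(0.0, 0), (0.2, 0), (0.8, 1), (1.0, 1)]
w1, b1 = train(seed=1, data=data)
w2, b2 = train(seed=2, data=data)
```

Same data, two different "networks" — (w1, b1) and (w2, b2) do not match, yet each draws a workable boundary between the classes.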
In essence, this means that we can apply certain formal methods to simplify a formula or verify the equivalence of two formulas — but we can’t do the same for neural networks. Our awareness that we can do this in physics constitutes a major part of our intuitive sense of “understanding” a formula.
Note, however, that a physical formula is an algebraic expression — and we use algebra to manipulate it. Isn’t it natural to suggest that with NNs, we need to seek manipulation methods of the same nature as the entities we’re manipulating? Plain algebra is of little help with NNs’ million-term calculations, but what if it is possible to build meta-NNs that would manipulate other trained NNs in useful ways — for example, reducing their complexity and calculation costs without affecting efficiency, or maybe discovering some underlying structures in these calculations to classify them? If such meta-NNs turn out feasible — even if they are just as opaque as today’s NNs but provably work — wouldn’t that affect our sense of “understanding” of how NNs work in general?
In summary, I’m convinced that “understanding” is not a fundamental philosophical category. Instead, it is an emergent perception that depends on a lot of things, some of them hardly if at all formalizable. It is basically our evolution-hardwired heuristic for determining how reliable our knowledge is. Like all heuristics, it can be tricked or defeated. However, since it is hardwired into our brains, we tend to seek the feeling of understanding as the highest intellectual reward, and we tend to be highly suspicious of something that provably works but somehow fails to appease our understanding glands — such as NNs in their current state of development.
No, I’m not calling to reject understanding. It has been, and is certain to remain, immensely useful both as a heuristic that rates our models of the world — and, perhaps more importantly, as a built-in motivator that drives us to seek and improve these models. All I’m saying is that, being a product of evolution, our sense of what it means to understand things needs to continue evolving to keep up with our latest toys.
“Mind uploading” is a staple of modern futurology. We spend so much time in front of the screens already… why not extrapolate the trend? (Extrapolations: Definitely the Best Bang for Your Buck when you’re shopping for predictions! Just check out London’s horse manure problem.)
More to the point, who wouldn’t want to get rid of this ugly contraption called human body with its nasty diseases and nasty desires? To think of the freedom — the speed — the invincibility you’ll enjoy when you’re nothing but a bunch of logical states, totally independent of the underlying hardware!
Put this way, the inspiration (as so many other futurist and transhumanist inspirations) does sound quite a bit religious. Achieve the Disembodiment: lead an angelic life up in the digital heavens, leave the dust of this sinful world behind you.
I’m not saying this will never happen… or at least be attempted. Religion-tinted motivations are bound to remain a powerful force for a long long time. Besides, simply by their sheer number humans tend to exhaust the space of current possibilities, even if some routes remain relatively neglected.
Body-shedding may, one day, become a major trend — an exodus. But let me have my doubts about that.
First of all, before we seriously discuss getting rid of the body, how about we fix the most glaring problems with the body we’ve inherited?
Make it a body that doesn’t get ill and doesn’t die, a body that cures any senescence as easily and as tracelessly as it cures a day’s weariness by a night’s sleep. Make it a body that stays permanently beautiful and attractive to other human beings: we people depend on that a lot more than we’d like to admit. Even better, make the human body as flexible and expressive as language, so we can morph — copyedit — ourselves and each other in the never-ending quest for new beauty.
In a word, let’s fully implement our idea of the human body before we decide whether we need it or not.
About this perfect-body revolution I’m a good deal more optimistic than about the no-body revolution of Mind Uploading — simply because, however imperfect, our understanding of how our body works is already more advanced than our understanding of how our mind works.
About the only thing you can safely say about the future is:
They won’t care about things we care about.
Naturally, they will care about something else. Something we may well have around already but do not care sufficiently about, yet. Something we don’t even notice, perhaps.
Caring about something always expires. Interest gets saturated and dies off. If it doesn’t, it’s not interest, it’s something else. And without interest, there’s no intelligence. Without intelligence, there’s no future.
(And need I mention that there’s no such thing as superintelligence?)
Of course you can say that they will still care about all the same stuff as we do, only call and depict and experience it differently. Please be my guest. I’m not going to argue beyond pointing out that the very essence of intelligence is the how, not the what. The bare what is just not interesting. Make a good enough — and new enough — how and you’re pretty much looking at a new what.
Going out on a limb, I would also hypothesize that in the future, the things we now deeply care about will probably not disappear completely. Instead, they will slide down the age scale to become children’s play. Assuming children of the future will undergo some kind of growth and development, they may pass “us” as a stage. Just as our own children now pass, and grow out of, the stages of totem animals, princes, or pirates — all of which were extremely serious matters for adults at some point in the past. So, if you want to imagine the future, try a world where (something reminiscent of) our politics, commerce, even science, even sex are but development stages on the path to adulthood.
Book covers are cheating. They may be art in their own right — but what you get when you get a book is its text, not its cover.
That’s why the only meaningful way to browse for new books is to open them randomly and read a page. Ideally, without even looking at the cover.
Which is what this site gives you.
And it now has Everday, too.
Unlike art, philosophy does make truth claims.
Unlike science, philosophy’s truth claims are not judged by experiments. Instead, they are assessed on their overall persuasiveness (see my essay for more on that). On their ability to stick in the mind, to stimulate, to breed related claims.
Philosophies are mental constructs (I don’t like the word “memes”) that undergo evolution (there is inheritance, mutation, and selection) with the aim to best satisfy our minds’ craving to know how things really are. For a variety of reasons, science is unable to fully satisfy these cravings, so philosophy continues to exist.
But minds themselves evolve, so philosophy has to adapt to a quickly shifting landscape. It is therefore quite understandable that many philosophers borrow from science its approach to proving things, simply because science is so prominent nowadays. Sometimes it helps their philosophies be more persuasive, but sometimes it backfires: a “rigorous” math-like proof applied to claims that are obviously unverifiable to begin with may sound off-putting — a travesty. Not everyone likes analytical philosophy, and I think this is one of the reasons why.
For a long long time, the uneducated classes more or less accepted what the educated — “talking” — classes were saying. They could grumble or jeer but they had little reason to doubt, let alone reject. The educateds could basically ignore anyone except other educateds because the silent class remained silent.
That’s not the case anymore. The silent class are now talking among themselves — full steam. They have their own news, entertainment, even science. It’s not that they are totally ignoring what the talking class is saying, but they now have a choice, including the choice of getting educated (in the traditional sense) or not.
In this new landscape of choice, forcing or shaming no longer works. People now have somewhere to retreat to when they feel they’re being pushed around. They have somewhere to go and someone to get encouragement from. “They” are no longer aspiring to become “us”.
(And they, increasingly, are trying to push back. Can the world go on as before now that more and more positions of power and authority are taken by people from this alternate reality? Will it crash and burn or will it adapt? The next four years in the US will give us a glimpse.)
All I’m saying is that if “we” want to win, we must stop fighting. It’s not a battlefield. If we are the smart class, the only way we can win is by becoming smarter. We have to learn to model people — to feel the way they feel. We have to learn to be them and look through their eyes, without losing what makes us us.
And you know what? It’s not only that this is the smart way: it’s the easy way. Bashing uneducated people for being lazy achieves nothing: it just makes you both bitter. But if you approach them as a fellow human, if you’re listening and thinking with them, you will discover that what separates you two is often amazingly shallow. Just a change of tone, just a different order of arguments, just using a different synonym here and there may often make all the difference when you’re trying to persuade.
Little tweaks go a long way… it’s only that finding the right tweak is so hard. Obama was able to win, in part, because he had a natural talent for choosing the right tone and words. Too bad this talent is so rare. But what is all our science, our smarts, our rationalism worth if we’re unable to reliably master this skill — now that (literally) the fate of the world depends on it?