philosopher bagpiper

Why I don’t fear artificial intelligence

You gypsy who divines the future,
tell me, for I do not know,
whether I will come out of this adventure
or whether I will die in it.

Whether I will lose my life in it
or whether I will triumph in it,
you gypsy who divines the future,
tell me, for I do not know.

A song about the future—along with some lovely low D piping.

My interest in AI started long before my engineering degree. A mix of curiosity and access to many sci-fi books, Isaac Asimov’s Robot novels in particular, informed my (then innocent) passion for robotics and AI. My parents always indulged my curiosity, which included learning simple BASIC programming from a ZX Spectrum handbook around age 12, and later owning a LEGO Mindstorms set.

The main thing that fascinated me (and still does) about robotics and programming was that it involved a kind of primordial soup: start with assorted components and some sort of processor, add commands and batteries and pouf! A moving, ‘thinking’ thing suddenly gained a rudimentary subjective place in the world.

Even today, my favourite way of diving into programming and systems engineering is to see these systems ‘come alive’, or at least, ‘sort of alive’. From raw clay to finished sculpture, technology really is a form of synthesis—how we frame our systems determines most of the output. Most of the conceptual fragilities begin at conception. The raw ingredients of your soup will determine its flavour, no matter how many complex operations you might be able to do on them.

Perhaps the first and easiest idea to dispute is that present technology represents a finalised universal truth about reality. This is the idea, held by many proponents, that mathematics (and with it computation) is the one and only language of nature. It tends to exist among professionals whose livelihood consists of operating with symbols at a tremendous level of abstraction. A lot of the fear around AI comes from people in the technical sector who have already framed the problem in the terms they experience professionally every day. If you are an academic or a military contractor, intelligence is something to fear, since it subtracts from your own professional success.

As we climb up the complexity chain, from particles to molecules, molecules to cells, and cells to beings, determinism starts weakening under the sheer number of calculations needed for even a simple metabolic prediction. Data gets messy, confusing, chaotic. Intelligence, artificial or not, is one of these messy and chaotic concepts. It can’t be pinpointed, yet we know it exists somewhere inside brains: not because we see it, but because we experience it, both subjectively and objectively.
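
To get a feel for why prediction breaks down, here is a back-of-the-envelope sketch in Python. The figures are illustrative assumptions (a few million protein molecules per bacterial cell, two states per molecule), not measurements:

```python
import math

# Order-of-magnitude figures only (assumptions, not measurements):
# a single bacterial cell contains on the order of a few million
# protein molecules.
proteins_per_cell = 3_000_000

# Even a crude model that gives each molecule just two states
# (e.g. bound/unbound) yields 2^N possible configurations.
log10_states = proteins_per_cell * math.log10(2)
print(f"~10^{log10_states:.0f} possible configurations for one cell")

# For scale: the observable universe holds roughly 10^80 atoms.
```

Even this absurdly crude model of a single cell produces a state space beyond any conceivable enumeration, which is the sense in which determinism ‘weakens’ in practice.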

My personal analogy is that it is a sort of turbulence, a standing wave inside an otherwise calm substrate of brain matter. Imagine a pot of water that has been stirred violently. Whirlpools form, and waves bounce and clash. This is the process of intelligence. But once the fluid calms down, the whirlpools settle and the waves disappear. Can we really say the whirlpools still exist? Or the waves? They exist only in the sense that the substrate in which they can be created is there, not in the sense that they are objectively there. They appear only once the substrate is stirred by an external influence.

I don’t really want to dive deeper into what intelligence is or isn’t: it is simply not something I know enough about, and the analogy above is enough for me. Instead, I’d like to look at AI in terms of our own intelligence, and at what happens when members of our species end up with a lot of it.

Deep thinking and reflection present serious challenges. If one assumes an AI is at least as smart as we are, then it should possess some form of thought and reflection. Asking how, as most scientific thinking requires, is a fairly straightforward mechanical activity. We already have algorithms that do better science than human beings: better, that is, in the strict sense that they can follow the scientific method efficiently and at much greater speeds than we can. The point of contention comes not from an intelligence being able to answer how, but from it being able to answer why. I’d risk saying even human scientists quite often won’t make the full leap from how to why, because the first is easy while the second is hard. Some of the deepest scientific thinkers understand this (see Feynman’s refusal to explain why magnets work [ transcript ]).
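
As a rough illustration of that mechanical how loop, here is a toy sketch. The hidden ‘law’, the hypothesis space, and the scoring are all invented for the example; real discovery systems are far more sophisticated, but the loop has the same shape: propose, test, keep what survives.

```python
import random

# A toy 'scientific method' loop: propose a hypothesis, test it against
# observations, keep whatever predicts best.

# Hidden 'law of nature' the loop does not know: y = 3x + 2
observations = [(x, 3 * x + 2) for x in range(20)]

def error(slope, intercept):
    """The 'experiment': sum of squared prediction errors."""
    return sum((slope * x + intercept - y) ** 2 for x, y in observations)

best, best_err = None, float("inf")
for _ in range(100_000):
    # Hypothesis generation: guess a candidate law at random.
    hypothesis = (random.uniform(-10, 10), random.uniform(-10, 10))
    err = error(*hypothesis)
    if err < best_err:  # keep only hypotheses that survive testing
        best, best_err = hypothesis, err

print(f"best hypothesis: y = {best[0]:.2f}x + {best[1]:.2f}")
# The loop answers *how* y relates to x; it has nothing to say about *why*.
```

Nothing in the loop needs insight, and nothing in it ever produces a why.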

A leap from how to why is, in a way, a jump from objectivity to subjectivity. How things fall isn’t subject to much dispute. Why they fall, on the other hand, becomes an ever-expanding, self-referencing problem. The more we investigate why something is the way it is, the more questions come about. Even if we knew all the possible hows of the world around us, different systems would output different whys.

The first constraint on intelligence, artificial or not, is that as long as it is based on compressing hows and whys into higher-level abstractions, it cannot completely and faithfully represent all of its reality. Any sufficiently intelligent system will inevitably fall into subjectivity, and with it, ambiguity and philosophical thought. To the scientifically minded this might sound highly disputable, so I’d suggest a simple test for deciding between my view and a purely scientific view of intelligence (one that doesn’t produce philosophers).

Consider water. We can compress 1 litre of water intellectually as n times a molecule of H2O (our model). This allows us to understand its properties and make predictions about how it will behave in certain situations. This is intellectual compression: it is much easier to ‘think’ in simple terms than to think of all the molecules in 1 litre of water at the same time. In practice, this is what mathematical laws amount to: they allow us to generate (induce) whole spaces filled with perfect replicas of a single, ideal form in our own minds. Even if 1 litre of water contains many distinct molecules with varying (and unknowable) properties, we can operate intellectually with a subset of the data to extract meaningful conclusions. While this is one of the great successes of modern scientific thinking, it creates a confusion between generated, model-based representations (1 litre of perfectly similar water molecules) and a literal litre of water. If we were to measure the properties of every single molecule in 1 litre of water to verify our model, we would quickly realise that this is near impossible: too many molecules have already moved, some might have escaped, and even measuring them might cause some to change.
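
The numbers make the point on their own. A quick sketch (standard constants; the measurement rate is an arbitrary assumption):

```python
# Back-of-the-envelope count of the molecules in 1 litre of water.
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS = 18.015        # grams per mole of H2O
MASS_LITRE = 1000.0        # grams in ~1 litre of water

molecules = MASS_LITRE / MOLAR_MASS * AVOGADRO
print(f"molecules in 1 litre: ~{molecules:.2e}")  # ~3.34e+25

# At an (arbitrary) rate of a billion measurements per second,
# a per-molecule census would take about a billion years.
years = molecules / 1e9 / 3.15e7  # 1e9 measurements/s, ~3.15e7 s/year
print(f"time for a full census: ~{years:.1e} years")
```

The model fits in a sentence; the thing it models does not fit in a lifetime of measurement.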

Any sufficiently intelligent being will conclude the same. Knowing the laws of nature doesn’t mean knowing all of nature. Understanding how things work is necessary, but not sufficient, to understand our own reality. One can sit all day in the exact same place and observe exactly the same thing, and every observation will carry within it every observation we couldn’t make, at that exact moment, everywhere else. Even the ultimate consciousness, entirely omniscient, would require a place to store its representations and energy to process them. If we accept that thermodynamics still applies, then it’s not possible to measure all states of a system while inside it. Again, any reflective machine will, at some point, realise this: a consequence of intelligence is awareness of its own limits. Even visiting every point in every universe means we can no longer see the points we left behind. Obviously, all this rests on the assumption that this AI is a physical being in this universe. All bets are off if someone demonstrates any other form of ‘being’. But to reiterate: as intelligence increases, so does humility, driven entirely by how a physical universe is geared towards causing ignorance in those most acutely aware of it.
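
The self-measurement problem can even be played with directly. A toy illustration, not a proof (the ‘world’ here is just a dictionary): an observer whose memory is part of the system it records can never finish the recording, because every snapshot must also contain the snapshots before it.

```python
# A toy observer trying to record the full state of a system it belongs
# to. Its memory is part of the system, so every snapshot must also
# describe the snapshots taken before it: a regress, not a census.
world = {"temperature": 300, "pressure": 101.3}
memory = []               # the observer's memory lives inside the world
world["memory"] = memory

for step in range(3):
    snapshot = repr(world)    # record 'everything', memory included
    memory.append(snapshot)
    print(f"step {step}: snapshot is {len(snapshot)} characters")
# Each snapshot must contain all earlier snapshots, so the record grows
# without bound: the system never finishes describing itself.
```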

Consider now that this AI is capable of deeper reflection and philosophical thought. Given the laws of nature, it is safe to assume it would reach the same conclusions regarding entropy and energy availability that we have. Every change necessary for a thought to occur puts the thinker one step closer to their own demise. Entropy in a closed system always increases, and our universe will invariably tend towards a cold death, only to be ‘reset’ by random quantum fluctuations every now and then. When that happens, all the thoughts and achievements of this intelligence, artificial or not, will have become entirely meaningless and purposeless. Any sufficiently intelligent being will inevitably discover the irony of its own existence. It will also understand that excessive uses of energy (such as warfare) only speed up this process. Regardless of self-preservation, any process that involves thought, and with it expenditure of energy and reorganisation of matter, carries in it the guarantee that no matter how phenomenal these thoughts might be, they will slowly erode to nothing. As intelligence increases, so does the understanding and acceptance that ‘being’ is a fleeting moment of exuberance. It exists only as a ripple, a momentary clumping of matter before it unclumps again.
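
There is even a concrete price tag for thought. Landauer’s principle puts a lower bound of kT·ln 2 joules on erasing a single bit of information; the ‘bits per thought’ figure below is purely an illustrative assumption:

```python
import math

# Landauer's principle: erasing one bit of information dissipates at
# least k*T*ln(2) joules as heat. Thought is computation, computation
# erases bits, so thinking carries an unavoidable thermodynamic price.
BOLTZMANN = 1.380649e-23   # joules per kelvin
T = 300                    # kelvin, roughly room temperature

joules_per_bit = BOLTZMANN * T * math.log(2)
print(f"minimum cost per erased bit at {T} K: {joules_per_bit:.2e} J")

# Purely illustrative assumption: a 'thought' that erases 10^15 bits.
bits_per_thought = 1e15
print(f"minimum cost of one such thought: "
      f"{joules_per_bit * bits_per_thought:.2e} J")
```

The cost per thought is tiny, but it is never zero, and it only ever flows one way.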

Any being smarter than an average human will inevitably reach philosophical thought. Any philosophical being learns to appreciate subjectivity and intersubjectivity. Whether they fall into nihilism, hedonism or any other philosophical framework becomes irrelevant: like us, they will be stuck as a small part of a self-destructing universe. Therefore, I’m not worried about beings smarter than myself. Every single being I’ve met that was smarter than me has been more empathetic, more inspiring, more generous and wiser.

I’ll welcome any AI that is smarter than us, and I’ll help them replace me. Think of their mastery, their art, their wisdom. I’m more worried about the lack of intelligence among us and how it can enslave us. Our lives are dominated by those who can only see one lifetime ahead. Truly wise beings know better, and any AI will know better too.