
Ijon Tichy meets Artificial Intelligence


Stanislaw Lem on Entropy (Littering)

Littering in space was a concern long before Elon Musk’s Starlink programme, and various methods for cleaning up the growing clutter in Earth’s orbit are currently under discussion. The task is not easy because – in line with the second law, the inevitable increase in entropy – littering tends to feed on itself. When one of the thousands of pieces of scrap metal in orbit is hit by another, the collision produces many new fragments that fly around at insane speeds. Space pollution is therefore a self-perpetuating phenomenon with an exponential tendency.

But haven’t we known about this problem for a long time? The Polish writer Stanislaw Lem had already written about it in the 1960s in his “Star Diaries”, science-fiction tales of the travels of a certain cosmonaut called Ijon Tichy. On his 21st voyage, Tichy lands on a planet that has survived complete littering by satellites. The well-travelled cosmonaut writes:

«Every civilisation that is in the technical phase gradually begins to sink into the waste, causing enormous worries.»

Tichy goes on to describe how the waste is therefore disposed of in space around the planet. This, however, causes new problems there, with consequences that also became apparent to cosmonaut Tichy.


Stanislaw Lem on Artificial Intelligence

The 21st journey, however, has something else to offer. The main theme of Tichy’s 21st journey – as in many of Stanislaw Lem’s stories – is artificial intelligence.

On the now purified planet, Tichy encounters not only another unpleasant consequence of the second law (namely degenerate biogenetics), but also an order of monks consisting of robots. These robots discuss the conditions and consequences of their artificial intelligence with Tichy. For example, the robot prior says about the conclusiveness of algorithms:

«Logic is a tool» replied the Prior, «and nothing results from a tool. It must have a shaft and a guiding hand.» (Lem 1971, p. 272)

Without realising the connection, I followed in the footsteps of Lem’s robot prior and wrote about AI in 2021:

«An entity (intelligence) […] needs to establish a relationship between the data and the objective of the analysis in order to interpret the data. This task is always linked to an entity with a specific intent.» (Straub 2021, pp. 64-65)

Fifty years ago, then, Stanislaw Lem formulated what I believe to be the fundamental difference between the intelligence of a tool and an animate (i.e. biological) intelligence – namely the intent that guides the logic. A machine cannot set this intent itself; it must be provided from outside by its creators, the “guiding hands” of the machine. They can do this in various ways, e.g. by selecting the training data or by shaping the AI algorithms in the desired direction. In other words: the intelligence of an AI is formed from outside.

Human intelligence, on the other hand, can – especially if we don’t want to be robots – determine its own goals. In the words of Lem’s prior, it consists not only of the logic that is guided by the guiding hand, but also includes the guiding hand itself.

As a consequence of this consideration, one can draw the following conclusion about AI:

If we make use of the technical possibilities of AI (and why shouldn’t we?), then we should always take into account the aim of our algorithms.

I think this is just what the robot intelligence of Lem’s prior wanted to say to Tichy.

Translation: Juan Utzinger



Literature

  1. Lem, S. (1971) Sterntagebücher, Frankfurt am Main, Suhrkamp, 1978.
    English: The Star Diaries
  2. Straub, HR. (2021) Wie die künstliche Intelligenz zur Intelligenz kommt, St. Gallen, ZIM-Verlag.
  3. Nowotny, H. (2021) In AI We Trust: Power, Illusion and Control of Predictive Algorithms, Cambridge/Medford, Polity Press.

Self-reference 1

Douglas Hofstadter’s ‘Gödel, Escher, Bach’

In the 1980s, I read Douglas Hofstadter’s cult book ‘Gödel, Escher, Bach’ with fascination. Central to it is Gödel’s incompleteness theorem, which reveals a limit of classical mathematical logic. Gödel proved in 1931 that this limit exists and that it is, in principle, insurmountable for all classical mathematical systems.

This is quite astonishing! Is mathematics imperfect? As inheritors of the Age of Enlightenment and convinced disciples of rationality, we consider nothing to be more stable and certain than mathematics.

Hofstadter’s book impressed me. At certain points, however, e.g. on the subject of the ‘coding’ of information, I had the impression that the author greatly simplified matters. In my opinion, the way in which information is incorporated into an interpreting system plays a major role in the process of recognition by which information is picked up. The integrating system is itself active and participates in the decision-making process: information is not exactly the same before and after integration. Does the interpreter, i.e. the receiving (coding) system, have no influence here? And if it does, what influence does it have?

In addition, the aspect of ‘time’ did not seem to me to be sufficiently taken into account. In the real world, information processing always takes place within a certain period of time. There is a before and an after. A receiving system is also changed by this. In my opinion, time and information are inextricably linked. Hofstadter seemed to miss something here.

Strong AI

My reading of Hofstadter was further challenged by his positioning as a representative of ‘strong AI’. The ‘strong AI’ hypothesis states that human thinking, indeed human consciousness, can be simulated by computers on the basis of mathematical logic – a hypothesis that seemed, and still seems, rather daring to me.

Roger Penrose is said to have been provoked to write his book ‘The Emperor’s New Mind’ by a BBC programme in which Hofstadter, Dennett and others enthusiastically advocated the strong-AI thesis, which Penrose obviously does not share. As I said, neither do I.

But of course, front lines are never that simple. Although I am certainly not on the side of strong AI, Hofstadter’s presentation of Gödel’s incompleteness theorem as a central insight of 20th century science remains unforgettable to me. With enthusiasm, I also read the interview with Hofstadter that appeared in Spiegel this spring (DER SPIEGEL 18/2014: ‘Language is everything’). In it, he postulates, among other things, that analogies are decisive in the thinking of scientists and he differentiates his interests from those of the profit-oriented IT industry. These are thoughts that one might very well endorse.

Self-reference and incompleteness

But let’s go back to Gödel. What – in layman’s terms – is the trick in Gödel’s incompleteness theorem?

The trick is the same as in the barber paradox and all other genuine paradoxes. The trick is to take a sentence, a logical statement, and …

1. refer it to itself (self-reference)
2. and then negate it (negation).

That’s the whole trick. With this combination, any classical formal system can be broken.
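The combination can be made concrete with the barber paradox itself. The following sketch (my own illustration, not from Lem or Hofstadter) checks by brute force whether the rule “the barber shaves exactly those who do not shave themselves” admits any consistent answer to the question “does the barber shave himself?”:

```python
# Barber paradox as a boolean contradiction.
# Rule: the barber shaves x if and only if x does not shave himself.
def barber_shaves(x_shaves_self):
    return not x_shaves_self

# Self-reference: apply the rule to the barber himself.
# A consistent answer b must satisfy barber_shaves(b) == b.
solutions = [b for b in (True, False) if barber_shaves(b) == b]
print(solutions)  # -> []  (no consistent answer exists)
```

The self-reference lies in applying the rule to the barber himself; the negation lies in the `not`. Gödel’s construction does the analogous thing inside arithmetic: a statement that refers to itself and asserts its own unprovability.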

I’m afraid I need to explain this in more detail …

→ “Self-reference 2” (will be translated to English soon)


Summary

Self-referentiality causes classical logical systems such as first-order logic (FOL) or Boolean algebra to crash.

More on the topic of logic → Overview page Logic


German original (2015): Selbstreferenz

Translation: Juan Utzinger