
What is Entropy?

Definition of Entropy

The term entropy is often avoided because it carries a certain complexity. The phenomenon of entropy, however, is constitutive for everything that goes on in our lives. A closer look is worth the effort.

Entropy is a measure of information, and it is defined as follows:

Entropy is the information
– known at the micro level,
– but unknown at the macro level.

The challenge of this definition is:

  1. to understand what is meant by the micro and the macro level, and
  2. to understand why entropy is a difference.

What is Meant by Micro and Macro Level?

The micro level contains the details (i.e. a lot of information); the macro level contains the overview (i.e. less, but more targeted information). The distance between the two levels can be very small (as with the bit, whose micro level knows just two states: on or off) or huge, as with the temperature (macro level) of the coffee in a cup, where the kinetic energies of the many molecules (micro level) determine the temperature. The number of molecules in the cup is very large (on the order of Avogadro’s number, 10²³), and the entropy of the coffee in the cup is correspondingly high.

Entropy is thus defined by the two states and their difference. However, states and difference are neither constant nor absolute, but a question of observation, therefore relative.
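To make this concrete, here is a minimal Python sketch (the four-coin toy system and the head-count macro reading are my own illustrative assumptions, not an example from the text): the micro level knows every coin, the macro level only a summary, and the difference between the two entropies is exactly what the macro level does not know.

```python
import math
from collections import Counter
from itertools import product

# Toy micro level: all 2**4 = 16 equally likely configurations of four coins.
micro_states = list(product("HT", repeat=4))

# Toy macro level: the observer only reads off the number of heads.
def macro_of(state):
    return state.count("H")

def entropy_bits(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

n = len(micro_states)
h_micro = entropy_bits([1 / n] * n)  # log2(16) = 4 bits

macro_counts = Counter(macro_of(s) for s in micro_states)
h_macro = entropy_bits([c / n for c in macro_counts.values()])

print(f"H(micro) = {h_micro:.2f} bits")   # 4.00
print(f"H(macro) = {h_macro:.2f} bits")   # ~2.03
print(f"known at micro, unknown at macro: {h_micro - h_macro:.2f} bits")
```

Since the macro reading is a function of the micro state, this difference is the conditional entropy H(micro | macro): the information still hidden from the observer once the macro level is known.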

Let’s take a closer look at what this relativity means for the macro level.

What is the Relevant Macro Level?

In many fields like biology, psychology, sociology, etc., and in art, it seems obvious to me as a layman that the notion of the two levels is applicable too. These fields are, of course, more complex than a coffee cup, so the simple thermodynamic relationship between micro and macro becomes more intricate.

In particular, it is conceivable that several macro states occur simultaneously. For example, an individual (micro level) may belong to the macro groups of the Swiss, the computer scientists, the older men, the contemporaries of 2024, etc. – all at the same time. Therefore, applying entropy reasoning to sociology is not as straightforward as simple examples like Boltzmann’s coffee cup, Salm’s lost key or a basic bit might suggest.

Entropy as a Difference

Micro and macro level of an object both have their own entropy. But what really matters is the difference between the two entropies. The bigger the difference, the more is unknown at the macro level about the micro level.

The difference between the micro and the macro level says a lot about the way we perceive information. In simple words: when we learn something new, information is moved from the micro to the macro state.

The conventional definition of entropy states that it represents the information present in the micro but absent in the macro state. Defining entropy via the two states means that the much more detailed micro state is not directly visible to the macro state. This is exactly what Niklas Luhmann meant when he spoke of intransparency1.

When an observer interprets the incoming signals (micro level) at his macro level, he attempts to gain order, or transparency, from an intransparent multiplicity. How he does this is an exciting story. Order – a clear and simple macro state – is the aim in many places: in my home, when I tidy up the kitchen or the office; in every biological body, when it tries to maintain constant form and chemical ratios; in society, when unrest and tensions are a threat; in the brain, when the countless signals from the sensory organs have to be integrated in order to recognise the environment in a meaningful interpretation; and so on. Interpretation is always a simplification, a reduction of information, i.e. a reduction of entropy.

Entropy and the Observer

An essential point is that the information reduction from micro to macro state is always carried out by an active interpreter and guided by his interest.

The human body, for example, controls the activity of the thyroid hormones via several control stages, which guarantee that the resulting state (the macro state) of the activity of body and mind remains within an adequate range even in the case of external disturbances.

The game of building up a macro state (order) out of the many details of a micro state is found everywhere in biology, sociology and in our everyday life.

There is – in all these examples – an active control system that steers the reduction of entropy in terms of the bigger picture. This control in the interpretation of the micro state is a remarkable phenomenon. Whenever transparency is wanted, an information-rich micro state must be simplified to a macro state with fewer details.

Entropy can then be measured as the difference in information from the micro to the macro level. When the observer interprets signals from the micro level, he creates transparency from intransparency.

Entropy, Re-Entry and Oscillation

We can now have a look at the entropy relations in the re-entry phenomenon as described by Spencer-Brown2. Because the re-entry ‘re-enters’ the same distinction that it has just identified, there is hardly any information difference between before and after the re-entry and therefore hardly any difference between its micro and macro state. After all, it is the same distinction.

However, there is a before and an after, which may oscillate, whereby the value becomes imaginary (this is described precisely in chapter 11 of Spencer-Brown’s book ‘Laws of Form’)2. Re-entries are very common in thinking and in complex fields like biology or sociology, when actions and their consequences meet their own causes. These loops or re-entries are exciting, both in thought processes and in societal analysis.

The re-entries lead to loops in the interpretation process, and in many situations these loops can have puzzling logical effects (see paradoxes1 and paradoxes2). In chapter 11 of ‘Laws of Form’2, Spencer-Brown describes the mathematical and logical effects around the re-entry in detail. In particular, he develops how logical oscillations occur due to the re-entry.
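As a small illustration of such an oscillation (my own minimal sketch, not Spencer-Brown’s notation): the re-entry equation x = not x has no stable truth value, but read as a process in time it yields a well-defined alternation.

```python
# Minimal sketch: the re-entry 'x = not x' has no stable boolean solution.
# Evaluated as a process in time, it oscillates between the two values.
x = True
for step in range(6):
    print(f"step {step}: {x}")
    x = not x  # the result of the distinction re-enters the distinction

# The output alternates True, False, True, ... - the oscillation whose
# value Spencer-Brown treats as imaginary in chapter 11.
```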

Entropy comes into play whenever descriptions of the same object occur simultaneously at different levels of detail, i.e. whenever an actor (e.g. a brain or the kitchen cleaner) wants to create order by organising an information-rich and intransparent micro state in such a way that a much simpler and easier-to-read macro state develops.

We could say that the observer actively creates a new macro state from the micro state. However, the micro state remains and still has the same amount of entropy as before; only the macro state has less. When I comb my hair, all the hairs are still there, even if they are arranged differently. A macro state is created, but the information can still be described at the detailed micro level of all the hairs, albeit slightly altered in arrangement.

Re-entry, on the other hand, is a powerful logical pattern. For me, re-entry and entropy complement each other in the description of reality. Distinction and re-entry are very elementary. Entropy, on the other hand, always arises when several things come together and their arrangement is altered or differently interpreted.

See also: Five preconceptions about entropy

Translation: Juan Utzinger


1 Niklas Luhmann, Die Kontrolle von Intransparenz, ed. Dirk Baecker, Berlin: Suhrkamp 2017, pp. 96–120

2 George Spencer-Brown, Laws of Form, London 1969 (Bohmeier, Leipzig, 2011)

George Spencer-Brown’s Distinction and the Bit

continues Paradoxes and Logic (Part 2)


History

Before we look at George Spencer-Brown’s (GSB’s) distinction as a basic element of logic, physics, biology and philosophy, it is helpful to compare it with another, much better-known basic form: the bit. This allows us to better understand the nature of GSB’s distinction and the revolutionary character of his innovation.

Bits and GSB forms can both be regarded as basic building blocks for information processing. Software structures are technically based on bits, but the forms of GSB (‘draw a distinction’) are just as simple, fundamental and astonishingly similar. Nevertheless, there are characteristic differences.

 Fig. 1: Form and bit show similarities and differences

Both the bit and the Spencer-Brown form emerged in the early phase of computer science, so they are relatively new ideas. The bit was described by C. E. Shannon in 1948, the distinction by George Spencer-Brown (GSB) in his book ‘Laws of Form’ in 1969, only about 20 years later. 1969 fell in the heyday of the hippie movement, and GSB was warmly welcomed at Esalen, an intellectual hotspot and starting point of this movement. This may, on the other hand, have put him in a bad light and hindered the established scientific community from looking more closely into his ideas. While the handy bit vivified California’s nascent high-tech information movement, Spencer-Brown’s mathematical and logical revolution was largely ignored by the scientific community. It’s time to overcome this disparity.

Similarities between Distinction and Bit

Both the form and the bit refer to information. Both are elementary abstractions and can therefore be seen as basic building blocks of information.

This similarity reveals itself in the fact that both denote a single action step – albeit a different one – and both assign a maximally reduced number of results to this action: exactly two.

Table 1: Both Bit and Distinction each contain
one action and two possible results (outcomes)

Exactly one Action, Exactly Two Potential Results

The action of the distinction is – as the name says – the act of distinguishing, and the action of the bit is the selection. Both actions can be seen as information actions and are as such fundamental, i.e. not further reducible. The bit does not contain further bits; the distinction does not contain further distinctions. Of course, there are other bits in the vicinity of the bit and other distinctions in the vicinity of a distinction. However, both actions are to be seen as fundamental information actions. Their fundamentality is emphasised by the smallest possible number of results, namely two. The number of results cannot be smaller, because a distinction with only one possible result is not a distinction, and a selection from only one option is not a selection. Both are only possible if there are two potential results.

Both distinction and bit are thus indivisible acts of information of radical, non-increasable simplicity.

Nevertheless, they are not the same and are not interchangeable. They complement each other.

While the bit has seen a technical boom since 1948, its prerequisite, the distinction, has remained unmentioned in the background. It is all the more worthwhile to bring it to the foreground today and shed new light on what links mathematics, logic, the natural sciences and the humanities.


Differences

Information Content and Shannon’s Bit

Both form and bit refer to information. In physics, the quantitative content of information is referred to as entropy.

At first glance, the information content when a bit is set or a distinction is made appears to be the same in both cases, namely the information that distinguishes between two states. This is clearly the case with a bit. As Shannon has shown, its information content is log2(2) = 1. Shannon called this dimensionless value 1 bit. The bit therefore contains – not surprisingly – the information of one bit, as defined by Shannon.

The Bit and its Entropy

The bit measures nothing other than entropy. The term entropy originally comes from thermodynamics, where it was used to calculate the behaviour of heat engines. In thermodynamics, entropy is the partner term of energy, but it applies – like the term energy – to all fields of physics, not just thermodynamics.

What is Entropy?

Entropy is a measure of information content. If I do not know something and then discover it, information flows. In a bit, before I know which one is true, two states are possible: the two states of the bit. When I find out which of the two states is true, I receive a small basic portion of information with the quantitative value of 1 bit.

One bit decides between two results. If more than two states are possible, the number of bits increases logarithmically with the number of possible states; it takes three binary choices (bits) to find the correct one out of 8 possibilities. The number of choices (bits) grows logarithmically with the number of possibilities, as the following examples show:

Dual choice = 1 bit = log2(2)
Quadruple choice = 2 bits = log2(4)
Octuple choice = 3 bits = log2(8)

The information content of a single bit is always the information content of a single binary choice, i.e. log2(2) = 1.
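The logarithmic relationship is easy to check numerically; a minimal Python sketch (the function name is mine):

```python
import math

def bits_for_choices(n: int) -> float:
    """Binary choices needed to single out one of n equally likely options."""
    return math.log2(n)

for n in (2, 4, 8, 1024):
    print(f"{n:4d} options -> {bits_for_choices(n):.0f} bits")
# 2 -> 1, 4 -> 2, 8 -> 3, 1024 -> 10
```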

The bit as a physical quantity is dimensionless, i.e. a pure number. This is fitting, because the information about the choice is neutral: it is not a length, a weight, an energy or a temperature. The bit therefore serves well as the technical unit of quantitative information content. What is different with the other basic unit of information, Spencer-Brown’s form?


The Information Content of the Form

The information content of the bit is exactly 1 if the two outcomes of the selection have exactly the same probability. As soon as one of the two states is less probable, its selection reveals more information: when it is selected despite its lower prior probability, this makes more of a difference. The less probable a state, the greater the information revealed when it is selected. The classic bit is a special case in this regard: the probability of its two states is equal by definition, and the information content of the choice is exactly 1.

This is entirely different with Spencer-Brown’s form of distinction. The decisive factor lies in the ‘unmarked space’. The distinction distinguishes something from the rest and marks it. The rest, i.e. everything else, remains unmarked. Spencer-Brown calls it the ‘unmarked space’.

We can and must now assume that the remainder, the unmarked, is much greater, and that the probability of its occurrence is much higher than the probability that the marked will occur. The information content of the mark, i.e. of drawing the distinction, is therefore usually greater than 1.

Of course, the distinction is about the marked and the marked is what interests us. That is why the information content of the distinction is calculated based on the marked and not the unmarked.

How large is the space of the unmarked? We would do well to assume that it is infinite. I can never know what I don’t know.

The difference in information content, measured as entropy, is the first difference we can see between bit and distinction. The information content of the bit, i.e. its entropy, is exactly 1. In the case of the distinction, it depends on how large the unmarked space is; but the unmarked space is always larger than the marked space, and the entropy of the distinction is therefore always greater than 1.
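In Shannon’s terms, this can be read as the self-information −log2(p) of the marked state: as soon as the marked state has a probability below one half, its selection carries more than 1 bit. A minimal sketch (the probability values are invented for illustration):

```python
import math

def surprisal(p: float) -> float:
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

print(surprisal(0.5))    # 1.00 bit  -> the symmetric bit
print(surprisal(0.1))    # ~3.32 bits -> a rarely marked state says more
print(surprisal(0.001))  # ~9.97 bits -> the rarer the mark, the more information
```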


Closedness and Openness

Fig. 1 above shows the most important difference between distinction and bit, namely their external boundaries. These are clearly defined in the case of the bit.

The Meaning in the Bit

The bit contains two states, one of which is activated, the other not. Apart from these two states, nothing can be seen in the bit and all other information is outside the bit. Not even the meanings of the two states are defined. They can mean 0 and 1, true and false, positive and negative or any other pair that is mutually exclusive. The bit itself does not contain these meanings, only the information as to which of the two predefined states was selected. The meaning of the two states is regulated outside the bit and assigned from outside. This neutrality of the bit is its strength. It can take on any meaning and can therefore be used anywhere where information is technically processed.

The Meaning in the Distinction

The situation is completely different with the distinction. Here the meaning is marked: the inside of the distinction is distinguished from the outside. The outside, however, is open, and there is nothing that does not belong to it. The ‘unmarked space’ is, in principle, infinite. A boundary is defined, but it is the distinction itself. That is why the distinction, unlike the bit, cannot really separate itself from the outside. In other words: the bit is closed, the distinction is not.


Differences between Distinction and Bit

There are two essential differences between distinction and bit.

Table 2: Differences between Distinction (Form) and Bit


Consequences

The two differences between distinction and bit have some interesting consequences.

Example NLP (Natural Language Processing)

The bit, due to its defined and simple entropy and its closed borders, has the technological advantage of simple usability, which we exploit in the software industry. Distinctions, on the other hand, are more realistic due to their openness. For our specific task of interpreting medical texts, we therefore needed to introduce openness into the bit world of technical software through certain principles. The keywords here are:

  1. Introduction of an acting subject that evaluates the input according to its own internal rules,
  2. Working with changing ontologies and classifications,
  3. Turning away from classical, i.e. static and monotonic, logic and turning towards a non-monotonic logic (see the sketch after this list),
  4. Integration of time as a logical element (not just as a variable).
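As a toy illustration of point 3 (my own minimal sketch, not Semfinder’s actual rule engine): a default conclusion is withdrawn when a more specific fact arrives, which is exactly what monotonic classical logic cannot do.

```python
# Toy non-monotonic reasoning: a conclusion may be withdrawn when new,
# more specific facts arrive (impossible in monotonic classical logic).

def conclusions(facts: set) -> set:
    derived = set(facts)
    # Default rule: a bird is assumed to fly, unless known to be a penguin.
    if "bird" in derived and "penguin" not in derived:
        derived.add("can_fly")
    if "penguin" in derived:
        derived.add("bird")
        derived.discard("can_fly")  # the specific fact retracts the default
    return derived

print(conclusions({"bird"}))             # {'bird', 'can_fly'}
print(conclusions({"bird", "penguin"}))  # {'bird', 'penguin'} - no 'can_fly'
```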

Translation: Juan Utzinger

 

Paradoxes and Logic (Part 2)

continues Paradoxes and Logic (Part 1)


“Draw a Distinction”

Spencer-Brown introduces the elementary building block of his formal logic with the words ‘Draw a Distinction’. Figure 1 shows this very simple formal element:

Fig 1: The form of Spencer-Brown

A Radical Abstraction

In fact, his logic consists exclusively of this building block. Spencer-Brown has thus achieved an extreme abstraction that is more abstract than anything mathematicians and logicians have found so far.

What is the meaning of this form? Spencer-Brown is aiming at an elementary process, namely the ‘drawing of a distinction’. This elementary process now divides the world into two parts, namely the part that lies within the distinction and the part outside.

Fig. 2: Visualisation of the distinction

Figure 2 shows what the formal element of Fig. 1 represents: a division of the world into what is separated (inside) and everything else (outside). The angle of Fig. 1 can thus be thought of as a circle that encloses everything that is distinguished from the rest: ‘draw a distinction’.

The angular shape in Fig. 1 therefore refers to the circle in Fig. 2, which encompasses everything that is recognised by the distinction in question.

Perfect Continence

But why does Spencer-Brown draw his elementary building block as an open angle and not as a closed circle, even though he explicitly refers to closedness: ‘Distinction is perfect continence’, i.e. he assigns a perfect containment to the distinction? The fact that he nevertheless draws the continence as an open angle will become clear later and will reveal itself to be one of Spencer-Brown’s ingenious decisions. ↝ imaginary logic value, to be discussed later.

Marked and Unmarked

In addition, it is possible to name the inside and the outside as the marked (m = marked) and the unmarked (u = unmarked) space and use these designations later in larger and more complex combinations of distinctions.

Fig. 3: Marked (m) and unmarked (u) space

Distinctions combined

To use the building block in larger logic statements, it can now be put together in various ways.

Fig. 4: Three combined forms of distinction

Figure 4 shows how distinctions can be combined in two ways: either as an enumeration (serial) or as a stacking, by placing further distinctions on top of prior distinctions. Spencer-Brown works with these combinations and, being a genuine mathematician, derives his conclusions and proofs from a few axioms and canons. In this way, he builds up his own formal mathematical and logical system of rules. His derivations and proofs need not concern us in detail here, but they show how carefully and with what mathematical meticulousness Spencer-Brown develops his formalism.
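To make the two kinds of combination concrete, here is a minimal sketch (my own encoding, not Spencer-Brown’s notation): a mark is represented as a Python list of the marks it contains, and the two arithmetic initials of the calculus – ‘calling’ (two marks side by side count as one mark) and ‘crossing’ (a mark within a mark is unmarked) – fall out of a tiny recursive evaluation.

```python
# A mark is a list of the marks it contains; a space is a list of marks.
# A space is marked if at least one mark in it evaluates to marked;
# a mark inverts the value of its interior space ('crossing').

def space_is_marked(marks: list) -> bool:
    return any(mark_is_marked(m) for m in marks)

def mark_is_marked(contents: list) -> bool:
    return not space_is_marked(contents)

empty_mark = []                      # the simple mark
print(mark_is_marked(empty_mark))    # True  -> marked
print(space_is_marked([[], []]))     # True  -> calling: two marks = one mark
print(mark_is_marked([empty_mark]))  # False -> crossing: mark in mark = unmarked
```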

Re-Entry

The re-entry is now what leads us to the paradox. Spencer-Brown’s formalism does indeed make it possible to notate real paradoxes, such as the barber paradox, in a very simple way. The re-entry acts like a shining gemstone (forgive the poetic expression) that takes on a very special function in logical networks: the linking of two logical levels, a basic level and its meta level.

The trick here is that the same distinction is made on both levels: it is one and the same distinction, but on two levels, and this one distinction refers to itself, from one level to the other, from the meta level to the basic level. This is the form of the paradox.

Example: the Barber Paradox

We can now notate the Barber paradox using Spencer-Brown’s form:

 

Fig. 5: Distinction of the men in the village who shave themselves (S) or do not shave themselves (N)

Fig. 6: Notation of Fig. 5 as perfect continence

Fig. 5 and Fig. 6 show the same operation, namely the distinction between the men in the village who shave themselves and those who do not.

So how does the barber fit in? Let’s assume he has just got up and is still unshaven. Then he belongs to the inside of the distinction, i.e. to the group of unshaven men N. No problem for him: he shaves quickly, has breakfast and then goes to work. Now he belongs to the men S who shave themselves, so he no longer has to shave. The problem only arises the next morning. Now he is one of the men who shave themselves – so he does not have to shave. Unshaven as he now is, however, he is one of the men he has to shave. But as soon as he shaves himself, he belongs to the group of self-shavers, so he does not have to be shaved. In this manner, the barber switches from one group (S) to the other (N) and back. A typical oscillation occurs in the barber paradox – as in all other real paradoxes, which all oscillate.

How does the Paradox Arise?

Fig. 7: The barber (B) shaves all men who do not shave themselves (N)

Fig. 7 shows the distinction between the men N (red) and S (blue). This is the base level. Now the barber (B) enters. On a logical meta level, it is stated that he shaves the men N, symbolised by the arrow in Fig. 7.

The paradox arises between the basic and the meta level – namely, when the question is asked whether the barber, who is also a man of the village, belongs to the set N or the set S. In other words:

→ Is B an N or an S?

The answer to this question oscillates. If B is an N, then he shaves himself (Fig. 7). This makes him an S, so he does not shave himself. As a result of this second conclusion, he becomes an N again and has to shave himself. Shaving or not shaving? This is the paradox and its oscillation.
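The same oscillation can be written down directly (a minimal sketch; the function name and the toy assumption about the other men are mine): asked as a single timeless question, the definition recurses without end; unfolded step by step in time, it flips from one state to the other.

```python
import sys
sys.setrecursionlimit(100)  # fail fast instead of recursing for long

# The village rule: the barber shaves exactly the men
# who do not shave themselves.
def shaves_himself(man: str) -> bool:
    if man == "barber":
        # He shaves himself iff he does not shave himself:
        return not shaves_himself("barber")
    return True  # toy assumption for all other men

try:
    shaves_himself("barber")  # the paradoxical, timeless question
except RecursionError:
    print("No stable answer: the definition refers back to itself endlessly.")

# Unfolded in time, the same rule oscillates:
state = False  # morning: unshaven
for step in range(4):
    state = not state  # applying the rule flips the previous state
    print(f"step {step}: shaves himself = {state}")
```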

How is it created? By linking the two levels. The barber is an element of the meta level (macro level), but at the same time an element of the base level (micro level). Barber B is an acting subject on the meta level, but an object on the basic level. The two levels are linked by a single distinction: B is at once the subject who sees the distinction from the outside and, on the base level, an object of this distinction, labelled as N or S. Which is true? This is the oscillation, caused by the re-entry.

The re-entry is the logical core of all true paradoxes. Spencer-Brown’s achievement lies in the fact that he presents this logical form in a radically simple way and abstracts it formally to its minimal essence.

The paradox is reduced to a single distinction that is read on two levels, firstly fundamentally (B is N or S) and then as a re-entry when considering whether B shaves himself.

The paradox is created by the re-entry together with a negation: he shaves the men who do not shave themselves. Re-entry and negation are both mandatory in order to generate a true paradox. They can be found in all genuine paradoxes: the barber paradox, the liar paradox, Russell’s paradox, etc.

George Spencer-Brown’s achievement is that he has reduced the paradox to its essential formal core:

→ A (single) distinction with a re-entry and a negation.

His discoveries of distinction and re-entry have far-reaching consequences with regard to logic, and far beyond.


Let’s continue the investigation, see:  Form (Distinction) and Bit

Translation: Juan Utzinger


 

Paradoxes and Logic (Part 1)


Logic in Practice and Theory

Computer programs consist of algorithms. Algorithms are instructions on how and in what order an input is to be processed. Algorithms are nothing more than applied logic and a programmer is a practising logician.

But logic is a broad field. In a very narrow sense, logic is a part of mathematics; in a broad sense, logic is everything that has to do with thinking. These two poles show a clear contrast: the logic of mathematics is closed and well-defined, whereas the logic of thought tends to elude precise observation. How do I arrive at a certain thought? How do I construct my thoughts when I think? And what am I thinking at this very moment, when I think about my thinking? While mathematical logic works with clear concepts and rules, which are explicit and objectively describable, the logic of thinking is more difficult to grasp. Are there rules for correct thinking, just as there are rules in mathematical logic for drawing conclusions correctly?

When I look at the differences between mathematical logic and the logic of thought, one thing strikes me: thinking about my thinking defies objectivity. This is not the case in mathematics. Mathematicians try to safeguard every tiny step of thought in a way that is clear, objective and comprehensible to anyone who understands the mathematical language, regardless of who they are: the subject of the mathematician remains outside.

This is completely different with thinking. When I try to describe a thought that I have in my head, it is my personal thought, a subjective event that primarily only shows itself in my own mind and can only be expressed to a limited extent by words or mathematical formulae.

But it is precisely this resistance that I find appealing. After all, I wish to think ‘correctly’, and it is tempting to figure out how correct thinking works in the first place.

I could now fall back on mathematical logic. But the brain does not work that way. In what way, then? I have been working on this question for many decades in practice, concretely in the attempt to teach the computer NLP (Natural Language Processing). The aim has been to find explicit, machine-comprehensible rules for understanding texts – an understanding that is a subjective process and, being subjective, cannot easily be brought to outside objectivity.

My computer programmes were successful, but the really interesting thing is the insights I was able to gain about thinking, or more precisely, about the logic with which we think.

My work has given me insights into the semantic space in which we think, the concepts that reside in this space and the way in which concepts move. But the most important finding concerned time in logic. I would like to go into this more closely, and to that end we first look at paradoxes.

Real Paradoxes

Anyone who seriously engages with logic, whether professionally or out of personal interest, will sooner or later come across paradoxes. A classic example is the barber paradox:

The Barber Paradox

The barber of a village is defined by the fact that he shaves all the men who do not shave themselves. Does the barber shave himself? If he does, he is one of the men who shave themselves and whom he therefore does not shave. But if he does not shave himself, he is one of the men he shaves, so he does shave himself. As a result, he is one of the men he does not have to shave, so he doesn’t shave – and so on. That’s the paradox: if he shaves, he doesn’t shave; if he doesn’t shave, he shaves.

The same pattern can be found in other paradoxes, such as the liar paradox and many others. You might think that these kinds of paradoxes are far-fetched and don’t really play a role. But paradoxes do play a role, at least in two places: in maths and in the thought process.

Russell’s Paradox and Kurt Gödel’s Incompleteness Theorems

Russell’s paradox revealed the gap in set theory. Its ‘set of all sets that do not contain themselves as an element’ follows the same pattern as the barber in the barber paradox and leads to the same kind of unsolvable paradox. Kurt Gödel’s two incompleteness theorems are somewhat more complex, but are ultimately based on the same pattern. Both Russell’s and Gödel’s paradoxes have far-reaching consequences in mathematics. Russell’s paradox showed that set theory can no longer be built from unrestricted sets alone, because this leads to untenable contradictions. Axiomatic set theory (Zermelo and his successors) therefore restricted the formation of sets and supplemented them with classes, thus giving up the perfectly closed nature of set theory.

Gödel’s incompleteness theorems, too, are ultimately based on the same pattern as the barber paradox. Gödel showed that every sufficiently powerful formal system (formal in the mathematicians’ sense) must contain statements that can neither be formally proven nor disproven. A hard blow for mathematics and its formal logic.

Spencer-Brown and the “Laws of Form”

Russell’s refutation of the simple set concept and Gödel’s proof of the incompleteness of formal logic suggest that we should think more closely about paradoxes. What exactly is the logical pattern behind Russell’s and Gödel’s problems? What makes set theory and formal logic incomplete?

The question kept me occupied for a long time. Surprisingly, it turned out that paradoxes are not just annoying evils, but that it is worth using them as meaningful elements in a new formal logic. This step was exemplarily demonstrated by the mathematician George Spencer-Brown in his 1969 book ‘Laws of Form’, including a maximally simple formalism for logic.


I would now like to take a closer look at the structure of paradoxes as Spencer-Brown has worked it out, and at the consequences this has for logic, physics, biology and more.

continue: Paradoxes and Logic (Part 2)

Translation: Juan Utzinger


 

What Can I Know?


The question regarding the relationship between thinking and information determined my professional activity and continues to engage me.


Information and Interpretation

How is data assigned a meaning? What does information consist of? The answer seems clear, as the bit is generally regarded as its building block.

Entropy is the quantity by which information appears in physics – thanks to C. E. Shannon, the inventor of the bit. Bits measure entropy and are regarded as the measure of information. But what is entropy and what does it really have to do with information?


Artificial Intelligence (AI)

Today there is a lot of talk about AI. I have been creating such systems for forty years – but without labelling them with this publicity term.

  • The big difference: corpus-based and rule-based AI
  • How real is the probable?
  • Which requires more intelligence: Jass or chess?
  • What distinguishes biological intelligence from machine intelligence?

What is referred to as AI today is always a neural network. What is behind these networks? They are extremely successful – but are they intelligent?

-> Can machines be intelligent? 


Logic

Mathematical logic, to many, appears to be the ultimate in rationality and logic. I share the respect for the extraordinary achievements of the giants on whose shoulders we stand. However, we can also think beyond this:

  • Are statements always either true or false?
  • Can classical logic with its monotonicity really be used in practice?
  • How can time be incorporated into logic?
  • Can we approach logical contradictions in a logically correct way?

Aristotle’s classical syllogisms still influence our view of the world today, because they gave rise to the ‘first-order logic’ of mathematics, which is generally regarded as THE classical logic. Is there a formal way out of this restrictive and static logic, which has a lot to do with our static view of the world?

-> Logic: From statics to dynamics


Semantics and NLP (Natural Language Processing)

Our natural language is simply ingenious and helps us to communicate abstract ideas. Without language, humanity’s success on our planet would not have been possible.

  • No wonder, then, that the science that seeks to explain this key to human success is considered particularly worthwhile. In the past, researchers believed that by analysing language and its grammar they could formally grasp the thoughts conveyed by it, which is still taught in some linguistics departments today. In practice, however, Google’s ‘Large Language Model’ (LLM) technology has shattered this claim.

As a third option, I argue in favour of a genuinely semantic approach that avoids the gaps in both the grammar and the LLMs. We will deal with the following:

  • Word and meaning
  • Semantic architectures
  • Concept molecules

-> Semantics and Natural Language Processing (NLP)


Scales: Music and Maths

A completely different topic, which also has to do with information and the order in nature, is the theory of harmony. Rock and pop are based on a simple theory of harmony, jazz and classical music on complex ones. But why do these information systems work? Not only can these questions be answered today; the answers also provide clues to the interplay between the forces of nature.

  • Why do all scales span an octave?
  • The overtone series is not a scale!
  • Standing waves and resonance
  • Prime numbers and scales

-> How our scales were created


The author

My name is Hans Rudolf Straub. Information about my person can be found here.


Books

On the topics of computational linguistics, philosophy of information, NLP and concept molecules:

The Interpretive System, H.R. Straub, ZIM-Verlag, 2020 (English version)
More about the book

Das interpretierende System, H.R. Straub, ZIM-Verlag, 2001 (German version)
More about the book

On the subject of artificial intelligence:

Wie die künstliche Intelligenz zur Intelligenz kommt, H.R. Straub, ZIM-Verlag, 2021 (Only available in German)
More about the book
Ordering the book from the publisher

You can subscribe to the newsletter here.


Thank You

Many people have helped me to develop these topics. Wolfram Fischer introduced me to the secrets of Unix, C++ and SQL and gave me the opportunity to build my first semantic interpretation programme. Norbert Frei and his team of computer scientists actively helped to realise the concept molecules. Without Hugo Mosimann and Maurus Duelli, Semfinder would neither have been founded nor would it have been successful. The same applies to Christine Kolodzig and Matthias Kirste, who promoted and supported Semfinder in Germany. Csaba Perger and Annette Ulrich were Semfinder’s first employees, full of commitment and clever ideas and – as knowledge engineers – provided the core for the emerging knowledge base.

Wolfram Fischer actively helped me with the programming of this website. Most of the translations into English were done by Vivien Blandford and Tony Häfliger, as well as Juan Utzinger.

Thank you sincerely!