
Artificial and natural intelligence: the difference

What is real intelligence? 

Paradoxically, the success of artificial intelligence helps us to identify essential conditions of real intelligence. If we accept that artificial intelligence has its limits and, in comparison with real intelligence, reveals clearly discernible flaws – which is precisely what we recognised and described in previous blog posts – then these descriptions not only show what artificial intelligence lacks, but also where real intelligence is ahead of it. Thus we learn something crucial about natural intelligence.

What have we recognised? What are the essential differences? In my view, there are two properties which distinguish real intelligence from artificial intelligence. Real intelligence

– also works in open systems and

– is characterised by a conscious intention.

 

Chess and Go are closed systems

In the blog post on cards and chess, we examined the paradox that a game of cards appears to require less intelligence from us humans than chess, whereas it is precisely the other way round for artificial intelligence. In chess and Go, the computer beats us; at cards, however, we are definitely in with a chance.

Why is this the case? – The reason is the closed nature of chess, which means that nothing happens that is not provided for. All the rules are clearly defined. The number of squares and pieces, the starting positions and the way in which the pieces may move, who plays when and who has won at what time and for what reasons: all this is unequivocally set down. And all the rules are explicit; whatever is not defined does not play a part: what the king looks like, for instance. The only important thing is that there is a king and that, in order to win the game, his opponent has to checkmate him. In an emergency, a scrap of paper with a “K” on it is enough to symbolise the king.

Such closed systems can be described with mathematical clarity, and they are deterministic. Of course, intelligence is required to win them, but this intelligence may be completely mechanical – that is, artificial intelligence.

Pattern recognition: open or closed system?

This looks different in the case of pattern recognition where, for example, certain objects and their properties have to be identified on images. Here, the system is basically open: not only can images with completely new properties be introduced from the outside; the decisive properties themselves that have to be recognised can also vary. The matter is thus not as simple, clearly defined and closed as in chess and Go. Is it a closed system, then?

No, it isn’t. Whereas in chess, the rules place a conclusive boundary around the options and objectives, such a safety fence must be actively placed around pattern recognition. The purpose of this is to organise the diversity of the patterns in a clear order. This can only be done by human beings. They assess the learning corpus, which includes as many pattern examples as possible, and allocate each example to the appropriate category. This assessed learning corpus then assumes the role of the rules of chess and determines how new input will be interpreted. In other words: the assessed learning corpus contains the relevant knowledge, i.e. the rules according to which previously unknown input is interpreted. It corresponds to the rules of chess.
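The role of the assessed learning corpus as a "safety fence" can be made concrete with a minimal sketch. The data, categories and the simple nearest-neighbour rule below are purely hypothetical illustrations, not the method of any particular system: once human assessors have labelled the examples, every new input is interpreted solely through those labels.

```python
# A minimal sketch of an assessed learning corpus: each example is a
# feature vector paired with a category assigned by a human assessor.
# All names and numbers are hypothetical illustrations.

corpus = [
    # (features, label assigned by a human expert)
    ((0.9, 0.1), "tracked"),
    ((0.8, 0.2), "tracked"),
    ((0.2, 0.9), "wheeled"),
    ((0.1, 0.8), "wheeled"),
]

def classify(features):
    """Interpret new input via the assessed corpus (1-nearest neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(corpus, key=lambda entry: dist(entry[0], features))
    return label

print(classify((0.85, 0.15)))  # closest to the "tracked" examples
```

Note that the categories themselves ("tracked"/"wheeled") appear nowhere in the raw feature data; they enter the system only through the human assessment.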

The AI system for pattern recognition is thus open as long as the learning corpus has not been integrated; with the assessed corpus, however, such a system becomes closed. In the same way that the chess program is set clear limits by the rules, expert assessment provides the clear-cut corset which ultimately defines the outcome in a deterministic way. As soon as the assessment has been made, a second and purely mechanical intelligence is capable of optimising the behaviour within the defined limits – and ultimately to a degree of perfection which I as a human being will never be able to achieve.

Who, though, specifies the content of the learning corpus which turns the pattern recognition program into a technically closed system? It is always human experts who assess the pattern inputs and who thus direct the future interpretation done by the AI system. In this way, pattern recognition can be turned into a closed task like a game of chess or Go, which can be solved by a mechanical algorithm.

In both cases – in the initially closed game program (chess and Go) as well as in the subsequently closed pattern recognition program – the algorithm finds a closed situation, and this is the prerequisite for an artificial, i.e. mechanical intelligence to be able to work.

Conclusion 1:
AI algorithms can only work in closed spaces.

In the case of pattern recognition, the human-made learning corpus provides this closed space.

Conclusion 2:
Real intelligence also works in open situations.

Is there any intelligence without intention?

Why is artificial intelligence unable to work in an open space without assessments introduced from outside? Because it is only the assessments introduced from outside that make the results of intelligence possible. And assessments cannot be provided purely mechanically by the AI but are always linked to the assessors’ views and intentions.

Besides the differentiation between open and closed systems, our analysis of AI systems shows us still more about real intelligence, for artificial and natural intelligence also differ from each other with regard to the extent to which individual intentions play a part in their decision-making.

In chess programs, the objective is clear: to checkmate the opponent’s king. The objective which determines the assessment of the moves, namely the intention to win, does not have to be laboriously recognised by the program itself but is intrinsically given.

With pattern recognition, too, the role of the assessment intention is crucial, for what kind of patterns should be distinguished in the first place? Foreign tanks versus our own tanks? Wheeled tanks versus tracked tanks? Operational ones versus damaged ones? All these distinctions make sense, but the AI must be set to, and adjusted for, a specific objective, a specific intention. Once the corpus has been assessed in a certain direction, a different property cannot suddenly be derived from it.

As in the chess program, the artificial intelligence is not capable of finding the objective on its own: in the chess program, the objective (checkmate) is self-evident; in pattern recognition, the assessors involved must agree on the objective (foreign/own tanks, wheeled/tracked tanks) in advance. In both cases, the objective and the intention come from the outside.

Conversely, natural intelligence has to determine itself what is important and what is unimportant, and what objectives it pursues. In my view, an active intention is an indispensable property of natural intelligence and cannot be created artificially.

Conclusion 3:
In contrast to artificial intelligence, natural intelligence is characterised by the fact that it is able to judge, and deliberately orient, its own intentions.


This is a blog post about artificial intelligence. You can find further posts through the overview page about AI.


Translation: Tony Häfliger and Vivien Blandford

Now where in artificial intelligence is the intelligence located?


In a nutshell: the intelligence is always located outside.


a) Rule-based systems

The rules and algorithms of these systems are created by human beings, and no one will ascribe real intelligence to a pocket calculator. The same also applies to all other rule-based systems, however refined they may be. The rules are devised by human beings.

b) Conventional corpus-based systems (neural networks)

These systems always use an assessed corpus, i.e. a collection of data which have already been evaluated (details). This assessment decides according to what criteria each individual corpus entry is classified, and this classification then constitutes the real knowledge in the corpus.

However, the classification cannot be derived from the data of the corpus itself but is always introduced from the outside. And it is not only the allocation of a data entry to a class that can only be done from the outside; rather, the classes themselves are not determined by the data of the corpus, either, but are provided from the outside – ultimately by human beings.

The intelligence of these systems is always located in the assessment of the data pool, i.e. the allocation of the data objects to predefined classes, and this is done from the outside, by human beings. The neural network which is thus created does not know how the human brain has found the evaluations required for it.

c) Search engines

Search engines constitute a special type of corpus-based system and are based on the fact that many people use a certain search engine and decide with their clicks which internet links can be allocated to the search string. Ultimately, search engines only average the traces which the many users leave with their context knowledge and their intentions. Without the human brains of the users who have used the search engines so far, the search engines would not know where to point new queries.
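This "averaging of traces" can be sketched in a few lines. The queries, links and click log below are invented for illustration; the point is only that the ranking knowledge comes entirely from past users' clicks, not from the engine itself.

```python
from collections import Counter, defaultdict

# Hypothetical click log: (search string, link the user chose).
clicks = [
    ("go rules", "example.org/go-rules"),
    ("go rules", "example.org/go-rules"),
    ("go rules", "example.org/go-history"),
    ("chess openings", "example.org/openings"),
]

# Aggregate the users' traces per query ...
index = defaultdict(Counter)
for query, link in clicks:
    index[query][link] += 1

def suggest(query):
    """Rank links for a query by how often past users clicked them."""
    return [link for link, _ in index[query].most_common()]

print(suggest("go rules"))  # the most-clicked link comes first
```

Without the accumulated clicks, `suggest` would have nothing to return – exactly the dependence on human users described above.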

d) Game programs (chess, Go, etc.) / deep learning

This is where things become interesting, for in contrast to the other corpus-based systems, such programs do not require any human beings from the outside to assess the corpus, which consists of the moves of previously played games. Does this mean, then, that such systems have an intelligence of their own?

Like the pattern recognition programs (b) and the search engines (c), the Go program has a corpus which in this case contains all the moves of the test games played before. The difference from the classic AI systems consists in the fact that the assessment of the corpus (i.e. the moves of the games) is already defined by the success in the actual game. Thus no human being is required who has to make a distinction between foreign tanks and our own tanks in order to provide the template for the neural network. The game’s success can be directly recognised by the machine, i.e. the algorithm itself; human beings are not required.
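To make the machine-recognisable assessment concrete, here is a minimal, hypothetical sketch using a trivial closed game (a five-stone Nim variant: take one or two stones, whoever takes the last stone wins) in place of Go. The program plays against itself at random, and every finished game enters the corpus together with its result – no human assessor appears anywhere.

```python
import random

def self_play_game():
    """Play one random game of 5-stone Nim; record every move and the winner."""
    stones, player, moves = 5, 0, []
    while stones > 0:
        take = random.choice([1, 2]) if stones > 1 else 1
        moves.append((player, stones, take))
        stones -= take
        if stones == 0:
            winner = player  # the player who took the last stone wins
        player = 1 - player
    return moves, winner

# Build the corpus purely by self-play: the game's own outcome
# supplies the assessment of each recorded move.
corpus = []
for _ in range(1000):
    moves, winner = self_play_game()
    for player, state, take in moves:
        corpus.append((state, take, player == winner))
```

The corpus can be made arbitrarily large by running more games – which mirrors the point made below about how cheaply such corpora are generated.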

With classic AI systems, this is not the case, and a human being who assesses the individual corpus items is indispensable. Added to this, the assessment criterion is not given unequivocally, as it is with Go. Tank images can be categorised in completely different ways (wheeled/tracked tanks, damaged/undamaged tanks, tanks in towns/open country, black-and-white/colour pictures, etc.). This leaves the interpretation options for the assessment arbitrarily open. For all these reasons, an automatic categorisation is impossible with classic AI systems, which therefore always require an assessment of the learning corpus by human experts.

In the case of chess and Go, it is precisely this that is not required. Chess and Go are artificially designed and completely closed systems and thus indeed completely determined in advance. The board, the rules and the objective of the game – and thus also the assessment of the individual moves – are given automatically. Therefore no additional intelligence is required; instead, an automatism can play test games with itself within a predefined, closed setting and in this way attain the predefined objective better and better until it is better than any human being.

In the case of tasks which have to be solved not in an artificial game setting but in reality, however, the permitted moves and objectives are not completely defined, and there is leeway for strategy. An automatic system like deep learning cannot be applied in open, i.e. real situations.

It goes without saying that in practice, a considerable intelligence is required to program victory in Go and other games, and we may well admire the intelligence of the engineers at Google, etc., for that, yet once again it is their human intelligence which enables them to develop the programs, and not an intelligence which the programs designed by them are able to develop themselves.

Conclusion

AI systems can be very impressive and very useful, but they never have an intelligence of their own.

Artificial Intelligence (Overview)

Is AI dangerous or useful?

This question is currently the subject of extensive debate. The aim here is not to repeat well-known opinions, but to shed light on the basics of the technology that you are almost certainly unaware of. Or do you know where AI gets its intelligence from?

For a quarter of a century, I have been working with ‘intelligent’ IT systems, and I am astonished that we ascribe real intelligence to artificial intelligence at all. That is exactly what it does not have. Its intelligence always comes from humans, who not only provide the data, but also have to evaluate its meaning before the AI can use it. Only then can AI surprise us with its impressive performance and countless useful applications in a wide variety of areas. How does it achieve this?

In 2019, I started a blog series on this topic, which you can see an overview of below. In 2021, I then summarised the articles in a book entitled “Wie die künstliche Intelligenz zur Intelligenz kommt” (in German). See below a list of the blog posts which form the basis of the book.

While the book is in German, the blog series is available both in German and in English.


Latest Posts about AI


Earlier Posts (basis of the KI-book)

Rule-based or corpus-based?

These are the two fundamentally different methods of computer intelligence. They can be based either on rules or on a collection of data (a corpus). In the introductory post, I present the two with the help of two characteristic anecdotes:


With regard to success, the corpus-based systems have obviously outstripped the rule-based ones:


The rule-based systems had a more difficult time of it. What are their challenges? How can they overcome their weaknesses? And where is their intelligence situated inside them?


How are corpus-based systems set up? How is their corpus compiled and assessed? What are neural networks all about? And what are the natural limits of corpus-based systems?


Next, we’ll have a look at search engines, which are also corpus-based systems. How do they arrive at their proposals? Where are their limits and dangers? Why, for instance, is it inevitable that bubbles are formed?


Is a program capable of learning without human beings providing it with useful pieces of advice? It appears to work with deep learning. To understand this, we first compare a simple card game with chess: what requires more intelligence? Surprisingly, it becomes clear that for a computer, chess is the simpler game.

With the help of the general conditions of the board games Go and chess, we recognise under what conditions deep learning works.


In the following blog post, I’ll provide an overview of the AI types known to me. I’ll draw a brief outline of their individual structures and of the differences in the way they work.

So where is the intelligence?


The considerations reveal what distinguishes natural intelligence from artificial intelligence:


AI only shows its capabilities when the task is clear and simple. As soon as the question becomes complex, AI systems fail. Or they fib by arranging beautiful sentences found in their treasure trove of data in such a way that the result sounds intelligent (ChatGPT, LaMDA). They do not work with logic, but with statistics, i.e. with probability. But is what appears to be true always true?

The weaknesses necessarily follow from the design principle of AI. Further articles deal with this:

Games and Intelligence (2): Deep Learning

Go and chess

The Asian game of Go shares many similarities with chess while being simpler and more sophisticated at the same time.

The same as in chess:
– Board game → clearly defined playing field
– Two players (more would immediately increase complexity)
– Unequivocally defined possibilities of playing the stones (clear rules)
– The players place stones alternately (clear timeline).
– No hidden information (as, for instance, in cards)
– Clear objective (the player who has surrounded the larger territory wins)

Simpler in Go:
– Only one type of piece: the stone (unlike in chess: king, queen, etc.)

More complex/requires more effort:
– Go has a considerably larger playing field (19×19 points as opposed to 8×8 squares).
– The higher number of points and stones requires more computation.
– Despite its very simple rules, Go is a highly sophisticated game.

Summary

Compared with their common features, the differences between Go and chess are minimal. In particular, Go satisfies the strongly limiting preconditions a) to d), which enable an algorithm to tackle the job:

a) a clearly defined playing field,
b) clearly defined rules,
c) a clearly defined course of play,
d) a clear objective. (Cf. also the preceding blog post.)

Go and deep learning

Google has beaten the best human Go players. This victory was achieved by means of a type of AI which is called deep learning. Many people think that this proves that a computer – i.e. a machine – can be genuinely intelligent. Let us therefore have a closer look at how Google managed to do this.

Rule- or corpus-based, or a new, third system?

The strategies of the known AI programs are either rule-based or corpus-based. In previous posts, we asked ourselves where the intelligence in these two strategies comes from, and we realised that the intelligence in rule-based AI is injected into the system by the human experts who establish the rules. Corpus-based AI also requires human beings, since all the inputs into the corpus must be assessed (e.g. friendly/hostile tanks), and these assessments can always be traced back to people even if this is not immediately obvious.

However, what does this look like in the case of deep learning? Obviously, it no longer requires any human beings to provide specific assessments – in Go, with regard to the individual moves’ chances of winning; rather, it is sufficient for the program to play against itself and find out on its own which moves have proved most successful. In this, deep learning does NOT depend on human intelligence and – in chess and Go – even turns out to be superior to it.

Deep learning is corpus-based

Google’s engineers undoubtedly did a fantastic job. Whereas in conventional corpus-based applications, the data for the corpus have to be compiled laboriously, this is quite simple in the case of the Go program: the engineers simply have the computer play against itself, and every game is an input into the corpus. No one has to take the trouble to trawl the internet or any other source for data; instead, the computer is able to generate a corpus of any size very simply and quickly. Although deep learning for Go, like the programs for pattern recognition, continues to depend on a corpus, this corpus can be compiled in a much simpler way – and automatically at that.

Yet it gets even better for deep learning. Not only is the compilation of the corpus much simpler, but the assessment of the single moves in the corpus is also very easy: Finding out the best move from among all the moves that are possible at any given time no longer requires any human experts. How does this work? How is deep learning capable of drawing intelligent conclusions without any human intelligence at all? This may be astonishing, but if we look at it in more detail, it becomes clear why this is indeed the case.

The assessment of corpus inputs

The difference is the assessment of the corpus inputs. To illustrate this, let’s have another look at the tank example. Its corpus consists of tank images, and a human expert has to assess each picture according to whether it shows one of our own tanks or a foreign tank. As explained, this requires human experts. In our second example, the search engine, it is also human beings, namely the users, who assess whether the link to a website suggested in the corpus fits the input search string. Both types of AI cannot do without human intelligence.

With deep learning, however, this is really different. The assessment of the corpus, i.e. of the individual moves that make up the many different Go test games, does not require any additional intelligence. The assessment automatically results from the games themselves, since the only criterion is whether the game has been won or lost. This, however, is known to the corpus itself, since it has registered the entire course of every game right to the end. Therefore the way in which every game has proceeded automatically contains its own assessment – assessments by human beings are no longer required.
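How the final result supplies the assessment of every single move can be sketched as follows. The games, moves and names below are purely illustrative stand-ins, not real Go data: each recorded game carries its outcome, and the win rate of any move is read off the corpus without human help.

```python
from collections import defaultdict

# Hypothetical corpus: (game as a list of moves, winner of that game).
# White plays the moves at even positions, black those at odd positions.
games = [
    (["a", "b", "c"], "black"),
    (["a", "d", "e"], "white"),
    (["a", "b", "f"], "black"),
]

wins, plays = defaultdict(int), defaultdict(int)
for moves, winner in games:
    for i, move in enumerate(moves):
        mover = "white" if i % 2 == 0 else "black"
        plays[move] += 1
        if mover == winner:
            wins[move] += 1  # the game's outcome assesses the move

def win_rate(move):
    """Assessment derived purely from the stored game results."""
    return wins[move] / plays[move]

print(win_rate("b"))  # "b" was always played by the eventual winner: 1.0
```

The only "label" in the whole corpus is won/lost, and the corpus contains it by construction – which is precisely why no external assessor is needed.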

The natural limits of deep learning

The above, however, also reveals the conditions in which deep learning is possible at all: for the course of the game and the assessment to be clear-cut, there must not be any surprises. Ambiguous situations and uncontrollable outside influences are not allowed. For everything to be flawlessly calculable, the following is indispensable:

1. A closed system

This is given by the properties a) to c) (cf. preceding post), which games like chess and Go possess, namely

a) a clearly defined playing field,
b) clearly defined rules,
c) a clearly defined course of play.

A closed system is necessary for deep learning to work. Such a system can only be an artificially constructed system, for there are no closed systems in nature. It is no accident that chess and Go are particularly suitable for AI since games always have this aspect of being consciously designed. Games which integrate chance as part of the system, such as cards in the preceding post, are not absolutely closed systems any longer and therefore less suitable for artificial intelligence.

2. A clearly defined objective

A clearly defined objective – point d) in the preceding post – is also necessary for the assessment of the corpus to take place without any human interference, because the objective of the process under investigation and the assessment of the corpus inputs are closely connected. We must understand that the target of the corpus assessment is not given by the corpus data. Data and assessment are two different things. We have already discussed this in the example of the tanks, where we saw that a corpus input, i.e. the pixels of a tank photograph, did not automatically contain its own assessment (hostile/friendly). The assessment is a piece of information which is not intrinsic to the individual data (pixels) of an image; rather, it has to be fed into the corpus from the outside (by an interpreting intelligence).

Therefore the same corpus input can also be assessed in very different ways: if the corpus is told whether an individual image is one of our own tanks or a foreign tank, it still does not know whether it is a tracked tank or a wheeled tank. With all such images, assessments can go in very different directions – unlike with chess and Go, where a move in a game (which is known to the corpus) is solely assessed according to the criterion of whether it is conducive to winning the game.

Thus chess and Go pursue a simple, clearly defined objective. In contrast to these two games, however, tank pictures allow for a wide variety of assessment objectives. This is typical of real situations. Real situations are always open, and in such situations, various and differing assessments can make sense and are absolutely appropriate. For the purpose of assessment, an instance (intelligence) outside the data has to establish the connection between the data and the assessment objective. This function is always linked to an instance with a certain intention.

Machine intelligence, however, lacks this intention and therefore depends on being provided with it by an objective from the outside. If the objective is as self-evident as it is in chess and Go, this is not a problem, and the assessment of the corpus can indeed be conducted by the machine itself without any human intelligence. In such unequivocal situations, machine deep learning is genuinely capable of working – indeed, even of beating human intelligence.

However, this only applies if the rules and the objective of a game are clearly defined. In all other cases, it is not an algorithm that is required but “real” intelligence, i.e. intelligence with a deliberate intention.

Conclusion

  1. Deep learning (DL) works.
  2. DL uses a corpus-based system.
  3. DL is capable of beating human intelligence in certain applications.
  4. However, DL only works in a closed system.
  5. DL only works if the objective is clear and unequivocal.

Ad 4) Closed systems are not real but are either obvious constructs (like games) or idealisations of real circumstances (= models). Such idealisations are invariably simplifications with reduced information content. They are therefore incapable of mapping reality completely.

Ad 5) The objective, i.e. the “intention”, corresponds to a subjective momentum. This subjective momentum distinguishes natural from machine intelligence. The machine must be provided with it in advance.


Overview of the AI systems

All the systems we have examined so far, including deep learning, can in essence be traced back to two methods: the rule-based method and the corpus-based method. This also applies to the systems we have not discussed to date, namely simple automata and hybrid systems, which combine the two above approaches. If we integrate these variants, we will arrive at the following overview:

A: Rule-based systems

Rule-based systems are based on calculation rules. These rules are invariably IF-THEN commands, i.e. instructions which assign a certain result to a certain input. These systems are always deterministic, i.e. a certain input always leads to the same result. Also, they are always explicit, i.e. they involve no processes that cannot be made visible, and the system is always completely transparent – at least in principle. However, rule-based systems can become fairly complex.

A1: Simple automaton (pocket calculator type)

Fig. 1: Simple automaton

Rules are also called algorithms (“Algo”) in Fig. 1. Inputs and outputs (results) need not be figures. The simple automaton distinguishes itself from other systems in that it does not require any special knowledge base, but works with a few calculation rules. Nevertheless, simple automata can be used to make highly complex calculations, too.

Perhaps you would not describe a pocket calculator as an AI system, but the differences between a pocket calculator and the more highly developed systems right up to deep learning are merely gradual in nature – i.e. precisely of the kind that is being described on this page. Complex calculations soon strike us as intelligent, particularly if we are unable to reproduce them that easily with our own brains. This is already the case with simple arithmetic operations such as divisions or root extraction, where we quickly reach our limits. Conversely, we regard face recognition as comparatively simple because we are usually able to recognise faces quite well without a computer. Incidentally, nine men’s morris is also part of the A1 category: playing it requires a certain amount of intelligence, but it is complete in itself and easily controllable with an AI program of the A1 type.
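A simple automaton of the A1 type can be sketched in a few lines: nothing but hard-wired calculation rules, fully deterministic, with no knowledge base. The operation names below are chosen for illustration only.

```python
import math

def automaton(op, x, y=None):
    """A pocket-calculator-like automaton: a few fixed calculation rules.
    Deterministic: the same input always yields the same result."""
    if op == "add":
        return x + y
    if op == "div":
        return x / y
    if op == "sqrt":
        return math.sqrt(x)
    raise ValueError("no rule for this input")

# Root extraction quickly exceeds our mental arithmetic,
# yet it is nothing but the mechanical application of a rule.
print(automaton("sqrt", 2))
```

Everything the automaton "knows" stands visibly in its rules – which is why, however complex the calculation, the intelligence remains that of the programmer.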

A2: Knowledge-based system

Fig. 2: Compiling a knowledge base (IE=Inference Engine)

These systems distinguish themselves from simple automata in that part of their rules has been outsourced to a knowledge base. Fig. 2 indicates that this knowledge base has been compiled by a human being, and Fig. 3 shows how it is applied. The intelligence is located in the rules; it originates from human beings – in the application, however, the knowledge base is capable of working on its own.

Fig. 3: Application of a knowledge-based system

The inference engine (“IE” in Figs. 2 and 3) corresponds to the algorithms of the simple automaton in Fig. 1. In principle, algorithms, the inference engine and the rules of the knowledge bases are always rules, i.e. explicit IF-THEN commands. However, these can be interwoven and nested in a variety of different ways. They can refer to figures or concepts. Everything is made by human experts.

The rules in the knowledge base are subordinate to the rules of the inference engine. The latter control the flow of the interpretation, i.e. they decide which rules of the knowledge base are to be applied and how they are to be implemented. The rules of the inference engine are the actual program that is read and executed by the computer. The rules of the knowledge base, however, are not executed by the computer directly, but indirectly through the instructions provided by the inference engine. This nesting is typical of commands, i.e. of software in computers; after all, the rules of the inference engine are not implemented directly either, but are interpreted by deeper layers of rules right down to the machine language at the core (in the kernel) of a computer. In principle, however, the rules of the knowledge base are calculation rules just like the rules of the inference engine, only in a “higher” programming language. It is an advantage if the human domain experts, i.e. the human specialists, find this programming language particularly easy and safe to read and use.
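The separation between a generic inference engine and an expert-written knowledge base can be sketched as follows. The rules and facts are invented illustrations; the point is that the engine contains no domain knowledge at all, while the knowledge base contains nothing but explicit IF-THEN rules.

```python
# A2 sketch: the domain rules live in a knowledge base written by human
# experts; a generic inference engine decides which rules to apply.
# All rules and facts here are hypothetical illustrations.

knowledge_base = [
    # IF all conditions hold THEN add the conclusion
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"bird", "cannot_fly", "swims"}, "penguin"),
]

def inference_engine(facts):
    """Forward chaining: apply knowledge-base rules until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in knowledge_base:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = inference_engine({"has_feathers", "lays_eggs", "cannot_fly", "swims"})
print("penguin" in result)  # True: two rules fired in sequence
```

Swapping in a different knowledge base changes the domain without touching the engine – the outsourcing of rules described above.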

With regard to the logic system used in inference engines, we distinguish between rule-based systems

– with a static logic (ontologies type / semantic web type),
– with a dynamic logic (concept molecules type).

For this, cf. the blog post on the three innovations of rule-based AI.

B: Corpus-based systems

Corpus-based systems are compiled in three steps (Fig. 4). In the first step, as large a corpus as possible is collected. The collection does not contain any rules, only data. Rules would be instructions; however, the data of the corpus are not instructions: they are pure data collections – texts, images, game records, etc.

Fig. 4: Compiling a corpus-based system

These data must now be assessed. As a rule, this is done by a human being. In the third step, a so-called neural network is trained on the basis of the assessed corpus. In contrast to the data corpus, the neural network is again a collection of rules like the knowledge base of the rule-based systems A. Unlike those, however, the neural network is not constructed by a human being but built and trained by the assessed corpus. Unlike the knowledge base, the neural network is not explicit, i.e. it is not readily accessible.
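The training step can be sketched with the smallest possible "neural network": a single perceptron. All features, labels and numbers below are hypothetical illustrations; what matters is that the human assessment (the labels) is what shapes the learned weights, while the data alone would not determine them.

```python
# B sketch: a minimal "neural network" (one perceptron) trained on an
# assessed corpus. Purely illustrative data and labels.

corpus = [
    # (features, human-assigned label: 1 = class A, 0 = class B)
    ((1.0, 0.0), 1),
    ((0.9, 0.2), 1),
    ((0.1, 0.9), 0),
    ((0.0, 1.0), 0),
]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

# Training: the assessed corpus adjusts the weights step by step.
for _ in range(50):
    for (x1, x2), label in corpus:
        out = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        err = label - out
        weights[0] += rate * err * x1
        weights[1] += rate * err * x2
        bias += rate * err

# Application: the corpus is no longer needed, only the trained weights.
def predict(x1, x2):
    return 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
```

Note that the resulting weights are not explicit rules anyone wrote down – the counterpart, in miniature, of the neural network's lack of transparency mentioned above.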

Fig. 5: Application of a corpus-based system

In their applications, both neural networks and rule-based systems are fully capable of working without human beings. Even the corpus is no longer necessary. All the knowledge is located in the algorithms of the neural network. In addition, neural networks are also quite capable of interpreting poorly structured contents such as a mess of pixels (i.e. images), where rule-based systems (A type) very quickly reach their limits. Conversely, corpus-based systems are less successful with complex outputs, i.e. the number of possible output results must not be too large, since if it is, the accuracy rate will suffer. Best suited here are binary outputs of the “our tank – foreign tank” type (cf. preceding post) or of the “male author – female author” type in the assessment of Twitter texts. For such tasks, corpus-based systems are vastly superior to rule-based ones. This superiority quickly declines, however, when it comes to finely differentiated outputs.

Three subtypes of corpus-based AI

The three subtypes differ from each other with regard to who or what assesses the corpus.

Fig. 6: The three types of corpus-based system and how they assess their corpus

B1: Pattern recognition type

I described this type (top in Fig. 6) in the tank example. The corpus is assessed by a human expert.

B2: Search engine type

Cf. middle diagram in Fig. 6: in this type, the corpus is assessed by the customers. I described such a system in the search engine post.

B3: Deep learning type

In contrast to the above types, this one (bottom in Fig. 6) does not require a human being to train or assess the neural network. The assessment results solely from the way in which the games proceed. The fact that deep learning is only possible under very restricted conditions is explained in the post on games and intelligence.

C: Hybrid systems

Of course the above-mentioned methods (A1-A2, B1-B3) can also be combined in practice.

A face identification system, for instance, may work as follows: in the images provided by a surveillance camera, a corpus-based system (B1) recognises faces as such and, within the faces, the crucial shapes of eyes, mouth, etc. Subsequently, a rule-based system (A2) uses the points marked by B1 to calculate the proportions of eyes, nose, mouth, etc. which characterise an individual face. Such a combination of corpus-based and rule-based systems allows individual faces to be recognised in images. The first step would not be possible for an A2 system; the second would be far too complicated and inaccurate for a B1 system. A hybrid system makes both possible.
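The two-stage division of labour described above can be sketched as follows. The landmark detector stands in for the corpus-based B1 stage – in reality a trained network, here simply a stub with invented coordinates – while the proportion calculation is an explicit A2-style rule:

```python
# Hypothetical sketch of the hybrid pipeline described above.
# Stage B1 (corpus-based) is stubbed out: a trained network would
# locate the landmarks; here we just return invented coordinates.
def detect_landmarks(image):
    return {"left_eye": (30, 40), "right_eye": (70, 40),
            "nose": (50, 60), "mouth": (50, 80)}

# Stage A2 (rule-based): explicit rules compute the proportions
# that characterise an individual face.
def face_signature(landmarks):
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    mx, my = landmarks["mouth"]
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    eye_mouth = ((mx - (lx + rx) / 2) ** 2 + (my - (ly + ry) / 2) ** 2) ** 0.5
    return round(eye_dist / eye_mouth, 3)    # a scale-invariant ratio

signature = face_signature(detect_landmarks(None))
print(signature)    # prints: 1.0
```

The point of the split is visible even in this stub: the fuzzy pixel-to-landmark step is left to the corpus-based stage, while the precise geometry is handled by explicit, inspectable rules.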


In the following blog post, I will answer the question as to where the intelligence is located in all these systems. But you have probably long found the answer yourself.

This is a blog post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford

How real is the probable?

AI can only see whatever is in the corpus

Corpus-based systems are on the road to success. They are “disruptive”, i.e. they change our society substantially within a very short period of time – reason enough for us to recall how these systems really work.

In previous blog posts I explained that these systems consist of two parts, namely a data corpus and a neural network. Of course, the network is unable to recognise anything that is not already in the corpus. The corpus's blind spots automatically carry over into the neural network, and the AI is ultimately only able to produce what is already present in the data of the corpus. The same applies to incorrect input in the corpus: it will reappear in the results of the AI and, in particular, lessen their accuracy.

When we bear in mind how this kind of AI works, this fact is banal, since the learning corpus is the basis of this kind of artificial intelligence. Only what is in the corpus can appear in the results, and errors and imprecision in the corpus automatically diminish the validity of the results.

What is less banal is another aspect, which is also essentially tied up with the artificial intelligence of neural networks. It is the role played by probability. Neural networks work through probabilities. What precisely does this mean, and what effects does it have in practice?

Neural networks make assessments according to probability

Starting point

Let’s look again at our search engine from the preceding post. A customer of our search engine enters a search string. Other customers before him have already entered the same search string. We therefore suggest those websites to the customer which have been selected by the earlier customers. Of course we want to place those at the top of the customer’s list which are of most interest to him (cf. preceding post). To be able to do so, we assess all the customers according to their previous queries. How we do this in detail is naturally our trade secret; after all, we want to gain an edge over our competitors. No matter how we do this, however – and no matter how our competitors do it – we end up weighting previous users’ suggestions. On the basis of this weighting process, we select the proposals which we present to our enquirer and the order in which we display them. Here, probabilities are the crucial factor.

Example

Let us assume that enquirer A asks our search engine a question, and the two customers B and C have already asked the same question as A and left their choice, i.e. the addresses of the websites selected by them, in our well-stocked corpus. Which selection should we now prefer to present to A, that of B or that of C?

Now we have a look at the assessments of the three customers: to what extent do B’s and C’s profiles correspond with A’s profile? Let’s assume that we arrive at the following correspondences:

Customer B:  80%
Customer C: 30%

Naturally we assume that B corresponds better with A than C and that A is therefore served better by B’s answers.

But is this truly the case?

The question is justified, for after all, there is no complete correspondence with either of the two other users. It may be the case that it is precisely the 30% with which A and C correspond which concerns A’s current query. In that case, it would be unfortunate to give B’s answer priority, particularly if the 80% correspondence with B concerns completely different fields which have nothing to do with the current query. Admittedly, this deviation from probability is improbable in a specific case, but it is not impossible – and this is the actual crux of probabilities.

Now in this case, we reasonably opted for B, and we may be certain that probability is on our side. In terms of our business success, we may confidently rely on probability. Why?

This is connected with the law of large numbers. In an individual case as described above, C’s answer may indeed be the better one. In most cases, however, B’s answers will be more to our customer’s liking, and we are well advised to provide him with those answers. This is the law of large numbers. Essentially, it is the basis of the phenomenon of probability:

In an individual case, something improbable may happen; across many cases, however, we may rely on the fact that, as a rule, what is probable is what will happen.
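The effect of the law of large numbers is easy to simulate. In the following toy model (probabilities invented to match the example above), B's answer suits the enquirer with probability 0.8 and C's with probability 0.3; any single query may turn out otherwise, but over many queries the relative frequencies settle close to the probabilities:

```python
import random

random.seed(1)   # fixed seed so the simulation is reproducible

# Invented model of the example above: B's answer helps the enquirer
# with probability 0.8, C's with probability 0.3.
p_b, p_c = 0.8, 0.3

def helped(p):
    """A single query: the improbable can happen in any one case."""
    return random.random() < p

# ... but over many queries the law of large numbers takes over.
n = 10_000
b_hits = sum(helped(p_b) for _ in range(n))
c_hits = sum(helped(p_c) for _ in range(n))

print(b_hits / n, c_hits / n)   # close to 0.8 and 0.3
```

In any single run of `helped`, C may succeed where B fails; over ten thousand queries, however, preferring B is reliably the better business decision – which is exactly the conclusion drawn below.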

Conclusion for our search engine
  1. If we are interested in being right in most cases, we stick to probability.
  2. At the same time, we accept that we may miss the target in rare cases.

Conclusion for corpus-based AI in general

What applies to our search engine generally applies to any corpus-based AI since all these systems work on the basis of probability. Thus the conclusion for corpus-based AI is as follows:

  1. If we are interested in being right in most cases, we stick to probability.
  2. At the same time, we accept that we may miss the target in rare cases.

We must acknowledge that corpus-based AI has an inherent weak point – a kind of Achilles’ heel in an otherwise highly potent technology. We should therefore keep a careful eye on this heel:

  1. Incidence:
    When is the error most likely to occur, when can it be neglected? This is connected with the size and quality of the corpus, but also with the situation in which the AI is used.
  2. Consequence:
    What are the consequences if rare cases are neglected?
    Can a procedure that permanently averages and considers only the most probable solutions be called intelligent?
  3. Interdependencies:
    With regard to the fundamental interdependencies, the connection with the concept of entropy is of interest: the second law of thermodynamics states that in an isolated system, what happens is always what is more probable, and thermodynamics measures this probability with the variable S, which it defines as entropy.
    What is probable is what happens, both in thermodynamics and in our search engine – but how does a natural intelligence choose?

The next blog post will be about games and intelligence, specifically about the difference between chess and a Swiss card game.

This is a post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford

Rule-based AI: where is the intelligence situated?

Two AI variants: rule-based and corpus-based

The two AI variants mentioned in previous blog posts are still topical today, and they have registered some remarkable successes. The two differ from each other not least in where precisely their intelligence is situated. Let’s first have a look at the rule-based system.

Structure of a rule-based system

In the Semfinder company, we used a rule-based system. I drew the following sketch of it in 1999:

Semantic interpretation system

Green: data
Yellow: software
Light blue: knowledge ware
Dark blue: knowledge engineer

The sketch consists of two rectangles, which represent different locations. The rectangle bottom left shows what happens in the hospital; the rectangle top right additionally shows what goes on in knowledge engineering.

In the hospital, our coding program reads the doctors’ free texts, interprets them and converts them into concept molecules, and allocates the relevant codes to them with the help of a knowledge base. The knowledge base contains the rules with which the texts are interpreted. In our company, these rules were drawn up by people (human experts). The rules are comparable to the algorithms of a software program, apart from the fact that they are written in a “higher” programming language to ensure that non-IT specialists, i.e. the domain experts, who in our case are doctors, are able to establish them easily and maintain them safely. For this purpose, they use the knowledge base editor, which enables them to view the rules, to test them, to modify them or to establish completely new ones.

Where, then, is the intelligence situated?

It is situated in the knowledge base – but it is not actually a genuine intelligence. The knowledge base is incapable of thinking on its own; it only carries out what a human being has instilled into it. I have therefore never described our system as intelligent. At the very least, intelligence means that new things can be learnt, but the knowledge base learns nothing. If a new word crops up or if a new coding aspect is integrated, then this is not done by the knowledge base but by the knowledge engineer, i.e. a human being. All the rest (hardware, software, knowledge base) only carry out what they have been prescribed to do by human beings. The intelligence in our system was always and exclusively a matter of human beings – i.e. a natural rather than an artificial intelligence.

Is this different in the corpus-based method? In the following post, we will have a closer look at a corpus-based system.

 

This is a post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford

Intelligence in the search engine

How does intelligence get into a search engine?

Let’s assume that you are building a search engine. In the process, you do not want to avail yourself of the services of expensive and not always faultless domain experts, but to build the search engine solely with sufficient data servers (the hardware for the corpus) and ingenious software. In principle, you will use a neural network with a corpus. How do you inject intelligence into your system?

Trick 1: Let the customers train the corpus

As in the tank AI of previous blog posts, a search engine depends on categorisations – this time provided by the customers, who allocate input texts (search strings) to web addresses that might be interesting for their searches. To find the relevant addresses, your system is again based on a learning corpus, which this time consists of your previous customers’ search inputs. The web addresses which previous customers have clicked from among those offered to them are qualified as positive hits in the corpus. For new queries – also from other customers – you simply indicate the addresses which have received the most clicks to date. They can’t be all that bad, after all, and the system becomes more refined with every query and the click that follows it. And it still applies that the bigger the corpus, the more precise the system.
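Trick 1 can be sketched in a few lines of Python (the queries and addresses are invented): every click enlarges the corpus, and new customers are served the addresses with the most clicks first.

```python
from collections import defaultdict

# The "corpus": for each search string, how often each address
# has been clicked by earlier customers (invented toy data).
clicks = defaultdict(lambda: defaultdict(int))

def record_click(query, url):
    clicks[query][url] += 1      # the customer trains the corpus

def suggest(query):
    # addresses with the most clicks first
    return sorted(clicks[query], key=clicks[query].get, reverse=True)

# Earlier customers search and click:
record_click("ai blog", "example.org/a")
record_click("ai blog", "example.org/b")
record_click("ai blog", "example.org/b")

# A new customer with the same query gets the popular address first:
print(suggest("ai blog"))    # prints: ['example.org/b', 'example.org/a']
```

Note that no rule about *why* an address is relevant appears anywhere in the code: the categorisation comes entirely from the customers' clicks, exactly as described above.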

Again, the categorisations originate outside the system as they are provided by people who have assessed the selection offered to them by the search engine by placing their clicks according to their preferences. They did so

  • with their human intelligence and
  • in line with their individual interests.

The second point is particularly interesting. We might have a closer look at this later.

Trick 2: Assess the customers at the same time

Not every categorisation by every customer is equally relevant. As a search engine operator, you can optimise in two directions:

  • Assess the assessors:
    You know all your customers’ inputs, so you can easily find out how reliable these customers’ categorisations, i.e. the web addresses they clicked in connection with their search strings, are. Not all the customers are equally proficient in this respect. The more other customers click the same web address for the same search string, the safer the categorisation will also be for future queries. You can now use this information in order to weight your customers: the customer who has so far had the most reliable categorisations, i.e. the one who most often chose what the others also chose, is given most weight. A customer who was followed by fewer others will be regarded as less reliable. This weighting process will increase the probability that the future search results will rate those websites higher which are of interest to most customers.
  • Assess the searchers:
    Not every search engine user has the same interests. You are able to take this into consideration since you know all their previous inputs. You can make use of these inputs to generate a profile of this customer. This will naturally enable you to select the search results for him or her accordingly. Assessors with a profile similar to the searcher’s will weight the potential addresses similarly, too, and you will be able to personalise the search results even more in the customer’s interest.

For you as a search engine operator, it is in any case worth generating a profile of all your customers, for the improvement in the quality of the search suggestions alone.
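As a toy illustration of this profiling (the profiles, the similarity measure and the addresses are all invented – a real operator's weighting is, as noted above, a trade secret), one might weight each assessor's clicked address by the overlap between their profile and the searcher's:

```python
# Hypothetical sketch of Trick 2: weight each assessor's clicked
# address by how similar their profile is to the searcher's.
def rank_for(searcher_profile, assessments):
    scores = {}
    for profile, url in assessments:
        # similarity = shared interests / all interests (Jaccard index)
        shared = len(searcher_profile & profile)
        similarity = shared / max(len(searcher_profile | profile), 1)
        scores[url] = scores.get(url, 0.0) + similarity
    return sorted(scores, key=scores.get, reverse=True)

# Invented data: assessors' interest profiles and their chosen address.
assessments = [
    ({"chess", "ai"}, "example.org/deep-learning"),
    ({"cooking"}, "example.org/recipes"),
    ({"ai", "games"}, "example.org/deep-learning"),
]

print(rank_for({"ai", "chess", "games"}, assessments))
# prints: ['example.org/deep-learning', 'example.org/recipes']
```

Assessors whose profiles resemble the searcher's dominate the ranking – which is precisely the mechanism that personalises the results, and, as discussed below, also the seed of bubble formation.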

Consequences

  1. Search engines become more precise the more they are used.
    This applies to all the corpus-based systems, i.e. to all technologies with neural networks: the larger their corpus, the higher their precision. They can be capable of amazing feats.
  2. A remarkable feedback effect can be observed in this connection: the bigger the corpus, the better the quality of the search engine, which is why it is used more often, which in turn enlarges its corpus and thus boosts its attractiveness in comparison with competitors. This effect inevitably results in such monopolies as are typical of all applications of corpus-based software.
  3. All the categorisations were primarily made by human beings. The basis of intelligence – the categorising inputs in the corpus – is still provided by human beings. In the case of search engines, these are all the individual users who in this way input their knowledge into the corpus. Which means that the intelligence in AI is not all that artificial after all.
  4. The tendency towards bubble formation is inherent in corpus-based systems: if search engines generate profiles of their customers, they can offer them better search results. In a self-referential loop, this inevitably leads to bubble formation: users with similar views are brought increasingly closer together by the search engines since in this way, these users are provided with the search results which correspond most closely to their individual interests and views. They will come across deviating views less and less often.
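The feedback effect in point 2 can be imitated with a toy "rich-get-richer" simulation (all numbers invented): each new query goes to an engine with probability proportional to its current corpus size, and enlarges that corpus in turn.

```python
import random

random.seed(0)   # fixed seed for reproducibility

# Toy model of the feedback effect: each new query picks one of
# three competing engines with probability proportional to its
# current corpus size, then enlarges that corpus by being stored.
corpus_size = [100, 100, 100]

for _ in range(10_000):                 # new queries arrive
    total = sum(corpus_size)
    r = random.uniform(0, total)
    for i, size in enumerate(corpus_size):
        r -= size
        if r <= 0:
            corpus_size[i] += 1         # the query enlarges the corpus
            break

print(sorted(corpus_size))              # the shares drift apart
```

Even though all three engines start out identical, small random differences in corpus size are amplified by the loop – an illustration of why corpus-based applications tend towards the monopolies mentioned in point 2.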

The next post will be about a further important aspect of corpus-based systems, namely the role of probability.

This is a post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford