
Now where in artificial intelligence is the intelligence located?


In a nutshell: the intelligence is always located outside.


a) Rule-based systems

The rules and algorithms of these systems are created by human beings, and no one will ascribe real intelligence to a pocket calculator. The same also applies to all other rule-based systems, however refined they may be. The rules are devised by human beings.

b) Conventional corpus-based systems (neural networks)

These systems always use an assessed corpus, i.e. a collection of data which have already been evaluated. This assessment determines according to which criteria each individual corpus entry is classified, and this classification then constitutes the real knowledge in the corpus.

However, the classification cannot be derived from the data of the corpus itself but is always introduced from the outside. Not only can the allocation of a data entry to a class only be done from the outside; the classes themselves are not determined by the data of the corpus either, but are provided from the outside – ultimately by human beings.

The intelligence of these systems is always located in the assessment of the data pool, i.e. the allocation of the data objects to predefined classes, and this is done from the outside, by human beings. The neural network which is thus created does not know how the human brain has found the evaluations required for it.

c) Search engines

Search engines constitute a special type of corpus-based system and are based on the fact that many people use a certain search engine and decide with their clicks which internet links can be allocated to the search string. Ultimately, search engines only average the traces which the many users leave with their context knowledge and their intentions. Without the human brains of the users who have used the search engines so far, the search engines would not know where to point new queries.

d) Game programs (chess, Go, etc.) / deep learning

This is where things become interesting, for in contrast to the other corpus-based systems, such programs do not require any human beings to assess the corpus from the outside; the corpus consists of the moves of games played previously. Does this mean, then, that such systems have an intelligence of their own?

Like the pattern recognition programs (b) and the search engines (c), the Go program has a corpus which in this case contains all the moves of the test games played before. The difference from the classic AI systems consists in the fact that the assessment of the corpus (i.e. the moves of the games) is already defined by the success in the actual game. Thus no human being is required who has to make a distinction between foreign tanks and our own tanks in order to provide the template for the neural network. The game’s success can be directly recognised by the machine, i.e. the algorithm itself; human beings are not required.

With classic AI systems, this is not the case, and a human being who assesses the individual corpus items is indispensable. Added to this, the assessment criterion is not given unequivocally, as it is with Go. Tank images can be categorised in completely different ways (wheeled/tracked tanks, damaged/undamaged tanks, tanks in towns/open country, in black and white/coloured pictures, etc.). This leaves the interpretation options for the assessment wide open. For all these reasons, an automatic categorisation is impossible with classic AI systems, which therefore always require an assessment of the learning corpus by human experts.

In the case of chess and Go, it is precisely this that is not required. Chess and Go are artificially designed and completely closed systems and thus indeed completely determined in advance. The board, the rules and the objective of the game – and thus also the assessment of the individual moves – are given automatically. Therefore no additional intelligence is required; instead, an automatism can play test games with itself within a predefined, closed setting and in this way attain the predefined objective better and better until it is better than any human being.
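To make this concrete, here is a deliberately tiny, hedged sketch in Python, with an invented toy game standing in for Go: because the game is completely defined, the program can play against itself and score every game from the rules alone, with no human assessment anywhere in the loop.

```python
# Highly simplified sketch of self-play learning in a closed, fully defined game.
# The "assessment" of each move comes from the game's own outcome, not from a human.
import random

# Toy game: two players alternately add 1 or 2; whoever reaches exactly 10 wins.
def play_game(value_table):
    state, history, player = 0, [], 0
    while state < 10:
        moves = [m for m in (1, 2) if state + m <= 10]
        # Mostly pick the move the table currently values highest, sometimes explore.
        if random.random() < 0.2:
            move = random.choice(moves)
        else:
            move = max(moves, key=lambda m: value_table.get((state, m), 0.0))
        history.append((player, state, move))
        state += move
        player = 1 - player
    winner = 1 - player          # the player who made the last move wins
    return winner, history

value_table = {}                 # (state, move) -> learned value
for _ in range(5000):
    winner, history = play_game(value_table)
    for player, state, move in history:
        # Reward the winner's moves, penalise the loser's moves.
        delta = 0.1 if player == winner else -0.1
        value_table[(state, move)] = value_table.get((state, move), 0.0) + delta

# Immediately winning moves (e.g. adding 2 at state 8) end up with high values.
print(value_table.get((8, 2), 0.0), value_table.get((9, 1), 0.0))
```

The point of the sketch is only that the feedback signal (who won) is generated by the closed rule set itself; nothing here resembles the scale or sophistication of a real Go program.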

In the case of tasks which have to be solved not in an artificial game setting but in reality, however, the permitted moves and objectives are not completely defined, and there is leeway for strategy. An automatic system like deep learning cannot be applied in open, i.e. real situations.

It goes without saying that in practice, considerable intelligence is required to program victory in Go and other games, and we may well admire the intelligence of the engineers at Google, etc., for that. Yet once again it is their human intelligence which enables them to develop the programs, not an intelligence which the programs designed by them are able to develop themselves.

Conclusion

AI systems can be very impressive and very useful, but they never have an intelligence of their own.

Artificial Intelligence (Overview)

Is AI dangerous or useful?

This question is currently the subject of extensive debate. The aim here is not to repeat well-known opinions, but to shed light on the basics of the technology that you are almost certainly unaware of. Or do you know where AI gets its intelligence from?

For a quarter of a century, I have been working with ‘intelligent’ IT systems, and I am astonished that we ascribe real intelligence to artificial intelligence at all. That is exactly what it does not have. Its intelligence always comes from humans, who not only provide the data but also have to evaluate their meaning before the AI can use them. Only then can AI surprise us with its impressive performance and countless useful applications in a wide variety of areas. How does it achieve this?

In 2019, I started a blog series on this topic, an overview of which you can see below. In 2021, I then summarised the articles in a book entitled “Wie die künstliche Intelligenz zur Intelligenz kommt” (in German). See below a list of the blog posts which form the basis of the book.

While the book is in German, the blog series is available both in German and English.


Latest Posts about AI


Earlier Posts (basis of the KI-book)

Rule-based or corpus-based?

These are the two fundamentally different methods of computer intelligence. They can be based either on rules or on a collection of data (a corpus). In the introductory post, I present the two with the help of two characteristic anecdotes:


With regard to success, the corpus-based systems have obviously outstripped the rule-based ones:


The rule-based systems had a more difficult time of it. What are their challenges? How can they overcome their weaknesses? And where is their intelligence situated inside them?


How are corpus-based systems set up? How is their corpus compiled and assessed? What are neural networks all about? And what are the natural limits of corpus-based systems?


Next, we’ll have a look at search engines, which are also corpus-based systems. How do they arrive at their proposals? Where are their limits and dangers? Why, for instance, is it inevitable that bubbles are formed?


Is a program capable of learning without human beings providing it with useful pieces of advice? It appears to work with deep learning. To understand this, we first compare a simple card game with chess: what requires more intelligence? Surprisingly, it becomes clear that for a computer, chess is the simpler game.

With the help of the general conditions of the board games Go and chess, we recognise under what conditions deep learning works.


In the following blog post, I’ll provide an overview of the AI types known to me. I’ll draw a brief outline of their individual structures and of the differences in the way they work.

So where is the intelligence?


The considerations reveal what distinguishes natural intelligence from artificial intelligence:


AI systems only show their capabilities when the task is clear and simple. As soon as the question becomes complex, they fail. Or they fib by arranging beautiful sentences found in their treasure trove of data in such a way that they sound intelligent (ChatGPT, LaMDA). They do not work with logic, but with statistics, i.e. with probability. But is what appears to be true always true?

The weaknesses necessarily follow from the design principle of AI. Further articles deal with this:

Overview of the AI systems

All the systems we have examined so far, including deep learning, can in essence be traced back to two methods: the rule-based method and the corpus-based method. This also applies to the systems we have not discussed to date, namely simple automata and hybrid systems, which combine the two above approaches. If we integrate these variants, we will arrive at the following overview:

A: Rule-based systems

Rule-based systems are based on calculation rules. These rules are invariably IF-THEN commands, i.e. instructions which assign a certain result to a certain input. These systems are always deterministic, i.e. a certain input always leads to the same result. Also, they are always explicit, i.e. they involve no processes that cannot be made visible, and the system is always completely transparent – at least in principle. However, rule-based systems can become fairly complex.
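As a minimal illustration (the example rules are invented, not taken from any real system), such explicit, deterministic IF-THEN rules might look like this in program code:

```python
# Minimal sketch of a rule-based system: every rule is an explicit IF-THEN
# instruction written by a human, and the same input always yields the same output.

def classify_triangle(a: float, b: float, c: float) -> str:
    if a == b == c:                       # IF all sides are equal THEN ...
        return "equilateral"
    if a == b or b == c or a == c:        # IF exactly two sides are equal THEN ...
        return "isosceles"
    return "scalene"                      # OTHERWISE ...

print(classify_triangle(3, 3, 3))  # always "equilateral" - deterministic
print(classify_triangle(3, 4, 5))  # always "scalene"
```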

A1: Simple automaton (pocket calculator type)

Fig. 1: Simple automaton

Rules are also called algorithms (“Algo”) in Fig. 1. Inputs and outputs (results) need not be numbers. The simple automaton distinguishes itself from other systems in that it does not require any special knowledge base, but works with a few calculation rules. Nevertheless, simple automata can be used to make highly complex calculations, too.

Perhaps you would not describe a pocket calculator as an AI system, but the differences between a pocket calculator and the more highly developed systems right up to deep learning are merely gradual in nature – i.e. precisely of the kind that is being described on this page. Complex calculations soon strike us as intelligent, particularly if we are unable to reproduce them that easily with our own brains. This is already the case with simple arithmetic operations such as divisions or root extraction, where we quickly reach our limits. Conversely, we regard face recognition as comparatively simple because we are usually able to recognise faces quite well without a computer. Incidentally, nine men’s morris is also part of the A1 category: playing it requires a certain amount of intelligence, but it is complete in itself and easily controllable with an AI program of the A1 type.

A2: Knowledge-based system

Fig. 2: Compiling a knowledge base (IE=Inference Engine)

These systems distinguish themselves from simple automata in that part of their rules have been outsourced to a knowledge base. Fig. 2 indicates that this knowledge base has been compiled by a human being, and Fig. 3 shows how it is applied. The intelligence is located in the rules; it originates from human beings – in the application, however, the knowledge base is capable of working on its own.

Fig. 3: Application of a knowledge-based system

The inference engine (“IE” in Figs. 2 and 3) corresponds to the algorithms of the simple automaton in Fig. 1. In principle, the algorithms, the inference engine and the rules of the knowledge base are always rules, i.e. explicit IF-THEN commands. However, these can be interwoven and nested in a variety of different ways. They can refer to numbers or concepts. Everything is created by human experts.

The rules in the knowledge base are subordinate to the rules of the inference engine. The latter control the flow of the interpretation, i.e. they decide which rules of the knowledge base are to be applied and how they are to be implemented. The rules of the inference engine are the actual program that is read and executed by the computer. The rules of the knowledge base, however, are not executed directly by the computer, but indirectly through the instructions provided by the inference engine. This nesting is typical of commands, i.e. of software in computers; after all, even the rules of the inference engine are not executed directly but are interpreted by deeper rules right down to the machine language at the core (in the kernel) of the computer. In principle, however, the rules of the knowledge base are calculation rules just like the rules of the inference engine, but in a “higher” programming language. It is an advantage if the human domain experts, i.e. the human specialists, find this programming language particularly easy and safe to read and use.
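A hedged sketch of this division of labour (the rules and facts are invented toy examples): the inference engine is generic program code, while the domain knowledge lives in a separate, human-readable rule base that the engine reads and applies.

```python
# Sketch of a knowledge-based system: the inference engine is generic code,
# while the domain knowledge is kept in a separate, human-readable rule base.

# Knowledge base (invented example rules): IF all conditions hold THEN add the conclusion.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def inference_engine(facts):
    # Forward chaining: keep applying rules until no new conclusions can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(inference_engine({"has_fever", "has_cough", "short_of_breath"}))
# -> also derives "possible_flu" and "see_doctor" from the rule base
```

Note that the domain experts only edit RULES; the engine itself never changes.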

With regard to the logic system used in inference engines, we distinguish between rule-based systems

– with a static logic (ontologies type / semantic web type),
– with a dynamic logic (concept molecules type).

For this, cf. the blog post on the three innovations of rule-based AI.

B: Corpus-based systems

Corpus-based systems are compiled in three steps (Fig. 4). In the first step, a corpus that is as large as possible is collected. The collection does not contain any rules, only data. Rules would be instructions; the data of the corpus, however, are not instructions: they are pure data collections – texts, images, game records, etc.

Fig. 4: Compiling a corpus-based system

These data must now be assessed. As a rule, this is done by a human being. In the third step, a so-called neural network is trained on the basis of the assessed corpus. In contrast to the data corpus, the neural network is again a collection of rules like the knowledge base of the rule-based systems A. Unlike those, however, the neural network is not constructed by a human being but built and trained by the assessed corpus. Unlike the knowledge base, the neural network is not explicit, i.e. it is not readily accessible.
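To make the three steps tangible, here is a hedged toy sketch using scikit-learn as one possible toolkit; the texts and labels are invented placeholders, not a real training corpus:

```python
# Sketch of the three steps: (1) collect a corpus, (2) have humans assess it,
# (3) train a neural network on the assessed corpus (invented toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# Step 1: the raw corpus - just data, no rules.
corpus = ["tracked vehicle in forest", "wheeled truck on road",
          "tank in open country", "car parked in town"]

# Step 2: the assessment - supplied from outside, by a human.
labels = ["tank", "no tank", "tank", "no tank"]

# Step 3: train the network on the assessed corpus.
vectorizer = CountVectorizer().fit(corpus)
features = vectorizer.transform(corpus)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(features, labels)

# In application, the corpus is no longer needed - only the trained network.
print(net.predict(vectorizer.transform(["tracked tank in town"])))
```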

Fig. 5: Application of a corpus-based system

In their applications, both neural networks and rule-based systems are fully capable of working without human beings. Even the corpus is no longer necessary. All the knowledge is located in the algorithms of the neural network. In addition, neural networks are also quite capable of interpreting poorly structured contents such as a mess of pixels (i.e. images), where rule-based systems (A type) very quickly reach their limits. In contrast to these, however, corpus-based systems are less successful with complex outputs, i.e. the number of possible output results must not be too large since if it is, the accuracy rate will suffer. What are best suited here are binary outputs of the “our tank – foreign tank” type (cf. preceding post) or of “male author – female author” in the assessment of Twitter texts. For such tasks, corpus-based systems are vastly superior to rule-based ones. This superiority quickly declines, however, when it comes to finely differentiated outputs.

Three subtypes of corpus-based AI

The three subtypes differ from each other with regard to who or what assesses the corpus.

Fig. 6: The three types of corpus-based system and how they assess their corpus

B1: Pattern recognition type

I described this type (top in Fig. 6) in the tank example. The corpus is assessed by a human expert.

B2: Search engine type

Cf. middle diagram in Fig. 6: in this type, the corpus is assessed by the customers. I described such a system in the search engine post.

B3: Deep learning type

In contrast to the above types, this one (bottom in Fig. 6) does not require a human being to train or assess the neural network. The assessment results solely from the way in which the games proceed. The fact that deep learning is only possible in very restricted conditions is explained in the post on games and intelligence.

C: Hybrid systems

Of course the above-mentioned methods (A1-A2, B1-B3) can also be combined in practice.

Thus a face identification system, for instance, may work in such a way that a corpus-based system B1 recognises faces as such in the images provided by a surveillance camera, and within those faces the crucial shapes of eyes, mouth, etc. Subsequently, a rule-based system A2 uses the points marked by B1 to calculate the proportions of eyes, nose, mouth, etc. which characterise an individual face. Such a combination of corpus- and rule-based systems allows individual faces to be recognised in images. The first step would not be possible for an A2 system; the second step would be far too complicated and inaccurate for a B1 system. A hybrid system makes it possible.
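A hedged sketch of this division of labour (all functions and coordinates are invented placeholders, not a real face-recognition pipeline):

```python
# Sketch of the hybrid division of labour (all functions and coordinates are
# invented placeholders, not a real face-recognition pipeline).

def detect_landmarks(image):
    # B1 (corpus-based): a trained neural network would locate eyes, nose and mouth.
    # Here we only pretend and return fixed pixel coordinates.
    return {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "mouth": (50, 80)}

def face_signature(landmarks):
    # A2 (rule-based): explicit, human-written rules turn the landmark points
    # into proportions that characterise an individual face.
    eye_distance = landmarks["right_eye"][0] - landmarks["left_eye"][0]
    nose_to_mouth = landmarks["mouth"][1] - landmarks["nose"][1]
    return round(nose_to_mouth / eye_distance, 3)

landmarks = detect_landmarks(image=None)  # stand-in for a camera frame
print(face_signature(landmarks))          # 0.5 for these invented coordinates
```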


In the following blog post, I will answer the question as to where the intelligence is located in all these systems. But you have probably long since found the answer yourself.

This is a blog post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford

How real is the probable?

AI can only see whatever is in the corpus

Corpus-based systems are on the road to success. They are “disruptive”, i.e. they change our society substantially within a very short period of time – reason enough for us to recall how these systems really work.

In previous blog posts I explained that these systems consist of two parts, namely a data corpus and a neural network. Of course, the network is unable to recognise anything that is not already in the corpus. The blindness of the corpus automatically continues in the neural network, and the AI is ultimately only able to produce what is already present in the data of the corpus. The same applies to incorrect input in the corpus: this will reappear in the results of the AI and, in particular, lessen their accuracy.

When we call to mind how this kind of AI works, this fact is banal, since the learning corpus is the basis of its artificial intelligence. Only that which is in the corpus can appear in the results, and errors and imprecision in the corpus automatically diminish the validity of the results.

What is less banal is another aspect, which is also essentially tied up with the artificial intelligence of neural networks. It is the role played by probability. Neural networks work through probabilities. What precisely does this mean, and what effects does it have in practice?

Neural networks make assessments according to probability

Starting point

Let’s look again at our search engine from the preceding post. A customer of our search engine enters a search string. Other customers before him have already entered the same search string. We therefore suggest those websites to the customer which have been selected by the earlier customers. Of course we want to place those at the top of the customer’s list which are of most interest to him (cf. preceding post). To be able to do so, we assess all the customers according to their previous queries. How we do this in detail is naturally our trade secret; after all, we want to gain an edge over our competitors. No matter how we do this, however – and no matter how our competitors do it – we end up weighting previous users’ suggestions. On the basis of this weighting process, we select the proposals which we present to our enquirer and the order in which we display them. Here, probabilities are the crucial factor.

Example

Let us assume that enquirer A asks our search engine a question, and the two customers B and C have already asked the same question as A and left their choice, i.e. the addresses of the websites selected by them, in our well-stocked corpus. Which selection should we now prefer to present to A, that of B or that of C?

Now we have a look at the assessments of the three customers: to what extent do B’s and C’s profiles correspond with A’s profile? Let’s assume that we arrive at the following correspondences:

Customer B:  80%
Customer C: 30%

Naturally we assume that B corresponds better with A than C and that A is therefore served better by B’s answers.

But is this truly the case?

The question is justified, for after all, there is no complete correspondence with either of the two other users. It may be the case that it is precisely the 30% with which A and C correspond which concerns A’s current query. In that case, it would be unfortunate to give B’s answer priority, particularly if the 80% correspondence with B concerns completely different fields which have nothing to do with the current query. Admittedly, this deviation from probability is improbable in a specific case, but it is not impossible – and this is the actual crux of probabilities.

Now in this case, we reasonably opted for B, and we may be certain that probability is on our side. In terms of our business success, we may confidently rely on probability. Why?

This is connected with the law of large numbers. In an individual case as described above, C’s answer may indeed be the better one. In most cases, however, B’s answers will be more to our customer’s liking, and we are well advised to provide him with that answer. This is the law of large numbers. Essentially, it is the basis of the phenomenon of probability:

In an individual case, something improbable may happen; in many cases, however, we may rely on it that usually what is probable is what will happen.
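A small, hedged simulation of the B-versus-C example (treating the invented 80% and 30% correspondences as the probability that the respective customer’s links satisfy our enquirer): in a single case B may miss, but over many queries betting on B wins clearly.

```python
# Toy simulation of the B-vs-C example: profile correspondence as the
# probability that the other customer's chosen link satisfies our enquirer.
import random

random.seed(0)
p_b, p_c = 0.8, 0.3   # correspondence of customers B and C with enquirer A

single_query = random.random() < p_b  # one individual case: B may still miss
print("B helpful in this single case:", single_query)

# Law of large numbers: over many queries, recommending B's links wins clearly.
n = 100_000
hits_b = sum(random.random() < p_b for _ in range(n))
hits_c = sum(random.random() < p_c for _ in range(n))
print(f"B helpful in {hits_b / n:.1%} of queries, C in {hits_c / n:.1%}")
```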

Conclusion for our search engine
  1. If we are interested in being right in most cases, we stick to probability.
  2. At the same time, we accept that we may miss the target in rare cases.

Conclusion for corpus-based AI in general

What applies to our search engine generally applies to any corpus-based AI since all these systems work on the basis of probability. Thus the conclusion for corpus-based AI is as follows:

  1. If we are interested in being right in most cases, we stick to probability.
  2. At the same time, we accept that we may miss the target in rare cases.

 We must acknowledge that corpus-based AI has an inherent weak point, a kind of Achilles’ heel of an otherwise highly potent technology. We should therefore continue to watch this heel carefully:

  1. Incidence:
    When is the error most likely to occur, when can it be neglected? This is connected with the size and quality of the corpus, but also with the situation in which the AI is used.
  2. Consequence:
    What are the consequences if rare cases are neglected?
    Can the permanent averaging and observing of solely the most probable solutions be called intelligent?
  3. Interdependencies:
    With regard to the fundamental interdependencies, the connection with the concept of entropy is of interest: the second law of thermodynamics states that in an isolated system, what happens is always what is more probable, and thermodynamics measures this probability with the variable S, which it defines as entropy (see the formula quoted after this list).
    What is probable is what happens, both in thermodynamics and in our search engine – but how does a natural intelligence choose?
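For readers who want the formula behind that variable S, Boltzmann’s statistical definition is quoted here purely as background (it is not part of the original argument): entropy grows with the number W of microstates, i.e. with how probable a macrostate is, and in an isolated system it does not decrease.

```latex
S = k_B \ln W, \qquad \Delta S \ge 0 \quad \text{(isolated system, second law)}
```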

The next blog post will be about games and intelligence, specifically about the difference between chess and a Swiss card game.

This is a post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford

Intelligence in the search engine

How does intelligence get into a search engine?

Let’s assume that you are building a search engine. In the process, you do not want to avail yourself of the services of expensive and not always faultless domain experts, but want to build the search engine solely with sufficient data servers (the hardware for the corpus) and ingenious software. In principle, you will use a neural network with a corpus. How do you inject intelligence into your system?

Trick 1: Let the customers train the corpus

As in the tank AI of previous blog posts, a search engine depends on categorisations, this time provided by the customers’ allocation of input texts (search strings) to a list of web addresses which might be interesting for their searches. To find the relevant addresses, your system is again based on a learning corpus, which this time consists of the list of your previous customers’ search inputs. The web addresses which the previous customers have clicked from among those offered to them are qualified as positive hits in the corpus. When it comes to new queries – also from other customers – you simply indicate the addresses which have received the most clicks to date. They can’t be all that bad, after all, and the system becomes more refined with every query and the click that follows it. And it still applies that the bigger the corpus, the more precise the system.
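A bare-bones, hedged sketch of this first trick (invented data and a deliberately naive data structure, not how any real search engine is implemented): every click is a vote for a pair of search string and web address, and new queries are answered with the most-clicked addresses.

```python
# Bare-bones sketch of "let the customers train the corpus":
# each click is a vote for a (search string, web address) pair.
from collections import Counter, defaultdict

clicks = defaultdict(Counter)   # search string -> Counter of clicked addresses

def record_click(query: str, address: str):
    clicks[query][address] += 1

def suggest(query: str, top_n: int = 3):
    # New queries are answered with the addresses most often clicked so far.
    return [addr for addr, _ in clicks[query].most_common(top_n)]

record_click("alpine passes", "example.org/gotthard")
record_click("alpine passes", "example.org/gotthard")
record_click("alpine passes", "example.org/simplon")
print(suggest("alpine passes"))   # ['example.org/gotthard', 'example.org/simplon']
```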

Again, the categorisations originate outside the system as they are provided by people who have assessed the selection offered to them by the search engine by placing their clicks according to their preferences. They did so

  • with their human intelligence and
  • in line with their individual interests.

The second point is particularly interesting. We might have a closer look at this later.

Trick 2: Assess the customers at the same time

Not every categorisation by every customer is equally relevant. As a search engine operator, you can optimise in two directions:

  • Assess the assessors:
    You know all your customers’ inputs, so you can easily find out how reliable these customers’ categorisations – i.e. the web addresses they clicked in connection with their search strings – are. Not all the customers are equally proficient in this respect. The more other customers click the same web address for the same search string, the safer the categorisation will also be for future queries. You can now use this information to weight your customers: the customer who has so far had the most reliable categorisations, i.e. the one who most often chose what the others also chose, is given the most weight. A customer who was followed by fewer others will be regarded as less reliable. This weighting process increases the probability that future search results will rate those websites higher which are of interest to most customers.
  • Assess the searchers:
    Not every search engine user has the same interests. You are able to take this into consideration since you know all their previous inputs. You can make use of these inputs to generate a profile of this customer. This will naturally enable you to select the search results for him or her accordingly. Assessors with a profile similar to the searcher’s will weight the potential addresses similarly, too, and you will be able to personalise the search results even more in the customer’s interest (a small sketch of this idea follows after this list).
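As announced above, a hedged sketch of the second trick (toy profiles and invented addresses): the clicks of other customers are weighted by how similar their interest profile is to the current searcher’s.

```python
# Sketch of "assess the searchers": weight other customers' clicks by how
# similar their interest profile is to the current searcher's (toy profiles).

profiles = {                       # share of past queries per topic (invented)
    "A": {"hiking": 0.7, "finance": 0.3},
    "B": {"hiking": 0.8, "finance": 0.2},
    "C": {"hiking": 0.1, "finance": 0.9},
}

def similarity(p1: dict, p2: dict) -> float:
    # Overlap of the two interest profiles (0 = nothing shared, 1 = identical).
    return sum(min(p1.get(t, 0.0), p2.get(t, 0.0)) for t in set(p1) | set(p2))

def rank_suggestions(searcher: str, candidate_clicks: dict) -> list:
    # candidate_clicks: address -> list of customers who clicked it for this query
    scores = {
        addr: sum(similarity(profiles[searcher], profiles[c]) for c in customers)
        for addr, customers in candidate_clicks.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank_suggestions("A", {"example.org/trail": ["B"], "example.org/stocks": ["C"]}))
# -> the address clicked by the more similar customer B comes first
```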

For you as a search engine operator, it is in any case worth generating a profile of all your customers, if only to improve the quality of the search suggestions.

Consequences

  1. Search engines become more precise the more they are used.
    This applies to all the corpus-based systems, i.e. to all technologies with neural networks: the larger their corpus, the higher their precision. They can be capable of amazing feats.
  2. A remarkable feedback effect can be observed in this connection: the bigger the corpus, the better the quality of the search engine, which is why it is used more often, which in turn enlarges its corpus and thus boosts its attractiveness in comparison with competitors. This effect inevitably results in such monopolies as are typical of all applications of corpus-based software.
  3. All the categorisations were primarily made by human beings. The basis of intelligence – the categorising inputs in the corpus – is still provided by human beings. In the case of search engines, these are all the individual users who in this way input their knowledge into the corpus. Which means that the intelligence in AI is not all that artificial after all.
  4. The tendency towards bubble formation is inherent in corpus-based systems: if search engines generate profiles of their customers, they can offer them better search results. In a self-referential loop, this inevitably leads to bubble formation: users with similar views are brought increasingly closer together by the search engines since in this way, these users are provided with the search results which correspond most closely to their individual interests and views. They will come across deviating views less and less often.

The next post will be about a further important aspect of corpus-based systems, namely the role of probability.

This is a post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford