
Artificial and natural intelligence: the difference

What is real intelligence? 

Paradoxically, the success of artificial intelligence helps us to identify essential conditions of real intelligence. If we accept that artificial intelligence has its limits and, in comparison with real intelligence, reveals clearly discernible flaws – which is precisely what we recognised and described in previous blog posts – then these descriptions show not only what artificial intelligence lacks, but also where real intelligence is ahead of it. Thus we learn something crucial about natural intelligence.

What have we recognised? What are the essential differences? In my view, there are two properties which distinguish real intelligence from artificial intelligence. Real intelligence

– also works in open systems and

– is characterised by a conscious intention.

 

Chess and Go are closed systems

In the blog post on cards and chess, we examined the paradox that a game of cards appears to require less intelligence from us humans than chess, whereas it is precisely the other way round for artificial intelligence. In chess and Go, the computer beats us; at cards, however, we are definitely in with a chance.

Why is this the case? – The reason is the closed nature of chess, which means that nothing happens that is not provided for. All the rules are clearly defined. The number of fields and pieces, the starting positions and the way in which the pieces may move, who plays when and who has won at what time and for what reasons: all this is unequivocally set down. And all the rules are explicit; whatever is not defined does not play a part: what the king looks like, for instance. The only important thing is that there is a king and that, in order to win the game, his opponent has to checkmate him. In an emergency, a scrap of paper with a “K” on it is enough to symbolise the king.

Such closed systems can be described with mathematical clarity, and they are deterministic. Of course, intelligence is required to win such games, but this intelligence may be completely mechanical – that is, artificial intelligence.

Pattern recognition: open or closed system?

This looks different in the case of pattern recognition, where, for example, certain objects and their properties have to be identified in images. Here, the system is basically open: not only can images with completely new properties be introduced from the outside, but the decisive properties themselves that have to be recognised can also vary. The matter is thus not as simple, clearly defined and closed as in chess and Go. Is it a closed system, then?

No, it isn’t. Whereas in chess, the rules place a conclusive boundary around the options and objectives, such a safety fence must be actively placed around pattern recognition. The purpose of this is to organise the diversity of the patterns in a clear order. This can only be done by human beings. They assess the learning corpus, which includes as many pattern examples as possible, and allocate each example to the appropriate category. This assessed learning corpus then assumes the role of the rules of chess and determines how new input will be interpreted. In other words: the assessed learning corpus contains the relevant knowledge, i.e. the rules according to which previously unknown input is interpreted. It corresponds to the rules of chess.

The AI system for pattern recognition is thus open as long as the learning corpus has not been integrated; with the assessed corpus, however, such a system becomes closed. In the same way that the chess program is set clear limits by the rules, expert assessment provides the clear-cut corset which ultimately defines the outcome in a deterministic way. As soon as the assessment has been made, a second and purely mechanical intelligence is capable of optimising the behaviour within the defined limits – and ultimately to a degree of perfection which I as a human being will never be able to achieve.
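To make this mechanism concrete, here is a minimal sketch in Python (with invented feature vectors and labels, purely for illustration): the human-assigned labels close the system, and the algorithm merely optimises within that fixed frame, for instance by giving a new input the label of its nearest assessed example.

    # A minimal, hypothetical sketch: the assessed learning corpus is a list of
    # examples whose labels were assigned by human experts. Feature vectors and
    # labels are invented for illustration.
    assessed_corpus = [
        ((0.9, 0.1, 0.3), "own tank"),
        ((0.8, 0.2, 0.4), "own tank"),
        ((0.2, 0.9, 0.7), "foreign tank"),
        ((0.1, 0.8, 0.6), "foreign tank"),
    ]

    def distance(a, b):
        """Squared Euclidean distance between two feature vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def classify(new_input):
        """Interpret unknown input purely within the closed frame set by the
        assessed corpus: return the label of the nearest assessed example."""
        _, label = min(assessed_corpus, key=lambda item: distance(item[0], new_input))
        return label

    print(classify((0.85, 0.15, 0.35)))   # -> "own tank"

However refined the distance measure or the learning procedure becomes, the program can only ever answer within the categories that the human assessors built into the corpus.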

Who, though, specifies the content of the learning corpus which turns the pattern recognition program into a technically closed system? It is always human experts who assess the pattern inputs and who thus direct the future interpretation done by the AI system. In this way, pattern recognition can be turned into a closed task like a game of chess or Go which can be solved by a mechanical algorithm.

In both cases – in the initially closed game program (chess and Go) as well as in the subsequently closed pattern recognition program – the algorithm finds a closed situation, and this is the prerequisite for an artificial, i.e. mechanical intelligence to be able to work.

Conclusion 1:
AI algorithms can only work in closed spaces.

In the case of pattern recognition, the human-made learning corpus provides this closed space.

Conclusion 2:
Real intelligence also works in open situations.

Is there any intelligence without intention?

Why is artificial intelligence unable to work in an open space without assessments introduced from outside? Because it is only the assessments introduced from outside that make the results of intelligence possible. And assessments cannot be provided purely mechanically by the AI but are always linked to the assessors’ views and intentions.

Besides the differentiation between open and closed systems, our analysis of AI systems shows us still more about real intelligence, for artificial and natural intelligence also differ from each other with regard to the extent to which individual intentions play a part in their decision-making.

In chess programs, the objective is clear: to checkmate the opponent’s king. The objective which determines the assessment of the moves, namely the intention to win, does not have to be laboriously recognised by the program itself but is intrinsically given.

With pattern recognition, too, the role of the assessment intention is crucial, for what kind of patterns should be distinguished in the first place? Foreign tanks versus our own tanks? Wheeled tanks versus tracked tanks? Operational ones versus damaged ones? All these distinctions make sense, but the AI must be set to, and adjusted for, a specific objective, a specific intention. Once the corpus has been assessed in a certain direction, it is impossible to suddenly derive a different property from it.

As in the chess program, the artificial intelligence is not capable of finding the objective on its own: in the chess program, the objective (checkmate) is self-evident; in pattern recognition, the assessors involved must agree on the objective (foreign/own tanks, wheeled/tracked tanks) in advance. In both cases, the objective and the intention come from the outside.

Conversely, natural intelligence has to determine itself what is important and what is unimportant, and what objectives it pursues. In my view, an active intention is an indispensable property of natural intelligence and cannot be created artificially.

Conclusion 3:
In contrast to artificial intelligence, natural intelligence is characterised by the fact that it is able to judge, and deliberately orient, its own intentions.


This is a blog post about artificial intelligence. You can find further posts through the overview page about AI.


Translation: Tony Häfliger and Vivien Blandford

Now where in artificial intelligence is the intelligence located?


In a nutshell: the intelligence is always located outside.


a) Rule-based systems

The rules and algorithms of these systems are created by human beings, and no one will ascribe real intelligence to a pocket calculator. The same also applies to all other rule-based systems, however refined they may be. The rules are devised by human beings.

b) Conventional corpus-based systems (neural networks)

These systems always use an assessed corpus, i.e. a collection of data which have already been evaluated. This assessment decides according to what criteria each individual corpus entry is classified, and this classification then constitutes the real knowledge in the corpus.

However, the classification cannot be derived from the data of the corpus itself; it is always introduced from the outside. And it is not only the allocation of each data entry to a class that comes from the outside: the classes themselves are not determined by the corpus data either, but are likewise provided from the outside – ultimately by human beings.

The intelligence of these systems is always located in the assessment of the data pool, i.e. the allocation of the data objects to predefined classes, and this is done from the outside, by human beings. The neural network which is thus created does not know how the human brain has found the evaluations required for it.
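A small, hypothetical illustration of this point: the very same corpus entries can be assessed along two entirely different class schemes, neither of which can be read off the data themselves (image names and labels are invented).

    # Invented corpus entries, assessed twice along two different class schemes.
    # Neither set of labels is contained in the images; each reflects a choice
    # made outside the data by human assessors.
    images = ["img_001", "img_002", "img_003"]

    labels_by_origin = {       # scheme 1: whose tank is it?
        "img_001": "own",
        "img_002": "foreign",
        "img_003": "foreign",
    }

    labels_by_drive = {        # scheme 2: how does it move?
        "img_001": "tracked",
        "img_002": "tracked",
        "img_003": "wheeled",
    }

    # A network trained on labels_by_origin learns nothing about drive types:
    # the knowledge in the corpus is exactly the assessment that was put into it.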

c) Search engines

Search engines constitute a special type of corpus-based system and are based on the fact that many people use a certain search engine and decide with their clicks which internet links can be allocated to the search string. Ultimately, search engines only average the traces which the many users leave with their context knowledge and their intentions. Without the human brains of the users who have used the search engines so far, the search engines would not know where to point new queries.
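Reduced to its core, this mechanism can be sketched as follows – a toy example in Python with an invented click log and placeholder URLs; real search engines are of course vastly more sophisticated.

    from collections import Counter, defaultdict

    # Invented click log: which link did a user choose for which search string?
    click_log = [
        ("jass rules", "https://example.org/jass-basics"),
        ("jass rules", "https://example.org/jass-basics"),
        ("jass rules", "https://example.org/card-games"),
        ("go opening", "https://example.org/go-fundamentals"),
    ]

    clicks_per_query = defaultdict(Counter)
    for query, link in click_log:
        clicks_per_query[query][link] += 1

    def rank(query):
        """Order links by how often previous users picked them for this query."""
        return [link for link, _ in clicks_per_query[query].most_common()]

    print(rank("jass rules"))
    # -> ['https://example.org/jass-basics', 'https://example.org/card-games']

For a search string that no one has ever used, the ranking is empty: without the traces of earlier users, the engine has nothing to average.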

d) Game programs (chess, Go, etc.) / deep learning

This is where things become interesting, for in contrast to the other corpus-based systems, such programs do not require any human beings to assess the corpus – which consists of the moves of previously played games – from the outside. Does this mean, then, that such systems have an intelligence of their own?

Like the pattern recognition programs (b) and the search engines (c), the Go program has a corpus which in this case contains all the moves of the test games played before. The difference from the classic AI systems consists in the fact that the assessment of the corpus (i.e. the moves of the games) is already defined by the success in the actual game. Thus no human being is required who has to make a distinction between foreign tanks and our own tanks in order to provide the template for the neural network. The game’s success can be directly recognised by the machine, i.e. the algorithm itself; human beings are not required.

With classic AI systems, this is not the case, and a human being who assesses the individual corpus items is indispensable. Added to this, the assessment criterion is not given unequivocally, as it is with Go. Tank images can be categorised in completely different ways (wheeled/tracked tanks, damaged/undamaged tanks, tanks in towns/open country, in black and white/coloured pictures, etc.). This leaves the interpretation options for the assessment wide open. For all these reasons, an automatic categorisation is impossible with classic AI systems, which therefore always require an assessment of the learning corpus by human experts.

In the case of chess and Go, it is precisely this that is not required. Chess and Go are artificially designed and completely closed systems and thus indeed completely determined in advance. The board, the rules and the objective of the game – and thus also the assessment of the individual moves – are given automatically. Therefore no additional intelligence is required; instead, an automatism can play test games with itself within a predefined, closed setting and in this way attain the predefined objective better and better until it is better than any human being.

In the case of tasks which have to be solved not in an artificial game setting but in reality, however, the permitted moves and objectives are not completely defined, and there is leeway for strategy. An automatic system like deep learning cannot be applied in open, i.e. real situations.

It goes without saying that in practice, a considerable intelligence is required to program victory in Go and other games, and we may well admire the intelligence of the engineers at Google, etc., for that, yet once again it is their human intelligence which enables them to develop the programs, and not an intelligence which the programs designed by them are able to develop themselves.

Conclusion

AI systems can be very impressive and very useful, but they never have an intelligence of their own.

Artificial Intelligence (Overview)

Do we have to be afraid of artificial intelligence? Or will it save the world? – As is well known, we’re afraid of what we don’t understand. And at the same time, we hope that what we don’t understand will work fantastic miracles.

I’m writing this blog series because I consider it useful if you understand what types of AI there are and how they work. In this way, you’ll be able to get a more concrete idea of the dangers and opportunities of AI. The text is aimed at people with a normal educational background; you don’t need to be a special kind of nerd to follow the explanations.

How does AI work? Where are its limits? How do the various types of AI differ from each other? What is behind deep learning? Is artificial intelligence really intelligent? And if it is, where exactly is the intelligence in artificial intelligence situated, and how does it get there?

These are the questions that I’ll be trying to answer in my blog posts.


Rule-based or corpus-based?

These are the two fundamentally different methods of computer intelligence: they are based either on rules or on a collection of data (a corpus). In the introductory post, I present the two with the help of two characteristic anecdotes.


With regard to success, the corpus-based systems have obviously outstripped the rule-based ones.


The rule-based systems had a more difficult time of it. What are their challenges? How can they overcome their weaknesses? And where is their intelligence situated inside them?


How are corpus-based systems set up? How is their corpus compiled and assessed? What are neural networks all about? And what are the natural limits of corpus-based systems?


Next, we’ll have a look at search engines, which are also corpus-based systems. How do they arrive at their proposals? Where are their limits and dangers? Why, for instance, is it inevitable that bubbles are formed?


Is a program capable of learning without human beings providing it with useful pieces of advice? It appears to work with deep learning. To understand this, we first compare a simple card game with chess: what requires more intelligence? Surprisingly, it becomes clear that for a computer, chess is the simpler game.

With the help of the general conditions of the board games Go and chess, we recognise under what conditions deep learning works.


In the following blog post, I’ll provide an overview of the AI types known to me. I’ll draw a brief outline of their individual structures and of the differences in the way they work.

 

Games and Intelligence (2): Deep Learning

Go and chess

The Asian game of Go shares many similarities with chess while being simpler and more sophisticated at the same time.

The same as in chess:
– Board game → clearly defined playing field
– Two players (more would immediately increase complexity)
– Unequivocally defined possibilities of playing the stones (clear rules)
– The players place stones alternately (clear timeline).
– No hidden information (as, for instance, in cards)
– Clear objective (the player who has surrounded the larger territory wins)

Simpler in Go:
– Only one type of piece: the stone (unlike in chess: king, queen, etc.)

More complex/requires more effort:
– Go has a considerably larger playing field (19 × 19 intersections, as against the 64 squares of chess).
– The higher number of fields and stones requires more computation.
– Despite its very simple rules, Go is a highly sophisticated game.

Summary

Compared with their common features, the differences between Go and chess are minimal. In particular, Go satisfies the strongly limiting preconditions a) to d), which enable an algorithm to tackle the job:

a) a clearly defined playing field,
b) clearly defined rules,
c) a clearly defined course of play,
d) a clear objective. (Cf. also the preceding blog post.)

Go and deep learning

Google’s AlphaGo program has beaten the best human Go players. This victory was achieved by means of a type of AI which is called deep learning. Many people think that this proves that a computer – i.e. a machine – can be genuinely intelligent. Let us therefore have a closer look at how Google managed to do this.

Rule- or corpus-based, or a new, third system?

The strategies of the known AI programs are either rule-based or corpus-based. In previous posts, we asked ourselves where the intelligence in these two strategies comes from, and we realised that the intelligence in rule-based AI is injected into the system by the human experts who establish the rules. Corpus-based AI also requires human beings, since all the inputs into the corpus must be assessed (e.g. friendly/hostile tanks), and these assessments can always be traced back to people even if this is not immediately obvious.

However, what does this look like in the case of deep learning? Obviously, it no longer requires any human beings to provide specific assessments – in Go, with regard to the individual moves’ chances of winning; rather, it is sufficient for the program to play against itself and find out on its own which moves have proved most successful. In this, deep learning does NOT depend on human intelligence and – in chess and Go – even turns out to be superior to human intelligence.

Deep learning is corpus-based

Google’s engineers undoubtedly did a fantastic job. Whereas in conventional corpus-based applications, the data for the corpus have to be compiled laboriously, this is quite simple in the case of the Go program: the engineers simply have the computer play against itself, and every game is an input into the corpus. No one has to take the trouble to trawl the internet or any other source for data; instead, the computer is able to generate a corpus of any size very simply and quickly. Although like the programs for pattern recognition, deep learning for Go continues to depend on a corpus, this corpus can be compiled in a much simpler way – and automatically at that.

Yet it gets even better for deep learning. Not only is the compilation of the corpus much simpler, but the assessment of the single moves in the corpus is also very easy: Finding out the best move from among all the moves that are possible at any given time no longer requires any human experts. How does this work? How is deep learning capable of drawing intelligent conclusions without any human intelligence at all? This may be astonishing, but if we look at it in more detail, it becomes clear why this is indeed the case.

The assessment of corpus inputs

The difference is the assessment of the corpus inputs. To illustrate this, let’s have another look at the tank example. Its corpus consists of tank images, and a human expert has to assess each picture according to whether it shows one of our own tanks or a foreign tank. As explained, this requires human experts. In our second example, the search engine, it is also human beings, namely the users, who assess whether the link to a website suggested in the corpus fits the input search string. Neither type of AI can do without human intelligence.

With deep learning, however, this is really different. The assessment of the corpus, i.e. the individual moves that make up the many different Go test games, does not require any additional intelligence. The assessment automatically results from the games themselves, since the only criterion is whether the game has been won or lost. This, however, is known to the corpus itself since it has registered the entire course of every game right to the end. Therefore the way in which every game has proceeded automatically contains its own assessment – assessments by human beings are no longer required.
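The principle can be sketched in a few lines of Python. Tic-tac-toe serves here as a stand-in for Go (a real Go engine is enormously more complex), but the decisive point is the same: the program plays against itself, every game goes into the corpus, and each move’s assessment follows automatically from who won in the end.

    import random

    # Tic-tac-toe as a stand-in closed game: the board is a list of 9 cells.
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def self_play_game():
        """Play one random game of the program against itself."""
        board, player, moves = [" "] * 9, "X", []
        while winner(board) is None and " " in board:
            move = random.choice([i for i, cell in enumerate(board) if cell == " "])
            moves.append((tuple(board), move, player))
            board[move] = player
            player = "O" if player == "X" else "X"
        return moves, winner(board)        # winner is None for a draw

    corpus = []
    for _ in range(1000):
        moves, result = self_play_game()
        for state, move, player in moves:
            # The assessment comes from the game itself, not from a human expert.
            score = 0 if result is None else (1 if player == result else -1)
            corpus.append((state, move, score))

    print(len(corpus), "assessed moves collected without any human labelling")

From this point on, an ordinary corpus-based learning procedure can be trained on the self-assessed moves; nowhere in the loop does a human judgement enter.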

The natural limits of deep learning

The above, however, also reveals the conditions in which deep learning is possible at all: for the course of the game and the assessment to be clear-cut, there must not be any surprises. Ambiguous situations and uncontrollable outside influences are not allowed. For everything to be flawlessly calculable, the following is indispensable:

1. A closed system

This is given by the properties a) to c) (cf. preceding post), which games like chess and Go possess, namely

a) a clearly defined playing field,
b) clearly defined rules,
c) a clearly defined course of play.

A closed system is necessary for deep learning to work. Such a system can only be an artificially constructed system, for there are no closed systems in nature. It is no accident that chess and Go are particularly suitable for AI since games always have this aspect of being consciously designed. Games which integrate chance as part of the system, such as the card game in the preceding post, are no longer absolutely closed systems and are therefore less suitable for artificial intelligence.

2. A clearly defined objective

A clearly defined objective – point d) in the preceding post – is also necessary for the assessment of the corpus to take place without any human interference, because the objective of the process under investigation and the assessment of the corpus inputs are closely connected. We must understand that the target of the corpus assessment is not given by the corpus data. Data and assessment are two different things. We have already discussed this in the example of the tanks, where we saw that a corpus input, i.e. the pixels of a tank photograph, did not automatically contain its own assessment (hostile/friendly). The assessment is a piece of information which is not intrinsic to the individual data (pixels) of an image; rather, it has to be fed into the corpus from the outside (by an interpreting intelligence). Therefore the same corpus input can also be assessed in very different ways: if the corpus is told whether an individual image is one of our own tanks or a foreign tank, it still does not know whether it is a tracked tank or a wheeled tank. With all such images, assessments can go in very different directions – unlike with chess and Go, where a move in a game (which is known to the corpus) is solely assessed according to the criterion of whether it is conducive to winning the game.

Thus chess and Go pursue a simple, clearly defined objective. In contrast to these two games, however, tank pictures allow for a wide variety of assessment objectives. This is typical of real situations. Real situations are always open, and in such situations, various differing assessments can make sense and be absolutely appropriate. For the purpose of assessment, an instance (intelligence) outside the data has to establish the connection between the data and the assessment objective. This function is always linked to an instance with a certain intention.

Machine intelligence, however, lacks this intention and therefore depends on being provided with it by an objective from the outside. If the objective is as self-evident as it is in chess and Go, this is not a problem, and the assessment of the corpus can indeed be conducted by the machine itself without any human intelligence. In such unequivocal situations, machine deep learning is genuinely capable of working – indeed, even of beating human intelligence.

However, this only applies if the rules and the objective of a game are clearly defined. In all other cases, it is not an algorithm that is required but “real” intelligence, i.e. intelligence with a deliberate intention.

Conclusion

  1. Deep learning (DL) works.
  2. DL uses a corpus-based system.
  3. DL is capable of beating human intelligence in certain applications.
  4. However, DL only works in a closed system.
  5. DL only works if the objective is clear and unequivocal.

Ad 4) Closed systems are not real but are either obvious constructs (like games) or idealisations of real circumstances (= models). Such idealisations are invariably simplifications with reduced information content. They are therefore incapable of mapping reality completely.

Ad 5) The objective, i.e. the “intention”, corresponds to a subjective element. This subjective element distinguishes natural from machine intelligence. The machine must be provided with it in advance.

This is a blog post about artificial intelligence.


Translation: Tony Häfliger and Vivien Blandford

Games and intelligence (1)

Chess or jass: what requires more intelligence?

(Jass is a very popular Swiss card game of the same family as whist and bridge, though more homespun than the latter.)

Generally, it is assumed that chess requires more intelligence, for obviously less intelligent players definitely stand a chance of winning at cards while they don’t in chess. If we consider, however, what a computer program must be able to do in order to win, the picture soon looks different: chess is clearly simpler for a machine.

This may surprise you, but it is worth looking at the features the two games have in common, as well as their differences – and of course, both have a great deal to do with our topic of artificial intelligence.

Common features

a) Clearly defined playing field

The chessboard has 64 black and white fields; only the pieces that are situated on these fields play a part. At cards, the bridge table could be regarded as a playing field, as could the so-called square “jass carpet” that is placed on a restaurant table; it is the material playing field in the same way that the material chessboard is for chess. If we are interested in successful playing behaviour, however, the colour of the jass carpet or the make of the chessboard is immaterial; what counts is solely the abstract, i.e. “IT-type”, playing field: where, in purely formal terms, can our chess pieces and playing cards be located and moved? And in this respect, the situation is completely clear at cards, too: the cards are in a clearly defined place at any given time, either in a player’s hand ready to be played, or in front of a player as a trick already won, or on the table as a face-up card to be seen by everyone. Both chess and cards can therefore be said to have a clearly defined playing field.
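In a purely illustrative sketch, the abstract playing field of both games boils down to a small, fully enumerable data structure in which every piece or card has exactly one well-defined location (the entries below are of course invented):

    # A snapshot of the abstract chess playing field: occupied squares only.
    chess_state = {
        "e1": "white king",
        "e8": "black king",
        "d1": "white queen",
        # ... further occupied squares
    }

    # A snapshot of the abstract card-game playing field: every card is either
    # in a hand, in a trick already won, or face up on the table.
    jass_state = {
        "hand_player_1": ["hearts ace", "hearts king"],
        "hand_player_2": ["spades 9"],
        "tricks_won_team_A": [["clubs jack", "clubs 6", "clubs 10", "clubs queen"]],
        "on_table": ["diamonds 8"],
    }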

b) Clear rules

Here, too, there is hardly any difference between the two games. Although there are all sorts of variants of whist and bridge, and although jass rules differ from village to village and even from restaurant to restaurant (which may occasionally lead to heated discussions), as soon as a set of rules has been agreed upon, the situation is clear. As in chess, it is clear what goes and what doesn’t, and the players’ possible activities are clearly defined.

c) Clear course of play

Here again, the games do not differ from each other. At any point in time, there is precisely one player who is permitted to act, and his or her options are clearly defined.

d) Clear objective

Chess is about checkmating the opponent’s king; card games are about scoring points or tricks, depending on the variant. Games do not last an eternity. A card game is over when all the cards have been played; in chess, the draw and stalemate rules prevent a game from going on indefinitely. There is always a clear winner and a clear loser or, if need be, a definitive tie.

Differences

e) Clear starting situation?

In chess, the starting situation is identical in every game; all pieces start at their appointed place. At cards, however, the pack of cards is shuffled before every game. Whereas in chess, we always start from precisely the same situation, we have to envisage a new one before every card game. Chance thus plays an important role in cards; in chess, it has been deliberately excluded. This is bound to have consequences. Since I have to factor in chance at cards, I cannot rely on certainties like in chess, but have to rely on probabilities.

f) Hidden information?

A lack of knowledge remains a challenge for card players throughout the game. Whereas in chess, everything is openly recognisable for each player on the board, card games literally thrive on players NOT knowing where the cards are. Therefore they must guess – i.e. rely on probabilities – and run certain risks. There is no guessing in chess; the situation is always clear, open and evident. Of course, this makes it substantially easier to describe the situation in chess; at cards, however, this lack of knowledge makes a description of the situation difficult.
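As a small worked example of what “relying on probabilities” means – pure combinatorics, assuming nothing beyond the standard jass deal of 36 cards, 9 per player:

    from fractions import Fraction

    # Jass: 36 cards, 4 players, 9 cards each. I can see only my own 9 cards,
    # so 27 cards are hidden from me, 9 of them in each other hand.
    unseen_cards = 36 - 9
    cards_per_other_hand = 9

    # Probability that one specific unseen card (say the ace of hearts)
    # is in my partner's hand:
    p_partner_has_card = Fraction(cards_per_other_hand, unseen_cards)
    print(p_partner_has_card)        # 1/3

    # Probability that my partner holds at least one of two specific unseen
    # cards (e.g. the ace and the king of hearts), via the complement:
    # 18 of the 27 unseen cards lie outside my partner's hand; given that,
    # 17 of the remaining 26 lie outside it as well.
    p_neither = Fraction(18, 27) * Fraction(17, 26)
    print(1 - p_neither)             # 22/39, i.e. roughly 56 %

A program computes such values effortlessly; the real difficulty lies elsewhere, as the next point shows.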

g) Probabilities and emotions (psychology)

If I do not know everything, I have to rely on probabilities. Experience shows that this is something that we human beings are notoriously bad at. We let ourselves be guided by emotions much more strongly than we care to admit. Fears and hopes determine our expectations, and we often grossly misjudge probabilities. An AI program naturally has an edge over us in this respect since it does not have to cope with emotions and is much better at computing probabilities. Yet the machine wants to beat its opponent and will therefore have to assess its opponent’s reactions correctly. The AI program would therefore do well to take its opponent’s flawed handling of probabilities into consideration, but this is not very easy in terms of algorithms. How does it recognise an optimist? Human players try to read their opponents while trying to mislead them about their own emotions at the same time. This is part of the game. It is no use to the program if it makes computations without any emotions while being incapable of recognising and assessing its opponent’s emotions.

h) Communication 

Chess is played by one player against the other. Card games usually involve four players playing each other in pairs. This aspect, i.e. that two individuals have to coordinate their actions, makes the game interesting, and it would be fatal for a card game program to neglect this aspect. But how should we program this? What has to be taken into account here, too, is point f) above, namely the fact that I cannot see my partner’s cards; I neither know my partner’s cards nor my opponents’. Of course my partner and I are interested in coordinating our game, and part of this is that we communicate our options (hidden cards) and our strategies (intentions for driving the game forward) to each other. If, for instance, I hold the ace of hearts, I would like my partner to lead hearts to enable me to win the trick. However, I am not allowed to tell him that openly – yet an experienced card player would not find this a problem. First of all, the run of the game often reveals who holds the ace of hearts. Of course it is not easy to discover this because both the cards that have already been played and possible tactics and strategies have to be taken into consideration. The number of options, the computation of the probabilities and the psychology of the players all come into play here, which can result in very exciting conflict situations – which ultimately also makes the game attractive. In chess, however, with its constantly very explicit situation, circumstances are a great deal simpler in this respect.

But this is not all:

i) The legal grey area

Is it really true that my partner and I are unable to exchange communication about our cards and strategies? Officially, of course, this is prohibited – but can this ban really be implemented in practice?

Of course it can’t. Whereas in chess, it is practically solely the explicit moves that play a part, there is a great deal of additional information at cards which a practised player must be able to read. How am I smiling when I’m playing a card? If I hold the ace of hearts, which can win the next trick, I obviously want my partner to help me and lead hearts. One possibility of achieving this in a jass game is to play a low heart and place it on the table with distinctive emphasis. A practised partner will easily read this as a signal for him to lead hearts next time rather than diamonds to enable me to win the trick with my ace. No one will really be able to ban anyone from playing a card in a certain way, provided that this is done with sufficient discretion. Partners who are well attuned to each other know not only the completely legal signals which they automatically emit through the selection of the cards they play, but also some signals from the grey area with which they coordinate their game.

These signals constitute information which an ambitious AI will have to be able to identify and process. The volume of information which it has to process for this purpose is not only much larger than the volume of information in chess, it is not limited by any manner of means either. My AI plays two human opponents, and those two also communicate with each other. The AI should be able to recognise their communication in order not to be hopelessly beaten. The signals agreed upon by the opponents may of course vary and be of any degree of sophistication. How can my AI discover what arrangements the two made prior to the game?

Conclusion

Card games are much more difficult to program than chess

If we want to develop a program for a card game, we will have to take into consideration aspects e) to i), which hardly play any part in chess. In terms of algorithms, however, aspects e) to i) constitute a difficult challenge owing to the imponderabilities.

In comparison with card games, chess is substantially less difficult for a computer because

– there is always the same starting situation,
– there is no hidden information,
– no probabilities need to be taken into account,
– human emotions play a small part,
– there is no legal grey area because no exchange of information between partners is possible.

For an AI program, chess is therefore the simpler game. It is completely defined, i.e. the volume of information that is in the game is very small, clearly disclosed and clearly limited. This is not the case with card games.


This is a blog post about artificial intelligence. In the second part about games and intelligence, I will deal with Go and deep learning.


Translation: Tony Häfliger and Vivien Blandford