Intelligence is actually common sense

Wanted: artificial intelligence with common sense

Machines that behave like humans in everyday situations: that is the goal of AI research. But machine common sense can at times be quite unsound.

The recent history of artificial intelligence (AI) is undoubtedly impressive. With neural networks and machine learning, research has taken a decisive step towards programs that adapt to their environment. There are speech, face and pattern recognition systems of astonishing adaptability, and the expectations and visions of the AI community are correspondingly high. Now so-called artificial general intelligence (AGI) is beginning to shine on the horizon: a machine with artificial common sense.

Its first, so to speak, infant forms are currently being tried out in self-driving cars. These artifacts are beginning to learn to adapt to situations; they are becoming adaptive, like natural creatures. And so we have to deal not only with technical problems but also with philosophical ones. One of them: What does it actually mean to react with common sense in everyday situations?

Descartes' reservations

The question already preoccupied Descartes. In a famous section of his "Discours" he speaks of the universality of reason, which knows how to assert itself in all situations. Even if machines "did many things as well as or perhaps better than any of us", they "would inevitably fail in certain others, and thereby reveal (...) that they act not from insight but only from the disposition of their organs. For while reason is a universal instrument that can serve in all kinds of situations, these organs need a particular disposition for every particular action; hence it is morally (that is, practically; author's note) impossible for one machine to contain enough different organs to make it act in all the occurrences of life in the way our reason makes us act."

These are strikingly modern words, and they aim precisely at the core of today's problem with learning machines. If we substitute "neural network" for "organ" and "learning algorithm" for "disposition", Descartes' text reads as a reservation against an artificial common sense: learning machines will never act from insight, because their construction principle does not allow for universal reason. To date, computers have remained idiots savants.

Computer engineers would counter Descartes by saying that they do not need many organs, just a powerful algorithm plus an immense, possibly already pre-structured amount of data for it to plow through. Deep learning does in fact work according to surprisingly simple principles, which is why the distant goal of artificial common sense can "in principle" be reached.
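How "surprisingly simple" can be caricatured in a few lines: at bottom, deep learning repeats a single gradient step on an error measure. A minimal PyTorch-style sketch, where the model, data and hyperparameters are generic placeholders rather than anything from the article:

```python
import torch
import torch.nn.functional as F

def train(model, data_loader, epochs=10, lr=1e-3):
    """The core loop of deep learning: compare, backpropagate, adjust."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, targets in data_loader:
            loss = F.cross_entropy(model(inputs), targets)  # how wrong are we?
            opt.zero_grad()
            loss.backward()  # propagate the error back through the network
            opt.step()       # nudge every weight to reduce the error
    return model
```

Everything beyond this loop - architectures, data, scale - is elaboration of the same principle.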

The emphasis is on "distant". So far, the new kinds of artificial systems have excelled in games, that is, in clearly defined settings with predetermined rules and one primary goal: to win. But a self-driving car cannot simply win. Its functioning depends on numerous eventualities - from delivering passengers to the right destination on time, to following traffic rules, to taking weather and road conditions into account, right up to imponderables such as pedestrians crossing illegally, broken traffic lights, traffic jams or accidents.

A self-driving car, for example, has registered countless red signals in the course of its training and has stored something like a concept of red in its neural network. This works quite well under normal conditions, but abnormal situations must always be expected. And as it turns out, very small perturbations of the learned pattern are often enough to lead the algorithm to a total, and possibly fatal, misclassification.
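The literature calls such crafted disturbances adversarial examples; the article does not name the technique, so the following is only an illustrative sketch in the spirit of the fast gradient sign method, assuming a generic differentiable classifier (model, image and label are placeholders):

```python
import torch
import torch.nn.functional as F

def adversarial_nudge(model, image, true_label, epsilon=0.01):
    """Shift every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The perturbed image often looks identical to a human observer,
    # yet can be misclassified, e.g. a red light read as something harmless.
    return (image + epsilon * image.grad.sign()).detach()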

Unsound machine sense

It is precisely this open-endedness of real situations that has so far been the major obstacle on the way to artificial intelligence with common sense. Another example illustrates this. YouTube deployed an algorithm with the goal of maximizing the time users spend on the video portal. The algorithm did this by recommending videos with increasingly extreme content, following the principle of "upping the ante": raising the stakes. One user reported, for example, how she watched a couple of videos about Donald Trump's election campaign and was then showered with racist, conspiracy-theoretical and other disreputable material. The algorithm "interprets" its task in a highly idiosyncratic, indeed stubborn way, which leads to unintended effects such as radicalization and polarization. Hardly a sign of "sound" machine sense.
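The drift towards the extreme needs no malice, only a proxy objective. A hypothetical toy recommender, not YouTube's actual system, shows how "maximize watch time" can produce "up the ante" as a side effect (the scoring assumption is labeled in the comments):

```python
from dataclasses import dataclass

@dataclass
class Video:
    topic: str
    extremeness: float  # 0.0 = mainstream, 1.0 = fringe

def recommend_next(candidates, watched):
    """Pick the candidate predicted to maximize further watch time."""
    def predicted_watch_time(v):
        # Illustrative assumption: staying on the user's topics holds
        # attention, and slightly more extreme material holds it longer.
        on_topic = sum(1 for w in watched if w.topic == v.topic)
        return on_topic * (1.0 + v.extremeness)
    return max(candidates, key=predicted_watch_time)

watched = [Video("election", 0.2), Video("election", 0.4)]
candidates = [Video("cooking", 0.0), Video("election", 0.9)]
print(recommend_next(candidates, watched))  # picks the most extreme on-topic video
```

Nothing in the objective mentions extremism; it emerges because extremeness happens to correlate with the quantity being optimized.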

The designers are looking for a remedy in a new approach. It comes from the computer scientist Stuart Russell and is called "human-compatible machines". Such machines start from scratch, so to speak. Instead of encoding and maximizing a given goal, they learn by themselves to decode such a goal from human behavior and then to optimize for it. This is called inverse reinforcement learning. The expectation is that orienting itself towards human behavior will let the machine act in a more compatible way - that is, with more common sense.
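A heavily simplified sketch of the idea behind inverse reinforcement learning, assuming a textbook toy setting in which rewards are linear in hand-crafted state features (all names and the feature map are illustrative, not Russell's implementation):

```python
import numpy as np

def feature_expectations(trajectories, featurize):
    """Average summed feature vectors over a set of trajectories."""
    return np.mean([np.sum([featurize(s) for s in traj], axis=0)
                    for traj in trajectories], axis=0)

def irl_update(w, human_trajs, agent_trajs, featurize, lr=0.1):
    """One feature-matching step: nudge the reward weights so that the
    demonstrated human behavior scores higher than the agent's own.
    A full algorithm alternates this update with re-planning the agent's
    policy under the inferred reward w @ featurize(state)."""
    grad = (feature_expectations(human_trajs, featurize)
            - feature_expectations(agent_trajs, featurize))
    return w + lr * grad
```

The reversal of direction is the point: the reward function is the output of learning, not its input.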

Skepticism remains, however. First, the question arises whether the human being is a suitable model for an AI system. Humans are fundamentally not logical beings. Their behavior is fed by a dense, implicit web of expectations, preferences, opinions and motives that can hardly ever be completely disentangled into an explicit formalism. Second, our preferences and desires are constantly changing, and often they are guided not by rationally reconstructible reasons but by irrational moods and whims that are frequently vague or even contradictory. And third: what if a person is deeply guided by bad reasons? Should the machines then learn to optimize this badness? Experiences like those with YouTube and other nefarious algorithms feed a not exactly optimistic vision of the future.

Back to the original question

Machine common sense actually throws us back on the original question: What does it mean to behave like a person, indeed, to be a person? For example, we do not learn the way AI systems do. We do not have to see 10,000 cat pictures in order to form a reliable "cat" category. Rather, we develop expectations of how things could turn out, and on this basis we make predictions. In our perception we naturally infer hidden parts of a thing without having any corresponding data about them. And we develop an intuition for the difference between correlation and causality. The rain is not the reason why people open their umbrellas; their desire to stay dry, on the other hand, is.

It is such cognitive capacities that contribute significantly to our common sense - our embodied mindedness, so to speak. This is beginning to dawn on more than a few AI researchers. None other than Rodney Brooks of MIT, an expert in the field, recently questioned a core assumption of the entire AI project: artificial systems may be running into a complexity limit because they are made of the wrong stuff. In other words, the fact that robots are not made of flesh could make a bigger difference than he, Brooks, had previously thought. The riddle of the human mind lies in its embodiment.

And that is why AI research will need to focus more on the specific stuff we are made of. It will have to think more biologically. Robotics is already beginning to experiment with animal cells that develop according to a program: xenobots. Let us be careful, though, not to rush into painting a future scenario of smart organoid devices. Let us focus instead on the real problem. Artificial intelligence remains deeply alien to us. With it, we are basically creating our own aliens. And these aliens, despite all efforts, will probably not adapt to our everyday lives. Rather, we will adapt our everyday lives to them. So the problem is not super-smart machines, but sub-smart people.