Which aspect of artificial intelligence is exaggerated?

Ethical questions in artificial intelligence: What challenges do we have to deal with?

As guideline-maker I would put Edmund Husserl on the line, whose thoughts, via the detour of Paul Ricoeur, might now be granted even more scope in France - where the current head of state himself spent some time with Ricoeur (although this closeness leaves me shuddering somewhat: on one side the legendary master of the infinite constellations of the concept of interpretation, on the other the financial juggler with elite training ... there are strange twists that life takes here).

Moreover, Sartre's existentialism follows in Husserl's footsteps; the intellectual affinity is easier to find there than with the eastern neighbors, who are far too rigorous in their pea-counting.

Why Husserl? Coming from logic, he founded a strictly applied phenomenology, whose task is to explore the surrounding lifeworld rigorously, without bias and appropriately. The proximity to such a concern is more or less tangible in the paper's broad approach. (Many thanks to C.K. for passing the paper on!)

Following this thread, I immediately notice two weaknesses, alongside the strength of setting the conceptual framework of a social concern as impartially and broadly as possible, and yet as precisely as possible - precisely in the sense of a powerful description of the world, which is something other than a hodgepodge of political guidelines copied from technical manuals and poorly digested.

One weakness is the major shortcoming that in every really resilient description of the lifeworld there is always, as a matter of principle, too much sloppiness and a healthy dose of corruption in play. Well, unfortunately not everyone can live like a monk in order to bring in suitably clarified thoughts.

The sloppiness has two dimensions. The first, coarser one: the scientific and technical horizon lacks clear demarcations of where the categories in use really apply and where, by contrast, only linguistic metaphors are at work. As a direct consequence, the appropriate modesty is also missing - the modesty with which a human being would, please, state very clearly to what extent he still knows what he is doing when he does something in the direction of technical border-crossing. That is a problem of principle. Striving for decisive accuracy has become a luxury, given the vastness of the dimensions of social realization.

The second, in detail: there is no demarcation of where the power of available description no longer suffices to breathe meaning and understanding into the creative frenzy, and then the technical descriptions, via their metaphors, lead straight into beliefs - quite charmingly and underhandedly. With enough mindfulness this can, at least, be more easily dealt with critically.

****

The certainly very fine and laudable initiative from France falls into a trap right at the start, precisely because the fundamental descriptions of the lifeworld have largely been left in a state of neglect in the dimensions that actually matter, and pointing out the discrepancies threatens to become unpopular whenever too many approaches thereby threaten to topple at once.

The paper explains, in pleasantly clear and direct language and with regard to the linked social dimensions, what is to be understood by algorithms and what is to be examined once social contexts are subsumed under them.

But then comes the immediate switch to AI, and the swing is carried only by a linguistic metaphor - one that is not covered by the facts. Once you run self-learning AI, you have permanently left the dimension of algorithms! Within the dimension of human observation there is no stringent coupling by which algorithms could become a self-learning AI. (And at the same time, the device has by no means been given anything like a self - that, too, is just a metaphor.)
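The contrast can be made concrete with a minimal sketch - purely hypothetical, in Python; the function names, the threshold and the toy data are my own illustration, not anything taken from the paper:

```python
import numpy as np

# An algorithm in the classical sense: a human wrote the rule down,
# so every step can be read, justified and audited.
def approve_by_rule(income: float, debts: float) -> bool:
    # Hypothetical threshold on standardized values, chosen by a person.
    return income - debts > 0.5

# A "self-learning" system: the decision rule is whatever numbers
# the fitting procedure happens to produce from the data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))               # toy standardized features
y = (X[:, 0] - X[:, 1] > 0.5).astype(float)  # toy record of past decisions

w = np.zeros(2)
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))          # logistic model
    w -= 0.1 * X.T @ (p - y) / len(y)         # weights come from data, not design

def approve_by_model(income: float, debts: float) -> bool:
    # The "rule" is w, an artifact of data and optimization; there is
    # no written-down justification to point to, only fitted numbers.
    return float(np.array([income, debts]) @ w) > 0.0
```

The point of the sketch is only this: approve_by_rule can be defended line by line, while approve_by_model is fixed by w alone, and whether the fit resembles the original rule is a separate, empirical question - exactly the gap the metaphor papers over.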

If you take a closer look, the two terms contradict each other. The paper presented here does not look closely enough. Unfortunately, the everyday corruption lurks precisely here: the term AI has crept in because it is opportune, its meaning is not really supplied, and the handy linguistic metaphor remains.

Perhaps it would be better to drop the term altogether, since mostly it only carries the misleading reference to an ideology, economically stimulating in its connotation. Measured against the correctness of linguistic description, replacing the dropped term would not demand much additional attention either.

But it would force one always to give the extremely important details the same weight in linguistic communication - above all, how precisely the correctness of the various implementation details was established. At the ideological level this is perceived as annoying. Yet it is exactly here that the consequences diverge, as regards the use that can be expected. Here the linguistic sloppiness is downright destructive.

The umbrella term AI is simply too cheap to bear any weight.

Merely repeating the sober description, instead of the shorthand metaphor of the self-learning machine, would help one learn, little by little, to get to the point of what is actually meant - if it is emphasized often enough. If humans are incapable of describing their lifeworld in the most important matters (and these are seldom technical in nature), then the projection onto a hoped-for mimesis is only the arrogance of a sorry figure. Taking that role more rigorously would help people themselves, as would being more precise about what they are actually doing there.

****

Think, for example, of the transition from language and thinking to the mere mimicry of language and thinking: far and wide there is no progress in this game - in the sense of anything concerning people who lead a life - but rather silliness, or the dangerous exaggeration of desires.

As with most of the modern history of technology, the first wish will unfortunately revolve around a single key topic - the hoped-for safeguarding of rule, totalitarian in its consequences - no matter how idiotic the silliness behind it.

And that, in turn, is an immensely relevant question for the description of the lifeworld - or for its corruption, if questions never advance that far and are not meant to be answered correctly.

A further dimension can at least be indicated: the narrowing of the research horizon of scientific description to quantity is only a cultural-historical contingency (classically dated to the time of Galileo). It makes no statement, at the level of a concept of truth, about whether other models of knowledge would not have been just as suitable for science as a social enterprise.

Now that the inner workings of machines have been driven to the point where the resulting behavior no longer lies within the prediction horizon, it is at the same time taken for granted that man should adopt this absolutization of quantifiability as a universal method for describing the lifeworld as well.

One cannot expect this, however, since at the same time the horizon of character required for reliable honesty in such investigations is in massive decline. Tinkering away at doomsday machines with zeal and enthusiasm is the first measure of importance, and it speaks of contempt for any sense of responsibility.

So much for the loose coupling of a concept of the lifeworld with the undertaking under review.

A committed policy has a long way to go.

Perhaps it can be stated tentatively: what gets implanted is only ever mimetics. One could call for guidelines to name mimesis, as an abstraction of desires, in isolation, and likewise the quality with which boundaries are observed and kept. That also requires being able to name the use of linguistic metaphors as opposed to the handling of factual values - which becomes tricky when the factual values in turn only represent linguistic metaphors. Quality of workmanship and honesty of implementation could help, but who wants to translate that into guidelines without becoming either arbitrary or strangling, if the staff are too out of practice in dealing with them?