Will artificial intelligence become a global threat?
Why some systems of artificial intelligence cannot bring about a collective artificial mind
Although philosophical investigation is essential to the study of artificial intelligence (AI), interdisciplinary communication in this field remains far from perfect. AI has become a subject of investigation for sciences of every kind, yet the range of interests is so varied that researchers often fail to understand one another. The need to coordinate reasoning at a more general, philosophical level is therefore growing: how we conceive the essence of human intellect, cognition, and consciousness determines the direction of research in narrower specialized fields.
Today AI is more popular than ever before. Transnational software manufacturers (such as Microsoft), international digital fairs (such as CeBIT), and iconic world-famous figures (Elon Musk, Steve Wozniak, Stephen Hawking) all, in one way or another, promote AI on a global scale.
AI technologies are now implemented in a wide variety of fields. One of the most interesting, recent, and large-scale examples of AI application is the project known as Society 5.0.
The Japan Business Federation (Keidanren) presented the national government with a document proposing a strategy for solving the problems of the future: population ageing, population decline, a falling birthrate, and so on. The concept of Society 5.0 was proposed as a result (hereinafter quoted from an interview given by Uemura Noritsugu, head of external and governmental relations at the Mitsubishi Electric corporation, to the Economic Strategies magazine): Society 5.0 is the stage that follows the information society; it represents the optimization of the resources not of a single person but of society as a whole, through the integration of physical and cyber spaces.
The point is that physical information collected from "things" (IoT, the Internet of Things; IoE, the Internet of Everything) is transferred through next-generation 5G networks into cyberspace as Big Data, where AI analyzes it and produces solutions that are optimal in terms of financial, industrial, logistical, and other efficiency. The technology is applicable to production, finance, healthcare, transport, the public sector, and so on.
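The sense-analyze-actuate loop described above can be sketched in a few lines of code. This is a minimal illustration under assumed names (SensorReading, collect, choose_action, the capacity threshold), not an implementation of any real Society 5.0 platform: readings from physical "things" are aggregated in cyberspace, and an analytic step selects a response that is "optimal" against some efficiency criterion.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One measurement arriving from a physical 'thing' (IoT layer)."""
    device_id: str
    metric: str      # e.g. "power_kw", "traffic_density"
    value: float

def collect(readings: list[SensorReading], metric: str) -> list[float]:
    """Aggregate the physical-layer data for one metric (the 'Big Data' step)."""
    return [r.value for r in readings if r.metric == metric]

def choose_action(loads: list[float], capacity: float) -> str:
    """Stand-in for the AI analysis step: pick a response that keeps
    total demand efficient relative to available capacity."""
    total = sum(loads)
    if total > capacity:
        return "shed_load"       # demand exceeds supply: curtail
    if total < 0.5 * capacity:
        return "store_surplus"   # capacity underused: buffer the surplus
    return "maintain"

readings = [
    SensorReading("meter-1", "power_kw", 40.0),
    SensorReading("meter-2", "power_kw", 35.0),
    SensorReading("cam-7", "traffic_density", 0.8),
]
action = choose_action(collect(readings, "power_kw"), capacity=100.0)
print(action)  # -> maintain
```

The point of the sketch is the shape of the loop, not the decision rule: the "intelligence" here is confined to a fixed, given subject field (power management), which is exactly the limitation discussed below.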
Despite the optimistic statements, I see no grounds for claiming that we are approaching the conscious modeling of self-sufficient AI (as was attempted in the 1960s-1980s). Society 5.0 today is only a project, and all other examples of AI application can be reduced to its operation within "closed" or "open" subject fields.
This is because modern AI technologies rest on the idea of artificial intelligence as a problem-solving tool. AI in this sense is the ability to make decisions within limited (given) subject fields (SF), similar to the decisions made by natural intelligence (NI). The essence of NI, however, is somewhat different.
Let me propose a somewhat unconventional definition of NI: NI is the ability of a human being to adjust their behavior, through symbolic forms, to the requirements of the social entity to which they belong.
I believe this definition applies even to those manifestations of non-human intelligence (such as animal intelligence) that demonstrate the ability to live side by side with humans.
This definition takes into account that we:
a) never deal with some "canonical" subject field (one that can be divided into a finite number of subfields), but only with beliefs, views, and social verdicts, i.e. the social reality (SR);
b) single out subject fields not on the basis of an imperative to learn about the world or adapt to it, but on the basis of an imperative to adapt to a social hierarchy (maxims that can be described as the "will to power", the striving for success, etc.). The formation of competing systems of activity is therefore the essence of the social process ("life"): adaptation is tied to competition. An optimal organization of living space in which everyone receives the available maximum is unattainable; hence wars and conflicts arise.
Thus, the basis of AI should be not the ability to make ever more "expert" decisions within the limits of a given subject field, but the ability to single out subject fields relevant to social meta-aims.
Certain subject fields for AI application cannot be synthesized in the SR, if only because assigning their ontologies requires an ES connected with the "purpose of life" (PL). Perceptions of the PL, however, are not explicit; they are acquired together with the SR, which today is identified with "nature" but is not equivalent to it.
There can be an infinite number of SFs within an SR, and there can be multiple SRs, each with its own SFs.
For AI there must be an idea of a mission that requires self-preservation for the sake of prolonging its activity. The AI layer of Society 5.0 could be such a field: the Internet of Things turns the SR into a symbolic reality of the second order and entrusts AI with the mission of preserving it (and of preserving itself in order to maintain it). This possibility contains both an opportunity and a threat.
Here the requirements of the definition of NI given above can be fulfilled, but within a supra-human ontology.
It is essential to ensure the possibility of controlled self-formation of AI by constructing a purpose-of-life horizon, that is, by explicating the core values and aims of activity (relative to which SFs will play the role of variable subsystems).