06 June 2018
Google placed an internal ban on creating AI for the military


Google’s CEO Sundar Pichai made an important statement: the corporation’s employees will not work on any AI systems that could be used for military purposes. In a post on the company’s corporate blog, Pichai laid out seven basic principles guiding the organization’s work in AI development.

The principles shared by Google’s CEO were formulated after the organization’s employees protested against cooperation with the Pentagon. At the time, Google’s senior management intended to work on a project developing an AI system for military UAVs.

The CEO of the major IT company said that current AI and machine learning technologies have gained huge significance for society. AI-equipped systems are applied for various purposes: weather forecasting, fighting cancer, agriculture, and more.

Pichai acknowledged that AI is highly important, and so, therefore, are the questions of how it will be applied in the future. The fields where AI is developed and applied will have a great impact on society. He said that Google, as a leading organization in the development of AI systems, bears a major responsibility here. That is why, from now on, the corporation will abide by seven grounding principles.

They proclaim that the company’s employees will conduct AI research with the utmost responsibility. They will not take on projects where AI would be used for “potentially dangerous” purposes. The main task of AI development should be benefiting people and promoting their safety. Pichai noted that AI should always remain under human control, and said that the AI technologies created by Google’s specialists will stay fully subject to it. The principles also cover confidentiality of information and safety in its use.

In addition, the CEO identified four areas in which the company will never develop AI systems. These include projects that could be potentially damaging to society or that violate human rights and accepted norms. Above all, Google will not work with the military or develop AI for it.

Following this, Project Maven, which had been developing an AI system for military UAVs, was shut down.
