Research on artificial intelligence for military use should be restricted: that is the appeal more than a thousand scientists and inventors have addressed to the world. Their chief concern is the prospect of autonomous systems with the authority to decide to kill without human intervention. That one fundamental fact distinguishes such systems from all others.
More than a thousand scientists, engineers and businessmen from around the world have signed a letter calling for a ban on autonomous weapons systems endowed with artificial intelligence. Among the signatories are the famous British astrophysicist and theorist Stephen Hawking, the American inventor and entrepreneur Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and the linguist Noam Chomsky.
The letter was made public at an international conference on artificial intelligence held in Buenos Aires.
«Artificial intelligence technology has reached a point where the deployment of such systems will be practically feasible within the next few years. The danger is great: such autonomous weapons would be the third revolution in warfare, after the invention of gunpowder and nuclear weapons,» the document says.
The letter does not call for a ban on developing artificial intelligence for the defense industry as a whole; rather, the signatories argue, these technologies should not be autonomous or vested with independent decision-making authority.
«If the leading military powers continue to develop weapons systems with artificial intelligence, a global arms race will be inevitable. The result can be predicted now: autonomous weapons will become as commonplace tomorrow as the Kalashnikov is today,» the document says.
An autonomous system, as opposed to an automated one, requires no human intervention. Experts say the creation of truly autonomous weapons is still a long way off. Nevertheless, even at the current level of technology, scientists have raised a number of concerns: the command to kill a person in the Middle East can be given by an officer sitting in an office in the United States, and a UAV operator's awareness of what he is doing may differ greatly from that of a soldier on the front line. A separate problem is the possible use of unmarked drones in the interests of the security services.
Errors cannot be completely eliminated in either an autonomous or an automated system, but in the latter case it is at least possible to establish who is responsible for the consequences of a mistake.
«Reconnaissance and strike unmanned systems are automated systems. Target identification and the decision to use weapons rest with a human,» Denis Fedutinov, an expert on unmanned aerial vehicles, told the newspaper VZGLYAD. «And it is possible to identify the specific person who took this or that decision, and who is to blame if an error occurs. If we shift these questions onto automated systems, there will be no individuals to hold responsible. I think this is quite premature. At least for the foreseeable future, these functions should remain with a human.»
He said that UAV development involves automating a growing share of the tasks now performed by humans. «Today we are talking about automating the takeoff and landing phases, and target detection, identification and tracking. Later the task of automatically destroying targets will be posed as well, both for single aircraft and for groups acting together with other manned and unmanned vehicles. The detection-to-engagement cycle time must keep shrinking as the effectiveness of the corresponding systems grows. Meanwhile, errors in target identification are frequent today and often lead to civilian casualties. Such errors will probably persist in the near term, though perhaps to a lesser extent,» the expert said.
As robotics expert Alexei Kornilov, adviser to the all-Russian program «Robotics: Engineering and Technical Personnel of Russia», told the newspaper VZGLYAD, the creation and use of such weapons has been under discussion for several years. «But in my view, the problem lies not in robotics,» the expert said. Kornilov noted that there is at present no generally accepted definition of artificial intelligence, so specialists in different fields agree on suitable definitions only within their own narrow domains.
Regarding weapons with artificial intelligence, the expert explained that «this is most often understood as a system that can itself take the decision to destroy or damage a particular object».
«The systems that exist now do not reach (intellectually — VZGLYAD's note) even the level of insects such as bees, let alone a dog. But if we recall that the ancient Scythians, fighting the Persians, hurled beehives at the enemy, and that today we set a dog on a person assuming he is a criminal, should we not say that in those cases, too, an intelligent weapon is being used?» he argues.
Another example: when the pilot of an aircraft or a commando is ordered to destroy a target, do the pilot and the special forces soldier also act as an intelligent weapon?
«Technically it is very easy to put a gun on a chassis and make it remote-controlled. We can also give the system additional functions. For example, we can make it not merely radio-controlled but capable of performing a number of independent actions: travelling from point A to point B and sending the operator a picture of what is happening there along the way. If the operator notices something dangerous, he orders the system to open fire. The next step would be to give the machine the function of searching for dangerous objects itself. It would tell the operator: look, at this point I saw something moving; I presume this place is dangerous and had better be destroyed. The operator would then give the command to destroy it. Finally, one can program into the machine an algorithm by which it determines a potential danger itself, without an operator, and opens fire,» the expert said.
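Kornilov's ladder of increasing autonomy can be made concrete with a short sketch. The following Python fragment is purely illustrative, with hypothetical names and deliberately simplified logic; it models only the question of who, at each level, authorizes opening fire:

```python
from enum import Enum, auto

class Autonomy(Enum):
    """Levels of autonomy in Kornilov's example, from remote control upward."""
    REMOTE_CONTROL = auto()         # operator drives the machine and fires
    SUPERVISED_NAVIGATION = auto()  # machine drives A to B; operator watches video and fires
    SUPERVISED_TARGETING = auto()   # machine proposes targets; operator confirms fire
    FULL_AUTONOMY = auto()          # machine detects, decides and fires by itself

def may_open_fire(level: Autonomy, machine_flags_threat: bool,
                  operator_confirms: bool) -> bool:
    """Return True if firing is permitted at the given autonomy level.

    Only at FULL_AUTONOMY does the fire decision bypass the human entirely.
    """
    if level is Autonomy.FULL_AUTONOMY:
        return machine_flags_threat               # no human in the loop
    # At every other level a human must confirm, whoever does the driving.
    return machine_flags_threat and operator_confirms

# With supervised targeting, the machine's detection alone is not enough:
assert may_open_fire(Autonomy.SUPERVISED_TARGETING, True, False) is False
# At full autonomy, the same detection triggers fire with no confirmation:
assert may_open_fire(Autonomy.FULL_AUTONOMY, True, False) is True
```

Only the last level removes the human from the fire decision, and it is precisely that step the open letter asks to prohibit.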
At the same time, he considers it incorrect to speak of machines and robots as posing a threat to humans. As with a dog, the responsibility lies with the person who gives it the command to attack someone.
«This is not a function of artificial intelligence… One might just as well say a subway turnstile possesses it. It too must 'think' about whether or not to let you through, taking into account a number of factors, such as whether you have paid the fare. And here it is the same thing,» Kornilov says.
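The turnstile analogy reduces to a plain rule check. A toy sketch (hypothetical, not drawn from any real system) shows that such a «decision» is just a fixed condition over a couple of inputs, with nothing resembling intelligence:

```python
def turnstile_admits(fare_paid: bool, gate_operational: bool = True) -> bool:
    """The turnstile's whole 'decision': a fixed rule over two inputs.

    There is no learning or judgment here, which is Kornilov's point:
    rule-following alone does not make a system intelligent.
    """
    return gate_operational and fare_paid

print(turnstile_admits(fare_paid=True))   # True: admitted
print(turnstile_admits(fare_paid=False))  # False: refused
```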
Summing up, the expert said that the current state of science makes it technically possible to create a wide variety of very dangerous things. Technological development does not in itself create problems for humanity; it can only sharpen contradictions that already exist. Blaming technology for anything is foolish: the questions under discussion are «non-technical».
Scientists regularly voice fears about the uncontrolled development of autonomous systems. Two years ago Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, called for a universal moratorium on the production of lethal autonomous robotic systems (lethal autonomous robotics, LARS).
He recommended urging countries to «introduce a moratorium at the national level on the production, transfer, acquisition, deployment and use of LARS» until international rules governing this kind of weapon are developed. The use of such robots, Heyns said, «raises questions that have far-reaching consequences with regard to the protection of life in conditions of war and peace».
For now, the rapporteur stressed, no such legal framework exists, so it is unclear whether machines can be programmed so that «they will act in accordance with international humanitarian law», particularly as regards distinguishing between combatants and civilians.
In addition, the expert said, «it is impossible to develop any adequate system of legal responsibility» for the use of autonomous robots. «Whereas in the case of unmanned aerial vehicles a person decides when to open fire to kill, in LARS the on-board computer decides whom to target,» he noted.
In 2012 the human rights organization Human Rights Watch published a 50-page report entitled «Losing Humanity: The Case Against Killer Robots», warning of the dangers of creating fully autonomous weapons. The report, prepared by Human Rights Watch together with Harvard Law School, called on states to develop an international treaty that would completely ban the production and use of robotic weapons.
Human rights activists noted that fully autonomous military weapons do not yet exist and their adoption is still a long way off, but militaries in some countries, the United States for example, have already presented prototypes that mark a significant step toward such «killer machines».
The report notes that the United States leads this race, and that a number of other countries have joined it, including China, Germany, Israel, South Korea, Russia and the United Kingdom.
According to many experts, it will take countries another 20 to 30 years to reach full autonomy in combat vehicles.