The human creators and operators of autonomous robots must be held accountable for the machines’ actions. If a programmer’s error gets an entire village blown up, he should be criminally prosecuted, not get away scot-free or merely pay a monetary fine that his employer’s insurance company will end up covering. Similarly, if a future commander deploys an autonomous robot and the commands or programs he authorized it to operate under somehow contribute to a violation of the laws of war, or if he deploys the robot into a situation where a reasonable person could foresee that harm would occur, even unintentionally, then it is proper to hold the commander responsible.
As nations continue to invest in and develop autonomous weapons systems and artificial intelligence for battlefield purposes, calls for international regulation are increasing. Wars of the future could see AI-powered weapons, ships and aircraft deployed to the battlefield without human control or monitoring. A new report by Human Rights Watch, published in collaboration with Harvard Law School’s International Human Rights Clinic, argues that autonomous weapon systems would violate the Martens Clause, a widely acknowledged provision of international humanitarian law.
The degree of autonomy in weapons systems is steadily increasing, thanks to rapid progress in the fields of artificial intelligence (AI) and robotics. Machines are now capable of learning; they process experience by means of artificial neural networks modelled loosely on the human brain. The arms industry is making use of this. Weapons are becoming faster and more efficient, while the danger to the soldiers using them decreases. This is precisely what armies want. However, the boundaries are fluid. A robot that autonomously seeks, recognizes and defuses mines may be generally accepted, while a robot that autonomously seeks, recognizes and shoots people clearly contravenes international humanitarian law.