
Ethics Question Assignment

ECE 3400 Fall 2018: Team 19

IS “A BAN ON OFFENSIVE AUTONOMOUS WEAPONS BEYOND MEANINGFUL HUMAN CONTROL” GOING TO WORK?

By Team 19 – Asena Ulug, Cynthia Zelga, Laasya Renganathan, John Chukwunonso Nwankwo, Robert

In March 2014, the Russian Federation annexed the Crimean peninsula from Ukraine in a military takeover. Ukraine and many world leaders considered it a violation of international law. Sanctions were imposed on Russia, but the takeover strengthened the country's position in the petroleum economy, in which it has continued to trade. In April 2015, the Islamic Republic of Iran entered a joint agreement to redesign, convert, and reduce its nuclear facilities, earning billions of dollars and oil revenue in return, yet it has continued to pursue other nuclear designs and sell them around the world. Today, the concern is investment in autonomous weapons for warfare, and the truth remains that it is only a matter of time before warring nations go all out, making a ban impossible because of the stakes involved in war.

In our case study, the 'Musk/Hawking Open Letter on Autonomous Weapons,' an open letter signed by 3,978 Artificial Intelligence (AI) and robotics researchers and 22,539 others, the authors conclude that AI has great potential to benefit humans in many ways other than warfare; the ethical question, then, is why we should ever consider it for warfare at all. Wars abound in our world today, almost inevitably, as a means of dominance and of resolving conflicts between tribes and nations. Ethically, we would be glad to see conversations about countries taking other approaches to produce the best outcomes for their citizens. However, we must note that war has never left any nation in a fair condition, so why engage in it at all? This brings us to the bigger question of who benefits from these wars in the first place.

In most cases of warfare, as the examples in our first paragraph show, the disputes leading to war arise from a government's urge to dominate, and they usually end up enriching the stakeholders, who gain regional and political control. Who, then, are these stakeholders? They are the major players in war: nations that continuously seek dominance and profit from wartime conditions, and it is only a matter of time before offensive autonomous weapons beyond meaningful human control take over the dirty work of waging war for them. Today, Israel wars against Palestine and Syria against its own people, all in a bid for territorial control, yet the United Nations Security Council can do little about it because the member states with veto power are the ones engaged in these territorial battles.

Just as in past incidents with autonomous cars, where the safety of the owners of high-end or driver-assist vehicles is prioritized in a crash, investors want to invest in a product that appeals to their customers, in this case through maximum safety for the buyer. Investors and stakeholders make their decisions to maximize profit, and war and arms dealing are certainly part of that. We would love to argue the contrary by applying the utilitarian test of whether war is waged so as to produce the best outcomes, the justice test of whom the actions of war are fair to, or the virtue test of whether the actions taken reflect well on anyone's character. However, the truth remains that, from a stakeholder's point of view, the application of AI to war makes a ban on autonomous weapons difficult to enforce: the ethical reasoning should first address why there is a war at all, and since wars abound, it is only a matter of time before the warring nations go all out and the stakeholders maximize their profits.

A ban on autonomous weapons will not be entirely effective, chiefly because bans on other weapons, such as chemical weapons, have not been entirely effective either. While such bans are followed by the vast majority of governments, there are still outliers who develop the technology despite the commonly agreed-upon prohibition. The Assad regime's use of sarin gas in Syria is one recent example of these regulations being ignored. Another factor is popular sentiment against such weapons. With nuclear and chemical weapons, we are now far enough removed in time to see the devastating and inhumane outcomes their use can have. We have no such examples for AI warfare, however, as it is still hypothetical. Until the tangible effects of an action involving these systems come about, the grounds for a ban are also hypothetical. We are inclined to agree that developing intelligent machines of war is a bad idea, but that cannot be demonstrated as clearly as it has been for chemical and nuclear weapons, simply because there is no concrete data and there are no examples of the drawbacks AI warfare presents to humanity.

It is also pertinent to touch on the idea of robots being used as weapons by terror groups rather than by larger militaries. This passage in the open letter is of particular interest: “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” The passage cites four possible applications of autonomous weapons as instruments of terror: assassination, destabilization, subduing populations, and ethnic cleansing. While the latter two are certainly frightening, and one could easily imagine autonomous weapons being used for those purposes, the former two are less believable. Simply put, using an autonomous weapon for covert operations seems far-fetched. Take assassination as an example: there is little an autonomous weapon can do while remaining unnoticed that a conventional weapon cannot. Flying a drone to a target and firing a missile is out of the question unless the target is so poorly defended that you might as well send a human. For a more plausible example, suppose you wanted to use a robotic sniper to assassinate a target. It would likely involve a weapon of similar caliber to a human with a sniper rifle and, as such, have a similar chance of being discovered. If it were sufficiently advanced, it might have a better chance of succeeding, but it would take an even greater degree of autonomy for it to keep itself from being discovered. Covert explosives have a similar problem: no matter how well a device can recognize a specific target's face, a bomb is a bomb, and it is just as likely to be spotted as one with a timer or a remote detonator, even if, again, it is more likely to succeed. This is not to say that covert autonomous weapons could never become a concern, just that they would require much more effort to develop than a replacement for a conventional weapon. Covert autonomous weapons will also become more of a problem as people grow accustomed to robots being around humans and become less likely to notice one as out of the ordinary. For the time being, though, this is not what we should be concerned about with autonomous weapons.