February 8, 2025

Killer Robots Are Coming: Can Human Ethics Keep Up?

While on patrol in the mountains of Afghanistan in 2004, former US Army Ranger Paul Scharre spotted a girl about five years old walking curiously close to his sniper team amid a small herd of goats.

It quickly became clear that she was reporting their position to the Taliban in a nearby village. As Scharre points out in Army of None: Autonomous Weapons and the Future of War, her behavior was legally that of an enemy combatant, and killing her would have been within the scope of the laws of war. Ethically—morally—it was wrong, and Scharre and his team knew it. The girl moved on and their mission continued.

The questions he would later ask himself about that mission were unsettling ones: What if an autonomous weapon had been put in that situation? Could artificial intelligence (AI) ever distinguish the legal from the moral? 

Elusive Ethics: Robotic Warfare and Autonomous Weapons
An unmanned surface vehicle maneuvering on the Elizabeth River during a demonstration at Naval Station Norfolk. Source: Rebekah M. Rinckey/DoD

In recent years, the debate over how to ethically manage lethal autonomous weapons systems (LAWS) has come to the fore. As militaries the world over march toward an AI future, how will societies program machines with the insight needed to make complex, life-or-death decisions in volatile circumstances? Is such a thing as ethical warfare even possible? 

Scharre’s story is a sobering example of what getting the military AI question wrong could look like. That question is as important as it is urgent. The idea of deadly AI systems is not exactly new, but given the pace of technological advances, concerns that were theoretical a decade ago are rapidly becoming very real. Many now wonder whether society can scale its ethics quickly enough to keep those fears from becoming reality.

Likewise, the merits of robotic warfare warrant serious consideration. Technology of this kind can make battlefield decisions with mathematical speed and accuracy, and saving time often means saving lives. Machine “thinking” doesn’t degrade with fatigue or trauma, phenomena that routinely affect soldiers on the ground, and that consistency could help end conflicts sooner rather than later.

Robotic circuitry and sensors are not bound to the shifting subjectivities of the mind. 

AI researchers, tech leaders, and policymakers agree that autonomous weaponry is advancing at a breakneck pace. Despite this, the countries and organizations developing this tech are only just beginning to articulate ideas about how ethics will influence the wars of the near future. 

For example, the US Defense Department’s guidelines on LAWS state that the technology must “allow operators appropriate levels of human judgment over the use of force.” Similarly, the United Nations Institute for Disarmament Research calls for some degree of “meaningful human control” to be involved.

What these phrases mean in practice remains ambiguous. 

Stepping back from the details reveals several forces at play: the organizations galvanizing LAWS development, the bodies urging caution and promoting dialogue along the way, and those who oppose the very idea to begin with.

Love at first circuit 

Combat AI could potentially do a great deal of ethical good on the battlefield. Writing in Military Review, Amitai Etzioni, professor of sociology at The George Washington University in Washington, D.C., and Oren Etzioni, CEO of the Allen Institute for AI, argue that this technology could actually be the better path to ethical warfare in the future.

For starters, removing soldiers from the battlefield and replacing them with robotic weapons would save lives by definition. Autonomous systems would also be more reliable than humans in identifying and reporting ethical violations and war crimes. Soldiers whose brains have been overloaded with stress and trauma are prone to making mistakes. Robotic circuitry and sensors are not bound to the shifting subjectivities of the mind. 

A semi-autonomous UGV at the Northeastern Kostas Research Institute for Homeland Security in Burlington, Mass. Source: Todd Maki/U.S. Air Force

While it’s difficult to say exactly which motivations are driving the trend, military officials the world over appear eager to cultivate AI warfare technology to the fullest extent possible. In a recent interview with The Washington Post, Lt. Gen. Jack Shanahan, the first director of the Pentagon’s Joint Artificial Intelligence Center, said, “We’re going to [develop these weapons]. We’re going to do it deliberately.”

The National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, reinforces this attitude in its 756-page final report from March 2021. The document is as comprehensive an assessment of the United States’ position in the AI race as any that exists, urging the DoD to incorporate AI into every aspect of its operations by 2025. Perhaps unsurprisingly, much of the report references the increasing AI capabilities of countries like China and Russia.

It’s not hard to see why. According to a 2019 report by the Center for a New American Security (CNAS) on Chinese strategic thinking regarding AI, political and tech leaders in the country see LAWS as an inevitability. China already exports surveillance platforms as well as unmanned aerial vehicles (UAVs), and those efforts will likely broaden and intensify in the coming years.

Russia has no intention of being outpaced, either. The Stockholm International Peace Research Institute claims the country has already conducted successful tests of unmanned underwater vehicles (UUVs) potentially capable of delivering nuclear warheads. The International Institute for Strategic Studies supports this claim, adding that “up to 32 Poseidon systems are expected to be deployed aboard at least four submarines, although possibly not operationally until 2027.”

Highly advanced automated weaponry may not be a distant dream. As reported by The New York Times, Britain was already using rudimentary AI technology in its Brimstone missiles in Libya in 2011. The missile’s programming is relatively simple, allowing it to differentiate between types of vehicles, such as buses and tanks, to minimize civilian casualties.

Jump forward to 2021, and militaries are testing sophisticated autonomous drone swarms. The Modern War Institute at West Point states that “every leg of the US military is developing drone swarms,” adding that “Russia, China, South Korea, the United Kingdom, and others are developing swarms too.” Last year, Forbes reported that swarms with enough firepower could fall into weapons-of-mass-destruction (WMD) territory.

Droning on about deliberation

Swarms of autonomous machines with WMD-level capabilities are a disconcerting prospect to some, and voices of caution are emerging from a variety of political and social spheres. One such voice is Paul Scharre, now a senior fellow at CNAS, which liaises with leaders in the US government and the private sector on issues of national defense policy.

He’s also the person who led the team that penned the DoD’s directive on LAWS over a decade ago, establishing a non-binding ethical compass for military programs. 

Soldiers participate in an exercise on Fort A.P. Hill, Va., September 22, 2017. Source: Pfc. Gabriel Silva/U.S. Army

Speaking at Stanford University in early 2019, Scharre emphasized the need for the debate to be an interdisciplinary one, saying, “We need not just technologists but also lawyers, ethicists and social scientists, sociologists to be a part of that.” 

Such calls to action may have started to resonate. In October 2019, the Defense Innovation Board (DIB) published a list of five AI ethics principles to further guide DoD officials in orienting their weapons programs. The DIB is an independent advisory group composed of figures from Google, Microsoft, several universities (including the Massachusetts Institute of Technology and Carnegie Mellon), and public science figures like astrophysicist Neil deGrasse Tyson. It was set up in 2016 to bring the technological innovation and best practices of Silicon Valley to the U.S. military and to provide independent recommendations to the Secretary of Defense.

Tech companies investing in AI have a complicated relationship with LAWS. In 2018, Google promised that it wouldn’t work with entities using technology for weapons development due to ethical concerns. According to Forbes, however, Google’s parent company, Alphabet, is now doing business with companies dealing in autonomous robotics projects that could “directly facilitate injury.” 

The ghost in the machine 

For some, the very concept of an ethical weapon is an oxymoron, and simply regulating development isn’t good enough. The questions they raise are pointed: Is removing human agency, to any degree, from decisions that end human lives a wise road to go down? If the complexity of human values cannot be adequately represented in autonomous systems, can we conscientiously delegate such decisions to robots?

“When you look at all of the different concerns, any single one of them would be a reason to make you wonder why we would want to embark on the path toward autonomous weapons.” 

A 2017 poll of 25 countries conducted by the independent market research company Ipsos found that a majority of respondents (56 percent) opposed the use of autonomous weapons. In the lead-up to the UN Convention on Certain Conventional Weapons meetings in 2019, countries like Japan advocated for stricter regulations on LAWS.

One of the larger bodies to speak out against LAWS has been the European Union’s legislative branch, which released its guidelines for the military use of AI in January 2021. In a press release, European Parliament representatives called for what amounts to a ban on the technology. This position resembles that of the Campaign to Stop Killer Robots, a coalition of non-governmental organizations (NGOs) working to prevent the development and use of such weapons.

That group is coordinated by Mary Wareham of the Arms Division at Human Rights Watch, who worked with the International Campaign to Ban Landmines when it shared the 1997 Nobel Peace Prize. In an interview with Ethics in Tech in September 2020, Wareham said, “When you look at all of the different concerns, any single one of them would be a reason to make you wonder why we would want to embark on the path toward autonomous weapons.”

In spite of such opposition, the world seems to be moving forward with this technology. Fortunately, discussions about ethical robotic warfare are seeing an increasing presence on the world stage. 

A US Army autonomous weapons system passes over desert terrain during a test at Yuma Proving Ground, Arizona. Source: Pvt. Osvaldo Fuentes/U.S. Army

While major international powers like China continue to invest heavily in LAWS, for example, they appear not to be doing so blindly. In April 2020, the Brookings Institution noted that researchers and military scientists in the country are beginning to contend with the ethical implications of AI.

The US may also be giving these worries due attention. The NSCAI’s final report devotes an entire chapter to the preservation of civil liberties and civil rights in the age of autonomous warfare, offering a number of recommendations on how to protect them effectively. The commission argues that the “use of AI by officials must comport with principles of limited government and individual liberty [...]. In a democratic society, any empowerment of the state must be accompanied by wise restraints to make that power legitimate in the eyes of its citizens.”

An international consciousness surrounding ethical technology may indeed be growing. In September 2020, the DoD hosted the first meeting of the Artificial Intelligence Partnership for Defense (AIPD), a coalition including Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, the Republic of Korea, Sweden, and the United Kingdom. The partnership released a statement after that first meeting, defining its goal as bringing nations together to encourage and facilitate the responsible use of AI.

A balanced and realistic approach may be the best option available to those worried about this new technological horizon. Organizations and individuals that remain open to collaboration and keep building relationships will likely be crucial in shaping a more secure future. Speaking in a recent interview with the Future of Life Institute, Paul Scharre reaffirms this perspective in a distinctly human tone:

“If you’re worried about future AI risk, [create] the institutional muscle memory among the relevant actors in society—whether it’s nation-states, AI scientists, members of civil society, whoever it is. Coming together and reaching any kind of agreement, it’s probably really vital to start exercising those muscles today.” 

Disclaimer: The appearance of U.S. Department of Defense (DoD) visual information does not imply or constitute DoD endorsement.
