Global AI and Data Science


AI coming to the US military

By Michael Mansour posted Wed October 02, 2019 11:04 AM

  

AI coming to the US Military [Link]

[Summary]

This objective exposé on the US military's plans for further advancing AI on the battlefield tells a somewhat surprising story. It is not going to be an all-automated system as one might think; rather, the military is mainly using this technology to help human operators be better warriors in an increasingly competitive landscape for military supremacy. The article breaks machine learning in military systems into two camps, defensive and offensive. The defensive strategy has fewer qualms about using automated systems for targeting attacking adversaries or locating and defusing mines, since the risks are lower and the systems offer resource-allocation efficiency. Offensive systems, on the other hand, must take into account impacts on civilian lives and are not expected to take autonomous lethal action, but rather to assist human operators in making decisions. For example, an AI system may prioritize the order in which certain targets in an area should be attacked, or combine sensor information from multiple units to provide a more holistic view of a hostile situation. Systems like these are not entirely new, though: the Navy has been using auto-targeting systems like the Phalanx on ships for 30 years, offering the ability to target, shoot, and track performance in real time.
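The decision-support idea described above — ranking candidate targets for a human operator rather than acting autonomously — can be pictured as a simple scoring heuristic. The factors and weights below are purely illustrative assumptions for the sketch, not anything from the article or any real system:

```python
# Illustrative sketch of decision-support target prioritization:
# each candidate gets a score from hypothetical threat and distance
# factors, and the ranked list is handed to a human operator to act on.

def prioritize(targets):
    """Rank candidates by a weighted score; weights are illustrative."""
    def score(t):
        # Higher-threat and closer candidates rank first (weights assumed).
        return 0.7 * t["threat"] + 0.3 * (1.0 / (1.0 + t["distance_km"]))
    return sorted(targets, key=score, reverse=True)

targets = [
    {"id": "alpha",   "threat": 0.9, "distance_km": 12.0},
    {"id": "bravo",   "threat": 0.4, "distance_km": 2.0},
    {"id": "charlie", "threat": 0.8, "distance_km": 30.0},
]

ranked = [t["id"] for t in prioritize(targets)]
print(ranked)  # ['alpha', 'charlie', 'bravo'] under these assumed weights
```

The point of a sketch like this is that the system only orders options; the choice of what, if anything, to do with the ranking stays with the human operator.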

image from TheAtlantic.com

 


[Commentary] 

The age of AI on the battlefield has arrived whether anyone likes it or not. The competitive landscape for global military dominance will continue to push ethical boundaries. Should the US be attacked by fully autonomous offensive systems originating from a country with differing values, the stance of reserving fully automated systems for defense only may shift in order to keep that dominance. At least the US DoD is trying to find a clear path forward, as indicated by its search for an ethicist in this area.


Another interesting challenge introduced by autonomy in military systems is akin to a self-driving car issue: as human operators come to rely on automated systems, will they lose their “edge” in operating those systems in manual mode? Airline pilots rely on automated flight systems, but must regularly undergo training to keep their skills sharp for times of need. The same argument applies to controllers of defensive systems: it is not unlikely that future adversarial attacks will attempt to render autonomous systems useless, requiring human intervention.


#GlobalAIandDataScience
#GlobalDataScience


Comments

Tue November 26, 2019 07:04 AM

Very good point, Michael. The military landscape is filled with promises of technological advancements that will secure assets more efficiently. These advancements aren't new, however, and we are now at a crossroads of AI advancement that underlines the need for human intervention.

This is especially true when dividing military technology into offensive and defensive categories.

AI-assisted targeting, stabilisation, and procedural efficiency are all examples of industrial AI offering a human actor more options and better decision-making data and tools.

We need to consider the broader meaning of "military", however. Long-range scanners, mobile Phalanx capabilities, autonomous drone ships, and AR headsets for soldiers are just the tip of the iceberg.

The military is a traditional command-and-control organisation that relies on multiple sources of data for the correct decision-making tree. The logistics of a military deployment, for example, would generate such an immense amount of data that it is tantamount to moving a small city across the globe. Talking only about AI in offensive or defensive technology overlooks the strategic advantages of a smoother-running deployment phase, management phase, and cost-cutting phase. These are major wins for AI in the military as well: the faster and more efficiently you can run an organisation, the more successful that organisation will be in its goals.

The need for expediency in working out frameworks and policies governing the use of AI in the military is vital because of the command-and-control nature of its work. Decisions can take time to work through, but once they are approved, they are generally adopted within days, if not hours. This speed can be a major boost for AI, but it can also shortcut policy makers and perhaps create a scenario in which technology is held back due to the sensitive nature of its goals.

The ethics of this technology lies with regulators understanding how these technologies are used and applying global governance both at the technical level and to their correct use. AI is not inherently good or bad; it is how we use it and understand it that will create a safer adoption.

Fri October 11, 2019 11:52 AM

And in Illinois: 

Hiring Robots Law Could Trigger Litigation

By Patricio Chile

(Bloomberg Law) -- 10/11/2019 06:57:03

Illinois employers using “hiring robots”—artificial intelligence tools to recruit and screen job applicants—will have to adapt to a new set of legal requirements under a law taking effect next year.

New Obligations: Employers in the state, starting Jan. 1, 2020, will have obligations under the Artificial Intelligence Video Interview Act, a first-in-the-nation statute that imposes transparency, consent, and data destruction duties on employers using AI to screen applicants for Illinois-based positions.

Potential Litigation: The law could serve up a fresh avenue for employment litigation in a climate already crowded with civil rights claims, wage and hour claims, and, at least in Illinois, class actions alleging abuses of employees’ biometric privacy.