The US uses AI to identify airstrike targets in Iraq and Syria

A US military official said American forces used AI to identify targets during airstrikes in Iraq and Syria earlier this month.

Schuyler Moore, chief technology officer of US Central Command (CENTCOM), said on February 26 that the command used machine learning algorithms to help build the target lists for 85 airstrikes that day.

The Pentagon said US bombers and fighter aircraft struck seven facilities in Iraq and Syria in the campaign, aiming to destroy or significantly damage rocket, missile, and drone storage sites as well as command headquarters.

“We used computer vision to confirm the locations of suspected threats,” Ms. Moore said. “US forces have been searching for rocket launchers belonging to hostile factions in the region.”

The AI system has also been deployed to identify rocket launchers in Yemen and ships in the Red Sea.

Ms. Moore said the US military began deploying computer vision systems in operations after the Hamas attack on Israel last year. “Everything changed after October 7, 2023. We immediately shifted posture and began operating at a much higher tempo than before,” she said.

The US military uses artificial intelligence (AI) algorithms to identify potential targets; soldiers then operate the weapon systems that attack them. The US may have used this software to locate rockets, missiles, unmanned vehicles, and facilities belonging to enemy armed groups.

Moore emphasized that human oversight is maintained throughout the process, with personnel responsible for approving the AI’s suggestions. “We never let an algorithm do the work from start to finish. Every step involving AI goes through human verification,” Moore noted.
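The workflow Moore describes — a model proposes candidates, and a person must explicitly sign off before anything is acted on — is the standard human-in-the-loop pattern. As a loose illustration only (every name, label, and threshold below is invented for the sketch and has nothing to do with CENTCOM’s actual system), it might look like:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    """A single candidate produced by a computer-vision model (hypothetical)."""
    label: str
    confidence: float

def propose_candidates(detections: List[Detection], threshold: float = 0.8) -> List[Detection]:
    """Model stage: filter raw detections down to high-confidence *suggestions*."""
    return [d for d in detections if d.confidence >= threshold]

def human_review(suggestions: List[Detection],
                 approve: Callable[[Detection], bool]) -> List[Detection]:
    """Human stage: every suggestion passes through an explicit approval
    callback; nothing is confirmed without a person signing off."""
    return [d for d in suggestions if approve(d)]

# Example run: the model narrows three raw detections to two suggestions,
# and a (simulated) reviewer approves only one of them.
raw = [
    Detection("launcher", 0.93),
    Detection("truck", 0.65),
    Detection("launcher", 0.88),
]
suggested = propose_candidates(raw)                      # 2 candidates remain
confirmed = human_review(suggested, lambda d: d.confidence > 0.9)
```

The key property is that the model’s output is advisory: the `human_review` gate sits between suggestion and action, matching Moore’s point that no step runs “from start to finish” without verification.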

CENTCOM has also tested an AI recommendation engine to see whether it could suggest the best weapons for a campaign. However, the system was judged to “often fall short” of human performance when recommending attack sequences or weapon types.

The Pentagon is stepping up efforts to integrate and test AI combat capabilities. It announced the idea in 2017 when it launched Project Maven and sought contractors who could develop software to recognize targets in video recorded by drones. However, Google withdrew from the project after its employees objected to using AI for military purposes.

Craig Martell, director of the Pentagon’s AI and digital division, affirmed last week that the agency “pursues the responsible application of AI models” and is “identifying adequate safeguards and mitigating national security risks caused by issues such as poor data management.”

“We must also consider the extent to which adversaries adopt this technology or attempt to prevent us from using AI-based solutions,” Mr. Martell said.

According to The Register, AFP, and Reuters