Military is the missing word in AI safety discussions

Attempts by governments to regulate the technology must look at its use on the battlefield.

The Israel Defence Forces have used an AI-enabled program called Lavender to flag targets for drone attacks. PHOTO: EPA-EFE

Western governments are racing each other to set up artificial intelligence (AI) safety institutes. The UK, US, Japan and Canada have all announced such initiatives, and the US Department of Homeland Security added an AI Safety and Security Board to the mix only last week. Given this heavy emphasis on safety, it is remarkable that none of these bodies governs the military use of AI. Yet the modern-day battlefield is already demonstrating clear AI safety risks.

According to a recent investigation by the Israeli magazine +972, the Israel Defence Forces (IDF) have used an AI-enabled program called Lavender to flag targets for drone attacks. The system combines data and intelligence sources to identify suspected militants. The program allegedly flagged tens of thousands of targets, and the resulting strikes in Gaza caused excessive collateral deaths and damage. The IDF denies several aspects of the report.
