Summary

  • James O’Donnell, a technology journalist for MIT Technology Review, has highlighted three issues surrounding AI adoption in the American military.
  • The US military has recently deployed generative AI to conduct intelligence and surveillance in the Pacific, via chatbot-style interfaces.
  • However, AI safety experts have raised alarms about whether large language models are reliable for analysing highly sensitive geopolitical intelligence.
  • Furthermore, generative AI is inching toward not just analysing data but also suggesting actions, such as producing target lists for strikes.
  • This raises the question of how far keeping a “human in the loop” can prevent the potential misuse of AI.
  • There are also questions about whether existing data classification rules remain appropriate, given the potential for AI to combine thousands of individually innocuous data points into new, more sensitive conclusions.
  • Finally, as AI becomes more capable and widespread, it is increasingly being used to inform high-level military decision-making, raising the question of how far up the chain of command AI should go.
  • These are open questions that the author will continue to monitor and report on as they develop.

By James O’Donnell
