AI involvement in Middle East situation raises concerns, US Congress calls for strengthened regulation


Is artificial intelligence (AI) also involved in the Middle East situation?

After the U.S. military confirmed “expanded use of AI in military operations against Iran,” U.S. lawmakers on the 11th called for increased regulation and transparency of the technology.

U.S. Secretary of Defense Pete Hegseth stated this week that he hopes to place AI at the core of U.S. military operations. Admiral Brad Cooper, commander of U.S. Central Command, said on the 11th that AI can sift through large amounts of data within seconds to help commanders make faster decisions, but the final decision to strike a target still rests with humans.

Lawmakers are concerned

Previously, media reports indicated that Palantir’s Maven Smart System is using satellite data and surveillance information to help the U.S. military identify and prioritize targets in real time during operations in Iran.

However, as AI’s role in military actions continues to grow, Congress has expressed concerns about its reliability.

“We need a comprehensive and fair review to determine whether AI has caused harm or endangered lives in Middle Eastern conflicts,” said Jill Tokuda, a Democratic member of the House Armed Services Committee from Hawaii.

Some members of Congress also believe that clear safeguards should be established before AI is involved in military operations to ensure human participation in major decisions.

Sara Jacobs, a member of the House Armed Services Committee from California, said, “We have a responsibility to impose strict limits on the military’s use of AI and ensure human involvement in every decision to deploy lethal force, because a mistake in decision-making could be devastating for civilians and soldiers on the ground. AI tools are not 100% reliable and can malfunction in subtle ways.”

AI giants face bans

On the 5th, the Department of Defense notified American AI giant Anthropic that it would be designated a supply chain risk entity, and President Trump subsequently directed the U.S. government to cease cooperation with the company; the Department of the Treasury, the State Department, and the Federal Housing Administration will stop using all Anthropic products. In response, Anthropic filed two federal lawsuits against the U.S. government, one in the U.S. District Court in California and one in the U.S. Court of Appeals in Washington, D.C.

The designation means that not only will the Pentagon ban the use of Anthropic’s technology, but all U.S. defense suppliers and contractors must also prove they have not used Anthropic’s models in their work with the Department of Defense.

Anthropic responded that the Department of Defense’s designation of it as a “supply chain risk entity” is unlawful, procedurally flawed, and arbitrary, and that other federal agencies’ sanctions and restrictions without legal authorization are also illegal.

After Anthropic sued the U.S. government, major American tech companies including Google, Amazon, Apple, and Microsoft expressed support for Anthropic. In their court filings, these tech giants voiced concerns about the federal government’s actions. Microsoft warned that the government’s behavior could have widespread negative impacts on the entire tech industry, stating, “This administration has strongly promoted the development and growth of the AI ecosystem, but now there is a trend toward endangering that system.”
