Sen. Elissa Slotkin (D-Mich.) introduced the AI Guardrails Act on Tuesday, proposing new restrictions on the Pentagon's use of artificial intelligence in military operations. The legislation would prohibit the Department of Defense from deploying autonomous weapons systems that can use lethal force without human authorization and would place additional limits on AI's role in nuclear weapons systems.

The bill represents a significant policy intervention in military AI development and could reshape how the Pentagon approaches autonomous weapons technology. If enacted, it would codify human oversight requirements for lethal autonomous systems, keeping decision-making authority in human hands in life-and-death scenarios.

Slotkin's proposal reflects growing Democratic concern about unregulated military AI applications, though Republican positions on the legislation remain unclear. Its prospects will likely turn on how Congress weighs military modernization against AI safety, with defense hawks potentially arguing that such restrictions would hamper military readiness.

The introduction comes amid broader public debate over AI governance and military applications, with polls showing Americans increasingly concerned about autonomous weapons systems. The legislation could influence upcoming defense appropriations discussions and shape campaign messaging around military technology oversight.

The bill follows previous congressional efforts to regulate military AI, though Slotkin's approach targets lethal autonomous systems specifically rather than broader AI applications. Military analysts describe it as a more targeted regulatory approach than earlier proposals for comprehensive AI restrictions.