U.S. President Donald Trump said on Monday that he expects to sign an executive order this week establishing a single nationwide standard for artificial intelligence, a move he argued is necessary to replace the growing web of state-level AI laws. The announcement marks a significant step in the administration’s effort to shape the national regulatory structure around rapidly advancing AI systems, and it comes amid intensifying debate over who should hold authority to set protections for consumers and businesses.
In a message posted on his social media platform, Trump said the United States must operate under “one rulebook” if it intends to remain a global leader in artificial intelligence. He added that companies cannot be expected to obtain approvals in all fifty states each time they develop or deploy new technology. His comments reflect long-standing concerns among technology firms that a state-by-state regulatory landscape could slow innovation, complicate compliance, and undermine the country’s competitiveness in an area viewed as strategically important.
While Trump did not outline the specific provisions of the upcoming order, people familiar with earlier discussions have said the administration has considered instructing federal agencies to challenge state AI laws in court and potentially leverage federal funding to discourage states from moving ahead with their own regulations. Such an approach would likely be welcomed by major technology companies that have repeatedly urged Washington to adopt a single national standard. Several prominent developers and investors have argued that a fractured regulatory environment could weaken America’s ability to keep pace with global competitors.
However, the push for a unified federal rule has met resistance from state leaders of both political parties. Many governors and legislators say they must retain the authority to enact protections tailored to the needs of their residents. In recent years, states have pursued a wide spectrum of AI-related policies, ranging from bans on nonconsensual sexually explicit imagery to restrictions on unauthorized political deepfakes. Other measures have sought to prevent discrimination arising from automated decision-making. California, home to many major AI companies, is preparing to require large developers to disclose how they plan to limit potential catastrophic risks linked to advanced systems.
The divide between state and federal approaches has widened in recent months. Trump previously urged Congress to include language in a defense bill that would block state-level AI laws. That proposal faced significant pushback from lawmakers and attorneys general who argued that eliminating state authority would leave consumers vulnerable. They noted that Congress has not yet established the protections many experts believe are needed for a technology that is increasingly embedded in sectors such as health care, employment, finance, and education.
Earlier this year, the Senate overwhelmingly rejected an effort to restrict states from passing AI-related legislation, with bipartisan opposition rooted in concerns that sweeping federal preemption would remove vital safeguards. Consumer groups have also warned that preventing states from acting could slow the development of meaningful oversight at a time when AI systems are evolving rapidly.
As the White House prepares to release the executive order, the debate over the appropriate balance of federal and state authority in regulating artificial intelligence is expected to intensify. Technology companies continue to call for uniform national standards, while state officials maintain that their role is essential to protecting residents in an era of swiftly changing digital tools. The administration’s decision in the coming days will likely shape the next phase of the United States’ approach to AI governance.