Trump creates AI Task Force to oversee and challenge state regulation

The executive order says it revokes attempts to paralyze the AI industry and establishes an AI Litigation Task Force to challenge state AI laws inconsistent with national policy.
By Jessica Hagen, Executive Editor
President Donald J. Trump

Photo: Anna Moneymaker/Getty Images

The White House unveiled an executive order signed by President Donald Trump to create a national AI policy framework to avoid a patchwork of differing U.S. state regulations around the technology. 

"First, State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups," the order reads.  

"Second, State laws are increasingly responsible for requiring entities to embed ideological bias within models." 

Third, the order says that states' laws can impinge on interstate commerce as they impermissibly regulate beyond a state's borders. 

The executive order directs Attorney General Pam Bondi to form an "AI Litigation Task Force" within 30 days of the date of the order, which will challenge state AI laws that are inconsistent with national policy, including those that "unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment." 

Additionally, within 90 days, U.S. Secretary of Commerce Howard Lutnick is to examine existing state AI laws that conflict with national policy and provide a list to the Attorney General of onerous laws that should be referred to the task force. 

The order says the list may also include state AI laws that promote innovation consistent with national policy. 

The Secretary of Commerce and Attorney General are instructed to consult throughout the process with the Assistant to the President for Economic Policy, the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President and Counsel to the President.

The order says U.S. states complying with national policy may be eligible for additional funding, and those with onerous AI laws will be ineligible for funds under the Broadband Equity, Access, and Deployment (BEAD) Program to the maximum extent allowed by federal law.

The White House also calls for legislative recommendations to Congress to establish a uniform federal AI policy framework that would pre-empt conflicting state laws.

"The resulting framework must forbid State laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected and communities are safeguarded. A carefully crafted national framework can ensure that the United States wins the AI race, as we must," the order said. 

THE LARGER TREND

Numerous states have passed AI laws, particularly pertaining to AI use in healthcare. 

An Illinois state law passed in August explicitly prohibits the use of AI to provide independent therapy or psychotherapy services without a licensed professional's oversight. 

A Nevada Assembly bill, which became effective in July of this year, also restricts the use of AI in providing direct mental and behavioral healthcare or claiming to be a licensed professional.

A Utah bill signed by Republican Gov. Spencer Cox in March regulates AI mental health chatbots by requiring disclosure that a user is interacting with AI, not a human.

California has established numerous AI laws around healthcare and chatbots, including a law that became effective in January that mandates that healthcare providers disclose their use of AI when creating patient-facing clinical communications.

Two other California laws, both effective in January 2026, mandate that companies clearly notify users that chatbots are AI, ban chatbots from using terms that imply they hold a healthcare license, and require safety protocols that prevent self-harm content from being displayed and instead refer individuals to crisis resources.

The Colorado AI Act, slated to go into effect in June 2026, is not healthcare-specific, but healthcare is included as a high-risk AI category. The act requires AI systems that have a consequential impact on healthcare to meet specific safeguards against bias and discrimination.