AI Regulation in the U.S.: Navigating Post-EO 14110
The Trump administration revoked EO 14110, shifting the U.S. toward a market-driven AI strategy to spur innovation and investment.
As the Trump administration revokes Executive Order 14110, the U.S. shifts toward a market-driven AI strategy, departing from the Biden administration’s regulatory framework. While proponents see this as a catalyst for innovation and economic growth, critics warn of increased risks, regulatory fragmentation, and strained transatlantic relations. With Europe reinforcing its AI Act and states like California exploring their own regulations, the future of AI governance in the U.S. remains uncertain. Will deregulation accelerate progress, or does it open the door to new challenges in ethics, security, and global cooperation?
Just days after taking office, Donald Trump, the 47th President of the United States, issued a series of executive actions aimed at dismantling key initiatives from the Biden administration. Among them was the revocation of Executive Order (EO) 14110, a landmark policy that established a framework for AI governance and regulation.
This decision marks a turning point in U.S. AI policy. For its supporters, it is a necessary reform; for its critics, it is a dangerous setback. While EO 14110 aimed to structure AI adoption by balancing innovation and oversight, its repeal raises critical questions about the future of AI in the United States and its global impact.
Background on Executive Order 14110
Executive Order 14110 was issued on October 30, 2023, under the Biden administration. This major initiative aimed to regulate the development and deployment of artificial intelligence. Its goal was to balance innovation, security, and economic stability while ensuring that AI systems remained reliable, safe, and transparent.
In the Biden administration’s vision, EO 14110 was designed to address key concerns such as algorithmic bias, misinformation, job displacement, and cybersecurity vulnerabilities. It was not intended to impose direct restrictions on the private sector but rather to establish security and ethical standards, particularly for AI used by federal agencies and in public sector contracts, while also influencing broader AI governance.
From an international perspective, EO 14110 also aimed to strengthen the United States' role in global AI governance. It aligned with the European Union’s approach, particularly as the EU was developing its AI Act. The order was part of a broader transatlantic effort to establish ethical and security standards for AI.
"Artificial Intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.
At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security." (EO 14110 - Section 1)
EO 14110 as Part of a Broader AI Strategy: Continuity in Biden’s Policy
It is important to understand that EO 14110 was not an isolated initiative. It was part of a broader strategy built on several existing frameworks and commitments.
- Blueprint for an AI Bill of Rights (2022). A foundational document outlining five key principles: safe and effective AI systems, protections against algorithmic discrimination, data privacy, transparency, and human alternatives.
- Voluntary AI Commitments (2023-2024). Major tech companies, including Google, OpenAI, and Microsoft, agreed to self-regulation measures focusing on AI transparency, security, and ethics.
- National Security AI Strategy (2024). The Biden administration made AI a priority in cybersecurity, military applications, and critical infrastructure protection.
It is worth noting that even after the revocation of EO 14110, these initiatives remain in place, ensuring a degree of continuity in AI governance in the United States.
Objectives and Scope of EO 14110
Executive Order 14110 pursued several strategic objectives aimed at regulating AI adoption while promoting innovation.
It emphasized the security and reliability of AI systems by requiring robustness testing and risk assessments, particularly in sensitive areas such as cybersecurity and critical infrastructure. It also aimed to ensure fairness and combat bias by implementing protections against algorithmic discrimination and promoting ethical AI use in hiring, healthcare, and justice.
EO 14110 provided for training, reskilling, and worker-protection programs to help workers adapt to AI-driven changes. It also aimed to protect consumers by preventing fraudulent or harmful AI applications and ensuring that AI use remained safe and beneficial.
Finally, the executive order aimed to reinforce international cooperation, particularly with the European Union, to establish common AI governance standards. However, it’s important to note that it did not aim to regulate the entire private sector but rather to set strict ethical and security standards for AI systems used by federal agencies.
Summary of EO 14110’s Key Principles
For a quick overview, here are the eight fundamental principles on which it was built:
- Ensure AI is safe and secure.
- Promote responsible innovation and competition.
- Support workers affected by AI’s deployment.
- Advance equity and civil rights.
- Protect consumer interests in AI applications.
- Safeguard privacy and civil liberties.
- Enhance AI capabilities within the federal government.
- Promote U.S. leadership in global AI governance.
Why Did the Trump Administration Revoke EO 14110?
On January 20, 2025, the Trump administration announced the revocation of EO 14110, arguing that it restricted innovation by imposing excessive administrative constraints.
The White House justified this decision as part of a broader push to deregulate the sector, boost the economy, and attract AI investment. The administration made clear its preference for a market-driven approach. According to Trump, private companies are better positioned to oversee AI development without federal intervention.
Clearly, this shift marks a geopolitical turning point. The United States is moving away from a multilateral approach to assert its dominance in the AI sector.
However, this revocation does not mean the end of AI regulation in the United States. Other federal initiatives, such as the NIST AI Risk Management Framework, remain in place.
"Republicans support AI development rooted in free speech and human flourishing." (The 2024 Republican Party by Reuters)
Consequences of the Revocation in the United States
The repeal of EO 14110 has both immediate effects and long-term implications, reshaping the future of AI development in the United States. From the Trump administration’s perspective, this decision removes bureaucratic hurdles, accelerates innovation, and strengthens U.S. competitiveness in AI. Supporters argue that by reducing regulatory constraints, the repeal allows companies to move faster, lowers compliance costs, and attracts greater investment, particularly in automation and biotechnology.
On the other hand, without a federal framework, the risks associated with the development and use of AI technologies increase. Algorithmic bias, cybersecurity vulnerabilities, and the potential misuse of AI become harder to control without national oversight. Critics also warn of weakened worker and consumer protections, as the end of support programs could deepen economic inequalities.
In practical terms, regulation is becoming more fragmented. Without a federal framework, each state could, and likely will, develop its own AI laws, making compliance more complex for businesses operating nationwide. Some see this as an opportunity for regulatory experimentation; others fear legal uncertainty, loopholes for opportunistic players, and increased tensions with international partners.
Impact on Europe
The revocation of EO 14110 also affects global AI governance, particularly in Europe. Transatlantic relations are likely to become strained, as the growing divergence between U.S. and European approaches will make regulatory cooperation more challenging.
European companies may tighten their compliance standards to maintain consumer trust, which could influence their strategic decisions. Meanwhile, the European Union may face pressure to adjust its AI Act, although its regulatory framework remains largely independent of that of the United States.
Conclusion
The revocation of Executive Order 14110 is more than just a policy shift in the United States. It represents a strategic choice, favoring a deregulated model where innovation takes precedence over regulation. While this decision may help accelerate technological progress, it also leaves critical questions unanswered: Who will ensure the ethics, security, and transparency of AI in the United States?
For Europe, this shift deepens the divide with the U.S. and strengthens its role as a "global regulator" through the AI Act. The European Union may find itself alone at the forefront of efforts to enforce strict AI regulations, risking a scenario where some companies favor the less restrictive U.S. market.
More than a debate on regulation, this revocation raises a fundamental question: In the global AI race, should progress be pursued at all costs, or should every advancement be built on solid and ethical foundations? The choices made today will shape not only the future of the industry but also the role of democracies in the face of tech giants.
One More Thing
The revocation of EO 14110 highlights a broader debate: who really shapes AI policy, the government or private interests? While the U.S. moves toward deregulation, California’s AI safety bill (SB 1047) took the opposite approach, proposing strict oversight for advanced AI models. But as an investigation by Pirate Wires reveals, this push for regulation wasn’t without controversy.
Dan Hendrycks, a key advocate for AI safety, co-founded Gray Swan, a company developing compliance tools that could directly benefit from SB 1047’s mandates. This raises a crucial question: When policymakers and industry leaders are deeply intertwined, is AI regulation truly about safety, or about controlling the market?
In the race to govern AI, transparency may be just as important as regulation itself.