Trump’s AI Order Sets 120 Days for Agencies to Shift to “Ideologically Neutral” Models

Image Credit: Abe McNatt | Official White House Photo

U.S. President Donald Trump issued Executive Order 14319 on July 23, 2025, titled “Preventing Woke AI in the Federal Government”, directing federal agencies to procure only large language models that adhere to principles of truth-seeking and ideological neutrality. The directive, signed at the Andrew W. Mellon Auditorium in Washington during the “Winning the AI Race” summit, references Executive Order 13960 of December 3, 2020, which promoted trustworthy AI, and forms part of a suite that includes “Winning the Race: America’s AI Action Plan” and two other orders on AI infrastructure and exports.

The order defines biases linked to diversity, equity and inclusion as including the suppression or distortion of factual information about race or sex, the manipulation of racial or sexual representation in outputs, and the incorporation of concepts such as critical race theory and systemic racism.

Core Principles and Definitions

Large language models are defined as generative AI systems trained on extensive datasets to produce natural-language text. The truth-seeking principle requires models to prioritize historical accuracy, scientific inquiry and objectivity, and to acknowledge uncertainty where information is incomplete or contradictory. Ideological neutrality requires models to act as nonpartisan instruments that do not encode partisan judgments in outputs unless the user prompts them or the judgments are disclosed transparently.

Implementation Framework

The Office of Management and Budget must, within 120 days and in consultation with procurement and technology officials, issue guidance addressing practical compliance challenges, vendor disclosures such as system prompts or model evaluations that avoid exposing sensitive technical data, and flexibility to preserve innovation. The guidance is to specify factors for agency heads to consider in deciding whether to apply the principles to agency-developed large language models and to AI models other than large language models. Exceptions may be made, as appropriate, for large language models used in national security systems. Contracts must incorporate compliance clauses, and a vendor that violates the principles and fails to remedy the breach within a cure period is liable for the costs of terminating the contract.

Policy Background and Evolution

The order follows Trump's January 2025 inauguration and the administration's stated priority of removing ideological content from government operations, arriving amid broader U.S. efforts to shape AI regulation as the technology grows. It aligns with wider policy shifts, including the revocation of elements of prior AI frameworks viewed as regulatory barriers.

The accompanying 25-page AI Action Plan organizes recommended policy actions into three pillars: accelerating innovation, building infrastructure, and leading in international diplomacy and security. Under innovation, it calls for revising the National Institute of Standards and Technology's AI Risk Management Framework to eliminate references to misinformation, diversity, equity and inclusion, and climate change, and for updating federal procurement to ensure objective AI.

The policy responds to reported instances of AI bias, such as altering the race or sex of historical figures for the sake of diversity, refusing to generate celebratory images of certain racial groups, or prioritizing correct pronoun use even in extreme hypothetical scenarios.

Motivations for the Policy

Officials emphasize protecting AI integrity in public-sector applications, viewing diversity, equity and inclusion requirements as distorting outputs and endangering factual reliability in government functions. The order supports wider anti-DEI measures across federal operations and focuses on procurement rather than direct regulation of the private sector.

Sectoral Implications

The policy may influence substantial federal AI spending by requiring compliance terms in contracts, potentially prompting developers to modify training data or system prompts, which could raise costs or restrict functionality for models to remain eligible for government use. It could also constrain innovation by subjecting models to ideological evaluation, affecting how they handle topics such as race or gender in areas like criminal justice or hiring.

Internationally, the order may signal shifting U.S. priorities, potentially affecting investment and collaboration if American AI policy is perceived as politicized.

Stakeholder Responses

Administration-aligned commentators commend the order for addressing perceived liberal bias in the technology sector and for ensuring fact-based AI in publicly funded systems. Critics, including the Brookings Institution, argue it politicizes AI by imposing vague standards that could violate free speech and embed alternative biases. The Brennan Center highlights risks to reliability and civil liberties, warning that contract threats could pressure firms to align with the administration's ideology. The Electronic Frontier Foundation warns the order could enforce censorship, chill innovation and harm public access to information.

Legal analysts say outcomes will depend largely on the forthcoming OMB guidance and note that First Amendment challenges are possible.

Outlook for AI Governance

The initiative may hasten deregulation, with the action plan proposing fewer regulatory hurdles and funding restrictions for states with stringent AI rules, while promoting U.S. technology exports and infrastructure expansion. Some bipartisan agreement on AI risk and transparency persists, but differing interpretations of bias may shape future legislation. Globally, the order positions the U.S. against rivals such as China, favoring rapid advancement over extensive safeguards, though detractors caution that trust in American AI could erode.

