Welcome. Today, we're doing an in-depth analysis of US AI policy under President Trump. Over the next, uh, 15, 20 minutes or so, we'll be unpacking the key shifts, the forces driving them, and crucially for you, the business leader, whether you're based here in the US or operating internationally, what this all means. Exactly. Our aim here is to give you a clear, concise rundown of these policy changes. We're drawing on official documents, industry analysis, news reports. Really trying to synthesize it, because for anyone steering a business today, understanding the nuances of this administration's AI approaches, well, it's vital for making strategic calls for investment and just navigating where tech is heading, both domestically and globally. Right. And what we're seeing is a pretty significant change in direction, wouldn't you say? Yeah. The Trump administration has, um, very deliberately pivoted. They're calling it a pro-innovation, nationalistic approach to AI. It definitely marks a contrast with the previous administration's focus. Right. We saw the, uh, the pretty swift revocation of Biden's Executive Order 14110, the one centered on safe, trustworthy AI, and almost immediately, its replacement, Trump's EO 14179, explicitly titled Removing Barriers to American Leadership in Artificial Intelligence. That title says a lot. It really does. And the core goal stated within it is to sustain and enhance America's global AI dominance. Hmm. That single phrase, global AI dominance, kind of encapsulates the whole ambition. It sets the tone for everything else we're seeing. And it signals a shift, right, from maybe a more collaborative global view towards something more competitive, more nationalistic. Absolutely. That's a key takeaway for businesses operating across borders. This could have real ramifications for international standards, for collaborations that companies rely on. Okay, so let's unpack this America First AI doctrine a bit more.
What are the, uh, the core principles driving it? You're seeing a clear focus on American technological supremacy, first and foremost, then using AI to boost economic dynamism. There's a very strong emphasis on deregulation, cutting red tape to accelerate what the private sector is doing, and fostering these large-scale public-private partnerships, especially around critical infrastructure. EO 14179 is the key document here, explicitly tasking agencies to remove those bureaucratic hurdles. And the motivation behind this, this assertive stance? Competition seems like a huge factor. Oh, undoubtedly. The competition with China is, well, you could argue it's the central driver. Achieving and maintaining that global AI dominance is seen as absolutely essential in that context. Things like China's DeepSeek AI model emerging, they get framed as these urgent wake-up calls. Right, fueling the need for the US to speed up, maybe even implement stricter tech controls down the line. Exactly. That's part of the discussion. We've also heard this phrase, ensuring AI systems are free from ideological bias. What's the thinking there? Yeah, that's an interesting element. The administration's focus here seems different from, say, traditional algorithmic fairness concerns about protected characteristics. It often aligns more with critiques of what they might see as left-wing ideas or perceived censorship by big tech platforms. So less focus on DEI initiatives within AI development? It seems that way. That perspective was actually cited as one reason for revoking the Biden EO. It suggests a potential deprioritization of government-led diversity, equity, and inclusion efforts, specifically within the AI R&D space. This dominance language, though, it feels like it could, uh, ruffle some feathers internationally. What are the risks for businesses operating globally? That's a really significant point for any multinational company. Mm-hmm.
This assertive posture, combined with talk of maybe expanding export controls- Mm-hmm. ... it definitely risks creating friction, even with allies. If you restrict technology access too broadly, even to friendly nations, you could undermine the very cooperation needed for things like resilient global supply chains or setting international tech standards. Standards that businesses rely on for, you know, smooth operations everywhere. Precisely. And you've seen comments criticizing, say, European regulatory approaches. That just highlights the potential for misalignment. It's interesting, though, because the rhetoric is often deregulation, hands-off, but then you see these huge government interventions. That's a crucial contradiction, or perhaps tension, for businesses to track. You have the push for deregulation side by side with very targeted interventions. Mm-hmm. Look at the Stargate Project or the DOE initiative to lease federal land for data centers. Even the talk of using executive power to fast-track infrastructure. So it's deregulation in some areas, but very active government shaping in others. Exactly. Substantial government involvement aimed directly at building strategic AI capacity, potentially favoring certain players or national champions. Okay, let's dive into some of the specific policy directives flowing from this. We know EO 14179 is the foundation. What else should business leaders have on their radar? Well, EO 14179 kicked off the process for a national AI Action Plan. That's a big one to watch. Expected maybe mid to late 2025, led by key advisors like Kratsios, Sacks, and Waltz. It also ordered the review and potential removal of policies from the old Biden EO, and told the Office of Management and Budget, OMB, to revise its guidance. And we've seen that OMB guidance come out. Yes, two key memos dropped in April 2025, M-25-21 and M-25-22. Okay, tell us about M-25-21 first. What's the gist? M-25-21 is titled Accelerating Federal Use of AI.
It guides how federal agencies should use AI themselves. It lays out three main priorities. First, innovation: push agencies to use US-developed AI, remove barriers. Second, governance: agencies need chief AI officers, AI governance boards, compliance plans, specific policies for generative AI, inventories of their AI use. And there's an inter-agency CAIO council. And third, public trust: agencies need to ensure AI is trustworthy, secure, accountable. This includes minimum risk management for what they call high-impact AI. High-impact AI. We should probably define that later. What about the other memo, M-25-22? How does that affect businesses, especially those wanting government contracts? Right. M-25-22 is all about driving efficient acquisition of AI, so how the government buys AI. Key directives here are prioritize American AI providers, set clear privacy requirements, avoid vendor lock-in, mandate ongoing testing and monitoring, and importantly, require disclosure if a planned AI use qualifies as high-impact. There's also mention of a GSA/OMB repository for AI procurement tools. So yeah, crucial reading for gov tech vendors. And there's also an AI Education EO mentioned. What's the aim there? Yes, the Advancing Artificial Intelligence Education for American Youth EO. The goal is pretty straightforward: get AI literacy embedded in the US education system, right from K-12 up. It sets up an AI Education Task Force, calls for things like a Presidential AI Challenge, encourages public/private partnerships for creating educational resources, and tells existing federal agencies to leverage their resources. Like directing the Department of Education or Labor? Exactly. DoED is told to prioritize AI in teacher training grants. The National Science Foundation, NSF, needs to research AI in education. Department of Labor should promote AI apprenticeships and use WIOA funds.
It's all part of building that future AI talent pipeline, which feeds back into the larger national strategy. Which brings us back to that AI Action Plan being developed. What do we know about it so far? What might it contain? Well, they put out a Request for Information, an RFI, and got apparently over 10,000 comments, so huge interest from businesses, researchers, everyone. Content-wise, we expect it to cover the waterfront: hardware, data centers, energy needs, AI models themselves, data privacy, safety, national security, IP rights, government procurement rules, export controls, the works. Any hints from industry what they were pushing for in those comments? Reports suggest some key themes, for example, weakening copyright restrictions for AI training data; stronger export controls, particularly targeting China; and a push for federal rules to preempt varying state regulations. The final plan, expected mid-2025, should give a much clearer roadmap for the administration's priorities. That's when businesses can really start adjusting strategies based on concrete plans. You mentioned high-impact AI earlier, defined in the OMB memo. Why is that definition important? It's crucial for compliance and risk. Right. OMB Memo M-25-21 defines it as AI where the output could significantly affect someone's rights or safety. Now, the core requirements for managing this AI, things like impact assessments, monitoring, human oversight, they largely carry over from previous guidance. But... Some observers point out that the specific examples given in this new guidance might be narrower. Areas like election integrity or deepfakes, which were highlighted before, seem less prominent in the examples this time. How agency chief AI officers interpret significant effect will be key, and there's always the possibility of waivers. So, potentially a narrower application, meaning less stringent oversight in some sensitive areas? That's the concern some have raised.
It could lead to reduced oversight, potentially increasing risks in those areas. It's something businesses using or developing AI in sensitive fields need to watch closely. Okay, let's shift gears slightly. Who are the key people actually driving this policy within the White House? Who should business leaders be aware of? Well, at the very top, President Trump sets the narrative: AI dominance, deregulation, the focus on China competition. He's involved in major announcements, like Stargate, meets with tech CEOs. Then you have key advisers. David Sacks is a major figure, the AI and crypto czar. Mm-hmm. He co-leads the AI Action Plan development; chairs PCAST, that's the President's Council of Advisors on Science and Technology; and acts as a key liaison to Silicon Valley. His focus is very much on removing barriers, US leadership, free speech concerns, and infrastructure. And Michael Kratsios? Michael Kratsios, back as the director of OSTP, the Office of Science and Technology Policy. He's crucial for interagency coordination; advises the president and OMB on R&D strategy; co-leads the AI Action Plan with Sacks and Waltz; and chairs that new AI Education Task Force. His mandate is really about ensuring US tech supremacy, accelerating R&D, and aligning AI safety work with, uh, what they define as national values. And there's been talk of Elon Musk potentially having a role. Yes, his potential appointment as a senior adviser is definitely something people are watching. His influence could be significant given his background and relationship with the president. Uh, and beyond the White House, which agencies are most involved on the ground? Where might businesses interface? OMB, as we discussed, is critical for the government-wide rules. Commerce, through NIST and the US AI Safety Institute, handles standards, testing, risk management frameworks, and potentially export controls. There's talk of an AI diffusion rule and using the entity list.
DoD and DARPA are obviously huge players in defense AI R&D, things like CJADC2, the Chief Digital and AI Office, CDAO, AI Forward, the AI Cyber Challenge- Mm. ... AIxCC, they have a massive AI budget. What about research funding, NSF, NIH? NSF funds a lot of the basic and applied research, the National AI Research Institutes, the NAIRR Pilot, though its funding is uncertain. Also relevant is the CREATE AI Act. There are concerns about potential cuts to DEI-focused funding there. NIH supports AI and biomedical research projects like AIM-AHEAD. They're focused on safe and responsible AI in health, but face budget pressures. Both NIH and DOE are involved in a new task force called TRAINS, focused on trustworthy AI in science. Yeah. DOE is also key for that land leasing initiative for data centers, and does significant AI R&D for science, energy, and national security through groups like NNSA and the FASST Initiative. And State, Education, Labor? State handles international AI diplomacy and has its own enterprise AI strategy. And Education and Labor, as we mentioned, are implementing the AI Education EO: teacher training grants and apprenticeships using WIOA funds. It seems like a lot of activity, but you mentioned power might be quite concentrated in the White House. That's a common observation. With key advisers driving the agenda from the White House, co-leading major initiatives like the Action Plan, there's potential for friction with the traditional processes and expertise within the established agencies. Let's talk about that White House-Silicon Valley connection. It seems particularly strong now. It does. There's very direct engagement. President Trump meeting with NVIDIA's CEO Jensen Huang to talk AI policy... chips, power needs, manufacturing, even potential tariffs. David Sacks hosting the Digital Assets Summit, previous meetings with OpenAI's Sam Altman. And the most prominent example, of course, is the Stargate Project. Right, Stargate. Let's break that down.
Who's involved? It's a major consortium. OpenAI is reportedly the operational lead. SoftBank, led by Masayoshi Son, is the financial lead. Oracle, with Larry Ellison, is a key tech partner. MGX, a U.A.E. fund, is also involved. Initial tech providers mentioned include Arm, Microsoft and NVIDIA. And Microsoft seems to have secured Azure cloud exclusivity, which is significant. And the scale is staggering. The reported target is up to $500 billion by 2029. An initial $100 billion commitment is often cited, though details on the equity breakdown and how SoftBank plans to source its large loan portion remain somewhat unclear. There's definitely some skepticism about the funding feasibility at that scale. But the government's role isn't direct funding. It's facilitation. Correct. The project was announced at the White House. President Trump pledged to expedite the necessary infrastructure, things like energy access and permitting, potentially using emergency declarations. It's about smoothing the path. And the goals beyond just building massive compute power? Officially, it's about cementing U.S. AI leadership, bolstering national security, creating jobs, they claim, up to 100,000, advancing health care applications and generally re-industrializing parts of the economy. The first big data center is planned for Abilene, Texas, with other sites under evaluation. So this close relationship, Stargate being a prime example, what are the implications for the wider tech ecosystem, smaller players? That's the big question. This tight nexus could potentially lead to policies or infrastructure build-outs that disproportionately benefit the established tech giants involved. Some call it regulatory capture lite. Smaller startups, open source communities, academic researchers might find themselves at a disadvantage compared to these government-facilitated mega-projects. It could create a kind of two-tiered system. And Stargate itself feels like more than just a commercial venture.
Oh, absolutely. It's being framed as an instrument of industrial policy, even geopolitical strategy. It's about U.S. leadership versus China, about onshoring and controlling the means of AI production. Okay. Let's connect this to government spending. Where is the investment actually flowing to achieve this dominance? Well, the clear priority is AI infrastructure enablement. You have the Stargate facilitation, which is indirect support. Then the DOE land use initiative leasing federal land for data centers, ideally co-located with power generation. They've issued an RFI aiming for operations by late 2027. There's also a broader focus on streamlining permitting for energy infrastructure needed to power all this AI. And what about direct federal R&D funding? It continues, but there are uncertainties. We need to see how the final budgets shake out compared to, say, the Biden FY25 proposals, especially with potential spending caps. NSF has its requests out for the AI research institutes and NAIRR Pilot, but NAIRR's funding path is particularly unclear. The CREATE AI Act authorization is there, but appropriations are key. And we mentioned the potential impact of DEI-related grant reviews or terminations at NSF. DARPA actually saw a budget increase request for FY25, supporting initiatives like AI Forward and core AI research, plus the AI Cyber Challenge. And health, energy, defense? NIH has its requests for AIM-AHEAD and other AI health grants, but faces budget pressures. They're part of that TRAINS task force. DOE has its FY25 request covering AI for science, energy, national security and the FASST Initiative. And the DoD's overall FY25 R&D request includes significant funds earmarked for AI and autonomy, supporting things like CJADC2. Plus the education and workforce funding we touched on earlier. Right. The DoED grants, NSF research, DoL apprenticeships using WIOA funds. These are all tied into the National AI Initiative Act, NAIIA, of 2020 framework, theoretically.
So the strategy seems heavy on infrastructure enablement, maybe less certain on basic research or things like NAIRR. That seems to be the emphasis. There's a potential mismatch between the grand ambitions like Stargate's $500 billion target and the realities of year-to-year federal appropriations. Political decisions like the NSF grant reviews can also significantly impact specific funding streams. The focus seems to be on leveraging private capital for the big infrastructure push facilitated by government action. Okay. Looking at the whole picture, what are the potential enablers for AI progress under this approach? And what are the hindrances? On the enablers side, you could see a targeted innovation boost from deregulation in certain areas. The major infrastructure investments, if they materialize via Stargate and DOE land, could be significant. Streamlined federal procurement guided by OMB M-25-22 could speed adoption in government. And there's clearly strong political will from the top leveraging public-private partnerships. And the potential roadblocks or downsides? A major concern is the potential erosion of public trust if safety and ethical considerations are seen as secondary to speed and dominance. This could lead to backlash or reputational damage. Regulatory fragmentation is a risk if federal action doesn't preempt a patchwork of state laws. International isolation or misalignment with allies, as we discussed. Workforce and talent gaps could persist, especially if immigration policies become more restrictive or STEM diversity initiatives are cut. And smaller innovators or non-favored players might face bottlenecks. Plus, general policy instability always creates uncertainty. So maybe a trade-off. Short-term speed versus long-term sustainability. That seems to be the core tension, particularly in high-stakes sectors like health care or finance. The potential benefits of AI are huge, but so are the risks.
Without clear federal guardrails, reliance falls on state laws or industry standards. Even the Treasury Department acknowledged the need for regulatory clarity in finance AI, and groups like AdvaMed in the medtech space are calling for federal guidance. Right. So putting this all together, what are the key strategic considerations for a tech investor or business leader listening to this? First, opportunity identification. The AI infrastructure ecosystem, hardware, energy, data center construction, software, looks like a major focus. Defense AI contractors could benefit, US-based AI providers targeting government contracts- Mm-hmm. ... the AI education and training sector, maybe niche R&D areas aligned with priorities. And the risks to assess? Regulatory uncertainty is high, both federal and state fragmentation. Ethical, safety, and reputational risks if guardrails are weak. Geopolitical risks and supply chain vulnerabilities, especially concerning China, talent constraints, and the risk of backlash against this dominance focus internationally. So, strategically? Monitor policy implementation very closely- Yeah. ... especially the AI Action Plan. Prioritize investments that align with the infrastructure push. Assess your geopolitical exposure carefully. Scrutinize the AI governance practices of companies you invest in or partner with, and stay plugged into the policy dialogue. You mentioned David Sacks earlier, the AI czar effect. Political access seems important. It certainly appears so. Alignment with the White House's priorities and key figures seems beneficial in this environment. Ultimately, investors need to balance the potential for policy-driven, short-term acceleration against long-term sustainability and true global competitiveness. Differentiating between a temporary surge and fundamentally sound AI capabilities will be key. This has been incredibly insightful. To quickly summarize for everyone listening, the Trump administration's AI policy marks a real shift.
It's about innovation, yes, but framed strongly through national dominance. That means deregulation efforts alongside significant infrastructure investments, often through public-private partnerships. Key sectors like defense and education are getting specific focus. For business leaders in the US and abroad, really grasping these dynamics is critical for navigating what's coming. Spotting opportunities in areas like infrastructure or gov tech, and managing the risks, like regulatory shifts or international friction. Absolutely crucial. And as we wrap up, here's something to think about. How might this very nationalistic, deregulatory drive reshape the global AI ecosystem over the next few years? What long-term strategies should your business be considering now to adapt to potential policy shifts or changes in international collaboration? Definitely keep an eye out for that AI Action Plan when it lands, and continue monitoring regulatory moves at all levels. Thank you for joining us for this in-depth analysis.