As Donald Trump prepares to take office, America’s artificial intelligence (AI) policy stands at a crucial turning point, sparking debate about the future of AI development and regulation. His victory as the 47th president has already drawn congratulations from tech leaders like Jeff Bezos and ignited discussion about giving Elon Musk a significant role in shaping the country’s AI policies. Trump has vowed to reverse many of the AI executive orders put in place by the Biden administration, arguing that they "stifle AI innovation" and push "radical left-wing ideas" onto the development of the technology. Although the direct impact of a repeal is unclear, it signals a move toward deregulation.
The rhetoric surrounding the repeal is dividing opinion, turning AI policy into a partisan issue, even though voters across party lines support balanced regulation. Industry experts, however, worry that cutting back regulation may jeopardize AI safety at a time when it is needed most. “The U.S. must focus on a solid and efficient infrastructure that allows organizations evaluating AI systems and their deployment to publish credible and verifiable information about these systems, including their origins, training data, sensor data provenance, and any security incidents,” said Maher.
Musk, who supported Trump throughout his campaign, has been appointed to lead the newly created Department of Government Efficiency (DOGE).
“The U.S. should lead the world in advancing AI safely and securely. No one is better equipped to help the Trump Administration make America lead on AI than Elon Musk,” reads one public statement backing the appointment.
Musk has long warned about AI’s existential risks and expressed concern over AI becoming too powerful too quickly. But critics aren’t convinced. Some remain skeptical of Musk’s role in shaping AI policy, pointing to his decision to distance himself from OpenAI, a company he helped create, as well as his outspoken opposition to AI regulation. “There’s no way for Elon Musk to be unbiased, just like there’s no such thing as a truly ‘unregulated market.’ The absence of regulation is, in itself, a form of regulation—one that gives big corporations free rein to push their own agendas,” said Ekbia.
The U.S. faces mounting competition from China, which has invested heavily in AI development with the stated goal of surpassing America by 2030. The AI race has become a central issue in national security debates, with both Democrats and Republicans viewing AI as a crucial component of defense strategy. China’s lower labor costs and focus on model training give it an edge. Trump’s administration will likely continue tightening restrictions on Chinese access to advanced semiconductors, a strategy initiated during his first term and expanded under President Biden.
Trump’s stance on AI has been erratic, often praising its potential while warning of its dangers. He has said that the U.S. will require vast infrastructure upgrades for AI, particularly in energy and computing power, to maintain its lead over China. To secure America’s technological position, experts are calling for ambitious investment in AI infrastructure. A recent proposal from OpenAI suggests creating "National Transmission Highways" to modernize the power grid and meet AI’s immense energy demands, a plan that aligns with Trump’s vision of improving infrastructure. “AI can optimize energy distribution and usage through virtual power plants, which manage thousands of components involved in energy production, storage, and consumption,” added Maher. “However, this is only feasible with extensive automation and AI to support precise decision-making. AI safety and security will be crucial for such systems’ successful, large-scale deployment.”
Despite widespread bipartisan support, the U.S. AI Safety Institute (AISI)—an organization created after Biden’s executive order to spearhead government efforts on AI safety—could face an uncertain future under President-elect Trump’s newly created DOGE. The department is expected to target federal programs for cuts, and AISI may be on the chopping block. However, tech leaders, lawmakers, and advocates are rallying for a more nuanced approach, arguing that safeguarding AI technology is critical to both national security and ethical progress. Leading AI companies like OpenAI and Microsoft are also vocal about the need for strong safeguards to maintain the U.S.’s position as a global leader in AI.
“AI regulation has to start at the top,” Raj De Datta, CEO and co-founder of Bloomreach, shared with me. “A handful of companies dominate the AI market, and everyone else depends on their data centers or the models they produce. It’s vital that we start with these tech giants—ensuring they respect privacy, operate fairly, use diverse datasets, and uphold values we all agree on. That’s how we get the outcomes we want, the kind that benefit society.”
But others, like Ekbia, caution that the current profit-driven approach of big tech companies is unlikely to prioritize the safety of their systems, let alone ethical or environmental concerns. He pointed to recent controversies at OpenAI as evidence that many tech companies put profit—their so-called "bottom line"—ahead of legal and ethical considerations. “How can we expect companies like Google, which has moved operations to tax havens like Ireland or the Cayman Islands, to act responsibly when it comes to developing AI?” he asked.
The current situation underscores a key tension: balancing innovation with accountability. As Trump prepares to reshape U.S. AI policy, the industry faces a period of uncertainty. Whether his administration will accelerate or stifle innovation remains to be seen, but one thing is clear: the stakes for AI safety, security, and leadership are higher than ever before.