
United States and AI (de)regulation: an evolving framework

Updated: March 25, 2025

By Andrea Colucci and Gabriele Fusco


It has now been two years since AI, a tool once noticed only by industry “nerds,” permeated our lives. Many of us first encountered it at school, finding a faithful companion for homework or for the final secondary-school exam (the Italian maturità). Thanks to its development, it now helps us daily to organize notes, supply study material and, more generally, create the optimal conditions for arriving at an exam well prepared.

While its positive effects are clear, as an aid potentially applicable in every field of knowledge, we must not underestimate its intrinsic risks. To counter the example above: using it as a study tool, we risk resting on our laurels, forgetting grammar and writing, or no longer knowing how to perform basic operations. Furthermore, it will replace countless jobs, lowering costs for companies but bringing with it incalculable social costs (unemployment, underpaid workers, and so on).

Here, therefore, the law must intervene. And it must do so by laying the foundations for AI's optimal development, one congruent with the needs of society and of man (without forgetting that it must remain a tool at their service, and not vice versa). Governments and legislative bodies in many States have of course already worked in this area, but with many differences.

Starting from our own context: the European Union took prompt action to ensure that AI cannot undermine the fundamental rights of citizens, equipping itself with the AI Act, which regulates the phenomenon and guarantees particular protections for the rights and freedoms of European citizens.

This vision does not sit well with the prevailing ideology on the other side of the Atlantic: free enterprise, geared towards the development of new technologies with a business-centric vision. That outlook has produced a more flexible approach, aimed at promoting technological development without stringent constraints, unlike the European one. Beyond the centrality of the United States in the geopolitical landscape, American regulation deserves attention all the more because, as is now customary in the technology sector, the greatest investments and developments in AI come from there.

The regulatory landscape here is complex and constantly evolving, as we will see below with the arrival of the new President Trump. The system is characterized by competition between federal and state initiatives, the latter benefiting from less bureaucracy and a consequently streamlined legislative process (the Colorado AI Act is an example). As one might imagine, it is therefore a situation that lends itself to market self-regulation and soft-law tools.

There have, however, been attempts at far-reaching regulatory intervention. President Biden repeatedly stated that his administration considered it of the utmost urgency to govern the development and use of AI in a safe and responsible manner. This was already evident in 2022 with the issuance of the Blueprint for an AI Bill of Rights, which established principles meant to guide the development and deployment of automated systems based on AI technologies.

This first, light attempt was followed by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023), which addresses several sectors and rests on the key principle that exploiting AI for the common good, and realizing its many benefits, necessarily requires mitigating its intrinsic risks. It is addressed mainly to federal agencies and to developers (businesses), demonstrating, as already mentioned, a business-centric approach, unlike the European vision. New standards for AI security were then introduced under the Defense Production Act; in the American manufacturing landscape this approach requires no small effort, considering that state control over industrial assets is viewed with extreme disfavor by large companies. It was a genuine attempt at 360-degree legislation, intent on embracing and regulating every possible expression of AI.

If politics isn’t everything, everything is politics. Less than two months have passed since the newly elected President Trump's inauguration at the White House, and yet the AI regulatory landscape has already been turned upside down. Partly responsible for this is the political involvement of Elon Musk, founder of xAI and co-founder of OpenAI (the company behind ChatGPT), now head of the Department of Government Efficiency. This has clearly changed the US government’s view of AI, giving rise to an increasingly laissez-faire perspective, oriented towards minimal regulation of the phenomenon.

A first example is the immediate revocation of the 2023 executive order, which, as noted, imposed detailed safety requirements for AI systems. The move aims to eliminate policies that, according to the administration, were hindering American innovation in the sector, basing its development instead on free-market principles, free from ideological bias.

To fill the resulting void, Stargate was launched: announced on January 21, it is a collaboration between OpenAI, Oracle and SoftBank. The project involves an investment of up to 500 billion dollars over the next four years to develop AI infrastructure in the United States, including the construction of data centers and energy plants. The initiative is estimated to create more than 100,000 jobs.

In addition, David Sacks, former COO of PayPal and venture capitalist, has been appointed “czar” for AI and cryptocurrencies. Sacks will lead the President’s Council of Advisors on Science and Technology, which is tasked with developing a 180-day roadmap for AI to strengthen US global leadership in the field.

As the very explicit fact sheet published by the White House on January 23, 2025 makes clear, the revocation of Biden's executive order was deemed necessary: by imposing such stringent limitations on businesses, it would have compressed economic and market freedom in an intolerably dangerous manner.

If the first part of the fact sheet is framed negatively (dismantling what his predecessor had done), the second part develops in a positive direction. It proclaims, in capital letters, “Enhancing America's AI Leadership,” all with a view to conquering (or perhaps regaining) primacy and dominance within the global economic and geopolitical landscape.

It is then explicitly stated that AI is a priority for the US government. And, ensuring continuity, what Trump did in his first term is dusted off. The fact sheet recalls that the first Executive Order on AI dates back to 2019, under the Trump presidency, recognizing the importance of American leadership in the sector as a guarantee of economic and national security. Citing it as historic action, it notes that his administration doubled investment in AI, created the first National AI research institutes, and provided the world’s first AI regulatory guidance to govern private-sector development.

Moreover, in 2020 Trump took further executive action to create the first guidance for federal agency adoption of AI, so as to deliver services to the American people more effectively and foster public trust in this critical technology. All of this, according to the White House, is rooted in a culture of free speech and human flourishing.

Last but not least, the aforementioned Stargate. As many will have guessed, the name draws inspiration from the 1994 blockbuster of the same name, in which stargates were portals to other worlds.

The venture was launched with an initial investment of $100 billion, with the goal of reaching $500 billion by 2029. According to an article in The Information, SoftBank and OpenAI have each committed $19 billion to the initial financing of Stargate, each holding a 40% stake in the joint venture. Oracle and MGX, for their part, would each contribute $7 billion, while the remaining funds would be raised through debt financing and limited partners.

One of the primary goals is to develop AGI (Artificial General Intelligence), an artificial intelligence capable of cognitive processes comparable to those of humans. As the adjective “General” suggests, it has the potential to be applied to a virtually unlimited range of fields: global and social crises, climate change, medicine, and so on. The declared objectives are not limited to securing the oft-invoked US primacy in AI; they go as far as promising general collective well-being on a global level.

Internal disagreements, however, are not lacking. While President Trump has publicly shown enthusiasm for the opportunities the project seems to offer, his “shadow president,” Elon Musk, has not refrained from expressing skepticism.

We still know little about how the project will actually evolve. But, beyond the more practical and pressing debates, we must ask ourselves higher, more ethical questions, and in doing so ask towards which horizons the United States, the beacon of the West, is actually sailing.

Let us leave ourselves with a question: should we fear the deregulation under way, given the high intrinsic risk that AI drags with it, or should we rejoice at the victory of so-called freedom of enterprise? Assuming, that is, that we can speak of “freedom” here at all.
