Achieving balance: a global approach to mitigating AI-related risks

It is no secret that modern technology is pushing ethical boundaries that existing legal frameworks were never designed to address, creating legal and regulatory minefields. Regulators have responded in a variety of ways, and global tensions rise when agreement cannot be found.

These regulatory differences were on full display at the recent AI Action Summit in Paris. The summit’s final statement focused on inclusiveness and openness in AI development. Notably, it mentioned safety and trustworthiness only in passing, without highlighting the specific risks associated with AI, such as security threats. Although the statement was drafted by 60 countries, the signatures of the UK and the US were conspicuously absent, showing how little consensus currently exists among key nations.

Tackling AI risks globally

Each country regulates AI development and deployment differently. Most positions, however, fall somewhere between two extremes: that of the United States and that of the European Union (EU).

The American way: innovate first, regulate later

The United States has no federal-level act regulating AI, relying instead on market-based solutions and voluntary guidelines. There is, however, some key AI-related legislation, including the National AI Initiative Act, which aims to coordinate federal AI research, the Federal Aviation Administration Reauthorization Act, and the National Institute of Standards and Technology (NIST) voluntary risk management framework.

The US regulatory landscape remains in flux and has undergone a major political shift. For example, in October 2023, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, establishing standards for critical infrastructure, enhancing AI-driven cybersecurity and regulating federally funded AI projects. In January 2025, however, President Trump revoked that order, marking a pivot away from regulation and toward prioritizing innovation.

The American approach has its critics, who note that its “scattered nature” results in a complex web of rules, a “lack of enforceable standards” and “gaps in privacy protection”. The overall position is also constantly shifting: in 2024, state lawmakers introduced nearly 700 AI-related bills and held multiple hearings on governance as well as AI and intellectual property. While the US government has clearly not shied away from regulation, it is evidently looking for ways to enforce it without compromising innovation.

The EU approach: prevention first

The EU has chosen a different path. In August 2024, the European Parliament and the Council introduced the Artificial Intelligence Act (AI Act), widely regarded as the most comprehensive AI regulation to date. Taking a risk-based approach, the Act imposes strict rules on highly sensitive AI systems, such as those used in healthcare and critical infrastructure. Low-risk applications face only minimal oversight, while certain applications, such as government-run social scoring systems, are banned outright.

The AI Act mandates compliance not only for AI solutions offered within the EU’s borders, but also for any provider, distributor or user operating on the EU market, even if the system was developed elsewhere. This is likely to pose a challenge for US and other non-EU providers of integrated products.

Criticism of the EU approach includes its alleged failure to set a gold standard for human rights, along with a lack of clarity and excessive complexity. Critics also take aim at the EU’s highly stringent technical requirements, which come at a time when the bloc is seeking to boost its competitiveness.

Finding a regulatory middle ground

Meanwhile, the UK has adopted a “light-touch” framework that sits between the EU and US approaches and is based on core values such as safety, fairness and transparency. Existing regulators, such as the Information Commissioner’s Office, are empowered to enforce these principles within their respective remits.

The UK government has released an AI Opportunities Action Plan, outlining measures to invest in AI foundations, drive AI adoption across the economy and promote “homegrown” AI systems. In November 2023, the UK also founded the AI Safety Institute (AISI), which grew out of the Frontier AI Taskforce. AISI was created to evaluate the safety of advanced AI models, working with leading developers to achieve this through safety testing.

Criticism of the UK’s approach, however, includes its limited enforcement capabilities and a lack of coordination across sector-specific legislation. Critics also point to the absence of a central regulatory authority.

Like the UK, other major countries have found their own place between the US and EU positions. Canada, for example, has proposed a risk-based approach in its Artificial Intelligence and Data Act (AIDA), which aims to balance innovation, safety and ethical considerations. Japan has taken a “human-centric” approach to AI, issuing guidelines that promote trustworthy development. In China, meanwhile, AI regulation is tightly controlled by the state, with recent laws requiring generative AI models to undergo security assessments and align with socialist values. Similar to the UK, Australia has released an AI ethics framework and is considering updating its privacy laws to address the emerging challenges posed by AI innovation.

How can international cooperation be established?

As AI technology continues to advance, the differences between regulatory approaches are becoming ever more apparent. Divergent national positions on data privacy, copyright protection and other issues make global consensus on key AI-related risks harder to reach. Against this backdrop, international cooperation is essential to establishing baseline standards that address critical risks without stifling innovation.

The answer may lie with global organizations such as the Organisation for Economic Co-operation and Development (OECD) and the United Nations, among several others currently working to establish international standards and ethical norms for AI. The path forward will not be easy, as it requires everyone in the industry to find common ground. If innovation is moving at the speed of light, then now is the time for discussion and agreement.
