In a world increasingly driven by algorithms, artificial intelligence (AI) has transcended being a hot topic in the tech world to become a key force driving the global economy and social governance, even changing the way we live our daily lives. In the face of such rapid technological progress, an important global question has emerged: how can the development of AI be overseen so that it does not harm humanity and remains consistent with human values and global interests?
This article analyzes the regulatory concepts and practices of three jurisdictions leading in the field of AI, namely the United States, the European Union, and China, and explores how their approaches shape the future development of the technology.
At the same time, the article attempts to address a key question: how can we build a regulatory mechanism that both adapts to the fast-changing characteristics of AI and meets the needs of global governance?
United States: Self-regulation can easily lead to monopoly
The United States has adopted industry-specific regulatory principles in three areas: privacy, cybersecurity, and consumer protection. U.S. regulatory law is formed from the bottom up: industry groups first propose draft legislation, which the legislature then repeatedly revises and refines. This system gives each industry considerable authority to shape the regulations that govern it.
The U.S. government's request for voluntary commitments from leading AI companies to manage AI risks reflects its support for this industry-specific approach. For example, Meta (formerly Facebook) has established an AI responsibility team and launched a "Generative AI Community Forum" to solicit public feedback on AI products in a transparent manner.
This reliance on industry self-regulation is supported by some experts, who argue that panels of industry practitioners have a deep understanding of their specific domains. By integrating AI experts into such panels, complex and detailed regulatory frameworks can be built for AI applications in each industry.
However, this approach also carries the risks of arbitrary self-regulation and of rule-making being controlled by a few dominant companies. Given the transformative impact and rapid uptake of AI, we should be wary of over-relying on "well-intentioned" practices and of allowing a handful of industries and companies to dominate, or even monopolize, rule-making.
The United States has also adopted an industry-specific approach to regulatory enforcement. As businesses increasingly shift to digital operations, for example, cybersecurity has become a key focus across industries. It is primarily the responsibility of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, while agencies such as the Federal Trade Commission and the Securities and Exchange Commission carry specific responsibilities in particular industries. The same is true of consumer protection: multiple agencies work across industries, with the Federal Trade Commission as the lead agency and the Consumer Financial Protection Bureau and the Food and Drug Administration playing important roles within their sectors.
EU: Overly strict regulation may stifle innovation
The EU's Artificial Intelligence Act inherits and develops the legislative framework of the General Data Protection Regulation (GDPR) and proposes a comprehensive AI regulatory system, covering everything from requirements for high-risk AI systems to the establishment of a European Artificial Intelligence Board. The Act places particular emphasis on user safety and fundamental rights, stipulating transparency standards for AI systems and strict post-market monitoring rules for AI providers. This reflects the EU's firm determination to cultivate a human-centered, ethically oriented AI ecosystem and to protect the public interest.
Based on the core concept of risk, the AI Act classifies AI products and imposes a different level of supervision on each category. The classification weighs the potential risks an AI product may pose and prescribes safeguards accordingly. For example, low-risk AI systems (such as spam filters or video game algorithms) may face only minimal regulation, preserving their innovation and usability, while high-risk applications (such as biometrics and critical infrastructure) are subject to more stringent requirements, including tight risk management and enhanced transparency toward users.
To implement the Act, the EU has opted for a central regulatory body, the Artificial Intelligence Board, responsible for elaborating the legal framework for AI, interpreting and enforcing the Act's provisions, and ensuring that high-risk AI systems come under unified supervision. In practice, however, the Act may face challenges similar to those of the GDPR, such as unintended consequences and complex rules that burden enterprises without significantly improving user trust or experience.
A risk-based regulatory approach may also oversimplify the complex reality of AI products and overlook the inherent uncertainty and diverse risk scenarios of AI systems. Recent research indicates that a large share of AI systems could end up classified as high risk, suggesting that this approach may impose undue regulatory burdens that hinder the development of beneficial technologies.
Given the rapid development and global deployment of AI, a single centralized regulator, however comprehensive its approach, may struggle to keep up with the diversity and rapid change of AI issues. Decision-making bottlenecks and bureaucratic procedures can delay the timely responses that a dynamic AI environment demands, undermining the efficiency and adaptability of regulation. While the purpose behind establishing the AI Board deserves recognition, its effectiveness in dealing with real-world complexity remains to be proven.
China: Finding a balance between strong regulation and industry innovation
China's regulatory strategy in the field of artificial intelligence reflects state guidance and control. China regards AI not only as a dimension of technological development but also as an important part of the country's economic and social infrastructure, consistent with its management of traditional public resources such as energy and electricity. Its main goal is to promote the development and application of AI while ensuring safety and order, and to avoid excessive influence or monopoly by the private sector.
Recently introduced AI regulations reflect China's determination in this regard. Consistent with the principles of the Cybersecurity Law, they extend regulatory responsibilities originally aimed at Internet service providers and social media platforms to AI service providers, who must operate under the guidance of regulators and report detailed records of their operations and maintenance to the relevant agencies. The speed with which these regulations were developed and implemented after the launch of ChatGPT demonstrates Chinese regulators' determination to keep pace with the rapid development of AI.
This state-led regulatory model not only helps ensure that AI development aligns with the country's overall development strategy and planning; it is also especially relevant for developing countries that must be cautious about the rapid spread of AI technology and its potential impact.
At the same time, compared with the public resources China has traditionally regulated, such as land, minerals, and electricity, the dynamic and fast-evolving nature of AI demands a regulatory framework that is flexible, frequently refreshed in its expertise, and backed by substantial computing resources. Faced with this challenge, China is working to strike a balance: protecting the public interest through strong regulatory mechanisms on the one hand, while remaining flexible enough to stimulate innovation and allow the industry the experimentation and exploration it needs on the other.
Is it necessary to establish a global AI governance body?
Given that AI technology and its impact are not confined by national borders, the United Nations faces the important task of establishing a unified global AI regulatory mechanism that bridges cultural and policy differences.
Building a truly "fit-for-purpose" global AI regulatory system is a huge challenge. As the divergent regulatory strategies of the United States, the European Union, and China illustrate, the key lies in navigating complex socioeconomic and political differences, as well as the deep-rooted regulatory traditions embedded in each country's legal and administrative systems.
When evaluating AI regulation, countries need to consider how the technology is applied in their national contexts and weigh the corresponding trade-offs. Developed countries may focus more on risk control and privacy protection, while developing countries may be more inclined to use AI to drive economic growth and solve pressing social problems. To balance these different objectives, the United Nations needs to use its unique position to promote cross-cultural dialogue and reconcile diverse perspectives.
The open-source and generative nature of AI calls for a flexible, responsive governance mechanism that goes beyond the traditional systems used for high-risk technologies such as nuclear power. There are proposals to establish an international AI agency, analogous to the International Atomic Energy Agency (IAEA) in nuclear governance, to guide countries' AI strategies and fill policy gaps as the technology develops.
However, we believe the IAEA is effective because it oversees a relatively small number of nuclear entities and because nuclear armaments are limited to a few countries. Unlike nuclear risks, the open-source nature of AI and the significant influence of non-state actors may call for a more open and dynamic regulatory platform, closer in spirit to a collaborative platform such as GitHub than to a traditional centralized governance model built around periodic consultative meetings.
Given the breadth of AI applications across fields and the diverse risks they bring, such as mass unemployment, deepfakes, and autonomous weapons, it is important to ensure broad participation from different socioeconomic sectors, geographies, and ethnic groups so that decision-making is inclusive.
The development of artificial intelligence is still in its infancy, but without urgent and appropriate intervention, its rapid and unmanaged growth could produce a pandemic-like situation. Based on these analyses, we recommend establishing a global open-source public-goods governance mechanism that adheres to standards of security, human dignity, and fairness; ensures diverse representation across geopolitical, technological, and socioeconomic dimensions; respects national priorities and cultural contexts; and adapts to the generative and open-source characteristics of AI, laying a solid foundation for global AI regulation.
In summary, global AI governance is not only a technical challenge but also a policy and ethical one. As AI technology develops rapidly and spreads around the world, a comprehensive, diversified, and adaptable governance framework is needed to ensure that its development remains consistent with global interests and human values. The cases of the United States, the European Union, and China illustrate different governance strategies, each with its own strengths and limitations. These differences reveal the complexity of AI governance and provide a valuable reference for constructing a global governance framework.
Ultimately, our proposed open-source, public-interest-oriented global AI governance framework aims to combine these different approaches and insights into a system that can adapt to a rapidly changing technological environment while meeting global governance needs. The United Nations will play a crucial role in this process, not only as the shaper of the framework but also as a key force in advancing the Sustainable Development Goals.
About the Author:
Kong Ao is director of external cooperation relations at the United Nations and a senior project expert at the United Nations Technology Bank.
Wu Weih holds an MBA from Oxford University and a Master of Laws from UPenn.
Liu Shaoshan is the director of the Embodied Intelligence Center of the Shenzhen Institute of Artificial Intelligence and Robotics (AIRS) and a member of the Technical Policy Committee of the Association for Computing Machinery (ACM).