(US, 21st) With the US, China, and the European Union reluctant to pursue international cooperation and instead building their own separate AI governance systems, the number of artificial intelligence (AI) regulations and policies worldwide has grown by about 30% over the past three years.
According to Nikkei Asia, data from the Organisation for Economic Co-operation and Development (OECD) as of Friday (September 19) showed that the total number of AI-related regulations, guidelines, and other policies worldwide has exceeded 1,300, about 30% more than in 2022, the year OpenAI launched the chatbot ChatGPT and accelerated the widespread adoption of generative AI.
Among major countries and regions, the European Union, which is gradually building a relatively strict AI regulatory regime, and the US and UK, which have rolled out global AI development blueprints, have each seen their policy counts rise by 10% to 20%.
Since generative AI is mainly led by the US, Europe, and China, it is clear that the development of AI regulations is driven by developed economies. About 30% of the world’s AI policies come from G7 and EU member countries. Each has launched measures such as subsidies for AI infrastructure and regulations requiring companies to manage the risks of AI abuse and respond to misinformation.
On the other hand, countries in the Global South are lagging behind developed countries in developing and using AI, making them even more dependent on developed economies for this technology.
Analysts point out that as competition between nations to improve AI capabilities intensifies, it is crucial for the international community to strengthen cooperation in establishing related safety and ethical codes, especially as AI technologies could be used to develop lethal weapons and carry out cyberattacks.
Stanford University in the US tracked 230 major AI incidents in 2024, an increase of about 60% from 2023. ChatGPT has recently even been linked to cases of user suicide.
According to the International Association of Privacy Professionals, only 39% of companies globally have established AI governance committees, indicating that self-regulation among businesses is still uncommon.
Some leading AI companies, such as OpenAI, have called for governments worldwide to cooperate in setting regulations and even suggested the establishment of an international regulatory body. However, the current focus among nations seems to have shifted to competition rather than cooperation.