INTRODUCTION
On December 26, 2024, South Korea passed the AI Framework Act.[1] This makes South Korea the second jurisdiction, after the European Union, to enact comprehensive rules governing how Artificial Intelligence ("AI") should be developed and used ethically.[2]
The law reflects South Korea's intent to foster the growth of AI while keeping it safe and trustworthy. Its main objective is to strike a balance between overseeing AI companies and supporting their growth. The law includes many provisions on how the government and various agencies (central and local) should invest in and grow the AI industry in Korea. At the same time, it contains articles similar to the EU AI Act and Canada's Bill C-27 (the "Artificial Intelligence and Data Act" or "AIDA"): definitions of high-impact AI systems and generative AI, a distinction between AI developers and deployers, and safety, transparency, and reporting requirements.
This article provides a comprehensive analysis of the AI Framework Act (the "Act"), with a particular focus on regulatory compliance requirements for businesses. The discussion examines key definitions, scope and exemptions, statutory obligations, and enforcement mechanisms under the Act.
DEFINITIONS
Article 2 of the Act contains important definitions for businesses and will have a significant impact on their operations. It should be noted that these translations are the author's and are not official versions.
“Artificial intelligence”: The electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgment, and language comprehension.
“Artificial Intelligence system”: An AI-based system that, with varying levels of autonomy and adaptability, infers outcomes such as predictions, recommendations, and decisions that affect real and virtual environments in order to achieve given goals.
“High-impact AI”: An AI system that has a significant impact on, or poses a risk to, human life, physical safety, or fundamental rights and is utilized in any of the following areas:
“Generative AI”: An AI system that generates various outputs such as text, sound, pictures, videos, and other forms by imitating the structure and characteristics of input data.
“AI industry”: Industry that develops, manufactures, produces, or distributes products utilizing AI or AI technology ("AI products") or provides services related to these products ("AI services").
“AI business operator”: A corporation, organization, individual, or national institution, etc., engaged in businesses related to the AI industry, falling under any of the following categories:
“User”: A person who receives AI products or AI services.
“Affected person”: A person whose life, physical safety, and fundamental rights are significantly affected by AI products or AI services.
“AI ethics”: Ethical standards that all members of society should abide by in all areas of AI development, provision, and use, based on human dignity, to create a safe and trustworthy AI society that can protect the rights and interests of citizens, as well as their lives and property.
SCOPE & EXEMPTIONS
South Korea’s newly passed law applies to AI systems within the jurisdiction of South Korea but, like the EU’s AI Act, also has extraterritorial effect. Activities conducted outside of South Korea will be in scope if they affect the domestic market or users.
Furthermore, the Act specifically states that it does not apply to AI developed or used solely for national defense or security purposes as specified by Presidential Decree (Article 4, paragraph 2). A similar exemption exists in the EU’s AI Act.
OBLIGATIONS
This section delves into the specific regulatory obligations established by the Act. They target key stakeholders involved in the development, provision, and use of AI systems in South Korea.
The Act states that any Affected person should be provided with clear and meaningful explanations of the key principles and criteria used in AI decision-making processes. To achieve this end, the Act includes a number of specific obligations.
AI business operators that intend to provide a product or service operated based on High-impact AI or Generative AI must notify Users of this fact in advance. A specific requirement applies to AI business operators providing Generative AI products or services: they must indicate that the outputs are generated by Generative AI.
Furthermore, if AI business operators use an AI system to provide virtual outputs, such as audio, images, or videos, that are difficult to distinguish from reality, they must notify users of this fact, or display it, in a manner that users can clearly understand. In response to growing concern over deepfake sex crimes in South Korea,[3] this regulatory approach mandates clear labeling of AI-generated content, a measure aimed at curbing the proliferation and impact of such harmful materials. However, the specific technical and practical implementation details, including the viability of AI watermarks[4] given past performance issues, require further clarification and are expected to be addressed in forthcoming regulations. If the AI-generated output is artistic or creative in nature, the notification is still required, but it may be provided in a way that does not impair the viewing or listening experience.
AI business operators that operate large AI systems (specifically, those whose training uses an amount of computing power above a threshold set by Presidential Decree) must implement an AI risk management program to ensure the safety of their AI systems:
They must also submit the results of implementing their AI risk management program to the Minister of Science and ICT (the “Minister”). The specific submission methods have yet to be announced, but the hope is that international standards such as ISO/IEC 23894:2023[5] or ISO/IEC 42001[6] will be considered. This article of the Act focuses on large-scale AI systems, similar in scope to those addressed by California's now-vetoed bill SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Before providing AI, or products or services using AI, AI business operators must assess in advance whether the AI falls under the category of High-impact AI. If needed, they may request confirmation from the Minister. Upon receiving such a request, the Minister shall confirm whether the submitted AI is classified as High-impact AI. The Minister may also establish and distribute guidelines on the criteria for and examples of High-impact AI, along with other matters relevant to the confirmation procedure.
AI business operators that provide High-impact AI, or products or services using High-impact AI, must implement the following measures to ensure its safety and reliability.
Again, the Minister may provide specific details of the measures to be taken, and AI business operators that have implemented equivalent measures pursuant to other laws shall be deemed to have implemented the measures above.
Following a risk-based approach, like those adopted by the EU and Canada, persons involved with high-impact AI systems are subject to more stringent compliance requirements in South Korea.
AI business operators shall endeavor (the original Korean text does not use a verb equivalent to the mandatory “shall”) to assess the impact on fundamental human rights before providing products or services using High-impact AI.
When governmental institutions intend to use products or services that incorporate High-impact AI, they shall prioritize products or services that have undergone an impact assessment. South Korea thus governs governments' use of High-impact AI within the Act (unlike Canada’s AIDA, which exempts governments). Other matters concerning the specific content and methods of the impact assessment will be determined by Presidential Decree.
As the Act has extra-jurisdictional effect, AI business operators without a local Korean address or place of business, whose number of users, sales, etc., meet the standards to be prescribed by the Presidential Decree, must designate a domestic agent in writing to represent them in the following matters and report this to the Minister:
The domestic agent must have a Korean address or place of business. A violation of the Act by a domestic agent concerning any of the matters enumerated above shall be imputed to the AI business operator that designated the agent. These requirements are similar to those under the EU’s AI Act.
PENALTIES
The Minister may order AI business operators to submit relevant data or have their public servants conduct necessary investigations in any of the following cases:
The Act empowers government officials to enter the offices or business sites of AI business operators to inspect books, documents, other data, or objects. If, based on the results of the investigation, the Minister determines that an AI business operator has violated the Act, the Minister may order the operator to take the measures necessary to stop or correct the violation.
As for administrative penalties, a person who falls under any of the following subparagraphs shall be subject to an administrative penalty not exceeding 30 million won:
CONCLUSION
South Korea's Act contributes significantly to the evolving global landscape of AI regulation. Its emphasis on balancing innovation with safety, transparency, and ethical considerations, informed by international frameworks such as the EU AI Act and adapted to address specific national challenges like deepfakes, positions South Korea as a key player in responsible AI development. With the Act coming into force on January 24, 2026, businesses operating in or targeting the South Korean market must prioritize compliance to navigate this new regulatory environment.
[1] 인공지능 발전과 신뢰 기반 조성 등에 관한 기본법안(대안) [Framework Act on the Development of Artificial Intelligence and the Establishment of a Foundation of Trust (Alternative Bill)] (과학기술정보방송통신위원장 [Chair of the Science, ICT, Broadcasting, and Communications Committee]), https://likms.assembly.go.kr/bill/billDetail.do?billId=PRC_R2V4H1W1T2K5M1O6E4Q9T0V7Q9S0U0 (last visited Jan. 12, 2025)
[2] James, M., South Korea Joins EU in Establishing Comprehensive AI Legislation, CCN (Dec. 26, 2024) https://www.ccn.com/news/technology/south-korea-ai-basic-act-joins-eu/
[3] The Korea Times, 387 apprehended for deepfake sex crimes this year, 80% teenagers (Sep. 26, 2024), https://www.koreatimes.co.kr/www/nation/2024/09/251_383167.html
[4] Kate K., Researchers Tested AI Watermarks—and Broke All of Them, Wired (Oct. 3, 2023), https://www.wired.com/story/artificial-intelligence-watermarking-issues/
[5] ISO/IEC 23894:2023, Information technology — Artificial intelligence — Guidance on risk management (2023), https://www.iso.org/standard/77304.html
[6] ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system (2023), https://www.iso.org/standard/81230.html
Sun Gyoo Kang, South Korea's AI Framework Act and Its Obligations for Businesses (Feb. 11, 2025), https://digital.law.nycu.edu.tw/blog-post/jkutah/.