[C STORY VOL.48 / Reading the World Through Copyright Laws] Obligations of AI Business Operators under the AI Act and Countermeasures under Copyright Law

  • Posted: December 4, 2025
  • Views: 541


By Professor Moon Sunyoung, School of Law, University of Seoul 








I. Introduction 

Korea became the second country in the world, after the EU, to enact an AI law when the National Assembly passed the Framework Act on the Development of AI and Establishment of Trust (hereinafter, the AI Act) on December 6, 2024. The EU AI Act, the first of its kind, entered into force on August 1, 2024 and is being phased in from February 2025, whereas Korea's AI Act takes effect on January 22, 2026; Korea thus trails the EU only in terms of the effective date. The rapid development of AI technology has drawn widespread attention, particularly with the emergence of generative AI (GAI) and DeepSeek late last year, and there are mixed expectations and concerns about the role the newly enacted AI Act will play.


The AI Act aims both to promote the development of the AI industry and to establish a foundation of trust in AI. It therefore should neither tilt toward consumer and user protection to the point of over-regulation nor, conversely, privilege industry convenience. Alongside provisions that foster innovation and industry growth, the Act imposes a series of obligations on AI business operators, centered on high-impact AI. This article focuses on the latter, examining those obligations in more detail below.






II. Key Provisions in Relation to Obligations and Sanctions for Business Operators under the AI Act

The recently enacted AI Act defines “artificial intelligence” as the electronic implementation of human cognitive abilities such as learning, reasoning, perception, judgment, and language comprehension, and it imposes certain obligations on AI business operators.1)


The AI Act imposes specific regulations on high-impact AI and GAI. High-impact AI refers to AI systems that pose a significant risk to human life, physical safety, and fundamental rights, and are utilized in certain areas, such as education, employment, finance, public services, healthcare, and the judiciary, as defined in Article 2, Paragraph 4 of the AI Act. GAI refers to an AI system that generates text, sound, images, video, and other outputs by imitating the structure and characteristics of input data.


AI business operators that provide products or services using high-impact AI or GAI shall notify users in advance that the product or service is based on AI. (Article 31, Paragraph 1 of the Act) For GAI products and services, operators shall notify users in advance that the output is generated by GAI. (Paragraph 2) AI business operators shall clearly notify or indicate to users when virtual sounds, images, or videos that may be difficult to distinguish from authentic ones are AI-generated; where such output is part of an artistic or creative work, the notification or indication may be presented in a manner that does not hinder its exhibition or enjoyment. (Paragraph 3)


In addition, AI business operators must implement certain safety measures for AI systems whose cumulative computational load used for training exceeds the threshold prescribed by Presidential Decree. (Article 32 of the Act) AI business operators providing AI-based products or services must review in advance whether the AI in question qualifies as high-impact AI. (Article 33 of the Act) AI business operators providing high-impact AI, or products and services based on it, must implement the measures listed in Article 34, Paragraph 1 of the Act, as prescribed by Presidential Decree, to ensure the safety and reliability of high-impact AI. (Article 34 of the Act)2)


AI business operators providing products or services using high-impact AI shall assess in advance its impact on fundamental human rights, and when national institutions intend to use high-impact AI products or services, they must give priority to those that have undergone such an impact assessment. (Article 35 of the Act) The Act also applies to acts committed abroad that affect the domestic market or users in the Republic of Korea. (Article 4 of the Act)


The AI Act sets forth sanctions for AI business operators that violate these duties. If the Minister of Science and ICT discovers or suspects a violation of Article 31, Paragraph 2 or 3, Article 32, Paragraph 1 or 2, or Article 34, Paragraph 1, or if a report or complaint regarding such a violation is filed, the Minister may request the operator to submit relevant materials or direct a subordinate public official to conduct an investigation. If the investigation confirms a violation, the Minister may order the operator to stop or correct it. (Article 40 of the Act) A person who fails to provide the notification required under Article 31, Paragraph 1, or who fails to comply with a suspension or corrective order under Article 40, Paragraph 3, is subject to an administrative fine of up to KRW 30 million. (Article 43)




III. Review of the Obligations and Sanctions of Business Operators under the AI Act

The AI Act imposes transparency obligations and safety and reliability requirements on high-impact AI and GAI businesses and backs them with administrative fines, but it stops short of punitive fines or criminal penalties for violations, reflecting a preference for avoiding excessive regulation. By comparison, the EU AI Act classifies AI by risk level, stipulates detailed obligations for businesses accordingly, and provides for fines of up to EUR 35 million or 7% of global annual revenue for violations. Korea's sanctions are relatively modest.


Although the Act requires businesses to review whether their AI is subject to regulation as high-impact AI, it provides no specific criteria for that determination, creating uncertainty in the AI field.


Moreover, many matters are delegated to lower-level legislation such as Presidential Decrees and public notices: the obligation to give users prior notification of high-impact AI or GAI under the transparency requirements; the methods of, and exceptions to, labeling GAI outputs; the cumulative computational load threshold that triggers the safety requirements for AI systems; the specific implementation methods for safety and reliability measures; the content and methods of impact assessments for high-impact AI; and the obligation of AI businesses without a domestic address or place of business to designate a domestic agent.


It is also regrettable that, unlike the EU AI Act, the Act contains no specific provisions on exclusions or partial exemptions. The Ministry of Science and ICT has launched a “Lower-Level Law Maintenance Team” to develop follow-up legislation detailing the specific content and implementation methods of these statutory obligations, so the forthcoming decrees, public notices, and guidelines will need to be monitored.



IV. Implications for Responses to the Copyright Act

The role of the Copyright Act is receiving more attention than ever amid the rapid development of AI technology. Not only is the copyrightability of AI-generated outputs being debated, but questions also persist about copyright infringement arising from the use of training data and the production of AI outputs.


Against this backdrop, discussions have been ongoing about the use of copyrighted works in AI training, the scope of permitted use, and the establishment of a compensation scheme. The newly enacted AI Act, however, contains no specific copyright provisions and leaves these issues to the Copyright Act. What it does stipulate are the obligations of high-impact AI and GAI business operators to notify users in advance and to label GAI outputs. These measures aim to prevent AI-driven infringements of user rights, such as fake news and deepfakes, while also reinforcing safety and reliability in a way that can help deter copyright infringement.


Several areas will require further consideration when the content and methods of the notification and labeling obligations under the AI Act are specified, and determining how far these statutory requirements can be concretized in subordinate statutes and public notices remains a challenging task.


Moreover, Article 31, Paragraph 3 of the Act stipulates: “AI business operators shall clearly notify or indicate to users when virtual sounds, images, or videos are AI-generated, and may be difficult to distinguish from authentic ones. If the outcome is part of an artistic or creative work, the notification or indication may be presented in a manner that does not hinder its exhibition or enjoyment.” It will be necessary to examine how guidelines can specify the scope of artistic or creative expression covered and the manner in which such indications are to be displayed.


Furthermore, from the perspective of rights holders, we agree on the necessity of the user notification and output labeling obligations described above as measures to prevent the unauthorized use of copyrighted works. However, because a text and data mining (TDM) exception of the kind adopted by the EU and Japan has not yet been incorporated into Korean law, this legislative imbalance is likely to make it difficult to establish specific labeling methods or exceptions through the AI Act alone.3) In sum, copyright issues surrounding AI have so far been addressed cautiously through guidelines,4) but the AI Act appears to prompt, for the first time, a swift review of the legal framework for copyright, at least in terms of the scope of AI's use of copyrighted works and the establishment of a compensation system.

1) AI business operators comprise AI development business operators, which develop and supply AI, and AI utilization business operators, which provide products or services based on AI supplied by such developers. (Article 2, Paragraph 7 of the AI Act)
2) These safety and reliability measures include: 1. establishment and operation of a risk management plan; 2. establishment and implementation, to the extent technically feasible, of a plan to explain AI-generated outputs, including the key criteria used to derive them and an overview of the training data used in developing and operating the AI; 3. establishment and operation of user protection measures; 4. human management and supervision of high-impact AI; 5. preparation and retention of documents demonstrating the measures taken to ensure AI safety and reliability; and 6. other matters prescribed by Presidential Decree, as deliberated and resolved by the Committee, to ensure the safety and reliability of high-impact AI. (Article 34, Paragraph 1, Subparagraphs 1 to 6 of the Act)
3) A TDM exception permits the use of copyrighted works for AI training under specified conditions. The EU expressly recognizes TDM for non-commercial research purposes as a limitation on copyright, while Japan broadly permits reproduction for purposes of information analysis.

4) With numerous lawsuits pending worldwide over copyright infringement involving AI, the Korean government published the world's first copyright guide on GAI in 2023, and the U.S. Copyright Office and Japan's Agency for Cultural Affairs have also recently published several reports on AI and copyright.

