
South Korea is the first to regulate artificial intelligence: what really changes

South Korea is the first country in the world to implement a comprehensive law on generative AI, imposing watermarks on content produced with artificial intelligence. However, important questions remain about the practical application of the rules and jurisdictional boundaries

South Korea has given the go-ahead for the entry into force of the AI Basic Act, considered the first comprehensive law in the world regulating generative artificial intelligence. Starting from January 22, 2026, companies that produce and distribute AI-generated content must apply markings (watermarks) indicating that texts, images, videos or audio were produced by algorithms and not by people. The declared objective is to increase transparency and trust in AI systems, countering phenomena such as deepfakes and disinformation.

How the watermarks required by the standard work

According to the text of the law, whose official name is the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, generated content must be clearly and visibly labeled if it could be confused with real material. For content that is obviously artificial – for example animations or webtoons – the regulation allows invisible digital watermarks detectable by software, but for deepfakes and realistic synthetic media the watermark must be perceivable by users.
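The "invisible watermark detectable by software" mentioned above can be illustrated with a classic steganographic technique: hiding a marker in the least significant bit (LSB) of each pixel, which changes values imperceptibly but remains machine-readable. This is a minimal sketch for intuition only; the law does not prescribe any particular algorithm, and the `AI-GEN` marker string is a made-up example.

```python
# Minimal LSB-watermark sketch, assuming a grayscale image represented
# as a flat list of 0-255 pixel values. Illustrative only: the AI Basic
# Act does not mandate this (or any specific) watermarking scheme.

WATERMARK = "AI-GEN"  # hypothetical marker string

def embed(pixels, marker=WATERMARK):
    """Hide the marker's bits in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in marker.encode() for i in range(8)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the LSB
    return out

def extract(pixels, length=len(WATERMARK)):
    """Read back `length` bytes from the pixels' least significant bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode(errors="replace")

image = [128] * 100       # toy 10x10 "image", uniform gray
marked = embed(image)
print(extract(marked))    # prints: AI-GEN
```

Each pixel changes by at most 1 out of 255 levels, which is why such a mark is invisible to the eye yet trivially recoverable by detection software.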

The law applies only to commercial operators that provide products or services based on generative AI, not to individual users who use AI tools for personal or hobby purposes. Furthermore, there is a grace period of about one year during which the authorities will not impose sanctions, giving companies time to adapt to the new rules.

Another element that emerges from the rules is the distinction between generative AI and "high-impact AI", i.e. systems that can significantly affect sensitive sectors such as health, transport or finance. For these cases, not only watermarking is required, but also greater transparency about risks and the adoption of risk-management and monitoring plans.

Although the principle of transparency is at the heart of the legislation, practical difficulties in applying it remain. Many online tools, legal and otherwise, allow watermarks to be removed or obscured in just a few clicks, raising doubts about whether the rules will actually be effective in preventing the misuse of synthetic content.
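The fragility mentioned above is easy to demonstrate for bit-level invisible watermarks: any lossy re-encoding that rounds pixel values (roughly what JPEG compression does) wipes out hidden low-order bits, no dedicated removal tool needed. The snippet below is a hedged illustration under that LSB assumption, with made-up pixel values; real watermark schemes and removal tools vary.

```python
# Illustrative only: shows how simple requantisation (a stand-in for
# lossy re-encoding) destroys information hidden in pixels' low bits.

def quantize(pixels, step=4):
    """Simulate lossy re-encoding by snapping each value to the nearest multiple of step."""
    return [round(p / step) * step for p in pixels]

# Hypothetical watermarked pixels whose LSBs carry the hidden bits 0,1,0,1,1,0
watermarked = [128, 129, 128, 129, 129, 128]
print([p & 1 for p in watermarked])            # prints: [0, 1, 0, 1, 1, 0]
print([p & 1 for p in quantize(watermarked)])  # prints: [0, 0, 0, 0, 0, 0]
```

After requantisation every value snaps back to 128, so the hidden bit pattern is gone even though the image looks unchanged, which is one reason robust provenance schemes pair watermarks with signed metadata rather than relying on pixel bits alone.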

The limits of enforcement and the issue of foreign content

A further issue concerns extraterritoriality: the law requires foreign companies operating in the South Korean market to appoint a local representative in order to comply with its obligations, but this provision applies only above certain revenue or user-count thresholds. This leaves out many tools and platforms that generate content without carrying out direct activities in the country, making it difficult for the authorities to police content produced and disseminated from abroad.

A difficult balance between regulation and competitiveness

As for sanctions, the regulatory framework provides administrative fines of up to approximately 30 million won (about 20,000 euros) for those who fail to comply with the transparency obligations, but these can only be fully applied once the transition period provided by the law has ended.

South Korea's Basic Act sits in an evolving international context: the European Union is phasing in its AI Act in stages and other jurisdictions are debating similar labeling requirements for synthetic content, but Korea stands out for having a comprehensive framework already in full force.

It remains to be seen how technical tools and enforcement rules will evolve in the coming months and years, and whether other countries will follow the South Korean example by adopting similar regulations to respond to the challenges posed by the rapid spread of generative artificial intelligence.