Technology: South Korea hopes to become AI leader with Asia’s first comprehensive legal regime

Stephen Mulrenan, IBA Asia Correspondent Monday 23 March 2026

In January, South Korea introduced Asia’s first comprehensive legal regime for AI. Seoul is hoping that the legislation, the AI Basic Act, will strengthen its claim to regional leadership and enable the country to become a global powerhouse in the sector.

Jurisdictions have so far taken very different approaches to AI regulation. The US prefers a light-touch regime, hoping to avoid stifling innovation, while the influential EU AI Act establishes tiered regulation based on risk classification.

China has issued numerous policy documents since 2017, clarifying the country’s national strategic goals and pathways for development in this area. But it’s yet to enact laws directly targeting AI.

Alex Roberts, a partner at Linklaters who leads the firm’s China TMT practice out of Shanghai, says that South Korea’s growth-orientated industrial policy and dedicated governance structures, such as its new National AI Committee, show the country is ‘making a coordinated effort to align regulatory credibility with economic ambition.’

South Korea’s AI Basic Act includes a set of core principles, among them ethical development, human oversight, responsible use and fundamental rights protection. Under the law, human oversight is mandatory for ‘high-impact’ AI – such as in healthcare – while generative AI must be clearly labelled when used.

The law could serve as a meaningful reference point for other jurisdictions in the region seeking to balance the dual goals of fostering AI industry growth and protecting citizens

Junho Lee
Partner, Bae, Kim & Lee

While this risk-based, tiered approach to regulation is reminiscent of the EU’s legislation, the latter is much more prescriptive, with detailed technical requirements and standards. In contrast, the Korean law relies on subordinate regulations to provide specifics. ‘The Korean Act seeks to promote AI, which is not the primary focus of the EU Act,’ says Sönke Lund, Chair of the IBA Working Group on AI and the legal profession. He says that the EU AI Act contains eight specific prohibited practices, such as social scoring. However, ‘there is no equivalent blanket prohibition in the Korean law,’ says Lund.

Only six of the 43 articles in South Korea’s legislation impose obligations on businesses. Junho Lee, a TMT partner at Bae, Kim & Lee in Seoul, says this light-touch approach to compliance is designed to support growth in the sector. ‘The vast majority of the remaining provisions deal with government support programmes to promote AI technology development,’ he says.

The EU Act includes mandatory conformity assessment procedures and CE marking for high-risk AI to indicate that products have met the bloc’s requirements, but there’s no equivalent in the Korean legislation. Penalties for non-compliance also differ greatly, with the cap for South Korea’s penalty regime set somewhat lower than its EU counterpart.

Despite the differences, South Korea’s Act has features reminiscent of the EU’s legislation, such as its extraterritorial reach and its requirement that in-scope foreign companies appoint a local representative. Domestically, industry has raised concerns that the extraterritorial reach of the Korean law creates a competitive imbalance: all local entities are covered by the legislation, but only foreign companies meeting certain thresholds must comply.

While the Korean law was created after extensive consultation, it has also faced criticism regarding the vagueness of some of the language, with many businesses in the sector frustrated that key details remain unsettled.

Markus Beham, Co-Chair of the IBA Human Rights Law Committee, says that simply striving towards a pioneering role in regulation as an end in itself is counterproductive. ‘While it appears essential to place a regulatory framework around AI, particularly safeguards to protect from unintended consequences or violation of fundamental rights, regulation and innovation can only correlate if the needs of developers are sufficiently addressed,’ he says.

The publication of a draft Enforcement Decree at the end of 2025 reduced some uncertainty for organisations seeking to deploy AI systems in the Korean market. But Roberts says businesses still face practical ambiguity, particularly because several operational details will only be finalised after the ongoing public consultation process ends and through forthcoming guidance issued by the Ministry of Science and ICT.

As they await the publication of these standards, ‘companies may default to a more conservative deployment of AI systems or over-compliance during the initial phase of implementation, even with the one-year grace period promoted in the Enforcement Decree,’ he says.

The new legislation arrives amid growing unease about artificially created media and automated decision-making. The situation is particularly sensitive in South Korea, as 53 per cent of individuals appearing in deepfake pornography are of Korean nationality, according to research by US identity protection company Security Hero.

The first AI-related bill was submitted to the Korean parliament in 2020 but stalled, in part because critics felt it prioritised industry interests over the protection of citizens. The new Act takes meaningful steps to address such harms, including by introducing requirements for operators of ‘high-impact AI’ to provide explanations of outcomes and to implement human oversight. ‘The faster publication of the subordinate legislation on which much of the Act’s practical effect depends, and greater specificity around impact assessments for fundamental rights, are steps that would lead to more protection for affected individuals,’ says Roberts.

Civil society groups maintain that the new legislation provides limited protection for people harmed by AI systems. Lee says that the area causing the most confusion is the scope of entities covered by the Korean law, as only ‘AI businesses’ are subject to the Act, while users aren’t included. Such businesses include companies that develop and provide AI, as well as entities using the technology designed by others to offer products or services.

Beham, who’s Chair of Public Law, International Law and European Law at the European University Viadrina in Frankfurt, says the Korean legislation focuses much more strongly on creating a legal landscape that will attract and retain AI developers than its EU counterpart. This, he says, ‘raises questions of prioritisation of fundamental rights protection, particularly since the legislation appears more oriented towards creating governance structures as opposed to seeking implementation of substantive rules. Any form of AI regulation should also include some form of redress in case individual rights are violated.’

Despite its shortcomings, Lee believes Korea’s AI Basic Act has successfully integrated strategy, promotion and regulation into a single law. ‘If this law operates effectively, it could serve as a meaningful reference point for other jurisdictions in the region that are seeking to balance the dual goals of fostering AI industry growth and protecting citizens,’ he says.

Header image: heerim studio - stock.adobe.com