Resolution No.57: People should be put at the centre of AI governance
The Law on Artificial Intelligence (AI) adopted by the 15th National Assembly at its 10th session not only establishes a legal framework for the research, development and application of AI in Vietnam, but also clearly reflects the country’s choice of a proactive and responsible approach to technology governance.
Built on the principle of placing people at the centre, regarding ethics as the foundation and managing AI based on risk levels, the law seeks to strike a balance between promoting innovation and mitigating potential negative impacts on society. With mechanisms such as risk classification, labelling of AI-generated content, protection of children and vulnerable groups, and the integration of ethical requirements from the design stage, it is expected to become a key instrument for safeguarding human rights, strengthening digital trust and shaping the sustainable development of AI in Vietnam. This move also aims to realise the Politburo’s Resolution No.57 on breakthroughs in science and technology development, innovation, and digital transformation.
A step forward in AI development, application and governance
Assessing the role of AI, Minister of Science and Technology Nguyen Manh Hung emphasised that it is an “intellectual infrastructure.” More than an applied technology, AI is increasingly becoming a form of national infrastructure, comparable to electricity, telecommunications or the Internet. Whoever masters AI will gain a significant advantage in production and business, healthcare, education, national governance and even defence and security. Vietnam, he stressed, must develop its own AI infrastructure and is moving swiftly to build a national AI supercomputing centre and open AI data platforms.
Hung also stressed the importance of popularising AI, likening AI literacy to past nationwide campaigns to eradicate illiteracy and promote foreign language learning. In the future, every Vietnamese citizen could have a personal AI assistant, significantly enhancing collective intelligence even as population growth stabilises. Capabilities once available only to senior officials at high cost are now becoming accessible to ordinary citizens.
According to the minister, the adoption of the AI Law represents the first institutional step in the field. As a concise, framework-based law, it focuses on principles and governance while allowing flexibility in implementation. Rather than regulating AI models, which evolve rapidly as a result of innovation, the law governs AI applications, usage behaviours and associated risks, in line with international practice and without constraining creativity.
Tran Van Son, Deputy Director of the National Institute of Software and Digital Content Industry under the Ministry of Science and Technology, said the law marks a major advance in building a comprehensive legal corridor for AI. As AI continues to develop rapidly and exert wide-ranging socio-economic impacts, the law is designed not only to encourage innovation but also to proactively address ethical, social and legal risks.
A human-centred approach runs throughout the law, which defines AI as a tool that serves people and remains under human control. This approach rests on three core pillars: institutionalising AI ethics, applying risk-based management, and establishing mechanisms to protect citizens’ legitimate rights and interests.
Protecting citizens, children and vulnerable groups
Protecting citizens, especially children and vulnerable groups, is a central priority of the AI Law. It requires mandatory labelling of AI-generated content in both human-readable and machine-readable formats, enhancing transparency and enabling early detection of misinformation, fraud and manipulation.
The law also strictly prohibits the use of AI to exploit the vulnerabilities of children or the elderly. AI products intended for children must meet higher safety standards, as defined in the forthcoming National AI Ethics Framework. To support implementation, the Ministry of Science and Technology is preparing technical infrastructure, unified labelling standards and tools to identify harmful AI content, helping schools and families better protect children in the digital environment.