1. Article 1 – Clause 2

Provision:
“This Law stipulates activities related to research, development, provision, deployment, and use of artificial intelligence systems; the rights and obligations of relevant organizations and individuals; and state management of artificial intelligence systems.”

Comment: The current draft covers the entire lifecycle of AI systems while also integrating ethics, liability, and compensation. This comprehensive approach is welcome, but it risks producing an overly broad law that attempts “full-spectrum regulation,” which sits uneasily with the fast-evolving nature of AI and the need for flexible legal updates. International legislative practice, including the EU and South Korean frameworks, is cautious about embedding detailed liability rules directly in the primary law, given their complexity and the need for long-term judicial refinement. Proposal: (i) set out principles, orientations, and scope at the framework-law level; (ii) empower the Government and specialized agencies to issue detailed implementing documents, run pilots, assess impacts, and refine the rules. In addition, clearly distinguish research, experimentation, and commercial deployment, and avoid prematurely applying legal obligations to academic research, limited trials, or personal use with no social impact, so as to encourage innovation and preserve an open research environment.

2. Article 1 – Clause 3

Provision:
“This Law does not apply to artificial intelligence systems developed, deployed, or used solely for defense, security, or intelligence purposes.”

Comment: Excluding AI systems used for defense and security is consistent with state management practice and international norms. To ensure transparency and consistency and to avoid legal gaps, it is recommended to clarify that the exemption applies only to functions or tasks strictly serving defense and security purposes under specialized law; components, features, data, or applications with potential civilian use must still comply with this Law. A function-based approach prevents actors from misusing the defense-security exemption to evade disclosure, registration, risk assessment, or compliance with ethical and technical standards in civilian applications. Proposed additions: (i) a periodic review mechanism for exempt systems to confirm their continued alignment with defense-security objectives; (ii) inter-ministerial coordination procedures (between the Ministry of Public Security, the Ministry of Defense, and the Ministry of Science & Technology) for dual-use AI systems, with clearly assigned management responsibilities and risk-prevention duties. Clarifying the exemption in this way balances national defense-security objectives with transparency and fairness in civilian AI governance.

3. Article 3 – Clauses 5, 6

Provision:
“5. A provider is an organization or individual that develops an AI system or a general-purpose AI model and markets it or deploys it under its own name or brand.
6. A deployer is an organization or individual that uses an AI system within its authority, except in cases where the AI system is used for personal, non-commercial purposes.”

Comment: The draft makes progress by distinguishing the key actors in the AI lifecycle. However, the definitions still overlap, especially between “provider” and “developer.” Having “provider” encompass “developer” may create legal ambiguity in the multi-layered supply models common in AI deployment (e.g., model developers, integrators, and deployers). Proposal: clarify roles and responsibilities based on the level of control and technical intervention at each stage of development, integration, deployment, and operation, and add criteria for identifying roles when systems are provided as a service, so that no liability gap arises. The definitions of “user” and “affected party” should also be expanded to include organizations and legal entities, since AI systems can significantly affect businesses (e.g., credit scoring, risk classification, fraud detection, recruitment filtering) and cause reputational, economic, or operational harm. This aligns with international practice (EU, Singapore, South Korea) and provides a clear legal basis for accountability.

4. Article 4 – Basic Principles

Provision:
All AI-related activities in Vietnam must adhere to the following basic principles:
1. Human-centered (Humanity): AI systems must serve and support humans, respect dignity, freedoms, privacy, and cultural values. AI shall not replace humans in critical decisions and must remain under human control and accountability.
2. Safety, fairness, transparency, accountability: AI systems must be developed and operated safely, reliably, securely, and fairly. Risky AI must ensure transparency, explainability, and clearly defined legal responsibility for harm.
3. National autonomy and international integration: Develop capabilities in technology, infrastructure, data, and AI models, while proactively cooperating internationally in line with global principles and practices.
4. Inclusive and sustainable development: AI development must align with sustainable socio-economic goals, ensure fairness and access for all, protect the environment, and preserve national cultural identity.
5. Balance and harmony in policy-making: harmonize laws with international norms, promote international cooperation while building national capacity, balance investment in large strategic projects and SMEs/startups, balance development of general-purpose and specialized AI models, safeguard research freedom while managing risk.
6. Risk-based management: Apply management proportional to AI system risk; mandatory regulation only for high-risk AI; prioritize innovation and voluntary standards for others.
7. Promote innovation: State creates favorable legal and policy environment to encourage AI research, startups, and commercialization.

Comment: The principles reflect the core values of responsible AI governance. However, some of them overlap with the chapters on AI ethics, which may blur subsequent guidance documents. Proposal: adopt a two-tier structure: (i) macro-level value orientations in the law itself (human-centeredness, rule of law, protection of human rights, international integration, technology neutrality); (ii) specific ethical and technical principles issued as national ethical standards. The “human-centered” principle should be spelled out as respect for dignity, human rights, and Vietnamese cultural and ethical values. This approach yields a sound legal structure, avoids overlapping content, enhances transparency, aligns with international best practice, and creates a consistent, flexible AI legal framework for Vietnam.

5. Article 5 – State Responsibilities

Provision:
The State has the following responsibilities:
1. Respect and protect creativity of organizations and individuals; create a clear, safe, reliable legal environment.
2. Issue and implement policies/programs to disseminate AI knowledge, train digital skills, support equitable benefits, especially for vulnerable groups.
3. Prioritize AI services, computing, and cloud platforms for public tasks; invest in infrastructure only when necessary for efficiency and national security.
4. Proactively invest and lead development of National AI Infrastructure, foundational AI models, and strategic core technology.
5. Engage internationally in AI, promote joint research, mutual recognition of AI standards and certification.
6. Facilitate foreign investment in AI while ensuring compliance with Vietnamese law and balancing sovereignty, national interests, technology transfer, competition, and cooperation.

Comment: The current policies strike a balance between risk governance and innovation, but they should integrate mandatory legal instruments with voluntary guidance and ethical and technical standards. Proposed additions: (i) policies addressing the social impacts of AI (workforce retraining, new employment, social equity); (ii) policies securing foundational research capacity, national computing infrastructure, and high-quality human resources; (iii) a mechanism for periodic legal impact assessment to allow flexible adjustment. These measures create a comprehensive legal environment, foster a sustainable AI ecosystem, and align with international experience (EU, Singapore, South Korea).

6. Article 10 – Prohibited AI Systems

Provision:
It is strictly prohibited to research, develop, provide, deploy, or use AI systems in the following cases:
1. Manipulating human perception or behavior to impair autonomy or decision-making, causing or potentially causing physical or mental harm.
2. Exploiting vulnerabilities of specific groups (age, disability, economic/social status) to influence behavior.
3. Social credit scoring by state authorities on a wide scale causing unfair treatment.
4. Real-time remote biometric identification in public places for law enforcement, except as specified by specialized law for serious crime prevention and authorized by competent authorities.
5. Creating or exploiting large-scale facial recognition databases through indiscriminate collection.
6. Using emotion recognition in workplaces and educational institutions unless permitted for medical or safety reasons under specialized law.
7. Producing or disseminating AI-generated false content causing serious harm to social order, safety, or national security.
8. Developing or using AI systems to undermine the Socialist Republic of Vietnam.
9. Other cases specified by the Government after consultation.

Comment: These prohibitions are necessary to protect privacy, data security, and human rights. To avoid obstructing essential state and business operations in economic and financial security and cybercrime prevention, it is proposed to introduce conditional deployment mechanisms for high-risk applications (e.g., fraud detection, anti-money laundering, cybersecurity, critical infrastructure protection). Controlled exemptions would require a pre-deployment impact assessment, approval by the competent specialized authority (e.g., the Ministry of Public Security or the State Bank), high data and cyber security standards, mandatory reporting with periodic monitoring, and independent supervision or technology audits. This balances privacy protection with national security, financial safety, and public order, and supports responsible AI governance without stifling innovation.

7. Article 14 – Obligations for High-Risk AI Systems

Provision:
Providers and deployers of high-risk AI systems must:
1. Establish and operate risk management systems throughout the lifecycle.
2. Implement data governance ensuring quality, provenance, representativeness; mitigate bias.
3. Maintain comprehensive technical documentation as per law or government regulation; maintain operation logs.
4. Set up human oversight, intervention, and control; ensure final human decision when legally required.
5. Ensure accuracy, safety, and cybersecurity consistent with declared purpose.
6. Maintain transparency to affected parties, including system nature, decision mechanisms, and human review rights.
7. Register system in National AI Database before deployment.
8. Monitor post-market, collect feedback, update and adjust to mitigate emerging risks.
9. Conduct AI impact assessment as per Article 45.
10. Report serious incidents via electronic portal; cooperate in investigation and remediation.
11. For high-risk systems based on general-purpose AI models, providers may reuse existing technical documentation and assessments, provided they verify and update them and retain final responsibility.

Comment: Proposal: introduce risk-tiered human intervention mechanisms: (i) post-decision review for low- and medium-risk systems; (ii) real-time monitoring and intervention for systems with significant impact; (iii) mandatory human approval in sensitive areas (healthcare, life safety, law enforcement, justice, critical administrative decisions). Intervention procedures should be standardized: assigning responsible personnel, issuing operating guidance, providing training, and maintaining logs. This protects human rights and social safety while preserving operational feasibility, legal clarity, and an innovation-friendly environment.

8. Article 18 – Obligations for General-Purpose AI Models

Provision:
1. Providers of general-purpose AI models (large and small language models) must: (a) maintain technical documentation (training, testing, evaluation, usage limits), (b) conduct safety testing, (c) comply with IP laws; data used for AI training under lawful access does not constitute infringement if technical safeguards are applied and rights respected, (d) provide guidance for downstream deployers/users, (e) establish serious incident reporting, (f) transparency and labeling per Article 16.
2. For high-risk general-purpose models: (a) standardized model evaluation, adversarial testing, (b) continuous risk assessment and mitigation, (c) incident reporting to authorities and affected parties, (d) ensure cybersecurity, (e) continuous updates to mitigate new risks.
3. Documentation may be used by downstream AI system providers but does not exclude verification and final responsibility.

Comment: To balance intellectual property rights and AI development, it is proposed to establish mechanisms allowing rights holders to refuse the use of their data for model training, expressed through standardized machine-readable technical signals consistent with international practice (e.g., the text-and-data-mining opt-out recognized under the EU AI Act), as illustrated in the sketch below. Traceability and transparency of training data should also be established: maintaining logs, disclosing sources (except trade secrets), and allowing regulatory access in case of violations. Finally, rapid dispute resolution between IP owners and AI model providers is proposed via simplified procedures, online mediation, or specialized technology arbitration. This protects rights while keeping the legal environment flexible and innovation-friendly.
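
For illustration only, the following minimal sketch (in Python) shows how a data-collection pipeline could honor such machine-readable opt-out signals. It assumes two signal channels: a robots.txt reservation, checked with the standard library's robotparser, and a "tdm-reservation" response header in the style of the W3C TDM Reservation Protocol draft. The crawler name "ai-training-bot" is hypothetical; this is a sketch of the concept, not a definitive implementation.

```python
"""Sketch: honoring machine-readable training-data opt-out signals."""
import urllib.request
import urllib.robotparser
from urllib.parse import urljoin, urlparse

CRAWLER_NAME = "ai-training-bot"  # hypothetical user-agent string


def robots_txt_allows(url: str) -> bool:
    """Return False if robots.txt reserves this URL against our crawler."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(urljoin(root, "/robots.txt"))
    try:
        parser.read()
    except OSError:
        return True  # no reachable robots.txt: treat as no reservation
    return parser.can_fetch(CRAWLER_NAME, url)


def tdm_header_allows(url: str) -> bool:
    """Return False if the response carries a TDM reservation header."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": CRAWLER_NAME})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            # "tdm-reservation: 1" signals that mining rights are reserved.
            return resp.headers.get("tdm-reservation", "0") != "1"
    except OSError:
        return False  # unreachable: err on the side of not collecting


def may_use_for_training(url: str) -> bool:
    """Collect a document only when neither signal channel opts out."""
    return robots_txt_allows(url) and tdm_header_allows(url)


if __name__ == "__main__":
    print(may_use_for_training("https://example.com/article"))
```

A regulator-facing standard would only need to fix the signal vocabulary; providers could then demonstrate compliance by logging each check alongside the collected source, which also serves the traceability obligation proposed above.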

9. Article 21 – Registration, National AI Database, and Information Disclosure

Provision:
1. Competent authorities establish and manage the National AI Database for management, supervision, and information provision.
2. High-risk AI systems must register before use; updates required for significant changes. Registration is electronic and linked to national public service portal and sectoral databases.
3. Foreign AI providers serving Vietnamese users must appoint a legal representative in Vietnam for notification, registration, inspection, and violation handling.
4. Basic information of high-risk AI systems is public while balancing state secrets, trade secrets, and personal data. Government specifies scope, level, and form of disclosure.

Comment: To ensure feasibility and international alignment and to avoid barriers to technology access, it is proposed to set criteria-based thresholds for the registration and representation obligations of foreign providers, e.g.: (i) user count above a threshold (e.g., 100,000 users in the last 12 months); (ii) revenue, transaction value, or scale above a threshold (e.g., 5 billion VND per year); (iii) classification of the system as high-risk; a sketch of how these criteria could combine follows below. Foreign providers should also be allowed to act through electronic representation rather than a physical legal presence, by reference to the EU Digital Services Act and the Singapore AI Governance Framework. In addition, simplified reporting or exemptions are proposed for non-commercial research, education, innovation, and open-source models. This balances regulation, user protection, and innovation, enabling faster access to advanced AI technologies and strengthening national competitiveness.
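
For illustration only, the following minimal sketch (in Python) shows how such criteria could compose, with the obligations attaching when any one criterion is met. The threshold values simply mirror the examples in the comment above and are placeholders, not settled figures; all field and function names are hypothetical.

```python
"""Sketch: criteria-based registration thresholds for foreign providers."""
from dataclasses import dataclass

USER_THRESHOLD = 100_000               # users in the last 12 months (example)
REVENUE_THRESHOLD_VND = 5_000_000_000  # VND per year (example)


@dataclass
class ForeignProvider:
    users_last_12_months: int
    annual_revenue_vnd: int
    is_high_risk: bool


def must_register(p: ForeignProvider) -> bool:
    """Obligations attach when ANY criterion is met (disjunctive test)."""
    return (p.users_last_12_months > USER_THRESHOLD
            or p.annual_revenue_vnd > REVENUE_THRESHOLD_VND
            or p.is_high_risk)


# Example: a small, low-risk service falls below every threshold.
print(must_register(ForeignProvider(8_000, 1_200_000_000, False)))  # False
```

The sketch makes one drafting point explicit: the criteria should be stated as alternatives, so that a high-risk system cannot escape registration merely by staying below the user or revenue thresholds.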