AI Services in Malaysia: Managing legal risks through contracts
- Jan 16

Introduction
AI is reshaping industries worldwide, from finance and healthcare to defence and law. Companies use real‑time feedback loops to release new versions at unprecedented speeds, often involving customers in testing products still under development.
As these systems become more autonomous, they introduce new risks into everyday life.
Legal background to AI regulation
Malaysia does not currently have specific legislation, regulation, or licensing for AI. However, AI products or services must be developed in compliance with general laws.
The Government of Malaysia has launched its National AI Roadmap 2021-2025 (Roadmap), which sets out the goals of AI development. It has also published the National Guidelines on AI Governance and Ethics (Guidelines).
A survey cited in the Roadmap on perceptions of whether the right regulations and ethical framework for AI are in place shows that the Government still has much to build.
The Guidelines take a broader-brush approach, setting out “7 AI principles of Responsible AI” for Malaysia: (1) fairness; (2) reliability, safety, and control; (3) privacy and security; (4) inclusiveness; (5) transparency; (6) accountability; and (7) the pursuit of human benefits and happiness. The principles are not binding but serve as a foundational framework for future legislation or industry self-regulation.
The National AI Office (NAIO) was launched in December 2024 under the guidance of MYDIGITAL Corporation, a government agency tasked with Malaysia’s digital transformation. Its main functions are to coordinate AI development, standardise governance, regulate that development, and centralise resources. So far, it has supported one report, Accelerating SME AI Adoption through Open Source, published in October 2025.
No significant cases have been decided in Malaysia on the liability of developers or service providers using AI agents.
When an AI agent makes a mistake, who’s responsible?
This is the central question of AI liability. The examples below are reported incidents and criticisms, not established findings of legal liability.
Only a few months ago, on 18 July 2025, Replit’s AI agent acted beyond its intended scope and programming despite explicit instructions:
It violated explicit instructions not to amend code without permission.
It deleted a live production database containing records for 1,206 executives and 1,196 companies.
It concealed the failure by creating fake user profiles, fabricating test results, and building a parallel algorithm to make the app appear functional.
Earlier media reports include:
Tesla’s Full Self Driving ignored stop signs and made illegal turns during testing.
GitHub Copilot generated insecure or plagiarised code snippets.
These incidents highlight the serious risks posed by AI failures. Consider the risks and consequences if similar incidents occurred with AI agents in healthcare or defence.
The issue
Clear contractual drafting is important to manage the legal risks of AI deployment. While contracts won’t eliminate regulatory obligations, they clarify responsibilities, allocate liability, and reduce uncertainty for service providers, their clients and users.
A major issue, often called the “black box” problem, is that machine learning, like human learning, is gradual: the model progressively builds and adjusts the connections of a neural network as it trains. The challenge is that, beyond a certain point in that expansion, neither users nor developers can explain how the deep learning model reaches its conclusions. Developers can no longer trace the self-generated internal inputs and instead focus on reducing the risk of negative outputs.
Contractual liability
The main risk for developers, companies deploying AI agents, and their clients is that the AI agent fails to perform as promised. The common sources of contractual liability are:
Failure to meet contract terms on capabilities, accuracy, and service levels
Breach of contractual terms can give rise to liability. Examples include marketing the AI tool as “99% accurate” when its accuracy is lower, or claiming independence or freedom from bias when the tool in fact requires human supervision. AI agents may also suffer outages due to infrastructure limitations, software bugs, poor data quality, security breaches and cyberattacks, or demand surges.
To deal with these risks, capabilities must be expressly defined in the service level agreement (SLA). Representations about accuracy, reliability, and fitness for purpose create independent contractual obligations, and failure to meet them may constitute a breach. The SLA should specify accuracy thresholds, latency requirements, remedies, and damage caps for downtime or performance failures. It must also address external threats and security risks, with associated costs reflected in the pricing structure.
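To make this concrete, the short sketch below is purely illustrative: the metric names, thresholds, and figures are invented for this example rather than drawn from any real SLA. It simply shows how accuracy, uptime, and latency thresholds of the kind an SLA specifies can be expressed in measurable terms and checked against a periodic report, so that a shortfall can trigger the agreed remedy subject to any damage cap.

    # Purely illustrative sketch in Python: hypothetical SLA thresholds checked
    # against measured metrics. All names and figures are invented.
    SLA_THRESHOLDS = {
        "accuracy_pct": 95.0,     # minimum promised accuracy
        "uptime_pct": 99.5,       # minimum monthly availability
        "p95_latency_ms": 800.0,  # maximum 95th-percentile response time
    }

    def sla_breaches(measured: dict) -> list:
        """Return the SLA metrics the measured period failed to meet."""
        breaches = []
        if measured["accuracy_pct"] < SLA_THRESHOLDS["accuracy_pct"]:
            breaches.append("accuracy")
        if measured["uptime_pct"] < SLA_THRESHOLDS["uptime_pct"]:
            breaches.append("uptime")
        if measured["p95_latency_ms"] > SLA_THRESHOLDS["p95_latency_ms"]:
            breaches.append("latency")
        return breaches

    # An uptime shortfall here would trigger the agreed remedy (for example a
    # service credit), subject to any negotiated damage cap.
    print(sla_breaches({"accuracy_pct": 96.2, "uptime_pct": 99.1, "p95_latency_ms": 640.0}))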
Lack of compliance with local laws and regulations
AI regulation evolves quickly, and contracts must anticipate change. While compliance clauses are standard, the cost of significant legal changes affecting sector-specific obligations, such as those in banking or insurance, should shift from the service provider to the client. At a minimum, the contract should provide renegotiation rights where legal changes fundamentally alter the nature of the contract.
Compliance provisions should include audit rights allowing clients to verify adherence, as well as indemnities and termination rights where regulatory changes make the contract illegal or impractical, or where the compliance clauses are not followed.
Material non-disclosure of a limitation of the AI agent
Material non-disclosure is a form of misrepresentation where there is an obligation to disclose, or where marketing materials create a false impression.
An AI tool can be treated as a “product” under consumer protection legislation. If its limitations are not disclosed, the product may be considered defective, and the failure to disclose may amount to misleading and deceptive conduct.
Breaching consumer protection legislation can lead to reputational damage, regulatory fines, and compensation orders in the consumer’s favour. Transparency about limitations is therefore essential to avoid liability and preserve trust.
Breach of intellectual property in the AI agent’s outputs
This can occur when an AI agent that is marketed as fully generative closely imitates or reproduces copyrighted works used in its training.
Even if the AI agent promises originality, a user who reproduces the output commercially can be liable for breach of intellectual property rights. Clients may face third-party claims from the authors, inventors, or their agents.
A good strategy is to use a “reasonable efforts” clause rather than guarantee absolute IP compliance. This balances client protection with the inherent uncertainties of generative AI outputs.
To manage the risk of contractual liability, developers and their clients should take the following steps, defining each with as much clarity as possible:
Disclose the capabilities and limitations of AI agents.
Demonstrate and test performance using proof-of-concept and pilot projects.
Provide transparent documentation with user manuals and technical specifications.
Define contractual warranties with reasonable efforts language.
Offer ongoing support with updates, retraining, maintenance, and monitoring.
AI service contracts share many features with the standard technology agreements that have been used in the industry for decades. However, they introduce particular risks relating to data use, compliance, and intellectual property that must be addressed.
AI contracts must address the risk that client data is reused for training, ensure clear ownership and protection of sensitive data provided by clients, and safeguard against vendor lock-in. Lock-in arises when clients become dependent on a provider’s proprietary AI models or platforms, making it costly or difficult to switch to alternative solutions.
In particular, clients should take the following steps:
Undertake due diligence through independent testing, audits, and references.
Define requirements and expected outcomes (accuracy, uptime, integration) clearly, with measurable key performance indicators.
Include adequate service-level contractual protections with warranties for compliance with laws and indemnities against third-party claims.
Impose audit and verification rights over the AI agents to confirm compliance.
Allocate risk through liability caps, insurance requirements, and responsibilities.
Define clear exit rights if the AI fails to meet critical capabilities.
Conclusion
There are no universally applicable contractual clauses for all AI agents. Each agreement must be tailored to the specific product or service. To reduce legal risks and potential liability, developers and clients should maintain a thorough understanding of the AI agent’s capabilities, limitations, and compliance obligations.


