Deepfakes in Singapore: Legal Recourse, Regulatory Powers, and Building Long-Term Corporate Resilience
4 December 2025 | Applicable law: Singapore | 9 minute read
This article is Part 2 of our two-part feature on what corporations and their leadership need to do in the event of a deepfake incident. Part 1 addressed the risk landscape and the immediate, coordinated response that is required. Part 2 addresses legal recourse, regulatory powers, and long-term governance measures that corporations and their leadership may wish to adopt.
Once the initial crisis has stabilized, with the content contained, evidence preserved, and communications managed, attention must turn to legal recourse and long-term resilience. This second part of our two-part feature (read the first part here) explores these measures and outlines how organizations can strengthen preparedness in the longer term.
Legal recourse under Singapore law in response to deepfake-related harm is multi-layered. It covers criminal sanctions, regulatory powers, and civil remedies.
Criminal sanctions under Singapore law
Deepfakes used to impersonate corporate officers or executives may attract criminal prosecution under the Penal Code 1871 ("PC") for cheating.
- Section 415 of the PC defines cheating as deceiving a person, whether by making a false representation or by other means, and thereby fraudulently or dishonestly inducing that person to deliver property, or intentionally inducing them to do or omit an act they would not otherwise do, where that act or omission causes or is likely to cause harm.
- Section 416 of the PC deals specifically with cheating by personation, which occurs when a person pretends to be some other person.
A convincing deepfake video call or audio message that deceives the recipient into delivering property will satisfy the elements of these cheating offences. In that case, the penalties for cheating under section 420 of the PC apply: imprisonment of up to ten years and a fine.
A deepfake could also potentially constitute criminal defamation under section 499 of the PC where the video or audio clip falsely attributes statements or conduct to an individual, harming his or her reputation. That would be the case where, for instance, the deepfake diminishes that individual's moral or intellectual character in the eyes of others. This is punishable under section 500 of the PC by imprisonment for up to two years, a fine, or both.
The Criminal Law (Miscellaneous Amendments) Bill 2025, introduced for First Reading in Parliament on 14 October 2025, proposes to make it an offence to produce or possess intimate images without consent, even if they are generated using AI. The Bill expands the definition of "intimate image" under section 377BE to cover AI-generated material, including wholly synthetic images that do not alter existing recordings. It also clarifies that computer-generated child abuse material is prohibited even without proof that an image of a real child was used in its production, and increases penalties for scam-related crimes, recognizing that digital deception, including deepfake impersonation, is now a core enabler of fraud.
Regulatory powers for rapid intervention
Singapore’s regulatory framework complements criminal sanctions with mechanisms enabling swift intervention against harmful content.
The Protection from Online Falsehoods and Manipulation Act 2019 (POFMA) empowers Ministers to issue the following directions:
- A Correction Direction: This requires the publisher of the falsehood to put up a correction notice. A correction notice explains that the material is false and provides the Government’s clarified position or verified facts, enabling the public to understand the truth without requiring the false content to be taken down.
- A Stop Communication Direction: This requires the publisher of the falsehood to take down the falsehood entirely and cease further dissemination.
- A Targeted Correction Direction: This requires the internet intermediary to communicate a correction notice to users who have accessed the falsehood.
- A Disabling Direction: This requires the internet intermediary or service provider to block access to the falsehood in Singapore.
These measures may apply where a deepfake threatens public interest, undermines market confidence, or damages investor relations. That would be the case where, for example, a viral falsehood implies insolvency or regulatory investigation of a listed company.
Online Criminal Harms Act 2023
This Act enables designated officers from government agencies to issue legally binding directions, including Stop Communication Directions and Account Restriction Orders, where online activity is suspected to be preparatory to, or in furtherance of, a scam or malicious cyber offence.
For corporate victims, this allows authorities to compel platforms to remove deepfake content, block accounts, and disrupt networks facilitating deception before the harm escalates.
Elections (Integrity of Online Advertising) (Amendment) Act 2024
Singapore's Parliament passed the Elections (Integrity of Online Advertising) (Amendment) Bill on 15 October 2024, prohibiting the publication, boosting, sharing, and reposting of deepfake content depicting election candidates. The law came into effect for Singapore's May 2025 general election.
The prohibition applies from the issuance of the writ of election until the close of polling. It targets digitally generated or manipulated content that realistically depicts a candidate saying or doing something they did not in fact say or do, including AI-generated deepfakes and non-AI editing methods such as Photoshop, dubbing, and splicing. Individuals who publish, share, or repost prohibited content may face fines of up to S$1,000 and/or imprisonment for up to 12 months; social media platforms that fail to comply may be fined up to S$1 million.
While this law applies specifically during election periods, it signals Singapore's broader legislative intent to combat synthetic media harms and may inform future corporate governance expectations for publicly listed companies and executives who engage in public discourse.
Forthcoming Online Safety Commission (2026)
Singapore's framework will be further strengthened by the Online Safety (Relief and Accountability) Bill 2025, introduced for First Reading on 15 October 2025 and passed by Parliament on 5 November 2025. It will establish an Online Safety Commission ("OSC") with broad powers to issue directions to take down content, restrict perpetrator accounts, and assist victims in identifying anonymous perpetrators for civil enforcement.
The OSC will cover 13 categories of online harm. The first tranche, covering online harassment, doxxing, online stalking, intimate image abuse, and image-based child abuse, is expected to come into operation in the first half of 2026, with the OSC fully operational by mid-2026. This reflects a proactive regulatory stance toward synthetic-media harms.
Civil remedies
Civil proceedings are often the most immediate way for companies and executives to remove harmful content and protect reputation.
- Protection from Harassment Act 2014 (POHA)
POHA can apply to deepfakes where the synthetic content causes harassment, alarm or distress. It provides a rapid mechanism to take down such content, prohibit further dissemination and clarify the truth.
Victims of image-based abuse, including synthetic content, may seek:
- Stop Publication Orders
- Correction Orders
- Monetary compensation
The Protection from Harassment Court offers expedited relief, with orders sometimes issued within 48 hours. Corporations have standing to apply for these orders when false statements or harassing content affect their business or reputation, even if no individual employee is targeted. This allows companies to act swiftly against misinformation or synthetic media that could damage brand integrity or public trust.
- Defamation
Defamation law primarily protects an individual’s or corporation’s reputation.
A deepfake that falsely attributes statements or conduct may constitute defamation at common law, for example where it causes the victim to be shunned, or exposes them to hatred, contempt, or ridicule. Remedies include injunctions restraining further publication of the falsified content, and damages to compensate for reputational harm.
- Copyright infringement
Under the Copyright Act 2021, copyright holders may pursue civil actions where a deepfake incorporates protected material such as photographs, footage, or performance recordings without permission.
- Other causes of action
Depending on the circumstances, where synthetic media is deployed to mislead, damage goodwill, or disclose confidential information, the following civil actions should also be considered:
- Malicious falsehood: This applies where false statements are published maliciously to third parties, and the publication directly and naturally leads to financial loss (e.g., lost sales, cancelled contracts). A deepfake alleging misconduct by a company, for example, may advance such a claim if customers or investors are misled.
- Passing off: This protects goodwill: the reputation of a business that attracts and retains customers. At its core, the rule is simple: no one may pass off their goods or services as those of another. For instance, a deepfake that falsely suggests endorsement or association, or that impersonates a corporate executive, may amount to a misrepresentation damaging the company’s goodwill.
- Breach of confidence: This arises where confidential material is used or disclosed without permission. A deepfake incorporating internal footage, internal communications, or confidential corporate imagery may constitute a breach of confidence.
Building long-term corporate resilience
Legal tools alone are insufficient. Organizations must invest in preparedness.
Regular training should raise awareness among employees and executives regarding suspicious communications. Organizations should explore digital watermarking or cryptographic signing for official corporate videos and statements.
Companies should establish a deepfake response plan integrating legal, communications, and cybersecurity functions. This should include:
- A cross-functional team with clear authority.
- Templates for urgent Court filings, regulatory notifications, and public statements.
- Updated contact lists for regulators, forensic specialists, and media platforms.
- Regular crisis simulations.
Cyber and media-liability insurance policies should be reviewed to ensure coverage for reputational harm and fraud losses arising from digital deception.
Strengthening trust in an era of synthetic media
A viral deepfake of a CEO can trigger panic almost instantly, with consequences that ripple across markets, stakeholders, and reputations well before it is proven to be fake. Singapore’s legal and regulatory measures are strong and continue to evolve, but preparedness, governance, and rapid response remain the most effective shields.
If you would like guidance on deepfake-ready governance, crisis-response planning, or navigating Singapore’s evolving AI and online-safety regulations, please reach out to our team of legal experts. We regularly help corporations, boards, and senior executives manage technology-enabled risks.
Get in touch:

Pardeep Khosa | Partner and head of litigation, Withers KhattarWong

Jonathan Kok | Partner, Withers KhattarWong
