Deepfake technology in Singapore: Immediate risks and the critical first response
27 November 2025 | Applicable law: Singapore | 7 minute read
This is Part 1 of our two-part feature on what corporations and their leadership need to do in the event of a deepfake incident. Part 1 will address the risk landscape and the immediate and coordinated response that is required. Part 2 will address legal recourse, regulatory powers, and long-term governance measures.
Deepfake technology has evolved into one of the most insidious applications of artificial intelligence. Fabricated audio-video content can now be virtually indistinguishable from authentic material, and the ability to replicate reality has, perhaps inevitably, been weaponized to distort it.
The misuse of this capability is not theoretical. It is operational and accelerating. Deepfake-related scams reportedly surged by over 1,700% in North America between 2022 and 2023, with estimated losses exceeding US$200 million in the first quarter of 2025 alone.
The ramifications for corporations and their leadership are manifold and potentially catastrophic. This first article focuses on the risk landscape and the immediate, coordinated response required when a deepfake incident unfolds.
The risks of deepfake technology
The convergence of high-fidelity synthetic media with frictionless digital communication has created a systemic vulnerability: deception can now outpace detection and verification.
Deepfakes can be used to weaponize identities and procure corporate actions to a company's detriment. This is no longer hypothetical. In 2024, a Hong Kong finance officer authorized the transfer of US$25 million following a video call with an individual impersonating the company's chief executive. In 2025, a Singapore-based finance director wired nearly US$500,000 under similar circumstances.
In financial centres like Singapore, where digital penetration exceeds 95%, a single falsified video depicting a CEO making inflammatory statements could spread within hours, destabilize markets, jeopardize investor relations, and erode public trust.
The question is stark: how does one respond?
A three-pronged framework
Although responses must be tailored to the specific facts, any effective playbook will consist of three broad phases:
- An immediate coordinated response prioritizing containment, evidence preservation, and communication.
- Legal recourse to secure takedowns and pursue civil or criminal remedies.
- Long-term resilience measures through education, governance, and technology.
Containment: acting in the first hours
Containment is the first priority when responding to a deepfake incident. When a deepfake goes viral, time is critical: the longer the delay before steps are taken to contain dissemination, the greater the likelihood of widespread reputational and financial damage.
Immediate steps should be taken to limit the spread of the content across digital platforms. This includes reporting the material through impersonation or misinformation channels provided by major social media and hosting services, and escalating to relevant authorities where necessary.
Preservation of evidence
Steps to preserve the evidence should be taken almost concurrently with containment attempts and certainly before any takedown. Properly preserved evidence forms the foundation for subsequent steps, including civil claims for damages and criminal investigations, ensuring that perpetrators can be identified and held accountable.
This means securing and preserving copies of the deepfake and related posts, together with timestamps, usernames, and engagement metrics. Where possible, metadata and platform logs should be captured to establish the chain of publication. Engaging forensic experts at an early stage is critical to verify manipulation, authenticate the synthetic nature of the content, and trace its origin.
Crisis communications: internal and external
Timely and targeted communication must follow shortly after containment and evidence preservation. A crisis team comprising legal, compliance, communications, and IT should be activated to ensure that all messaging to regulators, investors, and the public is consistent, factual, and measured.
If the deepfake threatens public order or market confidence, the matter should be immediately escalated to the Singapore Police Force, the Infocomm Media Development Authority, or the Cyber Security Agency. Where financial fraud is involved, lodging a police report without delay is essential.
Regulatory notifications should also be made. Notify the Monetary Authority of Singapore ("MAS") if the incident could affect market stability, the Accounting and Corporate Regulatory Authority for compliance matters, and the Singapore Exchange if the company is listed. Listed issuers should also consider issuing a clarification on SGXNet to prevent a false market.
Internally, employees should be informed promptly and provided clear talking points to ensure consistent responses. They should also be reminded not to share or comment on the fake content online.
A fast-shifting regulatory backdrop: Singapore’s new Artificial Intelligence ("AI") guidelines
Companies should also be aware that Singapore is strengthening its governance framework around AI and synthetic media. This includes:
- MAS's Guidelines on AI Risk Management, issued for public consultation on 12 November 2025, which will apply to AI systems including generative and synthetic-media-creating models. Public comments are due by 31 January 2026.
- Cyber Security Agency's Addendum on Agentic AI (2025), which updates its “Securing AI Systems” guidelines to address autonomous AI agents capable of acting without direct prompts.
Both highlight the growing expectation for companies to implement robust controls, auditability, and human-in-the-loop oversight. All of these are directly relevant when deepfakes threaten market confidence or organizational integrity.
These regulatory developments reinforce one point: deepfake readiness must sit within broader AI governance.
Staying ahead of the next wave
A viral deepfake can destabilize a company within hours. The first line of defense, namely containment, evidence preservation, and clear crisis communication, can significantly limit the fallout. Organizations that fail to prepare inevitably find themselves managing a crisis in real time, under intense public and market scrutiny.
Part two of this series will examine the legal and regulatory frameworks available in Singapore and the long-term resilience measures that companies should adopt.
If your organization would like assistance developing a deepfake response protocol, aligning governance with Singapore’s emerging AI and agentic-AI guidelines, or preparing crisis-communications frameworks, please reach out to our team of legal experts. We advise corporations and senior management across sectors on navigating these risks.
Get in touch:

Pardeep Khosa | Partner and head of litigation, Withers KhattarWong

Jonathan Kok | Partner, Withers KhattarWong
