Cracking the code: Legal hurdles for integrating AI in Singapore

26 March 2024 | Applicable law: Singapore | 3 minute read

What is next for AI in healthcare and life sciences? With generative and predictive AI playing a bigger role across the healthcare value chain, from diagnosis and population health to enterprise solutions for the delivery of care, what are the implications of AI technology for companies?

In a panel discussion hosted by Amazon Web Services (AWS), Jonathan Kok, partner and co-lead of the Withers Tech Asia team, shared his predictions from the legal and IP perspectives. Moderated by Dr Terence Tan, head of healthcare and life sciences startups at AWS, the multi-faceted panel discussion featured three other distinguished speakers: Gavin Teo, general partner at Altara Ventures; Eric Dulaurans, healthcare digital growth leader at GE Healthcare; and Martin Nielsen, co-founder of Riverr.

The panellists agreed that while AI is an empowering tool that will shape the future of healthcare and life sciences, it needs to be deployed safely and responsibly. Jonathan shared four key takeaways:

Mitigating risks in AI deployment

The effectiveness of AI technology hinges on the quality of the data sets used to train the AI tool. The solutions produced by AI are rooted in the patterns in the data on which the technology is built. When the generated results are inaccurate, or when something goes wrong during deployment, the question of liability comes to the forefront.

To mitigate risks, the key principles companies ought to observe when deploying AI include upholding ethical standards, ensuring the reliability of data sets, and being transparent in explaining how the AI tools work. Most importantly, companies should have a sound system in place to deal with misinformation or unknown outcomes.

Unlike Europe, Singapore does not have a specific law that places liability on companies that deploy AI in their systems. However, in January 2024 the Infocomm Media Development Authority proposed a new Model AI Governance Framework for Generative AI, which builds on the existing model AI governance framework released by the Personal Data Protection Commission of Singapore. The proposed framework guides companies’ decision-making regarding AI deployment. For Singapore’s healthcare industry specifically, companies can refer to the Artificial Intelligence in Healthcare Guidelines published by the Ministry of Health in October 2021.

Understanding IP ownership in AI-generated outputs

Laws are still catching up on the ownership of IP in AI-generated materials. The recent Air Canada case, in which the airline was held liable for what its chatbot said, exemplifies these legal challenges.1 It raises questions about who should be legally liable for AI negligence in various fields, including healthcare.

Under the existing copyright and patent regimes, only human beings can be listed as inventors of AI-related patents and authors of AI-generated copyright works. This is due to the complexities in determining which entity to sue for patent or copyright infringement if AI were deemed the owner.

However, over time, with AI machines taking on more critical roles as they work alongside human beings, it might be increasingly challenging to assign responsibility, culpability, and liability in cases of negligence. This is expected to impact the global governance of AI tools in the future.

Navigating regulations in different jurisdictions

Highlighting a real concern for technology companies, Jonathan said, “Technology is never territorial or jurisdiction-specific. If you want to make your technology applications available worldwide, it is important to navigate regulations in the different jurisdictions.”

Drawing a parallel to personal data protection laws, which are generally harmonised across different jurisdictions, Jonathan emphasised the crucial need for global uniformity and harmonisation in AI regulations. This would provide companies with a clearer and more predictable legal framework, enabling them to navigate the complexities of AI-related legal issues with greater confidence. The resulting uniformity can in turn stimulate greater innovation in the field of AI.

The right partnership is key

In response to a question from the audience about selecting the right partner for AI innovation, Jonathan suggested that technology companies choose partners that complement rather than stifle their innovation efforts.

“Go with a partner who has no desire to claim ownership of your IP or commercialise your products,” Jonathan advised.

In summary, to navigate the complexities of regulatory and IP ownership issues effectively, it is important to stay current with the rapidly evolving AI landscape and its impact on the future of healthcare and life sciences.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.
