
Harnessing artificial intelligence for data analytics in digital health

4 September 2019 | Applicable law: England and Wales

This blog is part one of our three part blog series which looks at artificial intelligence in digital health in Cambridge. Please click here to view the introductory article.

There’s an abundance of data being captured and analysed as a result of artificial intelligence (AI) being used in digital health. Market research forecasts that the global market for big data in healthcare analytics will reach $68.75B by 2025.

Healthcare data capture is growing at a fast pace and is expected to surpass 2,314 exabytes by 2020. This growth is stimulated by declining storage costs and by growing organisational reliance on cloud-based services and subscription models.

The European Union is set to invest $24B in AI by 2020, and healthcare is one of the market sectors that stands to benefit the most, as it has become so reliant on big data.

Big data in the healthcare sector will be driven by an urgent need to control rising healthcare costs and to improve patient outcomes and resource management. Analytics services contributed the largest share of the sector's $5.80B in 2017 alone.

The spotlight is very much on the UK. Heavyweights like Google’s DeepMind Health* are using AI to mine medical records, processing hundreds of thousands of medical data profiles in minutes at Moorfields Eye Hospital and University College London Hospitals (UCLH) Trust.

BioBeats* is a UK digital health and AI company that creates easy-to-use corporate and personal wellness solutions to combat stress. Alongside its funding from Oxford Sciences Innovation, White Cloud Capital and IQ Capital, the digital health start-up received backing from actor and singer Will Smith in August 2018, bringing the total funding for BioBeats to $6M.

The great petri-dish

Cambridge, UK has grown into a petri-dish for innovation, globally renowned for incubating spin-outs and scale-ups in life sciences, digital therapeutics and biotech. Cambridge (known as Silicon Fen) has a higher-than-average digital density compared with most UK cities and is the third-largest bio-cluster in the world (after Silicon Valley and the Boston area), attracting £681 million ($868 million) in venture capital in 2017.

Trailblazers such as Cambridge Antibody Technology (acquired by AstraZeneca in 2006 for £702m), Kymab (currently preparing for a NASDAQ IPO) and Solexa have been the UK's biggest biotech success stories, firmly paving the yellow brick road all the way to Cambridge for the next aspiring biotech unicorn.

AI digital health start-ups like BenevolentAI and Healx* are using AI to speed up drug development, discovering and developing treatments for rare diseases by combining AI with pharmacological expertise and patient engagement.

BenevolentAI, a UK tech unicorn valued at more than £1B, opened a facility in Cambridge in 2018 after acquiring a lab at the Babraham Research Campus. And Healx has raised $11.9M, including a $10M Series A round led by Balderton Capital.

But with this exponential growth in data harvesting, what are the biggest risks to digital health companies and start-ups from this influx of data and the upsurge in health data analytics driven by AI?

In one word: data.

Without placing a dampener on this exciting growth story, it is worth recalling the words of the UK Information Commissioner's Office (ICO), which found in 2017 that Google DeepMind's processing of sensitive personal data received from the Royal Free London NHS Foundation Trust failed to comply with privacy law: "[..] the price of innovation does not need to be the erosion of fundamental privacy rights".

In this case, in 2015-2016, DeepMind had been conducting clinical safety tests of an app it developed. The app, Streams, pooled non-anonymised health data, such as vital signs and test results, of 1.6 million patients obtained from the Royal Free London NHS Foundation Trust to help clinicians monitor the onset of serious conditions such as acute kidney injury.

In response to the ICO's criticism, numerous changes were implemented as Streams moved from safety testing to live use in a clinical environment in early 2017. However, in 2018 DeepMind announced that the Streams team would join Google to make the app available globally. It will be important to clearly understand how the sensitive medical data that the Streams team have access to will be dealt with under these new arrangements.

And here’s a little-known fact: despite DeepMind's pioneering work in AI, Streams itself didn’t use any AI from its inception. Since the move to Google, and with the intention of powering Streams with AI in the future, an AI algorithm has finally been created, leading to last month's exciting announcement of an AI model that can accurately predict the onset of acute kidney injury up to 48 hours in advance.

In health data analysis, it is not the technology that is key but access to a broad enough data set. The most valuable asset of an AI health-tech venture is not the AI platform but the data. Without a broad, clinically representative data set to learn from, the AI will not provide a clinically usable output.

When such outputs may be relied on to make critical life-saving decisions, the quality of the data on which the AI learns to make its predictions is crucial: in particular, how clinically representative the input data is of the patient population intended to be treated on the basis of the AI's output.

For example, DeepMind’s AI was trained on patient data from the US Department of Veterans Affairs (VA). As the VA dataset came from an overwhelmingly male (93.6%) group, it was not representative of the general patient population, which has a more balanced gender demographic. This is reflected in the AI's performance across gender (and separately ethnicity): the model’s performance is lower for women.
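For readers who want to see why an aggregate metric can mask this kind of gap, the point can be made concrete with a minimal sketch. The figures below are entirely hypothetical (toy numbers, not DeepMind's model or the VA dataset): when one group dominates the sample, overall accuracy looks healthy even though accuracy for the minority group is markedly worse.

```python
# Illustrative only: hypothetical predictions, not real clinical data.
# When one group dominates a dataset (e.g. ~94% male), the aggregate
# accuracy is pulled towards that group's accuracy and can hide a
# performance gap for the under-represented group.

def accuracy(pairs):
    """Fraction of (prediction, truth) pairs that match."""
    return sum(p == t for p, t in pairs) / len(pairs)

# (prediction, truth, group) triples: a toy, male-dominated sample.
results = (
    [(1, 1, "M")] * 90 + [(0, 1, "M")] * 4   # 94 male cases, 90 correct
    + [(1, 1, "F")] * 4 + [(0, 1, "F")] * 2  # 6 female cases, 4 correct
)

overall = accuracy([(p, t) for p, t, _ in results])
by_group = {
    g: accuracy([(p, t) for p, t, grp in results if grp == g])
    for g in ("M", "F")
}

print(f"overall accuracy: {overall:.2f}")  # 0.94 -- looks healthy
for g in sorted(by_group):
    print(f"group {g}: {by_group[g]:.2f}")  # F: 0.67 vs M: 0.96
```

Checking performance per subgroup, rather than only in aggregate, is exactly the kind of representativeness audit the paragraph above calls for.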

Since the ICO’s remarks in 2017, the number of data breaches in the digital health sector has only grown year on year, including exposure of sensitive medical data.

Therefore, when data collection or analysis is central to the technology being built, the privacy of users should be one of the key values driving the innovation. Strong risk management should motivate innovators to factor privacy into product development from day one, not as an afterthought.

Why?

Privacy barriers and future privacy opt-outs need to be carefully built in to limit data collection by the system to ensure the technology is compliant by its very design. Attempts to retrofit prototypes or products right before roll out to comply with privacy laws usually don't end well. DeepMind's reflection in its 2017 blog is piercingly insightful, "…to achieve quick impact when this work started…, we underestimated the complexity of the NHS and of the rules around patient data…".

The message is clear: just because you can, it doesn't always mean you should.

A good starting point is to establish the legal basis under privacy laws for data collection, to carry out privacy impact assessments to understand what further measures are needed not only to comply with such laws but also to build user trust through greater transparency, and to plan regular data audits to ensure the original model remains suitable for any new applications.

As a final word, it is also worth remembering that the life sciences and healthcare industries are already some of the most heavily regulated sectors. Health products and services are often simultaneously regulated in the UK by a range of regulations and regulators, such as the MHRA (Medicines and Healthcare products Regulatory Agency), the CQC (Care Quality Commission), the Human Tissue Authority and, if applicable, even professional licensing bodies such as the General Medical Council. Such companies, their senior management and their products will already need to comply with many of the standards relating to informed consent, confidentiality and patient privacy from product or service launch and throughout the product life cycle.

In the next part of this series in September, we will focus on gene therapy which is predicted to be one of the next biggest trends in biotech. AI is set to revolutionise gene therapy by making precision editing and highly personalised healthcare a reality. Alongside gene therapy’s unique history with Cambridge and why Cambridge may continue to hold the key to its future success, we will look at several key intellectual property (IP) issues this raises, including around AI as an inventor.

Stay tuned!

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.
