Eric Topol is an American cardiologist and geneticist – among his many roles he is founder and director of the Scripps Research Translational Institute in California. He has previously published two books on the potential for big data and tech to transform medicine, with his third, Deep Medicine, looking at the role that artificial intelligence might play. He has served on the advisory boards of many healthcare companies, and last year published a report into how the NHS needs to change if it is to embrace digital advances.
Your field is cardiology – what makes you tick as a doctor?
Well, the patients. But also the broader mission. I was in clinic all day yesterday – I love seeing patients – but I also try to use whatever resources I can to think about how we can do things better, how we can have much better bonding, accuracy and precision in our care.
What’s the most promising medical application for artificial intelligence?
In the short term, taking images and having far superior accuracy and speed – not that it would supplant a doctor, but rather that it would be a first pass, an initial screen with oversight by a doctor. So whether it is a medical scan or a pathology slide or a skin lesion or a colon polyp – that is the short-term story.
You talk about a future where people are constantly having parameters monitored – how promising is that?
You’re ahead of the curve there in the UK. If you think you might have a urinary tract infection, you can go to the pharmacy, get an AI kit that accurately diagnoses your UTI and get an antibiotic – and you never have to see a doctor. You can get an Apple Watch that will monitor your heart rate, and when something is off track it will send you an alert to take your cardiogram.
Is there a danger that this will mean more people become part of the “worried well”?
It is even worse now because people do a Google search, then think they have a disease and are going to die. At least this is your data so it has a better chance of being meaningful.
It is not for everyone. But even if half the people are into this, it is a major decompression on what doctors are doing. It’s not for life-threatening matters, such as a diagnosis of cancer or a new diagnosis of heart disease. It’s for the more common problems – and for most of these, if people want, there is going to be AI diagnosis without a doctor.
If you had an AI GP, it could listen and respond to patients’ descriptions of their symptoms – but would it be able to physically examine them?
I don’t think that you could simulate a real examination. But you could get select parts done – for example, there have been recent AI studies of children with a cough, and just from the AI interpretation of that sound, you could accurately diagnose the type of lung problem involved.
Smartphones can be used as imaging devices with ultrasound, so someday there could be an inexpensive ultrasound probe. A person could image a part of their body, send that image to be AI-interpreted, and then discuss it with a doctor.
One of the big ones is eyegrams, of the retina. You will be able to take a picture of your retina, and find out if your blood pressure is well controlled, if your diabetes is well controlled, if you have the beginnings of diabetic retinopathy or macular degeneration – that is an exciting area for patients who are at risk.
What are the biggest technical and practical obstacles to using AI in healthcare?
Well, there are plenty – a long list: privacy, security, the biases of the algorithms, inequities – and the worsening of those inequities, because AI in healthcare is catering only to those who can afford it.
You talk about how AI might be able to spot people who have, or are at risk of developing, mental health problems from analysis of social media messages. How would this work and how do you prevent people’s mental health being assessed without their permission?
I wasn’t suggesting social media be the only window into a person’s state of mind. Today mental health can be objectively defined, whereas in the past it was highly subjective. We are talking about speech pattern, tone, breathing pattern – when people sigh a lot, it denotes depression – physical activity, how much people move around, how much they communicate.
Then there is facial recognition, social media posts, and vital signs such as heart rate and heart rhythm. The collection of all these objective metrics can be used to track a person’s mood state – and in people who are depressed, it can help show what is working to get them out of that state, and help in predicting the risk of suicide.
Objective methods are doing better than psychologists or psychiatrists in predicting who is at risk, so I think there is a lot of promise for mental health and AI.
If AI gets a diagnosis or treatment badly wrong, who gets sued? The author of the software or the doctor or hospital that provides it?
There aren’t any precedents yet. When you sign up with an app, you are waiving all rights to legal recourse – but people never read the terms and conditions, of course, so the company could still be liable because there isn’t any real consent. For the doctors involved, it depends on where that interaction is. What we do know is that there is a horrible problem with medical errors today. So if we can clean that up and make them far fewer, that’s moving in the right direction.
You were commissioned by Jeremy Hunt in 2018 to carry out a review of how the NHS workforce will need to change “to deliver a digital future”. What was the biggest change you recommended?
I think the biggest change was to try and accelerate the incorporation of AI to give the gift of time – to get back the patient-doctor relationship that we all were a part of 30, 40-plus years ago. There is a new, unprecedented opportunity to seize this and restore the care in healthcare that has been largely lost.
In the US, various Democratic candidates for 2020 are suggesting a government-backed system – a bit like our NHS. Would this allow AI in healthcare to flourish without insurers discriminating against patients with “bad data” and allow AI to fulfil its promise?
Well, I think it certainly helps. If you have a single system where you implement AI and you have all the data in a common source, it is much more likely to succeed. The NHS’s efficiency – providing care with better outcomes than the US at a lower cost per person – owes a lot to the fact that you have a superior model.
• Deep Medicine by Eric Topol is published by Basic Books (£25)