The shivers of Babylon
The stated mission of Babylon is global and ambitious, “to put an accessible and affordable health service in the hands of every person on earth”. They seem to be making good progress. Already 30% of the population of Rwanda, over 2 million people, have accessed their app-based primary care services and a recent deal with Chinese firm Tencent puts Babylon’s AI into the hands of the one billion users of China’s WeChat platform. Add in the Middle East, Europe and North America and their global ambition is plain to see.
In the UK, Babylon’s GP At Hand service, operating through a single GP practice in Fulham, London, has quickly attracted tens of thousands of people from across the capital (and beyond), keen for the convenience of near instant, app-based access whilst still inside the NHS. However, the Royal College of General Practitioners has been less keen, accusing them of “damaging traditional general practice services” by “cherry-picking” less complex patients. The current model of primary care funding in the UK was not set up with this kind of movement of patients in mind, and this is prompting a rethink from the NHS centrally. GP At Hand does have one influential fan, though: new UK Secretary of State for Health Matt Hancock has revealed that he is a user of the service and finds it “brilliant”.
'The digital revolution of primary care has started'
The ability to contact a doctor remotely and at a time that suits you (i.e. not Monday to Friday, 9-5) is not new, and there are multiple products and schemes that are already delivering this. In my own area we currently have remote consultations live to half a million people and will be live across the whole 1.2 million population by the end of the year. In this sense the much-anticipated digital revolution of primary care has finally started. It is where it goes next, though, that most interests me.
Across the world all of our health systems are facing similar challenges, not least an ageing population and a shortage of qualified staff, and the need to deliver higher quality care in an ever more challenging financial situation. In that context the role for digital, and in particular artificial intelligence (AI) is compelling.
Throughout Europe there is a shortage of qualified clinical staff, and attempts to grow that workforce have had very limited results. In the UK the Royal College of Nursing estimates that the NHS has 42,000 nursing vacancies, and the World Health Organisation has put the worldwide shortage of nurses and midwives at a staggering nine million. It is perhaps no surprise that the greatest need is in the developing world. So with clinical staff such a precious resource, it is surely essential that they operate at the top of their specialisms, undertaking the tasks that only they can do. However, much of the day-to-day work of clinicians today is ‘routine’, and if it is routine then you probably don’t need an expert clinician to perform it.
Can you replace care with an algorithm?
I initially wrote no as the answer to that question, as if it were an immutable fact. However, as I think about this, maybe AI will be able to in some settings and for some people. Much has been made recently of the ability of the Babylon App AI to ‘pass’ the MRCGP exam, with a better score than many trainee general practitioners. This led to a flurry of press speculation that GPs were now obsolete, and an equally strong denunciation of the software. Some doctors took to Twitter to show off specific examples of where the AI had likely mis-diagnosed, using the hashtag #deathbychatbot. This raises some interesting questions.
When a clinician makes a mistake there is not only a clear route of accountability, there is also a limited scope for harm based on how many patients they can interact with. When AI is wrong, who is responsible? Ali Parsa claims that for Babylon it is him, but that liability must be corporate as well as personal. The bigger challenge is the scale at which AI can be deployed, and therefore the scale at which mistakes could be made. Giving one person a mis-diagnosis is worrying; delivering that same mistake to a worldwide audience would have major repercussions. The challenge for Babylon and others at this point is how to prove that their algorithms are safe whilst protecting their intellectual property, and that is a challenge for regulators too.

The internet has proven hugely beneficial for patients in learning about their health and care, but it has also given rise to “Dr Google”, helping people to incorrectly self-diagnose. So it matters where the next generation of search, via Alexa, Siri and others, gets its information from. Last month it was reported that NHS Choices information would be available via Alexa in the UK, but it is essential that patients and clinicians alike understand where any AI information is being sourced from, and how it is regulated. However, in the digital age a global product reach will pose multiple challenges for domestically located regulators.
AI to partner with clinicians?
One thing is certain, though: the future cannot be willed away, nor should it be. There is going to be a role for AI in helping clinicians and citizens to diagnose and even care. In his book Digital Transformation at Scale, Tom Loosemore describes video-rental chain Blockbuster’s failure to react to the advent of streaming as being akin to not getting out of the way of a ‘giant snail’ heading towards them. We face a similar challenge. This doesn’t need to be an unregulated race to the bottom as suppliers look to provide lower cost models of digital care to a wider audience. Instead, this becomes most powerful when we adapt our existing health and care services to welcome AI and allow it to partner with clinicians, taking out much of the routine work, creating a more responsive service and letting care professionals do the work that only they can deliver. To be a partner, though, requires trust, and right now we need regulation of AI to create a trusting environment.
I started this blog by saying that Babylon, and others, were sending shivers of anxiety and excitement through the healthcare system. The anxiety is understandable but can be addressed through appropriate regulation and trials. The excitement is also very real. We talk about ‘disruptive technologies’ a lot, but to date much of healthcare follows the same models that it always has. We cannot conjure up millions more doctors and nurses, so we must seek to let AI and assistive technology support our clinical staff to meet our future challenges and continue to deliver safer, higher quality and compassionate care.