Prizing the people whom AI is designed for, over the AI that is designed for them.
by Konstanze Tan Wan Yu
Hodenschmerzen? #tellAda.
Plastered in bold, white font against a brilliant turquoise signboard in a busy railway station in Berlin1 is a word many try to squirm away from in conversation: Hodenschmerzen is German for ‘testicular pain’. The ad then makes an equally bold suggestion: that Ada should be part of this uncomfortable but necessary conversation.
In the healthcare industry, game-changing, AI-enabled technologies have been praised as panacea and blamed for paranoia in equal measure. Neither of these polarizing depictions is helpful in realizing AI’s grand ambitions for healthcare. Ada’s conviction that empty rhetoric isn’t the currency of consumer loyalty explains her painstaking efforts to understand and engage her users. It also fuels her quiet rise onto the pedestal, above the razzle-dazzle of the many AI-powered gimmicks that have sought that place at the expense of trust.
In the course of writing this article, I have had the opportunity to dissect many success stories of AI-powered innovations that have made a difference in healthcare. Most of these are narratives of breaking barriers between users and the ways in which AI can give them the help they need. They also speak of building bridges between AI’s enchanting promises for the healthcare industry and the people who can give it the platform to deliver on them.
An Invitation to a Necessary Conversation
“Let’s start with the symptom that’s troubling you the most.” Ada ditches the full stop as soon as she’s through with her conversation starter. She much prefers to speak in questions, and she knows how to keep the conversation going with the right ones. The conversation grants Ada an invaluable opportunity to acquaint herself with her user’s concerns before she suggests what could be going wrong and what type of medical attention is required.
The task of training chatbots to be adept at conversation is especially formidable because there are no clear rules for it. For those seeking a useful one, the advice given in a Fast Company article, “First of all, try to be human”,2 may be one of the most clearly articulated rules of thumb around.
Ironically, the process of scripting conversations for chatbots seems to have given us more insight into what it means to be human: computer scientists, linguists and cognitive scientists strive to unpack the subtleties of our daily conversations in order to design chatbots we are willing to talk to.
Jargon, for one, forms a class of notorious conversation killers. Unfortunately, the language of the healthcare industry is peppered with it. “Communication” between practitioner and patient, as a Reuters Health journalist notes, “may not be happening as often as doctors think it is”.3 The inability to interpret jargon can be lethal when patients, unable to accurately discern the severity of what they are experiencing, do not seek the necessary medical treatment in time. AI-powered chatbots and applications combat this by communicating reliable medical information through disarmingly simple language and illustration.
When I told Ada about a headache (or cephalgia) I was having, Ada responded by asking whether my “temple is sore to the touch”. I wanted to be absolutely sure that what I thought a temple was corresponded to the medical definition, so I found it particularly helpful when Ada offered a visual depiction of “the area of the head between the eyes and the ears” (or “above the side of the zygomatic arch”, the top result when I looked the same term up on Google).
Within the conversational space shared between chatbots and their users, sensitive health-related matters can be surfaced, many of which would otherwise have remained under wraps for fear of embarrassment. Hodenschmerzen? #tellAda is a bold invitation to a necessary, and unfortunately often overdue, conversation. In cases where Hodenschmerzen suggests testicular torsion, chronic damage can be prevented if treatment is given within six hours of the onset of symptoms.4
Preventable, pregnancy-related complications can similarly remain under wraps when young women do not have timely access to reliable medical information on potential danger signs. Bonzun was born of founder Bonnie Roupé’s own harrowing encounter with pre-eclampsia, and made for the woman who is reluctant to talk about her reality of discomfort and pills after seeing pictures of celebrities flaunting baby bumps on heels. Bonzun’s “Test Tracker” feature turns input in the form of pregnancy test results into insight users can easily comprehend: blood pressure and urine protein levels which exceed the expected range are flagged, and thousands of lives which could otherwise be surrendered to pre-eclampsia may be redeemed.5,6,7
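To make the flagging idea concrete, here is a minimal Python sketch of the kind of range-checking such a tracker might perform. The field names, thresholds and messages below are my own illustrative assumptions, not Bonzun’s actual clinical rules or interface.

# A minimal sketch of range-checking pregnancy test results.
# Thresholds and wording are illustrative, not Bonzun's actual rules.
from dataclasses import dataclass

@dataclass
class Reading:
    systolic_mmHg: int            # blood pressure, upper value
    diastolic_mmHg: int           # blood pressure, lower value
    urine_protein_g_per_L: float  # urine protein concentration

def flag_reading(r: Reading) -> list:
    """Return plain-language warnings for values outside the expected range."""
    warnings = []
    # Hypertension in pregnancy is commonly defined as 140/90 mmHg or above.
    if r.systolic_mmHg >= 140 or r.diastolic_mmHg >= 90:
        warnings.append("Blood pressure is above the expected range; "
                        "please contact your midwife or doctor.")
    # Protein in the urine (illustrative cut-off of 0.3 g/L) can signal pre-eclampsia.
    if r.urine_protein_g_per_L >= 0.3:
        warnings.append("Urine protein is above the expected range; "
                        "please contact your midwife or doctor.")
    return warnings

print(flag_reading(Reading(systolic_mmHg=145, diastolic_mmHg=92,
                           urine_protein_g_per_L=0.4)))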
People Need Stories, Not Lies
“And in that moment, I grabbed down and I felt my wrist, and I felt my watch, and I immediately said ‘Siri, call 911’.” “I looked at my watch and thought, ‘Wow, I’m going to try this out. I hadn’t even tried a call from the water’.” I’m quoting two of Apple Watch’s product marketers who never signed up for the job: a “completely blindsided” mother with a nine-month-old son in tow and her senses overtaken by a car crash, and a kite surfer stranded miles offshore, as both recount their experiences with an unlikely savior.8
Narratives like these hold promise in enabling AI-powered technologies like the Apple Watch to emerge from behind the smokescreen of AI’s hazy doomsday affiliations and perform real, life-saving work. Unfortunately, the trust these narratives inspire is continually undermined as records of past data privacy scandals involving giants in the healthcare industry bleed through the pages of the narrative which AI is writing for the future of healthcare.
The team behind the ‘Streams’ app had a big dream: that one day patients would no longer die from preventable conditions. But the legally questionable flow of 1.6 million patient records between Google DeepMind and NHS RoyalFree during the development of the app dampened much of the initial excitement it generated.
Julia Powles, writing in her capacity as a legal researcher at the University of Cambridge, and Hal Hodson in his as a technology reporter for New Scientist, took aim at DeepMind and its collaborating partner for pursuing a “narrative that it’s not really about artificial intelligence at all, but just about direct care”. Through press releases, the fears of a privacy-conscious public were assuaged by repeated assurances that artificial intelligence techniques would not be applied to the dataset. The public was made to understand that Streams would be no more than an interface for the collation of patient information, facilitating collaboration between the various parties involved in the direct care of an acute kidney injury (AKI) patient.
Yet all this while, grander ambitions had been carefully tucked away from the public eye in an Information Sharing Agreement (ISA), which governed the flow of data between Streams’ developers under a different blanket of terms. “Real-time clinical analytics” and “across a range of diagnoses and organ systems” would have been obvious, even to the untrained eye, as buzzwords in the field of AI. Little was made clear by the use of these ambiguous terms in defining what exactly Streams was designed to do. Within these faint boundaries, data from every patient admitted to a handful of hospitals over a span of five years, AKI or not, was found streaming into DeepMind from RoyalFree. For all the ambiguity clouding the intended usage of highly sensitive data, it seemed that both DeepMind and RoyalFree were clear about one thing: Streams was never intended to be a mere interface for the management of a single disease.9
AI’s ongoing mission to bring real transformation to the healthcare industry is conditional on the consent of the ones it claims to serve. “Data can only benefit society if it has society’s trust and confidence”, wrote a chastened DeepMind in 2017, fresh from the previous year’s fiasco, on a webpage introducing a new project, ‘Verifiable Data Audit’.10 With this blockchain-inspired system dedicated to rigorous scrutiny of any interaction with individual data, DeepMind has been in the business of restoring broken trust, bit by bit.
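For readers curious what “blockchain-inspired” auditing can mean in practice, here is a minimal Python sketch of an append-only, hash-chained audit log: each access to patient data is recorded in an entry whose hash folds in the previous entry, so later tampering breaks the chain. It illustrates the general technique only, not DeepMind’s actual Verifiable Data Audit implementation.

# A minimal append-only, hash-chained audit log. Illustrative only;
# not DeepMind's actual Verifiable Data Audit system.
import hashlib, json, time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str, record_id: str) -> dict:
        """Append an entry describing one interaction with a data record."""
        entry = {
            "timestamp": time.time(),
            "actor": actor,
            "action": action,
            "record_id": record_id,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; an edited or deleted entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                              ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("streams-app", "read", "patient-1234")
print(log.verify())  # True while the log is untampered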
AI Which Clinicians Won’t Mind Using
“Doctors don’t like the idea of having a third wheel in the (patient-doctor) relationship. We do like the idea of having tools and an assistant though.”11 At an informal sharing session by Dr David Benrimoh, a small group of AI engineers and interaction designers gathered in search of a clue to the enigma of moving an AI-enabled innovation from the research lab into clinical application.
CEO of Aifred Health, an AI company seeking to improve treatment efficacy for mental disorders, Dr Benrimoh is also a psychiatry resident, an affiliation that lends his insights unique credence amongst his audience. With a good deal of candor, he uncovered the role of hubris in a doctor’s guardedness toward AI-enabled technology, the unwelcome “third wheel”.
Aifred Health’s algorithm espouses a conviction in assisting, rather than replacing, clinicians through its faithful adherence to existing, tested and trusted practices in treating mental health disorders. These are often contained within 20-30-page manuscripts, and the algorithm-assistant’s task is to convert them into a “clinically accessible format”12, explained Aifred Health’s co-founder, Sonia Israel. The algorithm does not leverage the power of deep learning to generate risky, novel treatment methods that would pose a direct affront to a clinician’s experience. Instead, it uses this capability to pore through colossal bodies of clinical data detailing individual patients’ responses to past applications of known treatment plans for depression. Ultimately, the clinician makes the decision; the algorithm just helps to make it an informed one.13
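A rough Python sketch of decision support in this spirit might look like the following: one simple response model per guideline-recommended treatment, trained on historical outcomes, with the ranked options handed to the clinician. The features, treatment names and synthetic data are my own assumptions, not Aifred Health’s actual model.

# Illustrative decision support: rank known treatments by estimated
# response probability. Synthetic data and simple models, for sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
treatments = ["SSRI", "SNRI", "CBT"]   # guideline-recommended options (illustrative)
models = {}

# Train one response model per treatment on synthetic "historical" outcomes;
# real features could be symptom scores, age, prior treatment history, etc.
for t in treatments:
    X = rng.normal(size=(500, 4))      # stand-in for patient features
    y = (X @ rng.normal(size=4) + rng.normal(size=500) > 0).astype(int)
    models[t] = LogisticRegression().fit(X, y)

def rank_options(patient_features):
    """Return treatments ranked by estimated probability of response."""
    scores = {t: m.predict_proba([patient_features])[0, 1]
              for t, m in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The output is advisory: the clinician, not the model, makes the final call.
for treatment, p in rank_options([0.2, -1.1, 0.5, 0.0]):
    print(f"{treatment}: estimated response probability {p:.2f}")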
Amidst the abundance of dizzying portraits of the dramatic transformative potential of AI-enabled healthcare technologies, design principles have quietly emerged which carefully avoid portraying AI as a disruptive technology. Seeking to design AI which clinicians will embrace, AI engineers and interaction designers have been paying greater heed to pioneering computer scientist Mark Weiser’s vision of making technology “so imbedded, so fitting, so natural, that we use it without thinking about it.”14
Involving AI in the decision-making loop isn’t often the first thing on a clinician’s mind. Clinical decision support tools (DSTs) fail to be integrated into clinical practice, Professor John Zimmerman’s team believes, because many have been designed on the assumption that clinicians would be willing to break away from their workflows to interact with an algorithm whose output they “want and will trust”. Scrutiny of how mechanical heart implant decisions were made amongst cardiologists, for instance, revealed that close to none of these decisions were discussed or made “in front of a computer”, but rather in hallway conversations and in decision meetings.15
Hubris is also at play here. As Zimmerman notes of leading clinicians: “They see that 50% of people die during some kind of procedure, and they say ‘Not my people’.”16 With a track record of excellence behind them, leading clinicians may be less receptive to an AI-powered DST’s predictions of a lower-than-expected probability of success.
These observations informed the team’s design of a new AI-powered DST which automates the generation of slides for decision meetings. An obvious incentive for hospitals to adopt it is the irresistible efficiency it introduces into the tedious process of collating patient updates for the slides. On those slides, AI slips in a piece of its mind: a single percentage reflecting a patient’s predicted post-implant life expectancy. The incentive for clinicians to consider AI-generated input is thus expressed in a more tacit tone: take it or leave it, but it’ll be here if you need it.
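As a rough illustration of that design, the Python sketch below auto-fills a decision-meeting slide from patient updates and quietly appends a single model-generated figure at the bottom. The patient data, the survival estimate and the use of the python-pptx library are my own assumptions, not the team’s actual tool.

# Illustrative slide generation for a decision meeting; the prediction is
# assumed to come from a separate model. Not the Zimmerman team's actual DST.
from pptx import Presentation

patient = {
    "id": "Patient 1234",
    "updates": ["Creatinine stable over the past week",
                "Right heart catheterization completed"],
    "predicted_one_year_survival": 0.62,   # produced elsewhere by a model
}

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])   # title + content layout
slide.shapes.title.text = f"{patient['id']}: weekly update"

body = slide.placeholders[1].text_frame
body.text = "Clinical updates"
for update in patient["updates"]:
    p = body.add_paragraph()
    p.text = update
    p.level = 1

# The model's contribution is a single, unobtrusive line at the bottom.
note = body.add_paragraph()
note.text = (f"Model-estimated 1-year post-implant survival: "
             f"{patient['predicted_one_year_survival']:.0%}")

prs.save("decision_meeting.pptx")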
The AI-enabled technologies which will successfully bring people in need to the help they never knew they needed are those which are experts at earning trust. Over the years, barriers have been steadily built up by sediments of scandal and unhelpful portrayal. There is no telling how much time it takes to rebuild eroded trust: perhaps as much as is needed for us to grasp what it means to be human. But there is also no telling how much of this vision we can translate into reality, just by prizing the people whom AI is designed for, over the AI that is designed for them. [APBN]
About the Author
Konstanze Tan Wan Yu is a second-year biological sciences undergraduate at Nanyang Technological University (NTU) and Editorial Intern at World Scientific.