Last week I was lucky enough to be on a panel for a Medfest event. Medfest is a medical film festival hosted at medical schools all around the country during February and March. Audiences at each event see the same selected short films on the theme of medicine, after which a panel (mostly consisting of doctors) shares their thoughts on the films. My event was chaired by Alex Langford (aka @PsychiatrySHO) and I joined two consultant psychiatrists and a paediatric SHO on the panel.
As you would expect, given the subject matter, we spent a lot of time talking about ethical or philosophical points (for example, whether a woman with dementia could give meaningful consent to being filmed; how much patients should be at the centre of films about medicine, etc). One film we saw was a science fiction short set in a dystopian future about a medical robot called Dr Easy.
I’m pausing for two big warnings, here. First, if you want to watch for yourself and avoid SPOILERS, skip the next paragraph. Secondly, big TRIGGER WARNINGS for suicide.
The film opens with a siege in progress, blue lights flashing and choppers buzzing overhead. A lone man is holed up in his flat with a shotgun, shot but not killed by a police sniper. The commanding officer’s next move is to deploy a robot medic. Dr Easy, vaguely humanoid in shape and programmed with a soothing female human voice, enters the perpetrator’s flat carrying a bag of medical equipment, noting as she does so that there is “a strong odour of blood and of petrol”. It is clear as soon as she speaks to the perpetrator, Michael, that she is programmed to establish rapport with humans in distress. Michael has been shot in the mouth and cannot speak, so it is down to Dr Easy to establish his needs and motives through eye contact, Michael’s attempts to write on the wall in his own blood, and her access to online information about his financial and family problems. Just when it looks as though an understanding might be reached and Dr Easy might coax Michael into putting the gun down, she makes an error in mentioning Michael’s son (I’ve written whole posts on why this is a bad strategy when dealing with suicidal people). Michael douses himself with petrol and sets himself alight. There is a large explosion from his flat, after which we see Dr Easy exiting the building, partially alight. Partway down the staircase she stops, immobilised and presumably “dead”, insofar as we can speak of death in a robot.
Robots, by definition, have no physical sex and (as yet!) insufficient sense of self to have a sense of gender, so it was of interest to me that Dr Easy had been assigned a female voice. Although her role had elements of negotiation (“Shall I get someone to bring her [your ex-wife] here?”), it was primarily one of nurturing (demonstrating concern, establishing rapport, administering anaesthetic). Nurturing female robots are certainly nothing new. British science fiction writer John Wyndham wrote a story called Compassion Circuit as long ago as 1956, in an anthology called The Seeds of Time. The story opens with a doctor suggesting George purchase a robot nurse for his spouse Janet, an apparently depressed housewife. Janet and the robot, Hester, are alone together all day and become friends. Ironically, Hester’s state-of-the-art compassion circuit makes it difficult for her to tolerate Janet’s ongoing emotional pain. The logical step, at least in Hester’s mind, is to suggest that Janet undergo a procedure to lose the ability to feel distress. Janet agrees and has the procedure carried out, essentially becoming a robot herself.
Dr Easy prompted a discussion on whether doctors could ever really be replaced by robots (with Kier noting wryly that at times his working day seemed to consist of drawing blood and filling in paperwork, both tasks a robot could easily perform). It’s been suggested – particularly in Japan – that robot assistants could be developed to address the mobility needs, and even the need for caring touch, of an ageing population.
That’s all a long way from a compassion circuit, but even relatively simple programming can generate a bot that appears to be expressing care and concern, even if it’s really doing no such thing. Eliza (yep, another female-gendered nurture bot) is a text-based “computer therapist” designed to emulate the Rogerian approach to counselling (very briefly, this means Carl Rogers’ approach of putting the client at the centre of the interaction and holding them in “unconditional positive regard”). Try Eliza out for yourself and you’ll notice she seems really good at “listening skills” – lots of open questions, prompts for further information (“tell me more…” and “can you elaborate on that?”), bouncing questions back at you, and so on. Sometimes she even pauses a little, as if giving your responses some thought. I’d be interested to know what anyone who’s ever used online chat or instant messaging for therapy thinks of Eliza. I’ve had some pretty terrible psychiatrists in the past, including doctors who have failed to make eye contact, failed to read my notes, reflected back information incorrectly, or shown no flicker of having met me before. It certainly felt as if they lacked a compassion circuit. On the social interaction front, Eliza would be at least as good as a doctor like that, and presumably it would be possible to programme robots to manage at least some diagnostic or prescribing decisions based on patient data.
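To show just how simple that “simple programming” can be: Eliza-style listening is little more than keyword rules plus pronoun reflection, so “I feel lonely” bounces back as a question about feeling lonely. Here’s a minimal sketch in Python – the rules and wording are my own illustration, not Weizenbaum’s original script:

```python
import random
import re

# Pronoun swaps so the user's words can be reflected back ("my" -> "your").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# A few illustrative keyword rules; the real ELIZA script has many more,
# ranked by keyword priority. The final catch-all gives the stock prompts.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "Tell me more about feeling {0}."]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you say you are {0}?"]),
    (r".*", ["Can you elaborate on that?",
             "Please, tell me more.",
             "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Return a Rogerian-style reply using the first matching rule."""
    cleaned = statement.rstrip(".!?").lower()
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            reply = random.choice(replies)
            return reply.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."
```

There is no understanding anywhere in this: no model of the speaker, no memory, no concern – just pattern matching that happens to read like empathy.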
But I like to think – and I’m going to be trusting and believe Alex, who assures me of this – that psychiatrists are improving, that a more dedicated calibre of junior doctor is now choosing to specialise in mental health. I hope we’re both right, because when psychiatry works well it does so on a level far beyond what a diagnostic bot with programmed compassion could ever achieve. Dr Easy was apparently good at reading expressions, but I don’t believe a robot could really identify the haunted look in my eyes when I’m paranoid and scared. It could look pleased to see me, but I would always know it was smiling because it had been instructed to do so. And a robot could never really support my recovery, because it is impossible to establish a rule for what recovery is. Eliza can “listen”, but she can’t feel joy or triumph. She can never look at a patient’s progress and think, “I helped make that happen.” We need more than a compassion circuit in psychiatric interaction. We need nuanced clinicians who are able to engage their empathy processor as well as their factual knowledge. I’m thinking of calling them “people.”