
Chatbot Comforts Patients

No one likes thinking about death. Funeral arrangements, last wills and testaments, and questions of the afterlife make many people deeply uncomfortable, but these things often must be confronted all the same.
Would a non-judgmental, non-threatening virtual friend make these topics easier to grapple with? That’s the concept behind a tablet-based chatbot developed by Professor Timothy Bickmore, a human-computer interaction expert in the College of Computer and Information Science.

Over the past 10 years, Bickmore’s lab has developed animated health counselors. This chatbot is the latest iteration of that ongoing effort, and it is designed for terminally ill patients in the last year of their lives. The project is funded by the National Institutes of Health’s National Institute of Nursing Research and is currently being tested in a clinical trial. Here, Bickmore explains how the virtual agent works and how it could change the field of palliative care.

Q1: What was the motivation for creating a chatbot that focuses on end-of-life care and decision-making?

Palliative care is a branch of medicine in which the primary goal is to alleviate suffering and improve quality of life, not to cure. Palliative care is usually brought in much too late in the progression of a terminally ill patient’s condition. I think one-third of patients who get referred to hospice care die within a week, when they could have received services that may have reduced pain or improved their quality of life months earlier.
The primary motivation for this project is to introduce something as much as a year before a patient dies that can provide help and counsel, but also determine whether the patient needs palliative care services and make the referral.

Q2: How was the chatbot program built and how does it work?

We worked on this project in collaboration with Boston Medical Center. We had more than a year of weekly meetings with BMC, where we designed all the modules of this system, which provides a range of services, from advice on medications for, say, pain management, to meditation for dealing with stress, to symptom tracking and alerting.

There’s a nurse involved with the study who monitors the system to see if there are problems and if a palliative care specialist should be brought in. The chatbot also provides spiritual counseling and physical activity promotion, as part of its ability to discuss a range of different topics. We’ve worked with teams of clinicians, and in some cases, chaplains from BMC, and our students and staff here in CCIS have mapped out all of the dialogue content for the system. We’ve coupled that with our animated characters and dialogue engine and deployed it on a tablet computer, which is sent home with a patient for six months of continual use. Patients use the tablet multiple times per day and have conversations about different topics as they need to. Part of the system is designed to just keep them company, to tell them stories and do social chat.
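The interview doesn’t include implementation details, but as a rough illustration of the modular design Bickmore describes, a system like this might route each patient request to a scripted content module, with the symptom-tracking module able to flag the study nurse. The sketch below is hypothetical: the Module and DialogueEngine names, the notify_nurse hook, and the severity threshold are all assumptions for illustration, not the lab’s actual code.

    from dataclasses import dataclass
    from typing import Callable

    def notify_nurse(state: dict) -> None:
        # Stub: in the real system, a study nurse monitors alerts like this.
        print(f"ALERT: review needed for patient {state.get('patient_id')}")

    @dataclass
    class Module:
        name: str                           # e.g. "medication_advice"
        run: Callable[[dict], str]          # returns the agent's next utterance
        alerts_nurse: bool = False          # symptom tracking can escalate

    class DialogueEngine:
        # Routes a patient's menu choice to the matching content module.
        def __init__(self) -> None:
            self.modules: dict[str, Module] = {}

        def register(self, module: Module) -> None:
            self.modules[module.name] = module

        def handle(self, choice: str, state: dict) -> str:
            module = self.modules[choice]
            utterance = module.run(state)
            # Hypothetical escalation rule: severe symptoms flag the nurse.
            if module.alerts_nurse and state.get("symptom_severity", 0) >= 7:
                notify_nurse(state)
            return utterance

    # Register the kinds of modules the interview mentions.
    engine = DialogueEngine()
    engine.register(Module("meditation",
                           lambda s: "Let's begin a short guided meditation."))
    engine.register(Module("symptom_tracking",
                           lambda s: "On a scale of 0 to 10, how is your pain today?",
                           alerts_nurse=True))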


Q3: What sort of guidance and support does the chatbot provide? What might a user see?

When you start it up, an animated character appears. She calls the patient by name, saying something like, “Hi, Bob, how are you doing this morning?” There is a little introductory social chat, like you’d have with a caretaker, and then she’d say, “What can I help you with right now?” And then she gives the patient a range of options. If there’s a message from the nurse, or if it’s time to do some exercise or take medication, she might say, “Don’t forget to do your exercises today.” There is guided meditation if patients are feeling stressed or need peace of mind. There is physical activity promotion, depending on whether they are able to walk or are confined to a chair or a bed. There is a range of activities she can suggest and a number of different exercises she can walk them through, principally to keep their spirits up and to have a positive mental health impact. Another part of the chatbot is designed for carrying out advance care planning: designating a health care proxy, specifying what happens if your heart stops, filling out a DNR form, and deciding whom to discuss this with.


Q4: How do users interact with and respond to the chatbot?

The user input is constrained to multiple-choice selections on the touch screen throughout the dialogue. We did that for safety reasons. Since patients use this as a health oracle, we never want to get into a situation where it’s giving inappropriate advice that could cause harm. With unconstrained natural language input, you can’t control what the user asks, and if they ask questions that are out of domain, like some medical condition we hadn’t thought of or hadn’t programmed into the system, the chatbot could potentially offer advice that causes harm. Nobody can do fully unconstrained natural language understanding at this point, which is why we use multiple-choice responses.
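As a rough sketch of why constrained input is safer: if every turn offers only a fixed, authored set of responses, the conversation can never leave the content the clinical team reviewed. The structure below is an assumption for illustration (the Turn type, the prompt loop, and the example branches are hypothetical, not the project’s code); the key point is that the patient selects an index, never free text.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        prompt: str             # what the animated character says
        choices: list[str]      # the only inputs the patient can give
        next_turns: list[str]   # authored follow-up turn for each choice

    def run_turn(turn: Turn) -> str:
        # Every response is a tap on one authored choice, so the dialogue
        # stays inside clinician-reviewed content by construction.
        print(turn.prompt)
        for i, choice in enumerate(turn.choices):
            print(f"  [{i}] {choice}")
        selection = int(input("Tap a choice: "))  # stands in for the touch screen
        return turn.next_turns[selection]         # branch to the scripted follow-up

    greeting = Turn(
        prompt="Hi, Bob, how are you doing this morning?",
        choices=["Pretty good", "Not so great", "I'm in pain"],
        next_turns=["social_chat", "check_in", "pain_module"],
    )

Because the branch taken is always one of the authored next_turns, an out-of-domain question simply cannot be expressed, which is the safety property the answer describes.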

Originally published at News@Northeastern by Allie Nicodemo.