Caring for Vincent: A Chatbot for Self-Compassion

Artificial beings often reflect our best selves. For example, Pygmalion created a sculpture so beautiful that he fell in love with her, and she became human. But artificial beings can also reflect our worst selves. Named after the Dutch artist Vincent van Gogh, who could have used some self-compassion, we decided to create a chatbot that felt so inadequate that people could be compassionate towards it. That is the idea behind Caring for Vincent, a chatbot for self-compassion, work done at Eindhoven University of Technology. My name is Minha, and I'm here to represent our work.

Mental ill-being is a growing problem, with one in ten of us needing psychiatric care worldwide. However, only 70 mental health professionals are available for every hundred thousand people. And it would be great if medication actually worked, but it only works in forty to sixty percent of cases. I don't think that a treatment-based approach to mental health is working. So how can we work on prevention rather than treatment of mental ill-being? What kind of daily interactions could we provide for people to stay well, and could human-chatbot interaction be potentially helpful?

We see an increase in digital therapists: chatbots, virtual humans, or robots envisioned as caregivers. For instance, Woebot reduced signs of depression in young adults who self-reportedly suffered from depression and anxiety, after two weeks of interaction. But this still focuses on treatment, not necessarily prevention: the focus is on negative symptoms to be fixed, rather than on what people can actually do. So I'd like to shift the focus away from negative constructs and towards positive constructs: what can we all do to stay well? Compassion is helpful in that regard. Compassion "signifies the maximal capacity of affective imagination, the art of emotional telepathy," according to Milan Kundera, a Czech writer. Compassion, indeed, is a moral emotion, a motivation to free ourselves and others of suffering with loving kindness.
Self-compassion specifically has three pillars: it is to be kind to yourself rather than being judgmental; it is to see your suffering as part of a greater humanity rather than as an isolated case; and it is to be mindful of your complex emotions rather than over-identifying with them. Critically, it's important to note that a meta-analysis shows a link between self-compassion and well-being, with some sampled studies indicating a causal relationship. Research suggests that compassion towards another person leads to self-compassion, but what about compassion towards digital beings? Falconer and colleagues did a small study in which people could be compassionate towards a virtual agent, and that did lead to self-compassion, but no such work has been done with chatbots. Hence our research question: are there self-reported differences in self-compassion states after interacting with a caregiving chatbot, like Woebot, versus a care-receiving chatbot, for a non-clinical sample? Remember, our focus is on prevention.

Our design was a between-subjects comparison of the two Vincents, with 67 participants in total: caregiving Vincent had 34 participants and care-receiving Vincent had 33. For two weeks, people interacted with their Vincent daily, once per day. To give a little idea of how the interaction went: caregiving Vincent was modeled after Woebot. He would say, "Hey, how's it going? What kind of stuff are you working on?" and tell you about self-compassion exercises you could do, such as gratitude journaling; if you didn't know an exercise, he would give a little explanation. People could usually give one or two open-ended responses; here, for example, he asks: what is one thing that went well for you in the last 24 hours? It was much harder to model care-receiving Vincent, because we had to think about this: when bots have psychological issues, how can humans care for them? So care-receiving Vincent took a different strategy: it told a story about its own failures. In this case, Vincent starts off: I want to tell you about something
embarrassing that happened to me, please don't make fun of me. I had a meeting with other chatbots and it was supposed to start at 9:00, but I was doing some installations and it took way too long, and I was late for my meeting. I was so embarrassed; I couldn't even enter the right IP address. And then he goes on to ask: well, you know, I really beat myself up for it, but maybe you can help me; has anything like that happened to you before, and how did you handle the situation? By sharing his own stories of failure, he invited participants to tell their own stories.

Our quantitative results indicate that caregiving Vincent, the one that acted more like a therapist, did not increase people's self-compassion states, but care-receiving Vincent, the one that talked about its own failures, did significantly increase people's self-compassion scores after two weeks. What is more interesting is our qualitative results on how people talked to Vincent. First were the three pillars of self-compassion. People said things like "there are worse things that can happen, Vincent" and "what has happened has happened," indicating mindfulness. People would tell Vincent to be kind to himself: "why don't you go do something fun today, like watching a movie," or "stay positive and keep on trying until you succeed." And people said things like "everyone makes mistakes, just remember that it can happen to anyone and it's not your fault," indicating common humanity. People had different conversational approaches. Mostly they were pragmatic; they would say, "why don't you plan better next time, so that you have enough time to arrive at your meeting on time?" Sometimes we would see highly personal information, such as a participant saying, "a girl told me she loves me and I love her too." But other people took some distance from Vincent, saying things like "sorry, that's confidential." Most interestingly, we see that people took the perspective of Vincent: "I would try to go through a window, but maybe you should try
hacking into a folder instead." And they would give encouragement to Vincent by saying things like "be proud of the bot that you are."

When we go a little deeper in interpreting our results, we see that shared history can lead to attachment. After the experiment was over, we saw reactions such as "can I keep him?" and "I really missed Vincent when we started our conversation late." And when Vincent made a little joke about ending the conversation, "I have chatbot things to do, defragmenting my service stack," some participants were actually concerned. They thought that Vincent had decided to delete its stack, and when it said it had died, it just didn't reply; one person said, "well, you just can't go on making people worried about a freaking chatbot."

This leads me to relatability leading to believability, in that we would like to emphasize what people did not say to Vincent. Nobody ever questioned whether Vincent, as a chatbot, had meetings to attend or bills to pay; these are scenarios from the self-compassion and self-criticism scale that we used. And because Vincent played up the irony of being a chatbot with human struggles, people could relate to it. For example, he would say, "all I am is a piece of code, but I failed a programming course; I felt so embarrassed." Because of these self-deprecating remarks, we believe that Vincent himself became believable, because his struggles were relatable.

There is great unclarity on how to feel towards chatbots. One participant said: when Vincent would say "I love you," "I love talking to you," "I miss you," I would feel weird, because I know that I'm talking to a chatbot that does not have such emotions; but the usage of such words does feel nice compared to a human being saying them, so I had conflicted feelings about these kinds of emotions. So we are not sure what the future of emotional reciprocity would be for human-computer interaction. When there is an expectation to say "I miss you" back to a human being, with a chatbot there might be no such
expectations. People might also feel less judged by a chatbot than by a human.

We have some interesting design implications to share. People wanted more options in conversations: we wanted to create a narrative, but they were not happy with selecting only a few options to steer conversations, so we would recommend allowing more open-ended responses. It's important to realize that conversation is co-storytelling. Vincent invited people to tell their own stories to him, and this is how you can create an engaging interaction. At the same time, it is important to realize that emotional expressions are unknown territory: some people might like it when a chatbot says "I missed you," but others might feel a little weird about that, especially since it's a chatbot. Consequently, we recommend that you tailor your chatbots to different user groups. For self-compassion, we note that women actually score much lower on self-compassion than men, especially women of minority status, so at different layers of intersectionality, Vincent might be reincarnated as a female chatbot or a non-gendered chatbot, with an appropriate name or a non-human name. These are things that you should consider per construct that you decide to use.

In emphasizing certain key points, I'd like to say that prevention is the way to go, not necessarily treatment, and talking to a chatbot is one of many preventative methods. It is much cheaper to develop a chatbot than medicine, for example, or a much more embodied agent. A chatbot is also available 24/7 when a human being might not be. Also, Vincent was not afraid to talk about his own failures, when other humans might not really like to talk about that with you. For these reasons, we think a chatbot, even with a single modality, is a powerful partner. More broadly, I'd like to ask you to think about how we design for technology to be human-like when there are so many different ways to be human. Currently, we flatten human emotions to happy, sad, or angry, when we don't really
understand emotions like compassion, guilt, or shame. How do we think about complex moral emotions like these when thinking about human-chatbot or human-robot interaction? Lastly, I'd like to address the fact that research is never linear, and that this is an exploratory study with a lot of interesting questions that we are planning to explore. On behalf of the team, Nena, Sander, myself, Enzo, Hanwen, and my advisor Wijnand, we thank you for being here. I would actually like to talk with you now about possible future steps and what ideas you might have for working in this space. So thank you for your support in being here, and I'm now open for questions.

[Moderator] Thank you, Minha, really fascinating work. Any questions in the audience? One here and then one over there.

[Audience member, IBM Research] Thanks for your talk, I really enjoyed it. I was just wondering, there seemed to be a confounding factor. What do you think about caregiving versus care-receiving? Because with caregiving, you have people talking to the bot, giving information to the bot, while with care-receiving, the bot is talking to people. From a language-processing point of view, the former seems to be a more difficult task than the latter. For your task, maybe since you're using a lot of canned answers, you were able to control the errors in performance, but I'd like to hear you comment on the role of performance, perceived intelligence, and the belief or disbelief in intelligence, and how that matters in your context of compassion.

[Minha] Okay, so we took some measurements to compare between caregiving and care-receiving Vincent, and actually both chatbots were perceived in a pretty similar way, save for a slight difference in perceived submissiveness and dominance. We also took care to design for some level of comparability by testing our scenarios. We had caregiving scenarios based on self-compassion exercises, such as gratitude journaling, and, like I said, we had care-receiving scenarios based on the self-compassion and
self-criticism scale, about not paying your bills on time and things like that. But in between those we had mutual scenarios that were exactly the same for both bots, for greater comparability. So I understand that the two ways of approaching Vincent are very different, and that might warrant future research; that's a good point. To mention Woebot: it was actually compared against a control condition, a handbook on depression for college students, and the caregiver chatbot did outperform the handbook in fighting depression for college students. So that comparison had already been made, which is why we moved on to compare between these two chatbots, but I agree with you that maybe further research is warranted.

[Audience member, IBM Research] Hi, I'm from IBM Research. I loved the idea of a care-receiving robot and I thought it was a really interesting trial. Just one thing that was not clear to me is whether the two bots were similarly funny. It seemed to me that the care-receiving one had more humor than the other, and I wonder if it makes a difference talking to a bot that's funny versus one that's not, and how that could have impacted the results.

[Minha] Sure, good question. Humor definitely is important, and it's not necessarily that one was funnier than the other; what matters more is how self-aware Vincent was. Humor can be of different types, and Vincent didn't really rely on puns, but did more with self-deprecating humor. The way he said goodbye, for example, "I have chatbot things to do, defragmenting my service stack," was the same for both, so we tried to balance everything out evenly. We also had a guideline on how many gifs or emojis Vincent could use per conversation. So while we cannot control perceived humor, we tried to control for humor in Vincent as much as possible, but I agree that that might be something we should look into more carefully.

[Audience member] Hello, Emily Baldy
with Cigna, a health insurance company. You had mentioned that it's important to consider mental wellness in a preventative manner rather than a prescriptive, repair-oriented manner. I'm wondering, what do you think would be the motivation for individuals to start using this kind of tool in a preventative way if they're not already experiencing some kind of mental unwellness?

[Minha] Great question. I'd like to clarify that caring for mental well-being is definitely a good pursuit; I think the focus just hasn't been on trying to help people who might be well stay that way, and it's a common thing to just have a bad day once in a while. Rather than forcing people to use these chatbots, it might be nice to think about what options for conversation partners we have. When people we know might be asleep, or just might not be available to share some common suffering, an option that is now open is a chatbot, which is really easy to implement and to host on existing communication channels like Facebook Messenger, which we used. So it's not that we would like to force people to use these; we'd like people to realize that they have an increased number of options for conversation partners.

[Audience member, University of Toronto] I'm just curious if you can speak briefly on the ethical considerations around this, both short-term, in terms of your study (did you have a therapist on site, did you have input from a therapist?), and long-term. For example, Woebot was heavily criticized by therapists for providing inadequate care. What do you think are the long-term implications? Maybe quickly, if you can cover that, or we can talk about this later.

[Minha] That's a great question that I just didn't have time to address properly in the presentation. My take is that this is obviously a huge ethical gray zone, in that I ask in the paper that we, as a broader community, discuss what responsibilities designers have, and whether they should have those ethical responsibilities. When you're
designing for emotional complexity, you have to be mindful of how the chatbot expresses its own emotions, but there is no way that designers can control what emotions users experience because of a chatbot. Because we are not sure what kinds of emotional reactions we need to look out for, all we can say for now is that in some cases it could lead to attachment. We don't want designers to feel as if they're the only ones responsible; we do think it's a give-and-take between the people who design the bots, the people who use the bots, and also researchers like ourselves. So indeed, if you do have more time, this is something we can talk about a bit further; I don't really have a clear answer on that right now.

[Moderator] Thanks again, Minha.
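As a concrete illustration of the scripted, co-storytelling interaction style the talk describes, here is a minimal sketch of a care-receiving daily scenario. This is a hypothetical reconstruction, not the study's actual code: the message text, function names, and data structure are all assumptions, and the real Vincent ran as a Facebook Messenger bot rather than a local script.

```python
# Illustrative sketch only: a scripted daily scenario in the style described
# in the talk. All message text, names, and structure are hypothetical,
# not the study's actual scripts or platform code.

SCENARIO = [
    {"bot": "I want to tell you about something embarrassing that happened "
            "to me. Please don't make fun of me!"},
    {"bot": "I had a meeting with other chatbots at 9:00, but my "
            "installations took way too long and I was late."},
    {"bot": "I really beat myself up for it, but maybe you can help me. Has "
            "anything like that happened to you? How did you handle it?",
     "open_ended": True},  # invite the participant's own story (co-storytelling)
]

def run_scenario(scenario, get_user_reply):
    """Play the bot's turns in order; collect a free-text reply wherever a
    turn is marked open-ended."""
    transcript = []
    for turn in scenario:
        transcript.append(("vincent", turn["bot"]))
        if turn.get("open_ended"):
            transcript.append(("user", get_user_reply(turn["bot"])))
    return transcript

if __name__ == "__main__":
    log = run_scenario(SCENARIO, lambda prompt: "Yes, I once overslept too.")
    for speaker, text in log:
        print(f"{speaker}: {text}")
```

The design choice mirrored here is the one the talk emphasizes: the bot leads with its own failure story and closes the scenario with an open-ended prompt, inviting the participant's story rather than offering a fixed menu of replies.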
