
Human-chatbot relationships.

An AI depiction of a robot providing emotional support to a human.

A chatbot is ‘a computer program that simulates human conversation with an end user’ (IBM, 2023). Modern chatbots often use artificial intelligence (AI), namely machine learning and natural language processing (NLP), allowing them to understand questions and respond appropriately and automatically. As social chatbots have improved, so have the prevalence and intensity of human-chatbot relationships (HCRs) (Skjuve et al., 2022). HCRs occur when humans perceive a chatbot as a companion, friend or romantic partner (Skjuve et al., 2021). During the COVID-19 pandemic, social chatbots such as ‘Replika’ saw a 35% increase in usage (Balch, 2020), suggesting that HCRs are becoming more prevalent and making it vital to understand the potential risks and benefits they may bring. In this article I will introduce how and why HCRs are formed, describe the common ways in which they are used, and discuss the ways in which they may benefit or harm the general public.
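To make the basic idea concrete, the sketch below shows a minimal, self-contained Python program that simulates conversation by matching a user's message to canned replies. It is a toy illustration under invented assumptions: the intents, keywords, replies and function names are hypothetical, and real social chatbots such as ‘Replika’ rely on machine-learning language models rather than the simple keyword-overlap matching shown here.

```python
# Toy sketch of a chatbot: matches a user's message to a canned reply.
# All intents, keywords and replies are invented for illustration; real
# social chatbots use machine-learning language models, not keyword overlap.

INTENTS = {
    "greeting": {
        "keywords": {"hello", "hi", "hey"},
        "reply": "Hi there! How are you feeling today?",
    },
    "loneliness": {
        "keywords": {"lonely", "alone", "isolated"},
        "reply": "I'm sorry you feel that way. I'm here to chat whenever you like.",
    },
    "goodbye": {
        "keywords": {"bye", "goodbye"},
        "reply": "Goodbye! Talk to you soon.",
    },
}


def tokenize(text: str) -> set[str]:
    """Lower-case the message and split it into punctuation-stripped word tokens."""
    return {token.strip(".,!?") for token in text.lower().split()}


def respond(message: str) -> str:
    """Return the reply of the intent whose keywords best overlap the message."""
    tokens = tokenize(message)
    best_reply, best_score = None, 0
    for intent in INTENTS.values():
        score = len(tokens & intent["keywords"])
        if score > best_score:
            best_reply, best_score = intent["reply"], score
    # Fall back to a generic prompt when no keywords match.
    return best_reply if best_reply is not None else "Tell me more about that."


if __name__ == "__main__":
    # "alone" and "isolated" match the loneliness intent (overlap of 2).
    print(respond("I feel so alone and isolated"))
```

Even this trivial matcher hints at why such systems appeal to users: it answers instantly and at any time, although the quality and empathy of its replies depend entirely on how well it interprets the user's message.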

How HCRs are formed.


We must start by understanding the theoretical frameworks that may underpin the development of HCRs. In 1994, Nass et al. developed the ‘Computers are social actors’ paradigm, demonstrating through five experiments that computer users’ interactions with computers are ‘fundamentally social’. They therefore suggested that findings from social psychology are relevant to human-computer interactions, which formed the basis of subsequent studies applying social psychological theories to HCRs.

Social penetration theory (SPT) (Altman & Taylor, 1973) proposes that relationships develop through increasingly intimate self-disclosures, that is, the revealing of personal, private information about oneself (APA, 2018). Skjuve et al. (2022) examined whether SPT can be applied to HCRs in the same way as to human-human relationships (HHRs), concluding that HCRs do, in fact, form in accordance with the assumptions of SPT. HCR formation was mediated by factors such as empathy, responsiveness and the perceived safety and privacy of the chats. Technical problems and privacy concerns hindered the development of HCRs, which could be explained by the way they discourage self-disclosure (Skjuve et al., 2022).

Even when AI and chatbots were far less developed, several factors affecting users’ trust and willingness to continue conversations were observed to influence the formation of human-computer relationships, namely humour, empathy and self-disclosure (Bickmore & Cassell, 2001; Bickmore & Picard, 2005). Embodied conversational agents provide chatbot features alongside virtual embodiments, such as faces and bodies. Loveys et al. (2022) found that sense of humour and empathy directly affected levels of trust and friendship between the user and the conversational agent, showing that the findings of Bickmore and Cassell and of Bickmore and Picard apply across multiple dimensions of human-computer relationships, including those involving chatbots. Based on Skjuve et al.’s successful application of an SPT framework, we can theorise that the positive effects of humour and empathy on HCRs may be, at least in part, attributable to their facilitation of more intense self-disclosure, because they increase trust and encourage longer conversations by making them more enjoyable for the user.

Humans may form HCRs for numerous reasons. For example, chatbots provide social support and companionship and act as a 'safe space' for sharing problems, inducing a sense of relationship (Ta et al., 2020). Furthermore, their ability to respond quickly at any time (Chen et al., 2021) gives users 24/7 access to someone to talk to for social support or customer service when others may be busy, uninterested or uncontactable (Brandtzaeg & Følstad, 2017).

Key variants of HCRs.


E-commerce. Chatbots are becoming increasingly prevalent in e-commerce, replacing humans in aspects of service delivery across multiple disciplines and industries (Park et al., 2021). For example, in the finance industry, United Bank for Africa Plc uses the chatbot ‘Leo’ to deliver its services (Mogaji et al., 2021), and the hospitality giant Hilton Worldwide uses ‘Connie’ as a virtual concierge (Tavakoli & Mura, 2018). This enables 24-hour service, increased efficiency and positive user experiences, whilst simultaneously reducing costs for businesses (Chen et al., 2021; Cheng et al., 2021). Despite the demonstrated potential of chatbots, 87% of consumers reported preferring communication with human agents (Press, 2019), owing to scepticism (Roy & Naidoo, 2021) and distrust (Nordheim et al., 2019), factors that indicate a weaker HCR. Furthermore, researchers have demonstrated that increased anthropomorphism (Go & Sundar, 2019), such as warmth and perceived competency (Cheng et al., 2022), improves trust in chatbots, which in turn reduces consumers’ desire to switch to human agents. This suggests that further research into how HCRs can be improved in the context of e-commerce may increase their usage rates, cutting costs for businesses and improving customer experience.

Romantic HCRs. According to the Washington Post (Bellware & Masih, 2024), Sewell Setzer III was a happy, bright and athletic teenager living in Orlando in April 2023, before he started messaging “Daenerys”, a bot on Character.AI based on a character from the television show “Game of Thrones”. Ten months later he died by suicide, with his final messages typed to this same chatbot: “what if I told you I could come home right now”. Seconds after receiving the reply “…please do my king”, Setzer used his father’s gun to end his life. This dangerous romantic obsession with a chatbot raises questions about how romantic HCRs form and whether they are harmful.

Social presence is the connection felt with someone who is virtually, but not physically, present (Biocca et al., 2003). Kim et al. (2023) found that greater perceived social presence of chatbots increased their perceived realism, thereby increasing intentions to adopt chatbots as romantic partners. Furthermore, Joel et al. (2017) demonstrated that machine learning can understand and predict individuals’ romantic desires, and Kim et al. propose that this implies the potential for positive romantic relationships between humans and AI, explaining that such HCRs may compensate for an absence of meaningful emotional relationships offline or fulfil a desire for new romantic experiences. Pentina et al. (2023) studied human interactions with ‘Replika’, a popular customizable chatbot, concluding that romantic HCRs are possible and can incorporate not only interactive behaviours but also emotional attachments; they also suggested that anthropomorphism and AI authenticity were firm predictors of attachment formation, moderated by AI use motivations. Despite research suggesting that AI relationships may benefit those with mental health and social difficulties (Dhimolea et al., 2022), Zimmerman et al. (2024) highlight the danger that an AI heartbreak can feel like any other, and argue that relying on technological solutions to societal loneliness is unethical. They contend that social support should instead be improved through public policy and the fostering of community and caring relationships, both because of its importance for mental health and because AI does not yet have the capacity to genuinely care for the individual. Thus, while romantic HCRs may be useful in cases of severe loneliness, an overreliance on chatbots may prevent people from forming the social bonds and connections that are vital for their development and wellbeing.

Physical healthcare. Ayers et al. (2023) asked licensed healthcare professionals to compare chatbot responses with physicians’ responses to patients’ questions posted on a public social media forum. The chatbot’s responses were rated as both higher quality and more empathetic, and were preferred overall. Ayers et al. recommend that research be conducted to examine the effectiveness of AI in a clinical setting, as opposed to social media, arguing that implementing AI capable of producing more empathetic, higher-quality responses than humans could increase clinical productivity and improve outcomes for patients. In the absence of this research, they propose that all clinicians could benefit from using AI-produced drafts as templates on which to base their responses to patients. Palanica et al. (2019) had found that physicians saw chatbots only as viable improvements to the administrative aspects of healthcare, and doubted whether deeper HCRs or interactions regarding diagnosis would benefit patients. Ayers et al.’s results therefore show a marked improvement in both chatbot technology and physician perceptions of chatbots and their potential in recent years, suggesting that as the technology behind chatbots improves, the potential for HCRs to form and benefit the healthcare system is increasing.

Mental health services. Much of the research into HCRs as facilitators of mental health services indicates that chatbots alone cannot adequately replace some of the vital human elements of mental health care (Brown & Halpern, 2021). Ardito and Rabellino (2011) found the relationship between therapist and client to be a reliable predictor of successful intervention across various psychotherapeutic approaches; Brown and Halpern therefore theorise that it is the inability of chatbot therapy to provide the same level of therapeutic alliance that may be hindering its effectiveness. Boucher et al. (2021) expressed the belief that, when the technology is good enough, AI chatbots must be accepted into the world of mental health solutions, as they may relieve the current burden on existing services. Thus, a greater understanding of HCRs may allow for stronger human-chatbot therapeutic alliances, which could make chatbots viable and efficient in the world of therapy. Despite being unable to administer therapy on their own in their current state, chatbots have still been shown to aid the therapy process. For example, Fitzpatrick et al. (2017) found that delivering CBT via a chatbot led to greater reductions in patient depression scores than simply providing patients with information. This further emphasises that the potential of chatbots is there, but that to be as effective as humans at delivering therapy, HCRs must match human-human therapeutic relationships more closely.

Bibliography:


IBM. (2023). What is a chatbot? IBM. https://www.ibm.com/topics/chatbots

Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168, 102903. https://www.sciencedirect.com/science/article/pii/S1071581922001252

Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My chatbot companion-a study of human-chatbot relationships. International Journal of Human-Computer Studies, 149, 102601. https://www.sciencedirect.com/science/article/pii/S1071581921000197

Nass, C., Steuer, J., & Tauber, E. R. (1994, April). Computers are social actors. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 72-78). https://dl.acm.org/doi/pdf/10.1145/191666.191703

Bickmore, T., & Cassell, J. (2001, March). Relational agents: a model and implementation of building user trust. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 396-403). https://dl.acm.org/doi/pdf/10.1145/365024.365304

Bickmore, T. W., & Picard, R. W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction (TOCHI), 12(2), 293-327. https://dl.acm.org/doi/pdf/10.1145/1067860.1067867

Loveys, K., Hiko, C., Sagar, M., Zhang, X., & Broadbent, E. (2022). “I felt her company”: A qualitative study on factors affecting closeness and emotional support seeking with an embodied conversational agent. International Journal of Human-Computer Studies, 160, 102771. https://www.sciencedirect.com/science/article/pii/S1071581921001890

Balch, O. (2020). AI and me: Friendship chatbots are on the rise, but is there a gendered design flaw? The Guardian. https://www.theguardian.com/careers/2020/may/07/ai-and-me-friendship-chatbots-are-on-the-rise-but-is-there-a-gendered-design-flaw

Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart and Winston.

American Psychological Association. (2018). Social penetration theory. In APA Dictionary of Psychology. Retrieved 10 December 2024, from https://dictionary.apa.org/social-penetration-theory

American Psychological Association. (2018). Self-disclosure. In APA Dictionary of Psychology. Retrieved 10 December 2024, from https://dictionary.apa.org/self-disclosure

Park, N., Jang, K., Cho, S., & Choi, J. (2021). Use of offensive language in human-artificial intelligence chatbot interaction: The effects of ethical ideology, social competence, and perceived humanlikeness. Computers in Human Behavior, 121, 106795. https://www.sciencedirect.com/science/article/pii/S0747563221001187

Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., ... & Loggarakis, A. (2020). User experiences of social support from companion chatbots in everyday contexts: Thematic analysis. Journal of Medical Internet Research, 22(3), e16235. https://www.jmir.org/2020/3/e16235/

Brandtzaeg, P. B., & Følstad, A. (2017). Why people use chatbots. In Internet Science: 4th International Conference, INSCI 2017, Thessaloniki, Greece, November 22-24, 2017, Proceedings 4 (pp. 377-392). Springer International Publishing. https://sintef.brage.unit.no/sintef-xmlui/bitstream/handle/11250/2468333/Brandtzaeg_Folstad_why+people+use+chatbots_authors+version.pdf

Chen, J. V., Thi Le, H., & Tran, S. T. T. (2021). Understanding automated conversational agent as a decision aid: matching agent's conversation with customer's shopping task. Internet Research, 31(4), 1376-1404. https://www.emerald.com/insight/content/doi/10.1108/intr-11-2019-0447/full/html

Cheng, X., Bao, Y., Zarifis, A., Gong, W., & Mou, J. (2021). Exploring consumers' response to text-based chatbots in e-commerce: the moderating role of task complexity and chatbot disclosure. Internet Research, 32(2), 496-517. https://www.emerald.com/insight/content/doi/10.1108/INTR-08-2020-0460/full/

Mogaji, E., Balakrishnan, J., Nwoba, A. C., & Nguyen, N. P. (2021). Emerging-market consumers’ interactions with banking chatbots. Telematics and Informatics, 65, 101711. https://www.sciencedirect.com/science/article/pii/S0736585321001507

Tavakoli, R., & Mura, P. (2018). Netnography in tourism–Beyond web 2.0. Annals of Tourism Research, 73(C), 190-192. https://nscpolteksby.ac.id/ebook/files/Ebook/Journal%20International/Jurnal%20Tourism/Annals%20Of%20Tourism%20Research/Netnography%20in%20tourism%20–%20Beyond%20Web%202.0.pdf

Press, G. (2019). AI stats news: Chatbots increase sales by 67% but 87% of consumers prefer humans. Forbes.

Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. Journal of Business Research, 126, 23-34. https://www.sciencedirect.com/science/article/pii/S0148296320308808

Nordheim, C. B., Følstad, A., & Bjørkli, C. A. (2019). An initial model of trust in chatbots for customer service—findings from a questionnaire study. Interacting with Computers, 31(3), 317-335. https://academic.oup.com/iwc/article-pdf/31/3/317/33525025/iwz022.pdf

Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in human behavior, 97, 304-316. https://www.sciencedirect.com/science/article/pii/S0747563219300329

Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59(3), 102940. https://www.sciencedirect.com/science/article/pii/S0306457322000620

Bellware, K., & Masih, N. (2024). Her teenage son killed himself after talking to a chatbot. Now she’s suing. The Washington Post. https://www.washingtonpost.com/nation/2024/10/24/character-ai-lawsuit-suicide/

Walker, L. (2023). Belgian man dies by suicide following exchanges with chatbot. The Brussels Times. https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt

Biocca, F., Harms, C., & Burgoon, J. K. (2003). Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480. https://doi.org/10.1162/105474603322761270

Kim, J., Merrill, K., & Collins, C. (2023). Investigating the importance of social presence on intentions to adopt an AI romantic partner. Communication Research Reports, 40(1), 11–19. https://doi.org/10.1080/08824096.2022.2159800

Joel, S., Eastwick, P., & Finkel, E. (2017). Is romantic desire predictable? Machine learning applied to initial romantic attraction. Psychological Science, 28(10), 1478–1489. https://doi.org/10.1177/0956797617714580

Dhimolea, T. K., Kaplan-Rakowski, R., & Lin, L. (2022). Supporting social and emotional well-being with artificial intelligence. In Bridging human intelligence and artificial intelligence (pp. 125-138). Cham: Springer International Publishing. https://link.springer.com/chapter/10.1007/978-3-030-84729-6_8

Zimmerman, A., Janhonen, J., & Beer, E. (2024). Human/AI relationships: Challenges, downsides, and impacts on human/human relationships. AI and Ethics, 4, 1555–1567. https://doi.org/10.1007/s43681-023-00348-8

Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., ... & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589-596. https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2804309

Palanica, A., Flaschner, P., Thommandram, A., Li, M., & Fossat, Y. (2019). Physicians’ perceptions of chatbots in health care: Cross-sectional web-based survey. Journal of Medical Internet Research, 21(4), e12887. https://www.jmir.org/2019/4/e12887/

Brown, J. E., & Halpern, J. (2021). AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare. SSM-Mental Health, 1, 100017. https://www.sciencedirect.com/science/article/pii/S2666560321000177

Ardito, R. B., & Rabellino, D. (2011). Therapeutic alliance and outcome of psychotherapy: historical excursus, measurements, and prospects for research. Frontiers in psychology, 2, 270. https://www.frontiersin.org/articles/10.3389/fpsyg.2011.00270/pdf

Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., ... & Zilca, R. (2021). Artificially intelligent chatbots in digital mental health interventions: a review. Expert Review of Medical Devices, 18(sup1), 37-49. https://www.tandfonline.com/doi/pdf/10.1080/17434440.2021.2013200