The World Health Organization (WHO) states that depression is a common illness worldwide: an estimated 264 million people suffer from depression, and close to 800,000 people die by suicide every year. To help address this growing burden, the tech world has turned to chatbots as therapy bots, and artificial intelligence companies have developed chatbots that can be integrated into mobile applications.
Over the last decade, there has been an explosion of digital interventions that aim to either supplement or replace face-to-face mental health services. More recently, several automated conversational agents have become available that respond to users in ways that mirror real-life interaction. What social and ethical concerns arise from these advances?
In this article, we discuss, from the perspective of healthcare professional ethics, the strengths and limitations of using chatbots in mental health support. We also outline what we consider to be minimum ethical standards for these platforms, including issues surrounding privacy and confidentiality, and review the ELIZA chatbot, an early program whose responses led many users to believe they were conversing with a real human being.