Can a chatbot or Freud app fully replace a therapist?
As of today, no. Advances in artificial intelligence (AI) seem likely to shape the future of psychotherapy profoundly, with systems that administer evidence-based, effective treatments to clients. But while AI technologies hold real potential to expand access to mental health care in innovative ways, they come with risks and drawbacks of their own.
Artificial intelligence therapy systems
There have already been major breakthroughs that point to an interesting future for therapy. It began with ELIZA in the 1960s, a simple computer programme that mimicked non-directive conversation and elicited surprisingly personal disclosures from the staff who used it. Today there is the popular Woebot, a chatbot delivered through Facebook Messenger that uses the principles of cognitive behavioural therapy (CBT) to respond to maladaptive thought patterns, engaging in more than two million conversations a week.
Paro, a robotic stuffed harp seal, was created for the elderly and for people in care homes or those who cannot responsibly look after a pet. Although non-living, Paro provides comfort similar to that of a pet by responding to touch and voice, activating the parasympathetic nervous system and thereby reducing stress.
Ellie, created by USC researchers, helps people with depression and veterans suffering from PTSD: a webcam and microphone enable her to analyse emotional cues, including speech rate and pauses, and provide feedback. Similarly, Karim, a psychotherapy bot, assists aid workers and refugee communities in the Middle East.
Therapeutic video games and mobile applications are also on the rise, with benefits such as improving self-confidence (e.g. Mindbloom), increasing adherence to treatment, reducing the stigma surrounding mental health and enhancing social skills: for example, Second Life, an online virtual world used by children with autism, and Sosh, a mobile app for individuals with Asperger's Syndrome.
A bright future?
AI therapy can, in fact, prove to be extremely favourable.
Patients often lack the motivation to follow up with their therapists or to stick to the plans and techniques advised for their improvement. Technologies now in development aim to provide tailored mental health treatment that helps clients stay committed to therapy.
AI therapy can also spot suicidal patterns and thoughts that humans may miss, by analysing patients' language and examining curated databases of clinical knowledge. This is a significant advantage: it can help prevent self-harm and reduce the number of deaths caused by suicide.
Looking further ahead, implanted AI technologies may one day repair general cognitive abilities or restore function to areas of the brain damaged by strokes or injuries.
A self-help paradigm that can be personalised to an individual's needs, that can express the emotions central to counselling, such as empathy, and that can recognise and respond to a patient's emotions while taking cultural differences into account does indeed sound almost too good. However, it has threatening downsides.
Chatbots and AI therapy systems are not protected by medical data privacy and security law. The early ELIZA programme was shut down when its creator came to see it as a threat, after outraged users discovered that all their conversations had been recorded and were accessible. Similarly, although Woebot keeps identities anonymous, Facebook still owns the logs of every conversation Woebot has with its users. Confidentiality and privacy become blurred with the use of autonomous AI systems.
At the same time, certain AI therapy systems can be very expensive: Paro costs almost $7,000, which defeats the purpose of AI therapy as a means of increasing access to mental health care. Though less certain, AI therapy could also have economic implications for the field of psychology, leading to job losses in a knowledge-based profession if systems develop to the point where they can provide a full range of mental health services.
Therapists frequently encounter ethical dilemmas. To handle these, AI systems would be expected to make value judgements involving complex reasoning, and technology, at the end of the day, is always vulnerable to error. A further advance in AI that allows systems to develop their own values and beliefs is possible but risky, as those values may conflict with the creator's own. Weizenbaum, creator of ELIZA, said, "Computers should not be allowed to make important decisions because computers lack the human qualities of compassion and wisdom."
An important aspect of therapy is the 'human element' that builds the therapeutic bond between client and therapist, something futurists believe AI systems will lack. Along with such genuine connection, the consequential change that therapy brings to a patient's life emerges from the fallibility and tension involved in the process. It is doubtful that AI systems will ever replicate these distinctly human capacities, however far their sensory abilities advance.
Lastly, an unlikely but still possible scenario: a positive transference toward an AI system would be problematic and baffling to resolve, as depicted in the film Her.
In any case, therapy may change radically. It is safe to anticipate that face-to-face counselling will not be practised everywhere, at all times, as more convenient therapy systems supplant it. 'Surrogate counsellors' may emerge, offering protocol-packaged treatments for common mental disorders (e.g. depression, anxiety) that clients can access directly. Practice settings are bound to change, and payment transactions may also become mostly electronic.
Advanced AI technology is already found in almost every sector today, including mental health care. In many ways AI seems ideal for the future of psychotherapy, opening doors to possibilities unimagined in the past. The technological singularity may be near, and that is something to be thrilled about, but it also raises a series of new professional, legal and ethical complications. Given how technology-dependent we are becoming each day, it is easy to assume that the paradigms above will become reality. Whether they will ultimately affect us favourably, however, is difficult to predict, and it is imperative to build a framework aimed squarely at improving healthcare and quality of life.
Shruti Venkatesh is the National Co-Lead (Mental Health) at One Future Collective. This piece was originally published here.