Without question, depression and other psychological suffering are some of the biggest challenges of our time.
According to a 2017 report from the World Health Organization, depression is the leading cause of disability worldwide, affecting 4.4% of the global population. The same report estimates that 3.6% of the global population is affected by anxiety disorders. Not only are these conditions highly prevalent, but they have also increased substantially over the last ten years.
Depression and anxiety disorders are highly multi-dimensional in nature, arising from complex interactions among biology, culture, and life experience. As a clinical psychologist and researcher in clinical psychology, I have worked for more than a decade to understand these conditions. I have approached the problem from many angles and have been involved in several attempts at creating new treatments for these conditions.
My own conclusion regarding these issues is that it all boils down to the concept of the Self. A cardinal symptom of depression is self-hatred: talking to oneself in a way one could never talk to someone else, labeling oneself with concepts associated with the most severe forms of contempt, such as "I am a horrible person". We constantly relate ourselves to others in every imaginable way, such as "I am much less valuable than others," or "I am not worthy of love".
Similarly, a very common theme among people struggling with anxiety is fear related to the perception of being judged by others, such as “Other people think I’m pathetic,” or “I need to perform for other people to like me”.
At the root of this is the Self. We identify with the labels we use to describe ourselves, and we derive an endless stream of self-labels through this endless relating to others.
This process of forming a conceptual self, which I will call selfing, is what I see as the very root of psychological suffering.
This idea is not new. As illustrated by the Buddhist scripture quote above, a fundamental thesis in Buddhism — which is over 2,500 years old — is that there is no such thing as an unchanging, permanent Self in living beings, and that clinging to a belief in a Self is the source of all suffering.
Importantly, selfing is something we do, a constant ongoing relating of the conceptual self to other concepts. If selfing is a behavior, then it can also be changed.
Over the last 30 years, behavioral scientists have done substantial work toward an experimental understanding of selfing as relating. The contemporary behavioral-science theory Relational Frame Theory argues that language and cognition, including selfing, can be studied as acts of complex relating.
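To make this concrete, here is a minimal sketch of derived relational responding, the core mechanism Relational Frame Theory describes: train a few relations directly, and many more follow without direct training. The function name and the example stimuli are my own hypothetical illustrations, not part of any RFT software.

```python
def derive_same_relations(trained):
    """Given directly trained 'same-as' pairs, derive the full set of
    relations via mutual entailment (symmetry) and combinatorial
    entailment (transitivity)."""
    derived = set(trained)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(derived):
            # Mutual entailment: "A same-as B" entails "B same-as A".
            if (b, a) not in derived:
                derived.add((b, a))
                changed = True
            # Combinatorial entailment: "A~B" and "B~C" entail "A~C".
            for (c, d) in list(derived):
                if b == c and a != d and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

# Train two relations; four more are derived without direct training.
trained = {("I", "worthless"), ("worthless", "unlovable")}
print(sorted(derive_same_relations(trained) - trained))
```

Notice what this toy example illustrates clinically: teaching "I am worthless" and "worthless means unlovable" yields "I am unlovable" for free, with no direct learning event required.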
While this science has made impressive progress, there is a limit to what can be done in experiments with human beings. Even if behavioral scientists had unlimited time and resources, which they clearly don't, some research could never be conducted. Some complex forms of selfing and relating are impossible to arrange in an experiment; other experiments would be deemed unethical. But above all, as its methods work today, the science of relating cannot scale to the level of progress the world urgently needs.
This is where artificial intelligence comes into the equation.
During the last decade, we have seen an impressive increase in applications of AI, very often in the medical domain. Can AI help behavioral science to scale?
Yes and no. While today's technology has a lot to offer for simulating the mind, several challenges need to be addressed.
One problem with traditional AI is transferring what has been learned in one domain to another. This is highly relevant for selfing and relating, as the whole point of Relational Frame Theory is to understand how we relate ourselves to things we have never experienced, for instance, "I want to be like that person, because then I would be much happier". One way to put it is that many clinical problems, and much negative thinking about the self, can be understood as a constant, involuntary transfer of knowledge between domains. And transfer is only one of the problems facing AI and its potential applications.
Some very interesting things are going on in the field of Artificial General Intelligence (AGI), which is both an old and a new field of research. Creating "thinking machines", in the sense of general-purpose intelligent systems that can carry out a range of tasks across different domains, was the original goal of AI. However, most AI today consists of solutions to highly specific problems in narrow domains, such as image recognition and game playing, where the algorithms typically vary from task to task. While this form of problem-solving is very effective and will undoubtedly help human society evolve, it is far from AGI.
But during the last 10 years, there has been substantial progress in the field of AGI. One of the most important steps toward AGI as a science is to establish a theory of general intelligence itself. At a sufficiently abstract level of description, intelligence becomes a phenomenon that can be studied in both humans and machines.
A theory of general intelligence should be able to account for what we have learned from human psychology, but be formal enough to possibly allow implementation in a computer system.
Importantly though, AI in the form of problem-solving in specific domains does not provide a theory of intelligence that can be used to guide research and practice in the field of AGI.
While many interesting approaches exist in the AGI field, one model I see as particularly promising is Pei Wang's Non-Axiomatic Reasoning System (NARS). It implements a "non-axiomatic" logic that formalizes Wang's theory of general intelligence.
In essence, NARS takes a logical approach to intelligence, using formal inference rules to derive new knowledge from existing knowledge and experience. NARS describes all of this in a formal language of relations. For a NARS system, the meaning of a concept is how that system has experienced the concept in relation to other concepts. Hence, all "thinking" in NARS can be seen as acts of relating.
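A small sketch of what such an inference rule looks like: in the Non-Axiomatic Logic underlying NARS, a statement like "S --> P" ("S is a kind of P") carries a truth value with a frequency f and a confidence c, both in [0, 1]. The truth function below follows the published NAL deduction rule; the example statements are my own hypothetical illustration.

```python
def deduction(f1, c1, f2, c2):
    """From S-->M with truth (f1, c1) and M-->P with truth (f2, c2),
    derive S-->P using the NAL deduction truth function."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return f, c

# "robin --> bird" and "bird --> animal", each held with some evidence:
f, c = deduction(1.0, 0.9, 1.0, 0.9)
print(f"robin --> animal <{f:.2f}, {c:.2f}>")
```

The key point is that the derived conclusion is held with lower confidence than either premise: knowledge in NARS is always tentative, grounded in the system's finite experience rather than in axioms.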
While NARS is very much in development, its design opens up a theoretically coherent understanding of thinking, feeling, and consciousness in an AGI system. It also provides an account of the Self as a concept, part of an endless, ongoing act of relating. In summary, to study selfing as an ongoing process in machines, we will very likely need AGI, and NARS is a very interesting candidate, with its strong theoretical foundation of intelligence as an experience-grounded inference process.
Another key aspect of contemporary artificial intelligence is the role of the body. Is it possible to create human-level AI without embodiment in the physical world?
Most scientists today, including myself, would answer no: human-level AI requires physical embodiment.
A longer answer is that it depends on what kind of AI you're trying to build. NARS's approach to AGI, in which intelligence (including selfing) is an ongoing act in context and meaning is grounded in experience, assumes the existence of a body.
Importantly though, according to many AGI researchers, the body doesn’t need to be a human body.
The sensing enabled by a body becomes part of a system's experience, regardless of which "sense organs" the body provides.
In my own research, I have spent the last two years working with a highly sophisticated robot body, the Furhat robot, created by Furhat Robotics in Stockholm.
My goal is to create a simulation of psychological suffering such as depression and anxiety disorders, taking place in a robot body.
Robert Johansson, PhD, is an associate professor of psychology at Stockholm University, and a researcher in computer science at Linköping University. He is also a licensed psychologist with a special interest in emotion-focused psychotherapy. In his earlier work, he and his colleagues developed a model of affect-focused psychotherapy that could be delivered as guided self-help over the Internet; its effectiveness has been demonstrated in several clinical trials. Currently, he works in the field of artificial intelligence, where he studies clinically relevant psychological processes in machines. He is passionate about abstract models of the human mind, Lisp programming, psychodynamic psychotherapy, and meditation in the Samatha-Vipassana tradition.