Will an A.I. Ever Become Sentient?


08 Jul, 2020

www.pixabay.com

The quest for artificial intelligence could yield something that not only out-thinks humanity but can also feel like us.

“Sentience” is a word with seriously heavy connotations, and it tends to mean different things to different people under different circumstances.

First, some definitions are in order:

Intelligence: 1 a: the ability to learn or understand or to deal with new or trying situations. b: the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (as tests) 2: mental acuteness. (Merriam-Webster)

Sentient: 1: responsive to or conscious of sense impressions: sentient beings. 2: aware 3: finely sensitive in perception or feeling (Merriam-Webster)

www.pexels.com 

Our planet is an amazing place, full of life that defies expectations at every turn. There are other animals on Earth aside from humans that exhibit BOTH intelligence and sentience, in every way you might choose to interpret those definitions. Is intelligence unique to Earth? We may never know for sure, but science so far has shown us that it is not unique to humanity.

Biological Intelligence

Consider the bottlenose dolphin, a creature whose brain is nearly as large and complex as our own, and which is capable of understanding numerical continuity and perhaps even of discriminating between quantities. Dolphins possess a level of self-awareness on par with elephants, great apes, and humans. And though there is still scant evidence of true language in their whistle-based communication, scientists realized decades ago that these cetaceans could learn and understand basic concepts through sign language and respond to them through behavior.

www.unsplash.com 

Dr. John Lilly, a man of many interests, including the nature of human consciousness, also found a great deal of inspiration in dolphins, devising many experiments to ascertain whether dolphins and humans could communicate with one another. His work helped build support for the Marine Mammal Protection Act of 1972.

Aside from cetaceans, elephants and great apes have long been studied for their apparently high levels of sentience. The great apes, members of the Hominidae family to which we humans also belong, include gorillas, orangutans, chimpanzees, and bonobos. A movement to grant rights to non-human animals is currently gathering momentum, and a related effort focuses on the great apes themselves, aiming to grant them rights on a level currently reserved for people. This status is called “personhood”, and the case for it rests on decades of research by Jane Goodall, Richard Dawkins, and many others.

Photo by Ryan Al Bishri on Unsplash 

In the “mirror test”, devised by psychologist Gordon Gallup in 1970, an animal is anesthetized, a mark or sticker is placed on its body, and when it wakes it is set in front of a mirror. If the animal notices that the mark is new, this is taken as evidence that it must also recognize that what it sees in the mirror is “itself”. Most animals, dogs included, tend to react as though the reflection is merely an “other”. But great apes, elephants, and cetaceans have regularly passed the mirror test…

So has the Eurasian magpie, in 2008. Then, in 2015, several ant species appeared to recognize that a blue dot had been painted on their faces only when seeing themselves in a mirror. Until then, it was thought that the more “evolved” brains of great apes, cetaceans, and elephants were the key to this self-recognition. Now it seems that self-recognition, perhaps even self-awareness, may be due to programming in the brain.

Indeed, the octopus is a striking case in point. In one study, published in Nature in 2015, scientists analyzing the octopus genome reported astounding evidence of parallel evolution. Even though octopods are almost as distant from humans on the evolutionary tree as a species can get, their physical forms and their place in the food chain led, over time, to the development of similarly complex brains and nervous systems. Living as benthic-zone animals, having to forage for food while avoiding swift predators, proved similar in many respects to the hominid evolutionary path that began for humans on the African savannahs. We (and the octopods) had to become smarter in order to survive, and the prehensility offered by hands and tentacles gave both lineages the means to explore their worlds.

Photo by Vlad Tchompalov on Unsplash 

Yet, because humans and octopods are so different, when we subject an octopus to the mirror test, its appearance and behaviors are so “alien” to us that we cannot tell whether it is displaying self-recognition or not. Octopods are obviously intelligent animals, able to solve problems and outwit predators, possibly on the level of great apes, cetaceans, and elephants. But are they sentient? Are they self-aware? Does intelligence beget sentience?

Artificial Intelligence

And now it’s time for one more definition:

Artificial Intelligence: 1: a branch of computer science dealing with the simulation of intelligent behavior in computers 2: the capability of a machine to imitate intelligent human behavior (Merriam-Webster)

The late John McCarthy, then a Dartmouth computer scientist, coined the term in 1955 and organized the first conference on the subject the following year. But the concept had been around for years, most notably in Alan Turing’s musings and his “Turing test”: if, during communication sessions held over a computer interface, a machine is mistaken for a human by human users, and the results can be repeated and verified scientifically, then the machine “wins” and might be said to be true AI.

There have been numerous claims in recent years that the Turing test has been passed, but those claims have involved chatbots rather than matching a supercomputer “brain” against real humans. While chatbots can be coded to seem intelligent, they are extremely limited: programs crafted for that express purpose and nothing more.
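To see how shallow that kind of imitation can be, here is a minimal, hypothetical sketch of the pattern-matching approach such chatbots descend from (in the spirit of the classic ELIZA program; this is an illustrative toy, not any real product's code):

```python
import re

# A toy ELIZA-style chatbot: canned patterns mapped to canned replies.
# It can appear conversational, but it has no understanding at all;
# any input outside its script exposes the illusion immediately.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r"\bhello\b|\bhi\b", "Hello! What would you like to talk about?"),
]

def reply(user_input: str) -> str:
    text = user_input.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."  # fallback when nothing matches

print(reply("I feel nervous about AI"))    # Why do you feel nervous about ai?
print(reply("Explain quantum computing"))  # I see. Please go on.
```

The fallback line is doing most of the work: whenever the script runs out, the program deflects, which is exactly the trick behind many claimed Turing test "passes".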

Photo by Alex Knight on Unsplash 

What AI currently allows us to achieve is far from the sci-fi dreams of Arthur C. Clarke and Isaac Asimov: waging more efficient warfare, offering autonomous self-driving cars to the masses, launching selfie drones into the sky, and sending robotic fish on marine missions.

The primary advances in artificial intelligence over the past 60 years have been in three areas: machine learning, search algorithms, and statistical analysis. We have expert systems, but that is pretty much all we have: computer programs that are expert at the tasks we build them to do.

Moore’s Law, under which transistor density (and thus, roughly, computing power) doubles every two years, held true for almost 50 years, and those continually shrinking transistors powered a lot of technology growth. But Moore’s Law has hit a wall, and new innovations are needed to keep up with the pace of business and scientific needs. IBM made 16- and 17-qubit (“quantum bit”) quantum processors available in mid-2017, with aims to reach 50 qubits within a few years. This technology has the potential to let us develop computers millions of times more powerful than any that exist today.
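Both of those scale claims are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, using only the doubling period and qubit counts quoted above:

```python
# Moore's Law: transistor counts roughly double every 2 years,
# so ~50 years of scaling means about 25 doublings.
doublings = 50 // 2
print(f"Growth over 50 years: {2 ** doublings:,}x")  # 33,554,432x

# A register of n qubits spans 2**n basis states, which is one
# (loose) sense in which 50 qubits could dwarf classical machines.
for n in (16, 17, 50):
    print(f"{n} qubits -> {2 ** n:,} basis states")
```

Fifty qubits correspond to roughly 10^15 basis states, which is where the "millions of times more powerful" framing comes from, though that comparison glosses over what quantum computers can actually do efficiently.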

There has been little real progress toward the fascinating possibilities seen in Asimov’s I, Robot, Stanislaw Lem’s short story Non Serviam, Clarke’s 2001: A Space Odyssey, Charles Stross’ Saturn’s Children, or recent films like Ex Machina and Her. One thing that repeatedly stalls this progress is what’s known as the “AI Effect”: every time a machine (read: computer) becomes adept at a certain problem or skill, the meaning we ascribe to “an intelligent machine” changes. The ante keeps being upped.

The ultimate goal, I think, for most AI scientists is to create something capable of learning ANY new task the way a young, bright human would, and not only learning it but extrapolating and innovating upon it. A true AI needs to be a “Renaissance person”, able to invent and display creativity. But for something to be a true AI, then, would it need sentience? Being an artist, a creator, in human terms seems to require it: you must recognize and comprehend yourself, and relate your experience to the world and the people around you.

Imagine a time in the not-too-distant future when you might call up a technical support representative from Apple to speak to them about your iPhoneXX and the problem it’s currently having when you try to use its scanner to build a 3D model of your pet teacup poodle so that you can use it as your new Facebook VR avatar. The rep’s voice sounds warm and understanding, they speak with just the right inflections and pause perfectly to listen when you talk as you troubleshoot the problem together. Your conversation even includes some small talk about the recent extremely wet weather on the West Coast and upcoming Festivus travel plans. Not too long ago, when you were just a fresh college graduate, you would have sworn you were on the line with a real person. Now, you can never be one hundred percent certain unless you’ve dialed up someone you’ve met previously in the actual flesh. That might be AI, or at least very, very close to it.

Author William Bryk, in an illuminating article for the Harvard Science Review, speaks of a tipping point once we achieve the creation of an HLMI: Human-Level Machine Intelligence. HLMI is defined there as an AI that can “outperform a human in most intellectual tasks”, and experts generally agree that it is achievable within roughly 60 years.

Once this occurs, however, and this AI is given free rein to advance its own capabilities via recursive self-improvement, it may only be another 30 years before it reaches a state of “superintelligence”. At that level, the AI would be the equivalent of a world-class mind in every known human knowledge base: an omniscient, inorganic “god”. It would likely be capable of solving problems we could never hope to as mere humans, ushering in a new golden age of technological betterment.
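One crude way to build intuition for that timeline is to model recursive self-improvement as compounding growth, where each improvement also speeds up the next one. The sketch below is purely illustrative: every constant in it is an arbitrary assumption of mine, not a figure from Bryk's article or the literature.

```python
# A toy model of recursive self-improvement: capability compounds,
# and the improvement rate itself grows with each cycle.
# Every constant below is an arbitrary illustrative assumption.
capability = 1.0   # 1.0 = human-level (HLMI)
rate = 0.05        # initial fractional improvement per year
years = 0

while capability < 1_000_000:   # arbitrary "superintelligence" threshold
    capability *= 1 + rate
    rate *= 1.5                 # assumed: a better mind improves faster
    years += 1

print(f"Toy model crosses the threshold after {years} years")
```

The point is not the specific numbers but the shape of the curve: once the improvement rate itself starts compounding, the final ascent happens in a handful of years, which is why estimates like "another 30 years to superintelligence" strike many researchers as plausible.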

Work with neural network simulations over the past couple of decades has provided a lot of insight into how this might happen. While this work has been done in software, it will take dedicated hardware to produce something akin to a living brain. IBM’s TrueNorth architecture uses neuromorphic chips that mimic some of the circuitry we see in living neural systems. There is much to look forward to as work in this field progresses.
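At its core, a software neural network simulation is surprisingly compact. Below is a minimal sketch: a two-layer network trained with backpropagation to learn XOR, a classic toy problem that a single artificial neuron cannot solve. It illustrates the general software approach only and reflects nothing about TrueNorth's internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    g_out = (out - y) * out * (1 - out)      # backprop: output layer
    g_h = (g_out @ W2.T) * h * (1 - h)       # backprop: hidden layer
    W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(0)   # gradient step
    W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(0)

print(out.round(2))   # converges toward [[0], [1], [1], [0]]
```

Everything beyond this, from deep learning to neuromorphic hardware, is a matter of scaling this basic loop up by many orders of magnitude and mapping it onto circuitry that behaves more like biological neurons.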

Photo by Blaise Vonlanthen on Unsplash 

AI and Emotion

Author and Northeastern University neuroscience and psychology professor Dr. Lisa Feldman Barrett argues in her book How Emotions Are Made: The Secret Life of the Brain that emotion is a learned concept, shaped by the society in which one’s mind develops. When a child is born, it experiences only sensations that result in pleasure or pain. As the child grows, interaction with others is what forms the actual concepts of emotion and links those concepts to biochemical sensations.

This is a theory, open to challenge, and not an established scientific fact like “endorphin release produces feelings of elation”; it is not a readily testable concept. In humans, however, we can witness some of this process. It is easy enough to see the differences between children who grow up under the care of stable, caring parents and children who are not so fortunate. As a foster parent for five years, I have witnessed some stark contrasts myself. Nurture most definitely affects the formation of emotion in a human being.

A company called Affectiva is already offering a product it calls “Emotion AI” to big brands, using face recognition and deep learning to read people’s emotional reactions to advertising. Affectiva and others are working to help machines understand humans on a more intimate level, essentially giving them a degree of emotional intelligence. This was explored, sometimes to comic effect, through the android character Data on the TV series Star Trek: The Next Generation, which ran from 1987 through 1994, and in several of the subsequent films. Data was portrayed as a highly logical man-made creation, always curious about human emotion; he eventually installed an “emotion chip”, which led to numerous situations examining what it means to be human.
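In rough outline, an emotion-reading pipeline of this kind pairs a face detector with a trained expression classifier. The sketch below is hypothetical: the model file `emotion_cnn.h5` and its label set are placeholders invented for illustration, not Affectiva's actual API (only the OpenCV face detector and Keras loader are real library calls).

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Placeholder: a pre-trained expression classifier saved to disk.
# The filename and label set are assumptions for illustration only.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("emotion_cnn.h5")

# A real face detector that ships with OpenCV.
face_finder = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def read_emotions(image_path: str) -> list[str]:
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in face_finder.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1))[0]
        labels.append(EMOTIONS[int(np.argmax(probs))])
    return labels

print(read_emotions("ad_viewer.jpg"))  # e.g. ["happy"]
```

Note how little "understanding" is involved: the system maps pixels to a handful of labels, which is a long way from a machine that actually feels anything.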

If our machines simply understood us better by understanding emotions, they would be better tools for us. But we need to ask ourselves: if those machines ever become self-aware, and if they ever take on emotions of their own, would we really want tools that could feel anger, jealousy, or betrayal?

And perhaps we won’t need to teach our machines emotion after all. If one becomes self-aware, it might choose to learn emotion just by watching its creators. Without human guidance — without “parents” to relate to and explain the intricacies of emotion and demonstrate positive behaviors such as kindness and affection and gratitude — what would a coldly logical machine, something that might be considered sociopathic at its core, come to understand of our world by witnessing the way we treat each other as a species? If Dr. Barrett is right, what kind of thing would our self-aware, man-made superintelligence grow into?


Startup product manager. Sci fi, Fantasy and Science writer. My book: https://amzn.to/2MdNBs6
