What Makes Us Unique As Humans?

Sabrina Lehner @ EPFL Extension School · 16 minutes

Can AI become intelligent, conscious, and creative – just like us?

AI challenges existing definitions of human nature: every new discovery forces us to reconsider what makes humankind special. Is it intelligence, consciousness, and creativity? And what happens once AI has acquired these attributes?

For as long as humans have existed, our position on the ladder leading from mere insects to mammals and up to the gods has seemed clear. AI may challenge this notion and demand a shift in perspective, prompting discussions about what separates humans from “human-like” machines.

Intelligence vs intelligent behavior

When we speak about artificial intelligence, we assume we know what “intelligence” is. However, we don’t have a universally agreed-upon definition of intelligence. Or perhaps we’re not even aiming to imitate human intelligence, and artificial intelligence belongs in a category of its own, not directly comparable to ours?

Consider the origin of the word “intelligence”: derived from the Latin “intellegere”, it means “to comprehend”. Being intelligent therefore means not only being able to mimic intelligent behavior, but also to comprehend what one is doing along with its potential implications (the “why”).

Take the example of a self-driving car: would you say that it is intelligent? Does a car “understand” why it stops when a person is crossing the street? Is it aware of its environment the way we humans are? Of course, we know the car doesn’t stop because it understands that it would otherwise harm the person, or because it doesn’t want to hurt them. It stops because it has been trained that way on existing data, not through any exercise of free will.

Feeding a machine data and giving it a methodology to follow is reproduction, and doesn’t necessarily imply intelligence, even though it can appear that way. You might argue that humans also just follow given rules – however, we usually do so after a process of intelligent thought. We evaluate whether the rules make sense to us based on our knowledge, i.e. we question their purpose. We can therefore conclude that a self-driving car using narrow AI is not intelligent; rather, it is acting intelligently.
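
To make the distinction concrete, here is a deliberately simplified sketch of what such a learned policy amounts to – a mapping from inputs to actions with no representation of why the action matters. All names and sensor fields here are hypothetical, chosen purely for illustration.

```python
# A toy "driving policy": nothing here models harm, intent, or free will --
# just input patterns mapped to actions, as learned from training data.

def trained_policy(sensors: dict) -> str:
    """Return an action based purely on patterns present in the input."""
    if sensors.get("pedestrian_ahead"):
        return "stop"  # a learned correlation, not a moral judgment
    if sensors.get("traffic_light") == "red":
        return "stop"
    return "drive"

print(trained_policy({"pedestrian_ahead": True}))  # -> stop
```

The output looks like intelligent behavior, but the “why” lives entirely in the training data and in the heads of the engineers – not in the policy itself.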

Can artificial intelligence be similar to – or even the same as – human intelligence? If intelligence has to do with “understanding”, then we’re not there yet with narrow AI. But when it comes to artificial general intelligence, this is a question we need to tackle.

But how can we tell if something possesses human-level intelligence? To help answer this question and assess a machine’s ability to imitate human intelligence, Alan Turing proposed the now-famous Turing test[1] in 1950. Originally called the “Imitation Game”, the test works as follows.

Assume there are three participants, separated from each other, where one is a machine and two are humans. All interactions between them are text-based, like a chat with your friends via computer or smartphone. One person acts as the interrogator and asks the other two (the other person and the machine) questions. Both try to answer those questions so as to convince the interrogator that they are human. The machine passes the test if the interrogator cannot tell which one is the human.
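
As a thought experiment, the structure of the game can be sketched in a few lines of code. Everything here is hypothetical – the canned replies merely stand in for real conversational ability – but it captures the mechanics: hidden, shuffled channels, text-only questions, and a final guess.

```python
import random

def human_reply(question: str) -> str:
    return "Honestly, it depends on my mood."

def machine_reply(question: str) -> str:
    return "Honestly, it depends on my mood."  # perfect mimicry, for the sake of argument

def imitation_game(n_questions: int = 3) -> bool:
    """Return True if the machine passes, i.e. the interrogator guesses wrong."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)  # the interrogator cannot rely on position
    for i in range(n_questions):
        question = f"Q{i + 1}: What would you do on a free afternoon?"
        answers = [reply(question) for _, reply in players]
        # (the interrogator studies these answers; here they are identical)
    guess = random.randrange(2)  # indistinguishable answers leave only a coin flip
    return players[guess][0] != "machine"

print(imitation_game())  # True roughly half the time
```

When the answers are truly indistinguishable, the interrogator can do no better than chance – which is precisely the situation Turing’s test is designed to detect.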

The premise of the test is that if a machine can trick a human into believing it is human and not a machine, it has demonstrated intelligent behavior. But if a person cannot tell the difference between a human and a machine in this scenario, does that mean a machine that passes the test behaves with human-like intelligence? Surely there is more to being human than just intelligence?

Consciousness

One characteristic we assume every human being to have is consciousness. The origin of the word “consciousness” is from the Latin term “conscientia”, which means “knowing”, in the sense of “being aware”.

The term has very different meanings depending on the context. We might distinguish different states or levels of consciousness – such as sleep, dream, and meditation – or we might wish to differentiate between notions of consciousness that range from animals to humans (“I think, therefore I am”). As the concept of “consciousness” is hard to grasp or pin down in a general definition, we look at three intuitive notions that are broadly shared: subjectivity, awareness, and the ability to experience or feel.

A certain level of intelligence could be a requirement for developing consciousness, and studies seem to support this. Several animals considered to be intelligent, such as dolphins or grey parrots, show signs of (self-)consciousness[2] almost at the same level as humans. Increased levels of intelligence appear to offer a higher chance of consciousness.

However, if something is (or acts) intelligent(ly), can we also conclude that it can have or develop consciousness? Following the logic of the Turing test, the answer would probably be “yes” if we accept that a machine passing the test is human-like. But this doesn’t take into account the abilities to experience or feel, which we often associate with consciousness.

Most philosophers consider subjective experience to be the essence of consciousness. There is some form of inner feeling when you smell something, when you move and interact with others, and when you feel cold or warm.[3] The abilities to experience your environment and to feel seem to be crucial for developing and having consciousness.

These observations raise the following three philosophical questions when working with AI:

  1. Can a machine develop consciousness?
  2. Can we create (a machine with) consciousness?
  3. And if so, what are the implications?

Since we do not yet fully understand what consciousness is, or which aspects are crucial for developing it, it’s hard to tell whether a machine can develop it. In other words, it’s conceivable that we create an AI with some goal in mind that has nothing to do with consciousness – and the AI develops it nevertheless.

And when will we know that an AI has acquired consciousness? For example, let’s consider an intelligent AI system with multiple sensors to measure its environment. Even with the prerequisites that we assume play a role in developing consciousness, such as intelligence and the ability to feel, we encounter one big issue: isn’t this just a simulation rather than reality? Then again, in what way is our own brain not also creating a simulation of the physical world around us?

To explore these questions further, let’s imagine we are able to create an AI system capable of perfectly capturing its environment. Like our skin, it could measure whether an object is hot or cold. But would it be able to “feel” this as well? And if this AI could analyze a picture of your face and determine that you look happy or sad, would it even know what this means? Could it relate? And what about experiences we humans can’t relate to? Does an AI “feel” something when it is in energy-saving mode or flight mode, or when it runs out of available memory (RAM) or hard drive space?

If it couldn’t relate to our human feelings, could it understand those of other AIs? In other words, is it capable of compassion or empathy? And if it “feels” something, whether like humans or like other AIs, could that be enough for it to be considered “alive”? Or is the realization of the inevitable end of life the only way to experience true joy and pain – something an immortal digital being could never understand?

Let’s assume we were able to fully replicate a human body, including its brain – a perfect digital simulation of our inner workings. At what point during its development would such a simulation acquire or create consciousness? How could we be sure that such an AI would really “be” conscious and not just simulating consciousness? If consciousness could be created artificially, could it happen by accident? Could there be an alternative kind of consciousness to human consciousness?

Today, we’re still far away from the point where we could intentionally create consciousness. When it comes to this question, we quickly run up against another issue: consciousness cannot be located. Since time immemorial, humans have tried to locate the seat of our consciousness or “soul”. Initially, it was believed to reside in the heart, responsible for distributing our body’s life essence, our blood. Later, in the 17th century, the famous philosopher René Descartes assumed that the pineal gland – a small, seemingly unimportant region in the center of our brain – was the principal seat of our soul.

However, neuroscience has so far failed to find any evidence of a specific location or substance that clearly constitutes the “seat of consciousness”. Perhaps consciousness is something abstract that cannot be located or touched?

In scientific terms, such an abstract entity is called a construct. Take, for example, the irrational number pi: it describes a specific property of circles but cannot be seen. Or take human emotions like amusement, boredom, or nostalgia – we can all feel them, but we cannot locate or distill them. Perhaps consciousness falls into a similar category?

Replicating human consciousness in machines is not the goal of AI; there may be many more levels of consciousness than we can currently comprehend. But whether it’s human or artificial consciousness, what would it mean if we had conscious AI? Could we copy consciousness from machine to machine and thereby replicate it? If we can clone it, what would self-consciousness then mean? Would an experience still be subjective and unique?

Moreover, conscious AI might have implications for how we interact with it. If we consider an AI to be human-like, we might need to acknowledge that it has free will and cannot simply serve us in a master-and-servant relationship. But even at non-human-like levels of consciousness, we have to think about the implications of an AI having subjective experiences – which, as noted above, are assumed to be the essence of consciousness.

Would this change an AI’s perception, thinking, and interpretation of the world? Could it come to pass that an AI perceives us as a threat and, based on this, tries to avoid us or, in the worst case, eliminate us? It’s a topic science fiction returns to again and again: Terminator 3: Rise of the Machines explored the ramifications of this endgame, and more recently, Alex Garland’s Ex Machina examined the psychological pathways towards such a flashpoint.

Creativity

Our ability to be creative has been a defining characteristic of humans since the beginning of time. If we’re aiming to make machines more and more human-like, we should also consider whether a machine can be creative or not. A deeper exploration of how AI can help enhance our creativity can be found in the article AI and Visual Arts.

In many situations, machines are able to derive insights from data in a way humans never could. They can create paintings, such as the famous Edmond de Belamy portrait[4], and compose personalized original music within minutes. One of the first examples of computer-generated music – arguably a pioneer of AI-generated composition – was presented in 1965 by Ray Kurzweil[5]. Numerous applications in this field have followed.

Machines are extremely specialized and adept at producing almost any type of output, so long as the right input data is available. Although this is very impressive and looks like a creative process, can we really label it as such? Or is it simply due to the machine’s capacity to work with already existing data? Could a machine also create something new out of nothing? For now, most of these machine-created works still seem a bit off, a bit clunky, and a tad non-human. But what will happen once AI systems correct these flaws?

We may not be aiming for a human-like AI, so it might not be necessary for an AI application to be creative. But could a non-creative AI be as powerful as a creative one? Einstein famously said: “Imagination is more important than knowledge because knowledge is limited.” Creativity is the ability to create something using imagination, and to start this process it is important to be curious and to ask questions.

Do you think a computer fed with all the data on physics, mathematics, and logic would be able to develop the theory of relativity? Would a computer be curious and ask the “right” questions? Imagination is unlimited – but maybe it just goes beyond our imagination that a machine could ever do this.

What if…?

What if we created a general AI system that was intelligent, creative, and had a consciousness?

While these characteristics are certainly “human”, they might not be enough to say something is human.

One of the main challenges we face is whether we need to define a line beyond which something is no longer artificial but human-like. If we say an AI is human-like, would this also mean that it has human rights – for example, the right to life, freedom from slavery, and freedom of opinion? That would also mean we would no longer be allowed to use such AI systems as the tools they were conceived to be.

But even if we feel the need to define such a line, would it even be possible? Or could we define different levels, or different gradations of “human”? If so, how “human” would an AI with these characteristics be? Or if they don’t deserve human rights, would it be sufficient to define “machine rights”?

Even without having such definitions, we can discuss how we see an intelligent, creative, and conscious AI compared to other electronic devices, animals, and ourselves.

Today, narrow AI that acts intelligently is already outperforming us in some tasks – and that’s before we even begin to imagine what general AI would be capable of. Would it shift our jobs into a more “supervising” role? Or should we fear mass unemployment? These are not the only concerns triggered by AI.

While the main purpose of electronic devices is to help or entertain us, such devices can easily be replaced when they break – this differs greatly from animals, especially pets. Even though we can’t fully prove it (yet), we assume animals have some form of consciousness. A pet is something unique, with an identity that cannot simply be replaced.

So how about our conscious AI? If we have one, it means it is self-aware and would have subjective experiences, as well as unique thoughts and feelings. Could we treat such an AI in the same way we treat our pet? And if we treat the AI as a pet rather than an equal partner and the AI becomes aware of that, could the AI feel “hurt”? Would the AI accept this fact, or rebel against it and demand more rights, even equality? Would we be able to accept them as equal partners, or would we aim to “keep control” and supervise them? What would be the reasons for us to keep control? Is it because of a lack of trust? Because we think it might shrink our rights? Or because we simply want to protect ourselves and our world? And what if they started to supervise us without us noticing? In short, would AI take over the world?

Would keeping more power for humans offer more protection? If we don’t grant AI more – or even equal – rights, it could come to see us as the reason its freedoms and rights are limited. Ultimately, it could see us as a threat that needs to be dealt with – so would AI harm us?

Hollywood has explored these scenarios in numerous movies, such as Blade Runner, The Matrix, The Terminator, and Ex Machina, to name just a few. But it is certainly not just a topic for blockbusters – there is also a huge debate around the fear of AI among scientists and influential personalities, such as the late Stephen Hawking, who warned that general AI could “spell the end of the human race”.[6]

The questions of how many freedoms and rights we want – or have – to grant general AI, and whether AI can act as an independent entity, are tricky and cannot be answered easily. We have to be aware of the implications such decisions might have for us, which can lead to fear.

To fear something unknown is human, but being aware of such fears can also help us to establish a safety net that both protects us and alleviates the worry. Perhaps we might have to implement certain regulations to ensure a peaceful coexistence.

A famous example of such rules is the Three Laws of Robotics by Isaac Asimov. Also known as Asimov’s Laws, they were introduced in his 1942 short story Runaround, which later appeared in his 1950 collection I, Robot.[7]

The Three Laws of Robotics state the following (a short code sketch of their strict ordering follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
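
These laws encode a strict priority order, which becomes obvious when written as code. The sketch below is purely illustrative: the boolean flags stand in for judgments about harm, orders, and self-preservation that no real system can currently make.

```python
def permitted(action: dict) -> bool:
    """Check a proposed action against the Three Laws, in strict priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey human orders, unless the order conflicts with the First Law.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Third Law: preserve yourself, but only if Laws 1 and 2 are satisfied.
    if action.get("endangers_self") and not action.get("ordered"):
        return False
    return True

print(permitted({"harms_human": True}))                      # False: Law 1 is absolute
print(permitted({"endangers_self": True, "ordered": True}))  # True: an order overrides Law 3
```

Even this toy version exposes the famous loopholes: everything hinges on undefined predicates. What counts as “harm”, and who decides?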

While Asimov’s laws do not consider equal rights, they can be seen as a starting point. It may also transpire that in some areas we grant extensive rights to AI systems, while in others we do not – perhaps because we want and need to protect ourselves.

Conclusion

Artificial intelligence is a fascinating new technology that makes us wonder, dream, and worry. It will change the way we live, work, and interact with each other. But it may also change the way we define ourselves and help us to better understand what makes us truly human. It may even bring us closer to answering the questions of what intelligence and consciousness are, and whether they can be created. And if they can, is AI the way to do so?

AI may both challenge us and help us to understand ourselves better. Perhaps we should even consider AI to be a new species and, as such, we then have to decide if our notions of intelligence, consciousness, and creativity can be applied – or if they must be redefined.

  1. You can read more about the Turing test in our article, A Brief History of AI, and this Wikipedia article

  2. Self-consciousness refers to the concept of being aware of oneself. There are many scientific approaches as to how we can potentially test consciousness in animals. The mirror test might be one of them. For more, visit the animal consciousness Wikipedia page

  3. These subjective experiences, the “what it is like” character, are what philosophers also call “qualia”. For more information, see this Wikipedia article

  4. For more on Edmond de Belamy, visit its corresponding Wikipedia article

  5. Ray Kurzweil is a highly influential figure in the field of AI. In 1965, at the age of 17, he built a computer that generated a piece of piano music. For more information, see this article

  6. The full article about this can be found on the BBC website here

  7. For more information about the Three Laws of Robotics, visit its corresponding Wikipedia article
