AI - Two Letters, Many Meanings

Michael Notter, Christian Luebbe & Arnaud Miribel @ EPFL Extension School · 12 minutes

What Is AI? In what way is it intelligent? And why is it artificial?

The goal of this article is to clearly describe what we mean by the term artificial intelligence (AI). Why do we call it “artificial”? And in what way is it “intelligent”?

AI describes the broad discipline of developing systems that emulate human intelligence and our capacity to perceive, learn, reason, plan, solve problems, move, manipulate, and be creative. To be more precise, it is the study of “intelligent” algorithms that are capable of analyzing data to detect patterns. By observing how these patterns are represented in the data, such algorithms can learn to improve their ability to successfully achieve a specified task.

While the definition above provides a short and clear outline of what we mean by AI, the concept remains rather vague and can often be confusing. One reason for this is that it’s an ever-evolving field. There is, as yet, no universally accepted definition of AI – and so people use the term in many different ways.

What Is AI? And What Is It Not?

What AI Is

  • AI describes the field of research that tries to train machines to perform “human tasks”. It intersects with other fields such as robotics and statistics.
  • AI is the tool used to perform these “human tasks”. Just as an oven in a kitchen transforms raw ingredients into a delicious cake, AI can transform raw data into useful outputs.
  • AI describes a family of computer algorithms that are able to improve automatically through experience. This experience could come from analyzing data, or from receiving rewards following the system’s actions (see the sketch after this list).
  • AI stands for a new technology – a new way of doing things. Just like the steam engine, the assembly line, electricity, or computers and the internet, AI revolutionizes industries and opens up many new possibilities.
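
To make the third point concrete – improving automatically through experience – here is a minimal sketch in plain Python: a one-parameter model that learns a conversion factor from example (input, output) pairs by gradient descent. Everything here (the data, the learning rate, the number of steps) is purely illustrative, not taken from any particular AI system.

```python
# A minimal, illustrative sketch of "improving through experience":
# a one-parameter model that learns y = w * x from example pairs.
# All data and settings here are made up for illustration.

examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, output)

w = 0.0              # the model's single parameter, initially a bad guess
learning_rate = 0.05

def error(w, data):
    """Mean squared error of the model y = w * x on the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

for step in range(200):
    # Nudge w in the direction that reduces the error (gradient descent).
    gradient = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= learning_rate * gradient

print(f"learned w = {w:.2f}, remaining error = {error(w, examples):.4f}")
# The more (and better) examples the model sees, the closer w gets to the
# underlying pattern -- nobody programmed the answer in explicitly.
```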

What AI Is Not

  • AI is not an entity or object; it’s a technology. A smartphone that uses AI to unlock itself via face recognition isn’t AI – the phone didn’t “recognize” you, nor did the underlying app “know your face”. The physical phone and the software app are just the packaging of the AI technology.
  • AI is not intelligent like humans. Even though AI learns from experience in ways that are somewhat comparable to us, it behaves very differently. While we are intelligent, current AI merely imitates intelligence – at least for now!
  • AI is not a being; it is not conscious, nor is its behavior good or evil. But people make assumptions about an AI system because of how it behaves. While AI learns from data to adapt its behavior to optimally perform a task, it does not understand the consequences of the decisions it makes along the way. None of its outputs are produced with the intention of doing good or harm. The potential problem lies more in the data provided to the AI, or in the phrasing of the task. So if an AI appears to be good or evil, then the credit (and the blame) should be attributed to the humans involved in creating the system.

The fallacy of comparing AI with humans

To put it simply, AI isn’t a conscious entity with a soul that behaves with good or evil intent. Projecting such human traits and concepts onto the machine and its technology – a process called anthropomorphizing – is a fallacy that should be avoided.

AI is a tool that can be used and abused. It is the intentions and shortcomings of both its designers and its users that lead to these human descriptions. The Global Positioning System (GPS), for example, is a powerful technology, but it’s up to the designers and users whether it is used for good or evil.

A designer might use it to create drones that deliver medicine to the most remote regions of the world; alternatively, it could be used by the military to launch missiles and start wars. GPS navigation enables us to travel anywhere safely and efficiently, but it can also be used to follow and stalk an unsuspecting person.

In both of these scenarios, the GPS doesn’t mind how it is used. It doesn’t actually do anything – it just provides a service in accordance with the framework in which it finds itself. Similarly, AI only follows the patterns in the data, and efficiently performs the task it was trained to do.

That means everybody involved with an AI technology should be mindful of what it can be used for. Frameworks need to be put in place, along with restrictions that can prevent negative consequences. If you drive the wrong way down a one-way street because your “navigation device told me to” and have an accident, you should certainly be blamed for it. But so, in part, should the company that allowed its software to make such an inadequate recommendation.

The decisions are ours, so we need to be mindful of the potential as well as the dangers!

Narrow and General Artificial Intelligence

Let’s take a closer look at the two kinds of AI: narrow and general. In broad terms, general AI is capable of doing anything a human can do, while narrow AI is proficient at only one particular task.

A narrow artificial intelligence, sometimes also called weak AI, is a system with very limited expertise. It is trained to perform one task well, but cannot adapt and generalize beyond that. As such, a narrow AI is not able to perform outside its framework.

For example, your AI-enabled email spam filter will not be able to distinguish an image of an apple from that of a pear. Nor will a self-driving car be able to recommend movies for you to watch when you get home. An AI designed to detect lung cancer from medical chest X-ray images will not be able to detect COVID-19 infections from them, unless it is specifically trained for both.

General artificial intelligence, sometimes referred to as strong AI, doesn’t have these shortcomings. General AI is a hypothetical machine that can understand and learn any intellectual task a human being can.

This AI can emulate all of our cognitive capabilities: it is able to think, reason, draw new conclusions, plan ahead, and solve problems in innovative and creative ways. As such, it can transfer previous experience and existing knowledge to new, unseen tasks, just like us. But unlike humans, it could do this with a speed and a potential for self-improvement that the world has never seen before.

It is this aspect that makes general AI fascinating, but also frightening.

So far, every human achievement and every technological advancement has been a product of our intelligence. But what might we achieve if our efforts were supported by – or even led by – such a new and potentially more advanced intelligence? Would millennia of human research, exploration and ingenuity be shortened to just a few hours’ work for a single general AI’s “mind”?

The potential seems unfathomable and endless: all known diseases could be cured; climate change could be reversed; and our energy crisis could be solved with even more advanced renewable energy. What if such a machine could eradicate hunger, poverty and inequality in the world? And what if, for reasons unknown, it did the opposite instead?

Some people hypothesize that the moment a general AI learns how to improve itself, the evolution of AI will be bound only by computational limits, and no longer by the human mind. AI could then very quickly surpass the smartest humans in all conceivable domains and develop into a so-called artificial superintelligence.

But if the past 50 years of AI hypes and winters have taught us one lesson, it is that we tend to underestimate the complexity of human intelligence and the effort needed to emulate it. Thus general AI and artificial superintelligence are just dreams made of what-ifs that provide interesting food for philosophical discussions and science fiction – for now, anyway.

While narrow artificial intelligence is the AI everyone is talking about today (and what we intend to explain through That's AI), general artificial intelligence is what science fiction stories are all about. How far away general AI is from becoming reality is anyone’s guess – some experts believe it might be possible by the year 2050, while others predict it won’t arrive for centuries, if ever.

But what should be clear to everyone is that the invention of general AI might be the biggest event in human history so far. And as with any new technology, we need to be careful that humanity does not take a step backwards as a result.

Why do we call it “artificial”?

The reason is rather straightforward. In contrast to the “natural” intelligence that humans and animals possess, this intelligence is not a product of biological processes or evolutionary selection – it was artificially created. As such, an AI cannot be considered an intelligence in the way we would use this term in the context of living things.

So is AI actually “intelligent”?

To better understand why AI is and isn’t intelligent in the way humans are, it helps to first examine how human intelligence compares to animal intelligence.

All animals, including humans, are able to learn and perform an amazing range of feats. We can recognize prey or predators by sight, smell or echolocation. We can cooperate with others in a group through intricate behaviors and sophisticated communication, and even solve riddles and bypass obstacles in order to survive. All of this is thanks to our impressive capacity for perception and understanding, and our ability to learn from experience, adapt to changes in our environment, and react to unforeseen circumstances.

While a narrow AI can learn many of these skills individually, it still struggles to combine them into a holistic approach – one that is robust to changes in initial conditions and flexible enough to generalize insights from old experiences and apply them to new (and perhaps unrelated) situations.

So is AI not yet “intelligent” like animals because it hasn’t learned to connect the last few missing dots? Unfortunately, it’s not that simple. What’s key here is the difference between “being” intelligent and “behaving” intelligently.

For example, imagine driving a car on a cold winter night. A human driver who has only learned about the dangers of ice on the road from books will critically observe their surroundings and try to plan ahead to prevent any nasty surprises. We would say that such a driver is intelligent and is trying to prevent an accident.

In contrast, an advanced automated car might have numerous sensors to scan its surroundings, along with state-of-the-art wheel control and braking protocols. It may have learned from experience to adapt its driving style whenever its sensors indicate temperatures below zero or spot snow on the road. So while such a car can drive in an intelligent way, it doesn’t need to be intelligent like our human driver in order to behave intelligently.
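
To see how little machinery “behaving intelligently” can require, here is a deliberately simplistic, hypothetical sketch: a few hand-written condition-action rules produce cautious, sensible-looking driving, yet nothing in the code has any notion of “ice” or “danger”. The function name and all numbers are made up for illustration.

```python
# A deliberately simplistic, hypothetical sketch of "behaving intelligently
# without being intelligent": a few condition-action rules are enough to
# produce cautious, sensible-looking driving. Nothing in this code has any
# notion of "ice" or "danger"; all numbers are made up for illustration.

def choose_speed(base_speed_kmh: float, temperature_c: float,
                 snow_detected: bool) -> float:
    """Return a target speed from simple sensor readings."""
    speed = base_speed_kmh
    if temperature_c < 0:
        speed *= 0.7   # slow down when ice is possible
    if snow_detected:
        speed *= 0.5   # slow down further on snow
    return speed

# On a freezing, snowy night the car "sensibly" slows from 100 to 35 km/h.
print(round(choose_speed(100.0, temperature_c=-3.0, snow_detected=True), 1))
```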

Let’s return to the animal kingdom for another example. On its own, a fish or a bird could be considered a somewhat intelligent animal. But when in a group, they become something quite remarkable – their behavior can lead to something called swarm intelligence. This is a form of collective behavior that is self-organized and decentralized.

With birds or fish, each individual’s task is rather simple: maintain the same distance to your neighbors and match their speed and direction. For an animal on its own, this doesn’t involve much thought. But when it is part of a whole that is driven by numerous small variations and interactions between all the animals in the group, these swarms show beautiful motion¹ that can also serve as a strategy to deter predators, thanks to the swarm’s size and complexity.
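
The two rules just described – keep your distance and match your neighbors’ speed and direction – are essentially the core of the classic “boids” flocking model. Below is a minimal, hypothetical sketch in plain Python; all parameters are illustrative, and the full boids model adds a third rule (cohesion) that pulls individuals toward the group’s center.

```python
# A minimal sketch of the flocking rules described above: each animal only
# keeps its distance from neighbors and matches their speed and direction.
# All parameters are illustrative.

import random

N = 50                    # number of animals in the swarm
NEIGHBOR_RADIUS = 5.0     # how far an animal can "see"
MIN_DISTANCE = 1.0        # preferred minimum distance to neighbors

# Each animal: position (x, y) and velocity (vx, vy), randomly initialized.
pos = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step(dt: float = 0.1) -> None:
    """Advance the whole swarm by one time step."""
    for i in range(N):
        sep = [0.0, 0.0]  # rule 1: steer away from too-close neighbors
        avg = [0.0, 0.0]  # rule 2: match neighbors' average velocity
        neighbors = 0
        for j in range(N):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < NEIGHBOR_RADIUS:
                neighbors += 1
                avg[0] += vel[j][0]
                avg[1] += vel[j][1]
                if 0 < dist < MIN_DISTANCE:
                    sep[0] -= dx / dist
                    sep[1] -= dy / dist
        if neighbors:
            # Nudge velocity toward the neighborhood average, plus separation.
            vel[i][0] += 0.05 * (avg[0] / neighbors - vel[i][0]) + 0.1 * sep[0]
            vel[i][1] += 0.05 * (avg[1] / neighbors - vel[i][1]) + 0.1 * sep[1]
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

for _ in range(100):
    step()
# No individual plans the swarm's shape, yet coherent collective motion emerges.
```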

Pushing this concept a step further, let’s consider the intricate social structures and behaviors of ants, bees, and other insects living in huge hives. Individually, each member of the hive cannot be considered intelligent; it does not have the capability to consider the big picture and the future of the hive. But through a combination of numerous interactions and reactions, such hives as a whole can achieve impressive feats – such as the creation of vast city-like structures or military-like defence strategies.

To a certain degree, this kind of group intelligence goes beyond the simple “behaving intelligently” of the self-driving car. Such hives are self-aware – they understand their surroundings, can learn and plan ahead, and solve new and unforeseen problems in a creative way.

The more advanced AI becomes, the more difficult it will be to draw the line between human, animal and artificial intelligence. All three have the ability to perceive their environment, infer relevant information, retain knowledge, and apply it to adaptive behaviors within their current (or a new) context.

So instead of wondering whether human intelligence is smarter or dumber than an artificial one, we should ask how its intelligence is similar to or different from ours. While we are trying to emulate human behavior with AI, does artificial intelligence need to be comparable to our natural intelligence? Or could we end up with an altogether different form?

What is clear is that a simple judgment of the behavioral outcome is not enough anymore.

  1. Flocks of birds, schools of fish and swarms of bees are incredible phenomena!
