AI and Society

Catherine Dietrich @ EPFL Extension School · 17 minutes

How AI challenges the way we live together.

Artificial intelligence (AI) is the very embodiment of progress and has been the most transformative technology of the past decade. It has become an integral part of the way we live – not just as individuals, but also together as a society – often without us even noticing how it permeates our everyday lives.

AI is all around us, bringing enormous conveniences, ways of problem solving we’ve never seen before, a release from mundanity, greater efficiency, and thus far untold potential. It also has a lot to answer for: as our global and local societies become increasingly underpinned by, and dependent on, AI technologies, we must also pay attention to the challenges this presents.

We must consider the social implications of this new technology, and some of the specific advantages, outcomes and dilemmas our society faces with the growth and increasing application of AI.

What do we mean by society?

Before we continue, let’s pause to consider for a moment what we mean by ‘society’. Society is a group of people who are interdependent in some shape or form. What happens in one sub-section of society will affect others, and vice versa.

A ‘successful’ society adheres to certain norms and rules – whether overt and imposed by laws (we stop at red traffic lights to let cross traffic take its turn), or unspoken, where ‘socially acceptable’ behavior applies (we stand in line in shops and wait our turn to be served).

The term ‘society’ also evokes the power of the collective to achieve what the individual could not, and the rules and norms that a society adopts will define the kind of society we are. When it comes to how AI affects society, we are considering many different societies around the world with different norms and rules, and vastly different experiences of AI technologies.

When we pause to consider what universal values our society should embody so that it can function as we would like, these may include fairness, trust, transparency, compassion, care, tolerance, and empathy. Crucially, the success of a society rests on the principle of a group of people working together towards such values as a common goal. For AI technologies to benefit society and help it progress, they should respect the same principle.

Let’s look at some of the ways in which the development of AI touches these universal values and can either strengthen or threaten them, affecting our society in both positive and negative ways.

FAIRNESS

Machines are rational, so they cannot be biased – or can they?

Fairness is a key concept in our society. We expect to be treated fairly in the workplace, by the judicial system, or when applying for insurance and loans. The problem with fairness, or the lack thereof, arises because our decisions can be influenced by biases, prejudices, or emotions. These influences may be subconscious, or in the worst case intentional. Unlike us humans, computers have no feelings, emotions or intentions; they are rational machines that follow rules. One would therefore expect decisions made by AI systems to be inherently fair and free of bias. But when it comes to fairness, AI systems display the same shortcomings humans are prone to.

Why is this so? Remember that data is one of the most important components in the learning process of AI (read more on this in The Building Blocks of AI). So if the data used to train AI systems does not fairly represent some groups in society, or if it encapsulates existing human discrimination and bias, then AI systems will manifest these biases too. In short, AI is only as good as the data on which it has been trained.

So what if AI algorithms influence the verdict of court cases or the duration of prison sentences? What if HR departments use AI-supported software in selection processes to help with hiring or promotions? In fact, what if an AI system gets to decide whether you will even be shown a job ad in the first place? People are repeatedly disadvantaged based on their gender or ethnicity, and AI systems trained on past discriminatory decisions [1] are prone to replicate, or even amplify, these problems. However, simply excluding gender or ethnicity from the data is not enough, as subconscious biases are encoded in other parts of the data, like job titles or past salaries.
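
To make this concrete, here is a minimal sketch using purely synthetic, hypothetical data (the numbers, features, and the naive hiring rule are all invented for illustration). It shows how a proxy feature such as past salary can let a simple decision rule reproduce a historical bias, even though the protected attribute is never used directly.

```python
# A minimal sketch with synthetic (hypothetical) data showing how a "proxy"
# feature such as past salary can reproduce a historical bias even when the
# protected attribute itself is removed from the training data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 / group 1); never given to the "model" below.
group = rng.integers(0, 2, size=n)

# Historical salaries encode the bias: group 1 was systematically paid less.
salary = 60_000 - 10_000 * group + rng.normal(0, 5_000, size=n)

# Past hiring decisions also disadvantaged group 1 (70% vs 40% hiring rate).
hired = (rng.random(n) < np.where(group == 0, 0.7, 0.4)).astype(int)

# Salary alone already leaks the protected attribute...
print("correlation(salary, group):", round(np.corrcoef(salary, group)[0, 1], 2))

# ...so a naive rule learned only from past data ("hire if salary is at least
# the median salary of previously hired people") reproduces, and can even
# amplify, the original bias, despite never seeing the group column.
threshold = np.median(salary[hired == 1])
predicted = (salary >= threshold).astype(int)
for g in (0, 1):
    print(f"group {g}: past hiring rate {hired[group == g].mean():.2f}, "
          f"rule's hiring rate {predicted[group == g].mean():.2f}")
```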

For AI to work for everyone, everyone needs to be represented in the datasets on which it is based. Moreover, with appropriate application and careful, unbiased design from the outset, an AI program or technology can avoid developing the biases to which humans are susceptible.

The risk of marginalisation

Besides the right data and carefully designed AI algorithms, access to AI technologies plays an equally important role. Existing inequalities between marginalized communities and better-resourced ones are further increased by unequal access to the building blocks of AI – data, algorithms, hardware, and expertise. This problem is referred to as ‘the AI divide’, and it is deepened by the development of AI systems that focus solely on the health, prosperity, and safety of privileged groups.

The sharing of resources reduces the effects of the AI divide. This includes the open source publication of software and advanced pre-trained AI models, as well as easy access to the necessary infrastructure and tutorials. Here, market leaders like Google, Facebook, and Amazon have started to provide free access to some of these building blocks.

TRUST

Seeing is no longer believing

The film and gaming industries have spent decades developing ever better CGI (computer generated imagery) to create more realistic-looking artificial characters and surroundings, or morph them with real life images and videos to create augmented realities.

However, the use of CGI is often transparent as it is either obvious or implicitly assumed from the get-go. In the 1994 film Forrest Gump, the lead character appeared in historical footage with US presidents at the White House; these doctored images may have been more believable than anything audiences had seen before, but it was still obvious to the viewer that the scenes were not real [2].

Fast-forward 25 years and AI allows us to create images and video sequences that are hard to distinguish from genuine ones without further scrutiny (for more about this, see AI and Visual Arts). The technology used to do this is called deep learning, hence such images have been dubbed ‘deepfakes’.

Doctored photographs have been around for a very long time, but their creation and dissemination has never been this easy. While we have become accustomed to Photoshopped images in the advertising industry or apps on our smartphones, deepfakes now facilitate the creation of content that depicts events that never happened, or at least not in the form depicted.

The same holds true for audio information, with AI reading out text in any voice you feed it samples of. Deepfakes can therefore be used to create fake evidence, or even fake identities for criminals. Although not all deepfakes are created with malicious intent, they can be used to compromise credibility and erode trust in individuals, science, news, governments, and institutions.

As technology improves, we will be able to create better software to detect deepfakes more quickly. But the speed with which social media can spread false information to large crowds, and our tendency to fall for information that confirms our existing biases, make it hard to check and debunk falsehoods effectively. So who can we still trust?

This question remains open, but being aware of deepfakes is a vital first step. Over time, we will develop tools capable of distinguishing fiction from truth, much as spam filters now catch unwanted emails. But in the meantime, question what you see – because seeing isn’t always believing.

The computer will see you now

Even where the use of AI is transparently communicated to us, are we willing to trust it? In recent years, a range of new AI applications has emerged that help medical professionals diagnose diseases more accurately, or by less invasive means. Other devices can monitor patients’ vital signs 24/7 and flag adverse developments earlier. These AI diagnostic and monitoring tools can deliver huge gains in accuracy and efficiency for the medical profession. But what if AI steps out of its supporting role and becomes a decision maker?

Suppose a monitoring tool could also decide on a treatment or the dose of a medication. On the one hand, its ability to handle continuous data streams allows it to react faster and adjust more gradually, learning over time what works best. On the other hand, AI is only trained for a narrow task and may therefore react incorrectly when faced with factors outside its limited expertise. What safety barriers and independent evaluations would need to be in place before we would trust such a system, or comfortably put our health in the hands of an AI doctor?

This issue of trustworthiness extends to any AI system or product. The degree and type of potential harm vary, and any harm would of course be unintentional on the part of the AI. Which risks are we willing to accept when stepping into an autonomous vehicle, or handing our hard-earned pension fund to an AI investor?

We need regulations that protect us from poorly designed and untested AI systems, and legal frameworks that ensure accountability for any negative consequences. One step in this direction would be international regulators and independent approval processes – analogous to established approval bodies such as those in the pharmaceutical industry – to guarantee that all AI systems are safe and compliant with agreed standards.

TRANSPARENCY

Key questions we must ask

In order to facilitate trust in AI applications and in the fairness of their outcomes, we need to have transparency. There are four areas where transparency plays a role in the domain of artificial intelligence:

  1. Where are AI systems involved in production or decision processes, and what is their purpose?
  2. What data is used to train these AI systems, and how is it collected?
  3. Is there an independent evaluation of AI systems with regards to quality and fairness?
  4. What personal data is being collected and stored about us, and can we control its use?

AI provides many useful support tools that make our everyday lives so much easier. In each case, being aware that AI is involved allows us to adjust our decisions, and our judgment of the outcomes, accordingly. However, there are many cases where we are not informed that AI has been involved, or with what purpose. Is the AI system really meant to help the consumer, or is it actually just maximising profit for the company involved?

Are we all ‘walking data points’?

Over the last decade, we have become a massively quantified society. According to a Forbes article from 2018, 90% of the world’s data was generated in the years 2016 and 2017 – and in that decade we amassed more data than in all of recorded history before it.

As seen in the article The Building Blocks of AI, data is a vital component of AI, and data harvesting and storage can arguably be viewed as an inevitable necessity for progress. After all, some AI applications rely on detecting complex patterns in large amounts of data in order to achieve the desired performance.

Some examples include monitoring financial movements to detect fraudulent activities [3], developing advances in medical research, language translation tools [4], navigation services, recommender systems for our entertainment and shopping, and more. In each of these cases it’s in the interest of the public or the end users that developers have access to large, diverse datasets to guarantee excellent performance.

However, we also need to be able to trust that this data will be used responsibly. For every positive use case for the data, there is likely an application with negative consequences. Security cameras are a useful tool for detecting and prosecuting criminal activities, but could also be abused through unwarranted mass surveillance without any legal justification.

While it is desirable that existing medical datasets remain available for later studies to facilitate quick scientific progress, what if they end up in the hands of insurance companies? They could use an AI to predict how likely you are to develop lung disease or dementia in the next 10 years, and thus start to increase your premiums or refuse future treatments.

The anonymization of data is an important first step. But once several datasets are combined, who knows what behavioral patterns and personal identifiers future AI systems will be able to extract from today’s data?
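
As an illustration of why anonymization alone may not be enough, here is a minimal sketch of a so-called linkage attack. The datasets, names, and quasi-identifiers (ZIP code and birth year) are entirely hypothetical; the point is only that two individually harmless datasets can be joined on shared attributes to re-identify people.

```python
# A minimal sketch of a "linkage attack": two datasets that each look harmless
# are joined on quasi-identifiers (here a hypothetical ZIP code and birth year)
# to re-identify a person in an "anonymized" medical dataset.

# Anonymized medical records: no names, just quasi-identifiers and a diagnosis.
medical = [
    {"zip": "1015", "birth_year": 1954, "diagnosis": "early-stage dementia"},
    {"zip": "1203", "birth_year": 1988, "diagnosis": "asthma"},
]

# A second dataset (e.g. a public register or leaked customer list) with names.
register = [
    {"name": "A. Muster", "zip": "1015", "birth_year": 1954},
    {"name": "B. Beispiel", "zip": "1203", "birth_year": 1988},
]

# Joining the two on the shared quasi-identifiers re-attaches names to diagnoses.
for person in register:
    for record in medical:
        if (person["zip"], person["birth_year"]) == (record["zip"], record["birth_year"]):
            print(f"{person['name']} -> {record['diagnosis']}")
```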

It is undeniable that by gathering data about all of our preferences, companies can know what makes us tick and what our ‘carrots’ and ‘sticks’ are. This should make us uncomfortable – especially as it often feels like data is being gathered and sold without our knowledge at worst, or with only tacit compliance at best.

Hence we need clear principles and regulations that give us the right to control how others can use our personal data, and that demand transparency on how that data is used. However, such rights mean little if large companies can buy out small competitors for their data and blackmail their users into accepting less favorable or opaque terms and conditions. As long as we remain insufficiently versed in these technical topics, companies will get away with paying minimal lip service to new regulations without substantial consequences – and the mistrust of big corporations and AI will grow.

Current legislation needs to catch up with topics around AI and the use of personal data in order to hold corporations accountable for how they use that data. We need to put legal systems in place that are effective in protecting society and guarantee transparency around the use of personal data.

COMPASSION AND CARE

AI and the aging population

It has often been said that the true measure of a society can be taken from how it takes care of its elderly. So how do we bring the older generation along on the AI ride? Technology is changing so fast that seniors are at ever greater risk of being excluded from the many benefits AI promises.

Do you remember the struggles you had introducing your parents or grandparents to SMS or email? And that was a technology that was ‘easy to understand’ for most of us! So what about automated intelligent robots and tools that help with household chores, or self-driving cars to transport them from place to place? They may feel overwhelmed by these concepts and the speed of change, or excluded by the lack of understanding and easily accessible explanatory tools. This leaves the elderly at risk of not only missing out on many of the daily conveniences that AI offers, but also possibly falling prey to people using AI technologies for nefarious purposes.

Those developing and designing AI systems need to take into account how these technologies could be misunderstood or misused by the elderly or vulnerable. At the same time, families and carers need to gain the ability to understand and present AI technology in an accessible way for this section of our society.

Nevertheless, the overall outlook for AI technologies and the aging population is a positive one. Thanks to the advancement of AI, we will see unprecedented medical advances, as well as technologies that contribute significantly to an improved quality of life for this group in our society. One such example will be AI devices that can predict and prevent falls – giving seniors with conditions like Parkinson’s disease, or other difficulties affecting mobility, a confidence in their independence they may not have had before.

AI will contribute towards the relief of loneliness – which many people struggle with – by providing more lifelike and experiential means of communicating with loved ones. Autonomous vehicles will allow the elderly to regain some mobility and independence. And AI technologies performing unobtrusive physical monitoring may detect medical issues like cardiac abnormalities that can be treated before they become serious problems.

With proper application, the aging population stands to gain a lot from the development of AI, so we must ensure they’re not afraid of it.

TOLERANCE AND EMPATHY

What are you not seeing?

The existence of deepfakes means we need to ask ourselves questions about what we think we’re seeing. Equally, we need to understand that sometimes ‘not seeing’ can also lead to problems.

Do you have a particular news outlet you rely on? Do you engage with particular kinds of content on social media? Are you aware that your behavior online contributes to the filtering of what you are presented with and what is excluded from your news and social media feeds?

Depending on their most recent online activities, two people could be in two unique and very different 'filter bubbles', where each is exposed to content that reinforces their bubble and makes them sceptical of content from any other bubble.

The collection of data around our interests and the AI application of that data has led to a phenomenon known as ‘the filter bubble’. This term was coined by social media activist Eli Pariser who, in his 2011 viral TED Talk, defined it as an echo chamber and a “personal, unique world of information that you live in online”.

“What’s in your filter bubble depends on who you are, and it depends on what you do,” he explains. “But the thing is that you don’t decide what gets in. And more importantly, you don’t actually see what gets edited out.”

Tech giants like Google, Facebook and Twitter use highly secretive and ever-changing algorithms, which ultimately create these filter bubbles. Over time, our exposure to different content – and by extension to different perspectives – narrows. This is the case not just with news, but with general information as well.
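
The feedback loop behind this narrowing can be illustrated with a toy simulation – emphatically not any platform’s actual algorithm, just a sketch of the underlying dynamic: content is recommended in proportion to estimated interest, clicks reinforce that estimate, and a few topics gradually crowd out the rest.

```python
# A toy sketch (not any platform's real algorithm) of the feedback loop that
# creates a filter bubble: recommendations follow past clicks, past clicks
# follow recommendations, and exposure narrows over time.
import random

random.seed(1)
topics = ["politics-left", "politics-right", "sports", "science", "music"]
interest = {t: 1.0 for t in topics}  # start with no particular preference

for step in range(50):
    # Recommend items in proportion to estimated interest (the "personalization").
    shown = random.choices(topics, weights=[interest[t] for t in topics], k=5)
    # The user clicks one of the shown items; the system reinforces that topic.
    clicked = random.choice(shown)
    interest[clicked] += 1.0

# After a while, a small number of topics dominate what the user is shown.
total = sum(interest.values())
for topic, weight in sorted(interest.items(), key=lambda kv: -kv[1]):
    print(f"{topic:15s} share of feed: {weight / total:.0%}")
```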

Your Google search results may be completely different from somebody else’s, based on where you live, your online activity, and the interests that activity suggests. The resulting news and information bubble can obstruct informed decision making and – just as deepfakes do – increase polarisation.

This becomes problematic for society, as tolerance is vital to coexisting – and empathy cannot be built without tolerance. The information bubble leads to different parts of a society developing different understandings of reality according to what they consume, and it narrows their tolerance for people whose opinions differ from their own.

We have seen that in extreme cases, this can lead to acute polarisation, disharmony and even political radicalization and brainwashing – all of which can be dangerous for any society’s value system.

Conclusion

As we can see, AI impacts our society in countless ways. The biggest questions are how much agency we as humans retain when we make decisions based on AI-generated information, and how we ensure this technology is not working at cross-purposes to what is best for us – both as individuals and as a collective.

The answer is not yet clear, but demystifying AI and gaining a strong understanding of how it touches our society is a good start. Crucially, we need to be able to trust that the development of AI technologies is aligned with the common good. And for that we have the right to demand accountability.

It has always been human instinct to reject (or at least suspect) progress before we embrace it, and it is right that we should seek to protect the values that are non-negotiable to our society. Embracing change does not mean sacrificing these values – in fact, it’s quite the opposite.

  1. In North America, the term “redlining” refers to the systematic disadvantaging, or even exclusion, of particular groups in society when providing services such as banking, insurance, retail, housing, or access to healthcare. In each of these cases, AI systems trained on historical data that encodes the consequences of redlining are likely to reproduce the same adverse effects.

  2. In two excerpts from the movie, Forrest Gump meets US presidents John F. Kennedy and Richard Nixon.

  3. Fraudulent transactions form only a very small part of the overall financial data and, taken in isolation, appear perfectly normal. This makes detecting fraudulent activity difficult.

  4. Translations to and from languages that are not related to other more common languages are hard as the relevant linguistic patterns and nuances may not exist in both languages and hence cannot be matched easily. Examples of such language isolates include Basque and Korean, as well as several sign languages. 
