Will artificial intelligence ever have emotions or feelings?

By the Bitbrain team
October 10, 2018

Humans interact with artificial intelligence systems on a daily basis without even realising. Many people have already started to feel emotionally connected to them, but could this feeling be reciprocated in the future? Is it possible that machines could ever feel emotionally involved with us?

Let's start by understanding the difference between machines and robots. From an academic point of view, the difference is the degree of intelligence embedded in the system. We use the term machine for an electro-mechanical system (for example, a “washing machine”), and the term robot when the system is able to reproduce some intelligent behaviour. This intelligent behaviour is produced by an artificial intelligence that can either be pre-programmed, such as “follow a line” in a guide robot, or learnt, such as “imitate this person walking” in a humanoid robot. However, could artificial intelligence learn to have genuine feelings? This is the main question this post investigates, but to do so we must first accept that it is very rare to learn behaviour that isn't useful.

Artificial intelligence and emotional robots

AI is not science fiction and is already among us, not in the form of cruel assassin robots such as the Terminator, but in a much more subtle manner: chatbots, facial expression recognition, translators, personal assistants, and movie recommendations, for example. However, many people are unaware that they are already interacting with AI systems, and often react with rejection and fear to the idea that an intelligent machine could learn by itself. The most immediate concern is being replaced at work by an AI system, but there are also concerns about the possible destruction of the human race by machines.

These reactions are completely normal. For centuries we have nurtured our self-esteem as a superior species (Homo sapiens literally means “wise man”), based on our higher human intelligence. Now that AI is here and has started to demonstrate its capabilities, winning chess games (the Deep Blue machine), managing large amounts of data without effort, carrying out complicated operations in minimal time and even deciphering the human genome, we have inevitably started to wonder whether machines or intelligent robots will be better than us, with some people worrying that they could enslave or eliminate us.

This way of thinking stems from the nonconscious manner in which we assume that a robot is capable of feeling emotions (a sentient robot), and that these emotions could lead it to try to exterminate the human race. However, the truth is that artificial intelligence systems do not have emotions.

Human beings have emotions as a result of our own evolution. Scientists such as Charles Darwin argued that the ultimate purpose of human emotions is to help the organism survive, and that the organism needs to survive because it is alive. This raises three fascinating questions:

  1. Will robots require emotions at some point?

  2. Would it be useful for humans for robots to have emotions?

  3. Is it possible to provide robots with emotions?

None of these questions has a clear answer.

Do artificial intelligence systems need to feel emotions?

Focusing on the machines that we use habitually and that incorporate artificial intelligence, it does not seem that they need to feel emotions, nor does it seem that having emotions would help them execute their tasks any better.

In the case of systems that interact with humans, it could be of great value for them to be capable of detecting specific, basic human emotions and reacting accordingly. Wouldn't it be useful if our mobile phones were able to modify their interfaces in response to our emotions? Even if this were possible, however, it would not mean that the technology has emotions. In other words, machines do not have to be empathetic - they only need to appear so.
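The distinction can be made concrete with a toy sketch: an interface that reacts to a detected emotion label via a fixed lookup table. The emotion labels and responses below are hypothetical, invented purely for illustration; the point is that the system appears empathetic while feeling nothing at all.

```python
# Toy sketch (hypothetical labels and responses): an interface that
# "reacts" to a detected emotion with a fixed lookup - appearance of
# empathy, with no inner emotional state whatsoever.
RESPONSES = {
    "stressed": "Silencing non-urgent notifications.",
    "sad": "Suggesting an upbeat playlist.",
    "happy": "Keeping the current settings.",
}

def adapt_interface(detected_emotion: str) -> str:
    """Return the interface adjustment for a detected emotion label."""
    return RESPONSES.get(detected_emotion, "No adjustment.")

print(adapt_interface("stressed"))  # → Silencing non-urgent notifications.
```

However sophisticated the detection upstream, the reaction here is a table lookup: the machine simulates an empathetic response without experiencing anything.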

Is it useful for humans for robots to have emotions?

In terms of the potential advantages for humans, a priori this does not seem particularly useful: the simple fact that machines seem to have emotions is sufficient. However, it is clear that we could establish emotional connections with these machines. Furthermore, at some point we might want these links to be reciprocal, mirroring real interpersonal relationships.

Some people believe that if AI systems had emotions, they would be compassionate, and that this would not usher in an apocalypse. Equally, however, AI systems might develop negative feelings towards the human race.

Is it possible to provide artificial intelligence with feelings and emotions?

To answer this question, it is necessary to try to understand how the human brain and emotions work.

Firstly, it is important to understand what causes emotions. Basically, an emotional reaction can be triggered by an external stimulus captured by our senses, by an internal stimulus such as an alteration in homeostasis (the body's autoregulation), or by our own cognition.

Processing the stimulus produces changes in the somatic state at a nonconscious level; this is known as an emotion. If the emotion is sufficiently intense, cognitive, social and contextual evaluations of it are carried out, which is what we refer to as experiencing the emotion.

[Infographic: how emotions and feelings are processed]

One of the ways of studying human emotions is to study the nonconscious and uncontrollable changes that occur in the human body. Thanks to the latest advances in neuroimaging and neurotechnology, we can measure these changes with precision and then study them. But we face several difficulties, such as the problem of reverse inference (there are no somatic patterns specific to each emotion), inter-subject variation (no two brains are the same), and intra-subject variation (a person's brain changes and evolves over time).

All this places us far from creating an algorithm capable of replicating how human emotions are produced. The current standard is to use calibration stimuli (for example, several positive and negative images) and then apply automatic learning algorithms, an approach known as machine learning. These algorithms, such as artificial neural networks, search for correlations between the emotional responses measured in brain activity with EEG systems and the classification of the images. The result is a computational model of how this specific person's brain or nervous system reacts to a specific stimulus (an image) that causes a specific emotion (positive or negative). This model can then be used to estimate the person's positive or negative emotional state when they see a new image. Although the most scientific way to measure human emotional patterns employs EEG, there are alternatives, such as assessing facial features, electrodermal activity and voice.
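The calibrate-then-classify pipeline described above can be sketched in a few lines. This is a deliberately simplified illustration, not Bitbrain's actual method: the two-number "EEG feature" vectors and the nearest-centroid rule are stand-ins for real band-power features and trained neural networks.

```python
# Minimal sketch of the calibration approach: labelled trials (features
# recorded while viewing positive/negative images) train a model that
# then classifies the response to a new stimulus. Feature values are
# invented for illustration.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(calibration):
    """calibration: dict mapping label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in calibration.items()}

def classify(model, features):
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical calibration trials: one feature vector per viewed image.
calibration = {
    "positive": [[0.9, 0.2], [1.0, 0.3], [0.8, 0.25]],
    "negative": [[0.2, 0.9], [0.3, 1.0], [0.25, 0.85]],
}
model = train(calibration)
print(classify(model, [0.85, 0.3]))  # → positive
```

Note that the model only maps measured responses to labels for one calibrated person; it says nothing about what an emotion is, which is precisely the gap the article describes.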

But computational models are not the human brain, cannot replicate the human brain, and are far from doing so.

In any case, imagine that in the near future we could utilize an algorithm to replicate the way in which the brain generates emotions. Would this mean that an artificial intelligence system based on this algorithm could feel emotions, develop emotional intelligence or even fall in love?

Probably not. Ultimately, human emotions also depend on our perception of the external world and of our inner self. We perceive the exterior world through our senses, while the perception of the interior world depends on homeostasis at a basic level, and, at a more complex level, on our cognition.

How do artificial intelligence systems see the exterior world and their interior world? Is it comparable to the way we see them? In general, no. For example, the exterior world for a text chatbot is exclusively the text provided to it: this is its reality. For a facial recognition system, the only stimuli available are videos, nothing more. Would a text- or video-only reality ever be able to generate the same type of emotions as the reality we perceive with our five senses? Again, it's highly unlikely. Moreover, these systems have no internal sensors. In other words, they will not generate any type of emotion that originates from introspection, and it will therefore be difficult for them to feel love, jealousy or any of the myriad emotions we experience.

The systems that come closest to resembling human beings today are those studied in developmental robotics. In this field, researchers work with robots that have increasingly complex senses (vision, audio, touch, etc.) and internal information (battery level, system heating, balance, energy required to execute a task, etc.). These researchers aim to understand how human beings develop and evolve from childhood to adulthood, how humans learn and how decision making arises, and then seek to instill these processes in autonomous robots. Within this field, work on cognitive architectures studies how behaviour emerges through experience. Without a doubt, these artificial intelligence systems are the closest to developing synthetic emotions akin to those of a human.

Antonio Damasio, one of the neuroscientists who have studied human emotions and feelings, is categorical: “I am totally against the idea that artificial intelligence could recreate a human brain”.

And he is probably right, because in order for artificial intelligence to have human emotions, we would not only have to recreate the human brain but also its senses, body and cognition. This would involve designing robots with extremely advanced sensors, electronics and mechanical capabilities.

However, Ray Kurzweil predicts that computers will pass the Turing test by 2029, exhibiting intelligent behaviour (intelligence, self-awareness, emotional richness, etc.) indistinguishable from that of a human.

Note that passing the Turing test will require many learning mechanisms. Here, deep-learning algorithms that continue to improve as more data becomes available will be crucial. This will force many difficult ethical decisions regarding technology, especially where human data is involved.

We will see who is right. By the way, don’t miss the thoughts of María López, CEO of Bitbrain, and other speakers, during the Everis event: Love and Artificial Intelligence.
