Future and ethics of neurotechnology and brain-computer interfaces

By the Bitbrain team
July 31, 2019

Cars connected to the mind, brain implants to cure Alzheimer's, or text written just by thinking about it are some of the advances neurotechnology is working on. Brain-computer interfaces (BCI systems) will completely change the way we live, but are we aware of the ethical issues and implications these developments will involve?

In 2017, Elon Musk announced that he was founding Neuralink, a neurotechnology company with one goal: to create brain implants that connect our brain functions to artificial intelligence systems. According to Musk, in the short term Neuralink will try to use these implants to treat major neurological diseases such as Alzheimer's or epilepsy. At the same time, Musk is convinced that humans cannot compete with artificial intelligence, so another of his objectives is to create a neural interface that allows us to merge with machines.

Shortly after Elon Musk made this announcement, Facebook also took to the stage to reveal that they had been betting on neurotechnology for more than a year. From their Building 8, a team of specialists was working to transmit text messages with the mind through non-invasive technology (a cap or headband that would capture brain signals through light). They aim to achieve a speed of 100 words per minute.

Whether the technological developments of Elon Musk and Facebook will come to fruition in the short term remains unknown, and the scientific community has many doubts considering the current state of the art in brain-computer interface technologies.

However, similar technologies are already being used to diagnose neurological diseases, treat phobias, and perform cognitive rehabilitation in dementia, ADHD, and the elderly; in neurorehabilitation, to recover mobility with neuroprostheses in people with movement disabilities, or to recover motor control and motor function with BCI-controlled robotic arms; in entertainment (video games with interfaces connected directly to the brain); in neuromarketing services; and in wellness (such as cognitive stimulation and cognitive training). Other industries have also adopted this technology, such as the automotive industry: Nissan's Brain-to-Vehicle solution provides the first real-time system for detecting and analyzing motor-cortex activity related to driving.

In Musk's case, invasive devices are already being used in BCI research for extreme cases of Parkinson's, epilepsy, and other neurological diseases. However, very few patients worldwide have sophisticated implanted devices (even counting basic devices, the number is only a few thousand), owing to the risk that this type of intervention poses today.

In addition to the possible dangers of brain surgery, the reality is that our understanding of how neurons communicate is not yet sophisticated enough, and we are far from being able to establish two-way communication with an artificial intelligence system, an external "brain" that would make us smarter.

In the case of Facebook, the highest speed achieved typing words with the mind has been approximately 35 characters per minute, using invasive technology [Ref] (for reference, a rate of at least 250 characters per minute is typical when typing on a keyboard). Moreover, if the system is simplified (shortening calibration periods and removing engineer supervision throughout the test) so that the BCI user is entirely autonomous, the pace is even slower: between three and seven characters per minute [Ref]. We therefore seem a long way from a person writing 100 words per minute, approximately 500 characters per minute, autonomously and with non-invasive devices.

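The gap between the quoted BCI rates and Facebook's goal can be made concrete with a quick back-of-the-envelope calculation, assuming the common convention of about five characters per English word (the figures below are the ones quoted above, not new measurements):

```python
# Rough comparison of the typing speeds quoted above, assuming an
# average English word length of ~5 characters (a common convention).
CHARS_PER_WORD = 5

def wpm_to_cpm(words_per_minute: float, chars_per_word: float = CHARS_PER_WORD) -> float:
    """Convert words per minute to characters per minute."""
    return words_per_minute * chars_per_word

facebook_goal_cpm = wpm_to_cpm(100)  # Facebook's stated goal: 100 wpm
invasive_bci_cpm = 35                # best reported rate with invasive technology
autonomous_bci_cpm = 7               # upper bound for fully autonomous use

print(facebook_goal_cpm)                        # 500.0 characters per minute
print(facebook_goal_cpm / invasive_bci_cpm)     # ~14x the best invasive result
print(facebook_goal_cpm / autonomous_bci_cpm)   # ~71x the autonomous rate
```

Even against the best invasive result, the stated goal is more than an order of magnitude away.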
In any case, the reality is that technology companies are beginning to take an interest in neurotechnology. Their expectations of what this technology will be able to do in the next two or three years are in many cases exaggerated, probably because they conceive of the human brain as a computer that can be reprogrammed.

However, there is no doubt that the multimillion-dollar investments these companies are beginning to make will have a substantial impact on the advance of applied neuroscience and neurotechnology (there is currently talk of investment of more than 100 million dollars a year by private companies, with exponential growth expected over the next few years).

What Is the Future of Neurotechnology? How Can We Prepare for It?

Perhaps now is the time to start thinking about where and how to apply neurotechnology before its applications get out of hand. It is clear that we are heading towards a future where neurotechnology could manipulate people's mental processes, could allow telepathic communication and could technologically increase human capabilities. 

All this could be beneficial for human beings, but it could also be tremendously negative: widening social inequalities, changing the essence of the human being as an individual linked to their body and with a private mental life, or opening new ways for hackers, companies, or even governments to exploit and manipulate people. Many questions arise about how to prepare for this future.

According to a recent article published in Nature by prestigious neuroscientists, artificial intelligence experts, and ethicists, there are four areas we should start thinking about:

  1. Privacy and Consent: This challenge is not exclusive to neurotechnology and is well known. The Cambridge Analytica scandal is just one example of what unfettered access to data in the wrong hands can do, and alarm obviously grows when the information in question comes directly from your brain (see cybersecurity on brain-computer interfaces).
    It is therefore proposed that the default option for neural data be not to share, with users obliged to give their authorization explicitly. The sale, financial transfer, and use of neural data must also be strictly regulated.
  2. Identity and Agency: This challenge involves facing the question of to what extent neurotechnology and artificial intelligence may in the future make us lose our human character, the link with our physical body, or the ability to make our own decisions.
    If we connect with artificial intelligence systems that support us in decision making, that allow us to communicate mentally with other human beings, or that enable us to act in places where we are not physically present, to what extent will we still be human beings with the ability to choose our actions?
    What's more, this dilemma is already beginning to appear in the results of scientific research projects [Ref] in which a person undergoing deep brain stimulation develops an altered personality, with a character and behavior different from before.
    What should be done in such cases? Should the stimulation be maintained or removed? Perhaps the answer lies in asking the patient, but which one: the original patient, or the patient whose personality has been altered by the implant?
  3. Regulation in Capacity Building: If neurotechnology combined with artificial intelligence allows its users to somehow become "superior" beings with greater sensory, physical, or mental capacities, will this not change social norms, raise problems of equitable access, and generate new forms of discrimination or greater social inequality? It is difficult to anticipate the future, but it would be desirable to begin regulating these aspects, in the same way that is being done with developments in genetics.
  4. Impartiality in the Development of Technologies: Again, this problem is not exclusive to neurotechnology but affects many other disciplines. It is necessary to consider how the biases of technology developers and of society end up influencing the technology generated, causing it to favor particular groups of people and harm others.

For example, in 2015 it was shown that, given two otherwise identical profiles, Google Ads showed the female profile far fewer ads for well-paid jobs than the male one [Ref]. This kind of bias, already present in some technologies, could also appear in neurotechnology, with effects that could be very harmful.

In the coming years, we will witness major technological innovations that will bring about a substantial change in the way we interact with the world around us. While many of these innovations bring benefits that will improve our quality of life, we need to be aware of the repercussions they may also have and to establish laws that protect us against undesirable consequences. Technology is neither good nor bad; what matters is the use we make of it.

Ethical aspects in science and technology should never be ignored, especially in the face of advances in scientific and technological knowledge that could bring about change at both individual and social levels.
