Malwarebytes Labs - Monday, August 5, 2019 at 11:00 AM
How brain-machine interface (BMI) technology could create an Internet of
Thoughts
She plugged the car transportation extension into the brain-machine interface
connectors on the right side of her head, and off she went. The traffic was
relatively slow, so there was no need to stop working. She answered a few more
emails, then unplugged her work extension. Weekend mode could now be initiated.
How about we play a game? her AI BrainPal companion, Phoenix, suggested. Or
would you rather sit back and enjoy the ride?
Too futuristic? A Scalzi rip-off<https://en.wikipedia.org/wiki/John_Scalzi>?
Sci-fi that’s really more of a fantasy? Sure, it’s not technology that’s ready
to ship, but driving a car while writing emails or playing video games—all
while being physically paralyzed—is a future not-too-far-off. And no, we’re not
talking about self-driving cars.
Brain-machine interface (BMI)
technology<http://www.afanporsaber.es/files/homepage/group/loveLAB/love/classes/design/readings/bmi2.pdf>
is a field in which dedicated, big-name players are looking to develop a wide
variety of applications for establishing a direct communication pathway between
the brain (wired or enhanced) and an external device. Some of these players are
primarily interested in healthcare-centric implementations, such as enabling
paralyzed humans to use a computer<https://www.nature.com/articles/416141a>,
but for others, improving the lives of the disabled is simply a short-term goal
on the road to much broader and more far-reaching accomplishments.
One such application of BMI, for example, is the development of a Human
Brain/Cloud Interface
(B/CI)<https://www.frontiersin.org/articles/10.3389/fnins.2019.00112/full>,
which would enable people to directly access information from the Internet,
store their learnings on the cloud, and work together with other connected
brains, whether they are human or artificial. B/CI, often referred to as the
Internet of
Thoughts<https://cosmosmagazine.com/the-future/the-internet-of-thoughts-is-coming>,
imagines a world where instant access to information is possible without the
use of external machinery, such as desktop computers or Internet cables. Search
and retrieval of information will be initiated by thought patterns alone.
So exactly how does brain-machine interface technology work? And how far off
are we from seeing it applied in the real world? We take a look at where the
technology stands today, our top concerns—both for security and ethical
reasons—and how BMI could be implemented for optimal results in the future.
Brain-machine interface technology today
At some level, brain-machine interface technology already exists today.
For example, there are cochlear implants that take over the function of the
ears for people who are deaf or hard of hearing. These implants connect to the
nerves that transmit information to the brain, helping people process sound
they’d otherwise be unable to hear. There are also several methods that
allow mute or paralyzed people to communicate with others, although those
methods are still crude and slow.
However, organizations are moving quickly to transform BMI technology from
theoretical to practical. Many of the methods we’ll discuss below have already
been tested on animals and are waiting for approval to be tested on humans.
One company working on technology to link the brain to a computer is Elon
Musk’s startup Neuralink, which expects to be testing a system that feeds
thousands of electrical probes into the human brain around 2020. Neuralink’s
initial goal is to help people deal with brain and spinal cord injuries or
congenital defects. For example, such a link could enable patients to control
an exoskeleton<https://en.wikipedia.org/wiki/Powered_exoskeleton>. But the
long-term goal is to accomplish a brain-to-machine interface that could achieve
a symbiosis of human and artificial intelligence.
Working from a different angle are companies like Intel, IBM, and Samsung.
Intel is trying to mimic the functionality of a brain by using neuromorphic
engineering. This means they are building machines that work in the same way a
biological brain works. Where traditional computing works by running numbers
through an optimized pipeline, neuromorphic hardware performs calculations
using artificial “neurons” that communicate with each other.
Traditional and neuromorphic computing are two wildly different techniques,
each optimized for different methods of computing. Neural networks, for
example, excel at recognizing visual objects, so they would be better at
facial recognition and image searches.
Neuromorphic design is still in the research phase, but this and similar
projects from competitors such as IBM and Samsung should lay the groundwork for
eventual commoditization and commercial use. These projects might be able to
provide a faster and more efficient interface between a real brain and a binary
computer.
Using a technique called “neuralnanorobotics,” neuroscientists expect far more
advanced connections to be possible within decades. While the technology is
mainly being developed to facilitate
accurate diagnoses and eventual cures for the hundreds of different conditions
that affect the human brain, it also offers options in a more technological
direction.
The human brain is an amazing computer
At a possible transmission speed of up to ∼6 × 10^16 bits per second, the
human brain is able to relay an incredible amount of information super fast. To
compare, that is 60,000 Terabits per second, or 7,500 Terabytes per second, which
is a lot faster than the fastest stable Internet connection (1.6 Terabits per
second) over a long distance recorded to
date<https://www.broadbandtvnews.com/2018/06/17/com-hem-aims-for-internet-connection-record/>.
This means that in our coveted brain-to-Cloud connection, the Internet would
be the speed-limiting factor.
However, it’s most likely that the devices we are going to need to transform
one kind of data into another will determine the speed at which BMI technology
operates.
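The unit conversions above are easy to sanity-check in a few lines of Python (a quick check of the article’s arithmetic, not code from any BMI project; the 6 × 10^16 bits-per-second estimate comes from the B/CI paper cited earlier, and 1.6 Tbit/s is the record connection mentioned above):

```python
# Sanity-check the article's bandwidth arithmetic.
brain_bps = 6e16                      # ~6 x 10^16 bits per second (cited estimate)
record_tbps = 1.6                     # fastest stable long-distance link, Tbit/s

terabits_per_s = brain_bps / 1e12     # 1 Terabit = 10^12 bits
terabytes_per_s = terabits_per_s / 8  # 8 bits per byte

print(f"{terabits_per_s:,.0f} Tbit/s, or {terabytes_per_s:,.0f} TB/s")
# prints: 60,000 Tbit/s, or 7,500 TB/s
print(f"Roughly {terabits_per_s / record_tbps:,.0f}x the record Internet link")
```

So even under this rough estimate, the brain’s internal bandwidth outpaces the fastest long-distance link by four orders of magnitude.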
Beyond speed, there are other limiting factors that result in a technological
mismatch for pairing brains and computers. Neuromorphic engineering is based
on bridging the differences between the computers we are used to working with
and biological brains. Neuromorphic engineers try to build computers that more
closely resemble the human brain, using a special type of chip. Of course, it
is possible
to mimic the functioning of the brain by using regular chips and special
software, but this process is inefficient.
The main difference between logical and biological computers is in the number
of possible connections. Simply put, if you want to match the thousands of
possible connections that neurons can make, it takes a huge number of
transistors. Enter: specially-crafted chips whose architecture resembles the
human brain.
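To make the idea of communicating artificial “neurons” concrete, here is a minimal, illustrative sketch of a leaky integrate-and-fire neuron, the style of unit neuromorphic chips implement in silicon. The function name and all parameter values here are arbitrary assumptions chosen for demonstration, not taken from any vendor’s hardware:

```python
# Illustrative sketch only: one leaky integrate-and-fire "neuron".
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input over time, leak a fraction of the potential each
    step, and emit a spike (1) whenever the potential crosses threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire: signal downstream neurons
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A steady weak input makes the neuron fire periodically.
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Real neuromorphic chips wire millions of such units together with on-chip spike routing; the point of the sketch is only the event-driven, threshold-based style of computation, which differs fundamentally from running numbers through a clocked arithmetic pipeline.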
Yet, for all the brain’s marvels, its speed, and the thousands of connections
its neurons make while relaying information, we are human after all, which
means those connections can still produce flawed outcomes.
Know that frustrating feeling when you see a familiar face you’ve passed by
hundreds of times, but can’t remember her name? Or hear a tune you’ve hummed
endlessly for weeks, but don’t remember the lyrics? The number of connections a
neuron can make does not always lead directly to the right answer. Sometimes,
we are distracted or overwrought or we simply cannot retrieve known information
from whichever location it’s been stored.
What if you could put a computer to work at such a moment? Computers do not
forget information unless you delete it—and even then, it can sometimes be
found. Now add the cloud and Internet connectivity, and suddenly, you’ve got an
eidetic memory. It’d be like being able to Google something instantaneously in
your head.
Should humans be wired to machines?
As we have shown, researchers are following several paths to determine
applications for connecting our brains to the digital world, and they are
considering the strengths and weaknesses of each as they attempt to achieve a
symbiotic relationship. Whether it’s an exoskeleton that allows a paralyzed
person to walk, Artificial Intelligence-powered computers that boost speed and
visual capabilities, or connecting our brains to the Internet and the
Cloud for storing and sharing information, the applications for BMI technology
are nearly endless.
I’m sure this research can be of great benefit to people with disabilities,
enabling them to use appliances and devices more easily, move around more
freely, or communicate with less effort. And maybe one or more of these
technologies could even bring relief to those suffering from mental health
conditions or learning disabilities. In those cases, you will hear no argument
from me, and I will
applaud every step of progress made. But before I have my brain connected to
the Internet, a lot of other requirements will have to be met.
For one, there are countless concerns about the ethical development of this
technology. What is happening to the animals that are being tested? How would
we determine the best way to move forward on testing humans? Is there a point
of no return where once we hit a certain threshold, we lose control—and the
computer or AI gains it? At which point do we stop and think: Okay, we know we
can do this, but should we?
From a practical standpoint alone, there are some questions that need
answering. For example, Bluetooth would be sufficient to control the medical
applications, so why would we have to be hardwired to the Internet?
What is stopping brain-machine interface technology development today?
From where we are now in technology’s development, we see a few hurdles that
will need to be cleared to move these techniques into a fully functional BMI. At
a high level:
* Progress needs to be made on developing smaller specialized computer
chips that are capable of a multitude of connections. Remember, the first
computers were the size of a whole room. Now, they fit in our pocket.
* The research conducted in these fields will undoubtedly teach us more
about the human brain, but there is so much we still don’t know. Will what we
uncover about the brain be enough to successfully connect it to a machine? Or
will what we don’t know hinder us or put us in danger in the end?
* Approval from regulatory bodies like the FDA (Food and Drug Administration),
lawmakers, and human rights organizations will be necessary to start testing
on humans and expanding development into commercially viable products.
But there are more reasons that would stop me from using a BMI, even when the
above points have been addressed:
* Not everything you find on the Internet is true, so we would need some
type of filter beyond search ranking to determine which information gets
“downloaded” into people’s brains. How would we do so objectively? How could we
simplify this without looking at a screen of search results, headlines,
sources, and meta descriptions? Where does advertising come into play?
* The combination of healthcare and cybersecurity has never been one that
favors the security side. How will BMI integrate with hospital systems that use
legacy software? What are the implications of someone actually hacking your
brain?
* Privacy will be a huge issue, since a cloud-connected brain could
accidentally transmit information we’d rather keep to ourselves. I cannot
control my thoughts, but I do like to control which ones I speak out loud, and
which are published on the Internet.
* The good old fear of the unknown, I will readily admit. We just don’t
know what we don’t know. But who knows, maybe someday it will be as normal as
having a smart phone.
What could stop us a few decades from now?
Let’s suppose we are able to work out all the high-level issues with privacy,
security, filtering fact from fiction, and even learning all there is to know
about the human brain. In a couple decades, we might have BMI in a place where
we can conceivably release it to the public. There would still be kinks to iron
out before this technology is ready for mass adoption. They include:
* The cost of development will weigh heavily on the first use-cases. That
means, quite simply, that the first people with access to BMI tech will likely
be those with considerable wealth. With a widening gap between the haves and
the have-nots today, how much further will this divide civilization when the
top 1 percent not only control the majority of money, land, and media on the
planet, but now they have super-powered brains?! Only mass production will make
this sort of technology available to a larger part of the population.
* The early adopters would be equipped with a super power, for all intents
and purposes. Imagine interacting with a person who actually has all of human
knowledge readily available. What will this do to working relationships?
Friendships, marriages, or families? Outside of the economic imbalance
mentioned above, what sort of sociological impact will result from BMI being
unleashed?
* Physical dangers are inherent when we directly connect devices of any
sort to our bodies, and especially to our fragile brains. What are the possible
effects of a discharge of static electricity directly into our brain?
Security concerns
Given that the path to the Internet of Thoughts seems destined to include
medical research, discoveries, and applications, we fear that security will be
implemented as an afterthought, at best. Healthcare has struggled as an
industry<https://blog.malwarebytes.com/cybercrime/2019/04/sophisticated-threats-plague-ailing-healthcare-industry/>
to keep up with cybersecurity, with hospital devices or computers often
running legacy software or institutions leaking sensitive patient
data<https://blog.malwarebytes.com/threat-analysis/2019/05/medical-industry-struggles-with-pacs-data-leaks/>
on the open Internet.
Where we already see healthcare
technologies<https://blog.malwarebytes.com/101/2019/04/managing-security-medical-management-apps/>
with the capability to improve quality of life (or even save lives) hurried
through the development process without properly implementing security best
practices, we shiver at the prospect of inheriting these poor programming and
implementation habits when we start creating connections between the brain and
the Internet.
Consider the implications if cybercriminals could hack into a high-ranking
official’s BMI because of security vulnerabilities left unattended. One missed
update could mean national security is now compromised. What might an infected
BMI look like? Could criminals launch zombie-like attacks
against communities, controlling people’s actions and words? Could they hold
important information like passwords or your child’s birthday for ransom,
locking you out of those memories forever if you don’t pay up? Could they
extort celebrities or politicians for their private thoughts?
As a minor note and at the very least, I would certainly recommend using
short-range connections like Bluetooth to develop medical applications for
brain-machine interface. That might improve the chances of establishing a
secure B/CI protocol for applications that require an Internet connection.
However, the main concern with BMI technology is not whether we’re capable of
producing it, or even which applications of BMI deserve our attention. It’s
that the Internet of Thoughts will become a dangerous and dark experiment that
forever alters the way we humans communicate and interact. How can we be civil
when our peers have access to our very thoughts—abstract or grim or judgmental
or otherwise? What happens when those who cannot afford BMI attempt to compete
with those who can? Will they simply get left behind?
When we connect our brain to a machine, are we even still human?
These are questions to consider as this fairly new technology gains traction.
In the meantime, stay safe everyone!
https://blog.malwarebytes.com/artificial-intelligence/2019/08/how-brain-machine-interface-bmi-technology-could-create-internet-of-thoughts/