
Tech vs. Tech: Brain-computer interfaces vs. eye-tracking

Oct 29 2021


Aurore Geraud

An arena in which the two combatants are a brain with computer nodes attached to it, and an eye with a laser focal point over it. The header reads "Tech vs Tech" and below each respective combatant is their label: Brain-Computer Interfaces and Eye-Tracking Technology.

For a long time, my idea of immersion was imagining all the possible combinations of Tetris.

As a ‘90s kid, my experience with gaming and virtual worlds was primarily pixellated. The sense of tangibility, such as it was, happened through a controller or mouse. 

When the first commercial virtual reality headsets were released in the 2010s, they didn't feel unfamiliar or uncomfortable ... to me (even if women allegedly tend to be more prone to VR sickness). I was used to sticking my face to a screen, and the presence of remotes or controllers made me feel grounded.

But is it truly immersion if our virtual “actions” are made by proxy, pushing invisible buttons our fingers aren’t actually pushing?

The next stage of virtual reality is controller-free. Two technologies are competing to bring that reality to mainstream lives. This Tech vs. Tech is between eye-tracking technology and brain-computer interfaces.

The Technologies

As its name suggests, eye-tracking technology measures eye movements to determine where someone is looking. It is actually an old technology: The first eye-trackers date back to the early 1900s. They were intrusive devices that took the form of contact lenses with a pointer attached to them.

Nowadays, eye-tracking is done by projecting infrared light into a subject's eye, then using a camera to analyse the light reflected by the pupil and cornea to calculate gaze direction. The eye thus becomes a "pointer": Users can interact with a computer by fixating on a specific area for a predetermined number of milliseconds to perform tasks like clicking or scrolling.
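To make that dwell-based interaction concrete, here is a minimal Python sketch of the selection logic. Everything in it is illustrative: the GazeSample structure, the 800-millisecond dwell threshold and the synthetic stream of gaze coordinates stand in for whatever an actual eye-tracker SDK would provide.

```python
from dataclasses import dataclass

# Hypothetical gaze sample: screen coordinates in pixels plus a timestamp in ms.
@dataclass
class GazeSample:
    x: float
    y: float
    t_ms: float

def dwell_click(samples, target, dwell_ms=800):
    """Return the timestamp at which a 'click' fires, or None.

    `target` is an (x_min, y_min, x_max, y_max) screen region; a click is
    triggered once the gaze has stayed inside it for `dwell_ms` milliseconds.
    """
    x0, y0, x1, y1 = target
    dwell_start = None
    for s in samples:
        inside = x0 <= s.x <= x1 and y0 <= s.y <= y1
        if inside:
            if dwell_start is None:
                dwell_start = s.t_ms           # gaze just entered the target
            elif s.t_ms - dwell_start >= dwell_ms:
                return s.t_ms                  # dwell threshold reached: "click"
        else:
            dwell_start = None                 # gaze left the target; reset timer
    return None

# Toy usage: 60 Hz samples fixated on a 100x100 px button.
button = (200, 200, 300, 300)
stream = [GazeSample(250, 250, i * 16.7) for i in range(60)]
print(dwell_click(stream, button))  # fires after roughly 800 ms of dwell
```

Real systems add calibration and fixation filtering, but the core idea is the same: the gaze becomes the pointer, and sustained attention becomes the click.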

Unlike eye-tracking, brain-computer interfaces (BCIs) are relatively new: The first devices were developed in the 1970s. Like eye-tracking, their name reveals their function: Connecting the human brain to a computer. They are labelled invasive when surgically implanted in the brain, or non-invasive when they take the form of an electroencephalography (EEG) cap. Either way, the principle remains more or less the same: A multitude of sensors track brain activity and translate it into commands to a computer, enabling the user to control it hands-free.
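As a rough illustration of that "sensors to commands" pipeline, the Python sketch below turns one window of synthetic EEG samples into a toy command by comparing power in two frequency bands. The band choices, the alpha-versus-beta rule and the decode_command function are assumptions made for illustration, not how any particular BCI product works.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` (1-D EEG window, sampled at `fs` Hz)
    within the [low, high] Hz band, via a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def decode_command(eeg_window, fs=250):
    """Map one EEG window to a toy command by comparing alpha (8-12 Hz)
    and beta (13-30 Hz) power. The rule is illustrative only."""
    alpha = band_power(eeg_window, fs, 8, 12)
    beta = band_power(eeg_window, fs, 13, 30)
    return "select" if beta > alpha else "idle"

# Toy usage: one second of synthetic data with a dominant 20 Hz (beta) rhythm.
fs = 250
t = np.arange(fs) / fs
window = np.sin(2 * np.pi * 20 * t) + 0.3 * np.random.randn(fs)
print(decode_command(window, fs))  # almost certainly "select"
```

Production BCIs rely on far more sophisticated signal processing and machine-learned classifiers, but the shape of the problem is the same: electrical activity in, discrete commands out.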

Current uses

First developed as a way to study reading and learning patterns, eye-tracking is today mainly used for two purposes: Market and scientific research. Marketers use eye-tracking to study consumer engagement and behaviour on a given website, for example to gauge whether an advert or product draws the desired attention, or whether a website's design is user-friendly.

In academic research, eye-tracking is used in neuroscience and in cognitive or social psychology, for example to help diagnose neurodivergent individuals or follow their development over time.

Until recently, BCIs hadn't left the laboratory environments of research institutions and startups. Before Elon Musk brought the technology to public attention with the promise of "being one with a computer," it was mostly used in the medical field for rehabilitation, mental health, and the control of assistive robots. But commercial uses are starting to arise, and even the automotive industry is interested.

The results of this race never cease to surprise us.

And the winner is...

First eye-tracking, but ultimately BCIs.

The benefits of eye-tracking in human-machine interaction are undeniable, considering it offers seamless immersion, comfort (no need for a controller), and security (retinal biometric identification). On the other hand, accuracy and precision vary considerably across participants and environments.  

The HTC Vive Pro Eye, one of the first VR headsets to integrate eye-tracking technology, was released in 2019. Since then, the immersion industry hasn’t fully embraced it. 

Of course, the gaming industry was already experimenting with eye-tracking to provide more immersive experiences. But it seems that BCIs attract more interest in terms of future commercial use. For instance, Valve, the video game developer and creator of Steam, recently revealed its experiments with BCI because it considers the technology the next step in immersive gaming. (Maybe this is thanks to, or in spite of, that awkward video of a monkey playing video games with its brain, courtesy of Musk's Neuralink brain chip.)

Researchers have compared both technologies to see which is more efficient for interacting with computers. Eye-tracking is faster to set up and calibrate, but more tiring to work with; BCI produces better results, but takes more time to set up and is considered more stressful by subjects.

Both technologies have a bright future that extends beyond immersion: They lay the groundwork for greater accessibility and inclusion. For example, current experiments with both include gauging their efficacy in helping people with motor impairments and disabilities interact with virtual interfaces.

For a long time, people with disabilities have found safe haven in virtual worlds, praising them for the sense of community they bring and the social interactions they offer. In 2017, 50 percent of the population of Second Life was made up of people with disabilities. Eye-tracking and BCI can further enable people with disabilities to participate more fully in the virtual economy. The capacity to access design tools to create user-generated content, or even to mint or invest in NFTs, opens a path to financial independence, too.

Overall, the gaming industry will likely adopt BCI faster due to its greater need for immersion. Both technologies will be used in the virtual economy, but not necessarily in the same timeframe. Eye-tracking will likely come first, because it's already out of research labs … and in 10 to 15 years, BCI may replace it as it becomes more socially palatable, and because it is ultimately more powerful.

Tech vs. Tech is a regular L’Atelier Insights feature that pits two up-and-coming technologies or trends against each other, using a single lynchpin… like the future of virtual collaboration.


Illustrations by Debarpan Das.

Aurore Geraud

Senior Researcher

Aurore Geraud joined L’Atelier in 2012 as a journalist and managing editor, before becoming an analyst with the company’s consulting and acceleration units. Today she uses her investigative skills to research emerging technologies and their unforeseen opportunities. Originally from France, Aurore is an alumna of Sorbonne Nouvelle & University of Paris 8 Vincennes-Saint-Denis.


