August 25, 2015 Tom

Featured Project: Neural Drift

NeuralDrift is a collaborative Muse game (and an homage to the movie Pacific Rim) in which two operators must control a LEGO Mindstorms robot by matching levels of mental activity. We caught up with one of its creators, Hubert Banville, to learn more:

So what’s the inspiration behind NeuralDrift?

The main inspiration behind the NeuralDrift came from the movie Pacific Rim – an action-packed blockbuster in which humanity uses super-advanced neurotechnology to pilot huge fighter robots against aliens (it would definitely have been my favorite movie had I been a few years younger). Although we are not there yet, we wanted to show people that this technology is closer than they might think. We made the NeuralDrift a multiplayer, collaborative game in which people can control a simple robot using their mental activity. This simple Brain-Computer Interface (BCI) shows both the limitations and the possibilities of neurotechnology today.

What’s it like to drive it?

Driving the NeuralDrift can be challenging at first, because one has to stay still and minimize movements as much as possible to avoid noisy signals, while switching back and forth between a relaxed and a focused state to control the robot. Specifically, the NeuralDrift has two players, or “pilots”, who each wear a Muse headband. Each pilot controls one side of the robot, either concentrating to increase its speed or relaxing to decrease its speed and eventually stop it. The tricky part, though, is collaborating to know when to concentrate or relax: if both pilots concentrate at once, the robot goes straight; if one concentrates more than the other, the robot turns. People usually get really excited when they see they can actually control it just by thinking.
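As a rough illustration of that control scheme (not the team's actual code, which was written in MATLAB), here is how two pilots' concentration scores might map onto the wheel speeds of a two-motor, differential-drive robot; the function name and the [0, 1] score range are assumptions for the sketch:

```python
def _clamp(v: float) -> float:
    """Keep a focus score inside [0, 1]."""
    return max(0.0, min(1.0, v))

def motor_speeds(left_focus: float, right_focus: float,
                 max_speed: float = 100.0) -> tuple:
    """Each pilot drives one side: equal focus -> straight, unequal -> turn."""
    return _clamp(left_focus) * max_speed, _clamp(right_focus) * max_speed

# Both pilots equally concentrated: both wheels at the same speed (straight).
print(motor_speeds(0.8, 0.8))
# Left pilot much more focused: the left wheel spins faster (the robot turns).
print(motor_speeds(0.9, 0.3))
```

Relaxing lowers a pilot's score toward 0, slowing that side's wheel, which is exactly the collaborative tension Hubert describes: matched focus drives straight, mismatched focus steers.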


Hubert and teammate Yannick Roy demo NeuralDrift at Wearhacks 2014.

Tell us a little bit about yourself and your team. What are your backgrounds?

Our team was awesome! We had two electrical engineers, two biomedical engineers (including myself) and one designer. All four engineers were at the time pursuing graduate studies in closely related fields such as EEG and BCI research, machine learning and medical imaging. We took care of the hardware (robot and EEG recording) and software (EEG data classification, communication, gamification) parts. Our designer did a wonderful job at turning a cool but raw concept into a visually coherent and attractive game.


Hubert and team hard at work!

What was it like working with Muse?

The Muse is both a high-quality EEG device and a very stylish wearable. Setting it up is super fast, as you just have to put it on, without the need for gel or saline. We picked it for the hackathon mainly for these reasons. Our interface with the Muse was done through the MuLES EEG Server, which handles communication with the Muse SDK and a few other consumer EEG devices, saving us the time of porting the game if people wanted to try it with other devices.

Besides Muse, what tools did you use in the project?

Our robot was made using LEGO Mindstorms, a kit that contains motors, sensors, and a central programmable unit with which you can communicate via Bluetooth. We also used an Android tablet to display the game’s interface, programmed in the Processing language. Our entire backend handling device communication and EEG processing and classification was done in MATLAB and ran on a separate laptop. Our material was graciously provided by the MuSAE Lab in Montreal.


The NeuralDrift website says that you measured the drivers’ “mental activity”. Can you explain that a little?

One of the most studied phenomena in EEG is the modulation of the alpha and beta bands. An increase in alpha band power (that is, the power of the EEG signal in the frequency band between approximately 8 and 13 Hz) happens whenever someone relaxes, and especially when they also close their eyes. Conversely, tasks requiring focused, active thinking are known to increase power in the beta band, at higher frequencies (15 to 30 Hz). The NeuralDrift’s algorithm relies on this phenomenon: before playing the game, each user has to first relax, and then concentrate, for a few seconds. Using this data, the game trains a statistical model to discriminate between the two brain states. When playing, the user’s mental activity is measured and translated into an input command to control the robot, which is what makes it a Brain-Computer Interface.
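To make the band-power idea concrete, here is a small Python/SciPy sketch (the team's backend was MATLAB, and the 220 Hz sampling rate and synthetic signal below are assumptions for illustration): power in a band is estimated from the signal's power spectral density, and an alpha-dominated signal shows more 8–13 Hz power than 15–30 Hz power.

```python
import numpy as np
from scipy.signal import welch

FS = 220  # sampling rate in Hz, assumed here for illustration

def band_power(signal, fs, lo, hi):
    """Mean power spectral density of `signal` within [lo, hi] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic example: a strong 10 Hz (alpha) oscillation plus a little noise,
# standing in for the EEG of a relaxed, eyes-closed user.
t = np.arange(0, 4, 1 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

alpha = band_power(eeg, FS, 8, 13)   # relaxation marker
beta = band_power(eeg, FS, 15, 30)   # focused-thinking marker
print(alpha > beta)  # True: this synthetic signal is alpha-dominated
```

Comparing these two numbers over time is, in essence, the raw material the game's statistical model works from.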

What sort of processing and analysis did you perform on the raw EEG to extract the information you needed?

EEG devices spit out very noisy signals that require a lot of processing before they can actually be used purposefully. In our case, we went for a classic approach, inspired by what is done in a typical research lab. We first collected some EEG data from two of our team members while they were performing different mental tasks such as mental subtraction, word generation and relaxation. We bandpass-filtered the data and removed really noisy chunks, then extracted many features we felt could be helpful in distinguishing someone’s brain state (as explained above), such as band powers and signal statistics. We tried many classifiers, such as logistic regression and linear and kernel SVMs, and picked the best-performing configuration of the system. That’s what was finally used in the NeuralDrift (the code is available on GitHub).
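A sketch of that pipeline, written in Python/scikit-learn rather than the team's actual MATLAB, with synthetic data standing in for real EEG recordings: bandpass filtering, band-power and signal-statistic features, then cross-validated comparison of several candidate classifiers. All frequencies, window lengths, and feature choices here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 220  # assumed sampling rate in Hz

def bandpass(x, lo=1.0, hi=40.0, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def extract_features(window, fs=FS):
    """Band powers plus simple signal statistics for one EEG window."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs))
    feats = [psd[(freqs >= lo) & (freqs <= hi)].mean()
             for lo, hi in [(4, 8), (8, 13), (15, 30)]]  # theta, alpha, beta
    feats += [window.std(), np.abs(np.diff(window)).mean()]
    return feats

# Synthetic stand-in data: alpha-heavy "relaxed" windows (label 0) vs
# beta-heavy "focused" windows (label 1).
rng = np.random.default_rng(1)
t = np.arange(0, 2, 1 / FS)
X, y = [], []
for label, freq in [(0, 10), (1, 22)]:
    for _ in range(30):
        win = np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)
        X.append(extract_features(bandpass(win)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Try several classifiers and keep whichever cross-validates best.
for clf in [LogisticRegression(max_iter=1000),
            SVC(kernel="linear"), SVC(kernel="rbf")]:
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```

Picking the best-scoring configuration from a comparison like this mirrors the team's approach of trying many classifiers and keeping the winner.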


The final demo!

It sounds like there are some pretty exciting changes happening at BCIMontreal, which is becoming NeuroTechX. Can you tell us a bit more about this new phase of the organization?

Some members of the NeuralDrift team (including me) are involved in BCI Montréal, a non-profit that promotes innovation in neurotechnology in and outside of Montreal. We recently joined forces with similar groups in other cities, such as San Francisco, and renamed our initiative NeuroTechX. We now aim to create an international community of neurotech enthusiasts and hackers to push the field forward (and help people build much cooler things than the NeuralDrift!). We have a bunch of great announcements to come, including a student club competition, so anybody interested in neurotechnology, stay tuned!


Comments (4)

  1. Hubert

    We didn’t use the concentration/mellow values, but instead based the game on user-specific data collected during a ‘calibration’ session. The game thus collects a few seconds of data every time it is restarted, and then trains a classifier that is used to detect the player’s state.

  2. Can you say a bit more about the ‘calibration’ session? Is it similar to the calibration in the Calm app (“Think of as many famous places as you can.”), or is it asking the user to be intentional in some way?

  3. Hubert

    The calibration session in the NeuralDrift is not the same as in the Calm app. I said calibration, but maybe ‘machine training’ would be more accurate. During that ‘machine training’ session, we record data for two different mental tasks/states, for example an active mental-imagery task vs. relaxation. Then we train a binary classifier on this data. We thus have a customized classifier for each player, trained each time she/he plays, that can be used in-game to detect the player’s state. Since it’s a full machine learning approach, players can also experiment and try different mental states during the machine training session, and the game will try to distinguish between those two states in-game.
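For readers curious what that per-player ‘machine training’ step could look like, here is a minimal, hypothetical sketch in Python/scikit-learn (the real system used MATLAB, and every number below is made up): record feature vectors under two instructed states, fit a binary classifier, then use it in-game to decode the player's state.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend calibration features: one 2-D feature vector per window
# (e.g. alpha power, beta power). State 0 = relaxation, state 1 = an
# active mental-imagery task. All values are made up for illustration.
rng = np.random.default_rng(42)
relaxed = rng.normal(loc=[2.0, 0.5], scale=0.3, size=(20, 2))  # alpha-dominant
focused = rng.normal(loc=[0.5, 2.0], scale=0.3, size=(20, 2))  # beta-dominant

X = np.vstack([relaxed, focused])
y = np.array([0] * 20 + [1] * 20)

# A fresh classifier is trained every session, so it is customized to the
# player (and to whatever pair of mental states they chose to train on).
clf = LogisticRegression(max_iter=1000).fit(X, y)

# In-game: classify each new window to decode the player's current state.
print(clf.predict([[1.9, 0.6]])[0])  # alpha-dominant window -> 0 (relaxed)
```

Because the model only learns to separate whatever two states were recorded, players are free to experiment with different mental strategies during training, just as Hubert describes.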
