
How machine learning can help to understand brain signals

If a neurotechnological prosthesis is to act upon commands from the brain, those commands have to be interpreted correctly. This is a challenging task, because the amount of data is huge and the signals the brain emits change over time. Computer scientists therefore want to leave it to learning machines to solve this ever-changing puzzle.

Gunnar Grah talked to Martin Riedmiller, Machine Learning Lab

The usual way to make a computer do something goes like this: you decide what problem the machine is supposed to solve for you, then you write a programme that does exactly that. Modern washing machines work this way; they have been told exactly what to do at which moment. This approach works very well for clearly defined problems, like getting dirt out of laundry. But there are many problems where the solution is not obvious, and where the starting conditions differ a little each time and therefore call for different solutions. In such cases, computer scientists opt for letting the computer figure out the optimal solution for itself, an approach called “machine learning”.

The human as trainer, not programmer

Machine learning somewhat resembles how humans acquire new skills: through examples, the structuring of a problem, and gaining experience. If, for instance, a computer is supposed to learn the different appearances of a broad-leaved tree and a conifer, a human “trainer” would provide the learning machine with examples of both classes, telling it in each case which class the tree in the picture belongs to. After some time, the computer will be able to correctly classify the presented trees, even though no human ever provided it with an explicit list of features to look out for in order to master the task (see the sketch below). If a problem requires not just a classification but learning a complex procedure, like a robot walking on two legs without falling over, a trainer will at first be required to tell the machine whether an approach was “good” or “bad”. After sufficient training, the machine will master the problem and show some robustness in its solution, even if the situation varies slightly. A trained machine will also be able to handle a new situation – e.g. the robot carrying something and therefore having a different weight distribution – faster than if it were starting from scratch.
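To make the first kind of learning concrete with a toy-sized example, the following Python sketch trains a classifier on a handful of labelled “trees”. The features, numbers and choice of library (scikit-learn) are illustrative assumptions, not part of the actual research.

```python
# Minimal sketch of supervised classification: a "trainer" provides labelled
# examples and the machine infers the decision rule itself.
# Features and data are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per tree: [average leaf width in cm, crown roundness 0..1]
X_train = np.array([[6.0, 0.9], [5.5, 0.8], [7.2, 0.85],   # broad-leaved trees
                    [0.2, 0.3], [0.1, 0.4], [0.3, 0.25]])  # conifers
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = broad-leaved, 1 = conifer

clf = LogisticRegression().fit(X_train, y_train)

# The trained model classifies a new, unseen tree without ever having been
# given an explicit list of rules by the programmer.
print(clf.predict([[0.25, 0.35]]))  # -> [1], i.e. conifer
```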

Learning from the brain

The secret of machine learning lies in the nature of the software at its base. The solution to a problem does not exist as specific lines of code that dictate how input from the environment is processed in order to reach a desired outcome. Instead, the software employs an “artificial neural network”. This is a kind of programme that takes its inspiration from the structure of the brain, where many individual elements (the nerve cells) solve relatively simple problems (like adding or multiplying values), but, being connected and present in large numbers, are able to tackle much more complex tasks. An artificial neural network takes this concept into the realm of bits and bytes, but plays by the same rules. Learning, just as in the real brain, manifests itself in changes to the connections between individual neurons. These connectivity patterns between a multitude of mathematically represented nerve cells are where the solution to a given problem is stored. In consequence, there is no neat equation that provides the solution, even though the final answer the machine gives to a given problem is simple.
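The following toy network, written in plain Python/NumPy, illustrates this point: everything it “knows” is stored in two weight matrices, and learning consists solely of adjusting those connection strengths. The task (the XOR function) and all numbers are purely illustrative.

```python
# Toy artificial neural network: the "knowledge" lives entirely in the
# connection weights, which are adjusted during learning.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connections (weights), initialised randomly.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: simple units (weighted sums plus a nonlinearity),
    # connected in layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: learning changes the connections, nothing else.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta_out)
    W1 -= 0.5 * (X.T @ delta_h)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```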

The biological brain has acted as an inspiration, but computer scientists take this metaphor only as far as it is useful to them. To date, making the individual artificial neuron more and more complex, and thus ever more life-like, has not provided any real advantage, and software engineers therefore tend to go for what works best, not necessarily what comes closest to a real nerve cell. In terms of the general architecture, however, natural evolution has led to time-tested concepts that are being exploited. Machine learning programmes that analyse images, for instance, draw upon the structure of the visual cortex and the way it breaks down the task into extracting individual features from a scene.
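As a rough illustration of this feature-extraction idea (not taken from the project itself), the snippet below slides a small edge-detecting filter across a synthetic image, much as convolutional networks borrow the receptive-field idea from the visual cortex.

```python
# Sketch of the "visual cortex" idea: extract simple local features (here,
# vertical edges) from an image before anything more complex happens.
# The image is synthetic; the 3x3 filter is a classic edge detector.
import numpy as np
from scipy.signal import convolve2d

image = np.zeros((8, 8))
image[:, 4:] = 1.0  # a bright right half, i.e. one vertical edge

# A small filter, loosely analogous to the receptive field of a feature-detecting cell.
vertical_edge = np.array([[-1, 0, 1],
                          [-1, 0, 1],
                          [-1, 0, 1]])

feature_map = convolve2d(image, vertical_edge, mode="valid")
print(np.abs(feature_map)[0])  # large values only where the window straddles the edge
```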

Using artificial networks to understand real ones

Within BrainLinks-BrainTools, the task is a formidable one: machine learning is meant to help analyse signals recorded from brain activity. There is, however, no intrinsic connection that makes this link between a real and an artificial neural network something special: machine learning is neither the only approach for making sense of the activity of billions of nerve cells, nor is it any less suitable for analysing completely different sets of data, such as stock market developments.

Ultimately, machine learning should make it possible to read out movement commands from the brain’s surface. These could then be sent to a prosthesis or an assistive device like a robot – the use-case which the strategy “LiNC” within the cluster aims for. So far, it is an open question which kinds of commands can actually be discriminated within the patterns of signals that electrodes read out from the brain’s surface. Could machine learning reliably tell apart different grasp movements, for instance those needed to pick up a pen versus a bottle? Will the computer look for individual movements, or will it be able to extract whole intentions like “I want to put the bottle back into the fridge”? Such higher-level concepts would be an elegant way to steer a prosthetic arm: the user would not have to think all commands in the right order and to the right extent, but could leave it to the autonomously planning prosthesis to figure out the best way after receiving a general command. However, there is a pitfall: with abstract concepts, it might become even more difficult to know what an activity pattern actually stands for. We might find a pattern that appears every time the person thinks “fridge”, but it might actually stand for an association the person makes, like “cold”. Given the infinite number of possible thoughts and possible associations, this will be an ambitious challenge for data analysis by machine learning.
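Purely for illustration, the following sketch shows the general shape of such a decoding step: multi-channel signals are reduced to simple per-channel features and fed to a classifier that tries to tell two grasp types apart. The data are synthetic, and the channel count, feature choice and classifier are assumptions, not details of the actual system.

```python
# Illustrative sketch of decoding movement intent from multi-channel
# brain-surface recordings. The data are synthetic stand-ins; a real
# decoder would work on recorded signals and carefully chosen features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 100, 16, 250

# Fake recordings: the two grasp types differ slightly in per-channel power.
signals = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)   # 0 = "pen grasp", 1 = "bottle grasp"
signals[labels == 1, :4, :] *= 1.5           # class-dependent amplitude on a few channels

# Simple feature: log power per channel within the analysis window.
features = np.log((signals ** 2).mean(axis=2))

decoder = LinearDiscriminantAnalysis().fit(features[:80], labels[:80])
print("held-out accuracy:", decoder.score(features[80:], labels[80:]))
```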

Furthermore, it remains to be seen how many activity patterns are identical between individuals. Ideally, a patient should not have to go through lengthy training sessions to teach the system all manner of different movements from scratch; the device should arrive with some pre-installed knowledge of the most basic patterns. Even then, there would be enough left for the system to learn in order to adapt to the individual patient. After all, no two brains – not even those of twins – are identical, and it is very likely that the same holds true for the brain’s activity patterns. But computer scientists are optimistic that these training sessions would not take too long – minutes or hours rather than days or weeks.
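In software terms, this could resemble the following sketch: a decoder pre-trained on data from earlier users is briefly re-calibrated with a few trials from a new patient. The synthetic data, the split into “population” and “patient”, and the choice of a simple linear model are all illustrative assumptions.

```python
# Sketch of the "pre-installed knowledge" idea: a decoder trained on data from
# previous users is re-calibrated with a short session from a new patient.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)

# Population data: two classes separated along one feature direction.
X_pop = rng.normal(size=(500, 8)); y_pop = rng.integers(0, 2, size=500)
X_pop[y_pop == 1, 0] += 2.0

# New patient: same structure, but shifted (no two brains are identical).
X_pat = rng.normal(size=(30, 8)) + 0.5; y_pat = rng.integers(0, 2, size=30)
X_pat[y_pat == 1, 0] += 2.0

decoder = SGDClassifier(random_state=0)
decoder.partial_fit(X_pop, y_pop, classes=np.array([0, 1]))  # pre-installed knowledge
decoder.partial_fit(X_pat[:20], y_pat[:20])                  # short calibration session
print("patient accuracy:", decoder.score(X_pat[20:], y_pat[20:]))
```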

But machine learning is not only a useful tool on the user’s end. Artificial neural networks could also come in handy in helping a robotic arm or another assistive device execute movements and find the ideal action to reach a certain goal. What makes machine learning an attractive candidate for this task is that its artificial neural network has learned the solution to a problem rather than being spoon-fed a single answer. The system is therefore able to dynamically handle changes in the task, or in the environment in which it has to be carried out.
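The “trainer says good or bad” idea from above can also drive action selection. The toy reinforcement-learning sketch below learns, from reward alone, which action brings a simple one-dimensional “arm” to its goal; the states, actions and rewards are invented for illustration, and a real assistive device is vastly more complex.

```python
# Toy sketch of learning actions rather than being told them: tabular
# Q-learning on a tiny 1-D "reaching" task.
import numpy as np

n_states, goal = 6, 5          # positions 0..5, goal at the right end
actions = [-1, +1]             # move left or move right
Q = np.zeros((n_states, len(actions)))
rng = np.random.default_rng(3)

for episode in range(200):
    s = 0
    while s != goal:
        # Mostly follow the best known action, occasionally explore.
        a = rng.integers(len(actions)) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next = int(np.clip(s + actions[a], 0, n_states - 1))
        reward = 1.0 if s_next == goal else -0.01    # only a "good/bad" signal
        Q[s, a] += 0.5 * (reward + 0.9 * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:goal].argmax(axis=1))  # learned policy: always move right (action index 1)
```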

 

Machine Learning in a real-life example
