The BRAIN initiative

The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative was officially unveiled on April 2, 2013, by President Obama. There had been considerable excitement and speculation in the preceding weeks, as the President had mentioned a large-scale brain research project in his State of the Union address. Very few details were given before the official announcement, and it was reported that this would be a decade-long, multi-billion-dollar project seeking to map the (eventually human) brain's activity by recording every single spike from every single neuron. Presented like that, the project's goals did not seem more realistic than those of the EU-funded Human Brain Project, which aims to simulate the human brain at all spatiotemporal scales within ten years.

But these goals are merely slogans used to justify the cost of such multi-billion-dollar projects to taxpayers and politicians. The actual objectives are less ambitious and more realistic, and important progress may be achieved even if the brain is not entirely "solved" within a decade. A few more details about the BRAIN initiative were given during the announcement. It will initially be a $100 million investment in 2014, funded by DARPA, the NIH, the NSF, and a few private institutions including the Allen Institute for Brain Science. The goal is not to map the entire human brain in a decade, which would have been ridiculous, but more realistically to spur the development of innovative technologies for large-scale recordings in the brain. Simple organisms will be considered first, with recordings in humans coming much later.

The fact that this project concerns only the experimental tools, and not more theoretical investigations such as the neural code, is quite interesting. The neural code remains deeply mysterious. It is not an overstatement to say that we have very little idea of how neurons encode and process information, or how cognitive states emerge from the interactions between billions of neurons. There are theories, of course, but they are all somewhat basic and speculative because we lack the tools to test them properly. We cannot progress in our understanding of the brain through the development of mathematical theories alone. We need experimental tools to test those theories, and that is precisely what the BRAIN initiative is about. Even if the project lasts ten years, it will only be a first small step in the scientific odyssey of brain exploration. This is not the adventure of the decade, but the adventure of the century.

Experimental tools already exist, of course. Techniques like MRI, EEG, and MEG record the average activity of large assemblies of neurons. Multielectrode arrays and optical imaging techniques record the individual activity of tens to hundreds of neurons. But there is currently a lack of tools for observing tens of thousands of individual neurons (roughly the size of a cortical column). That is the order of magnitude the BRAIN initiative targets, and it will require innovative approaches.

The example of the retina is particularly interesting. Multi-electrode recordings in the retina made it possible to record from several neurons simultaneously. Pairwise correlations have been observed, and it has been shown that they almost completely capture the population activity in small networks of a few tens of neurons, whereas a model assuming independence between the neurons fails completely at describing the collective behavior of the network. This is an important result, as it suggests that precise synchronization plays an important role in a neuronal network, and it would not have been possible without multi-electrode arrays. Further technological progress made it possible to record from even more neurons: in a larger network of about 100 neurons, it has been shown that pairwise correlations are no longer enough, and that higher-order correlations must be taken into account together with a sparse coding assumption. This example illustrates how successive technological achievements help us refine our models of how the brain works.
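To make the point about the independence assumption concrete, here is a minimal sketch with simulated data (not the actual retinal recordings, nor the maximum-entropy analysis used in those studies): a handful of neurons sharing a common drive fire together far more often than an independent model predicts.

```python
# Minimal sketch: how an independence assumption misses the collective
# activity of a small group of correlated neurons. The data are simulated
# (a shared drive plus private noise), not actual retinal recordings.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 5, 200_000

shared = rng.random(n_bins) < 0.1                  # common input, 10% of bins
private = rng.random((n_neurons, n_bins)) < 0.02   # independent private events
spikes = (shared | private).astype(int)            # binary array (n_neurons, n_bins)

# Empirical distribution over the 2^n binary population patterns
# (bit i of the pattern code corresponds to neuron i).
weights = 2 ** np.arange(n_neurons)
p_obs = np.bincount(weights @ spikes, minlength=2 ** n_neurons) / n_bins

# Independent-model prediction: product of single-neuron firing probabilities.
rates = spikes.mean(axis=1)
p_ind = np.empty(2 ** n_neurons)
for k in range(2 ** n_neurons):
    bits = (k >> np.arange(n_neurons)) & 1
    p_ind[k] = np.prod(np.where(bits, rates, 1 - rates))

# The all-neurons-active pattern occurs orders of magnitude more often than
# independence predicts, and the overall mismatch shows up in the KL
# divergence between the two distributions.
k_all = 2 ** n_neurons - 1
print(f"P(all active): observed {p_obs[k_all]:.4f} vs independent {p_ind[k_all]:.6f}")
mask = p_obs > 0
kl = np.sum(p_obs[mask] * np.log2(p_obs[mask] / p_ind[mask]))
print(f"D_KL(data || independent) = {kl:.3f} bits")
```

With only a handful of neurons the full pattern distribution can be enumerated directly; the studies mentioned above go further and ask how much of this mismatch is recovered once pairwise (and then higher-order) correlations are built into the model.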

Some argue that technological developments should always follow theoretical concepts rather than the other way around, and that the BRAIN initiative is therefore fundamentally flawed. I find it simplistic to assume a unidirectional link between technology and theory. Both need to evolve in parallel. Technology can yield conceptual progress, as the example of the retina shows, but technological advances can also be driven by conceptual discoveries. There is no reason to discard either direction.

Although the project puts the emphasis on experimental tools, the data-related aspects will also be crucial. Huge amounts of data will be generated once these tools become available, and we will need enough computing power to handle them. It is no surprise that Google, Microsoft, and Qualcomm were represented at an early project meeting in January at Caltech. For example, very large silicon probes for in vivo multichannel extracellular recordings are being developed. They will contain hundreds of microelectrodes, each sampled at 20 kHz at least; that amounts to hundreds of gigabytes per recording hour. Spike sorting algorithms, which extract single-unit spiking activity from the raw data, will have to scale to these huge data sets. We may expect to record from thousands of neurons with these probes.
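To get a sense of the volumes involved, here is a back-of-the-envelope sketch; the channel count, sampling rate, and sample size below are assumptions chosen for illustration, not the specifications of any particular probe.

```python
# Back-of-the-envelope data rate for a dense extracellular probe.
# All numbers below are illustrative assumptions, not device specs.
n_channels = 500          # "hundreds of microelectrodes"
sampling_rate = 20_000    # 20 kHz per channel
bytes_per_sample = 4      # pessimistic: samples handled as 32-bit values

bytes_per_second = n_channels * sampling_rate * bytes_per_sample
gb_per_hour = bytes_per_second * 3600 / 1e9
print(f"{bytes_per_second / 1e6:.0f} MB/s, about {gb_per_hour:.0f} GB per recording hour")
# -> 40 MB/s, about 144 GB per recording hour
```

Double the channel count, or keep filtered copies alongside the raw signal, and a single session quickly climbs to several hundred gigabytes.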

Other tools that were mentioned include nanoprobes and DNA-based recording units. Using DNA tapes to record spike trains seems very futuristic to me!

Some are skeptical about the need to record individual neurons to understand the brain, since it is a complex system that is "more than the sum of its parts". They draw an analogy with gases: statistical mechanics works with a statistical description, with no need to know the exact state of every single molecule. But a brain is really not a gas! I am not sure that gases think or exhibit behaviors as complex as human thoughts. The analogy has its limits. Neurons are much more complex than molecules in a gas, and they are far more tightly connected, with connections that are not only local. The brain contains multiscale spatiotemporal structures: multiple layers, cortical columns, a wide variety of neuron types, short-term and long-term memory through synaptic plasticity, modulation by hormones and glial cells, and so on. Neuronal wiring is not totally random, and complex computations occur at different levels: microcircuits, cortical columns, cortical areas, etc. There is definitely something to learn by recording the activity of thousands of neurons. Of course, statistical approaches are also relevant, but they need to be adapted, and that requires adapted experimental tools.

There have been other criticisms. Will money be cut from existing projects to fund this new initiative? Even though President Obama compared it to the Apollo program, BRAIN's goal is vaguer than "landing a man on the Moon and returning him safely to the Earth": how will we assess whether the goal has been achieved in ten years? Are large-scale projects even suited to such fundamental and complex problems? Wouldn't independent projects conducted by different labs around the world be more appropriate? Maybe, but sometimes a big project like this can strongly boost scientific progress in a domain, especially when the emphasis is on technological achievements, where funding needs to reach a critical mass to trigger innovation. A flagship project like this can also reach the public, inspire younger generations, and bring them to science.

In the end, I am reasonably enthusiastic about this project, even if the details have yet to be sorted out. A team of scientists, including William Newsome, has been tasked with developing a precise multi-year scientific plan in the coming months. These are interesting times for neuroscience.