Inspired by the human brain, Belgian researchers develop a new generation of sensors | Computer Weekly (2023)


Belgian researchers have found ways of mimicking the human brain to improve sensors and the way they pass data to central computers



  • Pat Brans, Pat Brans Associates/Grenoble Ecole de Management

Published: 06 Apr 2023

The human brain is much more efficient than the world’s most powerful computers. With an average volume of about 1,260cm³, it consumes only about 12 watts of power.


With this biological marvel, the average person learns a very large number of faces in very little time and can then recognise any of those faces right away, regardless of the expression. People can also glance at a picture and recognise objects from a seemingly infinite number of categories.

Compare that to the most powerful supercomputer in the world, Frontier, which runs at Oak Ridge National Laboratory, spanning 372m² and consuming 40 million watts of power at peak. Frontier processes massive amounts of data to train artificial intelligence (AI) models to recognise a large number of human faces, as long as the faces aren’t showing unusual expressions.

But the training process consumes a lot of energy – and while the resulting models run on smaller computers, they still use a lot of energy. Moreover, the models generated by Frontier can only recognise objects from a few hundred categories – for example, person, dog, car, and so on.

Scientists know some things about how the brain works. They know, for example, that neurons communicate with each other using spikes – brief pulses fired when accumulated electrical potential crosses a threshold. Scientists have used brain probes to look deep into the human cortex and register neuronal activity. Those measurements show that a typical neuron spikes only a few times per second – very sparse activation. On a very high level, this and other basic principles are clear. But the way neurons compute, the way they participate in learning, and the way connections are made and remade to form memories are still a mystery.

Nevertheless, many of the principles researchers are working on today are likely to be part of a new generation of chips that replace central processing units (CPUs) and graphics processing units (GPUs) 10 or more years from now. Computer designs are also likely to change, moving away from what is called the von Neumann architecture, where processing and data sit in different locations and share a bus to transfer information.

New architectures will, for example, collocate processing and storage, as in the brain. Researchers are borrowing this concept and other features of the human brain to make computers faster and more power efficient. This field of study is known as neuromorphic computing, and a lot of the work is being done at the Interuniversity Microelectronics Centre (Imec) in Belgium.

“We tend to think that spiking behaviour is the fundamental level of computation within biological neurons. But there are much deeper-lying computations going on that we don’t understand – probably down to the quantum level,” says Ilja Ocket, programme manager for Neuromorphic Computing at Imec.

“Even between quantum effects and the high-level behavioural model of a neuron, there are other intermediate functions, such as ion channels and dendritic calculations. The brain is much more complicated than we know. But we’ve already found some aspects we can mimic with today’s technology – and we are already getting a very big payback.”


There is a spectrum of techniques and optimisations that are partially neuromorphic and have already been industrialised. For example, GPU designers are already implementing some of what has been learned from the human brain; and computer designers are already reducing bottlenecks by using multilayer memory stacks. Massive parallelism is another bio-inspired principle used in computers – for example, in deep learning.

Nevertheless, it is very hard for researchers in neuromorphic computing to make inroads because there is already so much momentum behind traditional architectures. So rather than try to disrupt the computer world, Imec has turned its attention to sensors. Researchers at Imec are looking for ways to “sparsify” data and to exploit that sparsity to accelerate processing in sensors while reducing energy consumption at the same time.

“We focus on sensors that are temporal in nature,” says Ocket. “This includes audio, radar and lidar. It also includes event-based vision, which is a new type of vision sensor that isn’t based on frames but works instead on the principle of your retina. Every pixel independently sends a signal if it senses a significant change in the amount of light it receives.

“We borrowed these ideas and developed new algorithms and new hardware to support these spiking neural networks. Our work now is to demonstrate how low power and low latency this can be when integrated onto a sensor.”

Spiking neural networks on a chip

A neuron accumulates input from all the other neurons it is connected to. When its membrane potential reaches a certain threshold, the axon – the connection coming out of the neuron – emits a spike. This is one of the ways your brain performs computation. And this is what Imec now does on a chip, using spiking neural networks.

“We use digital circuits to emulate the leaky integrate-and-fire behaviour of biological spiking neurons,” says Ocket. “They are leaky in the sense that while they integrate, they also lose a bit of voltage on their membrane; they are integrating because they accumulate incoming spikes; and they are firing because the output fires when the membrane potential reaches a certain threshold. We mimic that behaviour.”
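The leaky integrate-and-fire behaviour Ocket describes can be sketched in a few lines of Python. This is a minimal illustrative model, not Imec’s implementation; the leak factor, input weight and threshold values below are arbitrary choices.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron in discrete time.
# The parameter values (leak, threshold, weight) are illustrative only.

def lif_neuron(input_spikes, leak=0.9, threshold=1.0, weight=0.4):
    """Simulate one LIF neuron over a binary input spike train."""
    v = 0.0            # membrane potential
    output = []
    for s in input_spikes:
        v = leak * v + weight * s   # leak a little, then integrate the input
        if v >= threshold:          # fire when the threshold is crossed...
            output.append(1)
            v = 0.0                 # ...and reset the membrane potential
        else:
            output.append(0)
    return output

# Only three closely spaced input spikes accumulate enough potential to fire:
print(lif_neuron([1, 1, 1, 0, 0, 1]))  # [0, 0, 1, 0, 0, 0]
```

Note that an isolated input spike produces no output at all: until enough activity accumulates, nothing fires and, on event-driven hardware, nothing downstream needs to compute.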

The benefit of that mode of operation is that until data changes, no events are generated, and no computations are done in the neural network. Consequently, no energy is used. The sparsity of the spikes within the neural network intrinsically offers low power consumption because computing does not occur constantly.

A spiking neural network is said to be recurrent when it has memory. A spike is not just computed once. Instead, it reverberates into the network, creating a form of memory, which allows the network to recognise temporal patterns, similarly to what the brain does.

Using spiking neural network technology, a sensor transmits tuples that include the X coordinate and the Y coordinate of the pixel that’s spiking, the polarity (whether it’s spiking upward or downward) and the time it spikes. When nothing happens, nothing is transmitted. On the other hand, if things change in a lot of places at once, the sensor creates a lot of events, which becomes a problem because of the size of the tuples.
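The event stream can be illustrated with a small sketch. This is a hypothetical model of an event-based readout, not a real sensor API: successive brightness values are compared per pixel, and a tuple is emitted only where the change exceeds a contrast threshold.

```python
# Hypothetical sketch of an event-based sensor readout: compare successive
# per-pixel brightness values and emit (x, y, polarity, t) tuples only where
# the change exceeds a contrast threshold. Frames are plain nested lists.

def events_from_frames(prev, curr, t, threshold=0.1):
    """Return sparse event tuples for pixels whose brightness changed."""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(prev, curr)):
        for x, (a, b) in enumerate(zip(row_prev, row_curr)):
            if b - a > threshold:
                events.append((x, y, 1, t))    # brightness increased
            elif a - b > threshold:
                events.append((x, y, -1, t))   # brightness decreased
    return events

prev = [[0.5, 0.5], [0.5, 0.5]]
curr = [[0.9, 0.5], [0.5, 0.2]]
print(events_from_frames(prev, curr, t=1))
# [(0, 0, 1, 1), (1, 1, -1, 1)] — only two of four pixels produce events
```

When the scene is static, the function returns an empty list: nothing is transmitted, which is exactly the sparsity the article describes.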


To minimise this surge in transmission, the sensor does some filtering, deciding how much bandwidth to output based on the dynamics of the scene. For example, with an event-based camera, if everything in the scene changes at once, the camera sends too much data – a frame-based system would handle that case better because it has a constant data rate. To overcome this problem, designers put a lot of intelligence on sensors to filter data – one more way of mimicking human biology.
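One simple way to cap the output bandwidth is to subsample a burst of events down to a fixed budget. This is an illustrative filter of my own, not Imec’s actual scheme, which the article says adapts to scene dynamics.

```python
# Illustrative bandwidth cap (not Imec's actual filter): when a burst of
# events exceeds the budget for one time window, keep an evenly spaced
# subsample so the output rate never exceeds the budget.

def rate_limit(events, budget):
    """Keep at most `budget` events, sampled evenly across the burst."""
    if len(events) <= budget:
        return events                       # quiet scene: pass through as-is
    step = len(events) / budget
    return [events[int(i * step)] for i in range(budget)]

burst = [(x, 0, 1, 0) for x in range(10)]   # 10 simultaneous events
print(len(rate_limit(burst, budget=4)))     # 4
```

The pass-through branch matters: in the common case of a mostly static scene, the filter adds no work at all, and the cost is only paid during bursts.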

“The retina has 100 million receptors, which is like having 100 million pixels in your eye,” says Ocket. “But the optic nerve that goes to your brain only carries a million channels. So the retina carries out a 100-times compression – and this is real computation. Certain features are detected, like motion from left to right, from top to bottom, or little circles. We are trying to mimic the filtering that goes on in the retina in these event-based sensors, which operate at the edge and feed data back to a central computer. You might think of the computation going on in the retina as a form of edge AI.”
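The retina’s 100-to-1 reduction can be caricatured with a toy pooling step: instead of forwarding 100 raw pixel events, forward a single aggregate feature per block. This is a deliberately crude illustration of compression-by-feature-detection, not a model of real retinal circuitry.

```python
# Toy illustration of retina-like compression: collapse a 10x10 block of
# per-pixel event polarities into one signed output channel (100-to-1),
# keeping only an aggregate feature rather than the raw pixels.

def pool_block(block):
    """Reduce a 2D block of per-pixel polarities to one signed channel."""
    net = sum(sum(row) for row in block)
    return 1 if net > 0 else (-1 if net < 0 else 0)

block = [[1] * 10 for _ in range(10)]   # 100 pixels all brightening
print(pool_block(block))                # one channel instead of 100
```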

People have been mimicking spiking neurons in silicon since the 1980s. But the main obstacle preventing this technology from reaching the market or any kind of real application has been training spiking neural networks as efficiently and conveniently as deep neural networks are trained. “Once you establish good mathematical understanding and good techniques to train spiking neural networks, the hardware implementation is almost trivial,” says Ocket.

In the past, people would build spiking into their network chips and then do a lot of fine-tuning to get the neural networks to do something useful. Imec took another approach, developing algorithms in software that showed that a given configuration of spiking neurons with a given set of connections would perform to a certain level. Then they built the hardware.

This kind of breakthrough in software and algorithms is unconventional for Imec, where progress is usually in the form of hardware innovation. Something else that was unconventional for Imec was that they did all this work in standard CMOS, which means their technology can be quickly industrialised.

The future impact of neuromorphic computing

“The next direction we’re taking is towards sensor fusion, which is a hot topic in automotive, robotics, drones and other domains,” says Ocket. “A good way of achieving very high-fidelity 3D perception is to combine multiple sensory modalities. Spiking neural networks will allow us to do that with low power and low latency. Our new target is to develop a new chip specifically for sensor fusion in 2023.

“We aim to fuse multiple sensor streams into a coherent and complete 3D representation of the world. Like the brain, we don’t want to have to think about what comes from the camera versus what comes from the radar. We are going for an intrinsically fused representation.

“We’re hoping to show some very relevant demos for the automotive industry – and for robotics and drones across industries – where the performance and the low latency of our technology really shine,” says Ocket. “First we’re looking for breakthroughs in solving certain corner cases in automotive perception or robotics perception that aren’t possible today because the latency is too high, or the power consumption is too high.”

Two other things Imec expects to happen in the market are the use of event-based cameras and sensor fusion. Event-based cameras have a very high dynamic range and a very high temporal resolution. Sensor fusion might take the form of a single module with cameras in the middle, some radar antennas around it, maybe a lidar, and data is fused on the sensor itself, using spiking neural networks.


But even when the market takes up spiking neural networks in sensors, the general public may not be aware of the underlying technology. That will probably change when the first event-based camera is integrated into a smartphone.

“Let’s say you want to use a camera to recognise your hand gestures as a form of human-machine interface,” explains Ocket. “If that were done with a regular camera, it would constantly look at each pixel in each frame. It would snap a frame, and then decide what’s happening in the frame. But with an event-based camera, if nothing is happening in its field of view, no processing is carried out. It has an intrinsic wake-up mechanism that you can exploit to only start computing when there’s sufficient activity coming off your sensor.”
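The intrinsic wake-up mechanism Ocket describes can be sketched as a simple gate in front of the heavy processing. The function names and the event-count threshold here are hypothetical, purely to illustrate the control flow.

```python
# Sketch of an activity-gated "wake-up" pipeline (hypothetical names and
# threshold): the expensive recogniser runs only when the sensor has
# produced enough events; a quiet scene costs almost nothing.

def classify_gesture(events):
    # Placeholder for a real recogniser (e.g. a spiking neural network).
    return "gesture"

def process_if_active(events, wake_threshold=20):
    """Run gesture recognition only if there is enough sensor activity."""
    if len(events) < wake_threshold:
        return None                  # sensor quiet: no computation performed
    return classify_gesture(events)  # enough activity: wake up and classify

print(process_if_active([]))                    # None — stays asleep
print(process_if_active([(0, 0, 1, 0)] * 50))   # gesture — wakes up
```

The same pattern generalises: with a frame-based camera the check itself would require processing every frame, whereas with an event-based sensor an empty event list is free.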

Human-machine interfaces could suddenly become a lot more natural, all thanks to neuromorphic sensing.
