Neuromorphic
The word “neuromorphic” comes from two Greek words, “neuro” meaning nerves, and “morphic” meaning shape. A system is neuromorphic if it is modelled on a biological nervous system such as the human brain.
Neuromorphic Computing
Neuromorphic computing means computer hardware and software that processes information in ways similar to a biological brain. As well as enabling machines to become more powerful and more useful, the development of neuromorphic computing should teach us a great deal about how our brains work.
Neuromorphic Hardware
In the 1980s, Carver Mead, a pioneering American computer scientist who coined the term “Moore’s Law”, began thinking about how to develop AIs based on architectures similar to mammalian brains. Initially, neuromorphic hardware used analogue circuits rather than digital ones, but today most of the leading neuromorphic hardware is digital.
The most important characteristic of neuromorphic computers is their sparsity of computation. In a traditional artificial neural network, every neuron performs a calculation on every pass through the network, and stores the result as a number, usually a high-precision floating-point number. In neuromorphic computing, only a minority of neurons perform a calculation at any one time.
In mammalian brains, neurons only “spike”, or “fire”, when stimulated appropriately. This is replicated in neuromorphic systems, and it makes them much less energy intensive, like mammalian brains. Training a Large Language Model (LLM) like GPT-4 has been said to use enough electricity to power a city the size of New York, while your brain runs on roughly 20 watts, about the power of a single light bulb.
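To make the contrast concrete, here is a minimal Python sketch, with made-up layer sizes and an assumed 2% firing rate, comparing a dense layer, where every neuron produces a floating-point activation on every pass, with an event-driven update, where only the neurons that actually spiked contribute any work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense, conventional layer: every neuron computes a floating-point
# activation on every pass, whether or not its output matters downstream.
weights = rng.normal(size=(1000, 1000))
inputs = rng.normal(size=1000)
dense_out = np.tanh(weights @ inputs)        # 1,000,000 multiply-adds

# Event-driven, neuromorphic-style update: only the few neurons that
# spiked this timestep contribute anything, so most work is skipped.
spikes = rng.random(1000) < 0.02             # ~2% of neurons fire this step
sparse_out = weights[:, spikes].sum(axis=1)  # ~20,000 additions
```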
Neuromorphic hardware is an alternative to the main established computing paradigm, which is the von Neumann architecture.
Von Neumann architecture
John von Neumann was a Hungarian genius and polymath who played an important role in the development of the first computers, from the end of World War Two onwards.
Von Neumann proposed a system with a central processing unit (CPU), a memory unit, and input and output devices. Inside the CPU is a control unit, which manages the traffic of information and instructions, and an arithmetic logic unit (ALU), which performs calculations. The memory unit houses both data and programmes.
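The toy Python sketch below, using an invented three-instruction machine rather than any real instruction set, illustrates the stored-program idea: instructions and data sit in the same memory, and the control unit fetches and executes them one at a time, with every fetch and every operand access crossing the boundary between CPU and memory.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a simple control unit fetches and executes them one at a time.
# (Hypothetical instruction set, invented purely for illustration.)

memory = [
    ("LOAD", 5),    # address 0: load the value at address 5 into the accumulator
    ("ADD", 6),     # address 1: add the value at address 6
    ("STORE", 7),   # address 2: store the accumulator at address 7
    ("HALT", None), # address 3: stop
    None,           # address 4: unused
    2,              # address 5: data
    3,              # address 6: data
    0,              # address 7: result goes here
]

accumulator = 0       # the ALU's working register
program_counter = 0   # the control unit's pointer into memory

while True:
    op, addr = memory[program_counter]  # fetch: each instruction crosses the CPU-memory boundary
    program_counter += 1
    if op == "LOAD":
        accumulator = memory[addr]      # another trip to memory for the operand
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[7])  # 5
```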
The architecture is simpler than its early rivals, but it has a bottleneck: data must be shuttled back and forth between the CPU and the memory, and instructions and data cannot travel over that connection at the same time. As computers have become faster, this bottleneck has become more problematic.
Neuromorphic architectures often address this bottleneck by combining memory chips with processing chips, or by locating them close to each other.
Neuromorphic Software
Neuromorphic software consists of algorithms and models that operate more like biological brains than traditional computer systems, which are based on the von Neumann paradigm. One example of neuromorphic software is spiking neural networks (see below).
Neuromorphic AI
Neuromorphic AI systems use neuromorphic hardware and/or software to handle cognitive tasks like information retrieval, pattern recognition, sensory processing, and decision making. Their developers argue that they are more adaptive, scalable, and efficient than traditional AIs.
Spiking neural networks
Spiking neural networks (SNNs) are an attempt to get closer to the brain’s architecture, and thus create more efficient and more robust AIs.
In biological brains, neurons transmit signals down fibres called axons. If the signal is strong enough, and if the internal states of the neurons are appropriate, then the signal will cross a gap called a synapse into a second neuron. The signal then travels down a slightly different type of fibre called a dendrite towards the main body, or soma, of the second neuron. The second neuron may get excited enough by the incoming signal to send a new signal out along its axons. And so on – the signal propagates across the brain.
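Spiking neural networks mimic this behaviour: each artificial neuron accumulates incoming signals over time and emits a discrete spike only when its internal potential crosses a threshold. Below is a minimal Python sketch of a leaky integrate-and-fire (LIF) neuron, the most common building block of SNNs; the parameter values are illustrative and not taken from any particular system.

```python
import numpy as np

# A leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# towards rest, accumulates incoming current, and emits a spike (then
# resets) only when it crosses a threshold.
# Parameter values here are illustrative, not drawn from any real chip.

leak = 0.9        # fraction of potential retained each timestep
threshold = 1.0   # firing threshold
potential = 0.0
spike_times = []

# 20 timesteps of silence, then a constant input current of 0.15
input_current = np.concatenate([np.zeros(20), 0.15 * np.ones(80)])

for t, current in enumerate(input_current):
    potential = leak * potential + current
    if potential >= threshold:   # the neuron fires only when sufficiently stimulated
        spike_times.append(t)
        potential = 0.0          # reset after the spike

print(spike_times)  # the neuron stays silent until the input drives it over threshold
```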
Hoped-for benefits of neuromorphic computing
- Efficiency: significantly lower power consumption
- Speed: much less latency as data does not need to be transmitted to and from centralised data centres
- Adaptability: new data can be integrated, enabling more robust performance in dynamic environments
- Scalability: modular neuromorphic systems should scale efficiently as more computational units (neurons) are added
- Robustness: better at reacting to unexpected circumstances
Neuromorphic AI and machine consciousness
Some argue that neuromorphic AI systems are more likely to become conscious than other types of AI. It may also be easier to detect the early signs of consciousness in them.
Neuroevolution
Neuroevolution is not a form of neuromorphic computing, but it is a related concept, so we are summarising it here.
Neuroevolution uses evolutionary algorithms to generate artificial neural networks, and to evolve the behaviour of agents in artificial life, video games, and robotics. It requires only a measure of a network’s performance at a task, whereas supervised learning algorithms must be trained on a corpus of correct input-output pairs. This makes neuroevolution a natural alternative to reinforcement learning, which likewise learns from a reward signal rather than labelled examples, although the two are distinct techniques.
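As an illustration, here is a toy neuroevolution sketch in Python: a population of small fixed-architecture networks is improved by selection and mutation, and the only training signal each network receives is a single fitness score. The task (XOR), the network size, and all hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task and fitness measure: the algorithm only ever sees one number per network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 0], dtype=float)

def forward(genome, x):
    # Unpack a flat genome into the weights and biases of a 2-4-1 network.
    w1 = genome[:8].reshape(2, 4)
    b1 = genome[8:12]
    w2 = genome[12:16]
    b2 = genome[16]
    hidden = np.tanh(x @ w1 + b1)
    return np.tanh(hidden @ w2 + b2)

def fitness(genome):
    # A single score summarising task performance: higher is better.
    return -np.mean((forward(genome, X) - targets) ** 2)

population = [rng.normal(size=17) for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest networks
    children = [p + 0.1 * rng.normal(size=17)      # mutate copies of the parents
                for p in parents for _ in range(4)]
    population = parents + children                # next generation

best = max(population, key=fitness)
print(np.round(forward(best, X), 2))  # should approach [0, 1, 1, 0]
```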