Associate Professor Mario Lanza and his team demonstrated a groundbreaking silicon transistor that mimics neural and synaptic behaviours, marking a significant breakthrough in neuromorphic computing.
Singapore researchers have transformed the humble silicon transistor into a powerful building block for artificial intelligence that could dramatically reduce the size and energy requirements of next-generation AI systems.
The innovation, announced yesterday in the journal Nature, allows a single conventional transistor to mimic both electronic neurons and synapses – the fundamental components needed to build artificial neural networks that operate more like the human brain.
Led by Associate Professor Mario Lanza from the National University of Singapore’s Department of Materials Science and Engineering, the team discovered a way to exploit a physical phenomenon previously considered a failure mechanism in transistors.
“Once the operating mechanism is discovered, it’s now more a matter of microelectronic design,” said Professor Lanza, whose approach could potentially democratize advanced AI hardware development beyond the handful of companies with cutting-edge fabrication capabilities.
The breakthrough addresses a core inefficiency in current AI systems. Unlike traditional computers, which must constantly shuttle data between memory and processing units, brain-inspired “neuromorphic” systems process and store information in the same location. This approach promises massive efficiency gains for AI applications.
What makes the discovery particularly significant is its accessibility. The team didn’t rely on exotic materials or cutting-edge manufacturing – they used conventional 180-nanometer transistors, a mature technology that’s widely available and can be produced by Singapore-based companies rather than requiring the latest fabrication facilities in Taiwan or Korea.
Dr. Sebastián Pazos, the paper’s first author from King Abdullah University of Science and Technology, highlighted this democratic aspect. “Traditionally, the race for supremacy in semiconductors and artificial intelligence has been a matter of brute force, seeing who could manufacture smaller transistors and bear the production costs that come with it. Our work proposes a radically different approach based on exploiting a computing paradigm using highly efficient electronic neurons and synapses.”
The technique hinges on setting the resistance of a transistor’s bulk terminal to specific values. This triggers a phenomenon called “impact ionisation,” creating current spikes similar to what happens when a biological neuron activates. By adjusting this resistance, the transistor can also mimic synapses, the connections between neurons that strengthen or weaken as learning occurs.
Current approaches to building artificial neurons require at least 18 transistors per neuron and six per synapse. The NUS innovation could reduce these requirements to a single transistor, potentially shrinking the hardware by factors of 18 and 6 respectively.
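As a rough illustration of what that reduction means at scale, the following sketch compares total transistor counts using the figures cited in the article (at least 18 transistors per neuron, six per synapse, versus one each). The network size is hypothetical, chosen only to show the order of magnitude:

```python
def transistor_count(neurons, synapses, per_neuron, per_synapse):
    """Total transistors needed to implement a spiking network in hardware."""
    return neurons * per_neuron + synapses * per_synapse

# Hypothetical network: one million neurons, ten million synapses
neurons, synapses = 1_000_000, 10_000_000

# Conventional CMOS neuron/synapse circuits (figures from the article)
conventional = transistor_count(neurons, synapses, per_neuron=18, per_synapse=6)

# Single-transistor neurons and synapses, as in the NUS approach
single = transistor_count(neurons, synapses, per_neuron=1, per_synapse=1)

print(f"Conventional: {conventional:,} transistors")  # 78,000,000
print(f"Single-transistor: {single:,} transistors")   # 11,000,000
print(f"Overall reduction: {conventional / single:.1f}x")  # 7.1x
```

Because synapses vastly outnumber neurons in any realistic network, the overall saving for a whole chip lands between the per-neuron factor of 18 and the per-synapse factor of 6.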
For systems containing millions of artificial neurons and synapses, this reduction would be transformative, allowing far more complex AI models to run on smaller, more energy-efficient hardware. The team has already designed a cell with two transistors – called Neuro-Synaptic Random Access Memory (NSRAM) – that can switch between operating as either a neuron or synapse as needed.
The discovery comes at a critical time in the AI hardware race. Major chip manufacturers and tech giants are investing billions in specialized AI chips, but most approaches focus on incremental improvements to existing architectures rather than fundamental rethinking of how electronic components can function.
While still in the research stage, the approach is already drawing attention from leading semiconductor companies. If successfully commercialized, it could enable more powerful AI in everyday devices without requiring massive data centers or their enormous energy consumption.
The innovation represents a particularly striking example of finding value in what was previously considered a flaw. Impact ionisation has long been seen as a failure mechanism to be avoided in transistor design, but Professor Lanza’s team has managed to control it and transform it into a highly valuable feature.
As researchers worldwide race to develop the next generation of AI hardware, this approach offers a pathway that doesn’t depend on pushing fabrication to ever-smaller transistor sizes – potentially allowing a wider range of companies and countries to participate in advanced AI chip development beyond the current leaders in East Asia and the United States.