Ultra-efficient transistor breakthrough could shake the AI computing world

Researchers in Singapore have turned the humble silicon transistor into a powerful artificial intelligence component that can greatly reduce the scale and energy requirements of next-generation AI systems.
The innovation, announced yesterday in Nature, allows a single conventional transistor to function as both an electronic neuron and an electronic synapse: the basic building blocks needed for artificial neural networks that work more like the human brain.
Led by Mario Lanza, associate professor in the Department of Materials Science and Engineering at the National University of Singapore, the team discovered a way to exploit a physical phenomenon previously thought to be a transistor failure mechanism.
“Once the operational mechanism is understood, it becomes more a problem of microelectronics design,” Professor Lanza said, adding that the approach could open the development of advanced AI hardware to players beyond the few companies with cutting-edge manufacturing capabilities.
The breakthrough addresses a core inefficiency of current AI systems. In traditional computers, data must be constantly shuttled between memory and processing units; brain-inspired “neuromorphic” systems instead process and store information in the same location. This approach is expected to deliver large efficiency gains for AI applications.
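To make the contrast concrete, here is a minimal numeric sketch (an illustration of the principle, not the NUS circuit): in a conventional machine, every weight is fetched from memory before it is used, whereas in an analog in-memory array the weights are stored as conductances and the multiply-accumulate happens where they sit.

```python
import numpy as np

# Toy model of in-memory (analog crossbar) computation, for illustration only.
# Weights live as conductances G; applying input voltages V produces output
# currents in place (Kirchhoff's current law), with no fetch/compute/writeback
# loop over a memory bus.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # synaptic weights as conductances (arbitrary units)
V = rng.uniform(0.0, 1.0, size=8)        # input activations as voltages

# Von Neumann style: every multiply-accumulate implies a weight fetch from memory.
I_sequential = np.zeros(4)
for row in range(4):
    for col in range(8):
        I_sequential[row] += G[row, col] * V[col]

# In-memory style: the same result emerges in one step where the weights are stored.
I_in_memory = G @ V

assert np.allclose(I_sequential, I_in_memory)
```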
What makes the discovery particularly important is its accessibility. The approach relies on neither exotic materials nor cutting-edge fabrication: the team used conventional transistors built on a mature 180 nm process, a proven technology that a Singaporean foundry can produce, rather than requiring the latest manufacturing facilities in Taiwan or South Korea.
This democratizing aspect is emphasized by Dr. Sebastián Pazos, the paper's first author, from King Abdullah University of Science and Technology. “Traditionally, competition in semiconductors and artificial intelligence has been a brute-force race to see who can make smaller transistors and absorb the resulting production costs. Our work proposes a fundamentally different approach: a computing paradigm built on highly efficient electronic neurons and synapses.”
The technique hinges on setting the resistance at the transistor's bulk terminal to a specific value. This triggers a phenomenon called “impact ionization,” which produces a current spike similar to what happens when a biological neuron fires. By adjusting this resistance, the transistor can also mimic a synapse, strengthening or weakening the connection between neurons as learning occurs.
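In software terms, the spiking behaviour described above resembles the textbook leaky integrate-and-fire neuron model. The sketch below is that generic abstraction with made-up parameter values; it is not the device physics from the paper:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Leaky integrate-and-fire neuron: a standard abstraction of spike-and-reset
    behaviour. All parameter values here are illustrative, not device values."""
    v = 0.0                          # membrane potential
    spikes = []
    for i in input_current:
        v += dt * (i - leak * v)     # integrate the input, with a leak term
        if v >= threshold:           # threshold crossing: an abrupt current spike,
            spikes.append(1)         # analogous to the impact-ionization jump
            v = 0.0                  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input eventually drives the neuron across threshold,
# producing periodic spikes.
print(lif_neuron([0.3] * 12))        # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]
```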
Current methods of building artificial neurons require at least 18 transistors per neuron and 6 transistors per synapse. The NUS innovation reduces both to a single transistor, potentially shrinking the hardware by factors of up to 18 and 6, respectively.
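The arithmetic behind those savings is easy to verify. Using the per-neuron and per-synapse counts above and a hypothetical network size:

```python
# Back-of-envelope transistor budget (counts per neuron/synapse from the
# article; the network size is a made-up example).
neurons, synapses = 1_000_000, 10_000_000

conventional = neurons * 18 + synapses * 6      # classic CMOS neuron/synapse circuits
single_transistor = neurons * 1 + synapses * 1  # one transistor per element

print(f"conventional:      {conventional:,} transistors")       # 78,000,000
print(f"single-transistor: {single_transistor:,} transistors")  # 11,000,000
print(f"reduction:         {conventional / single_transistor:.1f}x")  # ~7.1x
```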
For systems containing millions of artificial neurons and synapses, this reduction could be transformative, allowing more complex AI models to run on smaller, more energy-efficient hardware. The team has also designed a two-transistor cell, called neuro-synaptic random access memory (NS-RAM), that can be switched between neuron and synapse modes as needed.
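Conceptually, the cell is a single component with two selectable personalities. A behavioural Python stand-in (the class name and interface are invented for illustration; the real NS-RAM is an analog circuit, not software) might look like this:

```python
class NSRAMCell:
    """Behavioural stand-in for a mode-switchable neuro-synaptic cell.
    Hypothetical API; thresholds and weights are illustrative only."""

    def __init__(self, mode="neuron"):
        self.mode = mode        # "neuron" or "synapse"
        self.potential = 0.0    # membrane-like state (neuron mode)
        self.weight = 0.5       # connection strength (synapse mode)

    def step(self, x):
        if self.mode == "neuron":
            self.potential += x
            if self.potential >= 1.0:   # threshold crossing: emit a spike
                self.potential = 0.0
                return 1.0
            return 0.0
        else:                           # synapse mode: scale the signal
            return self.weight * x

cell = NSRAMCell(mode="neuron")
print([cell.step(0.4) for _ in range(6)])  # [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]
```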
The discovery comes at a critical moment in the AI hardware race. Major chip manufacturers and tech giants are investing billions of dollars in specialized AI chips, but most approaches focus on incremental improvements to existing architectures rather than fundamentally rethinking how electronic components work.
Although still in the research phase, the method has attracted the attention of leading semiconductor companies. If commercialized successfully, it could enable more powerful AI in everyday devices without requiring massive data centers or their energy consumption.
This innovation represents a particularly compelling example of finding value in what was previously considered a flaw. In transistor design, impact ionization has long been treated as a failure mechanism to avoid, but Professor Lanza's team managed to control it and turn it into a highly valuable feature.
As researchers worldwide race to develop next-generation AI hardware, this approach offers a pathway that does not depend on pushing manufacturing toward ever-smaller transistor sizes, potentially allowing more companies and countries to participate in advanced AI chip development beyond the current leaders in East Asia and the United States.