Unlike conventional TPUs, the red computer chip is the first to use carbon nanotubes — tiny cylindrical structures made of carbon atoms arranged in hexagons — instead of traditional semiconductor materials like silicon. (Photo: Sankai)
AI models are data-intensive and require large amounts of computing power to run. This poses a significant obstacle to training and scaling machine learning models, especially as demand for AI applications increases. That is why scientists are working to create new components, from processors to computer memory, designed to consume less energy while running the necessary calculations.
Google scientists created TPUs in 2015 to address this challenge. These specialized chips act as dedicated hardware accelerators for tensor operations—the complex mathematical calculations used to train and run AI models. By offloading these tasks from the central processing unit (CPU) and graphics processing unit (GPU), TPUs allow AI models to be trained faster and more efficiently.
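The "tensor operations" mentioned above are, at their core, large matrix multiply-accumulate computations. As an illustrative sketch (not code from the paper or from Google's hardware), the forward pass of a single dense neural-network layer shows the kind of arithmetic a TPU's matrix unit offloads from the CPU and GPU:

```python
def dense_layer(W, x, b):
    """Forward pass of one dense layer: y = W @ x + b.

    Every output element is a chain of multiply-accumulate operations --
    exactly the workload a TPU's matrix unit is built to accelerate.
    W is a list of rows, x the input vector, b the bias vector.
    """
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

# A 2x2 toy example: [[1, 0], [0, 2]] @ [3, 4] + [1, 1]
print(dense_layer([[1, 0], [0, 2]], [3, 4], [1, 1]))  # [4, 9]
```

Real models stack thousands of such layers over far larger matrices, which is why dedicated matrix hardware pays off.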
Unlike conventional TPUs, however, the new chip is the first to use carbon nanotubes—tiny cylindrical structures made of carbon atoms arranged in a hexagonal pattern instead of traditional semiconductor materials like silicon. This structure allows electrons (charged particles) to flow through them with minimal resistance, making carbon nanotubes excellent conductors of electricity.
According to the Chinese scientists, their TPU consumes only 295 microwatts (μW) of power (where 1 W is 1,000,000 μW) and can deliver 1 trillion operations per watt (1 TOPS/W), a standard measure of energy efficiency. This makes China's carbon-based TPU nearly 1,700 times more energy efficient than Google's chip.
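As a back-of-envelope check on the figures quoted above (reading "1 trillion operations per watt" as 10¹² operations per joule, i.e. per watt-second — an interpretation for illustration, not a figure from the paper):

```python
# Arithmetic on the article's stated figures only.
OPS_PER_JOULE = 1e12   # quoted efficiency: 1 trillion operations per joule
POWER_W = 295e-6       # quoted power draw: 295 microwatts

# Energy cost of a single operation, expressed in picojoules (1 pJ = 1e-12 J).
energy_per_op_pj = (1.0 / OPS_PER_JOULE) / 1e-12

# Throughput sustainable at that power budget, in operations per second.
ops_per_second = OPS_PER_JOULE * POWER_W

print(energy_per_op_pj)  # 1.0  -> about one picojoule per operation
print(ops_per_second)    # 2.95e8 operations per second
```

In other words, at the quoted efficiency each calculation costs on the order of a picojoule, which is what lets the chip run on microwatts of power.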
“From ChatGPT to Sora, artificial intelligence is ushering in a new revolution, but traditional silicon-based semiconductor technology is increasingly unable to meet the demands of processing massive amounts of data. We have found a solution to this global challenge,” said Zhiyong Zhang, co-author of the paper and professor of electronics at Peking University.
The new TPU includes 3,000 carbon nanotube transistors and is built using a systolic array architecture—a network of processors arranged in a grid. This allows the TPU to perform many calculations simultaneously by coordinating the flow of data so that each processor handles a small portion of the overall task.
This parallel processing allows calculations to be performed much faster, which is important for AI models that process large amounts of data. It also reduces the frequency with which memory—specifically a type called static random access memory (SRAM)—needs to read and write data, Zhang said. By minimizing these operations, the new TPU can perform calculations faster while using much less power.
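The dataflow described above can be sketched in software. The following toy simulation of a systolic matrix multiply is an assumption-laden illustration of the general technique, not the chip's actual design: each grid cell plays the role of one processing element, consuming one value from the left and one from above per time step and accumulating its own output, so data flows through the grid rather than being repeatedly fetched from memory.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing A @ B.

    A is n x m, B is m x p. Cell (i, j) holds accumulator C[i][j].
    With the standard skewed schedule, at time step t cell (i, j)
    receives A[i][k] from its left neighbour and B[k][j] from above,
    where k = t - i - j, and adds their product to its accumulator.
    """
    n, m, p = len(A), len(A[0]), len(B[0])
    C = [[0] * p for _ in range(n)]
    # Enough steps for the data wavefront to sweep the whole grid.
    for t in range(n + m + p):
        for i in range(n):          # all cells update in parallel in hardware
            for j in range(p):
                k = t - i - j       # which operand pair reaches this cell now
                if 0 <= k < m:
                    C[i][j] += A[i][k] * B[k][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

In silicon (or carbon), the inner two loops run concurrently across the grid, and each operand is read from SRAM once and then passed neighbour to neighbour, which is the memory-traffic saving Zhang describes.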
In the future, the researchers say, similar carbon nanotube-based technology could provide a more energy-efficient alternative to silicon-based chips. They plan to continue refining the chip to improve its performance and scalability, including exploring ways to integrate the TPU into silicon CPUs.