In Real-time, AI Simulates Microprocessor Performance

Duke University computer scientists have developed a new AI method for accurately predicting the power consumption of any type of computer processor more than a trillion times per second while using very little computational power. The technique, known as APOLLO, has been validated on real-world, high-performance microprocessors and may help improve efficiency and inform the development of new microprocessors.

The approach is described in detail in a paper that was selected as the best publication at MICRO-54, the 54th Annual IEEE/ACM International Symposium on Microarchitecture, one of the top-tier conferences in computer architecture.

“This is an intensively studied problem that has traditionally relied on extra circuitry to address,” said Zhiyao Xie, first author of the paper and a PhD candidate in Yiran Chen’s laboratory at Duke. “However, our approach runs directly on the microprocessor in the background, which opens up a plethora of new possibilities. That, I believe, is why people are so enthusiastic about it.”

Modern computer processors perform computation cycles on the order of 3 trillion times per second. Keeping track of the power consumed by such intensely fast transitions is critical to maintaining the chip's overall performance and efficiency. When a processor draws too much power, it can overheat and damage itself. Fluctuations in power demand can also cause internal electromagnetic complications that slow down the entire processor.

Computer engineers can protect their hardware and improve its performance by implementing software that predicts and prevents these undesirable extremes before they occur. Such schemes come at a cost, however: keeping up with modern microprocessors usually requires dedicating valuable extra hardware and computational power.
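To make the predict-and-prevent idea concrete, here is a minimal sketch of a proactive power-capping loop. All names, thresholds, and frequency levels are illustrative assumptions, not details from the APOLLO paper: the controller steps the clock down before a predicted power overshoot and ramps it back up when there is headroom.

```python
# Hypothetical proactive power-capping loop (DVFS-style).
# The cap, the hysteresis margin, and the frequency table are all
# made-up values for illustration only.
POWER_CAP_W = 15.0
FREQ_LEVELS_GHZ = [1.0, 1.5, 2.0, 2.5, 3.0]

def next_frequency(predicted_power_w: float, current_level: int) -> int:
    """Pick the next frequency level from a per-cycle power prediction.

    Throttles one step ahead of a predicted overshoot; ramps back up
    only when the prediction is comfortably below the cap.
    """
    if predicted_power_w > POWER_CAP_W and current_level > 0:
        return current_level - 1  # slow down before the spike lands
    if (predicted_power_w < 0.8 * POWER_CAP_W
            and current_level < len(FREQ_LEVELS_GHZ) - 1):
        return current_level + 1  # headroom available: speed back up
    return current_level          # in the hysteresis band: hold steady
```

Because the prediction arrives before the power event itself, this kind of controller can act preemptively rather than reacting after an overheat or voltage droop has already begun.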

“APOLLO approaches an ideal power estimation algorithm that is both accurate and fast and that can easily be built into a processing core at a low power cost,” Xie explained. “Because it can be used in any type of processing unit, it has the potential to become a common component in future chip design.”

AI models microprocessor performance in real-time

The key to APOLLO's power is artificial intelligence. The algorithm developed by Xie and Chen uses AI to identify and select only 100 of a processor's millions of signals, those most closely correlated with its power consumption. It then builds a power-consumption model from those 100 signals and monitors them in real time to predict the overall chip's performance.
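The select-then-model structure can be sketched as follows. This is a toy stand-in, not the paper's method: the data is synthetic, the signal count is tiny compared with a real core's millions, and plain correlation ranking substitutes for the pruning-based selection APOLLO actually uses. What it illustrates is the two-step shape: rank all signals by their relationship to measured power, keep the top 100, and fit a small linear per-cycle power model on just those.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-cycle signal activity: 5,000 binary signals
# observed over 2,000 cycles (a real core has millions of signals).
n_cycles, n_signals, k = 2000, 5000, 100
toggles = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)

# Synthetic "measured" power: driven by a small hidden subset of signals.
true_idx = rng.choice(n_signals, size=60, replace=False)
true_w = rng.uniform(0.5, 2.0, size=60)
power = toggles[:, true_idx] @ true_w + rng.normal(0.0, 0.1, n_cycles)

# Step 1: score every signal by |correlation| with power, keep the top k.
ct = toggles - toggles.mean(axis=0)
cp = power - power.mean()
scores = np.abs(ct.T @ cp) / (
    np.linalg.norm(ct, axis=0) * np.linalg.norm(cp) + 1e-12)
selected = np.argsort(scores)[-k:]

# Step 2: fit a linear per-cycle power model on just those k signals.
X = np.column_stack([toggles[:, selected], np.ones(n_cycles)])
weights, *_ = np.linalg.lstsq(X, power, rcond=None)

# The compact model now predicts per-cycle power from only 100 signals.
pred = X @ weights
rel_err = np.abs(pred - power).mean() / power.mean()
```

The payoff of this structure is cost: evaluating a 100-term linear model every cycle is cheap enough to run on the chip itself, which is what makes on-die, real-time power estimation plausible.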

Because this learning process is data-driven and autonomous, it can be implemented on almost any computer processor architecture, including those yet to be invented. And, while it does not require any human designer expertise to perform its function, the algorithm may be able to assist human designers in performing theirs.

“After the AI selects its 100 signals, you can look at the algorithm and see what they are,” Xie said. “Many of the choices are intuitive, but even if they aren’t, they can provide feedback to designers by informing them which processes are most strongly correlated with power consumption and performance.”

The work is part of a collaboration with Arm Research, a computer engineering research organization that aims to analyze industry disruptions and develop advanced solutions many years before they are deployed. APOLLO has already been validated on some of today’s most powerful processors with the assistance of Arm Research. However, the researchers believe that the algorithm still requires extensive testing and evaluation on a variety of platforms before it can be used by commercial computer manufacturers.

“Arm Research collaborates with and receives funding from some of the industry’s biggest names, such as Intel and IBM, and predicting power consumption is one of their top priorities,” Chen added. “Projects like this give our students the opportunity to work with these industry leaders, and it’s the kind of work that makes them want to keep working with and hiring Duke graduates.”