Hitting the Memory Wall
by Baaba Andam, student writer
Sally McKee helped coin the now-famous phrase “memory wall.” In 1994, she co-authored a paper demonstrating that the growing disparity between microprocessor and Dynamic Random Access Memory (DRAM) speeds would eventually lead to computers hitting a “memory wall.” In other words, as McKee explains, “no matter how fast the CPU (Central Processing Unit) of a computer is, it will always be waiting for data from the computer’s memory.”
Her publication prompted computer scientists and engineers to reexamine microprocessor speeds and the memory hierarchies needed to support those microprocessors, i.e., to keep them fed with data. “It inspired a renaissance of research into complete memory systems, for instance,” she notes.
Since her publication, McKee has worked on several projects aimed at improving computer memory and optimizing computer performance in general. She describes her work as “trying to make smarter, more efficient use of existing technologies.”
To design better computing systems, computer architects need good design and verification tools. At present, several of McKee’s projects involve creating more efficient tools for modeling computer systems. “Computer systems are difficult to model because of their complexities,” McKee says. “Our aim is to create better ways of modeling so that we have more efficient, more accurate tools that enable us to design better memory systems, and better computing systems, in general (hardware and software).”
To reduce the number of detailed (and therefore slow) simulation experiments that computer designers must conduct to explore new design spaces, McKee’s research group, the Fusion group (named for its work at the “fusion” of software and hardware), uses predictive models based on machine learning. The approach first picks a sample of design points within the complex design space. These points are then modeled in detail via software simulation tools, and the results of those experiments are used to train neural-network models on the design space being investigated. In effect, the models “learn” from these results to identify other interesting regions of the design space (e.g., which design points maintain high performance at low power?). The trained models can then predict the behavior of other design points, helping the architect choose regions of interest to model in greater detail.
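The sample-simulate-train-predict loop described above can be sketched in a few lines. Everything here is illustrative: the `detailed_simulation` function, the design parameters (cache size and clock speed), and the performance formula are all invented stand-ins, and a simple least-squares surrogate takes the place of the group’s neural networks to keep the sketch short and dependency-light.

```python
import numpy as np

# Hypothetical stand-in for a slow, detailed simulator: maps a design
# point (cache size in KB, core clock in GHz) to a performance score.
# A real workflow would invoke a cycle-accurate simulator here.
def detailed_simulation(cache_kb, clock_ghz):
    return 2.0 * np.log2(cache_kb) + 5.0 * clock_ghz

# Step 1: pick a handful of design points from the full design space.
cache_sizes = np.array([32, 64, 128, 256, 512])   # KB (illustrative values)
clocks = np.array([1.0, 1.5, 2.0, 2.5, 3.0])      # GHz (illustrative values)
train_x = np.array([(c, f) for c in cache_sizes[::2] for f in clocks[::2]],
                   dtype=float)

# Step 2: run the expensive simulator only on those sampled points.
train_y = np.array([detailed_simulation(c, f) for c, f in train_x])

# Step 3: train a cheap surrogate on the results. (McKee's group uses
# neural networks; ordinary least squares on simple features plays the
# same role in this sketch.)
def features(x):
    cache, clock = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(cache), np.log2(cache), clock])

coef, *_ = np.linalg.lstsq(features(train_x), train_y, rcond=None)

# Step 4: use the surrogate to cheaply predict the rest of the design
# space and flag the region worth simulating in detail.
full_space = np.array([(c, f) for c in cache_sizes for f in clocks],
                      dtype=float)
predicted = features(full_space) @ coef
best = full_space[np.argmax(predicted)]
print(best)  # the predicted-best design point
```

The surrogate is trained on only 9 of the 25 design points yet scores all 25, which is the payoff of the technique: the expensive simulator runs a few times, and the learned model fills in the rest of the space.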
McKee’s work will help computer scientists better understand computer systems and guide improvements to their design.