NC State researchers have found a way to separate memory management from a program's main computation, letting hard-to-parallelize software finally take advantage of multi-core processors.
Researchers from NC State University have devised a way to break up programs such as web browsers and word processors so that they can use multiple threads. While PC games and many other applications already do this, plenty of common programs still run their entire workload on a single core, even though multi-core CPUs are now standard on the market.
According to the researchers, breaking these more traditional programs into multiple threads can improve overall performance by roughly 20 percent. From an enterprise standpoint, that is good news: faster software means more productive workers and saved time and money. Unfortunately, the solution for these "hard-to-parallelize" programs isn't a simple fix, nor is it readily available yet.
So what exactly is the solution? "We’ve removed the memory-management step from the process, running it as a separate thread," said Dr. Yan Solihin, an associate professor of electrical and computer engineering at NC State, who directed the research and co-authored a paper describing it.
Typically a program performs a computation, then performs a memory-management function (such as allocating or freeing memory), and then repeats the cycle on a single processor core. Under the new approach, the computation thread and the memory-management thread execute simultaneously on separate cores, allowing the program to run more efficiently. Most of today's consumer apps don't utilize multi-core CPUs effectively, but that may change down the line thanks to new programming and compiler technologies such as this one.
"This also opens the door to development of new memory-management functions that could identify anomalies in program behavior, or perform additional security checks," Solihin said. "Previously, these functions would have been unduly time-consuming, slowing down the speed of the overall program."
Solihin and his fellow NC State researchers will present their findings on April 21 in a paper titled "MMT: Exploiting Fine-Grained Parallelism in Dynamic Memory Management" at the IEEE International Parallel and Distributed Processing Symposium in Atlanta.
This sounds like a George Takei moment: "Oh my!"