The National Science Foundation recently awarded $500,000 to Professor Eliot Moss of the College of Information and Computer Sciences to build a platform that provides lasting data storage in devices that use non-volatile memory (NVM), such as the flash storage in a phone or a laptop’s solid-state drive.
As Moss explains, NVM stores data even when the power is off, unlike Dynamic RAM (DRAM), which must have a constant power supply to accomplish this. But while flash storage is a popular replacement for hard drive storage, newer and faster NVM technologies are being developed that promise to take the place of not only device storage, but also main memory.
Moss says his research will aid software developers working across a range of programming languages and hardware platforms. Using his proposed “persistence” model, programmers will be able to easily and reliably build applications for devices with NVM that take advantage of built-in support for fast, automatic saving of program state.
He adds, “We want to be able to help developers exploit NVM.” He imagines new applications for mobile and desktop devices that can scroll back and forth through branching possibilities, that help users imagine, analyze and plan. “The persistence model in programming is about being able to come back to where you were – and then go anywhere you need to go.”
One example of NVM technology that Moss looks forward to helping developers use fully is known as phase-change memory (PCM), which is based on the special semiconducting properties of chalcogenide glass. It uses brief bursts of heat to switch bits between glassy and crystalline states. PCM is currently 10 to 100 times faster than solid-state drives, coming close to the performance of DRAM, he notes. Using technologies like PCM as a device’s main memory can allow developers to offer features such as “instant-on” recovery of a user’s progress, he adds, but it’s not as simple as swapping out a bit of hardware to get the new function.
Without more support for NVM at the programming language level, as opposed to in the hardware or in the user interface, the contents of the processor’s registers and caches would be lost when a device is powered off, leaving the data in NVM in an inconsistent state.
“Just having a hardware capability does not automatically make its features available to programmers, or its advantages available to users,” he explains. This is especially true for programs taking advantage of multicore processors to support multiple concurrent threads, a strategy becoming increasingly popular as applications strive for higher levels of performance. His persistent programming model addresses this.
“Without the registers and caches, you end up with memory contents that look like a dog’s breakfast,” explains Moss. “Most of everything is still there, but nothing is very appetizing.”
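The article doesn’t detail Moss’s model, but the underlying problem he describes can be sketched in a few lines of Python: an update that spans several writes can be “torn” by a power loss partway through, unless something like a write-ahead log makes the update atomic. The file names and the `transfer` and `recover` helpers below are purely illustrative stand-ins for a region of non-volatile memory, not part of Moss’s system.

```python
import json
import os

# Hypothetical files standing in for regions of non-volatile memory.
LOG = "account.log"      # redo log: the intended new state
STATE = "account.json"   # the "real" persistent state

def transfer(state, amount):
    """Move `amount` between two balances, crash-safely, via a redo log."""
    new_state = {"checking": state["checking"] - amount,
                 "savings": state["savings"] + amount}
    # 1. Persist the complete intended state to the log first...
    with open(LOG, "w") as f:
        json.dump(new_state, f)
        f.flush()
        os.fsync(f.fileno())   # force the write out of volatile caches
    # 2. ...then apply it to the main state. A crash between steps 1 and 2
    #    is recoverable: recovery simply replays the finished log entry.
    with open(STATE, "w") as f:
        json.dump(new_state, f)
        f.flush()
        os.fsync(f.fileno())
    os.remove(LOG)             # commit: the update is fully applied
    return new_state

def recover():
    """After a restart, finish any update whose log entry was fully written."""
    if os.path.exists(LOG):
        with open(LOG) as f:
            new_state = json.load(f)
        with open(STATE, "w") as f:
            json.dump(new_state, f)
        os.remove(LOG)
    with open(STATE) as f:
        return json.load(f)
```

Without the log, a crash after updating `checking` but before updating `savings` would leave exactly the kind of half-finished memory image Moss describes; a language-level persistence model aims to give programmers this kind of guarantee automatically.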
Moss’s new model will first be built on Mu, a virtual machine for managed languages such as Python or Java. As Moss explains, Mu is lightweight, with approximately 25,000 lines of code compared to approximately 1 million lines of code for Java virtual machines, and it’s designed to work well in environments with dynamic code generation and optimization.
Moss was named an ACM Fellow in 2007 and a Fellow of the IEEE in 2010. In 2013, he was co-recipient of the Edsger W. Dijkstra Prize in Distributed Computing for his work on transactional memory. Moss joined the UMass Amherst faculty in 1985 and currently serves as the director of the Architecture and Language Implementation Laboratory. He received his PhD in computer science from the Massachusetts Institute of Technology in 1981.