should provide the needed availability.
Better yet, HA systems with primary
and secondary NVM will still run
thousands of times faster than current
disk-based HA systems.
Now that you’re getting used to the idea
of “RAM” being “NV”, let’s go all the way
down the stack to the operating system
itself. How could the Linux kernel take
advantage of NVM? It’s not just that a
Linux NVM system could boot in fractions
of a second, but that having some (or
most?) kernel state persisted at practically
no extra cost in time opens up many
interesting possibilities. The bootloader
can still execute hardware power-on
self-tests, but there’s very little extra work
required to get the kernel running when
much of the kernel state and instruction
space is magically still available.
During a transition period, when DRAM
and NVM coexist in a system, the kernel
process table could be modified to note
which processes are running wholly in NVM.
On reboot, the kernel process table (also in
NVM) could ignore DRAM-based processes,
while letting NVM processes get going as
soon as system devices are initialized. And,
as mentioned, the kernel could help with
application data restore points.
116 / SEPTEMBER 2012 / WWW.LINUXJOURNAL.COM
An NVM kernel may also help with
managing devices. RAM-fast data
persistence could enable the kernel
to remember the state of attached
hardware to a far greater degree
(or, to be fair, the devices may have their
own NVM to help out).
NVM is coming. Without much work,
it will provide an enormous benefit to
applications and use cases where storage
performance is a limiting factor. I’ve tried
to outline some of the more revolutionary
ways that we can take advantage of
the technology. As RAM volatility has
been a fundamental assumption of our
computing architecture, it is hard to
figure out what an NVM future could
look like. What might the design
principles, kernel semantics and
language design have looked like in an
alternate computing world where NVM
had been invented in, say, the
1960s? More radical notions could work
in theory, but there may be no easy
migration path from where we are today.
It will be up to the global community to
figure out answers with open source and
Linux driving the way.■
Richard Campbell is a trading systems architect living in New
Jersey and the author of Managing AFS: The Andrew File System.
His first computer had a 12KHz Z-80 CPU with 256 bytes of ROM,
10KB of RAM, and used 1100 baud cassette tapes for storage.
Send comments to email@example.com. See http://www.netrc.com/nvm
for links and more information.