|"...the SSD market has
been the main incubator for disruptive memoryfication trends but now - as we
approach the series finale of the
Top SSD Companies -
I think a new list tracker is needed." |
|Zsolt Kerekes (soon to be retiring editor) in
the Top SSD Companies
in Q2 2018 (August 9, 2018)|
|Everspin zaps supercaps in IBM's FlashSystem, NGD
Systems demos ASIC implementations of In-Situ Processing SSD architecture,
rethinking memory systems design in 290 pages, Netflix and the latencies behind
flash cached as RAM, Open-Channel 2 SSDs, why Intel needs eASIC but doesn't
need 3DXPoint revenue as soon as Micron - which can't sell some of its SSDs in
China anyway, IBM's really new FlashSystem, lighter LDPC codes, lifetime
award by the Flash Memory Summit -
and other SSD news recently
wrapping up SSD endurance
selective memories from 40 years of thinking about endurance by
editor - StorageSearch.com - July 20, 2018
|The intertwined, evolving and myth-laden relationship between the write endurance of raw flash memory chips and the reliability
of the SSD drive / array in which they are used as the primary storage
components has been one of the most read subjects on StorageSearch.com for over 12 years. However my own
editorial coverage of that subject started several years before that - at a
time when SSD makers were still nervous about talking openly about the very
idea that their SSDs had any wear-out issues at all - issues which could lead to sudden death
of the entire SSD. |
I must admit that the enduring interest in endurance and the high popularity of
these articles was at times irritating for me - particularly when I had
just written about other aspects of SSD design architecture (which I thought
were just as important) - but the constant tides of memory cell shrinks and
SSD performance progress kept pulling me back to write again and again about
endurance - including many articles I have now forgotten but which can be found
in the SSD news archives.
Each time that leading SSD thinkers had reached some
kind of consensus about the relationships between the different types of memories
and how best to manage and deploy them in SSDs, a new innovation in flash
controller design would come along which stretched application elasticity
beyond the previous limits.
Early on in this long running
saga I told my readers that there were few hard rules except these.
- Raw memory endurance is not the same as SSD endurance.
An SSD can live much longer or much shorter than the average life expectancy of a
typical memory cell - when viewed from the R/W perspective of host writes.
The differences come from differences in understanding and in the design of the controller
architecture (which includes software).
The quality of designs and
their footprint (chip count, power usage and IP complexity) vary by orders of
magnitude - even in SSDs which superficially are aimed at similar markets and
which are being sold at the same time.
- The risk of early burn-out is real - if you use an SSD in
a way which the designers didn't intend. On the other hand the cost of
over-specifying an SSD means that you may end up paying many times more than you need to.
(As I will soon sell StorageSearch.com and will no longer be writing much about the SSD
market in 2019 - I thought I'd write one last article which looks back at some
of my memories about endurance.)
- That's why there is no such thing as an ideal endurance figure in a
flash memory, or an ideal DWPD for an SSD.
Context and business case are important boundary factors which define how
endurance factors are managed in the optimally affordable SSD.
nvm endurance in 1978 to 1980
My first encounter with the idea of write endurance in semiconductor
memories came in 1978 as a theoretical warning in a datasheet for a
new memory product called EAROM.
In those days I used to read datasheets for chips and processors in the same way
that editors nowadays read blogs and news stories. Having digested the
datasheet (but not having any immediate need for that memory myself in my own
designs) I wasn't greatly surprised when a company
I later worked for
- in 1980 - recalled their memory modules which had used those memories
inside because of failures in the field - due to premature loss of remanence or
wear-out - I didn't ask my colleagues which. Temperature may have been a
factor too - because the AMD bit-slice processors to which the non volatile
memory had been attached in the 1979 PLC design ran hot. (The solution to that
design problem was battery backed CMOS RAM - an option which had been discounted
earlier because of its dependence on the reliability of the attached battery.)
The next time I met the subject of write
endurance - in 1984 - it was another incidental thing - and not
something I used in a production design. I noticed that when saving data to
Intel's 2816 (an EEPROM) some of the locations could be written with far
fewer write pulses than others. This meant they had better cells and could be
written to more quickly. But Intel also cautioned anyone who might play around
with these chips that writing too aggressively could damage the chips. In later
non volatile memory chips - the write pulse mechanism was embedded in the chip.
This made writes more foolproof. And I don't think that most electronic
systems engineers gave any thought to the variability of what may be hidden
behind the write mechanism for another 20 years.
2004 - flash takes
aim at the server acceleration space of RAM SSDs
For me and my
working life the subject of write endurance in flash memory became a big deal
from 2004 onwards when
flash SSDs began
to infiltrate the server acceleration market. At first, warnings from
experts in the SSD industry that users would experience short working lives with
flash SSDs due to burn-out were proven correct. But this didn't deter
users who mostly liked the performance gains they were getting and in some cases
simply adjusted their buying behavior to refresh the early flash drives very
frequently. Also, early burnout wasn't inevitable in arrays which used
appropriate SSD controller architectures and related techniques.
- a classic article on wear leveling
Another angle on SSD endurance
was (and still is) longevity in industrial SSDs. In 2005 one industrial SSD company
published a classic paper on wear leveling
here on StorageSearch.com and invested a lot of resources in ensuing years to
educate industrial systems designers about the reliability factors
associated with different elements in the design of SSDs.
Also in 2005
I began a dedicated news thread on this subject which datamined related stories from
SSD news. In 2008
- as SSDs became a greater part of all the content here I collected up
SSD reliability papers
into one place. Even in those days endurance was just one part of the
reliability mix as you can see.
By 2010 the SSD market had become much
better acquainted with the idea that SSD controllers were an important and
separate part of every SSD design and specialist companies in that area were
surprised to learn how much hunger there was for trustworthy articles which
explained what they did and why. My article
Imprinting the brain
of the SSD noted how big a change that was compared to before.
After that time most stories about SSD endurance became part of mainstream
SSD news - but you can
sample how some of the metrics and ideas appeared in
earlier versions of the SSD
controller page and infrequent updates to my 2007 article -
myths and legends.
An important sanity check is that most of the
key people in the SSD industry (including designers of SSDs and founders of SSD
companies and their biggest customers) were reading these pages during this
period. And my self-appointed aim was to help guide the industry forwards in
directions which aligned with my own thinking.
- and the start of the SSD market Bubble
2010 was Year 1 of the
SSD Market Bubble. (For the significance of other years - see my earlier market history articles.)
From this point the
hitherto largely unknown SSD
controller industry invested huge intellectual resources and amazing talent
to enable each successive generation of (less reliable) flash memory to be
used in reliable SSDs and systems. And as the SSD market continued to grow in
revenue and strategic
importance - the big manufacturers of memory - which earlier had little
reason to understand the SSD potential of the chips they had been making - began
to digest lessons from the SSD market and understand the
applications for their raw memories better.
a 2018 perspective
Because users were deploying SSDs in different ways to earlier types of storage and
memory, the industry took another 5 to 10 years to characterize what a "good"
level of endurance would be for particular applications.
And every few
years when new types of 2D flash memory came into production with greater
capacity but lower endurance - the wear mitigation arguments and analysis began
again from new (and more challenging) starting points.
In the past 10
years the 5 factors which have done the most to set the stage for the market
acceptance of flash memory endurance in usable SSD roles have been:-
- adoption of DWPD
- drive writes per day - as a standard way to signal which applications a
new SSD has been optimized for.
Endurance became a knowable factor
and users didn't need to be scared about its existence - as long as they
chose the right SSD for the application. (A back-of-envelope DWPD sketch follows this list.)
The SSD market grew
alongside other markets which it helped to create. So for example the idea of
a low DWPD SSD - in cloud infrastructure - as a valued and desirable product
would never have been in anyone's SSD business plan in 2004 - when the primary
proposition for enterprise use was server acceleration.
- flash care management & DSP integrity IP in SSDs - a movement
in SSD controller design which invested extremely sophisticated intelligence and
noise filtering techniques inside each SSD and which - among other things -
enabled the use of lighter weight (and less damaging) write pulses
compared to traditional hard-coded ECC.
- The adoption of big SSD controller
architecture and the use of software
- for example Software-Defined Flash (Baidu),
Host Managed SSD Technology (OCZ)
and other techniques and names - to leverage array level intelligence and
intelligence flow symmetry (see article for citations) to manage the
movement of data and reliability in SSD arrays has become the normal way of
doing things. Each AFA company and cloud integrator uses their own
brew of standard and proprietary IP tricks and this is an area of design
which is still evolving with in-situ processing.
- Machine Learning as the discovery tool to
explore the optimum settings for R/W (timings and pulse shapes) when
characterizing new generations of 3D flash.
This is a technique
(first widely disclosed in 2013) which promises to maximise flash endurance
when used in conjunction with lightweight SSD controllers (as opposed to the
kind of heavyweight energy and CPU footprint required by adaptive DSP to achieve
similar results). It was pioneered by a specialist endurance IP company.
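To make the DWPD factor above concrete - here's a back-of-envelope sketch (in Python, with invented example figures rather than any vendor's datasheet numbers) of the arithmetic which links a DWPD rating, drive capacity and warranty period to total bytes written - and of the matching calculation a buyer can do from their application's daily write volume.

```python
# Back-of-envelope DWPD arithmetic.
# All example figures are hypothetical - not taken from any vendor's datasheet.

def terabytes_written(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
    """Total endurance (TBW) implied by a DWPD rating over the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

def required_dwpd(daily_writes_tb: float, capacity_tb: float) -> float:
    """Minimum DWPD an application's daily write volume demands of a drive."""
    return daily_writes_tb / capacity_tb

if __name__ == "__main__":
    # A hypothetical 3.84 TB drive rated at 1 DWPD over a 5 year warranty:
    print(terabytes_written(dwpd=1.0, capacity_tb=3.84, warranty_years=5))   # ~7008 TBW
    # An application writing 2 TB/day to that drive needs roughly 0.52 DWPD -
    # so a 1 DWPD (mixed-use) rating would be a comfortable fit.
    print(required_dwpd(daily_writes_tb=2.0, capacity_tb=3.84))
```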
- the endurance rot stopped with 3D flash
The slide to
worsening endurance ratings in raw flash memory seemingly paused and
improved during the transition from 2D to 3D - due to the use of more expensive
materials, more charge being trapped in each cell, and higher capacity
coming from more planes of flash cells rather than a single plane of smaller cells.
All nvms have endurance issues
- although some are more serious than others - compared to flash. For example
first generation 3DXpoint PCIe SSDs from Intel had similar or worse DWPD
ratings than best-in-class flash SSDs. Whereas other memory types such as MRAM
and FRAM have endurance which is orders of magnitude higher than flash -
although their data capacity per chip is currently orders of magnitude smaller.
It seems likely that DWPD will remain a useful way to
select SSDs for storage. However the best way to characterize the reliability
(and performance) of memories in new tiered memory systems (DIMM wars and cloud
adapted memory) is a problem which is as far away from any commonly agreed
useful solutions as today's neatly ordered classifications and segmentations of
SSDs were 10 years ago.
The subject of
memory endurance and how that relates to the reliability of SSDs and tiered
memory is one which has provided much food for thought for millions of my
readers in past years.
But there have been lighter moments too.
I combined a serious historic narrative with some attempt at humor in the "naughty
flash" description in my article about flash
for the enterprise.
Whereas my article
razzle dazzling flash SSD
cell care and retirement plans was intended to show just how ridiculous some
of the comparative endurance management claims in the SSD market had already
become in 2012.
re RATIOs in SSD architecture
and the value of comparing one thing you might not understand so well to the size of a more familiar one
by editor - June 18, 2018
|A new blog - Memory
for Compute - Where do we go From Here? - by Rob Peglar, President at
Advanced Computation and Storage
- among other things - comments on the shortcomings of legacy X86 processor
designs when viewed from the stance of cores to memory channels (and - this is
my scene setting not the words that Rob used - it's 2018 not 1999 for goodness
sake! - we're paradiving into the memoryfication of everything era - we're
falling through the air without a jetpack - so we don't have too much time
for what-is-life -
retrospection before we hit that water and start tasting real juicy vendor
shark meat - spiced up to make it more palatable with big sprinklings of
seasoning. Or we may be the shark meat - or we may even put the shark
to good use to pull
our boat along - depending on your perspective).|
I said something along these lines...
"The ratio of processor
cores to memory channels and local memory capacity is a solid pivot from
which to leverage your forthcoming architecture blogs. I love ratios as they
have always provided a simple way to communicate with readers the design choices
in products which tell a lot to other experts in that field. So in the past I've
written about ratios like
RAM cache to flash
in SSDs, the ratio of chips to controllers (small versus big controller architectures),
and fast server based
PCIe SSD capacity as a ratio of SAN array capacity - which said a lot about the
legacy software base (in 2012 when I wrote that note). I look forward to seeing
your next couple of blogs and will tell my readers about them."
(Readers - this is me making good on that promise.)
I was on my
phone when I did the LinkedIn comment so I didn't extend the list to things
like:- the ratio of petabytes/kilowatts in network storage capacity - which led us
to the petabyte SSD
market, or the ratio of chips to storage capacity - aka
design efficiency -
(including raw flash to usable capacity - whatever that is - real or virtual -
RAM is whatever the software is
happy with BTW and in another universe could be implemented by self-repairing
fast paper tape). I could have made these lists longer... But I'm trying not to.
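And since I'm urging you to do this kind of arithmetic yourself - here's a tiny illustrative sketch (in Python, with invented figures, not the specs of any real products) of how a couple of the ratios above can be computed and compared. The point is that the ratio, not the absolute number, carries the design signal.

```python
# Illustrative ratio comparisons - every figure below is invented for the example.

def ratio(numerator: float, denominator: float) -> float:
    return numerator / denominator

# Cores per memory channel (the pivot in Rob Peglar's blog theme):
legacy_server = ratio(28, 6)   # 28 cores sharing 6 DDR channels -> ~4.7 cores per channel
denser_design = ratio(32, 8)   # 32 cores over 8 channels        -> 4.0 cores per channel

# RAM cache to flash capacity inside an SSD (small vs big controller architectures):
small_controller_ssd = ratio(1, 960)    # 1 GB DRAM cache fronting 960 GB of flash
big_architecture_ssd = ratio(16, 960)   # 16 GB DRAM cache fronting the same flash

print(f"cores per memory channel: {legacy_server:.1f} vs {denser_design:.1f}")
print(f"RAM cache : flash ratio : {small_controller_ssd:.4f} vs {big_architecture_ssd:.4f}")
```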
The important thing is that you - as a regular mortal
without having wasted all your life immersed in arcane technology - nevertheless
do have a sporting chance of being able to rationally recognize the goodness or
otherwise of complex emerging products - which very few people on the planet
understand - for your own purposes and based on your own
risk assessments - using the comparative power of ratios. And you don't need
editors or analysts or vendors to tell you that the chosen outputs from their
calculators (or sponsored opinions) are more right for you.
In my own
career - even when I have done deep analysis and all that stuff - I have
sometimes chosen the right direction by doing the sums wrong.
Anyway - I think you should stay tuned to Rob Peglar's
blogs - because I have a gut feeling that the plot is going to get very interesting.
Editor's additional comments:- And here's another
thing... if you like the idea of using ratios to understand SSD design thinking
then take a look too at some similar ideas I've written about which cut across
the analysis of existing and future architectures in other ways.
- the SSD design
heresies - is a sanity check to remind you that designers don't always agree
on the permutations of design features and technology adoption roadmaps favored
by their competitors.
are we ready for infinitely faster RAM?
(and what would it be worth) by
editor - May 14, 2018
|If someone could offer you a memory system which had the same storage density (bits
per chip / module / box) as mainstream RAM - but which had latency and
bandwidth (as measured by what the application sees) which was infinitely
faster - could we use it? - how much would that be worth? and how would it
change markets? For the past 25 years the computer market has voted with its
spending for bigger
faster memory - but is the market now receptive to a disruptive change in
its ideas about the user value proposition of memory performance?|
infinitely faster RAM?
I know that some of you who are reading
this (and maybe it's You) are the kind of people who found companies or fund
them (thanks for staying with me on this) and when you noticed the words "infinitely
faster" in my title above you wondered if it was some kind of late April
Fool article. (No - I wrote something else for that.)
Infinitely? Really? - I know you can't put the value of
infinity into a business plan (although it does come in useful sometimes for
boundary assumptions about how markets will react to disruptive change). So
let me explain my use of the term "infinitely faster RAM" in this
article to mean "RAM that's maybe 20x or 100x faster than what you can get
today" - as measured by critical bottlenecks in applications. For my purposes
here I'm saying that latency is the most critical fastness
factor - and while acknowledging that there are other
accepted methods of defining what "faster memory" means - I think
it's good enough for my argument below to assume that if the black box
behaves consistently as if the memory was X-times faster (or X-times
sooner) than before - then that's a good enough understanding.
This also assumes we're on the same page when it comes to agreement on -
what is RAM? - which is a
shifting subject I have written about before. For my purposes - if it behaves
like RAM - and can transparently replace conventional RAM (chips, modules or
boxes or markets) then that's good enough for now - without worrying about
implementation details. I'm not going to speculate on the technology of the
infinitely faster RAM - I'll leave that problem for someone else - (maybe You).
In this article I'm posing the same kind of philosophical and business
what-ifs which I did in earlier phases of the SSD and memoryfication markets -
which asked - if we could get this new stuff - what new products and markets
would we get? - and how would that change pre-existing markets.
the infinitely fast one transistor memory cell
I'd like to make it clear I'm
not interested here in the idea of so-called "ultra fast"
transistors, memory cells and that ilk of research. As far as I'm concerned if
you can't put many megabytes and preferably gigabytes of raw capacity into
the infinitely faster RAM (at chip / module level) then it's not the kind of
animal I'm talking about here.
some lessons from history -
applications create markets and define acceptable latency
In the early 2000s the value propositions for different implementations of
semiconductor RAM were graded by latency and power - and the order of
precedence (DRAM, SRAM, SoC memory on chip - from slowest to fastest) hadn't
changed since the dawning of their mutual market coexistence in the 1970s. If
you wanted bigger capacity - you chose DRAM. If you wanted faster latency at a
board level of integration you chose SRAM which ran hotter and was smaller in
capacity. If you wanted faster than that - there was no contest. It had to be
SoC (usually in the form of RAM on a true ASIC or gate array but also
latterly on FPGA).
At a board level - and system level - commercial DRAM memory systems reached
their latency limits in about 1999 and haven't got any faster since.
It didn't matter so much in the early 2000s because enterprise processors
weren't getting any faster either. And the shape of applications (users doing
simple stuff on the internet) meant that datacenters could get by with
affordable technologies which offered higher densities and lower power (more
users satisfied per box or watt or dollar) rather than users getting speed they
didn't need and couldn't use. The computer industry didn't need faster memory.
And when demand for more applications performance did grow - particularly in
the early days of the cell phone market (and social networks) the enterprise SSD
market took up the slack adequately - as there had been plenty of latency
bottlenecks built around earlier generations of (rotating) storage.
Nowadays cell phones are coinage, spies and slot machines. And they've been joined by
IoT. There's so much intelligence which can be gathered about the meaning of it
all. But there are no memory or computing platforms fast enough to resolve
everything which can be imagined by the next master plan in a timely way.
memory world war 1 - flash versus DRAM - in enterprise
I guess the first time there was a serious challenge to
the role of enterprise DRAM from another memory type in the acceleration space
was in the early years of enterprise flash SSD
adoption (from around 2004) - which was fought out and soon won by flash
arrays supplanting RAM SSDs.
If you'd asked most SSD people even as late as 2007 whether
they really expected DRAM to be replaced by flash as the mainstream enterprise
SSD based acceleration technology there were good cases which
could be made for either. But (as we now know) by 2012 the RAM SSD market was
effectively extinct. The principal reasons that a slower latency memory (flash)
could and did replace a faster latency memory (DRAM) in an acceleration role were these:-
- Typical user installations needed more memory capacity than could be
provided by DRAM in a single box. The latency fabric from the interfaces wrapped
around these SSD assets negated the latency advantage of DRAM chips compared to
flash chips. (The flash chips had much higher storage capacity per chip and
required much less electrical power.) A rough latency arithmetic sketch follows this list.
These acceleration lessons - initially duelled out in Fibre Channel SAN rackmount
boxes - had been won by the time the
PCIe SSD market got
started. They showed that a faster memory could lose out to a slower memory in an
acceleration focused role.
- Most of the easy acceleration advantages of enterprise SSDs came from read
requests rather than writes. That's just the way that the legacy installed
software base worked. That bias in the profile of memory R/W meant that the
asymmetric R/W latency of flash chips - with reads being orders of magnitude
faster than writes - was not a serious obstacle in adoption.
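As a rough illustration of the first reason above - the numbers below are my own ballpark assumptions, not measurements - here's the arithmetic which shows how a storage fabric swamps the raw chip latency difference between DRAM and flash.

```python
# Ballpark end-to-end access latencies (microseconds).
# The round numbers below are assumptions for illustration only - real systems vary widely.

DRAM_CHIP_US  = 0.1     # raw DRAM access
FLASH_READ_US = 50.0    # raw NAND flash page read

SAN_FABRIC_US = 200.0   # FC SAN interface + array controller + host software stack

def end_to_end(chip_us: float, fabric_us: float) -> float:
    """Effective latency seen by the application through a storage fabric."""
    return chip_us + fabric_us

ram_ssd_over_san   = end_to_end(DRAM_CHIP_US, SAN_FABRIC_US)    # ~200.1 us
flash_ssd_over_san = end_to_end(FLASH_READ_US, SAN_FABRIC_US)   # ~250 us

# A 500x raw chip advantage for DRAM shrinks to roughly 1.25x once the fabric dominates -
# which is why capacity per chip and power per chip decided the outcome instead.
print(ram_ssd_over_san, flash_ssd_over_san)
```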
But that was storage... What does that tell
us - if anything - about different speeds of memory used as memory?
The early experiences (2014 to 2017) of tiered memory from the flash-as-RAM
market - in which flash can be tiered with DRAM (using form factors as
diverse as DIMMs, PCIe modules and even SATA arrays) - are that there can be trade
offs in big data applications whereby trading size of memory in the box can
offset the native speed of raw memory (when the fastest memory is DRAM) for
exactly the same reasons which pertained in flash storage accelerators. And
the benefits of doing so have been mostly related to improvements in cost
rather than any risk free overwhelming advantages in application latency.
That's because the low hanging fruit of tiered flash speedup was mostly already
harvested by the bottlenecks uncovered and bypassed by server based PCIe SSD adoption.
Here's one last look backwards at the lessons from history -
before going on to speculate about the value of infinitely faster memory in the present day.
I think that if you could go back in time and take with you a
warehouse of today's fastest and highest capacity DRAM chips - along with plug
compatible adapters to retrofit them into past server and storage systems - then
you wouldn't change the world of applications because most performance
constrained servers in the past - already had the maximum amount of DRAM
installed. And if they didn't - then there were so many bottlenecks built into
interfaces and software that - even with an ideally configured modern memory
array taken back in time and fitted as an upgrade - you would at best typically
get a 2x or 3x speedup or - very often - get no speed up at all. That's why the
early adoption of enterprise SSD accelerators was slow and problematic. There
were too many problems baked into the ecosystem for any one new product to make
enough of a change.
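That 2x or 3x ceiling is essentially an Amdahl's law effect. Here's a minimal sketch (assuming an illustrative workload split) of why even infinitely faster memory can't speed up the whole system by more than the reciprocal of the time it doesn't touch.

```python
# Amdahl-style bound on whole-system speedup from faster memory alone.
# The workload split below is an assumed example, not a measurement.

def system_speedup(memory_fraction: float, memory_speedup: float) -> float:
    """Overall speedup when only the memory-bound fraction of time gets faster."""
    return 1.0 / ((1.0 - memory_fraction) + memory_fraction / memory_speedup)

# Suppose 60% of elapsed time in an old server was spent waiting on memory and storage,
# and the other 40% on CPU, interfaces and software overheads:
for faster in (10, 100, 1_000_000):          # even "infinitely" faster memory...
    print(faster, round(system_speedup(0.6, faster), 2))
# ...the whole system tops out at 1 / (1 - 0.6) = 2.5x - in line with the 2x or 3x observation.
```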
today's world of enterprise memoryfication and
memory defined software
Today's computing market - where SSDs are
everywhere and storage latency bands have a precise value and can be
controlled - is better placed to consider and use faster memory systems. It's
the only direction of travel to enable faster software.
The needs are well understood. It's easy nowadays to see a direct link between
faster decisions (based on diverse internet signals picked up and analyzed by
real-time AI) and measurable economic outcomes. Also new business models and
markets are being created by the application of heavy duty machine learning.
The software industry has had nearly a decade to become accustomed to thinking about
having meaningful choices in latency - which are determined by different types
and tiers of semiconductor devices. Most of the benefits of SSDs came when
enough software changed to fit in with an SSD world. Those changes were hard
to make because of decades of architectural lethargy in the leadup to the modern
SSD era. The next stage of software change - towards more memoryfication - has
already been underway - due to momentum and despite the lack of revolutionary
memory technologies to take hardware to the next level.
So if you could
offer a GB scale memory chip with 20x or 100x lower latency than existing
mainstream DRAM - there are companies who could do useful things with it.
valuation of much faster memory accelerator systems
An obvious market sizing example for infinitely faster memory accelerator systems
is their application in dealing with temporary data.
A simplistic way
of looking at this is that overwhelmingly most of the data which enters
processor space is temporary data. So if you have a memory accelerator which is
100x faster than the original memory which you used before to solve this type
of problem - then provided that you don't run out of new data to feed into the
machine and provided that the usable memory size for any single instance of the
computation can fit into the memory space provided - then you need approximately
100x fewer machines to provide the same services.
The equivalence of
speed and machines (one machine which is 100x faster being equivalent in
capability to 100 slower machines) is similar to the SSD-CPU equivalency model
which helped to cost justify SSD accelerators in the early 2000s as server
replacements rather than expensive $/TB storage.
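Here's a minimal sketch of that equivalence arithmetic (with invented prices and counts) - the value of one Nx faster accelerator is bounded below by the cost of the conventional machines it displaces, provided the working set fits and there's enough incoming data to keep it busy.

```python
# Hypothetical cost-equivalence sketch for an Nx faster memory accelerator.
# The price and the speedup below are invented for illustration.

def displaced_hardware_value(speedup: float, server_cost: float) -> float:
    """Lower bound on the accelerator's value: the cost of the conventional servers
    it displaces - assuming enough parallel work to keep it busy and a working set
    which fits in its memory."""
    return speedup * server_cost

# One 100x accelerator stands in for roughly 100 conventional servers at (say) $20k each:
print(displaced_hardware_value(speedup=100, server_cost=20_000))   # -> 2,000,000
```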
However, the memory
accelerator has a greater utility than this comparison might suggest on its own
because the ability to solve problems sooner, and the ability to solve more
complex problems for the first time within a shorter real-time period create
new value propositions by making new algorithmic engines viable for new markets.
This ability to create new markets is like the
dynamic energy seen in the computer market in the 1980s and 1990s which began
with microprocessors making computation cheaper but ended (around 1999) when
limits were reached in how fast the GHz clock rate of any particular core could
run. And the collective decision of server companies that fater was not
necessary is partly to blame for the dumbing down of processor design
architecture for nearly 20 years thereafter.
The good thing which
emerged from that lack of investment in making commercial processors faster
was that it created the fertile soil for the SSD acceleration market - which
became the only game in town.
And now that there is a greater
understanding of memory (and the interplay of roles and values between raw
memory capacity and raw latency) and helped by an SSD rich ecosystem in which
larger portions of any computing problem can be economically gathered into
systems where the random access time can be arranged to be a handful of
microseconds rather than tens of milliseconds - the creative juices of computing
architecture have been turning to the much needed creation of new memoryfication
compatible computing engines and new processor architectures.
That's what has been giving rise to the proliferation in recent years of commercial
in-situ SSDs, processing in / near memory FPGA arrays and dedicated memory
accelerators for machine learning and similar neural algorithms.
The early implementations which you can read about in the SSD news archives
demonstrate 2 things.
- The value of an Nx faster memory-compute accelerator can indeed be measured
by at least the cost of all the previous traditional hardware which was needed
to solve similar problems before. So 1 new PU (TPU etc) is indeed worth Nx
conventional server CPUs / GPUs - when there is sufficient work to be done within a
particular problem shape universe. Their value is application specific. (Some
workloads are accelerated better than others.)
This will create an existential problem for makers of memory testers - because the
future of high performance memory systems (where the money used to be) will
become increasingly proprietary.
- Modern memory accelerators don't have to resemble either dumb memory
or dumb processors. Provided that they can interoperate with conventional
servers and infrastructure the new memory accelerators are best viewed as black
boxes whose internal details may change and adapt (just like search engine
algorithms) when more data suggests future areas of improvement.
And as you can realize yourself this
will create a cut-off point for manufacturers of high end server memory. Because
high end memory systems will inevitably look more like a custom processor.
And as for traditional processor makers the new memory
accelerator systems don't care about and don't need their "instruction set"
based backwards compatibilities and roadmaps - because the new ML / NN engine
roadmaps (if they need any compatibility at all) will be "application"
and "algorithm" based.... More like the kind of compatibilties you
witness in successive generations of cloud APIs.
A remaining question is how many different types of memory accelerators the world needs and can support.
One view might be that the world doesn't need more than a
handful (one for Google, Amazon, Apple, Baidu etc) because if the biggest
benefit and visibility into design optimization only occurs at massive scale
then those companies will each drive their own designs.
On the other
hand if this is indeed the start of a new renaissance in computing
architecture - then you could argue that there will be the usual explosion of
startups hoping to serve new markets created by the new ideas in architecture.
(And there may be benefits of such new ideas which occur without being colocated
in the cloud.)
Going back to my questions
in the title...
Are we ready for infinitely faster RAM?
I think I've made a case for the answer being - Yes. More ready than we've ever been.
And as to - what would it be worth?
My advice to
founders of startups in infinitely faster memory accelerators is - don't
sprinkle the number "infinity" about too much in your spreadsheets
when guessing the market size or attached to the price which you think ideal
customers would be prepared to pay. There are plenty of big numbers you can
choose which are smaller and will still sound impressive without straining credibility.
|If you could go back in
time and take with you - in the DeLorean - a factory full of modern
memory chips and SSDs (along with backwards compatible adapters) what real
impact would that have? |
|are we ready for
infinitely faster RAM?
|Choosing a slow interface
for a high capacity SSD is the route whereby one innovative enterprise SSD
maker was able to offer "no limits DWPD". |
the state of DWPD?|
|Despite many revolutionary
changes in memory systems design and SSD adoption in the past decade we are
still not at the stage where it's possible to predict and plot the next decade
as merely an incremental set of refinements of what we've got now. |
there yet? - 40 years of thinking about SSDs|
|Enterprise DRAM has the
same latency now (or worse) than in 2000. The CPU-DRAM-HDD oligopoly
optimized DRAM for a different set of assumptions than we have today in the
post modern SSD era.|
reasons for fading out DRAM|
|I said to a leading NVDIMM
company... This may be a stupid question but... have you thought of supporting
a RAMdisk emulation in your new "flash tiered as RAM" solution?|
|what could we
|A couple of years ago - if
you were a big company wanting to get into the SSD market by an acquisition or
strategic investment then a budget somewhere between $500 million and $1 billion
would have seemed like plenty.|
|VCs in SSDs and storage|
|With hundreds of patents
already pending in this topic there's a high probability that the SSD vendor
won't give you the details. It's enough to get the general idea.
R/W and DSP ECC IP in SSDs|
|Why can't SSD's true
believers agree upon a single coherent vision for the future of solid state
storage? (They never did.) |
|the SSD Heresies.|
|If you spend a lot of your
time analyzing the performance characteristics and limitations of flash SSDs -
this article will help you to easily predict the characteristics of any new SSDs
you encounter - by leveraging the knowledge you already have.|
|flash SSD performance
characteristics and limitations|
|The memory chip count
ceiling around which the SSD controller IP is optimized - predetermines the
efficiency of achieving system-wide goals like cost, performance and
|size matters in
SSD controller architecture|
|A popular fad in selling
flash SSDs is life assurance and health care claims as in - my flash SSD
controller care scheme is 100x better (than all the rest).|
|razzle dazzling flash SSD
cell care |