this is the home page of StorageSearch
20 years - leading the way to the new storage frontier .....
SSD news since 1998
top SSD companies - there are over 200 SSD oems - which ones matter?
SSD SoC controllers
military storage directory and news
SSD symmetries article
storage history
storage security articles and news
high availability SSD arrays
how fast can your SSD run backwards?
11 Key Symmetries in SSD design
this SSD's power is going down
Surviving SSD sudden power loss - classic article
Inanimate words which suggest Power, Speed and Strength for your SSD - drawing on a much wider set of cultural references than animal brands alone.
Metaphors in SSD brands
inanimate Power, Speed and Strength
Since the early 1970s there have been 3 revolutionary disruptive influences in the electronics and computing markets.
  • the microprocessor
  • the commercialization of the internet
  • the advancement of computer architecture enabled by the modern era of SSDs
That's what I wrote in my 2012 article - comparing the SSD market today to earlier tech disruptions. StorageSearch has been about thought leadership in the SSD market and was the first publication to recognize and promote the tremendous disruptive growth potential of SSDs and the memoryfication of computing architecture. Since the 1990s our readers have been accelerating the growth of this industry and setting its direction and agenda.

My part in that work is now done. In June 2018 it was announced that StorageSearch is being offered for sale. In 2019 it will operate under new management.

About the publisher (founded 1991)
SSD controllers
DWPD - examples
hard drives - articles
history of SSD market
storage history - editor selected stories
storage reliability - news & white papers

1.0" SSDs
1.8" SSDs
2.5" SSDs
3.5" SSDs

1973 - 2017 - the SSD story

2013 - SSD market changes
2014 - SSD market changes
2015 - SSD market changes
2016 - SSD market changes
2017 - SSD market changes

20K RPM HDDs? - SSD killed RPM

About the publisher - 1991 to 2018
Adaptive R/W flash IP + DSP ECC
Acquired SSD companies
Acquired storage companies
Advertising on StorageSearch
Analysts - SSD market
Analysts - storage market
Animal Brands in the storage market
Architecture - network storage
Articles - SSD
Auto tiering SSDs

Bad block management in flash SSDs
Benchmarks - SSD - can you trust them?
Big market picture of SSDs
Bookmarks from SSD leaders
Branding Strategies in the SSD market

Chips - storage interface
Chips - SSD on a chip & DOMs
Click rates - SSD banner ads
Cloud with SSDs inside
Consolidation trends in the enterprise flash market
Consumer SSDs
Controller chips for SSDs
Cost of SSDs

Data recovery for flash SSDs?
DIMM wars in the SSD market
Disk sanitizers
DRAM (lots of stories)
DRAM remembers
DWPD - examples from the market

Efficiency - comparing SSD designs
Encryption - impacts in notebook SSDs
Endurance - in flash SSDs
enterprise flash SSDs history
enterprise flash array market - segmentation
enterprise SSD story - plot complications
EOL SSDs - issues for buyers

FITs (failures in time) & SSDs
Fast purge / erase SSDs
Fastest SSDs
Flash Memory

Garbage Collection and other SSD jargon

Hard drives
High availability enterprise SSDs
History of data storage
History of disk to disk backup
History of the SPARC systems market
History of SSD market
Hold up capacitors in military SSDs
hybrid DIMMs
hybrid drives
hybrid storage arrays

Iceberg syndrome - SSD capacity you don't see
Imprinting the brain of the SSD
Industrial SSDs
Industry trade associations (ORGs)
IOPS in flash SSDs

Jargon - flash SSD

Legacy vs New Dynasty - enterprise SSDs
Limericks about flash endurance

M.2 SSDs
Market research (all storage)
Marketing Views
Memory Channel SSDs
Memory Defined Software - yes really
Mice and storage
Military storage

Notebook SSDs - timeline
Petabyte SSD roadmap
Power loss - sudden in SSDs
Power, Speed and Strength in SSD brands
PR agencies - storage and SSD
Processors in SSD controllers

Rackmount SSDs
RAID systems (incl RAIC RAISE etc)
RAM cache ratios in flash SSDs
RAM memory chips
RAM SSDs versus Flash SSDs
Ratios in SSD design architecture
Reliability - SSD / storage
RPM and hard drive spin speeds

SCSI SSDs - legacy parallel
Symmetry in SSD design

Tape libraries

Test Equipment
Top 20 SSD companies
Tuning SANs with SSDs

USB storage
User Value Propositions for SSDs

VCs in SSDs
VCs in storage - 2000 to 2012
Videos - about SSDs

Zsolt Kerekes - (editor linkedin)

animal brands in SSD
The SSD market isn't scared of mice.

But mice aren't the only animals you can find in SSD brands.

There are many other examples of animal brands in SSD as you can see in this collected article.

And before the SSD market became the most important factor in the storage market there were also many animals to be found in other types of storage too.
StorageSearch is published by ACSL - founded in 1991.

© 1992 to 2018 all rights reserved.

Editor's note:- I currently talk to more than 600 makers of SSDs and another 100 or so companies which are closely enmeshed around the SSD ecosphere.

Most of these SSD companies (but by no means all) are profiled here on the mouse site.

I still learn about new SSD companies every week, including many in stealth mode. If you're interested in the growing big picture of the SSD market canvas - StorageSearch will help you along the way.

Many SSD company CEOs read our site too - and say they value our thought leading SSD content - even when we say something that's not always comfortable to hear. I hope you'll find it useful too.

Privacy policies.

We never compile email lists from this web site, not for our own use nor anyone else's, and we never ask you to log-in to read any of our own content on this web site. We don't do pop-ups or pop-unders nor blocker ads and we don't place cookies in your computer. We've been publishing on the web since 1996 and these have always been the principles we adhere to.
"...the SSD market has been the main incubator for disruptive memoryfication trends but now - as we approach the series finale of the Top SSD Companies - I think a new list tracker is needed."
Zsolt Kerekes (soon to be retiring editor) in the Top SSD Companies in Q2 2018 (August 9, 2018)
Everspin zaps supercaps in IBM's FlashSystem, NGD Systems demos ASIC implementations of In-Situ Processing SSD architecture, rethinking memory systems design in 290 pages, Netflix and the latencies behind flash cached as RAM, Open-Channel 2 SSDs, why Intel needs eASIC but doesn't need 3DXPoint revenue as soon as Micron - which can't sell some of its SSDs in China anyway, IBM's really new FlashSystem, lighter LDPC codes, lifetime award by the Flash Memory Summit - and other SSD news recently

wrapping up SSD endurance

selective memories from 40 years of thinking about endurance

by Zsolt Kerekes, editor - July 20, 2018
The intertwined and evolving - actual and mythical - relationships between the write endurance of raw flash memory chips and the reliability of the SSD drive / array in which they are used as the primary storage components has been one of the most popular topics with readers of StorageSearch for over 12 years. However my own editorial coverage of that subject started several years before that - at a time when SSD makers were still nervous about talking openly about the very idea that their SSDs had any wear-out issues at all - let alone ones which could lead to sudden death of the entire SSD.

I must admit that the enduring interest in endurance and the high popularity of these articles was at many times irritating for me - particularly when I had just written about other aspects of SSD design architecture (which I thought were just as important) - but the constant tides of memory cell shrinks and SSD performance progress kept pulling me back to write again and again about endurance. Including many articles I have now forgotten but which can be found in the news archive.

Each time that leading SSD thinkers had reached some kind of consensus about the relationships between the different types of memories and how best to manage and deploy them in SSDs - a new innovation in flash controller design would come along to stretch applications elasticity past previous limits.

Early on in this long running saga I told my readers that there were few hard rules except these.
  • Raw memory endurance is not the same as SSD endurance.

    The SSD can live much longer - or die much sooner - than the average life expectancy of a typical memory cell would suggest - when viewed from the R/W perspective of host write requests.

    These genuine differences come from differences in understanding and differences in design of controller architecture (which includes software).

    The quality of designs and their footprint (chipcount, power usage and IP complexity) vary by orders of magnitude - even in SSDs which superficially are aimed at similar markets and which are being sold at the same time.
  • The risk of early burn out is real.

    If you use an SSD in a way which the designers didn't intend.

    On the other hand the cost of over specifying an SSD means that you may end up paying many times more than you needed to.
  • That's why there is no such thing as an ideal endurance figure in a flash memory, or an ideal DWPD for an SSD.

    The applications context and business case are important boundary factors which define how endurance factors are managed in the optimally affordable SSD.
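How those boundary factors interact can be shown with back-of-envelope arithmetic. A minimal sketch (the P/E cycle count, capacity and write amplification figures are hypothetical illustrations, not any vendor's ratings): the same raw flash yields very different SSD endurance depending on the controller's write amplification factor (WAF).

```python
def dwpd(rated_pe_cycles, capacity_gb, waf, warranty_years):
    """Back-of-envelope drive-writes-per-day an SSD can sustain.

    Host writes the flash can absorb = capacity * P/E cycles / WAF,
    spread over the warranty period, normalized to drive capacity.
    """
    total_host_writes_gb = capacity_gb * rated_pe_cycles / waf
    days = warranty_years * 365
    return total_host_writes_gb / days / capacity_gb

# Same 3,000-cycle flash, same 960GB drive, same 5 year warranty - but
# a controller with heavy write amplification versus an efficient one:
print(round(dwpd(3000, 960, waf=4.0, warranty_years=5), 2))  # heavy WAF
print(round(dwpd(3000, 960, waf=1.2, warranty_years=5), 2))  # light WAF
```

This is why raw memory endurance is not the same as SSD endurance: in the sketch the controller's WAF alone moves the sustainable DWPD by more than 3x.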
As I hope to sell StorageSearch and will no longer be writing much about the SSD market in 2019 - I thought I'd write one last article which looks back at some of my memories about endurance.

nvm endurance in 1978 to 1980

My first encounter with the idea of write endurance in semiconductor memories came in 1978 as a theoretical warning in a datasheet for a new memory product called EAROM. In those days I used to read datasheets for chips and processors in the same way that editors nowadays read blogs and news stories. Having digested the datasheet (but not having any immediate need for that memory in my own designs) I wasn't greatly surprised when a company I later worked for - in 1980 - recalled their memory modules which had used those memories inside, because of failures in the field - due to premature remanence or wear-out - I didn't ask my colleagues which. Temperature may have been a factor too - because the AMD bit-slice processors to which the non volatile memory had been attached in the 1979 PLC design ran hot. (The solution to that design problem was battery backed CMOS RAM - an option which had been discounted earlier because of its dependence on the reliability of the attached battery.)


The next time I met the subject of write endurance - in 1984 - it was another incidental thing - and not something I used in a production design. I noticed that when saving data to Intel's 2816 (an EEPROM) some of the locations could be written to with far fewer write pulses than others. This meant they had better cells and could be written to more quickly. But Intel also cautioned anyone who might play around with these chips that writing too aggressively could damage them. In later non volatile memory chips the write pulse mechanism was embedded in the chip. This made writes more foolproof. And I don't think that most electronic systems engineers gave any thought to the variability of what may be hidden behind the write mechanism for another 20 years.

2004 - flash takes aim at the server acceleration space of RAM SSDs

For me and my working life the subject of write endurance in flash memory became a big deal from 2004 onwards - when flash SSDs began to infiltrate the server acceleration market. At first, warnings from experts in the SSD industry - that users would experience short working lives with flash SSDs due to burn out - proved correct. But this didn't deter users, who mostly liked the performance gains they were getting and in some cases simply adjusted their buying behavior to replace the early flash drives very frequently. Also, early burnout wasn't inevitable in arrays which used appropriate SSD controller architectures and related techniques.

2005 - a classic article on wear leveling

Another angle on SSD endurance was (and still is) longevity in industrial SSDs. In 2005 SiliconSystems published a classic paper on wear leveling here on StorageSearch and invested a lot of resources in the ensuing years to ensure industrial systems designers were familiar with the reliability factors associated with different elements in the design of SSDs.
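The core idea behind wear leveling - spreading writes across blocks so that no single block burns out early - can be illustrated with a toy allocator (a deliberately simplified sketch of dynamic wear leveling for illustration only, not SiliconSystems' algorithm; real controllers also handle static data migration, mapping tables and bad blocks):

```python
class ToyWearLeveler:
    """Always direct the next write to the least-worn block."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def write(self):
        # Pick the block with the lowest erase count so wear stays even.
        victim = min(range(len(self.erase_counts)),
                     key=self.erase_counts.__getitem__)
        self.erase_counts[victim] += 1
        return victim

wl = ToyWearLeveler(8)
for _ in range(80):
    wl.write()
# Wear spreads evenly: every block erased 10 times, none burned out early.
print(wl.erase_counts)
```

Without the `min` selection (i.e. always rewriting the same block) one block would absorb all 80 erases while the rest stayed fresh - which is exactly the hot-spot failure mode wear leveling exists to prevent.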

Also in 2005 I began a news thread on storage reliability which datamined related stories from SSD news. In 2008 - as SSDs became a greater part of all the content here I collected up SSD reliability papers into one place. Even in those days endurance was just one part of the reliability mix as you can see.

By 2010 the SSD market had become much better acquainted with the idea that SSD controllers were an important and separate part of every SSD design and specialist companies in that area were surprised to learn how much hunger there was for trustworthy articles which explained what they did and why. My article Imprinting the brain of the SSD noted how big a change that was compared to before.

After that time most stories about SSD endurance became part of mainstream SSD news - but you can sample how some of the metrics and ideas appeared in archived versions of the SSD controller page and infrequent updates to my 2007 article - SSD endurance myths and legends.

An important sanity check is that most of the key people in the SSD industry (including designers of SSDs and founders of SSD companies and their biggest customers) were reading these pages during this period. And my self appointed aim was to help guide the industry forwards in directions which aligned with my own predictions.

2010 - and the start of the SSD market Bubble

2010 was Year 1 of the SSD Market Bubble. (For the significance of other years - see SSD market history.)

From this point the hitherto unknown and secretive SSD controller industry invested huge intellectual resources and amazing talent to enable each successive generation of (less reliable) flash memory to be used in reliable SSDs and systems. And as the SSD market continued to grow in revenue and strategic importance - the big manufacturers of memory - which earlier had little reason to understand the SSD potential of the chips they had been making - began to digest lessons from the SSD market and understand the applications for their raw memories better.

a 2018 perspective

Because users were deploying SSDs in different ways to earlier types of storage and memory the industry took another 5 to 10 years to characterize what a "good" level of endurance would be for particular applications.

And every few years when new types of 2D flash memory came into production with greater capacity but lower endurance - the wear mitigation arguments and analysis began again from new (and more challenging) starting points.

In the past 10 years the 5 factors which have done the most to set the stage for the market acceptance of flash memory endurance in usable SSD roles have been:-
  • adoption of DWPD - drive writes per day - as a standard way to signal which applications a new SSD has been optimized for.

    Endurance became a knowable factor and users didn't need to be scared about its existence - as long as they chose the right SSD for the application.

    The SSD market grew alongside other markets which it helped to create. So for example the idea of a low DWPD SSD - in cloud infrastructure - as a valued and desirable product would never have been in anyone's SSD business plan in 2004 - when the primary user value proposition for enterprise use was server acceleration.
  • Adaptive flash care management & DSP integrity IP in SSDs - was a movement in SSD controller design to embed extremely sophisticated intelligence and noise filtering techniques inside each SSD - which, among other things, enabled the use of lighter weight (and less damaging) write pulses compared to traditional hard coded ECC.
  • The adoption of big SSD controller architecture - using software (for example Software-Defined Flash (Baidu - March 2014), Host Managed SSD Technology (OCZ - October 2015) and other techniques and names) to leverage array level intelligence and also adaptive intelligence flow symmetry (see article for citations) to manage the movement of data and reliability in SSD arrays - has become the normal way of doing things.

    Each AFA company and cloud integrator uses its own brew of standard and proprietary IP tricks - and this is an area of design which is still evolving with in-situ processing.
  • Machine Learning as the discovery tool for the best ways to explore the optimum settings for R/W (timings and pulse shapes) when characterizing new generations of 3D flash.

    This is a technique (first widely disclosed in 2013) which promises to maximise flash endurance when used in conjunction with lightweight SSD controllers. (As opposed to the kind of heavyweight energy and CPU footprint required by adaptive DSP to achieve similar ends.)

    It was pioneered by NVMdurance.
  • the endurance rot stopped with 3D flash

    The slide to worsening endurance ratings in raw flash memory seemingly paused and improved during the transition from 2D to 3D due to the use of more expensive materials and more charge being trapped in each cell and with higher capacity coming from more planes of flash cells rather than a single plane of smaller cells.
beyond flash?

All nvms have endurance issues - although some are more serious than others - compared to flash. For example first generation 3DXPoint PCIe SSDs from Intel had similar or worse DWPD ratings than best in class flash SSDs. Whereas other memory types such as MRAM and FRAM have endurance which is orders of magnitude higher than flash - although their data capacity per chip is currently orders of magnitude smaller than flash.

It seems likely that DWPD will remain a useful way to select SSDs for storage. However the best way to characterize the reliability (and performance) of memories in new tiered memory systems (DIMM wars and cloud adapted memory) is a problem which is as far away from any commonly agreed useful solutions as today's neatly ordered classifications and segmentations of SSDs were 10 years ago.


The subject of memory endurance and how that relates to the reliability of SSDs and tiered memory is one which has provided much food for thought for millions of my readers in past years.

But there have been lighter moments too.

I combined a serious historic narrative with some attempt at humor in the "naughty flash" description in sugaring flash for the enterprise.

Whereas my article razzle dazzling flash SSD cell care and retirement plans was intended to show just how ridiculous some of the comparative endurance management claims in the SSD market had already become in 2012.

re RATIOs in SSD architecture

and the value of comparing one thing you might not understand so well to the size of a more familiar other

by Zsolt Kerekes, editor - June 18, 2018
A new blog - Memory for Compute - Where do we go From Here? - by Rob Peglar, President at Advanced Computation and Storage - among other things - comments on the shortcomings of legacy X86 processor designs when viewed from the stance of cores to memory channels (and - this is my scene setting, not the words that Rob used - it's 2018 not 1999 for goodness sake! - we're paradiving into the memoryfication of everything era - we're falling through the air without a jetpack - so we don't have too much time for what-is-life retrospection before we hit that water and start tasting real juicy vendor shark meat - spiced up to make it more palatable with big sprinklings of sponsored soothsayer seasoning. Or we may be the shark meat - or we may even put the shark to good use to pull our boat along - depending on your perspective).

On linkedin I said something along these lines...

"The ratio of processor cores to memory channels and local memory capacity is a solid pivot from which to leverage your forthcoming architecture blogs. I love ratios as they have always provided a simple way to communicate with readers the design choices in products which tell a lot to other experts in that field. So in the past Ive written about ratios like RAM cache to flash in SSDs, ratio of chips to controllers (small versus big controller architecture), fast server based PCIe SSD capacity ratio to SAN array capacity - which said a lot about the legacy software base (in 2012 when I wrote that note). I look forward to seeing your next couple of blogs and will tell my readers about them."

well readers - this is me making good on that promise

I was on my phone when I did the linkedin comment so I didn't extend the list to things like:- ratio of petabytes/kilowatts in network storage capacity - which led us to the petabyte SSD market, or the ratio of chips to storage capacity - aka design efficiency - (including raw flash to usable capacity - whatever that is - real or virtual - RAM is whatever the software is happy with BTW and in another universe could be implemented by self repairing fast paper tape). I could have made these lists longer... But I'm trying to break the habit.
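Two of those ratios can be made concrete in a few lines (the figures below are purely illustrative assumptions, not measurements of any particular product):

```python
def overprovisioning_pct(raw_gb, usable_gb):
    """Spare raw flash held back, as a percentage of usable capacity."""
    return 100.0 * (raw_gb - usable_gb) / usable_gb

def ram_cache_ratio(ram_gb, flash_gb):
    """RAM cache capacity relative to the flash it fronts."""
    return ram_gb / flash_gb

# A "1TB" SSD built from 1,152GB of raw flash holds back ~15% as spare area
# for wear leveling, garbage collection and bad block replacement.
print(round(overprovisioning_pct(1152, 1000), 1))
# 1GB of DRAM cache fronting 1TB of flash - a 1:1000 cache ratio.
print(ram_cache_ratio(1, 1000))
```

The point of the ratio is exactly as described above: you don't need to understand the controller internals to see that a 28% overprovisioned enterprise drive and a 7% overprovisioned consumer drive have been designed for very different write workloads.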

The important thing is that you - as a regular mortal who hasn't wasted all your life immersed in arcane technology - nevertheless have a sporting chance of being able to rationally recognize the goodness or otherwise of complex emerging products - which very few people on the planet understand - for your own purposes and based on your own risk assessments - using the comparative power of ratios. And you don't need editors or analysts or vendors to tell you that the chosen outputs from their calculators (or sponsored opinions) are more right for you.

In my own career - even when I have done deep analysis and all that stuff - I have sometimes chosen the right direction by doing the sums wrong. But I've always been lucky.

Anyway - I think you should stay tuned to Rob Peglar's blogs - because I have a gut feel the plot is going to get very interesting.

Editor's additional comments:- And here's another thing... if you like the idea of using ratios to understand SSD design thinking then take a look too at some similar ideas I've written about which cut across the analysis of existing and future architectures in other ways.
  • the SSD design heresies - is a sanity check to remind you that designers don't always agree on the permutations of design features and technology adoption roadmaps favored by their competitors
SSD ad - click for more info

are we ready for infinitely faster RAM?

(and what would it be worth)

by Zsolt Kerekes, editor - May 14, 2018
If someone could offer you a memory system which had the same storage density (bits per chip / module / box) as mainstream RAM - but which had latency and bandwidth (as measured by what the application sees) which was infinitely faster - could we use it? how much would that be worth? and how would it change markets? For the past 25 years the computer market has voted with its spending for bigger rather than faster memory - but is the market now receptive to a disruptive change in its ideas about the user value proposition of memory performance?

what's infinitely faster RAM?

I know that some of you who are reading this (and maybe it's You) are the kind of people who found companies or fund them (thanks for staying with me on this) and when you noticed the words "infinitely faster" in my title above you wondered if it was some kind of late April Fool article. (No - I wrote something else.)

Infinitely? Really? - I know you can't put the value of infinity into a business plan (although it does come in useful sometimes for testing boundary assumptions about how markets will react to disruptive change). So let me explain my use of the term "infinitely faster RAM" in this article to mean "RAM that's maybe 20x or 100x faster than what you can get today - as measured by critical bottlenecks in applications". For my purposes here I'm saying that latency is the most critical fastness factor - and while acknowledging that there aren't generally accepted methods of defining what "faster memory" means - I think it's good enough for my argument below to assume that if the black box behaves consistently as if the memory was X-times faster (or X-times sooner) than before - then that's a good enough understanding.

This also assumes we're on the same page - when it comes to agreement on - what is RAM? - which is a shifting subject I have written about before. For my purposes - if it behaves like RAM - and can transparently replace conventional RAM (chips, modules or boxes or markets) then that's good enough for now - without worrying about implementation details. I'm not going to speculate on the technology of the infinitely faster RAM - I'll leave that problem for someone else - (maybe You). In this article I'm posing the same kind of philosophical and business what-ifs which I did in earlier phases of the SSD and memoryfication markets - which asked - if we could get this new stuff - what new products and markets would we get? - and how would that change pre-existing markets.

No! to the infinitely fast one transistor memory cell. I'd like to make it clear I'm not interested here in the idea of so-called "ultra fast" transistors, memory cells and that ilk of research. As far as I'm concerned if you can't put many megabytes and preferably gigabytes of raw capacity into the infinitely faster RAM (at chip / module level) then it's not the kind of animal I'm talking about here.

some lessons from history - applications create markets and define acceptable latency

Up to the early 2000s the value propositions for different implementations of semiconductor RAM were graded by latency and power - and the order of precedence (DRAM, SRAM, SoC memory on chip - from slowest to fastest) hadn't changed since the dawning of their mutual market coexistence in the 1970s.

If you wanted bigger capacity - you chose DRAM. If you wanted faster latency at a board level of integration you chose SRAM - which ran hotter and was smaller in capacity. If you wanted faster than that - there was no contest. It had to be SoC (usually in the form of RAM on a true ASIC or gate array but also latterly on FPGA).

At a board level - and system level - DRAM and SRAM reached their latency limits in about 1999 and haven't got any faster since.

It didn't matter so much in the early 2000s because enterprise processors weren't getting any faster either. And the shape of applications (users doing simple stuff on the internet) meant that datacenters could get by with affordable technologies which offered higher densities and lower power (more users satisfied per box or watt or dollar) rather than users getting speed they didn't need and couldn't use. The computer industry didn't need faster memory. And when demand for more applications performance did grow - particularly in the early days of the cell phone market (and social networks) the enterprise SSD market took up the slack adequately - as there had been plenty of latency bottlenecks built around earlier generations of (rotating) storage.

Nowadays cell phones are coinage, spies and slot machines. And they've been joined by IoT. There's so much intelligence which can be gathered about the meaning of it all. But no memory or computing platforms fast enough to resolve everything which can be imagined by the next master plan in a timely fashion.

memory world war 1 - flash versus DRAM - in enterprise storage

I guess the first time there was a serious challenge to the role of enterprise DRAM from another memory type in the acceleration space was in the early years of enterprise flash adoption (from around 2004). Which was fought out and soon won by flash arrays supplanting RAM SSDs.

If you'd asked most SSD people even as late as 2007 whether they really expected DRAM to be replaced by flash as the mainstream enterprise SSD based acceleration technology - there were arguments which could be made either way. But (as we now know) by 2012 the RAM SSD market was effectively extinct. The principal reasons that a slower latency memory (flash) could and did replace a faster latency memory (DRAM) in an acceleration role were:-
  • Typical user installations needed more memory capacity than could be integrated by DRAM in a single box. The latency fabric from interfaces wrapped around these SSD assets negated the latency advantage of DRAM chips compared to flash chips. (The flash chips had much higher storage capacity per chip and required much less electrical power.)
  • Most of the easy acceleration advantages of enterprise SSDs came from read requests rather than writes. That's just the way that the legacy installed software base worked. That bias in the profile of memory R/W meant that the asymmetric R/W latency of flash chips - with reads being orders of magnitude faster than writes - was not a serious obstacle to adoption.
Those acceleration lessons - initially duelled out in Fibre Channel SAN rackmount boxes - were won by the time the PCIe SSD market got started. It showed that a faster memory could lose out to a slower memory in an acceleration focused role.
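Those two reasons can be put in rough numbers. A minimal model (all latencies are illustrative assumptions, not measurements of any real product): weight read and write latency by the workload mix, then add the fabric round trip that each box actually sits behind.

```python
def effective_latency_us(read_frac, read_us, write_us, fabric_us=0.0):
    """Average access latency seen by the host for a given R/W mix."""
    return fabric_us + read_frac * read_us + (1 - read_frac) * write_us

# 90% read workload; illustrative chip latencies; same SAN fabric overhead.
flash_array = effective_latency_us(0.90, 25, 200, fabric_us=100)   # ~142.5us
dram_array = effective_latency_us(0.90, 0.1, 0.1, fabric_us=100)   # ~100.1us
# DRAM's 250x read advantage at chip level shrinks to ~1.4x at box level -
# which flash's capacity and power advantages could then outweigh.
print(round(flash_array / dram_array, 2))
```

The read-heavy mix keeps flash's slow writes from dominating the average, and the shared fabric overhead erases most of DRAM's raw chip advantage - the two effects the bullets above describe.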

But that was storage... What does that tell us - if anything about different speeds of memory used as memory?

The early experiences (2014 to 2017) of tiered memory from the DIMM wars market - in which flash can be tiered with DRAM (using form factors as diverse as DIMMs, PCIe modules and even SATA arrays) is that there can be trade offs in big data applications whereby trading size of memory in the box can offset the native speed of raw memory (when the fastest memory is DRAM) for exactly the same reasons which pertained in flash storage accelerators. And the benefits of doing so have been mostly related to improvements in cost rather than any risk free overwhelming advantages in application latency. That's because the low hanging fruit of tiered flash speedup was mostly already harvested by the bottlenecks uncovered and bypassed by server based PCIe SSD adoption.

Here's one last look backwards at the lessons from history - before going on to speculate about the value of infinitely faster memory in the future.

I think that if you could go back in time and take with you a warehouse of today's fastest and highest capacity DRAM chips - along with plug compatible adapters to retrofit them into past server and storage systems - then you wouldn't change the world of applications because most performance constrained servers in the past - already had the maximum amount of DRAM installed. And if they didn't - then there were so many bottlenecks built into interfaces and software that - even with an ideally configured modern memory array taken back in time and fitted as an upgrade - you would at best typically get a 2x or 3x speedup or - very often - get no speed up at all. That's why the early adoption of enterprise SSD accelerators was slow and problematic. There were too many problems baked into the ecosystem for any one new product to make enough of a change.
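That "2x or 3x at best" observation is essentially Amdahl's law applied to memory: if only part of the run time is memory-bound, the untouched part caps the gain no matter how fast the upgrade. A generic illustration (the 60% memory-bound fraction is an assumption for the example, not a measurement):

```python
def amdahl_speedup(memory_bound_fraction, memory_speedup):
    """Overall speedup when only the memory-bound fraction gets faster."""
    f, s = memory_bound_fraction, memory_speedup
    return 1.0 / ((1.0 - f) + f / s)

# Even 100x faster memory lifts a 60% memory-bound system by only ~2.5x...
print(round(amdahl_speedup(0.60, 100), 2))
# ...and infinitely fast memory can't beat the 1 / (1 - f) = 2.5x ceiling.
print(round(amdahl_speedup(0.60, 1e12), 2))
```

This is why so many bottlenecks (interfaces, software, architecture) had to move before faster memory or early SSD accelerators could show their full value.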

today's world of enterprise memoryfication and memory defined software

Today's computing market - where SSDs are everywhere and storage latency bands have a precise value and can be controlled - is better placed to consider and use faster memory systems. It's the only direction of travel to enable faster software.

The business needs are well understood. It's easy nowadays to see a direct link between faster decisions (based on diverse internet signals picked up and analyzed by real-time AI) and measurable economic outcomes. Also new business models and markets are being created by the application of heavy duty machine learning.

The software industry has had nearly a decade to become accustomed to thinking about having meaningful choices in latency - which are determined by different types and tiers of semiconductor devices. Most of the benefits of SSDs came when enough software changed to fit in with an SSD world. Those changes were hard to make because of decades of architectural lethargy in the leadup to the modern SSD era. The next stage of software change - towards more memoryfication - is already underway - due to momentum and despite the lack of revolutionary memory technologies to take hardware to the next level.

So if you could offer a GB scale memory chip with 20x or 100x lower latency than existing mainstream DRAM - there are companies who could do useful things with it.

simplistic valuation of much faster memory accelerator systems

The most obvious market sizing example for infinitely faster memory accelerator systems is their application in dealing with temporary data.

A simplistic way of looking at this is that overwhelmingly most of the data which enters processor space is temporary data. So if you have a memory accelerator which is 100x faster than the memory which you used before to solve this type of problem - then provided that you don't run out of new data to feed into the machine, and provided that the usable memory size for any single instance of the computation fits into the memory space provided - you need approximately 100x fewer machines to provide the same services.

The equivalence of speed and machines (one machine which is 100x faster being equivalent in capability to 100 slower machines) is similar to the SSD-CPU equivalency model which helped to cost justify SSD accelerators in the early 2000s as server replacements rather than expensive $/TB storage.
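The machine-equivalence arithmetic above can be sketched in a few lines. All the figures below are hypothetical illustrations (the speedup, server cost and utilization factor are assumptions, not vendor data) - the point is simply that the accelerator's price ceiling is bounded by the cost of the conventional hardware it displaces.

```python
# A minimal sketch of the speed-for-machines equivalence argument.
# All numbers below are illustrative assumptions, not real product data.

def equivalent_machines(speedup: float, fit_factor: float = 1.0) -> float:
    """One accelerator that is `speedup` x faster stands in for roughly
    speedup * fit_factor conventional machines - provided the workload
    fits the accelerator's memory space and new data keeps arriving.
    `fit_factor` discounts for the share of the workload shape that
    actually maps onto the accelerator (value is application specific)."""
    return speedup * fit_factor

def breakeven_price(speedup: float, server_cost: float,
                    fit_factor: float = 1.0) -> float:
    """Simplistic ceiling price for one accelerator: the total cost of
    the conventional servers it would displace."""
    return equivalent_machines(speedup, fit_factor) * server_cost

# Example: a 100x faster memory accelerator, $10k conventional servers,
# but only 60% of the problem shape universe suits the accelerator.
print(equivalent_machines(100, 0.6))        # ~60 servers displaced
print(breakeven_price(100, 10_000, 0.6))    # ~$600,000 price ceiling
```

The same back-of-envelope model is what made early enterprise SSD accelerators saleable: the comparison was against racks of servers, not against $/TB disk.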

However, the memory accelerator has a greater utility than this comparison might suggest on its own, because the ability to solve problems sooner, and the ability to solve more complex problems for the first time within a shorter real-time period, create new value propositions by making new algorithmic engines viable for new markets and applications.

This ability to create new markets is like the dynamic energy seen in the computer market in the 1980s and 1990s which began with microprocessors making computation cheaper but ended (around 1999) when limits were reached in how fast the GHz clock rate of any particular core could run. And the collective decision of server companies that faster was not necessary is partly to blame for the dumbing down of processor design architecture for nearly 20 years thereafter.

The good thing which emerged from that lack of investment in making commercial processors faster was that it created the fertile soil for the SSD acceleration market - which became the only game in town.

And now that there is a greater understanding of memory (and the interplay of roles and values between raw memory capacity and raw latency) - helped by an SSD rich ecosystem in which larger portions of any computing problem can be economically gathered into systems where the random access time is a handful of microseconds rather than tens of milliseconds - the creative juices of computing architecture have turned to the much needed creation of new memoryfication-compatible computing engines and new processor architectures.

That's what has been giving rise to the proliferation in recent years of commercial in-situ SSDs, processing in / near memory FPGA arrays and dedicated memory accelerators for machine learning and similar neural algorithms.

The early implementations which you can read about in the SSD news archives demonstrate 2 things.
  • The value of an Nx faster memory-compute accelerator can indeed be measured by at least the cost of all the previous traditional hardware which was needed to solve similar problems before. So 1 new PU (TPU etc) is indeed worth Nx conventional server CPUs / GPUs - when there is sufficient work within a particular problem shape universe. Their value is application specific. (Some workloads are accelerated better than others.)
  • Modern memory accelerators don't have to resemble either dumb memory or dumb processors. Provided that they can interoperate with conventional servers and infrastructure the new memory accelerators are best viewed as black boxes whose internal details may change and adapt (just like search engine algorithms) when more data suggests future areas of improvement.
This creates an existential problem for makers of memory testers - because the future of high performance memory systems (where the money used to be) will become increasingly proprietary.

And as you can see, this will create a cut-off point for manufacturers of high end server memory - because high end memory systems will inevitably look more like a custom processor market.

And as for traditional processor makers - the new memory accelerator systems don't care about and don't need their "instruction set" based backwards compatibilities and roadmaps - because the new ML / NN engine roadmaps (if they need any compatibility at all) will be "application" and "algorithm" based... More like the kind of compatibilities you see in successive generations of cloud APIs.

An interesting question is how many different types of memory accelerators the world needs and can support.

One view might be that the world doesn't need more than a handful (one for Google, Amazon, Apple, Baidu etc) because if the biggest benefit and visibility into design optimization only occurs at massive scale then those companies will each drive their own designs.

On the other hand if this is indeed the start of a new renaissance in computing architecture - then you could argue that there will be the usual explosion of startups hoping to serve new markets created by the new ideas in architecture. (And there may be benefits of such new ideas which occur without being colocated in the cloud.)


Going back to my questions in the title...

Are we ready for infinitely faster RAM?

I think I've made a case for the answer being - Yes. More ready than we've ever been before.

And as to - what would it be worth?

My advice to founders of startups in infinitely faster memory accelerators is - don't sprinkle the number "infinity" about too much in your spreadsheets when guessing the market size or attached to the price which you think ideal customers would be prepared to pay. There are plenty of big numbers you can choose which are smaller and will still sound impressive without straining credulity.


"Unlike the story - selling Toshiba - and other past SSD company acquisitions - which were big stories I don't expect that the sale of this little web site will get huge coverage in other media. And I'm not going to make a song and dance about it." is for sale
If you could go back in time and take with you - in the DeLorean - a factory full of modern memory chips and SSDs (along with backwards compatible adapters) what real impact would that have?
are we ready for infinitely faster RAM?
Choosing a slow interface for a high capacity SSD is the route whereby one innovative enterprise SSD maker was able to offer "no limits DWPD".
what's the state of DWPD?
There's a genuine problem for the SCM
(storage class memory) industry.
How to describe performance.
is it realistic to talk about memory IOPS?
controllernomics - is that even a real word?
The semiconductor memory business has toggled between under supply and over supply since the 1970s.
an SSD view of past, present and future boom bust cycles in the memory market
As you may have guessed I talk to a lot of companies which design SSDs and SSD controllers.

I also talk to people who design processors.
optimizing CPUs in the Post Modernist Era

Some of the winners and losers from the memory shortages in 2017 were easy to spot. But there have been new opportunities created too.
miscellaneous consequences of the 2017 memory shortages

Despite many revolutionary changes in memory systems design and SSD adoption in the past decade we are still not at the stage where it's possible to predict and plot the next decade as merely an incremental set of refinements of what we've got now.
Are we there yet? - 40 years of thinking about SSDs

Say farewell to friction-free
borderless memory markets.
can memory chips be made in the wrong place?

Data recovery from DRAM?
I thought everyone knew that

the dividing line between storage and memory is more fluid than ever before
where are we heading with memory intensive systems?

Enterprise DRAM has the same latency now as in 2000 (or worse). The CPU-DRAM-HDD oligopoly optimized DRAM for a different set of assumptions than we have today in the post modern SSD era.
latency loving reasons for fading out DRAM

I said to a leading NVDIMM company... This may be a stupid question but... have you thought of supporting a RAMdisk emulation in your new "flash tiered as RAM" solution?
what could we learn?

In some ways the SSD market is like that lakeside village. It's not so long ago that no one even knew where it was.
Can you tell me the best way to get to SSD Street?


The enterprise SSD story...

why's the plot so complicated?

and was there ever a missed opportunity in the past to simplify it?
the elusive golden age of enterprise SSDs

To be? or Not to be?
hold up capacitors in 2.5" MIL SSDs
0 to 3 seconds - aspects of extreme SSD design

Why do SSD revenue forecasts by enterprise vendors so often fail to anticipate crashes in demand from their existing customers?
meet Ken and the enterprise SSD software event horizon

the past (and future) of HDD vs SSD sophistry
How will the hard drive market fare...
in a solid state storage world?

Compared to EMC...

ours is better
can you take these AFA startups seriously?

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing

Reliability is an important factor in many applications which use SSDs. But can you trust an SSD brand just because it claims to be reliable in its ads?
the cultivation and nurturing of "reliability"
in a 2.5" embedded SSD brand

A couple of years ago - if you were a big company wanting to get into the SSD market by an acquisition or strategic investment then a budget somewhere between $500 million and $1 billion would have seemed like plenty.
VCs in SSDs and storage

Adaptive dynamic refresh to improve ECC and power consumption, tiered memory latencies and some other ideas.
Are you ready to rethink RAM?

90% of the enterprise SSD companies which you know have no good reasons to survive.
market consolidation - why? how? when?

With hundreds of patents already pending in this topic there's a high probability that the SSD vendor won't give you the details. It's enough to get the general idea.
Adaptive flash R/W and DSP ECC IP in SSDs

SSD Market - Easy Entry Route #1 - Buy a Company which Already Makes SSDs. (And here's a list of who bought whom.)
3 Easy Ways to Enter the SSD Market

"You'd think... someone should know all the answers by now. "
what do enterprise SSD users want?

We can't afford NOT to be in the SSD market...
Hostage to the fortunes of SSD

Why buy SSDs?
6 user value propositions for buying SSDs

"Play it again Sam - as time goes by..."
the Problem with Write IOPS - in flash SSDs

Why can't SSD's true believers agree upon a single coherent vision for the future of solid state storage? (They never did.)
the SSD Heresies.

The predictability and calm, careful approach to new technology adoption in industrial SSDs was for a long time regarded as a virtue compared to other brash markets.
say farewell to reassuringly boring industrial SSDs

If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations

The memory chip count ceiling around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
size matters in SSD controller architecture

A popular fad in selling flash SSDs is life assurance and health care claims as in - my flash SSD controller care scheme is 100x better (than all the rest).
razzle dazzling flash SSD cell care

These are the "Editor Proven" cheerleaders and editorial meetings fixers of the storage and SSD industry.
who's who in SSD and storage PR?