the home page of StorageSearch - leading the way to the new storage frontier
SSD news since 1998
SSD news
military SSDs - directory and news
DRAM's latency secret
SSD SoCs / controllers
industrial SSDs
top SSD companies - there are over 200 SSD OEMs - which ones matter?
more SSD articles - the top 30 solid state drive articles

SSD controllers
the fastest SSDs
DWPD - examples
history of SSD market
SAS SSDs - directory and timeline
the serious business of custom SSD
any lessons for SSD?
boom bust cycles in memory markets


12 months of headlines
February 2017 Tachyum emerged from stealth mode
January 2017 Pure Storage said the "new stack" is becoming the standard thing.

Crossbar announced it was sampling 8Mb ReRAM based on 40nm CMOS friendly technology.
December 2016 Violin seeks bankruptcy protection.

4Gb MRAM prototypes unveiled by SK Hynix and Toshiba
November 2016 Silicon Motion announced the "world's first merchant SD 5.1 controller solution."
October 2016 Rambus announced it is exploring the use of Xilinx FPGAs in its Smart Data Acceleration research program.
September 2016 Everspin files for IPO to expand MRAM
August 2016 Seagate previews 60TB 3.5" SAS SSD

Nimbus demonstrates 4PB 4U HA AFA at FMS
July 2016 Diablo announces volume availability of its Memory1 128GB DDR4 DIMM
June 2016 Pure said its AFA revenue in Q1 2016 was more than that of the leading HDD array brand
May 2016 efficiently coded memory architecture unveiled in systems by Symbolic IO

Encrip announces tri-state coded DRAM IP which can be used with any standard process
April 2016 Samsung began mass producing the industry's first 10nm class 8Gb DDR4 DRAM chips
March 2016 New funding for endurance stretching NVMdurance

Cadence and Mellanox demonstrate PCIe 4.0 interoperability at 16Gbps.
archived storage news 2000 to 2017



optimizing CPUs for use in SSDs

(Editor:- January 26, 2017 - This article is part of what I said to a reader this week about optimizing CPUs for use in SSDs.)

The characteristics of CPUs used within COTS SSDs vary widely.

One direction of influence comes from the anticipated market.

And this is the aspect which is easiest to understand.

That results in different CPU preferences for enterprise SSDs designed for high solo performance (such as Mangstor's) compared with 2.5" SSDs deployed in arrays.

And power consumption can be a key factor in industrial SSDs.

Hyperstone's controllers, which are optimized for low power consumption, are very different in their choice of CPU - and necessarily in their flash algorithms too - because they can't depend on the kind of RAM cache which makes enterprise endurance management code easier to design.

But unfortunately I think that analyzing what happened in the past in the SSD controller / SSD processor market isn't a reliable predictor for future controllers.

An influence which has been trickling down from lessons learned in the array market is the powerful system level benefit of exercising intelligent control from a viewpoint outside the ken of the SSD controller located in the flash storage form factor.

And partly due to that applications awareness - and likely to tear apart many controller business plans - are the contradictory requirements of custom versus standard products (described in more detail in my 2015 SSD ideas blog - SSD directional push-me pull-yous).

And - as always with the SSD market different companies can take very different approaches to how they pick the permutations which deliver their ideal SSD.

Added to that I think a new emerging factor in memory systems will be whether the CPUs are able to deliver applications level benefits by integrating nvm or persistent memory within the same die as the processor itself.

That's partly a process challenge but also a massive architectural and business gamble.

In the past it has been obvious that some SSDs incorporated nvm registers or other small persistent storage memory (apart from the external flash) to deliver power fail data integrity features which didn't need capacitor holdup.

What is less clear is the direction of travel with tiered memory on the CPU.

When it comes to chip space budget - is it worth trading cores to enable bigger (slower) persistent memory upstream of conventional cache?

This was already complicated when external memory was assumed to be DRAM only, fed from HDD storage buckets. The new latency buckets of SSD storage and bigger tiered semiconductor main memory change the latency of the data bucket chain - and the ability to perform in-situ memory processing may change CPU architecture too.
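The changed data bucket chain can be sketched with a simple average access latency model. This is my illustration only - the tier latencies and hit rates below are assumed round numbers, not measured figures:

```python
# Illustrative average-access-latency model for a tiered data path.
# Each tier is (name, latency_ns, hit_rate); a request falls through
# to the next tier on a miss. All numbers are assumptions for the sketch.

def average_latency_ns(tiers):
    """Expected latency when each tier serves hit_rate of the
    requests that reach it; the last tier must catch everything."""
    total = 0.0
    reach = 1.0  # probability a request reaches this tier
    for name, latency_ns, hit_rate in tiers:
        total += reach * hit_rate * latency_ns
        reach *= (1.0 - hit_rate)
    assert reach < 1e-9, "last tier should have hit_rate = 1.0"
    return total

# Old chain: DRAM fed from HDD storage buckets.
old = [("DRAM", 100, 0.99), ("HDD", 5_000_000, 1.0)]

# New chain: DRAM, then a persistent memory tier, then NVMe SSD.
new = [("DRAM", 100, 0.90), ("PMEM", 400, 0.90), ("NVMe SSD", 100_000, 1.0)]

print(average_latency_ns(old))  # dominated by rare but huge HDD misses
print(average_latency_ns(new))
```

Even with these made-up numbers the point shows through: the expected latency of the old chain is dominated by the rare trips to the slowest bucket, which is why changing the slowest tiers reshapes the whole chain.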

For some applications that might be a good trade. Better integration at lower latency with the CPU and the memory system. And merging of CPU and SSD functionality. But this would be a risky experiment for a component vendor who doesn't have a systems level marketing channel to sell the enhanced merged feature set.

The only clear thing is that whatever made a good SSD in the past will no longer be adequate in the future.

More flexibility will be key.

It's not just the CPU making the SSD work better. The SSD makes the CPU work better too.

SSD-CPU equivalence and SSD and memory systems equivalence aren't new ideas - but the scope for innovative improvement is still massive.

1.0" SSDs
1.8" SSDs
2.5" SSDs
3.5" SSDs

1973 - 2017 - the SSD story

2013 - SSD market changes
2014 - SSD market changes
2015 - SSD market changes
2016 - SSD market changes

20K RPM HDDs - no-show

About the publisher - 1991 to 2017
Adaptive R/W flash IP + DSP ECC
Acquired SSD companies
Acquired storage companies
Advertising on
Analysts - SSD market
Analysts - storage market
Animal Brands in the storage market
Architecture - network storage
Articles - SSD
Auto tiering SSDs

Bad block management in flash SSDs
Benchmarks - SSD - can you trust them?
Big market picture of SSDs
Bookmarks from SSD leaders
Branding Strategies in the SSD market

Chips - storage interface
Chips - SSD on a chip & DOMs
Click rates - SSD banner ads
Cloud with SSDs inside
Consolidation trends in the enterprise flash market
Consumer SSDs
Controller chips for SSDs
Cost of SSDs

Data recovery for flash SSDs?
DIMM wars in the SSD market
Disk sanitizers
DWPD - examples from the market

Efficiency - comparing SSD designs
Encryption - impacts in notebook SSDs
Endurance - in flash SSDs
enterprise flash SSDs history
enterprise flash array market - segmentation
enterprise SSD story - plot complications
EOL SSDs - issues for buyers

FITs (failures in time) & SSDs
Fast purge / erase SSDs
Fastest SSDs
Flash Memory

Garbage Collection and other SSD jargon

Hard drives
High availability enterprise SSDs
History of data storage
History of disk to disk backup
History of the SPARC systems market
History of SSD market
Hold up capacitors in military SSDs
hybrid DIMMs
hybrid drives
hybrid storage arrays

Iceberg syndrome - SSD capacity you don't see
Imprinting the brain of the SSD
Industrial SSDs
Industry trade associations (ORGs)
IOPS in flash SSDs

Jargon - flash SSD

Legacy vs New Dynasty - enterprise SSDs
Limericks about flash endurance

M.2 SSDs
Market research (all storage)
Marketing Views
Memory Channel SSDs
Mice and storage
Military storage

Notebook SSDs - timeline
Petabyte SSD roadmap
Power loss - sudden in SSDs
Power, Speed and Strength in SSD brands
PR agencies - storage and SSD
Processors in SSD controllers

Rackmount SSDs
RAID systems (incl RAIC RAISE etc)
RAM cache ratios in flash SSDs
RAM memory chips
RAM SSDs versus Flash SSDs
Reliability - SSD / storage
RPM and hard drive spin speeds

SCSI SSDs - legacy parallel
Symmetry in SSD design

Tape libraries

Test Equipment
Top 20 SSD companies
Tuning SANs with SSDs

USB storage
User Value Propositions for SSDs

VC funds in storage
Videos - about SSDs

Zsolt Kerekes - editor (linkedin). The site is published by ACSL, founded in 1991.

© 1992 to 2017 all rights reserved.

Editor's note:- I currently talk to more than 600 makers of SSDs and another 100 or so companies which are closely enmeshed around the SSD ecosphere.

Most of these SSD companies (but by no means all) are profiled here on the mouse site.

I still learn about new SSD companies every week, including many in stealth mode. If you're interested in the growing big picture of the SSD market canvas - StorageSearch will help you along the way.

Many SSD company CEOs read our site too - and say they value our thought leading SSD content - even when we say something that's not always comfortable to hear. I hope you'll find it useful too.

Privacy policies.

We never compile email lists from this web site, not for our own use nor anyone else's, and we never ask you to log-in to read any of our own content on this web site. We don't do pop-ups or pop-unders nor blocker ads and we don't place cookies in your computer. We've been publishing on the web since 1996 and these have always been the principles we adhere to.

controllernomics and user risk reward with big memory "flash as RAM"

by Zsolt Kerekes, editor - February 16, 2017
A recent conversation I had with Kevin Wagner at Diablo Technologies began with talking about the recent benchmarks they have been sharing related to their Memory1 (128GB flash as RAM DIMMs) when running large scale analytics software. But it finished somewhere unexpected.

I'll start with the benchmarks.

Kevin said that some of the results (for SPARK SQL performance) came from a real financial customer who had run these tests themselves using data from their own production environments.

They were able to achieve a 4 to 1 server reduction using Memory1 enhanced servers (3x M1 nodes versus 12x 384GB DRAM-only nodes) and still get 24% faster performance.

They also have some publicly shareable benchmarks which show useful acceleration when using identical numbers of server nodes and comparing the results to alternative SSD implementations (NVMe and arrays of SATA SSDs).

As in the early phases of flash array market 10 years ago - you have to filter yourself in or out of following up interest in this depending on whether you think you have the right type of problem.

The risk reward factor of using a DIMM based flash as RAM system like Memory1 is that users with big memory apps can choose whether they prefer the idea of getting faster results (using a similar number of servers) or using fewer servers to save costs (or some combination). But not all jobs will run faster. Small jobs which would have fitted into the DRAM comfortably anyway could run up to 30% slower.
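That "filter yourself in or out" judgement can be sketched in code. The thresholds and classifications below are my illustrative assumptions drawn from the trade-offs described above - not Diablo's sizing rules:

```python
# Rough "filter yourself in or out" sketch for a flash-as-RAM DIMM tier,
# based on the trade-offs described in the article. The logic and
# labels are illustrative assumptions, not vendor guidance.

def flash_as_ram_verdict(working_set_gb, dram_per_node_gb, nodes):
    """Classify a job's working set against the DRAM actually installed."""
    total_dram = dram_per_node_gb * nodes
    if working_set_gb <= dram_per_node_gb:
        # Fits in one node's DRAM: flash-as-RAM adds latency, no benefit.
        return "skip: may run slower (small job already fits in DRAM)"
    if working_set_gb <= total_dram:
        # Fits across the cluster: candidate for server consolidation.
        return "candidate: fewer, bigger-memory nodes may suffice"
    # Exceeds all installed DRAM: the big-memory case the product targets.
    return "strong fit: working set exceeds installed DRAM"

# Cluster shaped like the benchmark above: 12 nodes with 384GB DRAM each.
print(flash_as_ram_verdict(200, 384, 12))
print(flash_as_ram_verdict(2000, 384, 12))
print(flash_as_ram_verdict(5000, 384, 12))
```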

Users who are evaluating this new tiered memory approach can buy preconfigured supported servers from a variety of sources and Diablo says that no changes are required to the OS or applications.

Compared to many of the alternative emerging new semiconductor memory approaches - flash (as RAM) seems like it will be the mainstream safe choice for the next 2 or 3 years - because it will take that long for the newcomers to prove their reliability and even after that - we have the issue of software support for the tiering and caching.


There will of course be alternative competing implementations of "flash as RAM in the DIMM form factor". (Which is not the only form factor for this concept - but I'm trying to keep this article as short as I can.)

Companies like Xitore and Netlist have been saying they want to get into the "flash as RAM in the DIMM form factor" market for a while now.

I haven't seen details from these expected competitors, but my guess is that - unlike Diablo's product, which leverages the DRAM already in other DIMM sockets on the same motherboard - some of the later contestants in this market will take the approach of placing everything needed for transparent emulation and caching into a single DIMM.

That alternative approach might work better for smaller scale embedded systems which don't have a lot of DIMMs - but it creates difficult design constraints - because the "all in a single DIMM" approach means there will be less flexibility about RAM to flash cache ratios. The real RAM will be fixed into the design. (Unlike Diablo's solution, where the ratio of flash to DRAM DIMMs is fluid.)
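To illustrate the difference with some made-up capacities (these are not any vendor's product specs):

```python
# Sketch of why the two packaging approaches differ in cache-ratio
# flexibility. All capacities here are invented example numbers.

def dram_to_flash_ratio(dram_gb, flash_gb):
    return dram_gb / flash_gb

# "All in a single DIMM": the DRAM cache and flash share one module,
# so the ratio is frozen at design time.
single_dimm = dram_to_flash_ratio(dram_gb=8, flash_gb=128)
print(f"single-DIMM fixed ratio: {single_dimm:.3f}")

# Fluid approach: the flash DIMMs leverage whatever plain DRAM DIMMs
# share the motherboard, so adding DRAM modules shifts the ratio.
for dram_dimms in (2, 4, 8):
    ratio = dram_to_flash_ratio(dram_gb=32 * dram_dimms, flash_gb=128 * 3)
    print(f"{dram_dimms} DRAM DIMMs -> DRAM:flash ratio {ratio:.2f}")
```

The point of the sketch: in the single-DIMM design the only way to change the ratio is a new product SKU, whereas in the fluid design the user changes it by populating sockets.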

That lack of flexibility is why I predicted that the hybrid storage drive market would never succeed in the enterprise - and so far no one has been silly enough to admit to stuffing JBODs with 2.5" hybrid flash-HDDs. Instead the hybrid storage appliance market picked and chose components from a wide range of best of breed flash modules.

But I think the "all in a single DIMM" approach to implementing flash as a RAM tier will succeed as a viable market too (in addition to the Memory1 approach). Personally I think the all in one DIMM solutions will work better in small capacity memory systems but be less upwardly scalable for large capacity servers.

The product definitions will involve some very complex segmentation and application analysis.

I expect that the "flash as RAM in a DIMM form factor" market will fragment into:-
  • applications which only need a single such DIMM on each server and
  • applications which tend to use the maximum number of DIMM slots.
the interesting thing? - dataflow controllernomics

Let's pause for some perspective...

What's data? - Now hold on - that's too philosophical. Encode data a different way... to make it work better - now you're talking engineering. But that's a discussion for another time.

Right here we don't care what the data means.

It just comes and goes.

And it's surprising how far or how little it may have traveled.

From the cloud? Another storage device? Maybe it was computed just now from an earlier matching of data. Sometimes the data arrives in a rush only to sadly discover that it's not needed after all. There's a lot of data shuffling happening around the world. Most of it isn't even for you.

It's when that data (or lack of it) is the next thing which the software is going to look at - that the economics of having data in the right or wrong place suddenly becomes very serious. Because if we have to wait too long to get the data then we may need a faster processor (or more processors) to get the next thing done.

You may like to think that data lives in cables, or in storage media or flying around on electromagnetic waves. But from a memory systems perspective the time when data really comes alive is when it's in our memory locality.

We care that data comes when we need it.

And if we haven't got it in our live place (the memory) then we really care about and want to know where it lives. (And not just addresses in memory spaces - but locations in between the memory spaces - in transit.)

Even better if we can tell the data where to live. And if its comings and goings heed our calls.

(Sadly other controllers too have a say in this matter. And even when they think they're trying to be helpful their understanding is based on past customs of politeness.)

back to my conversation with Diablo

Our conversation took an interesting diversion when Kevin Wagner said something about the techniques Diablo uses in the management of its data caching.

We had discussed its DMX software in depth before and I wrote something about it last summer.

The new point which I latched onto is that Diablo has used machine learning to not only get a better understanding of the applications it commonly works with - but also to reverse engineer and understand the behavior of some of the external controllers which it encounters - in particular memory controllers.

That enables DMX to sometimes predict the best way it should request and deliver new data.

The behavior of controllers is a very important factor in the modern digital economy.

I've touched on this aspect in the past as you can see in past stories in the SSD controllers page and my article about controller and caching impacts on DRAM latency.

big datasystems controllernomics?

Analyzing how to get optimal performance from tiered memory, tiered storage etc will be at the center of future focus for much of our industry - especially in emerging fields like in-situ (SSD / memory) processing, fast elements and software.

Although latencies for raw data media, communications and interfaces have been well understood and managed in their own ways for some time, the science of how to manage large populations of different types of controllers in different localities is fragmented, with differing purposes.

Every controller company has its own IP which does the best it can with the things it connects to and can control.

What is becoming more important - when you are in the memory zone - right in close with the RAM and processor - is getting a better understanding of the connected controllers in your space. Because application performance in the data world is limited by the complex interactions of controller-controller speak (from the cloud right down into each processor DRAM cache request) to a much greater extent than ever before.

When storage was slower and memories were smaller and the software was older - all the controller designs looked good in comparison to the other devices surrounding them. Now - with faster storage, bigger memories and modern apps software - controllernomics has become the limiting factor.

So it's not how fast the intrinsic memory cells or blocks work... you never get that physical - because media controllers sit between you and noisy physics. And if you are that media controller - speed, from the software's point of view, isn't just how well you and the host interface get along together. And it's not just how fast your application's CPU works either - because other CPUs and other tasks are competing on the same data highways.

Datasystems controllernomics is like figuring out traffic patterns - some of which you can anticipate (the effects of predicted snow, or the rush hour) but most of which you just have to react to as best you can (a big truck took the wrong turn). And mixing up the two things at the same time. And BTW - each time you call it wrong - you contribute to the next controllernomics snafu.

So you might ask... hey why doesn't some software manage all this? And what about the role of operating systems?

Let's look at the OS first. If you've read any histories of computing you'll know that the dotcom era was the last grand ball era - when server CPUs, DRAM and hard drives all knew their place and were equally respected, because they had all grown a little bit faster and fatter together up to then. Chronologically that's up to about 1999-2000 - if you prefer a date. Well, up to that time - the OSes took many of their responsibilities seriously.

After that we got into the causes of the great war (I mean the modern era of SSDs). I already dealt with most of the decline, fall and abandonment of the OS (in a useful memory systems context) in my 2012 article - where are we now with SSD software? (And how did we get into this mess?).

Rather than repeat that analysis here I'll just note - to be fair to the OS companies - that their traditional systems partners didn't know what was happening either. But in any case the OS companies had other distractions - like trying to be the next search engine destination, the next phone platform, or trying to hang on while pesky open source OS startups were giving enterprise OSes away free to whoever could download them quickly enough.

Anyway that's how the critical software for SSDs got to be written by SSD companies themselves - because for a long time - no-one else was going to do it.

This brings us to the present day. And the SSD market has grown large enough to merit its own conferences, standards etc - which is how we got new form factors like M.2 and new software like NVMe. So the OS companies and the hypervisor companies are more than happy to gatecrash the SSD party. But...

And this is a big but... They have no real incentive to improve performance to the next level. And as their business models depend on remaining as hardware agnostic as possible - they have every reason to avoid tying themselves too closely to any quick changing deep piece of single sourced semiconductor trickery. And - even if that wasn't so - the enterprise OS companies have business models which depend on supporting hardware platforms which are already shipping in high volume - and not in creating new platforms.

Give them a problem like tiered memory - which can be solved with a purely software solution and yeah they'll support it eventually (or buy little software companies who can show them how to do it).

But give them a problem where the little pieces involve nanosecond hardware support in semiconductors and where the analysis comes from learning what they themselves have been doing wrong for years - and you can see why the OS companies are not where the best solutions are going to come from.

Diablo got into big datasystems controllernomics (that's my term for it - not theirs) because they spent a lot of time analyzing problems from a particular angle (in the memory close to the processors). And they discovered that even after you've understood the stacks and the apps and the architecture there's still another factor of modeling and predicting which it's worth getting to know - but only if you can do something about it.

And once you've done that - and are comfortably working in the memory and storage and controller-controller alternate universe - then just as Google found with search - you're in a better vantage point to learn more and stay ahead. And if you do - and occupy enough server boxes - then you might become the controller behavior which others in the controllernomics universe have to reverse analyze and understand.

And although this started out as a "flash as RAM" problem - the solution methodology isn't tied to flash.

Interesting times ahead.

after AFAs - what's the next box?

cloud adapted memory systems

by Zsolt Kerekes, editor - January 24, 2017
Last year I had the idea of writing an April 1st blog on the theme of cloud adapted memory systems.

The core idea was to have been a spoof press release about a rackmount memory system for enterprise users - one which connects into the fabric of their applications and is optimized to support cloud services as the next slowest external level of latency.

The product architecture was a multi-tiered memory systems box in which all the integrated memory resources could be dynamically configured to behave like RAM or SSD storage or persistent memory - depending on the vintage and preferences of the user applications software.

An underlying assumption in my spoof article was that as you move up the latency ladder and move into the slower domains beyond this box - the next level is also likely to be another memory systems box or the cloud.

From the perspective of grounded networked user systems (by which I mean user systems which do not form a native part of the public cloud infrastructure) the cloud (in all its forms - public, private or micro-tiered and local) has replaced the hard drive array and tape library as being the slowest and cheapest data storage devices which your data software might encounter. Everything else is memory.

In this scenario there's no role for user software which was written around a hard drive access model. Indeed as long term readers already know the mission of identifying and removing all such "HDD driven" (prefetch, cache, and pack it all up) embedded software activities has - for the past 10 years - been a secret SSD software weapon used by many leading companies to improve the speed and utilization of their integrated solid state storage systems.

Although, for economic reasons, users might still encounter hard drives in the cloud, or a micro cloud, or a hybrid storage appliance, nevertheless from the perspective of planning new systems for users - the key strategic device for enterprise data performance is the memory system.

raw chip memory... how much as SSD? how much as memory?

This raises the question:- what proportion of the raw semiconductor memory capacity ought to be usable as storage (SSD) or usable as memory (RAM - as in "random access memory" which operates with the software like DRAM but which could be implemented by other technologies).

Ratios of one thing to another have often been useful indicators of changing expectations in the storage market - because they are simple to grasp - even when the associated technologies are not.

Despite the attachment constraints of legacy interface types (same chip datapath, DDR-X, PCIe, SAS, IB, GbE, photonic etc) I anticipate that choosing between emulated SSD arrays and / or big RAM (the two choices which determine the "personality" of the installed memory resources in a way we can understand today) could one day - with appropriate datapaths - be as easy to adjust as the ratio of flash memory to hard drives. We saw that ratio being offered as a clever "try before you buy" customer experience and business development tool in 2014 - the flash juice strength slider mix from Tegile Systems - which they used to woo impecuniously minded, hybrid array inclined users closer towards the benefits of more expensive to buy all flash arrays.

The more I thought about it the more I realized that as an April 1st type of article this cloud adapted memory systems blog just wouldn't work. It was already too close to the kind of products we're already seeing in the market.

But as a thought provoking feature it got me thinking about some related issues. See if any of these strike a chord with you.

expectations for memory storage systems

In the past we've always expected the data capacity of memory systems (mainly DRAM) to be much smaller than the capacity of all the other attached storage in the same data processing environment. The rationale for this being the economies (dare I say cost) of access time, data density and electrical power - which were traditionally delivered by many different types of storage media (solid state, magnetic and optical), each having their own unique characteristics.

In a modern data system - even one which is entirely solid state - the arguments for tiered products are the same as they have always been, because "faster" usually means "runs hotter". But this new world of "memory systems everywhere" opens the possibility that random read access times (across a significant range of applications data) are similar even if the random write time (including verify and play it again Sam) aspects of that data cycle remain variable.

But would enterprise systems be more efficient (and run faster and at lower cost) if all the software was rewritten to assume that memory was large (and could be persistent) whereas storage (initially supported to emulate legacy applications and grow revenue for such systems) was small?

For a longer discussion of such issues see - where are we heading with memory intensive systems?

frames of relativity - where is the cloud?

Earlier in this blog when describing the relative access times of the memory systems box compared to data in the cloud I was assuming that the frame of reference was from the perspective of the user's system (which is located outside the cloud). That's why I said the cloud would replace the hard drive as the slowest virtual peripheral. Of course if you're thinking about systems architecture from the angle of designing infrastructure components in the cloud - then that "slowness" isn't generally true. And you will still be designing some boxes which support physical hard drives (until a cheaper option comes along or until you can monetize the seldom accessed data in a better way).

software's role in acceleration - worth the wait

As with the SSD market in the past, so too with the memory systems market: there will be bigger and faster adoption of new technologies when there is more software speaking the same language. Having products which interoperate with legacy software is business plan "A" and will fund some interesting business stories. But getting to the next stage of the memory systems market - where the installed base of randomly accessible memory begins to creep up to the size of the installed capacity of SSD storage - will require a lot of new software which can leverage the memory assets with fewer backward glances.

You might say we've already got software solutions which can repurpose flash into useful roles as big memory so why do we need any new hardware at all?

We've got the new memories coming anyway. Some of them will stick in easy to identify places. Others have yet to find sustainable new roles. What are those roles? I'll be dealing with those issues in my next blog here - which I've tentatively called - the survivor's guide to all semiconductor memory and the diminishing role of form factors.

See you then.

PS - Although I didn't publish my spoof article about "cloud adaptive memory" (which was to have been its original title back in early 2016) - I did spend a lot of time thinking about the consequences of those ideas. And they clearly influenced the choice of the serious articles and news coverage which I did apply myself to, as you may have seen.

To steer your way to future markets sometimes you have to consider ideas which at first seem like a ludicrous stretch from reality and follow them through for a while as if they were real before you can recognize that the truths which emerge from analyzing such notions can be useful.

Previous examples of spoof articles which were useful forerunners of reality discussed issues like why SSDs would replace HDDs (as cheap bulk storage) even if HDDs were free (in the article towards the petabyte SSD), and the complexities of signal processing in flash level discrimination (and data integrity) - which we now call adaptive DSP (here's a link to the 2008 spoof article).

For the philosophy behind this approach see my article - Boundaries Analysis in SSD Market Forecasting.

PPS - In 2014 I discussed the idea of unified storage (SAN and NAS) being the old fashioned "gentlemen's club" way in an interview with Frankie Roohparvar (who at that time was CEO of Skyera and is now Chief Strategy Officer at Xitore).

I mischievously sounded him out on my expectation of being able to add in the capability of emulating big persistent memory into the new dynasty unified solid state data box feature set. For that story see - Skyera's new skyHawk FS in archived news.

a storage architecture guide

are you ready to rethink RAM?

playing the enterprise SSD box riddle game


Storage Class Memory - one idea many different approaches; flash endurance - better scope than previously believed; NVMf, NVMe and NVDIMM variations...
what were the big SSD ideas which emerged in 2016?

Hmm... it looks like you're seriously interested in SSDs. So please bookmark this page and come back again soon.

latency matters
Latency? - the devil is in the detail.

"latency" - mentions on the mouse site
showing the way ahead
1 big market lesson
and 4 shining technology companies
Soft-Error Mitigation for PCM and STT-RAM
Editor:- February 21, 2017 - There's a vast body of knowledge about data integrity issues in nand flash memories.

The underlying problems and fixes have been one of the underpinnings of SSD controller design.

But what about newer emerging nvms such as PCM and STT-RAM?

You know that memories are real when you can read hard data about what goes wrong - because physics detests a perfect storage device.

A new paper - a Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories (pdf) - by Sparsh Mittal, Assistant Professor at Indian Institute of Technology Hyderabad - describes the nature of soft error problems in these new memory types and shows why system level architectures will be needed to make them usable. Among other things:-
  • scrubbing in MLC PCM would be required in almost every cycle to keep the error rate at an acceptable level
  • read disturbance errors are expected to become the most severe bottleneck in STT-RAM scaling and performance
He concludes:- "Given the energy inefficiency of conventional memories and the reliability issues of NVMs, it is likely that future systems will use a hybrid memory design to bring the best of NVMs and conventional memories together. For example, in an SRAM-STT-RAM hybrid cache, read-intensive blocks can be migrated to SRAM to avoid RDEs in STT-RAM, and DRAM can be used as cache to reduce write operations to PCM memory for avoiding WDEs. However, since conventional memories also have reliability issues, practical realization and adoption of these hybrid memory designs are expected to be as challenging as those of NVM-based memory designs. Overcoming these challenges will require concerted efforts from both academia and industry."
the article (pdf)
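To get an intuition for why the scrubbing interval matters so much in MLC PCM, here's a rough back-of-envelope model (a minimal Python sketch; the per-bit error probability, word size and ECC strength are illustrative assumptions of mine, not figures from the paper):

```python
from math import comb

def word_failure_prob(p_bit, n_bits, t_correct, cycles):
    """Probability that a memory word of n_bits accumulates more than
    t_correct bit errors over `cycles` cycles without being scrubbed,
    assuming independent per-bit, per-cycle soft-error probability p_bit."""
    # Per-bit probability of at least one error during the interval
    p = 1.0 - (1.0 - p_bit) ** cycles
    # P(at most t_correct of n_bits are in error) - i.e. ECC still copes
    ok = sum(comb(n_bits, k) * p**k * (1 - p)**(n_bits - k)
             for k in range(t_correct + 1))
    return 1.0 - ok

# Illustrative parameters: 512-bit words, single-error-correcting ECC
for interval in (1, 10, 100, 1000):
    print(interval, word_failure_prob(1e-6, 512, 1, interval))
```

Stretching the interval between scrubs makes the uncorrectable-error probability grow much faster than linearly, which is consistent with the survey's point that frequent (near every-cycle) scrubbing is needed to hold MLC PCM error rates down.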

See also:- Sparsh Mittal is running a one day workshop on Advanced Memory System Architecture on March 4, 2017.

Lightning, tachIOn, WarpDrive ... etc
Inanimate Power, Speed and Strength Metaphors in SSD brands

What we've got now is a new SSD market melting pot in which all performance related storage is made from memories and the dividing line between storage and memory is also more fluid than before.
where are we heading with memory intensive systems?

is data remanence in NVDIMMs a new risk factor?
maybe the risk was already there before with DRAM

Some suppliers will quote you higher DWPD even if nothing changes in the BOM.
what's the state of DWPD?

DRAM's reputation for speed is like the old story about the 15K hard drives (more of the same is not always quickest or best)
latency loving reasons for fading out DRAM

Is more always better?
The ups and downs of capacitor hold up in 2.5" flash SSDs

Why would any sane SSD company in recent years change its business plan from industrial flash controllers to HPC flash arrays?
a winter's tale of SSD market influences

In SSD land - rules are made to be broken.
7 tips to survive and thrive in enterprise SSD

There's a genuine characterization problem for the SCM industry which is:- what are the most useful metrics to judge tiered memory systems by?
is it realistic to talk about memory IOPS?

Many of the important and sometimes mysterious behavioral aspects of SSDs - which predetermine their application limitations and usable market roles - can only be understood by looking at how well the designer has managed the symmetries and asymmetries implicit in the SSD's underlying technologies.
how fast can your SSD run backwards?
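One way to make the symmetry idea concrete is to tabulate an SSD's read/write asymmetries straight from datasheet-style numbers. A minimal sketch (every figure below is a hypothetical placeholder, not taken from any real product):

```python
def asymmetries(spec):
    """Return the read-vs-write ratio for each metric pair.
    Ratios > 1 mean the drive favors reads on that metric."""
    return {
        "iops": spec["read_iops"] / spec["write_iops"],
        "throughput": spec["read_mbps"] / spec["write_mbps"],
        "latency": spec["write_latency_us"] / spec["read_latency_us"],
    }

# Hypothetical drive: numbers invented purely for illustration
hypothetical_ssd = {
    "read_iops": 400_000, "write_iops": 40_000,
    "read_mbps": 2_500, "write_mbps": 1_200,
    "read_latency_us": 90, "write_latency_us": 25,
}

for metric, ratio in asymmetries(hypothetical_ssd).items():
    if ratio > 1:
        print(f"{metric}: {ratio:.1f}x read-favored")
    else:
        print(f"{metric}: {1/ratio:.1f}x write-favored")
```

Even this toy table shows why a single headline number misleads: the same drive can be 10x read-favored on IOPS yet write-favored on first-access latency (thanks to write buffering), and which asymmetry dominates depends entirely on the workload.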

The enterprise SSD story...

why's the plot so complicated?

and was there ever a missed opportunity in the past to simplify it?
the elusive golden age of enterprise SSDs

How committed (really) are these companies
to the military SSD business?
a not so simple list of military SSD companies

Can you trust market reports and the handed down wisdom from analysts, bloggers and so-called industry experts?
heck no! - here's why

Why do SSD revenue forecasts by enterprise vendors so often fail to anticipate crashes in demand from their existing customers?
meet Ken and the enterprise SSD software event horizon

the past (and future) of HDD vs SSD sophistry
How will the hard drive market fare...
in a solid state storage world?


Compared to EMC...
ours is better

can you take these AFA companies seriously?

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing

Reliability is an important factor in many applications which use SSDs. But can you trust an SSD brand just because it claims to be reliable in its ads?
the cultivation and nurturing of "reliability"
in a 2.5" embedded SSD brand

A couple of years ago - if you were a big company wanting to get into the SSD market by an acquisition or strategic investment then a budget somewhere between $500 million and $1 billion would have seemed like plenty.
VCs in SSDs and storage

Adaptive dynamic refresh to improve ECC and power consumption, tiered memory latencies and some other ideas.
Are you ready to rethink RAM?

90% of the enterprise SSD companies which you know have no good reasons to survive.
market consolidation - why? how? when?

With hundreds of patents already pending in this topic there's a high probability that the SSD vendor won't give you the details. It's enough to get the general idea.
Adaptive flash R/W and DSP ECC IP in SSDs

SSD Market - Easy Entry Route #1 - Buy a Company which Already Makes SSDs. (And here's a list of who bought whom.)
3 Easy Ways to Enter the SSD Market

"You'd think... someone should know all the answers by now. "
what do enterprise SSD users want?

We can't afford NOT to be in the SSD market...
Hostage to the fortunes of SSD

Why buy SSDs?
6 user value propositions for buying SSDs

"Play it again Sam - as time goes by..."
the Problem with Write IOPS - in flash SSDs

Why can't SSD's true believers agree on a single shared vision for the future of solid state storage?
the SSD Heresies

There's one kind of market research report which you won't find listed on the website of any storage market report vendor - and that's a directory of all the other market research companies they compete with! Here's my list - compiled from over 20 years of past news stories - which includes all categories of market research companies...
who's who in storage market research?

If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations

The memory chip count ceiling around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
size matters in SSD controller architecture

Are you whiteboarding alternative server based SSD / SCM / SDS architectures? It's messy keeping track of those different options, isn't it? Take a look at an easy to remember hex based shorthand which can aptly describe any SSD accelerated server blade.
what's in a number? - SSDserver rank

A popular fad in selling flash SSDs is life assurance and health care claims as in - my flash SSD controller care scheme is 100x better (than all the rest).
razzle dazzling flash SSD cell care

These are the "Editor Proven" cheerleaders and editorial meetings fixers of the storage and SSD industry.
who's who in SSD and storage PR?