leading the way to the new storage frontier .....
SSD news since 1998
SSD news ..
industrial SSDs ..
the fastest SSDs ..
military storage directory and news
military SSDs ..
storage history
SSD history ..
Flash Memory
nvms in SSDs ..
disk writes per day in enterprise SSDs
DWPD ....
more SSD articles ...
.. is about thought leadership in the SSD market and was the first publication to recognize and promote the tremendous disruptive growth potential of SSDs and the memoryfication of computing architecture. Since the 1990s our readers have been accelerating the growth of this industry and setting its direction and agenda.

Later this month I'll be writing a personal retrospective of how much has changed in the storage market in the 20 years of the Megabyte the Mouse editorial era and I'll offer some predictions about where the next advances will be aimed. (Although I think many of you can guess some of that already.)

About the publisher (founded 1991)
Advertising on
SSD controllers
rackmount SSDs
DWPD - examples
history of SSD market
SAS SSDs - market timeline
sugaring flash for the enterprise
recently in the SSD news archives
April 2018 A research study of Google consumer workloads showed that in-memory processing could simultaneously halve power consumption and execution time.
March 2018 Nallatech entered the in-situ SSD market.
February 2018 Gen-Z specification 1.0 released for futuristic memory fabric designers.
January 2018 Foremay launched its new "Immortal" brand of radiation hardened military SSDs.
December 2017 ChinaDaily reported that China's NDRC was looking at complaints about high prices in the semiconductor memory market to determine if there was evidence to open an antitrust inquiry.
November 2017 IntelliProp demonstrated a memory controller for the emerging Gen-Z memory fabric.
October 2017 Quarch Technology launched a test suite which measures real-time SSD watts.
September 2017 Toshiba announced the winner of the $18 billion beauty pageant to find a suitable buyer for its memory and SSD business.
August 2017 Western Digital agreed to acquire Tegile which had pioneered innovative "utility" based customer pricing models in the hybrid storage array market.
July 2017 Viking shipped 50TB planar MLC 3.5" SAS SSDs based on a controller platform designed by rackmount SSD maker Nimbus.
June 2017 Toshiba began sampling the world's first 64 layer QLC (x4) nand flash memory. The 768Gb chips were the highest density nvms available.
news archive 2000 to 2018

1.0" SSDs
1.8" SSDs
2.5" SSDs
3.5" SSDs

1973 - 2017 - the SSD story

2013 - SSD market changes
2014 - SSD market changes
2015 - SSD market changes
2016 - SSD market changes

20K RPM HDDs - no-show

About the publisher - 1991 to 2017
Adaptive R/W flash IP + DSP ECC
Acquired SSD companies
Acquired storage companies
Advertising on
Analysts - SSD market
Analysts - storage market
Animal Brands in the storage market
Architecture - network storage
Articles - SSD
Auto tiering SSDs

Bad block management in flash SSDs
Benchmarks - SSD - can you trust them?
Big market picture of SSDs
Bookmarks from SSD leaders
Branding Strategies in the SSD market

Chips - storage interface
Chips - SSD on a chip & DOMs
Click rates - SSD banner ads
Cloud with SSDs inside
Consolidation trends in the enterprise flash market
Consumer SSDs
Controller chips for SSDs
Cost of SSDs

Data recovery for flash SSDs?
DIMM wars in the SSD market
Disk sanitizers
DRAM (lots of stories)
DRAM remembers
DWPD - examples from the market

Efficiency - comparing SSD designs
Encryption - impacts in notebook SSDs
Endurance - in flash SSDs
enterprise flash SSDs history
enterprise flash array market - segmentation
enterprise SSD story - plot complications
EOL SSDs - issues for buyers

FITs (failures in time) & SSDs
Fast purge / erase SSDs
Fastest SSDs
Flash Memory

Garbage Collection and other SSD jargon

Hard drives
High availability enterprise SSDs
History of data storage
History of disk to disk backup
History of the SPARC systems market
History of SSD market
Hold up capacitors in military SSDs
hybrid DIMMs
hybrid drives
hybrid storage arrays

Iceberg syndrome - SSD capacity you don't see
Imprinting the brain of the SSD
Industrial SSDs
Industry trade associations (ORGs)
IOPS in flash SSDs

Jargon - flash SSD

Legacy vs New Dynasty - enterprise SSDs
Limericks about flash endurance

M.2 SSDs
Market research (all storage)
Marketing Views
Memory Channel SSDs
Mice and storage
Military storage

Notebook SSDs - timeline
Petabyte SSD roadmap
Power loss - sudden in SSDs
Power, Speed and Strength in SSD brands
PR agencies - storage and SSD
Processors in SSD controllers

Rackmount SSDs
RAID systems (incl RAIC RAISE etc)
RAM cache ratios in flash SSDs
RAM memory chips
RAM SSDs versus Flash SSDs
Reliability - SSD / storage
RPM and hard drive spin speeds

SCSI SSDs - legacy parallel
Symmetry in SSD design

Tape libraries

Test Equipment
Top 20 SSD companies
Tuning SANs with SSDs

USB storage
User Value Propositions for SSDs

VCs in SSDs
VCs in storage - 2000 to 2012
Videos - about SSDs

Zsolt Kerekes - (editor linkedin)

animal brands in SSD
The SSD market isn't scared of mice.

But mice aren't the only animals you can find in SSD brands.

There are many other examples of animal brands in SSD as you can see in this collected article.

And before the SSD market became the most important factor in the storage market there were also many animals to be found in other types of storage too.
.. is published by ACSL founded in 1991.

© 1992 to 2018 all rights reserved.

Editor's note:- I currently talk to more than 600 makers of SSDs and another 100 or so companies which are closely enmeshed around the SSD ecosphere.

Most of these SSD companies (but by no means all) are profiled here on the mouse site.

I still learn about new SSD companies every week, including many in stealth mode. If you're interested in the growing big picture of the SSD market canvas - StorageSearch will help you along the way.

Many SSD company CEOs read our site too - and say they value our thought leading SSD content - even when we say something that's not always comfortable to hear. I hope you'll find it useful too.

Privacy policies.

We never compile email lists from this web site, not for our own use nor anyone else's, and we never ask you to log in to read any of our own content on this web site. We don't do pop-ups, pop-unders or blocker ads and we don't place cookies on your computer. We've been publishing on the web since 1996 and these have always been the principles we adhere to.

related links:- COMPUTEX Taipei, industrial SSDs, custom SSDs

are we ready for infinitely faster RAM?

(and what would it be worth)

by Zsolt Kerekes, editor - May 14, 2018
If someone could offer you a memory system which had the same storage density (bits per chip / module / box) as mainstream RAM - but which had latency and bandwidth (as measured by what the application sees) which was infinitely faster - could we use it? - how much would that be worth? and how would it change markets? For the past 25 years the computer market has voted with its spending for bigger rather than faster memory - but is the market now receptive to a disruptive change in its ideas about the user value proposition of memory performance?

what's infinitely faster RAM?

I know that some of you who are reading this (and maybe it's You) are the kind of people who found companies or fund them (thanks for staying with me on this) and when you noticed the words "infinitely faster" in my title above you wondered if it was some kind of late April Fool article. (No - I wrote something else.)

Infinitely? Really? - I know you can't put the value of infinity into a business plan (although it does come in useful sometimes for testing boundary assumptions about how markets will react to disruptive change). So let me explain my use of the term "infinitely faster RAM" in this article to mean "RAM that's maybe 20x or 100x faster than what you can get today - as measured by critical bottlenecks in applications." For my purposes here I'm saying that latency is the most critical fastness factor - and while acknowledging that there aren't generally accepted methods of defining what "faster memory" means - I think it's good enough for my argument below to assume that if the black box behaves consistently as if the memory was X-times faster (or X-times sooner) than before - then that's a good enough understanding.

This also assumes we're on the same page - when it comes to agreement on - what is RAM? - which is a shifting subject I have written about before. For my purposes - if it behaves like RAM - and can transparently replace conventional RAM (chips, modules or boxes or markets) then that's good enough for now - without worrying about implementation details. I'm not going to speculate on the technology of the infinitely faster RAM - I'll leave that problem for someone else - (maybe You). In this article I'm posing the same kind of philosophical and business what-ifs which I did in earlier phases of the SSD and memoryfication markets - which asked - if we could get this new stuff - what new products and markets would we get? - and how would that change pre-existing markets?

No! to the infinitely fast one transistor memory cell. I'd like to make it clear I'm not interested here in the idea of so-called "ultra fast" transistors, memory cells and that ilk of research. As far as I'm concerned if you can't put many megabytes and preferably gigabytes of raw capacity into the infinitely faster RAM (at chip / module level) then it's not the kind of animal I'm talking about here.

some lessons from history - applications create markets and define acceptable latency

Up to the early 2000s the value propositions for different implementations of semiconductor RAM were graded by latency and power - and the order of precedence (DRAM, SRAM, SoC memory on chip - from slowest to fastest) hadn't changed since the dawning of their mutual market coexistence in the 1970s.

If you wanted bigger capacity - you chose DRAM. If you wanted faster latency at a board level of integration you chose SRAM - which ran hotter and was smaller in capacity. If you wanted faster than that - there was no contest. It had to be SoC (usually in the form of RAM on a true ASIC or gate array but also latterly on FPGA).

At a board level - and system level - DRAM and SRAM reached their latency limits in about 1999 and haven't got any faster since.

It didn't matter so much in the early 2000s because enterprise processors weren't getting any faster either. And the shape of applications (users doing simple stuff on the internet) meant that datacenters could get by with affordable technologies which offered higher densities and lower power (more users satisfied per box or watt or dollar) rather than users getting speed they didn't need and couldn't use. The computer industry didn't need faster memory. And when demand for more applications performance did grow - particularly in the early days of the cell phone market (and social networks) - the enterprise SSD market took up the slack adequately - as there had been plenty of latency bottlenecks built around earlier generations of (rotating) storage.

Nowadays cell phones are coinage, spies and slot machines. And they've been joined by IoT. There's so much intelligence which can be gathered about the meaning of it all. But no memory or computing platforms fast enough to resolve everything which can be imagined by the next master plan in a timely fashion.

memory world war 1 - flash versus DRAM - in enterprise storage

I guess the first time there was a serious challenge to the role of enterprise DRAM from another memory type in the acceleration space was in the early years of enterprise flash adoption (from around 2004). Which was fought out and soon won by flash arrays supplanting RAM SSDs.

If you'd asked most SSD people even as late as 2007 whether they really expected DRAM to be replaced by flash as the mainstream enterprise SSD based acceleration technology there were arguments which could be made for either. But (as we now know) by 2012 the RAM SSD market was effectively extinct. The principal reasons that a slower latency memory (flash) could and did replace a faster latency memory (DRAM) in an acceleration role were:-
  • Typical user installations needed more memory capacity than could be integrated by DRAM in a single box. The latency fabric from interfaces wrapped around these SSD assets negated the latency advantage of DRAM chips compared to flash chips. (The flash chips had much higher storage capacity per chip and required much less electrical power.)
  • Most of the easy acceleration advantages of enterprise SSDs came from read requests rather than writes. That's just the way that the legacy installed software base worked. That bias in the profile of memory R/W meant that the asymmetric R/W latency of flash chips - with reads being orders of magnitude faster than writes - was not a serious obstacle to adoption.
Those acceleration lessons - initially duelled out in Fibre Channel SAN rackmount boxes - were won by the time the PCIe SSD market got started. It showed that a faster memory could lose out to a slower memory in an acceleration focused role.
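The fabric argument in the first bullet can be made concrete with a toy latency model. This is a sketch with assumed round numbers (the fabric and chip latencies below are illustrative, not measurements): once the application reaches memory through a SAN-era interconnect, the chip-level gap between DRAM and flash mostly vanishes.

```python
# Toy model of the latency an application actually sees when memory sits
# behind a storage fabric. All numbers below are illustrative assumptions.

def effective_latency_us(chip_latency_us: float, fabric_overhead_us: float) -> float:
    """Application-visible latency = raw chip latency + fabric round trip."""
    return chip_latency_us + fabric_overhead_us

FABRIC_US = 200.0      # assumed Fibre Channel SAN round-trip overhead
DRAM_CHIP_US = 0.1     # assumed ~100 ns raw DRAM access
FLASH_READ_US = 50.0   # assumed ~50 us NAND page read

dram_seen = effective_latency_us(DRAM_CHIP_US, FABRIC_US)
flash_seen = effective_latency_us(FLASH_READ_US, FABRIC_US)

# Flash is 500x slower than DRAM at the chip level, but through the fabric
# the application sees only a ~1.25x difference.
print(f"DRAM box:  {dram_seen:.1f} us")
print(f"flash box: {flash_seen:.1f} us")
print(f"apparent ratio: {flash_seen / dram_seen:.2f}x")
```

With the latency gap neutralized by the fabric, flash's higher capacity per chip and lower power decided the contest.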

But that was storage... What does that tell us - if anything - about different speeds of memory used as memory?

The early experience (2014 to 2017) of tiered memory from the DIMM wars market - in which flash can be tiered with DRAM (using form factors as diverse as DIMMs, PCIe modules and even SATA arrays) - is that in big data applications there can be trade offs whereby a bigger memory in the box can offset the native speed of raw memory (even when the fastest memory is DRAM) for exactly the same reasons which pertained in flash storage accelerators. And the benefits of doing so have been mostly related to improvements in cost rather than any risk free overwhelming advantages in application latency. That's because the low hanging fruit of tiered flash speedup was mostly already harvested by the bottlenecks uncovered and bypassed by server based PCIe SSD adoption.

Here's one last look backwards at the lessons from history - before going on to speculate about the value of infinitely faster memory in the future.

I think that if you could go back in time and take with you a warehouse of today's fastest and highest capacity DRAM chips - along with plug compatible adapters to retrofit them into past server and storage systems - then you wouldn't change the world of applications because most performance constrained servers in the past - already had the maximum amount of DRAM installed. And if they didn't - then there were so many bottlenecks built into interfaces and software that - even with an ideally configured modern memory array taken back in time and fitted as an upgrade - you would at best typically get a 2x or 3x speedup or - very often - get no speed up at all. That's why the early adoption of enterprise SSD accelerators was slow and problematic. There were too many problems baked into the ecosystem for any one new product to make enough of a change.
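That "at best a 2x or 3x speedup" ceiling is just Amdahl's law applied to memory: if only part of the runtime is actually spent waiting on memory, even infinitely faster RAM leaves the rest untouched. A minimal sketch (the fractions are assumed for illustration):

```python
# Amdahl's law for memory upgrades: overall speedup is capped by the share
# of runtime that the faster memory actually touches.

def speedup(memory_fraction: float, memory_speedup: float) -> float:
    """Overall speedup when only `memory_fraction` of runtime is memory-bound
    and that part runs `memory_speedup` times faster."""
    return 1.0 / ((1.0 - memory_fraction) + memory_fraction / memory_speedup)

# A server spending 60% of its time stalled on memory (assumed figure):
print(speedup(0.6, 100))           # ~2.46x - even with 100x faster RAM
print(speedup(0.6, float("inf")))  # 2.5x - the "infinitely faster" ceiling
# Faster memory only pays off once the other bottlenecks are removed:
print(speedup(0.95, 100))          # ~16.8x
```

Which is the same reason the early SSD accelerators disappointed: the rest of the ecosystem capped the memory-bound fraction.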

today's world of enterprise memoryfication and memory defined software

Today's computing market - where SSDs are everywhere and storage latency bands have a precise value and can be controlled - is better placed to consider and use faster memory systems. It's the only direction of travel to enable faster software.

The business needs are well understood. It's easy nowadays to see a direct link between faster decisions (based on diverse internet signals picked up and analyzed by real-time AI) and measurable economic outcomes. Also new business models and markets are being created by the application of heavy duty machine learning.

The software industry has had nearly a decade to become accustomed to thinking about having meaningful choices in latency - which are determined by different types and tiers of semiconductor devices. Most of the benefits of SSDs came when enough software changed to fit in with an SSD world. Those changes were hard to make because of decades of architectural lethargy in the leadup to the modern SSD era. The next stage of software change - towards more memoryfication - has already been underway - due to momentum and despite the lack of revolutionary memory technologies to take hardware to the next level.

So if you could offer a GB scale memory chip with 20x or 100x lower latency than existing mainstream DRAM - there are companies who could do useful things with it.

simplistic valuation of much faster memory accelerator systems

The most obvious market sizing example for infinitely faster memory accelerator systems is their application in dealing with temporary data.

A simplistic way of looking at this is that overwhelmingly most of the data which enters processor space is temporary data. So if you have a memory accelerator which is 100x faster than the original memory which you used before to solve this type of problem - then provided that you don't run out of new data to feed into the machine and provided that the usable memory size for any single instance of the computation can fit into the memory space provided - then you need approximately 100x fewer machines to provide the same services.

The equivalence of speed and machines (one machine which is 100x faster being equivalent in capability to 100 slower machines) is similar to the SSD-CPU equivalency model which helped to cost justify SSD accelerators in the early 2000s as server replacements rather than expensive $/TB storage.
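Under those provisos (enough incoming data, problem fits in the accelerator's memory) the simplistic valuation reduces to counting displaced machines. A hypothetical sketch with invented prices:

```python
# Simplistic machine-equivalence valuation: one accelerator that is N times
# faster displaces N conventional servers. The prices are invented for
# illustration only.

def displaced_server_value(n_times_faster: float, server_cost_usd: float) -> float:
    """Upper bound on what the accelerator is worth as a server replacement."""
    return n_times_faster * server_cost_usd

# A 100x memory accelerator measured against $10k commodity servers:
ceiling = displaced_server_value(100, 10_000)
print(f"price ceiling per accelerator: ${ceiling:,.0f}")
```

This counts only displaced hardware - solving problems sooner, or solving previously intractable problems, creates value the machine count doesn't capture.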

However, the memory accelerator has a greater utility than this comparison might suggest on its own because the ability to solve problems sooner, and the ability to solve more complex problems for the first time within a shorter real-time period create new value propositions by making new algorithmic engines viable for new markets and applications.

This ability to create new markets is like the dynamic energy seen in the computer market in the 1980s and 1990s which began with microprocessors making computation cheaper but ended (around 1999) when limits were reached in how fast the GHz clock rate of any particular core could run. And the collective decision of server companies that faster was not necessary is partly to blame for the dumbing down of processor design architecture for nearly 20 years thereafter.

The good thing which emerged from that lack of investment in making commercial processors faster was that it created the fertile soil for the SSD acceleration market - which became the only game in town.

And now that there is a greater understanding of memory (and the interplay of roles and values between raw memory capacity and raw latency) and helped by an SSD rich ecosystem in which larger portions of any computing problem can be economically gathered into systems where the random access time can be arranged to be a handful of microseconds rather than tens of milliseconds - the creative juices of computing architecture have been turning to the much needed creation of new memoryfication compatible computing engines and new processor architectures.

That's what has been giving rise to the proliferation in recent years of commercial in-situ SSDs, processing in / near memory FPGA arrays and dedicated memory accelerators for machine learning and similar neural algorithms.

The early implementations which you can read about in the SSD news archives demonstrate 2 things.
  • The value of an Nx faster memory-compute accelerator can indeed be measured by at least the cost of all the previous traditional hardware which was needed to solve similar problems before. So 1 new PU (TPU etc) is indeed worth Nx conventional server CPUs / GPUs when there is sufficient work to be done within a particular problem shape universe. Their value is application specific. (Some workloads are accelerated better than others.)
  • Modern memory accelerators don't have to resemble either dumb memory or dumb processors. Provided that they can interoperate with conventional servers and infrastructure the new memory accelerators are best viewed as black boxes whose internal details may change and adapt (just like search engine algorithms) when more data suggests future areas of improvement.
This will create an existential problem for makers of memory testers - because the future of high performance memory systems (where the money used to be) will become increasingly proprietary.

And as you can realize yourself this will create a cut-off point for manufacturers of high end server memory. Because high end memory systems will inevitably look more like a custom processor market.

And as for traditional processor makers - the new memory accelerator systems don't care about and don't need their "instruction set" based backwards compatibilities and roadmaps - because the new ML / NN engine roadmaps (if they need any compatibility at all) will be "application" and "algorithm" based... More like the kind of compatibilities you witness in successive generations of cloud APIs.

An interesting question is how many different types of memory accelerators the world needs and can support?

One view might be that the world doesn't need more than a handful (one for Google, Amazon, Apple, Baidu etc) because if the biggest benefit and visibility into design optimization only occurs at massive scale then those companies will each drive their own designs.

On the other hand if this is indeed the start of a new renaissance in computing architecture - then you could argue that there will be the usual explosion of startups hoping to serve new markets created by the new ideas in architecture. (And there may be benefits of such new ideas which occur without being colocated in the cloud.)


Going back to my questions in the title...

Are we ready for infinitely faster RAM?

I think I've made a case for the answer being - Yes. More ready than we've ever been before.

And as to - what would it be worth?

My advice to founders of startups in infinitely faster memory accelerators is - don't sprinkle the number "infinity" about too much in your spreadsheets when guessing the market size or attached to the price which you think ideal customers would be prepared to pay. There are plenty of big numbers you can choose which are smaller and will still sound impressive without straining credulity.

introducing Memory Defined Software

yes seriously - these words are in the right order.

A new market for software which is strongly typed to
new physical memory platforms and nvm-inside processors
while unbound from the tyranny of memory virtualizable by storage.

by Zsolt Kerekes, editor - February 14, 2018
The existence of a market which provides independent software support for solid state storage, SSDs and tiered memory for enterprise use has a relatively short history (of only about 7 to 10 years) compared to SSDs themselves. A tremendous amount has been accomplished in that time (as you can see in the SSD news archives) as the computing industry transitioned from initially shoe-horning SSDs into storage software models which had originally been written for hard drives, then optimizing system software related code to detect and bypass hardcoded rotating drive delay assumption workarounds which had been buried in every type of application software and then finally creating a new foundation of software primitives (NVMe) which began with the assumption that storage could be solid state.

So far, so good - and there have been some very talented companies which have revisited storage software assumptions from inside the drive, outside in the array, in the interfaces, in the associated stacks below, above, around and from every angle so that today you can realistically expect to get operational characteristics from solid state storage assets which are considerably better than whatever came before. And although there is still work to be done the storage industry can congratulate itself for collectively having done a good job despite at the start thrashing around in a shambolic state of disarray because the usual suspects didn't see the SSD avalanche coming down the hill.

And it's because of the ubiquity of solid state storage assets in the enterprise and the promise shown by early generations of memory fabrics that the next phase of revolution in software is now underway - which is how we get to memory defined software.

Let's backtrack briefly to hardware - because hardware always comes first. By way of an apology in advance - you can write all the software you like which pretends that you're running a new computer business game on a new computing platform - but you only start getting the benefits and the thrills by doing it on the new hardware. Just over a year ago on this home page I wrote a blog - after AFAs - what's the next box? (cloud adapted memory systems) - which hinted at the kind of brew we should expect to see after the earlier pioneering percolators of the NVDIMM wars had settled their territory disputes with alternative memories, tiered memory's place relative to tiered storage and PCIe's transition from unruly invader to settler in the territory of big memory fabrics which had up to not long before been dominated by fast versions of very old interfaces (IB and GbE). (And just to warn you that like previous land grabs - PCIe's position as a convenient gateway into big memory spaces - is no more sacrosanct than what came before - as Gen-Z may be a faster way to do things in future - although we have the lessons of Infiniband versus Ethernet to show that sometimes the new does not improve fast enough to displace what came before.)

You've had plenty of warning that something is coming.

What do I mean by Memory Defined Software?

Simply this... Software which has been deliberately written to take advantage of the computational realities of memory with special characteristics in order to get behavior which was not possible before. The special characteristics may take many forms:-
  • nvm inside processors to enable instant reboot or context switches.
  • trusted persistent memory which is used as an application dependent fast look-up or code translation / computational acceleration / interpretive resource
  • memory which is bigger than traditional storage capacities - and which does not break when you hit it with zillions of memory intensive operations which require sub microsecond random read/modify/write/and move latencies.
  • memory with embedded in-memory processing capability (achieved by FPGA or ASIC).
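To make the second bullet more tangible, here's a minimal sketch of a trusted persistent look-up resource, using a memory-mapped file as a stand-in for real persistent memory (the file name and the mmap mechanism are illustrative assumptions - true NVM would keep the plain load/store semantics with no file or storage stack in the path):

```python
# Sketch: a persistent look-up table accessed with plain load/store
# semantics. mmap over a file stands in for real persistent memory
# (assumption); on true NVM the flush would be a cache-line flush.
import mmap, os, struct

PATH = "lookup.nvm"   # hypothetical persistent-memory region
SLOTS = 1024          # fixed table of 64-bit values

def open_table():
    new = not os.path.exists(PATH)
    f = open(PATH, "w+b" if new else "r+b")
    if new:
        f.write(b"\x00" * SLOTS * 8)   # zero-fill the region once
        f.flush()
    return f, mmap.mmap(f.fileno(), SLOTS * 8)

def put(m, slot, value):
    # a store - no serialization, no storage stack in the path
    struct.pack_into("<Q", m, slot * 8, value)

def get(m, slot):
    return struct.unpack_from("<Q", m, slot * 8)[0]

f, table = open_table()
put(table, 42, 12345)
table.flush()   # the persistence point
# A "reboot" just reopens the region and finds its data already warm:
f2, table2 = open_table()
print(get(table2, 42))   # 12345
```

The point of the sketch is the programming model: the look-up survives restarts without ever passing through a storage API.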
At first some of the new memory defined software which is designed to run on new memory systems may resemble the functional characteristics of software which was developed to run on tiered memory systems which include SSDs and flash as RAM virtualization. But just as storage software evolved so that code written for flash environments could no longer run with acceptable performance on HDD arrays - so too the split between true memory defined software and software written for solid state storage installations will become quickly apparent. I think sooner rather than later - as the stimulus driving new memory code is coming from newer faster moving users who are more nimble in their adoption of new platforms which solve data dependent problems and who don't carry the same decades long baggage and requirement for backwards compatibility.

This is the first time you've seen me use the term "Memory Defined Software" in an article and it may feel not quite right at first as it juggles in your brain space for a relationship between SDS (Software Defined Storage) and the very similar sounding (but entirely different) Software Defined Memory. But people have been experimenting with software and architectures which are based around the Memory Defined Software concept for a while to solve embedded problems. In the next stage of the memoryfication of the enterprise I think it will become clearer that this is a new big market opportunity so I thought it's time to recognize it for what it is and give it its true name.

permalink for the above article

optimizing CPUs in the Post Modernist Era of Memory Systems

In-Datacenter Performance Analysis of a 92 TOPS Tensor Processing Unit ASIC (pdf) - a paper by Developers at Google (June 26, 2017)

Hmm... it looks like you're seriously interested in SSDs. So please bookmark this page and come back again soon.

no more waits for Toshiba sale
SSD news
The convenience of DWPD as a way of selecting SSDs for application roles meant it quickly gained widespread adoption in enterprise, cloud and embedded markets but DWPD has limitations too.
what's the state of DWPD?
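For readers new to the metric: DWPD is just a drive's rated endurance (TBW) spread evenly over its warranty period. A minimal helper (the sample figures are illustrative):

```python
# DWPD (drive writes per day) = rated endurance spread over the warranty.

def dwpd(tbw_terabytes: float, capacity_tb: float, warranty_years: float) -> float:
    """Drive writes per day implied by a TBW endurance rating."""
    return tbw_terabytes / (capacity_tb * 365 * warranty_years)

# e.g. a 1 TB drive rated at 1825 TBW over a 5 year warranty:
print(dwpd(1825, 1.0, 5))   # 1.0 drive write per day
```

A single averaged number like this says nothing about write amplification or bursty workloads - which is where DWPD's limitations bite.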
There's a genuine problem for the SCM (storage class memory) industry. How to describe performance.
is it realistic to talk about memory IOPS?
controllernomics - is that even a real word?
The semiconductor memory business has toggled between under supply and over supply since the 1970s.
an SSD view of past, present and future boom bust cycles in the memory market
As you may have guessed I talk to a lot of companies which design SSDs and SSD controllers.

I also talk to people who design processors.
optimizing CPUs in the Post Modernist Era
Is more always better?
The ups and downs of capacitor hold up in 2.5" MIL flash SSDs
Some of the winners and losers from the memory shortages in 2017 were easy to spot. But there have been new opportunities created too.
miscellaneous consequences of the 2017 memory shortages


Despite many revolutionary changes in memory systems design and SSD adoption in the past decade we are still not at the stage where it's possible to predict and plot the next decade as merely an incremental set of refinements of what we've got now.
Are we there yet? - 2017 and 40 years of SSDs

For me - the SSD companies which made me sit up and take notice because of the promise of better prospects for the SSD market implied by something new they did in 2016 were these...
4 shining companies which made me stop and think

Data recovery from DRAM?
I thought everyone knew that

I said to a leading NVDIMM company... This may be a stupid question but... have you thought of supporting a RAMdisk emulation in your new "flash tiered as RAM" solution?
what could we learn?

the dividing line between storage and memory is more fluid than ever before
where are we heading with memory intensive systems?

Enterprise DRAM has the same latency now (or worse) than in 2000. The CPU-DRAM-HDD oligopoly optimized DRAM for a different set of assumptions than we have today in the post modern SSD era.
latency loving reasons for fading out DRAM


In some ways the SSD market is like that lakeside village. It's not so long ago that no one even knew where it was.
Can you tell me the best way to get to SSD Street?

Self awareness in the SSD and memoryfication market fabric has spun new tunneling effects in business strategies which now have the potential to instantly hop across segments with infinite improbability.
one big market lesson in SSD year 2016


Many of the important and sometimes mysterious behavioral aspects of SSDs which predetermine their application limitations and usable market roles can only be understood when you look at how well the designer has dealt with managing the symmetries and asymmetries which are implicit in the underlying technologies which are contained within the SSD.
how fast can your SSD run backwards?

The enterprise SSD story...

why's the plot so complicated?

and was there ever a missed opportunity in the past to simplify it?
the elusive golden age of enterprise SSDs

Nowadays you can't expect to understand the worldwide SSD market and realistically predict the likely source and direction of strong influences without having some cognizance of the SSD market in China.
who's who in the SSD market in China?

Why do SSD revenue forecasts by enterprise vendors so often fail to anticipate crashes in demand from their existing customers?
meet Ken and the enterprise SSD software event horizon

the past (and future) of HDD vs SSD sophistry
How will the hard drive market fare...
in a solid state storage world?

Compared to EMC...

ours is better
can you take these AFA startups seriously?

Now we're seeing new trends in pricing flash arrays which don't even pretend that you can analyze and predict the benefits using technical models.
Exiting the Astrological Age of Enterprise SSD Pricing

Reliability is an important factor in many applications which use SSDs. But can you trust an SSD brand just because it claims to be reliable in its ads?
the cultivation and nurturing of "reliability"
in a 2.5" embedded SSD brand

A couple of years ago - if you were a big company wanting to get into the SSD market by an acquisition or strategic investment then a budget somewhere between $500 million and $1 billion would have seemed like plenty.
VCs in SSDs and storage

Adaptive dynamic refresh to improve ECC and power consumption, tiered memory latencies and some other ideas.
Are you ready to rethink RAM?

90% of the enterprise SSD companies which you know have no good reasons to survive.
market consolidation - why? how? when?

With hundreds of patents already pending in this topic there's a high probability that the SSD vendor won't give you the details. It's enough to get the general idea.
Adaptive flash R/W and DSP ECC IP in SSDs

SSD Market - Easy Entry Route #1 - Buy a Company which Already Makes SSDs. (And here's a list of who bought whom.)
3 Easy Ways to Enter the SSD Market

"You'd think... someone should know all the answers by now."
what do enterprise SSD users want?

We can't afford NOT to be in the SSD market...
Hostage to the fortunes of SSD

Why buy SSDs?
6 user value propositions for buying SSDs

"Play it again Sam - as time goes by..."
the Problem with Write IOPS - in flash SSDs

Why can't SSD's true believers agree upon a single coherent vision for the future of solid state storage? (They never did.)
the SSD Heresies.

The predictability and calm, careful approach to new technology adoption in industrial SSDs was for a long time regarded as a virtue compared to other brash markets.
say farewell to reassuringly boring industrial SSDs

If you spend a lot of your time analyzing the performance characteristics and limitations of flash SSDs - this article will help you to easily predict the characteristics of any new SSDs you encounter - by leveraging the knowledge you already have.
flash SSD performance characteristics and limitations

The memory chip count ceiling around which the SSD controller IP is optimized - predetermines the efficiency of achieving system-wide goals like cost, performance and reliability.
size matters in SSD controller architecture

A popular fad in selling flash SSDs is life assurance and health care claims as in - my flash SSD controller care scheme is 100x better (than all the rest).
razzle dazzling flash SSD cell care

These are the "Editor Proven" cheerleaders and editorial meetings fixers of the storage and SSD industry.
who's who in SSD and storage PR?