Memory Defined Software - yes seriously - these words are in the right order
One day in years to come this could be the new data
editor - February 14, 2018
|The existence of a market which provides independent software support for solid state storage, SSDs and tiered memory for enterprise use has a relatively short history (of only about 7 to 10 years) compared to SSDs themselves. A tremendous amount has been accomplished in that time (as you can see in the archives) as the computing industry transitioned from initially shoe-horning SSDs into storage software models which had originally been written for hard drives, then optimizing system software related code to detect and bypass hardcoded rotating drive delay assumption workarounds which had been buried in every type of application software, and then finally creating a new foundation of software primitives (NVMe) which began with the assumption that storage could be solid state.|
So far, so good - and there have been
some very talented companies which have revisited storage software assumptions
from inside the drive, outside in the array, in the interfaces, in the
associated stacks below, above, around and from every angle so that today you
can realistically expect to get operational characteristics from solid state
storage assets which are considerably better than whatever came before. And
although there is still work to be done the storage industry can congratulate itself for collectively having done a good job - despite thrashing around at the start in a shambolic state of disarray because the usual suspects didn't see the SSD avalanche coming down the hill.
And it's because of the ubiquity of solid state
storage assets in the enterprise and the promise shown by early generations of
memory fabrics that the next phase of revolution in software is now underway -
which is how we get to memory defined software.
But first - briefly to hardware - because hardware always comes first. The point being (by way of an apology in advance) that you can write all the software you like which pretends that you're running a new computer business game on a new computing platform - but you only start getting the benefits and the thrills by doing it on the new hardware. Just over a year ago on this home page I wrote a blog -
after AFAs -
what's the next box? (cloud adapted memory systems) which hinted at the
kind of brew we should expect to see after the earlier pioneering percolators
of NVDIMM wars
had settled their territory disputes with alternative memories, tiered memory's
place relative to tiered storage and PCIe's transition from unruly invader to
settler in the territory of big memory fabrics which had up to not long before been dominated by fast versions of very old interfaces (IB and GbE).
(And just to warn you that - like previous land grabs - PCIe's position as a convenient gateway into big memory spaces is no more sacrosanct than what came before - as Gen-Z may be a faster way to do things in future - although we have the lessons of Infiniband versus Ethernet to show that sometimes the new does not improve fast enough to displace what came before.)
You've had plenty of warning that something is coming.
What do I mean by Memory Defined Software?
Simply this... Software which has been deliberately written to take advantage of the computational realities of memory with special characteristics in order to get behavior which was not possible before. The special characteristics may take many forms (a sketch of one such use follows the list):-
- nvm inside processors to enable instant reboot or context switches.
- trusted persistent memory which is used as an application dependent fast look-up or code translation / computational acceleration / interpretive layer.
- memory which is bigger than traditional storage capacities - and which does not break when you hit it with zillions of memory intensive operations which require sub microsecond random read/modify/write latencies.
- memory with embedded in-memory processing capability (achieved by FPGA or other in-package logic).
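As a concrete (and purely illustrative) example of the second idea - persistent memory used as an application level fast look-up - here is a minimal sketch of my own, not taken from any vendor's code. It assumes a hypothetical byte-addressable persistent memory region exposed as a DAX-mounted file (the path /mnt/pmem0/lookup.bin is invented for illustration); on a system without persistent memory the same code simply runs against an ordinary page-cached file.

# minimal sketch (illustrative only) - persistent memory as a fast look-up table
import mmap
import os
import struct

PMEM_PATH = "/mnt/pmem0/lookup.bin"   # hypothetical persistent memory mount point
SLOTS = 1024                          # fixed table of 64 bit values
SLOT_FMT = "<Q"
SLOT_SIZE = struct.calcsize(SLOT_FMT)

def open_table():
    # Create (or reopen) the backing region and map it into the address space.
    fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SLOTS * SLOT_SIZE)
    return mmap.mmap(fd, SLOTS * SLOT_SIZE)

def put(table, slot, value):
    # A plain store into the mapping - on real persistent memory the value
    # survives power loss once the affected page has been flushed.
    offset = slot * SLOT_SIZE
    struct.pack_into(SLOT_FMT, table, offset, value)
    page_start = offset - (offset % mmap.PAGESIZE)
    table.flush(page_start, mmap.PAGESIZE)

def get(table, slot):
    # Sub microsecond random reads are the whole point on real pmem hardware.
    return struct.unpack_from(SLOT_FMT, table, slot * SLOT_SIZE)[0]

if __name__ == "__main__":
    t = open_table()
    put(t, 7, 12345)
    print(get(t, 7))

The point of the sketch is that code like this treats persistence as a property of memory itself rather than as something reached through a storage stack.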
At first some of the new memory defined software which is designed to run on new memory systems may resemble the functional characteristics of software which was developed to run on tiered memory systems which include SSDs and flash as RAM virtualization. But just as storage software evolved so that code written for flash environments could no longer run with acceptable performance on HDD arrays - so too the split between true memory defined software and software written for solid state storage installations will become quickly apparent. I think sooner rather than later - as the stimulus driving new memory code is coming from newer, faster moving users who are more nimble in their adoption of new platforms which solve data dependent problems and who don't carry the same decades long baggage and requirements.
Happy Valentine's Day.
more sightings of little nv data critters in the wild anticipated in 2018 by
editor - January 18, 2018
|In my many years reporting on the SSD market here on the mouse site, solid state storage has been a cleansing catalyst of change, breaking down doability barriers and enabling the leveraging and repurposing of data wherever it may be. So if there is an economic value buried somewhere in the data it can be discovered, mined and delivered in ways which were previously impossible or unviable.|
Last year the SSD market looked like it was morphing into the memoryfication of everything (storage, software and systems).
As we're all aware from the commentary on linkedin / twitter etc the loudest action has been centered around making systems faster and cheaper and bigger in capability - but at the other end of the arena - the new lower capacity non volatile memory technologies are creeping into application roles with capacities which are maybe 1,000x smaller than a single nand flash or DRAM memory chip.
What useful things can you do with such small nvms?
In the right places - little nv data critters can do quite a lot. And that's a
direction I'll be writing more about too in 2018.
little nv data
critters already sighted / cited in January 2018
Technologies for ultra low power data critters were mentioned in several
stories in SSD news
in January 2018.
- eVaderis has taped out an "MRAM inside" MCU with 3Mb of
distributed nvm to support IoT applications which are - "normally-off/instant-on
microcontrollers with near zero latency boot".
- FRAM - which didn't gain traction in the
alt-nvm markets - has
some potential inheritors being developed. But apparently you can still buy old
style FRAM and power the devices from energy harvested from mechanical
vibration and converted by piezoelectric transducers.
- Longsys said it's offering customizable "SSD on a chip" PBGA SSD eMMC (8mm x 10mm) for smart devices.
the trajectory of the SSD market's onward rebound from memory shortages will
be directed by existential questions by
editor - December 1, 2017
|It was the best of times. It was the worst of times. 2017 was a year like no other in 40 years of SSD history. The trajectory of the SSD market's onward rebound from memory shortages will be directed by existential questions.|
For the first time in the modern era of SSDs (the no-turning-back years since 2003) the approximate number of SSDs shipped didn't rise substantially and instead remained essentially flat year on year.
The most obvious cause for SSD shipments flatlining seemed to be the much written about memory shortages. Memory chip shipments in 2017 were reported by market researchers to have been about the same or less than in 2016 due to worse than expected manufacturing yield problems associated with next generation memories. This was exacerbated by the impossibility of bringing new production capacity into place fast enough due to long lead times of production equipment and prohibitive investment risks.
While it's true that SSDs shipped in
2017 were on average bigger in capacity than in earlier years this was little
consolation to those SSD vendors whose growth ambitions - if they didn't own
wafer fabs - were shredded by circumstances seemingly outside anyone's control.
Reports from SSD companies coupled with rising
memory and SSD prices
gave credence to the notion that user demand for SSDs would have increased
had supplies been available. Had user appetites for SSDs and memory been
satisfied then prices would have fallen rather than risen and in that case the
interpretation of shipment data would have led to a different market prognosis.
But don't expect business to pick up where it left off when the next memory boom
bust correction kicks in. I think there are other factors already at work
which point towards the shape of future SSD shipments being materially
different in 2018 and 2019.
At the root of this are revisits to some fundamental questions:-
- what should memory systems do?
For example should in memory processing be part of the standard feature set?
- what should memory systems look like?
In addition to the
obvious form factor and interface issues which attached to DIMM wars and
memory fabrics - another question is - should future memory arrays be optimized
as storage systems which can emulate memory? or as memory systems which
provide backwards emulation of legacy storage?
You might say - why worry? As we've seen with the great solid state storage
experiment it takes years to roll out new architectural dice and the
winning patterns can't exert a backwards influence.
- to what extent should new memory systems be compatible with the past?
Or - given the market's recent willingness to
engage with memory systems as the only way to advance affordable computer
boundaries - should the market aim higher? Should the opportunities (of
performance and cost) enabled by new memory system architectures change the
shape of the very processors and software they are intended to work with?
I would argue
however that long before the truly revolutionary changes in memory systems
architecture are stabilized we are already seeing new influences coming from
pragmatic adaptations of currently shipping memory products (like NVDIMMs and
SoC compatible nvms) which - with the right software - have the ability to
change the ratios of other SSDs, memory and storage in the systems in which they are deployed.
These incremental technologies will change the patterns
of use of memory in every kind of computing product.
SSD designers in the past decade have been nibbling away at issues of SSD efficiency - answering the crude question - what's the best way to use any given number of memory chips if I can change the way they connect and the software? These improvements have typically accumulated in chunks from as small as 5% to as big as 50% in a single design feature (or patentable IP). As long ago as 2013 I hinted at the tremendous gap between where we had got to compared to what may be possible in my classic impacts of the SSD software event horizon. Recently heralded companies like Symbolic IO claim that they can do even better.
The new reality is that DRAM and flash are no longer the only defining memory types supported by useful software.
So called "emerging
memories" - some of which had gotten to be teenagers before they quit
their dark dens and emerged as data industry citizens - have this year
been at the heart of claims by systems oriented memoryfication startups that
they could change the world of storage and memory arrays arrays as much as
SSDs changed the landscape before.
In April 2017 on this very home
page - I asked the question - Are we there yet? - 23 years of SSD guides
later... I concluded at that time that the memory systems market and this publisher are still "under construction".
Now with the benefit of hindsight it seems I was right. As for the shaping of the SSD market's future I think we must anticipate bigger changes to
come in the next few years.
2017 - adding new notes to the music of memory tiering by
editor - November 14, 2017
|I think that developments in the SSD and
memory systems markets in 2017 will have as profound an effect on the future of
the data systems market and the direction of its architecture and software as
the adoption of flash SSDs in enterprise storage had on the design of
hard disk arrays and the
design of server motherboards. |
Although many of the influences behind this new fork in the road had been nurturing for several years before this - for example:
- competing software solutions for memory tiering
- the availability of 3 to 5 nvm alternatives to flash, and
- mainstream market acceptance of solid state storage at the heart of enterprise storage engineering
it was the accidental convergence or crashing together of alt nvms as usable modules (in DIMM, M.2 SSD and PCIe SSD form factors) in the same year as the inevitable but accidental and unpredicted market force - the shortage of flash (with its attendant price hikes which had the effect of making alt-nvms look 2 to 3 years better and more competitive than they had been in all the years before) - which made the lasting difference. From here on thinking about the internal make up and external presentation of memory systems would be different.
From 2017 - memoryfication solutions (tiering at the board, box and cloud level) will no longer be restricted to the same old tunes limited by the paucity of melodies obtainable from DRAM, flash and the intervening interfaces. Designers can now count on a new set of notes and arrangements to provide data harmonies which were hitherto extravagant to realize with the two old mainstay memory technologies with their well understood limitations of space, power consumption and raw latency. (Although many pioneering attempts at breaking these memory opera barriers came with a supporting cast of batteries and extra cooling technologies hidden behind the scenes.)
While no one can guarantee that MRAM, ReRAM or 3DX /
Optane will all continue to be available and competitive in multiple future
generations - the continued future existence of any one particular alternative
to flash and DRAM is less significant than the balance of probability that there
are enough technologies out there (and more in the works) to make it
worthwhile for software and hardware designers to apply their minds to enriching
the vocabulary of their architecture song books.
If I can use another
analogy - 19th century chemists made great strides in their anticipation of all
possible elements when they constructed the periodic table. For over a decade
the SSD market (and its SSD product atoms) has been both enabled and limited by the combination of building blocks which designers could construct
with 2 distinct memory types - subject to the constraints of the atomic forces
(price, wattage and ratios of capacity and latency between DRAM, flash and all
rotating storage) which set the boundaries of which architectural permutations
of components were viable at any point in time.
Looking ahead - the
availability of new memories in the mix and the willingness of designers to
leverage their features to create virtualizable benefits could be as significant to the data systems market as the advent of additive technologies (3D printing) was to the creation of new materials with characteristics which weren't
imaginable with traditional elements and compounds.
miscellaneous consequences of the 2017 memory shortages by
editor - September 7, 2017
|Traditionally at this time of year I cast around for the strategic threads which have underlain the stories reported in the SSD market. In 2017 there's no contest. There has been one
factor which has dominated the fortunes and future directions of the entire
SSD and memoryfication market. The memory shortages.|
As a long term evangelist
of the SSD market and the rethinking of data architectures which it has
enabled I have been naturally pleased to see that the adoption of solid state
storage has gone so well.
We're now concluding series 19 of the
SSD mouse site and the
story line began before that with a different name but the same
writer. If you
missed those earlier story lines see
why buy SSDs?
- plot spoilers include - side-stepping the Y2K-CPU-GHz barriers to bring us
faster applications processing, lowering the cost of big data and eliminating
the software shackles of one more spin around the rotating media block
which had fattened
latencies and choked host interface arteries due to wasteful
stuffit-just-in-case cache demands. But you don't need to know all those
old episodes to appreciate where we are now.
SSD thinking is now at
the center of all forward looking data architecture projects. SSD technology
is the mainstream. Demand is high.
In many ways that's a good thing.
But it's been a mixed blessing because production of memories has not kept
pace with demand.
Some of the winners and losers from this have been
easy to spot. But there have been new opportunities created by the memory
shortages and higher prices of memory too. This has helped efficiency and
utilization focused technologies to grab a hold on customer minds in ways which
would otherwise have been harder or even impossible if memory prices had merely
followed the decades old direction of travel.
re traditional memory makers
If you're one of those who has suffered from the memory shortages it may seem unfair that despite their miscalculations and over optimism the very companies which caused the shortages of memory and higher prices - the major manufacturers of nand flash and DRAM - have been among the biggest beneficiaries.
In the first half of 2017 blogging sites were celebrating the increasing values of memory related companies - in particular Micron which is a pure play memory stock.
And the upwards revaluing of memory fabs was a great help to the
Toshiba group of companies
which was looking to improve its solvency through the
disposal of its memory business. (Think of how differently that prospect
would have been interpreted if it had taken place against a backdrop of memory
oversupply and plunging memory prices.)
For traditional memory companies the ability to allocate where their highly sought after memory chips were going - in order to get the highest prices or to establish influence in future strategic markets - created opportunities for classic semiconductor game playing.
In the simplest business terms
if a memory company has a choice of selling at a higher value - such as an
enterprise SSD (instead of a consumer SSD) - then that's what it should do.
Similarly systems such as
AFAs and JBOFs
start to look more attractive than selling drives. In reality none of the semiconductor companies had invested sufficiently in establishing viable systems businesses before the 2017 shortages. But that didn't stop companies like Western Digital (which had a stake in Toshiba's fabs) from talking about it as a forward looking strategy.
re long time emerging memories
In the 13 years leading up to the memory shortages of 2017 there had been a
variety of so called "emerging" alternative memory technologies
including:- MRAM, PRAM, CMOx, PCM, ReRAM and others which at various times
appeared in the SSD news pages - usually attached to a
promise that one day
soon they would fill an applications niche which up to that point had been
dominated by nand flash. The perennial problem with those lookahead promises
was that the density and cost of that pesky flash just kept improving (SLC,
MLC, TLC, 4Xnm to 1Xnm and 2D to early 3D) so that the competitive comparison
tipping point always lay at some point 2, 3 or 4 years in the future.
As the smallest capacity flash devices got bigger it was always possible that these
other emerging memories might find small toe holds in the memory cliff face to
which they could cling and attach but unless flash stopped getting better and
stopped getting cheaper this looked to many observers like a race in freeze
frame. The next generation flash was always more competitive than the next
generation alternative nvm.
Aha! But then we had the 3D flash levels being stacked in a height busting tottering tower and the whole market edifice came crashing down with low yields and high prices - and the ever-improving miracle of the flash market was caught in the spotlight of having been accidentally switched to pause.
In a news story commenting on this opportunity created by high traditional memory prices I
said... "The unexpectedly higher price of DRAM and nand flash in the past
several quarters due to demand and yield issues has been like manna from heaven
to companies with alternative nvms. The change in relativistic competitive
landscape has had the same effect as if the alternative nvms could time travel 2
years into the future while nand and DRAM have stayed looped in Groundhog Day."
And - in May 2017 - in response to recent steps taken to productize and create sales channels for Everspin's MRAM - I said - should we still be calling MRAM - emerging memory?
An advantage of the long emerging memories was that they could be manufactured in fabs which weren't already part of the DRAM / nand flash oligopoly. And they were starting to clarify their suited application roles in the SSD and memory ecosystem:- as nvm in SoCs, flash SSDs, low capacity SSDs, high temperature SSDs, persistent memory etc.
The exception was Optane / 3DX from Intel and Micron which evolved from and replaced Micron's earlier
development of PCM. 3DX would have to fight internally for wafer starts in
traditional memory fabs. The scale of how those internal priorities would be
decided may be judged by the fact that Micron itself said in an earnings call
in January 2017
that - "3D cross point is a very de minimis amount of revenue in fiscal
2017. We will ship for revenue, but it's actually a fairly small amount and then
we've set the expectation for somewhere around 5% of company revenues in 2018."
And another difference with 3DX compared to other competing alt-nvms
is that it apparently did not look like it would be any easier to make than the
other 3D memories whose yields had caused the memory shortages. In
January 2017 the
CEO of BeSang said
that looking at cross-point structure memories (such as Micron's 3DXpoint) - "is
the worst nightmare for manufacturing".
re efficiency and
utilization - subtext architecture, software and the cloud
What do I mean here by efficiency?
To put it crudely it's a comparison of the design and implementation of SSD drives and boxes.
I called attention to this important business factor in my 2012 article - internecine SSD competitive advantage. And I have often mentioned it in stories when praising one kind of design approach compared to another. But even though I thought this was a desperately important differentiator between competing product lines (as did the innovative designers who had designed such products) you wouldn't have guessed this easily from the external signs
seen in the rackmount SSD market. The reasons being that brand strength and
actual bundled or implied software and services - coupled with the
complexities of different use cases - were just some of the factors which could
hide these internal differences from customers who were buying these systems.
Could they have guessed anyway - due to seeing different size boxes being offered to do exactly the same task? Don't blame the user for SSD box blindness.
All they knew is what they were paying - and they weren't always too sure what they were getting for their budgets anyway. Perhaps the investors in those AFA
companies should have known - but they were usually the last to know anything.
The street prices of enterprise flash storage arrays had become connected to
chip headcount realities only by the most tenuous of formulas. And truth to tell
- the difference between super efficient and less efficient designs didn't matter so much to small and medium users so long as the boxes they were buying today cost less than the boxes they had bought a little while before. Vendors of less efficient systems could argue - our boxes are more reliable (or some other distracting excuse) - and simply by waiting - lo and behold - the chips got cheaper and the box was more profitable.
As long as you could
buy all the chips you needed it didn't matter if some boxes used twice as many chips as others.
In contrast - in the mission critical embedded SSD
drive market - where the power consumption of a single slot is looked at by
someone who worries about watts in the box and what they do to reliability - the
efficiency factor was a better appreciated personality trait of SSDs.
But let's get back to SSD boxes (hybrid arrays, AFAs etc).
Now in 2017 you
can't get the chips - even if you can afford them. And maybe your customer won't
like the price of the box even if you could assemble it.
Efficiency starts to matter more.
But there were some other words in the
sub-headline too. Along with efficiency there was utilization.
What do I mean by that?
Utilization in this context is a measure of how much
usefulness is delivered at the applications level by a particular raw size of
installed flash. This usefulness benefit is usually delivered by a combination
of software and firmware (and may also include within it a differently tiered
and managed memory architecture). An example of the benefit can be where an existing flash array is improved to deliver significantly more reliability, performance or usable storage simply from a software update alone.
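To make the idea concrete, here is a minimal illustrative sketch of my own (not from any vendor) which compares two hypothetical arrays by how much application-usable capacity each delivers per raw terabyte of installed flash - the numbers are invented purely for illustration.

# hypothetical numbers - purely to illustrate the utilization idea above
arrays = {
    "array_A": {"raw_flash_TB": 100, "usable_app_TB": 35},  # heavy overprovisioning, little data reduction
    "array_B": {"raw_flash_TB": 100, "usable_app_TB": 80},  # better software: dedupe, compression, smarter tiering
}

for name, a in arrays.items():
    utilization = a["usable_app_TB"] / a["raw_flash_TB"]
    print(f"{name}: {utilization:.0%} of raw flash delivered as usable application capacity")

The same raw chip count can deliver very different usable outcomes - which is exactly why utilization matters more when the chips themselves are scarce.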
In the best designed systems
efficiency and utilization tricks and tweaks are already integrated at many
levels in the flash array.
Although software vendors like to talk about hyperconvergence, tiered memory, new stacks, memory defined storage etc - these can be viewed as marketing and branding ideas. They will soon be as quaint
sounding as the 1980s "RISC versus CISC".
During the years
when new technology tricks do something better - their protagonists reap kudos.
From the point of view of the memoryfication systems industry it doesn't greatly
matter what label is given to a particular technique. The important thing is
that the industry is working towards a better understanding of how to integrate
very large populations of memory chips with diverse characteristics and grouped
in historically defined interface combinations, and creating software bridges
which satisfy legacy applications needs while also incorporating the newer
memory focused demands of big data applications. In 10 to 20 years time all the
best design ideas for memory systems will be mixed up in ASICs or FPGAs and
seamlessly blended in the new standard software stack. The inventors may write
blogs or books about how their IP babies changed the industry - but most people won't notice or care.
Returning to the memory shortages... If - like me - you
believe that the industry will most likely remain in a state where demand
exceeds supply for a significant period (years rather than months or quarters)
then the only affordable way that enterprise users will satisfy their needs is
to head towards solutions suppliers which have the best efficiency and
utilization stories to tell.
At its simplest - that will accelerate
integration with the cloud
- because for the past 10 years the cloud and webscale integrators have been the companies with the sharpest focus on extracting value from improvement granularities which traditional box makers didn't care so much about.
But there are still huge opportunities in the enterprise box markets for companies ranging from JBOF to multipetabyte singing and dancing storage arrays to demonstrate - by their pricing and their ability to satisfy shipment demands from repeat customers - that doing more with less flash is at the core of their designs.
Software companies which promise they can upscale memory systems to do more with fewer chips in the box will be hot prospects.
Symbolic IO was much
praised in 2016 before the memory shortages. Their IP is bundled with hardware.
But new memory efficiency partnerships can be software-only or software tied
to a fab.
re hard drives and the memory shortages
Seagate was quick to
squelch expectations in the investor community that a shortage of memory chips
to make SSDs would have a positive impact on the sales of enterprise hard
drives. Although there may have been some small changes of ordering patterns in
the hybrid storage systems base Seagate wanted to dispel the notion that there is equivalency in these markets and that an SSD sale won is an HDD sale lost.
This publication has noted that it is realistic for hard
drives to retreat towards safer application roles which are compatible
with but don't aim to challenge the clear and present reality of a confident
SSDwards direction in server and storage markets. (Hard drives in an SSD world.)
And another factor is this. We're now in the post-HDD referential era of enterprise software. Most enterprise applications either don't work in a pure hard drive environment or if they did the performance would be so bad that you wouldn't want to use them.
In consumer markets particularly in PCs the deployment of
HDDs and SSDs has evolved to be a horse of a different color. Nowadays SSD
based PCs win or lose sales compared to other flash based devices such as
tablets. The hard drive based PC - which survived SSD encroachment better
than the unsuccessful market adaptation of the
hybrid - was
already on its way to becoming a vanishing species with or without nudges
from the 2017 memory shortages.
re SSD manufacturers without
captive memory fabs
The memory shortages of 2017 have highlighted
the differences between those few SSD manufacturers who have their own captive
source of memory and those others (the majority) which don't.
A common message I've heard from SSD makers in the latter category is that they
could have sold more SSDs (and SSD based systems) if they had gotten more
supplies of memory.
Another consequence of the shortages is that those without their own memory fabs have felt the squeeze most from pricing pressures.
Long before these recent shortages I had observed that the memory fabless SSD companies tended to be those who had better designs and who invested most in both value added and innovatively efficient designs. I also noted that in times of semiconductor memory gluts the fabless SSD companies were better positioned to grow market share while remaining profitable.
The memory shortages have been opening up cracks in SSD business plans which were too heavily predicated on expectations of cheap and plentiful memory.
An interesting development has been that even
industries which weren't expecting to use the newest generations of highest
density 3D flash - such as the
military markets -
have been hit by shortages in mature planar (2D) memory. You might have
expected them to be immune to leading edge 3D TLC yield problems - because this is a type of memory they are unlikely to use. The cascade of shortages into users
of bigger line geometry components is partly because memory makers were
already underway with hard to reverse plans to migrate most of their
production to 3D before they realized the unprecedented scale of associated
problems. And also systems companies with SSD product lines which were ready to
ramp to newer memories reacted by extending the shipments of older SSDs with as
much memory as they could get until supplies dried up.
re PCs and consumer gadgets
Industry reports said that PC makers were
among the big consumer casualties of the 2017 memory shortages. Were users
going to be happy to pay significantly more for the same old SSD based PC? No
way. They didn't get the choice. The long decline in the PC market due to more
than 10 years of badly designed SSD based notebooks was not an attractive
enough market proposition to warrant high allocations of memory.
But the phone market was different.
People love their phones and it's a
crisis if they can't get new ones.
re Samsung's phone business and memory
For Samsung which at the same time was one of the world's
leading phone makers and memory makers the memory shortages provided
opportunities to increase market share and profitability. (And maybe to expunge
the negative market images of exploding batteries and recalls from its 2016 phone troubles.)
re Apple's phone business and memory
Centered around Apple (the other big phone maker - but without its own memory fabs - why? - because memory is a commodity, Darling - which is also why Intel stayed so long out of the memory market which it created) - the talk and speculation in the 3rd quarter of 2017 about the effects of the memory shortages on Apple was split between these questions:-
Should Apple risk buying memory from its competitor Samsung? (When
this story surfaced in 2016
Samsung was optimistically anticipating a glut in its nand availability.) Or
should Apple join a consortium to acquire Toshiba's memory business - and
thereby secure its memory supplies?
These outcomes were still unknown
at the time of writing this. For samples of the reported Apple memory mood music see the September headlines:- ... asks Apple to join its bid for Toshiba (Sept 6); Apple says it won't buy Toshiba products if WDC gains control (Sept 8); ... in talks with group which includes Dell and Seagate to buy Toshiba (Sept).
clarification - re the 2017 memory shortages
If you're being pedantic you may ask why did I keep referring to the "2017" memory shortages in the notes above - didn't the shortages begin in 2016? You're right - they did. But it wasn't clear in 2016 just how long the shortages would
last and how much of a lasting impact they would make. Based on the experiences
of past memory business cycles and the upbeat messages from the memory market it
would have been reasonable to anticipate a quicker supply correction. That
didn't happen and so instead of being a blip caused by the industry's changeover
to next generation chip capacity - the shortages and higher costs of memory
became the new normal.
how long until there's a correction? - and the route to software mitigation as a memory shortage fixer
In July 2017 the measure of what we mean by "2017 memory shortages" was succinctly stated by market research company IC Insights which said in a report - "DRAM unit shipments are actually forecast to
show a decline this year (2017). Moreover, NAND shipments are forecast to
increase only 2%."
There you have it. Even after bringing new
production capacity onstream the effect of yield (usable chips versus defects)
is that the number of memory chips coming out of the world's semiconductor
fabs was about the same as it had been the year before. And although a
proportion of these were higher capacity the demands for memory were for both
more chips and higher density chips.
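To see how a yield problem can swallow added capacity, here is a toy arithmetic sketch - the numbers are invented for illustration and are not from IC Insights or anyone else - showing how more wafer starts combined with lower early 3D yield can leave good-chip output roughly flat.

# toy numbers (illustrative only) - yield drop cancels out added wafer capacity
wafers_2016, yield_2016 = 100, 0.90   # hypothetical mature planar process
wafers_2017, yield_2017 = 115, 0.78   # more wafers, but early 3D yield pain
good_chips_2016 = wafers_2016 * yield_2016
good_chips_2017 = wafers_2017 * yield_2017
print(good_chips_2016, good_chips_2017)   # 90.0 vs 89.7 - essentially flat output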
I dealt at length with the "no easy fix at the fab level" nature of the flash memory shortages in a
blog in July 2017 - 3D
nand fab yield - the nth layer tax - are more dimensions of analysis needed to
get a clearer picture of future 3D nand successions?. That analysis
underlies my belief that for the remainder of 2017 and 2018 we can't
realistically expect the semiconductor memory market to return to oversupply
and plunging prices from efforts and resources under its own control.
If you've read this far you won't be surprised that I think the biggest contribution to
mitigate pain for users and producers of memory systems will come from
better architecture based efficiencies and firmware and software based utilization improvements rather than from more precise deposition in the wafer fabs.
By their very nature (being
tightly coupled to controller IP and software cycles) these solutions will take
time to prove their worth and further time to gain wider market acceptance.
Think of it as a software correction to this memory supply cycle problem and
you'll get a better feel for the dynamics. This is completely different to the traditional semiconductor fab based approach (tweak the machinery settings to up the yield) which provided quick market corrections in past decades.
- the main fault with this new article is that it's too short and as with
all my new blogs has been published with its rough edges still visible. I'm
confident that I will find much additional material to add to it as the memory
shortages of 2017 unfold into 2018. And if I am in a position to do so I promise
to write a retrospective look back from the post shortage viewpoint when that time comes.
|The convenience of DWPD as
a way of selecting SSDs for application roles meant it quickly gained widespread
adoption in the enterprise and cloud.|
Indeed it was so useful that
within 3 years it was adopted in most of the other SSD markets too.
But before this starts to sound too much like a DWPD fan club I'd like to say that - like all technical specifications - DWPD has its uses but also has its limitations.
|the state of DWPD?|
|You can feel the Post
Modernist Era of SSD in the air everywhere.
Momentum has been building during the past 4 years with signals
coming from the appearance of memory channel SSDs, talk of in-situ SSD
processing, and much practical rethinking about RAM architecture.
And as I indicated in an earlier article -
Arrays - what next? (January 2017) - I think the next foreseeable staging
point will be that storage becomes less relevant as a product and will instead
become a supported legacy emulation concept within persistent memory systems.
What does that mean for CPUs?
|CPUs for use with SSDs in the Post Modernist Era of SSD and Memory Systems|
|Despite many revolutionary
changes in memory systems design and SSD adoption in the past decade we are
still not at the stage where it's possible to predict and plot the next decade
as merely an incremental set of refinements of what we've got now. |
|are we there yet? - 2017 and 40 years of SSDs|
|I said to a leading NVDIMM
company... This may be a stupid question but... have you thought of supporting
a RAMdisk emulation in your new "flash tiered as RAM" solution?|
|what could we
|Enterprise DRAM has the
same latency now (or worse) than in 2000. The CPU-DRAM-HDD oligopoly
optimized DRAM for a different set of assumptions than we have today in the
post modern SSD era.|
|reasons for fading out DRAM|
|Self awareness in the SSD
and memoryfication market fabric has spun new tunneling effects in business
strategies which now have the potential to instantly hop across segments with
infinite improbability. |
|market lesson in SSD year 2016|
|Many of the important and
sometimes mysterious behavioral aspects of SSDs which predetermine their
application limitations and usable market roles can only be understood when you
look at how well the designer has dealt with managing the symmetries and
asymmetries which are implicit in the underlying technologies which are
contained within the SSD.|
|how fast can your SSD run backwards?|
|Nowadays you can't expect
to understand the worldwide SSD market and realistically predict the likely
source and direction of strong influences without having some cognizance of the
SSD market in China.|
|who's who in the SSD
market in China?|
|A couple of years ago - if
you were a big company wanting to get into the SSD market by an acquisition or
strategic investment then a budget somewhere between $500 million and $1 billion
would have seemed like plenty.|
|VCs in SSDs and storage|
|With hundreds of patents
already pending in this topic there's a high probability that the SSD vendor
won't give you the details. It's enough to get the general idea.
|R/W and DSP ECC IP in SSDs|
|Why can't SSD's true
believers agree upon a single coherent vision for the future of solid state
storage? (They never did.) |
|the SSD Heresies.|
|If you spend a lot of your
time analyzing the performance characteristics and limitations of flash SSDs -
this article will help you to easily predict the characteristics of any new SSDs
you encounter - by leveraging the knowledge you already have.|
|flash SSD performance
characteristics and limitations|
|The memory chip count
ceiling around which the SSD controller IP is optimized - predetermines the
efficiency of achieving system-wide goals like cost, performance and reliability.|
|size matters in
SSD controller architecture|
|A popular fad in selling
flash SSDs is life assurance and health care claims as in - my flash SSD
controller care scheme is 100x better (than all the rest).|
|razzle dazzling flash SSD
cell care |