Solid-state drives (SSDs) – understanding the fine print

Under the hood, solid-state drives are complicated tech, and understanding a bit of that architecture matters when you shop for a new SSD. Two SSDs can look exactly the same, sometimes even share branding, yet have totally different real-world performance characteristics.

The performance of any SSD is ultimately a function of its underlying architecture – the complicated bits. Let’s talk about this underlying architecture in simple terms.

What makes up an SSD

There are five things to consider when talking about the average solid-state drive. They are its:

  • Flash memory
  • Controller
  • Buffer or cache worker
  • Data interface
  • Form factor

Flash memory

The flash memory of an SSD is its actual storage medium, that is, the component that holds stored data onboard the drive. Unlike HDDs, which rely on moving magnetic parts, flash memory has no moving parts. It's made up of 'fixed in state' transistor chips, hence the name 'solid-state.'

SSD flash memory allows your computer to retrieve data in a 'flash.' Whereas HDDs have to physically move a read/write head across spinning platters to reach data, flash memory accesses data electronically, with no movement at all.

NAND or NOR

SSD flash memory can be either NAND-type or NOR-type. The distinction barely matters in practice, as virtually every consumer-grade SSD you'll find uses NAND flash. For what it's worth, NOR flash historically offered faster reads, but NAND is denser, quicker to write and erase, and more cost-effective to produce, which is why it won out for storage.

SLC, MLC, TLC

Image: Comparing SLC, MLC and TLC SSD architectures

The building blocks of SSD flash memory are its cells (transistors). Flash memory stores data by altering the bit state of individual cells on the chip. If the flash memory chip is:

SLC or single-level cell

It means each transistor or cell can only store one bit of data

MLC or multi-level cell

It means each transistor can hold two bits of data

TLC or triple-level cell

It means each transistor can hold three bits of data

Think of the NAND flash storage architecture as a block of flats. Each flat is a 'cell.' If flats are designated to house one person each, it's single-level; two people and it's multi-level; three and it's triple-level.

Why is this important

One person per flat means less wear and tear and better comfort for the occupant. With two or three per flat, comfort drops and, in theory, the building ages faster. The same principle applies to NAND flash storage. When the architecture is SLC and each cell stores just a single bit of data, the cells (transistors) degrade more slowly over time, and data storage reliability and performance are better.

But making a block of flats single-occupancy means building twice the number of flats a two-per-flat arrangement would require. SLC uses more NAND flash real estate for the same capacity and is more expensive to build. MLC and TLC are ways to cut costs and bring down the pricing of consumer-grade SSDs.
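
To put rough numbers on that, here's a quick back-of-the-envelope Python sketch – purely illustrative, since real drives also set aside spare area and error-correction bits on top of this – showing how many cells a 1 TB drive would need under each config:

    # Rough cell-count arithmetic for SLC vs MLC vs TLC. Ignores the spare
    # area, ECC and other overhead real drives carry.
    BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

    def cells_needed(capacity_bytes: int, config: str) -> int:
        """How many NAND cells it takes to hold capacity_bytes of data."""
        total_bits = capacity_bytes * 8
        return total_bits // BITS_PER_CELL[config]

    ONE_TB = 10**12  # 1 TB as marketed (decimal terabyte)
    for config in ("SLC", "MLC", "TLC"):
        print(f"{config}: {cells_needed(ONE_TB, config):,} cells for 1 TB")
    # SLC needs 2x the cells of MLC and 3x the cells of TLC for the same
    # capacity, which is exactly why it costs more to build.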

Should your SSD be SLC? Not necessarily. The SLC config is better on paper, but advancements in tech have massively improved the resilience and performance of the other configs – particularly MLC. For the average consumer, any config will do at this point. You only need to start weighing SLC against MLC and TLC if you're particular about performance.

Controller

The controller is the brain of the SSD. Much like your computer has a CPU to organize and process things, an SSD’s controller serves the function of ‘controlling’ how the SSD operates. Specifically, a controller works to:

  • Order how data is written, read and managed on an SSD.
  • Order how the SSD interfaces with host devices (your PC).

Different SSDs will sport different controllers, and much like traditional CPUs, controllers differ in performance. The choice of controller varies by SSD vendor, and the technical details that differentiate one controller from the next are often treated as trade secrets.

Nevertheless, testing by independent reviewers regularly highlights which controllers perform best.

Buffer or cache worker (DRAM)

A buffer – or cache worker, as I like to call it – is a RAM chip on the SSD that helps optimize the process of reading and writing data.

SSD cache works via two mechanisms:

Storing frequently accessed data temporarily for faster access.

Using our block of flats analogy, think of the DRAM as a central conference room on the ground floor of our building. If we find that occupant ten, staying in flat ten, is being called out to repark his car every ten minutes, instead of letting him return to his flat after every reparking, we just station him in the conference room near the door of the building.

This way, we can easily call on him when needed, and because the conference room has a wide door and doesn't require us to scale stairs to reach the occupant, the whole process is much faster.

The DRAM cache works the same way. Our occupant is data, the conference room is the DRAM chip sitting alongside the NAND flash (the block of flats), and its wide doors and no-stairs design represent the random-access nature of RAM, which allows very fast access to any data it holds.
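
For the programmers in the room, here's a minimal sketch of that 'station him near the door' idea – a tiny least-recently-used cache sitting in front of slower storage. It's purely illustrative (an SSD's caching logic lives in controller firmware, not Python), with made-up data standing in for NAND contents.

    from collections import OrderedDict

    class TinyReadCache:
        """Toy LRU cache: a fast 'conference room' in front of slow 'flats'."""

        def __init__(self, backing_store, capacity=4):
            self.backing_store = backing_store   # stands in for slow NAND flash
            self.capacity = capacity             # DRAM is small, so cap it
            self.cache = OrderedDict()           # stands in for fast DRAM

        def read(self, address):
            if address in self.cache:            # cache hit: no trip upstairs
                self.cache.move_to_end(address)  # mark as recently used
                return self.cache[address]
            value = self.backing_store[address]  # cache miss: slow NAND read
            self.cache[address] = value
            if len(self.cache) > self.capacity:  # evict the least recently used
                self.cache.popitem(last=False)
            return value

    nand = {addr: f"data-{addr}" for addr in range(100)}  # pretend NAND contents
    cache = TinyReadCache(nand)
    for _ in range(5):
        cache.read(10)  # after the first read, occupant 10 is served from 'DRAM'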

So why don't we just use DRAM as the primary storage device instead of NAND flash? For starters, DRAM chips are more expensive to produce. Then there's the fact that DRAM is volatile, meaning it loses all its data once power is cut. NAND flash is non-volatile, so data stays put even after you unplug your PC (and the SSD inside it) from power.

Keeping a copy of the data map on the SSD

Data is stored at specific physical locations on an SSD. If we know the exact route to each location, rather than blindly climbing the stairs and searching flat by flat, fetching data is easier and faster.

To this end, the DRAM keeps a map of where your data lives on the SSD.
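
In code terms, that map is essentially a lookup table from the addresses your PC asks for to physical spots on the NAND chips. Here's a toy sketch of the idea – real drives handle this inside the flash translation layer, which is vastly more involved, and the chip/block/page values below are made up:

    # Toy logical-to-physical map of the kind the DRAM holds.
    # Real SSDs maintain this inside the flash translation layer (FTL).
    logical_to_physical = {
        0: ("chip 0", "block 12", "page 3"),
        1: ("chip 1", "block 7", "page 0"),
        2: ("chip 0", "block 12", "page 4"),
    }

    def locate(logical_address):
        """One dictionary lookup instead of 'searching flat by flat'."""
        return logical_to_physical[logical_address]

    print(locate(1))  # -> ('chip 1', 'block 7', 'page 0')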

Aside from speeding up data access and retrieval, the data map kept in DRAM also facilitates what's called 'wear levelling.'

Wear levelling

Remember the cells or transistors we talked about earlier, the ones responsible for storing data as bits in NAND flash? They're subject to electronic 'wear and tear': their performance and ability to store data wane over time.

To ensure that this wear and tear is spread uniformly across all cells in the NAND flash chip, the SSD performs wear levelling: it prioritizes writes to dormant cells or those that haven't been used recently.
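
A much-simplified sketch of that policy: track an erase count per block and always hand new writes to the free block that has seen the least wear. (Real controllers also shuffle rarely-changed data around, which this ignores; the block names and counts are made up.)

    # Simplified wear-levelling: always write to the least-worn free block.
    erase_counts = {"block A": 12, "block B": 3, "block C": 5, "block D": 40}
    free_blocks = {"block B", "block C", "block D"}

    def pick_block_for_write():
        """Choose the free block with the lowest erase count."""
        return min(free_blocks, key=lambda block: erase_counts[block])

    target = pick_block_for_write()
    print(target)              # -> 'block B', never the already worn-out block D
    erase_counts[target] += 1  # wear gets spread evenly over time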

Let's go back to our block of flats one more time. Our occupants (data) are tenants, and not every flat is occupied. When a tenant moves out and a new one comes in, we give the newcomer one of the flats that has sat vacant longest instead of the flat that just emptied. This way, flats are used evenly, and no single flat gets worn out ahead of the rest.

The DRAM facilitates wear levelling by acting as the porter at the front desk of our block of flats. It keeps a record of which flats have been heavily used and which are sitting dormant, so when a new occupant arrives, we know which underused flat to assign them to.

Why is this important

Despite the obvious benefits of a DRAM cache, a growing number of manufacturers leave it out of their SSD builds. Such DRAM-less SSDs have to handle the optimizations I just described far less efficiently (the data map ends up living in the slower NAND itself) and, as a result, are generally slower than SSDs with a DRAM chip on board.

If you’re concerned about performance, your SSD should have a DRAM chip.

It's fair to note that some manufacturers are experimenting with configs that do away with the DRAM but substitute another optimization setup in its place. A good example is the HMB setup described below but, as I said, it's still maturing – you're better off sticking with DRAM-equipped SSDs for now.

Host memory buffer (HMB) setup

Rather than carry a dedicated RAM chip (the DRAM we just talked about), SSDs with HMB technology borrow a slice of your PC's own RAM to deliver the same performance optimizations. The obvious advantage over the traditional DRAM config is cost – HMB SSDs, much like other DRAM-less SSDs, are cheaper.

Performance, however, is still not quite on par with DRAM-equipped SSDs, although some setups are getting very close.

Data interface

All we’ve talked about prior to this point relates to how SSDs handle their internal business. The data interface technology is what connects an SSD to the outside world, that is, what links it to other devices (host devices).

There are a lot of specifications and technical lingo associated with SSD data interfaces. If you're new to SSDs, it can get confusing, and that's normal – my first guide on SSDs that touched on data interfaces wasn't very accurate. It took a while to get fully up to speed.

Let’s attempt to break it down in very simple terms.

To interact with your PC, an SSD needs both a software logic (a protocol) and a hardware interface. Think of the software logic as what gathers and packages data moving in and out of your SSD. The hardware interface, on the other hand, is what physically connects your SSD to your PC, sort of like a bridge connecting two cities.

Software logic – AHCI and NVME

The two most common software logics you'll come across when shopping for a new SSD are the AHCI and NVME protocols.

Like I said, what these protocols do is define how the SSD and your PC communicate.

Of the two, NVME is the better protocol because it's faster and was designed specifically for flash storage. Most high-end SSDs on the market today use NVME.

AHCI was primarily developed for HDDs (not SSDs). When you see a storage device using the AHCI protocol, it'll most likely be an HDD or an SSD on a SATA interface.

SATA vs PCIe

Serial AT Attachment or SATA is kinda like the old dog of hardware interfaces for storage technology. It’s been the standard for years but is facing stiff competition from PCIe, and losing too. Most SSDs you’ll buy today, especially those oriented for performance, will use the PCIe interface as opposed to SATA.

The reason is simple: PCIe is faster. The SATA interface tops out at about 600 MB/s. A PCIe 3.0 x4 link can move roughly 3,500 MB/s – almost six times SATA's ceiling – and newer PCIe generations roughly double that figure each time.

PCIe, or Peripheral Component Interconnect Express, is the more modern hardware interface for SSDs. Its higher throughput comes down to design and architecture. I compared hardware interfaces to bridges earlier on; if SATA is a single-lane bridge, PCIe is a four-lane bridge that lets more cars cross at the same time.

Cars represent data: the more lanes a link has (x1, x4, x16), the more cars (data) can cross at any one instant, and on top of that, each new generation of PCIe raises the speed limit on every lane.

PCIe generations

The latest PCIe generation at the time of writing is PCIe 5.0; before it came PCIe 1.0, 2.0, 3.0 and 4.0. Each generation roughly doubles the transfer rate of every lane, while the lane count itself (x1, x4, x16) is a property of the slot and the drive – most NVME SSDs use four lanes. So a PCIe 4.0 x4 SSD has roughly twice the bandwidth ceiling of a PCIe 3.0 x4 drive and about eight times that of a PCIe 1.0 x4 device.
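
Here's the arithmetic behind those claims. The per-lane figures are the commonly quoted approximate throughputs after encoding overhead; real drives land a little below the link maximum.

    # Approximate usable throughput per PCIe lane (MB/s), after encoding
    # overhead. Figures are rounded; exact values vary slightly by source.
    PER_LANE_MB_S = {"1.0": 250, "2.0": 500, "3.0": 985, "4.0": 1970, "5.0": 3940}
    SATA_III_MB_S = 600
    LANES = 4  # the x4 link most NVME SSDs use

    for gen, per_lane in PER_LANE_MB_S.items():
        link = per_lane * LANES
        print(f"PCIe {gen} x4 = ~{link:,} MB/s (~{link / SATA_III_MB_S:.0f}x SATA III)")
    # Each generation roughly doubles the per-lane rate, so PCIe 4.0 x4 has
    # about twice the ceiling of PCIe 3.0 x4 and ~13x that of SATA III.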

Why is this important

If speed is a necessity for you, you want to get an SSD with the fastest software and hardware interfaces currently available on the market. The fastest software and hardware interface will vary based on when you’re shopping, but as a rule of thumb, at the time of writing this, you want an SSD that:

  • Uses NVME and not AHCI
  • Uses at least PCIe 3.0

System requirements for latest-gen SSD transfer tech

Note that to use an SSD with the latest-gen tech, you need the rest of your hardware to support said latest-gen tech. Let me put that into perspective. While PCIe 4.0 was released in 2017, it was not until 2019 that we saw motherboards and CPUs (AMD's Ryzen 3000 series being the first in this case) with support for the technology.

Before settling on any high-end SSD with latest-gen tech, make sure your current setup is compatible with said latest-gen tech.

Form factor

Image: SSD form factors – 2.5-inch vs. mSATA vs. M.2 vs. U.2

The form factor of an SSD describes its physical shape and build. Falling back on the bridge analogy: just as there are all-metal bridges like the iconic Golden Gate Bridge and others built from concrete, SSDs come in different builds that all do the same job.

There are quite a few SSD build types, but the ones you'll most likely encounter include:

  • 2.5 inch
  • mSATA
  • M.2
  • U.2

2.5 inch

This form factor sports a larger build than most other form factors in use today. Originally designed for HDDs and then adapted to fit first-gen SSDs, 2.5-inch is older tech. What you'll find is that most 2.5-inch SSDs connect to PCs via a SATA interface rather than the faster PCIe.

It's fair to note that despite relying on SATA, these SSDs are still far faster than a traditional hard drive: they max out the bandwidth of the SATA interface, whereas HDDs use only a fraction of it.

mSATA

In a practical sense, the mSATA form factor is a mini-version of the 2.5-inch form factor. The smaller build and very similar performance numbers make it a better choice than 2.5-inch, but both still connect to your PC via a SATA interface.

M.2

M.2 is a more recent form factor. It's about the same size as mSATA (sometimes a little bigger), but the connecting interface for most M.2 SSDs is PCIe/NVME (though budget M.2 drives that use SATA also exist).

Support for PCIe means M.2 SSDs connect directly to the motherboard and fully leverage the bandwidth and insanely fast read/write speeds associated with the PCIe interface. That's a long way of saying M.2 SSDs are usually faster than their SATA and mSATA counterparts, all else being equal.

U.2

U.2 SSDs look a lot like 2.5-inch SSDs, but unlike the latter, which connect via SATA, U.2 drives use PCIe. That makes them far faster than SATA and mSATA SSDs and broadly comparable to M.2 NVME drives. It's unlikely you'll come across a U.2 SSD while shopping, though – they're mostly used in enterprise storage systems.

Why is this important

Which SSD form factor you pick largely depends on your motherboard's specs and your performance needs. A board with only PCIe/M.2 connectors won't take a SATA drive, and vice versa. That said, most recent motherboards come with both SATA ports and M.2/PCIe slots.

Performance-wise, M.2 SSDs usually sport better read/write numbers and bandwidth. That's because most interface with your PC over PCIe, which, as I said, outperforms SATA. This is not to say that 2.5-inch and other SATA SSDs are in any way slow – for most consumer-grade work they're more than enough. However, if performance really matters, say you're a pro gamer or a video editor, your choice should be something along the lines of an M.2 NVME SSD.
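
If it helps, here's the buying logic of this section boiled down into a hedged little helper – a simplification of the guidance above, not a full compatibility checker (the inputs are just what you can read off your motherboard's spec sheet):

    def recommend_ssd(has_m2_slot, has_sata_port, performance_matters):
        """Very rough form-factor guidance distilled from this section."""
        if has_m2_slot and performance_matters:
            return "M.2 NVME drive over PCIe"    # best read/write numbers
        if has_m2_slot:
            return "M.2 drive (NVME or SATA)"    # tidy install, no cables
        if has_sata_port:
            return "2.5-inch SATA drive"         # still far faster than an HDD
        return "Check your motherboard manual"   # unusual board; verify its ports

    print(recommend_ssd(has_m2_slot=True, has_sata_port=True,
                        performance_matters=True))
    # -> 'M.2 NVME drive over PCIe'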

So far, we’ve covered the basic components of SSDs and how they should influence your buying decision. Let’s hop on to the many performance metrics you’ll find branded on SSDs, as those can be another great source of confusion.

Understanding SSD performance metrics

  • Read/Write speeds
  • IOPS
  • MTBF
  • TBW

Read/write speeds

Read/write speed is one metric you'll see plastered over many SSDs on the market today. It's a measure of how fast an SSD can store (write) data and how fast it can serve up (read) stored data to the PC components that need it.

Strictly speaking, better read/write speeds translate to better performance, but I wouldn't recommend basing your buying decision on the raw numbers alone. The R/W figure most manufacturers plaster on their SSDs is not a true reflection of real-world speed. Rather, it's the speed achieved under optimized conditions where the drive hits peak performance – something that rarely happens in everyday use.

But just so you have a reference point, aim for SSDs rated at around 3,200 MB/s reads if you're the average PC user, and in the 6,000–7,000 MB/s range if performance really matters to you.

IOPS

A better measure of real-world performance is the IOPS or input/output operations per second number, which represents how many random read or write operations an SSD can perform each second. Most SSDs post very good IOPS numbers, so even though this measure is more practical, it still isn't worth obsessing over – don't sweat it.
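
If you want to connect IOPS back to the MB/s figures above, the conversion is just IOPS multiplied by the block size used for the test (4 KB random reads are the usual benchmark workload). A quick sketch with an illustrative, made-up IOPS figure:

    # Relating IOPS to throughput: MB/s ~= IOPS * block size.
    # The IOPS figure is illustrative, not a measurement of any real drive.
    random_read_iops = 500_000
    block_size_bytes = 4 * 1024                      # 4 KB per operation
    throughput_mb_s = random_read_iops * block_size_bytes / 1_000_000
    print(f"~{throughput_mb_s:.0f} MB/s of 4 KB random reads")  # -> ~2048 MB/s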

TBW

In theory, an SSD cannot last forever. The TBW or Terabytes Written metric is a measure of how much data you can write to an SSD before it exceeds its rated lifespan. This relates back to how the cells in NAND flash degrade with use, which I talked about earlier.

I say 'in theory' because most consumer-grade SSDs on the market today carry TBW ratings in the hundreds of terabytes – a typical 1 TB drive is rated at around 600 TBW. Very few consumers, if any, will ever get near that threshold. That said, if you plan on putting your SSD through its paces, it makes sense to pick a drive with a higher TBW rating for its capacity.
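
A quick way to sanity-check a TBW rating against your own habits – both numbers below are illustrative assumptions, roughly a typical 1 TB consumer drive and a fairly heavy desktop workload:

    # How long a TBW rating lasts at a given daily write volume.
    # Both inputs are assumptions for illustration, not specs of any drive.
    tbw_rating_tb = 600       # rating in the ballpark of a 1 TB consumer drive
    daily_writes_gb = 50      # a fairly heavy desktop workload

    years = tbw_rating_tb * 1000 / (daily_writes_gb * 365)
    print(f"~{years:.0f} years of writing before the TBW rating is exhausted")
    # -> roughly 33 years; most consumers will never get close.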

There are many other metrics – MTBF (mean time between failures) being a common one – but unless you're planning on using your SSD at enterprise scale, most are what I'd describe as 'vanity metrics.'

As long as you're getting an SSD with decent R/W speeds and a fair TBW rating, and you opt for a recent build with the features I discussed earlier, you'll be fine.

SSD tier list

With practically all the ins and outs of SSDs covered, I'll end the discussion with an SSD tier list – a ranked list of SSDs you can buy right now, based on performance.

NB: This list is mostly an adaptation of the recommended SSDs on the r/SSD sub on Reddit.

SATA type SSDs

Tier one

Samsung 860 PRO

Tier two

Samsung 860 EVO, Crucial MX500, WD Blue 3D

Tier three

ADATA SU800, ADATA SX850 UD Pro, HP S700 Pro

Tier four

Crucial BX500, HP S700

Tier five

HP S600, Toshiba OCZ TR200

M.2 NVME type SSDs

Tier one

Samsung 980 Pro, Neutron NX500

Tier two

Samsung 980 Evo plus, HP EX950, ADATA SX9000 Pro

Tier three

Samsung 970 EVO, ADATA SX8200, Gammix S11, Corsair Force MP510

Tier four

ADATA SX6000 Pro, HP EX900

Tier five

Crucial P1, Intel 660p

Tier six

ADATA SX6000, Kingston A1000, SBX Force MP300