
HMB vs Pseudo-SLC vs DRAM SSD: What is the difference?

In our article about DRAM vs DRAM-less SSDs, we discussed how DRAM-less SSDs still rely on caching and buffering mechanisms, and we touched briefly on HMB caching. Here, we will discuss both in full detail, along with pseudo-SLC caching.

DRAM plays a very important role in almost everything that happens inside an SSD. It stores the FTL, caches frequently used data, stores metadata, and more. If the SSD doesn’t have some other form of write caching, it can also serve as a write cache. However, manufacturers have managed to replicate most of these features without DRAM with the help of HMB.

I had planned to just compare DRAM and HMB, but I decided to add pseudo-SLC caching because it is closely related. SLC caching is used primarily to improve initial write performance, and there is a lot of confusion around it.

Without DRAM, NVMe 1.2 and later SSDs generally fall back to HMB. But a drive could come with DRAM plus an SLC cache, HMB (no DRAM) plus an SLC cache, or just DRAM or just HMB. Manufacturers decide which of these to employ depending on the price point of the drive and other criteria.

What is DRAM in SSDs?

SSDs store data inside NAND flash chips that contain billions of memory cells. To keep track of this data, the controller maintains a Flash Translation Layer (FTL). This layer maps the Logical Block Addresses (LBAs) used by the operating system to physical block locations on the NAND flash. Because data is constantly being moved around on the NAND (by wear leveling and garbage collection), the controller relies on the FTL to find the current location of the requested data and to keep track of free space.
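To make the idea concrete, here is a minimal sketch of what the FTL conceptually does; the flat dictionary and 4 KiB page size are simplifying assumptions, since real mapping tables are segmented, journaled, and much larger.

```python
# Minimal sketch of the core FTL idea: a table that maps logical addresses
# (what the OS asks for) to physical NAND locations (where the data really is).
# The flat dictionary and 4 KiB page size are simplifying assumptions.

PAGE_SIZE = 4096

ftl: dict[int, tuple[int, int, int]] = {}   # LBA -> (channel, block, page)

def write_page(lba: int, location: tuple[int, int, int]) -> None:
    ftl[lba] = location                     # remap the LBA to its new home

def read_page(lba: int) -> tuple[int, int, int]:
    return ftl[lba]                         # find where the data currently lives

write_page(100, (0, 57, 12))                # LBA 100 lives on channel 0, block 57, page 12
write_page(100, (1, 3, 8))                  # overwrite: same LBA, new physical location
print(read_page(100))                       # -> (1, 3, 8)
```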

The primary function of DRAM is to store this FTL and act as a high-speed working memory for the controller, giving it fast access to the mapping tables. Without DRAM, the FTL has to be read from the much slower NAND flash, which results in slower performance.

DRAM has another key role as a read/write buffer. Most SSDs these days come with a pseudo-SLC write cache that absorbs bursts of incoming data. For random performance, however, keeping copies of frequently accessed data and metadata close to the controller matters a great deal, and DRAM does exactly that.

DRAM also helps the garbage collection (reclaiming space from deleted data) and wear leveling (evenly distributing writes across the NAND) algorithms work more efficiently. Both of these routines need to read and update the FTL very frequently. With the FTL held in DRAM, that latency is reduced, so they run faster.
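The sketch below shows, in a simplified way, why both routines lean on that metadata: wear leveling wants the least-worn block, and garbage collection wants the block with the most stale pages. The data structures here are assumptions for illustration, not real firmware.

```python
# Illustrative sketch of the metadata wear leveling and garbage collection
# consult (erase counts, valid/invalid page counts). Values are made up.

blocks = [
    {"id": 0, "erase_count": 120, "invalid_pages": 200, "valid_pages": 56},
    {"id": 1, "erase_count": 45,  "invalid_pages": 10,  "valid_pages": 246},
    {"id": 2, "erase_count": 80,  "invalid_pages": 250, "valid_pages": 6},
]

# Wear leveling: prefer writing new data to the least-worn block.
least_worn = min(blocks, key=lambda b: b["erase_count"])

# Garbage collection: reclaim the block with the most invalid (stale) pages,
# since it frees the most space for the least amount of valid-data copying.
gc_victim = max(blocks, key=lambda b: b["invalid_pages"])

print("write to block", least_worn["id"], "| collect block", gc_victim["id"])
```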

[Image: DRAM chip on an SSD]

How does DRAM work in SSDs?

A DRAM chip is installed on the SSD's PCB and communicates directly with the controller. With this fast memory of its own, the controller uses it for the various tasks it is programmed to perform.

Let's talk about the main role first. The controller loads the FTL from the NAND flash into the DRAM. From then on, whenever the controller needs to access the FTL, it reads and updates it directly in DRAM. When the system shuts down, the updated FTL is written back to the permanent NAND flash so the mapping data isn't lost; some drives also use capacitors to keep the controller and DRAM powered just long enough to complete this flush after a sudden power loss.

When the wear-leveling or garbage collection algorithms run, they need metadata to find free blocks and the valid data that has to be moved. When this metadata is held in DRAM, the controller can read it at very low latency.

DRAM is a capacitor-based memory with very high bandwidth and low latency, so it works well as a buffer. If the SSD isn't using another form of write caching such as pseudo-SLC caching, the DRAM can act as a write buffer to absorb heavy streams of incoming data, and it can also hold hot data for frequent read operations.

Whenever a read or write command comes from the host, it reaches the controller first. For a read command, the controller first checks the DRAM for cached data or the relevant mapping metadata; if it isn't there, it loads the data from the NAND flash and can cache it in DRAM for next time. For a write command, the controller lands the data in the SLC cache (if present) or buffers it in DRAM so the write completes faster.
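Here is a minimal sketch of that decision flow, assuming a simple key-value cache; the cache object and helper functions are placeholders invented for illustration, not real controller firmware.

```python
# Simplified sketch of the read/write decision flow described above.

dram_cache: dict[int, bytes] = {}     # hot data / metadata kept in DRAM

def read_from_nand(lba: int) -> bytes:
    return b"..."                     # stand-in for the slow NAND read path

def write_to_slc_cache(lba: int, data: bytes) -> None:
    pass                              # stand-in for landing the write in pSLC

def handle_read(lba: int) -> bytes:
    if lba in dram_cache:             # DRAM hit: served at DRAM latency
        return dram_cache[lba]
    data = read_from_nand(lba)        # miss: fetch from NAND...
    dram_cache[lba] = data            # ...and cache it for next time
    return data

def handle_write(lba: int, data: bytes) -> None:
    write_to_slc_cache(lba, data)     # absorb the write in the fast pSLC region
    dram_cache[lba] = data            # keep the fresh copy hot for reads
```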

[Image: SSD with a dedicated DRAM chip]

What is HMB in SSDs?

HMB stands for Host Memory Buffer. SSDs that don't come with their own DRAM chips generally use HMB. In this approach, a small portion of the system's RAM is used in place of the SSD controller's own DRAM. However, not all DRAM-less SSDs are HMB SSDs: some drives keep the FTL on the NAND flash or in the SRAM built into the controller. The HMB feature was introduced with NVMe 1.2 and is available in later revisions. The SATA protocol doesn't support HMB.

[Image: SSD with DRAM vs SSD with HMB]

HMB doesn't fully replicate on-board DRAM, but it is a smart way to claw back some of the lost performance. The HMB holds only a portion of the FTL, not the entire table. So, on larger-capacity drives, a bigger share of lookups will miss the HMB and you may see performance drop off.
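To see why capacity matters, here is a rough back-of-the-envelope calculation. The 4-byte-entry-per-4-KiB-page ratio is a common rule of thumb rather than a spec value, and the 64 MB HMB figure is just an assumed allocation.

```python
# Rough, illustrative estimate of how much of the FTL fits in a typical HMB
# allocation. The 4-byte-entry-per-4-KiB-page ratio is a rule of thumb, not a
# spec; real drives compress and segment their mapping tables.

def ftl_size_bytes(capacity_gb: int, page_kib: int = 4, entry_bytes: int = 4) -> int:
    pages = capacity_gb * 1024 * 1024 // page_kib   # number of mapped pages
    return pages * entry_bytes                       # full map size in bytes

HMB_MB = 64                                          # assumed HMB allocation
for capacity in (256, 512, 1024, 2048):
    full_map_mb = ftl_size_bytes(capacity) / (1024 * 1024)
    print(f"{capacity} GB drive: full map ≈ {full_map_mb:.0f} MB, "
          f"a {HMB_MB} MB HMB covers ≈ {HMB_MB / full_map_mb:.0%} of it")
```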

HMB is slower because the mapping data has to travel over the PCIe interface, which adds latency. So, yes, you get performance benefits (both random and sequential), but you can't expect the same results as an SSD with its own DRAM.

HMB also serves as a cache for frequently accessed data and helps improve both random read and write operations. It stores metadata (data that describes other data) related to the FTL, wear leveling, ECC, garbage collection, TRIM, over-provisioning, and so on.

How does HMB work in SSDs?

HMB is part of the NVMe specification from version 1.2 onward. So, if a DRAM-less drive and its driver both support the feature, the drive will use a host memory buffer. The size of the HMB is negotiated between the SSD controller and the host: the drive requests a preferred (and a minimum) amount of RAM, and the operating system decides how much to actually allocate based on available memory. Typically, the HMB size ranges from 16 MB to 128 MB.
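Below is a minimal sketch of that negotiation, loosely modeled on the NVMe Identify Controller fields for the preferred (HMPRE) and minimum (HMMIN) HMB size; the granule size and the host's policy here are assumptions made for illustration.

```python
# Minimal sketch of HMB size negotiation. The granule and host policy are
# illustrative assumptions, not the behavior of any specific NVMe driver.

HMB_GRANULE = 4096  # assume the controller reports sizes in 4 KiB units

def negotiate_hmb(hmpre_units: int, hmmin_units: int, host_limit_bytes: int) -> int:
    """Return the number of bytes of host RAM granted to the SSD (0 = no HMB)."""
    preferred = hmpre_units * HMB_GRANULE
    minimum = hmmin_units * HMB_GRANULE
    if preferred == 0:
        return 0                                      # drive doesn't request an HMB
    granted = min(preferred, host_limit_bytes)        # host caps the allocation
    return granted if granted >= minimum else 0       # honor the drive's minimum

# Example: drive prefers 64 MiB, needs at least 16 MiB; host caps HMB at 32 MiB.
print(negotiate_hmb(16384, 4096, 32 * 1024 * 1024) // (1024 * 1024), "MiB granted")
```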

When the system boots up, the SSD controller works with the NVMe driver in the OS to request the HMB, and the operating system performs the allocation. Once the HMB is allocated, the first thing the controller does is load (part of) the FTL from the NAND flash into it. Metadata and frequently accessed data are also kept in the HMB if space allows.

The SSD dynamically decides which data resides in the HMB based on the workload, to optimize performance. If the data it needs isn't in the HMB, the controller falls back to reading it from the NAND flash, which is slower.
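One way that fallback could look in principle is sketched below; the segment size, cache capacity, and plain LRU eviction are illustrative assumptions, not a description of any specific firmware.

```python
# Minimal sketch of keeping only the "hot" portion of the mapping table in the
# HMB and falling back to NAND on a miss. All sizes and the LRU policy are
# illustrative assumptions.

from collections import OrderedDict

SEGMENT_ENTRIES = 1024          # LBAs covered by one cached map segment
HMB_SEGMENTS = 64               # how many segments fit in the allocated HMB

hmb_cache: "OrderedDict[int, list]" = OrderedDict()

def read_segment_from_nand(segment_id: int) -> list:
    return [None] * SEGMENT_ENTRIES        # placeholder for the slow NAND path

def lookup_physical_address(lba: int):
    segment_id = lba // SEGMENT_ENTRIES
    if segment_id in hmb_cache:                      # hit: cheap read over PCIe
        hmb_cache.move_to_end(segment_id)            # mark as recently used
    else:                                            # miss: slow NAND read
        if len(hmb_cache) >= HMB_SEGMENTS:
            hmb_cache.popitem(last=False)            # evict least recently used
        hmb_cache[segment_id] = read_segment_from_nand(segment_id)
    return hmb_cache[segment_id][lba % SEGMENT_ENTRIES]
```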

What is pseudo-SLC caching in SSDs?

Pseudo-SLC caching is a technique that lets slower TLC/MLC/QLC NAND flash behave like SLC NAND flash. The controller programs a portion of the multi-bit cells with only a single bit each. This performance boost, however, comes at the cost of capacity: the same cells store far less data while they are used in SLC mode.

Consumer SSDs almost always use MLC, TLC, or QLC NAND flash, which stores more than one bit per cell. These memories are dense but slower, and their native write speeds in particular are poor. So, to boost (mainly sequential) write speed, part of the flash is operated as pseudo-SLC. Because SLC-mode cells hold just one bit each and need far fewer voltage states, writes to this region are much faster.

The size of this SLC cache is decided by the manufacturer, and once the cache is filled with incoming data, performance falls back to that of the underlying NAND flash. This caching only enhances the initial write speed. Even though this is a burst speed, manufacturers usually advertise it as the drive's headline write speed. After the cache fills, the speed can drop by more than 50%.
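The behavior is easy to picture with a toy model; the cache size and both speed figures below are made-up numbers for illustration, not measurements of any drive.

```python
# Toy model of how sustained write speed changes as the pSLC cache fills.
# The cache size and both speed figures are illustrative assumptions.

CACHE_GB = 40           # assumed pSLC cache size
SLC_SPEED = 5000        # MB/s while writing into the pSLC cache (assumed)
NATIVE_SPEED = 1500     # MB/s once writes land directly on TLC/QLC (assumed)

def write_speed(gigabytes_written: float) -> int:
    """Approximate instantaneous write speed during one long transfer."""
    return SLC_SPEED if gigabytes_written < CACHE_GB else NATIVE_SPEED

for point in (10, 30, 50, 100):
    print(f"after {point:>3} GB written: ~{write_speed(point)} MB/s")
```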

Generally, high-end NVMe drives come with the largest SLC caches. The SLC cache can also improve read performance for data that is still sitting in it.

[Image: formation of the pseudo-SLC cache]

How does pseudo-SLC Caching work?

Native SLC NAND flash is very expensive and is mostly found in enterprise and industrial products. For consumer drives, engineers came up with a smart workaround. SSDs with MLC, TLC, or QLC flash have poor native write speeds because programming multiple bits per cell takes more steps and finer voltage control for erasing, preparing, storing, and managing the written data. Pseudo-SLC caching is a great way to work around this problem.

Pseudo-SLC caching, or simply SLC caching, is a clever technique used by SSD manufacturers to enhance sequential write performance. It helps the most when we write big files to our drives. SSDs store data inside tiny NAND flash cells. In SLC (Single-Level Cell) flash, one cell stores only one bit of data, i.e., either a 0 or a 1. With the advent of multi-level cells, the same cell now stores 2 bits (MLC), 3 bits (TLC), 4 bits (QLC), or even 5 bits (PLC).

As the number of bits stored in a single cell increases, performance decreases because the cell has to distinguish between many more voltage levels. With a single bit per cell, read/write operations become very fast, but storage capacity is sacrificed.
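The capacity trade-off is simple arithmetic, sketched below with an example figure.

```python
# Quick arithmetic: how much pSLC cache a block of TLC flash can provide.
# Treating cells that normally hold 3 bits as 1-bit cells divides the usable
# capacity of that region by 3. The 12 GB figure is just an example.

TLC_BITS_PER_CELL = 3
region_tlc_gb = 12                                   # TLC capacity set aside for caching
pslc_cache_gb = region_tlc_gb / TLC_BITS_PER_CELL    # same cells in SLC mode
print(f"{region_tlc_gb} GB of TLC used as pSLC yields ~{pslc_cache_gb:.0f} GB of cache")
```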

So, manufacturers reserve some of the NAND flash as a pseudo-SLC cache. This region absorbs heavy incoming streams of data, which are mostly sequential (big files). It can also help with random writes, because the controller can coalesce many small write requests into larger, organized writes before committing them.
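Here is a tiny sketch of that coalescing idea: small writes accumulate in a buffer and are flushed to the pSLC cache as one larger chunk. The 64 KiB threshold and the flush target are arbitrary assumptions.

```python
# Toy sketch of coalescing small random writes into one larger flush to the
# pSLC cache. The 64 KiB threshold and the flush target are illustrative.

FLUSH_THRESHOLD = 64 * 1024

pending: list[tuple[int, bytes]] = []   # (LBA, data) pairs waiting to be flushed
pending_bytes = 0

def flush_to_pslc(batch: list[tuple[int, bytes]]) -> None:
    total = sum(len(data) for _, data in batch)
    print(f"flushing {len(batch)} small writes as one {total}-byte chunk")

def buffered_write(lba: int, data: bytes) -> None:
    global pending_bytes
    pending.append((lba, data))
    pending_bytes += len(data)
    if pending_bytes >= FLUSH_THRESHOLD:    # enough accumulated: write once, sequentially
        flush_to_pslc(pending)
        pending.clear()
        pending_bytes = 0

for i in range(20):                         # twenty scattered 4 KiB writes
    buffered_write(1000 + i * 37, b"x" * 4096)
```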

[Image: SLC write caching graph]

In the graph, you can see a sudden drop in performance that may even dip below the speed of a drive writing without an SLC cache. This is because, once the SLC cache is full, the drive has to move (fold) the cached data into the normal NAND flash while still accepting new writes, and the controller must reprogram those cells to store more than one bit of data again.
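A minimal sketch of that fold step, assuming a simple 3:1 SLC-to-TLC packing; real controllers schedule this in the background alongside wear leveling and incoming host traffic.

```python
# Minimal sketch of "folding": migrating data out of the pSLC cache into
# native TLC blocks once the cache fills. Block sizes and the 3:1 ratio are
# illustrative assumptions.

TLC_BITS_PER_CELL = 3

def fold_slc_blocks(slc_blocks: list[bytes]) -> list[bytes]:
    """Pack groups of SLC-mode blocks into denser TLC-mode blocks."""
    tlc_blocks = []
    for i in range(0, len(slc_blocks), TLC_BITS_PER_CELL):
        group = slc_blocks[i:i + TLC_BITS_PER_CELL]
        tlc_blocks.append(b"".join(group))   # 3 SLC blocks -> 1 TLC block's worth
    return tlc_blocks

# Example: six cached SLC blocks fold into two TLC blocks, freeing the cache.
cache = [bytes([n]) * 4 for n in range(6)]
print(len(fold_slc_blocks(cache)), "TLC blocks written,", len(cache), "SLC blocks freed")
```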

On Samsung SSDs, you might have heard the term “TurboWrite”. This is Samsung's name for what it achieves with pseudo-SLC caching: a way to boost heavy write operations by exploiting the simpler, faster programming of SLC-mode NAND flash.

Conclusion

DRAM, HMB, and SLC caching are related but distinct techniques used to enhance SSD performance. An SSD that comes with both DRAM and SLC caching will generally offer great performance in most scenarios. HMB, on the other hand, replicates some of DRAM's role in budget drives, and it is perfectly adequate for everyday workloads. In most high-end NVMe drives, you will find both DRAM and SLC caching.
