
How Does an SSD Work? A Detailed Guide

SSDs have become the most common storage devices in computers. There are many types of SSDs that differ in speed, application, form factor, and price, but the basics remain the same: all solid-state drives work on essentially the same mechanism.

The three main hardware parts of any SSD are the NAND Flash, the Controller, and the DRAM. There is also a firmware layer that handles communication with the host, optimization, error correction, wear leveling, and more.

SSDs work by storing data inside floating gate transistors, also called cells. In most modern SSDs, however, the floating gate is replaced by charge trap flash cells. It takes trillions of these cells to make a 1TB SSD. Each transistor can hold a charge in a gate that is sealed off by insulating material. Depending on its design, each NAND flash cell can store 1, 2, 3, or 4 bits of data. Furthermore, there is a controller on the SSD that allocates a cell location for incoming data and maintains mapping tables so the data can be found again when it has to be read.

SSDs combine these NAND flash cells with a fast interface such as PCIe on the motherboard to connect to the PC. The NVMe protocol lets these drives work with tens of thousands of command queues at a time.

Anatomy of an SSD

anatomy of an SSD (NVMe)

All the components of an SSD are mounted on a PCB (Printed Circuit Board). The connector depends on the drive interface. On an M.2 SATA SSD, the edge connector carries a B key, a notch toward the left side (many SATA drives are keyed B+M). An M.2 NVMe drive uses an M key, with the notch on the right side.

The controller is the first part of the drive that connects to the motherboard through this connector. It communicates with the DRAM and manages data transfer to and from the NAND flash memory. There are other components as well, such as pull-up/pull-down resistors and capacitors for voltage filtering and regulation.

What is NAND Flash in SSDs and how does it work?

NAND Flash is the main storage area of your SSD. It is an integrated circuit with millions or billions of transistors, depending on the capacity. The basic element is the cell, followed by strings, pages, layers, and blocks. Let's start with the cell and work our way up.

1. Cell

The cell is the most fundamental element inside your SSD. It is made up of a floating-gate transistor, sometimes called a Floating Gate MOSFET. Modern 3D NAND uses a different variant called Charge-Trap Flash, which offers better durability. If you don't know electronics, you might find its mechanism hard to follow, but in simple words, it is an electronic component that stores data in a layer sandwiched between two layers of material that don't conduct electricity. (In a floating gate cell, the storage layer itself is conductive.) The two insulating layers form a barrier that keeps the charge trapped there for a very long time.

When millions or billions of these cells are combined, they form a NAND flash memory array. Let's have a look at how floating gate and charge trap flash memory cells look.

A 1 TB SLC (Single-Level Cell) SSD would need roughly 8 trillion individual cells, since every byte takes 8 cells at one bit each. For consumer drives, manufacturers generally store 3 or more bits per cell. This increases the storage capacity in the same space but also increases wear on the flash. Why the wear increases is discussed later in the article.

Because the data is stored in the form of bits, the cell stores it in the form of electric charge. Following NAND convention, you can think of a cell with no charge as a 1 and a charged cell as a 0.

The concept of bits exists just to help humans understand and program computers. In reality, there are no 0's and 1's; there are just nanoscopic circuits being turned off and on. When we say one bit, whether 0 or 1, we are talking about the presence or absence of charge somewhere in the circuit.
In SSD cells, the absence of charge in the floating gate transistor means a 1, and if there is a charge, it is read as a 0. Writing data is just pushing this charge into the gate. For reading, the controller checks whether there is a charge on the floating gate.
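As a rough mental model of that convention only, here is a minimal Python sketch (the FlashCell class and its methods are invented for this article, not part of any real SSD firmware): an erased cell reads back as 1, and writing a 0 means injecting charge into the gate.

# Illustrative-only model of a single SLC flash cell.
# Convention: no charge = logical 1 (erased), charge present = logical 0.
class FlashCell:
    def __init__(self):
        self.has_charge = False          # erased by default -> reads as 1

    def program(self, bit: int) -> None:
        # Writing a 0 injects charge into the floating gate;
        # writing a 1 leaves the erased cell untouched.
        if bit == 0:
            self.has_charge = True

    def read(self) -> int:
        # The controller senses whether charge is present.
        return 0 if self.has_charge else 1

cell = FlashCell()
print(cell.read())   # 1 (erased)
cell.program(0)
print(cell.read())   # 0 (programmed)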

It is practically impossible to visualize a cell because it is extremely small, with feature sizes in the range of roughly 10 nm to 20 nm. Below is an example image to help understand the workings of a floating gate transistor.

Floating gate transistor diagram

Storing Terabytes of data inside floating gate transistors

In 1967 at Bell Labs, two scientists named Dawon Kahng and Simon Min Sze reported the first floating gate MOSFET, building on the MOSFET that Mohamed Atalla and Dawon Kahng had invented in 1959.

In 1989, Intel used floating gate transistors in its ETANN chip, an analog neural network processor.

Through the 1990s and 2000s, the industry moved to ever-smaller process nodes, eventually producing flash chips in the 45 nm class and below. In the early 2010s, 3D NAND was introduced to overcome the limits of planar scaling; with vertical stacking, density increased dramatically. Today, leading-edge logic chips are approaching 2 nm-class nodes, while NAND density keeps growing mainly by adding more layers.

Because of this evolution, we can fabricate many billions of these transistors inside tiny chips and reach huge storage capacities. But at its core, it all comes down to a transistor that can hold a charge behind an insulating material, plus circuitry designed to write data in the form of charge and read it back whenever required.

types of NAND cells and their corresponding storage capacities per cell and 1MB data

2. Word Line (String)

When we connect multiple floating gate transistors (cells) in series, they form a string. The control gates of cells in the same row across strings are tied together by a word line. You might have heard that a page is the smallest unit that can be written or read inside NAND flash; in practice, a page corresponds to the cells that share one word line. For the controller to read or write, a string is switched on through its common bit line, and each cell along it is then addressed with the help of a word line.

The cells inside a page share a common word line. But because the controller times the switching precisely, we end up with individual control over each specific cell.

Storing the data in a cell

Now we have multiple cells connected in a row, with the bit line connected to their drains and the word line connected to their gates.

To store data (program a cell), we apply a voltage to the word line. The charge ends up stored on the floating gate, trapped between the two insulating layers around it.

Imagine a TLC SSD. We want to store this series of binary bits: “0 1 1 0 1 0 1 0 1 1 1 0 1 0 1”. We take 5 cells and set their voltage levels to represent “0 1 1”, “0 1 0”, “1 0 1”, “1 1 0”, and “1 0 1”.

We just stored 15 bits inside just 5 cells. If we had to do the same thing in an SLC SSD, it would have consumed 15 cells. In QLC, we can store this series in just 4 cells.
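To make the packing concrete, here is a small, purely illustrative Python sketch (the pack_bits helper is made up for this article, and the integer “levels” merely stand in for the carefully spaced threshold-voltage windows a real drive would use):

# Pack a bit stream into cells of 1, 3, or 4 bits (SLC, TLC, QLC).
def pack_bits(bits: str, bits_per_cell: int) -> list[int]:
    bits = bits.replace(" ", "")
    # Pad the tail so the last cell is fully specified (an assumption).
    if len(bits) % bits_per_cell:
        bits += "1" * (bits_per_cell - len(bits) % bits_per_cell)
    return [int(bits[i:i + bits_per_cell], 2)
            for i in range(0, len(bits), bits_per_cell)]

data = "0 1 1 0 1 0 1 0 1 1 1 0 1 0 1"   # the 15 bits from the example
print(len(pack_bits(data, 1)))   # SLC -> 15 cells
print(pack_bits(data, 3))        # TLC -> 5 cells: [3, 2, 5, 6, 5]
print(len(pack_bits(data, 4)))   # QLC -> 4 cells (last cell padded)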

3. Page

After the cell, the next fundamental element of NAND flash is the page. It is a group of cells that together hold 2KB to 16KB. The page, not the cell, is the smallest unit of NAND flash that can be written. So, if the page size is 2KB, a write will consume at least 2KB of space even if the data is smaller than that.

NAND flash doesn't operate on individual cells; working at page granularity saves energy and keeps things straightforward. It also compartmentalizes the flash and makes it easier for the controller to run algorithms like wear leveling and garbage collection.

We could address individual cells as data arrives, but that would make tracking much harder and increase the chance of errors.

Erasing and programming individual cells is error-prone because the operation can disturb nearby cells. Writing a whole page at once reduces this overhead.

A cell must be activated by its control (word) line before data can be written to it or read from it over the bit line. You can imagine a matrix of cells, each sitting at the intersection of a bit line and a control line. To access data in any cell, its control line and bit line must be active at the same time. The same goes for writing.

So, cells connected vertically form strings, and cells connected horizontally form pages. Many of these spread across a flat area make up a layer, and multiple layers together form a block.

The page exists so that the control line and bit line can be synchronized to access a given cell. This makes running the controller's algorithms, and allocating and finding data, much easier.
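Below is a minimal sketch of page-granularity writing (the Page class and the 4 KiB page size are assumptions for illustration): even a tiny write consumes a whole page, and a programmed page cannot be overwritten in place.

PAGE_SIZE = 4096  # bytes; an assumed page size for this sketch

class Page:
    def __init__(self):
        self.data = None          # None = erased (all 1s)

    def program(self, payload: bytes) -> None:
        # NAND pages are write-once until the whole block is erased.
        assert self.data is None, "cannot overwrite a programmed page"
        # The payload is padded out to the full page size.
        self.data = payload.ljust(PAGE_SIZE, b"\xff")

page = Page()
page.program(b"hello")            # 5 bytes of user data...
print(len(page.data))             # ...still occupies 4096 bytes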

reading the data from cells

Writing the data in a cell

By default, all memory cells are set to “1”, i.e. they have no charge stored in their floating gate. So, to write data, we store a charge and turn the cell into a “0”.

If we want to store a series of zeroes and ones, we use multiple cells. Depending on the bit required, the controller either programs a cell or leaves it as it is. Each individual cell is enabled by turning on its gate voltage, and charge is then injected into the gate through the process of tunneling.

Reading the data from a cell

Reading the data works by a similar mechanism. To check whether a cell holds a 0 or a 1, its threshold voltage is examined. If the cell is charged, i.e. it holds a logical 0, the threshold voltage is higher. Conversely, the threshold voltage is lower when there is no charge stored inside the cell.

There is separate sensing circuitry to check these voltage levels and determine the state of each individual cell. This happens at a very high pace, driven by controller clocks in the region of 1 GHz.
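Here is an illustration-only Python sketch of that comparison logic (the reference voltages are invented numbers; only the thresholding idea matters). An SLC read needs a single reference voltage, while a TLC read needs seven references to tell its 8 levels apart.

SLC_REFERENCE_V = 2.0   # assumed reference voltage

def read_slc(threshold_v: float) -> int:
    # A charged (programmed) cell has the higher threshold voltage -> 0.
    return 0 if threshold_v > SLC_REFERENCE_V else 1

# Three bits per cell require 8 distinguishable levels, so the sense
# circuitry compares against several reference voltages instead of one.
TLC_REFERENCES = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]   # assumed values

def read_tlc(threshold_v: float) -> int:
    return sum(threshold_v > ref for ref in TLC_REFERENCES)  # level 0..7

print(read_slc(2.8))   # 0 -> programmed
print(read_slc(1.1))   # 1 -> erased
print(read_tlc(2.2))   # 4 -> the cell sits at level 4 of 8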

Word lines select the pages inside the blocks, while bit lines carry the data in and out. A 1TB NAND flash chip can contain hundreds of thousands of blocks and tens of millions of pages. As for speed, it all comes down to the clock speed of the controller and the compact design of the NAND flash chips.

Working with the System

PCIe NVMe SSDs connect to the CPU directly through PCIe lanes. This lets the CPU fetch data from the drive at high speed with low latency; the CPU can then place the data into RAM if required. In the case of SATA devices, such as SATA SSDs and hard drives, the data has to go through the chipset, so the CPU has no direct path to the drive.

Working of NVMe SSDs vs working of SATA drives with the CPU

Note: The SATA SSDs connect to the CPU just like the older hard drives. The path goes through the chipset. This makes them slower than the PCIe NVMe drives.

SSD Controller and its importance

SSDs have their own controller for interacting with the system and for running various algorithms that keep the drive efficient and fast. They generally have their own DRAM, which helps with mapping, write buffering, wear leveling, latency reduction, and various other things.

So, whenever there is an incoming command from the CPU to store or fetch data, it goes through the SSD's controller. The controller handles the data-mapping and housekeeping work that would otherwise have to be done by the host CPU.

The controller runs various software and algorithms for wear leveling, garbage collection, bad block management, TRIM, encryption, caching, interface management, etc.
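As one tiny example of what such an algorithm does, here is an illustrative Python sketch of the core idea behind wear leveling (the block numbers and erase counts are made-up values): when a free block is needed for new data, prefer the block that has been erased the fewest times so wear spreads evenly across the flash.

erase_counts = {0: 12, 1: 3, 2: 7, 3: 3, 4: 25}   # block -> erase count
free_blocks = {1, 2, 4}                           # blocks currently free

def pick_block_for_write() -> int:
    # Choose the least-worn block among the free ones.
    return min(free_blocks, key=lambda b: erase_counts[b])

block = pick_block_for_write()
erase_counts[block] += 1          # the chosen block gets erased/programmed
print(block)                      # -> 1 (lowest wear among the free blocks)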

SSD Memory and Its Importance

Some SSDs come with dedicated DRAM chips. Because DRAM is much faster than NAND flash, it is used to hold the mapping tables for the data. This helps reach higher read and write speeds by speeding up the translation of logical addresses into physical addresses. These mapping tables are, of course, also persisted to the non-volatile NAND flash, and a small slice of the total NAND flash is set aside for them.
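A highly simplified Python sketch of such a logical-to-physical table is shown below (the l2p dictionary, the write/read helpers, and the block/page counts are assumptions for illustration, not how any particular firmware works):

l2p = {}   # logical block address -> physical (block, page)
next_free = ((blk, pg) for blk in range(1024) for pg in range(256))

def write(lba: int, payload: bytes) -> None:
    # NAND cannot be overwritten in place, so every write lands on a
    # fresh page and the table is updated; the old page becomes stale
    # and is reclaimed later by garbage collection.
    l2p[lba] = next(next_free)

def read(lba: int):
    return l2p.get(lba)           # fast lookup thanks to DRAM

write(42, b"some data")
write(42, b"newer data")          # remapped to a new physical page
print(read(42))                   # -> (0, 1), the latest location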

DRAM is also used for caching because it has lower latency and higher bandwidth than the flash, so when multiple requests come from the host, access is faster. The same applies to writes into the drive. However, as the cache fills up, the high write speed drops back to the native speed offered by the NAND flash.
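A back-of-the-envelope Python model of that throttling behavior follows (the cache size and the two speeds are invented numbers, not the specification of any real drive):

CACHE_GB = 8          # assumed cache capacity
CACHE_SPEED = 5.0     # GB/s while the cache has room (assumed)
NAND_SPEED = 1.5      # GB/s once the cache is full (assumed)

def write_time(total_gb: float) -> float:
    cached = min(total_gb, CACHE_GB)      # portion absorbed at cache speed
    direct = total_gb - cached            # remainder at native NAND speed
    return cached / CACHE_SPEED + direct / NAND_SPEED

print(round(write_time(4), 2))    # 0.8  -> fits in the cache, full speed
print(round(write_time(40), 2))   # 22.93 -> most of the transfer throttled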

SSD Interface and its importance

The SSD interface determines how the SSD connects to the system. Modern high-speed NVMe drives use the PCIe interface, as we discussed earlier, but there are other interfaces such as SATA and, in very old systems, PATA.

SATA is slower, limited to roughly 600 MB/s, while PCIe drives are much faster. The NVMe protocol plays a big role here, enabling these drives to work efficiently and in parallel. NVMe is a software layer that supports far more and far deeper command queues than the older AHCI protocol used with SATA (up to 65,535 queues versus a single queue of 32 commands). When PCIe and NVMe are combined, the system can exploit the full potential of NAND flash storage technology.

Also, as PCIe generations move forward, new SSDs keep arriving with huge read/write speeds, e.g., sequential reads above 10 GB/s on PCIe 5.0 drives.
