Solid-state-drive life factors for embedded applications

Looking at SSD usage and life span

BY GARY DROSSEL
Western Digital, Lake Forest, CA
http://www.wdc.com

Embedded-systems OEMs need to thoroughly understand the usage model for solid-state drives (SSDs) in their application to get the optimal solution in terms of performance, reliability, capacity, and cost; in short, to choose the right tool for the right job. As SSDs become mainstream alternatives to hard-disk drives (HDDs) in select applications, more options become available, making the evaluation process more difficult.

Engineers select SSDs to match their reliability and rugged-environment requirements. Once used only in military and high-end industrial applications because of their very high cost, SSDs are now common in consumer and commercial applications.

In many industrial applications, overcoming issues such as power anomalies, drive wear-out, data integrity, and security breaches is critical to the selection process. In addition, industrial applications require integrated advanced technologies that let users monitor SSD life in real time and forecast remaining life. Many SSDs are well suited to tackle these issues.

Performance factors affecting SSD life

Multiple performance factors affect SSD life. Application read/write speeds are often critical; depending on the required speed and capacity, an SSD writing 20 Gbytes per day in one application may wear out and fail much faster than one writing 100 Gbytes per day in another.

The randomness and size of the data also have a direct effect on SSD life. Most SSDs are optimized for fast transfer of large sequential files, which minimizes read/write cycles. Small random transfers hurt not only performance, because of the starts and stops of finding, reading, and writing data, but also drive life, because of the heavier write cycles they impose.

The application is what matters. Many database and file-server applications are not as concerned with large sequential speeds as with fast access to small amounts of random data, as when booting an operating system or logging data.

Write duty cycle is a crucial factor, and without optimized technology to increase SSD endurance, write-intensive duty cycles will cause faster drive wear-out and failure. Selecting an SSD capacity beyond the actual data-storage requirement also deeply affects SSD life. Designers need to understand the effects of write amplification: the relationship between the amount of data the host system requests to be written by a specific write command and the amount of data actually written to the NAND media.

Write amplification comes from the basic size mismatch between pages (the smallest unit that can be written to the NAND media) and blocks (the smallest unit that can be erased). The minimum write size from an SSD controller to the NAND is usually the page size, for example 4 Kbytes. Most SSDs erase before writing, so in the worst case a 4-Kbyte write from the host requires a whole erase block (up to 256 Kbytes) to be erased and rewritten. The result is a 256:4, or 64:1, write amplification. In this scenario, writing 64 times the requested data will certainly cause the drive to wear out faster than projected.

Write amplification falls somewhere between perfect (1:1) and the worst case, which is defined as (erase block size ÷ page size):1. The bottom line for OEM designers is to understand their application’s usage model and know how much true data is being written.
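As a rough illustration of how these numbers interact, the sketch below computes the worst-case write-amplification factor and its effect on projected wear-out. The NAND geometry, program/erase rating, and daily write volume are assumed example values, not figures from any particular drive.

#include <stdio.h>

int main(void)
{
    /* Assumed example NAND geometry */
    const double page_size_kb  = 4.0;     /* smallest writable unit (page)  */
    const double block_size_kb = 256.0;   /* smallest erasable unit (block) */

    /* Worst case: every page-sized host write forces a full block erase/rewrite */
    double worst_case_waf = block_size_kb / page_size_kb;   /* 256/4 = 64:1 */

    /* Assumed example usage model and endurance rating */
    const double capacity_gb    = 60.0;     /* drive capacity                 */
    const double pe_cycles      = 10000.0;  /* rated program/erase cycles     */
    const double host_writes_gb = 20.0;     /* host data written per day (GB) */

    /* Generic endurance estimate: total NAND write budget divided by the
       effective (amplified) daily write volume.                            */
    double life_days_ideal = (capacity_gb * pe_cycles) / host_writes_gb;
    double life_days_worst = (capacity_gb * pe_cycles) / (host_writes_gb * worst_case_waf);

    printf("Worst-case write amplification: %.0f:1\n", worst_case_waf);
    printf("Projected life at 1:1 WAF      : %.1f years\n", life_days_ideal / 365.0);
    printf("Projected life at worst case   : %.1f years\n", life_days_worst / 365.0);
    return 0;
}

With these assumed values, the same daily host workload wears the drive out roughly 64 times faster at worst-case write amplification than at 1:1, which is why knowing the true amount of data written matters.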

Understanding the usage model and application is key

The right SSD for the right job boils down to a designer’s understanding of an application’s usage model. Write-intensive applications that run 24/7 have rigorous standards that require advanced storage technology and sophisticated management algorithms to meet performance and lifespan requirements. In contrast, read-intensive consumer or commercial applications may just need the lowest cost per gigabyte.

SSD life factors

SSD life (in years) is the product of what the SSD controls (SSD technology) and what the OEM controls (usage model).

Fig. 1. SSD life factors.

Taking this equation concept further, Western Digital has developed LifeEST, a new metric that determines the number of “write years per Gbyte” the SSD can achieve. This can be verified by the following:

Fig. 2. SSD life estimation.

For example, a LifeEST value of 0.1 suggests that a 60-Gbyte SSD would last a minimum of 6 years, assuming the SSD operates at maximum performance with a 100% write duty cycle (never reading, never idle). This represents a worst-case scenario. Product life is directly proportional to capacity and inversely proportional to write duty cycle:

Fig. 3. Life estimation based on usage mode.

In this scenario, a 30-Gbyte SSD would have a minimum lifespan of 3 years and a 120-Gbyte SSD would last at least 12. If the application writes data one-third of the time, reads data another third, and does other things the remaining third, the 30-Gbyte SSD would last at least 9 years. The usage model, again, is the key factor in estimating an SSD’s usable life.
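The arithmetic behind these figures follows directly from the proportionalities described above (life scales with LifeEST and capacity, and inversely with write duty cycle). The sketch below is illustrative only and is not Western Digital’s published formula:

#include <stdio.h>

/* Minimum-life estimate from the proportionalities described above:
   life_est is in "write years per Gbyte", capacity is in Gbytes, and
   write_duty_cycle is the fraction of time spent writing (0 to 1). */
static double estimate_life_years(double life_est, double capacity_gb,
                                  double write_duty_cycle)
{
    return (life_est * capacity_gb) / write_duty_cycle;
}

int main(void)
{
    const double life_est = 0.1;   /* write years per Gbyte (example value) */

    printf(" 60 GB, 100%% writes: %.0f years\n", estimate_life_years(life_est,  60.0, 1.0));
    printf(" 30 GB, 100%% writes: %.0f years\n", estimate_life_years(life_est,  30.0, 1.0));
    printf("120 GB, 100%% writes: %.0f years\n", estimate_life_years(life_est, 120.0, 1.0));
    printf(" 30 GB,  33%% writes: %.0f years\n", estimate_life_years(life_est,  30.0, 1.0 / 3.0));
    return 0;
}

The output reproduces the 6-, 3-, 12-, and 9-year figures quoted above.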

Performance and life can work against each other. Decreasing the speed or minimizing write amplification can increase useful life.

Even though SSD life depends so heavily on usage models and workloads, there is currently no industry-standard way to calculate it. LifeEST is one means of measuring SSD technology against usage model. Other methods rate SSD life based on gigabytes read and written per day; others specify the number of erase cycles per block; and still others confuse endurance with mean time between failures (MTBF). Industry trade associations, including JEDEC JC-64 and SNIA’s Solid-State Storage Initiative (SSSI), have taken action to standardize this significant issue.

Using SMART

The ATA SMART (Self-Monitoring, Analysis and Reporting Technology) command, which monitors parameters such as power-on hours and mechanical wear and tear that might point to a mechanical failure in an HDD, has been available for quite some time but has not been widely implemented. For SSDs, many factors in the SMART command simply do not apply. Along with JEDEC, the T13 standards body (T13.org) is considering new commands to pinpoint endurance-related issues for SSDs.
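As a rough sketch of how a host might consume endurance-related reporting, the example below projects remaining life from a reported percentage of life used and the elapsed operating time. The attribute and values are hypothetical; real drives expose wear data through vendor-specific SMART attributes or dedicated endurance commands.

#include <stdio.h>

/* Hypothetical example: project remaining service life from a drive's
   reported "percent life used" counter and its power-on time, assuming
   wear continues at roughly the observed average rate.                 */
static double remaining_life_years(double percent_life_used,
                                   double power_on_hours)
{
    if (percent_life_used <= 0.0)
        return -1.0;                 /* too little wear to extrapolate */

    double years_elapsed = power_on_hours / (24.0 * 365.0);
    double total_years   = years_elapsed * (100.0 / percent_life_used);
    return total_years - years_elapsed;
}

int main(void)
{
    /* Example values a monitoring agent might have polled from the SSD */
    double remaining = remaining_life_years(12.5, 8760.0); /* 12.5% used after ~1 year */
    printf("Projected remaining life: %.1f years\n", remaining);
    return 0;
}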

Self-monitoring advanced storage technologies for SSDs are now available and have several benefits. The host system can poll the SSD in real time, with no application downtime, and log live feedback on drive usage and remaining usable life, preventing unscheduled downtime and allowing network administrators to perform field maintenance on their own timetable. Self-monitoring technologies also aid the qualification process. ■
