Margining and Calibration for Fun and Profit

Bill Laumeister, Strategic Applications Engineer

This application note presents an overview of electronic margining and its value in detecting potential system failures before a product ships from the factory. It is a calibration method that predicts failures and allows adjustments that improve product quality. Margining can also be used to sort products into performance levels, allowing premium products to be sold at premium prices. We discuss the downside of sorting and suggest alternative ways to segregate products.

Introduction
The word “margin” has multiple meanings. A common type of margin, the space around the printed text on a page, can be seen on clay tablets from over 5000 years ago. One of the scariest definitions of the word relates to buying stock on margin (borrowing money from a broker using the stock purchase as collateral). Prior to the 1929 U.S. Stock Market Crash, one could borrow up to 90% of the stock's value. If the stock's value dropped, a “margin call” required one to pay money to retain stock ownership. If the margin call was not met, the stock would be sold and the investor might lose all his money. Today, margin buying is limited to a much smaller percentage.

We, however, will concentrate on margining in the computer and electronics industries. When the first microprocessors were mounted on motherboards to make computers, many, notably gamers and modders (modifiers), wanted faster speeds. Thus, “speed margining” was born.

Speed margining, also known as “pushing” or “overclocking,” is changing your computer's system hardware settings to operate at a speed higher than the manufacturer's rating. This can be done at various points in the system, usually by adjusting the speed of the CPU, memory, video card, or motherboard bus speed.

Often, chips are trimmed and tested by the manufacturer to determine at what speed they fail. They are then rated at a speed one step lower than this. IC manufacturers will try to make their product as fast as possible, because faster hardware sells for more money. Statistically, some ICs in a wafer may be able to run at higher speeds than others. Each chip is tested to see how fast it will run, and the ones that run faster are labeled as “higher speed.” Because the tests are quite rigorous and conservative (because parts must be guaranteed to operate at a minimum speed), gamers thought it would be possible to push the CPU slightly faster than its rating, while preserving stability in the system.
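The speed-binning process described above can be sketched in code. This is a hypothetical illustration only; the grade ladder and the measured values are invented for the example, not taken from any actual product line:

```python
# Hypothetical speed-binning sketch: each die is tested for its maximum
# stable clock, then rated at the highest grade it is guaranteed to meet,
# which is one step below the first grade at which it fails.
SPEED_GRADES_MHZ = [2000, 2400, 2800, 3200, 3600]  # assumed grade ladder

def bin_die(max_stable_mhz):
    """Return the highest guaranteed grade at or below the measured limit,
    or None if the die fails even the lowest grade."""
    rating = None
    for grade in SPEED_GRADES_MHZ:
        if max_stable_mhz >= grade:
            rating = grade   # die passed at this grade; keep climbing
        else:
            break            # first failure: stop, keep the previous grade
    return rating

# Example: a die that ran reliably up to 3000 MHz is sold as 2800 MHz,
# leaving ~200 MHz of headroom that an overclocker might try to reclaim.
print(bin_die(3000))  # -> 2800
```

The gap between a die's true limit (3000 MHz here) and its conservative rating (2800 MHz) is exactly the margin that overclockers bet on.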

Overclockers also figured out that some IC manufacturers deliberately underrated chips in order to meet market demand and create differentiation between high-end and low-end products. Occasionally, when manufacturers are short on stock, they package faster chips as slower ones to meet demand. Does overclocking always work? No. However, overclockers try because statistically some succeed.

The Next Step in Margining
As we discussed in the “Why Digital Is Analog” section of application note 4345, “Well Grounded, Digital Is Analog,” digital signals are more tolerant of noise and power-supply variations than analog signals are. This is because of the thresholds inherent in digital devices. Analog signals are corrupted immediately, while a digital device generally keeps functioning as long as the signal stays above or below the critical threshold levels. In a digital system, failure is sudden (the cliff effect), because deteriorations in the signal are initially rejected (by the thresholds) until they grow serious enough to corrupt data. Therefore, it is necessary to test for performance margin to guarantee that the product will operate over its warranty lifetime and in extreme conditions.
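The cliff effect can be demonstrated with a small simulation. The voltage levels and threshold below are assumptions chosen for the example (an ideal 0 V/3 V signal with a 1.5 V decision threshold), not values from the application note:

```python
import random

random.seed(0)

HIGH, LOW = 3.0, 0.0   # assumed ideal logic levels (volts)
THRESHOLD = 1.5        # assumed receiver threshold: read 1 if v > 1.5 V

def bit_error_rate(noise_amplitude, n_bits=10_000):
    """Fraction of bits misread when uniform noise of the given peak
    amplitude is added to an ideal digital signal."""
    errors = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        v = (HIGH if bit else LOW) + random.uniform(-noise_amplitude,
                                                    noise_amplitude)
        received = 1 if v > THRESHOLD else 0
        errors += (received != bit)
    return errors / n_bits

# While the noise stays inside the 1.5 V margin, every bit is read
# correctly; just past the margin, errors appear abruptly -- the cliff.
for amp in (0.5, 1.0, 1.4, 1.6, 2.0):
    print(f"noise +/-{amp:.1f} V -> BER {bit_error_rate(amp):.3f}")
```

Running this shows a bit-error rate of exactly zero for every noise amplitude below the 1.5 V margin, then a sharp jump once the margin is exceeded, which is why a system can pass every functional test one day and fail outright after a small amount of additional drift.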

The full application note is available from Maxim Integrated as a downloadable PDF.
