Predictive failure analysis

Predictive Failure Analysis (PFA) refers to methods intended to predict the imminent failure of systems or components (software or hardware), and potentially to enable mechanisms that avoid or counteract the failure, or to recommend maintenance before failure occurs.

For example, a computer may analyze trends in corrected errors to predict future failures of hardware or memory components and proactively enable mechanisms to avoid them. Predictive Failure Analysis was originally used as a term for a proprietary IBM technology for monitoring the likelihood of hard disk drive failure, although the term is now used generically for a variety of technologies for judging the imminent failure of CPUs, memory and I/O devices. See also first failure data capture.

Disks

IBM introduced the term PFA and its technology in 1992 with reference to its 0662-S1x drive (a 1052 MB Fast-Wide SCSI-2 disk that operated at 5400 rpm).

The technology relies on measuring several key (mainly mechanical) parameters of the drive unit, for example the flying height of the heads. The drive firmware compares the measured parameters against predefined thresholds and evaluates the health status of the drive. If the drive appears likely to fail soon, the firmware sends a notification to the disk controller.
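
The following is a minimal sketch in Python of this kind of threshold evaluation; the parameter names and threshold values are hypothetical illustrations, not IBM's actual firmware logic:

    # Hypothetical measured attributes and predefined failure thresholds.
    THRESHOLDS = {
        "head_flying_height_nm": 15.0,  # flag if flying height drops below this
        "spin_up_time_ms": 9000.0,      # flag if spin-up takes longer than this
        "reallocated_sectors": 50,      # flag if grown-defect count exceeds this
    }

    def drive_likely_to_fail(measured: dict) -> bool:
        """Compare each measured parameter against its predefined threshold."""
        return (
            measured["head_flying_height_nm"] < THRESHOLDS["head_flying_height_nm"]
            or measured["spin_up_time_ms"] > THRESHOLDS["spin_up_time_ms"]
            or measured["reallocated_sectors"] > THRESHOLDS["reallocated_sectors"]
        )

    # The host sees only whether a notification was raised, not the measurements.
    if drive_likely_to_fail({"head_flying_height_nm": 12.0,
                             "spin_up_time_ms": 7200.0,
                             "reallocated_sectors": 3}):
        print("PFA notification: drive predicted to fail")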

The major drawbacks of the technology included:

  • the binary result - the only status visible to the host was the presence or absence of a notification
  • the one-way communication - the drive firmware could send a notification, but the host could not query the drive's measurements

The technology was later merged with IntelliSafe to form the Self-Monitoring, Analysis, and Reporting Technology (SMART).
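
SMART, in contrast, exposes attribute values to the host. As a hedged example, on a Linux system with the smartmontools package installed, a drive's overall health self-assessment can be read with smartctl (the device path /dev/sda is an assumption):

    # Query a drive's SMART health self-assessment via smartmontools' smartctl.
    # Requires appropriate privileges; /dev/sda is an assumed device path.
    import subprocess

    def smart_health(device: str = "/dev/sda") -> str:
        result = subprocess.run(["smartctl", "-H", device],
                                capture_output=True, text=True)
        return result.stdout  # contains a line such as "... test result: PASSED"

    print(smart_health())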

Processor and memory

High counts of intermittent RAM errors corrected by ECC can be predictive of future DIMM failures, so memory pages and CPU caches showing excessive corrected errors can be automatically taken offline before they produce uncorrectable errors. For example, under the Linux operating system the mcelog daemon automatically removes from use memory pages showing excessive corrections, and removes from use processor cores whose caches show excessive correctable errors.
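
The sketch below illustrates the page-retirement idea, assuming the Linux soft-offline sysfs interface; the error-count source and the retirement threshold are hypothetical, and this is not mcelog's actual implementation:

    # Hedged sketch of ECC-driven page retirement (not mcelog itself).
    # Writing a physical address to the soft_offline_page sysfs file asks the
    # kernel to migrate the page's contents elsewhere and stop using the page;
    # this needs root and a kernel built with memory-failure support.

    CORRECTED_ERROR_THRESHOLD = 10  # hypothetical per-page trigger count

    def soft_offline_page(phys_addr: int) -> None:
        with open("/sys/devices/system/memory/soft_offline_page", "w") as f:
            f.write(f"0x{phys_addr:x}")

    def maybe_retire_page(phys_addr: int, corrected_error_count: int) -> None:
        # Retire a page whose corrected-error count exceeds the threshold.
        if corrected_error_count > CORRECTED_ERROR_THRESHOLD:
            soft_offline_page(phys_addr)

    # Example (requires root): maybe_retire_page(0x12340000, 12)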

Optical media

Main article: Optical disc § Surface error scanning

On optical media (CD, DVD and Blu-ray), failures caused by degradation of the media can be predicted, and media of low manufacturing quality can be detected before data loss occurs, by measuring the rate of correctable data errors using software such as QpxTool or Nero DiscSpeed. However, not all vendors and models of optical drives support error scanning.
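
As an illustrative sketch only: a scanning tool reports correctable-error rates (commonly C1/C2 counts for CDs, PIE/PIF counts for DVDs), and rising rates flag a disc before data becomes unreadable; the metric names and thresholds below are hypothetical, not values taken from QpxTool or Nero DiscSpeed:

    # Judge disc health from scanned error rates (hypothetical thresholds).
    def disc_looks_degraded(c1_avg_per_second: float, c2_total: int) -> bool:
        # Rising C1 averages suggest degrading or poorly manufactured media;
        # C2 errors (needing the second correction stage) are a stronger sign.
        return c1_avg_per_second > 10.0 or c2_total > 0

    print(disc_looks_degraded(c1_avg_per_second=2.3, c2_total=0))    # healthy
    print(disc_looks_degraded(c1_avg_per_second=45.0, c2_total=12))  # back it up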

See also

  • First failure data capture