Massively parallel

Massively parallel is the term for using a large number of computer processors (or separate computers) to perform a set of coordinated computations simultaneously. GPUs are massively parallel architectures, typically running tens of thousands of threads at once.
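
To make the scale concrete, here is a minimal CUDA sketch (an invented illustration, not drawn from the article or any particular vendor example) that launches 65,536 threads, one per vector element; the grid and block sizes are arbitrary choices for the example.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each of the 65,536 threads handles exactly one element.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 16;                  // 65,536 elements
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // 256 blocks x 256 threads = 65,536 threads in flight.
        vecAdd<<<256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        std::printf("c[0] = %f\n", c[0]);       // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }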

One approach is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. An example is BOINC, a volunteer-based, opportunistic grid system in which the grid provides computing power only on a best-effort basis.
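
The shape of such a system can be sketched in ordinary C++. This is a hypothetical, much-simplified model, not the BOINC API: a central queue of independent work units, and "volunteer" workers that pull from it only when they happen to be available.

    #include <atomic>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    std::queue<int> workUnits;        // independent work units
    std::mutex qMutex;
    std::atomic<long> total{0};

    // A volunteer drains work units until none remain, then leaves.
    void volunteer() {
        for (;;) {
            int unit;
            {
                std::lock_guard<std::mutex> lock(qMutex);
                if (workUnits.empty()) return;
                unit = workUnits.front();
                workUnits.pop();
            }
            total += static_cast<long>(unit) * unit;  // stand-in for real computation
        }
    }

    int main() {
        for (int i = 1; i <= 1000; ++i) workUnits.push(i);
        // In a real grid the number of volunteers varies over time;
        // here we simply spawn four.
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i) pool.emplace_back(volunteer);
        for (auto& t : pool) t.join();
        std::printf("sum of squares = %ld\n", total.load());
        return 0;
    }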

Another approach is grouping many processors in close proximity to each other, as in a computer cluster. In such a centralized system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.
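
On such clusters the interconnect is typically programmed through a message-passing interface such as MPI. A minimal sketch follows (the problem and work split are invented for illustration): each process sums a strided share of the input, and MPI_Allreduce combines the partial results across the interconnect.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long local = 0;
        for (long i = rank; i < 1000000; i += size)   // strided share of the work
            local += i;

        long global = 0;
        // Partial sums cross the interconnect and are combined on every node.
        MPI_Allreduce(&local, &global, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) std::printf("total = %ld\n", global);
        MPI_Finalize();
        return 0;
    }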

The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing many processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
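
The channel-passing style can be illustrated in plain C++; the Channel class below is a hypothetical stand-in for an MPPA's hardware channels, with two threads playing the role of two processor cores.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A blocking FIFO channel between two "cores".
    template <typename T>
    class Channel {
        std::queue<T> buf;
        std::mutex m;
        std::condition_variable cv;
    public:
        void send(T v) {
            { std::lock_guard<std::mutex> lock(m); buf.push(v); }
            cv.notify_one();
        }
        T recv() {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !buf.empty(); });
            T v = buf.front(); buf.pop();
            return v;
        }
    };

    int main() {
        Channel<int> ch;
        // One "core" passes work to the next through the channel.
        std::thread producer([&] { for (int i = 0; i < 5; ++i) ch.send(i * i); });
        std::thread consumer([&] { for (int i = 0; i < 5; ++i) std::printf("got %d\n", ch.recv()); });
        producer.join();
        consumer.join();
        return 0;
    }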

The Goodyear MPP was an early implementation of a massively parallel computer architecture. As of November 2013, MPP architectures were the second most common supercomputer implementation, after clusters.

Data warehouse appliances such as Teradata, Netezza, and Microsoft's Parallel Data Warehouse (PDW) commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.
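
The underlying idea, often called shared-nothing processing, can be sketched in a few lines of C++ (the table, partitioning, and query are invented for illustration): each "node" scans only its own horizontal partition, and a coordinator merges the partial results.

    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // Each inner vector stands in for the rows stored on one node.
        std::vector<std::vector<int>> partitions = {
            {10, 20, 30}, {5, 15}, {40, 50, 60, 70}
        };
        std::vector<long> partial(partitions.size(), 0);

        // Every node scans its own partition in parallel.
        std::vector<std::thread> nodes;
        for (size_t p = 0; p < partitions.size(); ++p)
            nodes.emplace_back([&, p] {
                partial[p] = std::accumulate(partitions[p].begin(),
                                             partitions[p].end(), 0L);
            });
        for (auto& t : nodes) t.join();

        // Coordinator combines the partial aggregates.
        long totalSum = std::accumulate(partial.begin(), partial.end(), 0L);
        std::printf("SELECT SUM(x) -> %ld\n", totalSum);   // 300
        return 0;
    }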
