Parallel computing

Supercomputers are designed to exploit parallelism

Parallel computing is a form of computation in which many calculations are carried out simultaneously ("in parallel"),[1] operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently.
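
As a minimal sketch of this principle in Python (the worker count, chunk sizes, and function name are illustrative assumptions, not part of any standard library API), a large summation is divided into chunks that worker processes solve concurrently, after which the partial results are combined:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker solves one smaller sub-problem independently.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        # Divide the large problem into smaller ones ...
        size = len(data) // n_workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with Pool(n_workers) as pool:
            # ... solve them concurrently ("in parallel") ...
            partials = pool.map(partial_sum, chunks)
        # ... and combine the sub-results.
        print(sum(partials))  # same result as sum(data)

Each chunk is independent, so no worker waits on another; combining the partial sums at the end is the only sequential step.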

There are several different forms of parallel computing:

  1. Bit-level parallelism
  2. Instruction-level parallelism
  3. Data parallelism
  4. Task parallelism
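
The summation sketch above is an instance of data parallelism: the same operation applied to different chunks of the data. Task parallelism, by contrast, runs different operations concurrently. A minimal sketch, again in Python (note that in CPython the global interpreter lock limits true CPU parallelism for threads, so this illustrates the structure of task parallelism rather than a guaranteed speed-up):

    import threading

    data = list(range(1_000_000))
    results = {}

    # Two unrelated operations on the same data; under task parallelism
    # they run concurrently instead of one after the other.
    def find_min():
        results["min"] = min(data)

    def find_max():
        results["max"] = max(data)

    threads = [threading.Thread(target=find_min),
               threading.Thread(target=find_max)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)  # both results, in whichever order the threads finished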

Parallel computing has been used for many years, mainly in high-performance computing, but its use has grown greatly in recent years because physical constraints now prevent further frequency scaling. It has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[2] However, power consumption by parallel computers has also become a concern.[3]

Parallel computers can be classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, blades, massively parallel processors (MPPs), and grids use multiple computers to work on the same task.

Parallel computer programs are more difficult to write than sequential ones,[4] because concurrency introduces several new classes of potential software bugs, of which race conditions and deadlocks are the most common. Many parallel programming languages have been created to simplify the programming of parallel computers, but communication and synchronization between subtasks remain among the greatest obstacles to good parallel program performance.
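
For example, a race condition arises when two threads perform an unsynchronized read-modify-write on shared data, and a lock restores correctness. A minimal sketch in Python (whether the unsynchronized version's lost updates actually appear depends on the interpreter's thread scheduling):

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        # Unsynchronized read-modify-write: two threads can both read the
        # same old value, so one of the two updates is silently lost.
        global counter
        for _ in range(n):
            counter += 1

    def safe_increment(n):
        # Holding the lock makes each update mutually exclusive.
        global counter
        for _ in range(n):
            with lock:
                counter += 1

    threads = [threading.Thread(target=safe_increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # deterministically 400000 with the lock

A deadlock is the complementary failure: two threads each hold one lock while waiting forever for the other's, which is why locks are conventionally acquired in a consistent order.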

  1. Almasi, G.S., and A. Gottlieb (1989). Highly Parallel Computing. Benjamin-Cummings Publishers, Redwood City, CA.
  2. Asanovic, Krste et al. (December 18, 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance ... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."
  3. Asanovic et al.: "Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are 'free'."
  4. Patterson, David A. and John L. Hennessy (1998). Computer Organization and Design, Second Edition, Morgan Kaufmann Publishers, p. 715. ISBN 1558604286.
