Parallel computing is a form of computation in which many calculations are carried out simultaneously,[1] operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel").
There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism.
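To make the divide-and-solve-concurrently idea concrete, the following minimal sketch in Go illustrates data parallelism: a large summation is split into chunks, each chunk is summed by its own goroutine, and the partial results are then combined. The function name parallelSum and the worker count are illustrative assumptions, not drawn from any particular library.

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelSum divides data into roughly equal chunks and sums each
    // chunk in its own goroutine (one subtask per worker).
    func parallelSum(data []int, workers int) int {
        partial := make([]int, workers) // one partial sum per worker
        var wg sync.WaitGroup
        chunk := (len(data) + workers - 1) / workers

        for w := 0; w < workers; w++ {
            lo := w * chunk
            hi := lo + chunk
            if lo >= len(data) {
                break
            }
            if hi > len(data) {
                hi = len(data)
            }
            wg.Add(1)
            go func(w, lo, hi int) { // each goroutine solves one sub-problem
                defer wg.Done()
                for _, v := range data[lo:hi] {
                    partial[w] += v
                }
            }(w, lo, hi)
        }
        wg.Wait() // wait for all sub-problems to finish

        total := 0
        for _, p := range partial {
            total += p // combine the concurrent partial results
        }
        return total
    }

    func main() {
        data := make([]int, 1000)
        for i := range data {
            data[i] = i + 1
        }
        fmt.Println(parallelSum(data, 4)) // prints 500500
    }

Because each goroutine writes only to its own slot of the partial slice, the subtasks need no coordination beyond the final wait; how cleanly a problem decomposes this way largely determines how well it parallelizes.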
Parallelism has been employed for many years, mainly in high-performance computing, but its use has grown greatly in recent years due to the physical constraints preventing further frequency scaling. Parallel computing has since become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[2] However, in recent years, power consumption by parallel computers has become a concern.[3]
Parallel computers can be classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements inside a single machine, while clusters, blades, massively parallel processors (MPPs), and grids use multiple computers to work on the same task.
Parallel computer programs are more difficult to write than sequential ones,[4] because concurrency introduces several new classes of potential software bugs, of which race conditions and deadlocks are the most common. Many parallel programming languages have been created to simplify the programming of parallel computers, but communication and synchronization between the different subtasks remain among the greatest obstacles to achieving good parallel program performance.
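The following minimal sketch, again in Go, shows the most common of these bugs: a race condition on a shared counter incremented by several goroutines. The counter and the loop sizes are illustrative assumptions; the conventional fix shown here is to serialize the increments with a mutex.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            counter int        // shared state, touched by every goroutine
            mu      sync.Mutex // guards counter
            wg      sync.WaitGroup
        )

        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock()   // without this lock, counter++ is a data race:
                    counter++   // the read-modify-write steps of different
                    mu.Unlock() // goroutines can interleave and lose updates
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // always 4000 with the mutex; unpredictable without it
    }

The performance obstacle mentioned above is visible even in this toy example: the mutex makes the result correct, but it also forces the goroutines to take turns, so over-synchronizing a parallel program can erase the very speedup that parallelism was meant to provide.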