Arbitrary-precision arithmetic

In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are potentially limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision.
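The contrast with fixed-precision hardware arithmetic can be sketched in Python, whose built-in integers are arbitrary-precision: masking a result to 64 bits simulates the wraparound a fixed-width ALU register would produce, while the unmasked value stays exact.

```python
# Arbitrary precision: Python ints grow to whatever size memory allows.
exact = 2**64 + 1                # 18446744073709551617, held exactly

# Simulated 64-bit fixed precision: keep only the low 64 bits,
# as a 64-bit ALU register would after overflow.
wrapped = (2**64 + 1) & 0xFFFFFFFFFFFFFFFF   # wraps around to 1

print(exact)    # 18446744073709551617
print(wrapped)  # 1
```

The masked result loses all high-order information, which is exactly the failure mode arbitrary-precision arithmetic avoids.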

Several modern programming languages have built-in support for bignums,[1][2][3][4] and others have libraries available for arbitrary-precision integer and floating-point math. Rather than storing values as a fixed number of bits related to the size of the processor register, these implementations typically use variable-length arrays of digits.
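A minimal sketch of the variable-length digit-array representation mentioned above: the helper below (a hypothetical name, not from any particular library) stores a magnitude as a little-endian list of digits and adds two such magnitudes with schoolbook carry propagation. Real implementations use large machine-word "limbs" rather than base-10 digits, but the structure is the same.

```python
def add_digits(a, b, base=10):
    """Add two non-negative magnitudes stored as little-endian digit lists.

    a = [9, 9, 9] represents 999; the result list grows as needed,
    which is what makes the precision arbitrary.
    """
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, base)
        result.append(digit)
    if carry:
        result.append(carry)   # overflow simply extends the array
    return result

print(add_digits([9, 9, 9], [1]))  # 999 + 1 = 1000 -> [0, 0, 0, 1]
```

Because the result array is one element longer than either input, no information is lost on overflow; the cost is that operation time grows with the number of digits.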

Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required. It should not be confused with the symbolic computation provided by many computer algebra systems, which represent numbers by expressions such as π·sin(2), and can thus represent any computable number with infinite precision.

  1. ^ dotnet-bot. "BigInteger Struct (System.Numerics)". docs.microsoft.com. Retrieved 2022-02-22.
  2. ^ "PEP 237 -- Unifying Long Integers and Integers". Python.org. Retrieved 2022-05-23.
  3. ^ "BigInteger (Java Platform SE 7 )". docs.oracle.com. Retrieved 2022-02-22.
  4. ^ "BigInt - JavaScript | MDN". developer.mozilla.org. Retrieved 2022-02-22.

From Wikipedia, the free encyclopedia