Bfloat16 floating-point format

The bfloat16 (brain floating point)[1][2] floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. The format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32), intended to accelerate machine learning and near-sensor computing.[3] It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining all 8 exponent bits of binary32, but supports only 8 bits of significand precision (7 explicitly stored fraction bits plus one implicit bit) rather than the 24-bit significand of binary32. Bfloat16 numbers are even less suitable for integer calculations than single-precision 32-bit floating-point numbers, but this is not their intended use: the format exists to reduce the storage requirements and increase the calculation speed of machine learning algorithms.[4]
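The relationship to binary32 can be made concrete with a small sketch (illustrative Python using only the standard library; the function names are invented for this example, not taken from any particular library). It converts a binary32 value to bfloat16 by truncating the low 16 bits of its bit pattern and widens it back; truncation is the simplest conversion, and hardware implementations typically round to nearest even instead.

    import struct

    def float32_to_bfloat16_bits(x):
        # Reinterpret the binary32 value as a 32-bit unsigned integer.
        (bits,) = struct.unpack("<I", struct.pack("<f", x))
        # bfloat16 keeps the sign bit, all 8 exponent bits, and the top 7
        # fraction bits; dropping the low 16 bits rounds toward zero.
        return bits >> 16

    def bfloat16_bits_to_float32(b):
        # Widening appends 16 zero bits, so the conversion back is exact.
        (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
        return x

    print(hex(float32_to_bfloat16_bits(1.0)))                           # 0x3f80
    print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159)))  # 3.140625

Because the exponent field is unchanged, every normal binary32 value maps to a bfloat16 value of the same order of magnitude; only significand precision is lost.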

The bfloat16 format was developed by Google Brain, an artificial intelligence research group at Google. It is used in many CPUs, GPUs, and AI processors, such as Intel Xeon processors (via the AVX-512 BF16 extensions), Intel Data Center GPU, Intel Nervana NNP-L1000, Intel FPGAs,[5][6][7] AMD Zen, AMD Instinct, NVIDIA GPUs, Google Cloud TPUs,[8][9][10] AWS Inferentia, AWS Trainium, processors implementing ARMv8.6-A,[11] and Apple's M2[12] (and therefore A15 chips and later). Many libraries support bfloat16, such as CUDA,[13] Intel oneAPI Math Kernel Library, AMD ROCm,[14] AMD Optimizing CPU Libraries, PyTorch, and TensorFlow.[10][15] On these platforms, bfloat16 may also be used in mixed-precision arithmetic, in which bfloat16 values are operated on directly and expanded to wider data types where more precision is needed, as in the sketch below.
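The following is a minimal sketch of this mixed-precision pattern, assuming PyTorch (one of the libraries listed above) with CPU autocast support for bfloat16:

    import torch

    # Narrowing to bfloat16 keeps binary32's exponent range, so very large
    # and very small magnitudes survive; precision drops to roughly three
    # significant decimal digits. Widening back to float32 is exact.
    x = torch.tensor([1e30, 3.14159, 1e-30], dtype=torch.float32)
    print(x.to(torch.bfloat16).to(torch.float32))

    # Under autocast, eligible operators such as matrix multiplication run
    # in bfloat16 while other operations remain in float32.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        c = torch.randn(4, 4) @ torch.randn(4, 4)
    print(c.dtype)  # torch.bfloat16

Frameworks typically keep master weights and numerically sensitive reductions in float32 and cast to bfloat16 only for the bulk of the arithmetic, trading precision for throughput and memory bandwidth.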

  1. ^ Teich, Paul (2018-05-10). "Tearing Apart Google's TPU 3.0 AI Coprocessor". The Next Platform. Retrieved 2020-08-11. Google invented its own internal floating point format called "bfloat" for "brain floating point" (after Google Brain).
  2. ^ Wang, Shibo; Kanwar, Pankaj (2019-08-23). "BFloat16: The secret to high performance on Cloud TPUs". Google Cloud. Retrieved 2020-08-11. This custom floating point format is called "Brain Floating Point Format," or "bfloat16" for short. The name flows from "Google Brain", which is an artificial intelligence research group at Google where the idea for this format was conceived.
  3. ^ Tagliavini, Giuseppe; Mach, Stefan; Rossi, Davide; Marongiu, Andrea; Benini, Luca (2018). "A transprecision floating-point platform for ultra-low power computing". 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE). pp. 1051–1056. arXiv:1711.10374. doi:10.23919/DATE.2018.8342167. ISBN 978-3-9819263-0-9. S2CID 5067903.
  4. ^ Ian Cutress (2020-03-17). "Intel's Cooper Lake Plans: Why is BF16 Important?". AnandTech. Retrieved 2020-05-12. The bfloat16 standard is a targeted way of representing numbers that give the range of a full 32-bit number, but in the data size of a 16-bit number, keeping the accuracy close to zero but being a bit more loose with the accuracy near the limits of the standard. The bfloat16 standard has a lot of uses inside machine learning algorithms, by offering better accuracy of values inside the algorithm while affording double the data in any given dataset (or doubling the speed in those calculation sections).
  5. ^ Khari Johnson (2018-05-23). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved 2018-05-23. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  6. ^ Michael Feldman (2018-05-23). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved 2018-05-23. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  7. ^ Lucian Armasu (2018-05-23). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved 2018-05-23. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that's being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  8. ^ "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved 2018-05-23. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  9. ^ Elmar Haußmann (2018-04-26). "Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50". RiseML Blog. Archived from the original on 2018-04-26. Retrieved 2018-05-23. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  10. ^ Tensorflow Authors (2018-07-23). "ResNet-50 using BFloat16 on TPU". Google. Retrieved 2018-11-06.
  11. ^ "BFloat16 extensions for Armv8-A". community.arm.com. 29 August 2019. Retrieved 2019-08-30.
  12. ^ "AArch64: add support for newer Apple CPUs · llvm/llvm-project@677da09". GitHub. Retrieved 2023-05-08.
  13. ^ "CUDA Library bloat16 Intrinsics".
  14. ^ "ROCm version history". github.com. Retrieved 2019-10-23.
  15. ^ Dillon, Joshua V.; Langmore, Ian; Tran, Dustin; Brevdo, Eugene; Vasudevan, Srinivas; Moore, Dave; Patton, Brian; Alemi, Alex; Hoffman, Matt; Saurous, Rif A. (2017-11-28). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Retrieved 2018-05-23. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts.
