History of artificial intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. Modern AI concepts were later developed by philosophers who attempted to describe human thought as a mechanical manipulation of symbols. This philosophical work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956.[1] Attendees of the workshop would become the leaders of AI, driving research for decades. Many of them predicted that within a generation, machines as intelligent as humans would exist. Governments and private investors provided millions of dollars to make this vision come true.[2]

Eventually, it became obvious that researchers had grossly underestimated the difficulty of the project.[3] In 1974, criticism from James Lighthill and pressure from the U.S. Congress led the U.S. and British governments to stop funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government reinvigorated funding from governments and industry, providing AI with billions of dollars. However, by the late 1980s, investors' enthusiasm waned again, leading to another withdrawal of funds, now known as the "AI winter". During this time, AI was criticized in the press and avoided by industry until the mid-2000s, but research and funding continued to grow under other names.

In the 1990s and early 2000s, advancements in machine learning led to its application to a wide range of academic and industry problems. The success was driven by the availability of powerful computer hardware, the collection of immense data sets and the application of solid mathematical methods. In 2012, deep learning proved to be a breakthrough technology, eclipsing all other methods. The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications. Investment in AI surged in the 2020s.

  1. ^ Kaplan & Haenlein 2018.
  2. ^ Newquist 1994, pp. 143–156.
  3. ^ Newquist 1994, pp. 144–152.
