In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/ AL-gə-ridh-əm) is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks.
An algorithm is an effective method that can be expressed within a finite amount of space and time[1] and in a well-defined formal language[2] for calculating a function.[3] Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, proceeds through a finite[5] number of well-defined successive states, eventually producing “output”[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[7]
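To make these properties concrete, the minimal sketch below implements Euclid's algorithm (mentioned in the next paragraph) in Python. It is an illustrative example rather than part of any formal definition: the input pair of integers is the initial state, each loop iteration is a well-defined transition to a successor state, and because the second component strictly decreases the procedure terminates after finitely many steps, producing an output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an unambiguous, terminating procedure.

    Initial state: the input pair (a, b).
    Each loop iteration is a well-defined transition to a new state;
    the second component strictly decreases, so a final state is
    reached after a finite number of steps.
    """
    a, b = abs(a), abs(b)
    while b != 0:          # state transition: (a, b) -> (b, a mod b)
        a, b = b, a % b
    return a               # output produced at the final state


print(gcd(1071, 462))  # -> 21
```

This deterministic example contrasts with the randomized algorithms noted above, whose state transitions also depend on random input.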
The concept of an algorithm has existed for centuries, and its use can be ascribed to Greek mathematicians, e.g. the sieve of Eratosthenes and Euclid’s algorithm;[8] the term algorithm itself derives from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as ‘Algoritmi’. A partial formalization of what would become the modern notion of algorithm began with attempts to solve the Entscheidungsproblem (the “decision problem”) posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define “effective calculability”[9] or “effective method”;[10] those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church’s lambda calculus of 1936, Emil Post’s “Formulation 1” of 1936, and Alan Turing’s Turing machines of 1936–37 and 1939.