Turing’s main contributions to computer science: 1. proposing the concept of the Turing test; 2. inventing the Turing machine; 3. originating the idea of artificial intelligence; 4. pioneering mathematical biology; 5. settling the decision problem.
Turing’s main contributions to computer science:
1. Proposed the concept of the "Turing test"
"Turing test" refers to the situation where the tester and the subject (a person and a machine) are separated, and the subject is asked random questions through some devices (such as a keyboard) .
After multiple tests, if more than 30% of the testers cannot determine whether the person being tested is a human or a machine, then the machine will pass the test and be considered to have human intelligence.
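As a minimal sketch of this pass criterion (not from the original article), assuming only a list of tester verdicts; the function name, data, and threshold handling here are illustrative:

```python
# Minimal sketch of the pass criterion described above (illustrative only).
# Each entry records whether one tester was fooled, i.e. could not tell
# whether the hidden respondent was a human or a machine.

def passes_turing_test(fooled_flags, threshold=0.30):
    """Return True if the share of fooled testers exceeds the threshold."""
    return sum(fooled_flags) / len(fooled_flags) > threshold

# Hypothetical example: 10 testers, 4 of whom could not tell.
verdicts = [True, False, True, False, True, False, False, True, False, False]
print(passes_turing_test(verdicts))  # True, since 40% exceeds 30%
```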
The term comes from the 1950 paper "Computing Machinery and Intelligence" by Alan Mathison Turing, a pioneer of computer science and cryptography; the 30% figure was Turing's prediction of what machines would be able to achieve by the year 2000, a prediction that machines have so far fallen well short of.
Turing predicted that by the end of the 20th century there would be computers capable of passing the "Turing test". On June 7, 2014, at the "Turing Test 2014" event held at the Royal Society, the organizer, the University of Reading, issued a press release claiming that the artificial intelligence program Eugene Goostman, created by the Russian-born programmer Vladimir Veselov, had passed the Turing test.
Although the "Eugene" program is far from being able to "think", this is still regarded as a landmark event in the history of artificial intelligence and of computing.
2. Turing machine
The Turing machine was proposed by Turing in 1936. It is a precise, general model of computation that can simulate all of the computational behavior of an actual computer.
A Turing machine is an abstract machine. It has an infinitely long tape divided into cells, each of which carries a symbol (often pictured as a color), and a read/write head that moves back and forth along the tape.
The head has a finite set of internal states and a fixed program. At each step it reads the symbol in the current cell, looks up its program table according to that symbol and its internal state, writes a symbol back to the cell as the program dictates, switches to a new internal state, and then moves one cell left or right.
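To make the tape/head/state description concrete, here is a minimal Python sketch of a Turing machine simulator; the transition-table format and the example machine (which appends a 1 to a unary number) are illustrative choices, not Turing's original formulation:

```python
from collections import defaultdict

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """program maps (state, symbol) -> (new_symbol, move, new_state);
    move is -1 (left) or +1 (right). The machine stops in state 'halt'."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # infinite tape, blank by default
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head]                          # read the current cell
        new_symbol, move, state = program[(state, symbol)]  # look up the program table
        cells[head] = new_symbol                      # write, switch state, then move
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1))

# Example machine: scan right over a unary number and append one more '1'.
program = {
    ("start", "1"): ("1", +1, "start"),   # keep moving right over the 1s
    ("start", "_"): ("1", +1, "halt"),    # write a 1 at the first blank, then halt
}
print(run_turing_machine(program, "111"))  # -> '1111'
```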
3. Artificial Intelligence
In 1949, Turing became deputy director of the Computing Machine Laboratory at the University of Manchester, where he worked on the software for the Manchester Mark 1, one of the earliest stored-program computers.
Turing's article was reprinted in 1956 under the title "Can Machines Think?", by which time artificial intelligence had entered a stage of practical development. Turing's conception of machine intelligence is undoubtedly one of the direct origins of artificial intelligence.
As research in artificial intelligence has deepened, the profundity of Turing's ideas has become ever more apparent: they remain among the guiding ideas of the field today.
4. Mathematical biology
From 1952 until his death, Turing continued to conduct research in mathematical biology. He published a paper "The Chemical Basis of Morphogenesis" in 1952.
His main interest was Fibonacci phyllotaxis, the appearance of Fibonacci numbers in plant structures, which he studied using reaction-diffusion equations that are now central to the field of pattern formation. His later papers on the subject went unpublished, and only saw the light of day with the release of his collected works in 1992.
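As a rough sketch of the reaction-diffusion idea (two substances that diffuse at different rates while reacting, so that a uniform state can become unstable and spatial structure emerges), here is a minimal 1D simulation in Python; the particular kinetics (Gray-Scott) and the parameter values are illustrative assumptions, not taken from Turing's 1952 paper, and whether a persistent pattern forms depends on the parameters chosen:

```python
import numpy as np

# Minimal 1D Gray-Scott reaction-diffusion sketch (illustrative parameters).
n, steps = 200, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060    # diffusion rates, feed rate, kill rate
u, v = np.ones(n), np.zeros(n)
u[90:110], v[90:110] = 0.50, 0.25          # a small local perturbation seeds the dynamics

def laplacian(a):
    # Discrete 1D Laplacian with periodic boundary conditions.
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):                     # explicit Euler time stepping
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# Inspect the final v profile; a non-uniform profile reflects the diffusion-driven
# instability Turing described (the outcome varies with F, k and the run length).
print(np.round(v[::20], 3))
```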
5. Decision problem
In 1937, Turing used his machine model to settle Hilbert's famous decision problem (the Entscheidungsproblem): the problem of deciding the satisfiability of formulas in the narrow predicate calculus, also known as first-order logic.
He encoded Turing machines as formulas of first-order logic, and then derived the undecidability of first-order logic from the undecidability of the Turing machine halting problem. The encoding method he created became one of the main techniques later used to prove undecidability results about first-order formulas.
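The undecidability of the halting problem, on which this reduction rests, can itself be sketched via the classic diagonal argument. The following Python fragment is a hypothetical illustration: the stub decider stands in for any claimed total decider, and the construction defeats it.

```python
# Sketch of the diagonal argument behind the halting problem's undecidability.
# 'claimed_halts' stands in for any alleged total decider; the stub below just
# guesses, and the construction shows why every such decider must be wrong.

def claimed_halts(func, arg):
    """Hypothetical decider: should return True iff func(arg) halts."""
    return True  # a deliberately naive stand-in

def contrary(func):
    """Do the opposite of whatever the decider predicts about func(func)."""
    if claimed_halts(func, func):
        while True:      # loop forever if the decider says "halts"
            pass
    return "halted"      # halt if the decider says "loops forever"

# Whatever claimed_halts answers about contrary(contrary), the actual behaviour
# of contrary(contrary) is the opposite, so no correct decider can exist.
print(claimed_halts(contrary, contrary))  # the stub answers True ...
# ... but then contrary(contrary) would loop forever, contradicting that answer.
```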
Another of Turing's achievements related to the decision problem is the concept of a Turing machine with an external information source (the oracle machine), proposed in 1939, from which the concepts of "Turing reducibility" and relative recursion were derived.
Using reducibility and relative recursion, one can compare how undecidable or non-recursive different problems are. On this basis, E. Post introduced the important concept of degrees of unsolvability, and work in this area subsequently made significant progress.
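To illustrate what an oracle machine and Turing reducibility mean, here is a small hypothetical Python sketch; the oracle is an assumption of the model and cannot actually be implemented, and the example reduction (the complement of the halting problem) is chosen for simplicity:

```python
# Sketch of a Turing machine "with an external information source": the machine
# may freely query an oracle for the halting problem.

def halting_oracle(program, argument):
    """Assumed oracle: answers True iff program(argument) halts (hypothetical)."""
    raise NotImplementedError("oracle access is a modelling assumption")

def never_halts(program, argument):
    # The complement of the halting problem is Turing-reducible to the halting
    # problem: one oracle query plus a negation decides it, so the two problems
    # have the same degree of unsolvability.
    return not halting_oracle(program, argument)
```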