Computation is a familiar concept that most of us understand intuitively. Take the function f(x) = x + 3 as an example. When x is 3, f(3) = 3 + 3. The answer is 6, very simple. Clearly, this function is computable. But other functions are not so simple, and determining whether they can be computed is far from trivial: they may never yield a final answer.
In 1928, the German mathematicians David Hilbert and Wilhelm Ackermann posed a question known as the Entscheidungsproblem (the "decision problem"). Over time, the question they asked would lead to a formal definition of computability, a definition that would enable mathematicians to answer a host of new questions and lay the foundation for theoretical computer science.
That definition came from a 23-year-old graduate student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation but also settled a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing's great insight was to give a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral advisor, Alonzo Church.
A Turing machine is abstract because it does not (and cannot) physically exist as a tangible device. Rather, it is a conceptual model of computation: if this machine can compute a function, then the function is computable.
## When Alan Turing invented the Turing machine in 1936, he also created modern computing.
Alan Turing and his Turing machine

Here is how it works: a Turing machine reads and changes symbols on an infinitely long tape, following a table of rules. The tape is made up of "cells," each of which can store exactly one symbol, and the machine reads and rewrites the contents of the cells with a tape head. Each rule in the table tells the machine what to do based on its current state and the symbol it is currently reading. The machine can reach a final state (an "accept state" or a "reject state"), at which point it halts and accepts or rejects the input. Or it can get stuck in an infinite loop and never stop reading the tape.
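To make the mechanics concrete, here is a minimal sketch of such a machine in Python. It is only an illustration of the idea, not anything from Turing's paper, and it assumes the rule table is a dictionary mapping a (state, symbol) pair to the symbol to write, the direction to move the head, and the next state; all names are invented for the example.

```python
def run(rules, tape, start_state, accept_states, reject_states, max_steps=10_000):
    """Simulate a Turing machine described by a rule table.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "R" or "L".
    """
    tape = list(tape)              # mutable copy of the tape contents
    head = 0                       # position of the tape head
    state = start_state
    for _ in range(max_steps):     # guard against machines that never halt
        if state in accept_states:
            return "accept"
        if state in reject_states:
            return "reject"
        # Treat the tape as unbounded: grow it with blanks as needed.
        if head < 0:
            tape.insert(0, "#")
            head = 0
        elif head >= len(tape):
            tape.append("#")
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "no answer within the step limit"
```

The step limit is only a practical cutoff for the sketch, since, as the article notes, a machine may loop forever and never stop reading the tape.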
The best way to understand a Turing machine is to walk through a simple example. Imagine a Turing machine designed to tell us whether a given input is the number zero. We will give it the input 0001, surrounded by blank symbols (#), so that "#0001#" is the relevant portion of the tape.
The Turing machine starts in an initial state, which we'll call q0, where it reads the leftmost cell of the tape and finds a blank symbol. One rule says: when in state q0, if the symbol is #, leave it unchanged, move one cell to the right, and change the machine's state to q1. After this step, the machine is in state q1 and its head is reading the second symbol, a 0.
Now we look for a rule that applies to these conditions. We find one that says, "Stay in state q1 and move the head one cell to the right." This leaves us in the same situation (in state q1, still reading a 0), so we keep moving right until the head finally reads a different symbol, a 1.
When we consult the rule table again, we find a new rule: "If a 1 is encountered, transition to q2, the reject state." The Turing machine halts, answering "no" to the original question, "Is 0001 zero?"
Conversely, if the input is "#0000#", the Turing machine encounters a # after all those zeros. When we consult the rule table, we find a rule saying that this means the machine enters state q3, the accept state. Now the machine answers "yes" to the question "Is 0000 zero?"
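Using the `run` sketch from earlier, the rule table for this hypothetical zero-checking machine can be written out directly; the state names q0 through q3 follow the walkthrough above.

```python
# Rule table for the zero-checker described above:
# (state, symbol) -> (symbol to write, head move, next state)
zero_checker = {
    ("q0", "#"): ("#", "R", "q1"),   # skip the leading blank
    ("q1", "0"): ("0", "R", "q1"),   # keep scanning past zeros
    ("q1", "1"): ("1", "R", "q2"),   # a 1 means the input is not zero -> reject
    ("q1", "#"): ("#", "R", "q3"),   # only zeros before the trailing blank -> accept
}

print(run(zero_checker, "#0001#", "q0", {"q3"}, {"q2"}))  # reject
print(run(zero_checker, "#0000#", "q0", {"q3"}, {"q2"}))  # accept
```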
Alan Turing helped define computation, algorithms, and Turing machines.
Turing used his abstract machine to build a model of computation in order to answer the Entscheidungsproblem, which formally asks: given a set of mathematical axioms, is there a mechanical process (a set of instructions, which today we would call an algorithm) that can always determine whether a given statement is true?
Suppose we want an algorithm that tells us whether a given chess position is possible. Here, the axioms are the rules of chess that govern legal moves. Can we arrive at the answer by following a finite sequence of step-by-step procedures? Although analyzing some positions might take longer than our lifetimes (an algorithm could generate all possible positions and compare each of them to the input), such algorithms do exist for chess. Therefore, we say that chess is "decidable."
However, in 1936, the American mathematician Alonzo Church and Turing each proved, using different methods, that there is no general way to solve every instance of the Entscheidungsproblem. Some games, for example, such as John Conway's Game of Life, are undecidable: no algorithm can determine whether a given pattern will ever appear starting from an initial pattern.
Turing showed that a function is computable if there exists an algorithm that can perform the required task. At the same time, he showed that an algorithm is a process that can be defined by a Turing machine. A computable function, therefore, is a function that a Turing machine can compute. This may seem like a roundabout way of defining computability, but it is the best we have.
"It's not that you can choose to define it any other way," said Michael Sipser, a theoretical computer scientist at MIT. "I think there's a general consensus that Church - The Turing thesis proposes that the informal concept of an algorithm is anything that can be done by any reasonable model of computation." Other mathematicians have proposed different models of computation that, while superficially looking very different, are actually the same : they can do any computation that a Turing machine can do, and vice versa.
Just a few years after the philosopher, logician, and mathematician Kurt Gödel proved that mathematics is incomplete, Church and Turing showed with this work that some problems in mathematics are undecidable: no algorithm, however sophisticated, can tell us whether the answer is yes or no. Both were devastating blows to Hilbert, who had hoped that mathematics would provide tidy, idealized answers. But perhaps that is for the best: if a general solution to the Entscheidungsproblem existed, it would mean that all problems in mathematics could be reduced to simple mechanical calculations.
In addition to answering these fundamental questions, Turing machines also directly influenced the development of modern computers through a variant called the universal Turing machine. This is a special Turing machine that can simulate any other Turing machine on any input. It can read a description of another Turing machine (its rules and its input tape) and simulate that machine's behavior on its own tape, producing the same output the simulated machine would produce, just as today's computers can read and execute any program.
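In the toy simulator sketched earlier, this idea is loosely visible: the `run` function treats a machine's rule table as ordinary data, so handing it the description of a completely different, invented machine makes it behave like that machine. A true universal Turing machine would encode the other machine's description on its own tape rather than in a Python dictionary, but the spirit is the same.

```python
# A second, invented machine, described purely as data: it flips every bit
# it reads and accepts when it reaches a trailing blank.
flipper = {
    ("s0", "0"): ("1", "R", "s0"),
    ("s0", "1"): ("0", "R", "s0"),
    ("s0", "#"): ("#", "R", "s_accept"),
}

# The same simulator now "executes" this new machine from its description,
# much as a universal Turing machine simulates any machine it is given.
print(run(flipper, "0101#", "s0", {"s_accept"}, set()))  # accept
```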
In 1945, the Hungarian-American mathematician, computer scientist, and physicist John von Neumann proposed a computer architecture, now known as the von Neumann architecture, that made it possible to turn the concept of a universal Turing machine into a real-life machine.
When Sanjeev Arora, a theoretical computer scientist at Princeton University, teaches this concept, he emphasizes the broader philosophical picture. "There are two notions of 'universal,'" he said. "One is that it can run any other Turing machine. But the other, bigger notion is that it can run any computation you come up with in the universe." In the world of classical physics, any physical process can be modeled or simulated by an algorithm, and any algorithm can be simulated by a Turing machine.
Another noteworthy and increasingly useful variant is the probabilistic Turing machine. Unlike an ordinary Turing machine, which has one well-defined response to each input, a probabilistic Turing machine can have several possible responses, chosen according to probabilities. This means it can produce different results for the same input at different times. Surprisingly, for some problems this probabilistic strategy is more efficient than a purely deterministic approach. Ideas from probabilistic Turing machines have proven very useful in areas such as optimization and machine learning.
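As a rough sketch of the difference, in the same spirit as the toy simulator and with all names invented, a probabilistic rule table can map a (state, symbol) pair to several possible actions with associated probabilities, and the machine samples one of them at each step:

```python
import random

# Each rule now lists several possible actions, each with a probability.
prob_rules = {
    ("p0", "0"): [
        (0.5, ("0", "R", "p_accept")),   # half the time, move toward accepting
        (0.5, ("0", "R", "p_reject")),   # half the time, move toward rejecting
    ],
}

def probabilistic_step(state, symbol):
    """Sample one action according to the probabilities in the rule table."""
    weights, actions = zip(*prob_rules[(state, symbol)])
    return random.choices(actions, weights=weights, k=1)[0]

# Two runs on the same input can disagree, which is exactly what separates
# a probabilistic machine from a deterministic one.
print(probabilistic_step("p0", "0"))
print(probabilistic_step("p0", "0"))
```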
These abstract machines are perhaps the best evidence that asking fundamental questions can be one of the most useful things a scientist can do.