
What are the five classic computer algorithms?

青灯夜游 (Original) | 2021-07-23 15:06:08

The five classic computer algorithms are: 1. the divide and conquer method, which divides a complex problem into two or more identical or similar sub-problems; 2. the dynamic programming method; 3. the greedy algorithm; 4. the backtracking method, an optimal-search technique that searches forward according to optimality conditions to reach the goal; 5. the branch and bound method.



I'm about to start submitting resumes and looking for internships, and I still feel I don't know how to do anything, which makes me anxious. From today on I will solve at least two LeetCode problems every day, then summarize them along with the knowledge points behind common interview questions, and review them often in the future. Come on!

While working through LeetCode I often saw people talking about DP. I searched Baidu to find out what DP was, and learned that it is one of the five classic algorithms. So today I will summarize all five.

The five classic algorithms are:

1. Divide and conquer method: divide a complex problem into two or more identical or similar sub-problems, then divide those sub-problems into still smaller sub-problems, until the sub-problems can be solved simply and directly; the solution to the original problem is then the combination of the solutions to the sub-problems.

2. Dynamic programming method: each decision depends on the current state and immediately causes a state transition. A decision sequence is generated as the state changes, so this multi-stage optimal decision-making process for solving problems is called dynamic programming.

3. Greedy algorithm: when solving a problem, always make the choice that looks best at the moment. In other words, rather than considering the overall optimal solution, it produces only a locally optimal solution in some sense. Common greedy algorithms include Prim's algorithm and Kruskal's algorithm (both find minimum spanning trees).

Basic idea: decompose the problem into several small problems, obtain the local optimal solution of each sub-problem step by step, and finally merge them into a solution of the original problem.

4. Backtracking method: the backtracking algorithm is essentially a search process similar to enumeration. It looks for a solution to the problem during the search, and when it finds that the current path can no longer satisfy the solution conditions, it "backtracks" and tries another path. It is depth-first.

The backtracking method is an optimal-search technique: it searches forward according to optimality conditions to reach the goal. When exploration reaches a step where the earlier choice turns out to be non-optimal or unable to reach the goal, it steps back and makes another choice. This technique of going back and trying again when a path fails is the backtracking method, and the point in a state that satisfies the backtracking condition is called the "backtrack point".

5. Branch and bound method: similar to the backtracking method, it is also an algorithm that searches for a solution on the solution space tree T of the problem. In general, however, the two methods have different solution goals: the backtracking method aims to find all solutions in T that satisfy the constraints, while the branch and bound method aims to find one solution that satisfies the constraints, or, among the solutions that satisfy the constraints, the one that maximizes or minimizes some objective function value, i.e. an optimal solution in a certain sense.

1. Divide and Conquer Method

The problems that can be solved by the divide and conquer method generally have the following characteristics:

1) The problem can easily be solved once its scale is reduced to a certain size.
2) The problem can be decomposed into several smaller problems of the same kind, i.e. the problem has the optimal substructure property.
3) The solutions of the sub-problems obtained by decomposition can be combined into a solution of the original problem.
4) The sub-problems obtained by decomposition are independent of each other, i.e. the sub-problems share no common sub-sub-problems.

If the problem lacks the third property, consider dynamic programming (DP) or the greedy method instead.

If the problem lacks the fourth property, consider dynamic programming.

Basic steps of the divide-and-conquer method:

Step 1 Decompose: decompose the original problem into several smaller, independent sub-problems of the same form as the original problem.
Step 2 Solve: if a sub-problem is small and easy to solve, solve it directly; otherwise, solve each sub-problem recursively.

Step 3 Merge: combine the solutions of the sub-problems into a solution of the original problem.
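As a sketch of these three steps, here is merge sort, a textbook divide-and-conquer algorithm, written in Java to match the later example in this article (the class and method names are our own, added for illustration):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Step 1 (decompose) and Step 2 (solve recursively): split the array in
    // half until a sub-array of length <= 1 is trivially sorted.
    public static int[] mergeSort(int[] a) {
        if (a.length <= 1) {
            return a;                       // small enough: solve directly
        }
        int mid = a.length / 2;
        int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);          // Step 3: combine sub-solutions
    }

    // Merge two sorted arrays into one sorted array.
    private static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length) {
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        }
        while (i < l.length) out[k++] = l[i++];
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 8, 1, 9, 3})));
        // prints [1, 2, 3, 5, 8, 9]
    }
}
```

Note that the sub-problems (the two halves) are independent, the recursion bottoms out on trivially small inputs, and the merge step combines the sub-solutions, exactly matching the four properties listed above.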

2. Dynamic programming method

The biggest difference from the divide and conquer method is that in problems suited to dynamic programming, the sub-problems obtained after decomposition are often not independent of each other (that is, the solution of the next stage is built on the solution of the previous stage).

Applicable conditions:

Problems that can be solved by dynamic programming generally have three properties:

(1) Optimality principle: if the solutions of the sub-problems contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, i.e. it satisfies the optimality principle.
(2) No aftereffect: once the state of a certain stage is determined, it is not affected by later decisions. In other words, the process after a given state does not affect earlier states; it depends only on the current state.

(3) Overlapping sub-problems: the sub-problems are not independent, and a sub-problem may be used several times in later decisions. (This property is not a necessary condition for applying dynamic programming, but without it a dynamic programming algorithm has no advantage over other algorithms.)

Case:

There are n steps, and a person climbs one or two steps at a time. How many ways are there to climb all n steps?
Analysis: the key to implementing dynamic programming is whether the dynamic programming table can accurately and reasonably abstract the practical problem. In this problem, let f(n) denote the number of ways to climb n steps.

When n is 1, f(n) = 1; when n is 2, f(n) = 2. That is, with one step there is one way, and with two steps there are two ways. To reach step n we must take one step from step n-1 or two steps from step n-2, so the number of ways to reach step n is the number of ways to reach step n-1 plus the number of ways to reach step n-2. That is, f(n) = f(n-1) + f(n-2). We use dp[n] to represent the dynamic programming table, where dp[i] (0 < i <= n) is the number of ways to reach step i.

int dp[100];  /* memo table; dp[i] == 0 means f(i) has not been computed yet */

int fun(int n){
    if (n == 1 || n == 2)
        return n;
    /* check whether the value for n-1 has already been computed */
    if (!dp[n-1])
        dp[n-1] = fun(n-1);
    if (!dp[n-2])
        dp[n-2] = fun(n-2);
    return dp[n-1] + dp[n-2];
}
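The same recurrence can also be filled in bottom-up, which avoids recursion entirely; a minimal Java sketch (the class and method names are our own):

```java
public class Stairs {
    // dp[i] = number of ways to reach step i, filled from small i to large i.
    public static long climb(int n) {
        if (n == 1 || n == 2) return n;
        long[] dp = new long[n + 1];
        dp[1] = 1;
        dp[2] = 2;
        for (int i = 3; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];   // last move was 1 step or 2 steps
        }
        return dp[n];
    }

    public static void main(String[] args) {
        System.out.println(climb(10));  // prints 89
    }
}
```

Each sub-result is computed exactly once, so the running time is O(n), while the naive recursion without the memo table would be exponential.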

3. Greedy Algorithm

The greedy algorithm means that when solving a problem, we always make the choice that looks best at the moment. In other words, we do not consider the overall optimal solution, but only make a locally optimal choice in some sense. The greedy algorithm does not obtain the overall optimal solution for every problem; the key is the choice of greedy strategy. The chosen greedy strategy must have no aftereffect, that is, the process before a given state does not affect later states, which depend only on the current state.
The general steps for solving a problem are:
1. Build a mathematical model describing the problem;
2. Divide the problem to be solved into several sub-problems;
3. Solve each sub-problem, obtaining its local optimal solution;

4. Combine the local optimal solutions of the sub-problems into a solution of the original problem.

Example: Money change problem

This problem is common in daily life. Suppose there are c0, c1, c2, c3, c4, c5 and c6 banknotes of 1, 2, 5, 10, 20, 50 and 100 yuan respectively, and we need to pay K yuan with this money. What is the smallest number of banknotes we must use? Following the greedy idea, at each step we simply use the banknote with the largest possible face value, which is exactly what we do in daily life. In the program the denominations are arranged in descending order.

public class charge {
    public static void main(String[] args) {
        int res = charge(84);
        System.out.println(res);  // prints 5: 84 = 50 + 20 + 10 + 2 + 2
    }

    private static int charge(int money) {
        int count = 0;
        int value[] = {50, 20, 10, 5, 2, 1};  // denominations in descending order
        for (int i = 0; i < value.length; i++) {
            // Greedy step: use the largest remaining denomination
            // as many times as possible before moving to the next one.
            while (money >= value[i]) {
                money = money - value[i];
                count++;
            }
        }
        return count;
    }
}

4. Backtracking method

The backtracking method is a way of systematically searching for solutions to a problem. During the search it tries to find a solution; if it finds that one cannot be found along the current path, it steps back and goes back up a level (the pruning process). Backtracking can be applied to many complex, large-scale problems.

The basic idea of the backtracking method is to follow the depth-first search strategy and start searching from the root node. On reaching a node, determine whether it can contain a solution to the problem; if it can, continue the search from that node, and if it cannot, backtrack to its parent node. If the backtracking method is used to find all solutions, the search must backtrack all the way to the root, and it ends only after all feasible subtrees of the root have been searched. If the backtracking method is used to find any one solution, the search can end as soon as one solution is found.

Commonly used pruning functions in the backtracking method: (1) constraint function: prune, at a node, the subtrees that do not satisfy the constraints; (2) bound function: prune the subtrees that cannot lead to an optimal solution.

General steps:

1. For the given problem, determine its solution space.

2. Organize the solution space in a way suitable for searching.
3. Search the solution space in a depth-first manner.

4. Use pruning functions during the search to avoid invalid searching.
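As a small illustration of these steps, here is a Java sketch (the class and method names are our own) that uses depth-first search with a constraint-function prune to find all subsets of positive numbers summing to a target:

```java
import java.util.ArrayList;
import java.util.List;

public class SubsetSum {
    // Depth-first search over the subset tree. The constraint function
    // "sum > target" prunes subtrees that cannot contain a solution
    // (valid because all the numbers are positive).
    static void dfs(int[] nums, int i, int sum, int target,
                    List<Integer> path, List<List<Integer>> out) {
        if (sum == target) {
            out.add(new ArrayList<>(path));  // found a solution; record it
            return;
        }
        if (i == nums.length || sum > target) {
            return;                          // dead end: backtrack
        }
        path.add(nums[i]);                   // choose nums[i]
        dfs(nums, i + 1, sum + nums[i], target, path, out);
        path.remove(path.size() - 1);        // undo the choice (the "backtrack")
        dfs(nums, i + 1, sum, target, path, out);  // skip nums[i]
    }

    public static List<List<Integer>> solve(int[] nums, int target) {
        List<List<Integer>> out = new ArrayList<>();
        dfs(nums, 0, 0, target, new ArrayList<>(), out);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{1, 2, 3, 4, 5}, 5));
        // prints [[1, 4], [2, 3], [5]]
    }
}
```

The solution space is the subset tree (each level decides whether one number is in the subset), the search is depth-first, and removing the last element of `path` after the recursive call is exactly the step back to the parent node.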

5. Branch and bound method

Similar to the backtracking method, the branch and bound method is also an algorithm that searches for a solution on the solution space tree T of the problem. In general, however, the two methods have different solution goals: the backtracking method aims to find all solutions in T that satisfy the constraints, while the branch and bound method aims to find one solution that satisfies the constraints, or, among those solutions, the one that maximizes or minimizes some objective function value, i.e. an optimal solution in a certain sense.

(1) Branch search algorithm

The so-called "branch" means using the breadth-first strategy to search all branches of the expansion node (E-node) in turn, that is, all of its adjacent nodes. Nodes that do not satisfy the constraints are discarded, and the remaining nodes are added to the live node list. A node is then selected from that list as the next E-node and the search continues.

Depending on how the next E-node is selected, there are several different branch search methods:

1) FIFO search

2) LIFO search

3) Priority queue search

(2) Branch and bound search algorithm

General process of branch and bound method

Due to different solution goals, the search methods of the branch and bound method and the backtracking method on the solution space tree T are also different. The backtracking method searches the solution space tree T in a depth-first manner, while the branch-and-bound method searches the solution space tree T in a breadth-first or least-cost-first manner.

The search strategy of the branch and bound method is: at the expansion node, first generate all of its child nodes (branch), then select the next expansion node from the current live node list. To select the next expansion node effectively and speed up the search, a function value (bound) is computed at each live node, and based on these values the most favorable node in the live node list is chosen as the expansion node, so that the search advances along the branch of the solution space tree containing the optimal solution and finds an optimal solution as quickly as possible.

The branch and bound method usually searches the solution space tree of a problem in a breadth-first or minimum-cost (maximum-benefit) first manner. The solution space tree of a problem is an ordered tree that represents its solution space; common examples are subset trees and permutation trees. When searching the solution space tree, the branch and bound method and the backtracking method expand the current expansion node differently. In the branch and bound method, each live node has only one chance to become an expansion node: once a live node becomes the expansion node, all of its child nodes are generated at once. Among these child nodes, those that lead to infeasible or non-optimal solutions are discarded, and the remaining ones are added to the live node list. A node is then taken from the live node list to become the current expansion node, and the expansion process is repeated. This continues until the desired solution is found or the live node list is empty.
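As a rough illustration (not from the original article), here is a least-cost-first branch and bound sketch in Java for the minimum-banknote version of the earlier change-making problem; the live node list is a priority queue ordered by the bound `coins used + ceil(remaining / largest coin)`, an optimistic estimate of the total coins needed:

```java
import java.util.PriorityQueue;

public class CoinBnB {
    // Least-cost-first branch and bound for minimum-coin change.
    // A node is {remaining amount, coins used so far}.
    public static int minCoins(int[] coins, int amount) {
        int max = 0;
        for (int c : coins) max = Math.max(max, c);
        final int largest = max;
        // Live node list, ordered by the lower bound on total coins.
        PriorityQueue<int[]> live = new PriorityQueue<>(
            (a, b) -> (a[1] + (a[0] + largest - 1) / largest)
                    - (b[1] + (b[0] + largest - 1) / largest));
        live.add(new int[]{amount, 0});
        while (!live.isEmpty()) {
            int[] node = live.poll();        // expansion node: best bound first
            if (node[0] == 0) {
                // Because the bound never overestimates, the first goal
                // node taken from the queue is optimal.
                return node[1];
            }
            for (int c : coins) {            // branch: try every coin
                if (node[0] - c >= 0) {
                    live.add(new int[]{node[0] - c, node[1] + 1});
                }
            }
        }
        return -1;                           // amount cannot be formed
    }

    public static void main(String[] args) {
        System.out.println(minCoins(new int[]{1, 2, 5, 10, 20, 50}, 84));
        // prints 5: 84 = 50 + 20 + 10 + 2 + 2
    }
}
```

Unlike the depth-first backtracking search, every expansion node generates all its children at once, and the bound steers the search toward the cheapest branch first.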

Some differences between the backtracking method and the branch and bound method

Some problems can be solved well by either the backtracking method or the branch and bound method, but others cannot, and a more concrete analysis is needed: when should branch and bound be used, and when backtracking?

Some differences between the backtracking method and the branch-and-bound method:

Backtracking method: searches the solution space tree depth-first; commonly stores nodes in a stack; all feasible child nodes of a live node are traversed before it is popped from the stack; commonly used to find all solutions that satisfy the constraints.

Branch and bound method: searches breadth-first or minimum-cost first; commonly stores nodes in a queue or priority queue; each node has only one chance to become an expansion node; commonly used to find one solution that satisfies the constraints, or the optimal solution in a specific sense.


The above is the detailed content of What are the five classic computer algorithms?. For more information, please follow other related articles on the PHP Chinese website!

Statement:
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn