This article introduces algorithms and explains the time complexity and space complexity of an algorithm. I hope it will be helpful to everyone!
An algorithm is a set of steps for manipulating data and solving programming problems. For the same problem, different algorithms may produce the same result, but the resources and time they consume along the way can differ greatly.
So how should we measure the pros and cons of different algorithms?
We mainly evaluate an algorithm along two dimensions: the "time" and the "space" it occupies.
Time dimension: the time consumed by executing the algorithm; we usually describe it with "time complexity".
Space dimension: the memory space required to execute the algorithm; we usually describe it with "space complexity".
Therefore, evaluating the efficiency of an algorithm mainly comes down to its time complexity and space complexity. Sometimes, however, time and space cannot both be optimized at once, and we need to find a balance between them.
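To make this trade-off concrete, here is a small supplementary sketch (my own illustration, not part of the original text): a recursive Fibonacci that uses almost no extra memory but repeats work exponentially, versus a memoized version that spends O(n) space on a cache to bring the time down to O(n).

```java
import java.util.HashMap;
import java.util.Map;

public class Tradeoff {
    // Plain recursion: O(1) extra space beyond the call stack,
    // but roughly O(2^n) time because subproblems are recomputed.
    static long fibSlow(int n) {
        if (n < 2) return n;
        return fibSlow(n - 1) + fibSlow(n - 2);
    }

    // Memoized recursion: O(n) extra space for the cache,
    // but only O(n) time because each subproblem is solved once.
    static Map<Integer, Long> cache = new HashMap<>();
    static long fibFast(int n) {
        if (n < 2) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;
        long value = fibFast(n - 1) + fibFast(n - 2);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fibSlow(30)); // same answer...
        System.out.println(fibFast(30)); // ...far less repeated work
    }
}
```

Both functions return the same value; only the time/space profile differs, which is exactly the balance point mentioned above.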
Let me introduce the calculation methods of "time complexity" and "space complexity" respectively.
To determine the "time complexity" of an algorithm, the first method many people think of is to run the program once: the time it consumes then follows naturally.
Is this method feasible? Yes, but it has many drawbacks.
It is highly sensitive to the runtime environment: results on a high-performance machine will differ greatly from those on a low-performance machine, and they also depend heavily on the scale of the test data. Moreover, we often need to estimate an algorithm's cost before the code can even be run.
Therefore, a more general method is used: "Big O notation", that is, T(n) = O(f(n)).
Let’s take a look at an example first:
for (i = 1; i <= n; ++i) {
    j = i;
    j++;
}
Using "Big O notation", the time complexity of this code is: O(n), why?
In Big O notation, time complexity is written T(n) = O(f(n)), where f(n) is the sum of the number of times each line of code executes, and O expresses a proportional relationship. The full name of this formula is: the asymptotic time complexity of the algorithm.
Let's continue with the example above. Assume every line of code takes the same time to execute, which we'll call one "unit of time". Then the first line of this example takes 1 unit of time, the third line executes n times and takes n units of time, and the fourth line also takes n units of time (the second and fifth lines are just braces, so we ignore them for now). The total is 1 + n + n = (1 + 2n) units of time, that is, T(n) = (1 + 2n) * unit time. From this result we can see that the time consumed by this algorithm grows as n grows, so we can simplify and express its time complexity as T(n) = O(n).
Why can it be simplified this way? Because Big O notation is not meant to represent the algorithm's actual execution time; it represents the growth trend of the execution time as the input grows.
So in the example above, as n approaches infinity, the constant 1 in T(n) = (1 + 2n) * unit time becomes meaningless, and so does the factor 2. The expression can therefore simply be reduced to T(n) = O(n).
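This growth trend can also be checked by counting operations directly. The following sketch (a supplementary illustration of mine, not from the original text) tallies the steps of the example loop, giving exactly T(n) = 2n + 1, and shows that doubling n roughly doubles the total while the "+1" stays negligible:

```java
public class GrowthTrend {
    // Counts executed "steps" for the example loop: 1 step for the
    // initialization plus 2 steps (j = i; j++) per iteration,
    // i.e. T(n) = 2n + 1.
    static long steps(int n) {
        long count = 1;            // i = 1
        for (int i = 1; i <= n; ++i) {
            count += 2;            // j = i; j++
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(steps(1000));   // 2001
        System.out.println(steps(2000));   // 4001: doubling n ~doubles the work
    }
}
```

The ratio 4001 / 2001 is almost exactly 2, which is why only the dominant term n matters in O(n).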
Common time complexity metrics include:
Constant order O(1)
Logarithmic order O(logN)
Linear order O(n)
Linear logarithmic order O(nlogN)
Square order O(n²)
Cubic order O(n³)
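To get a feel for how far apart these orders are, here is a small supplementary sketch (my addition, not from the original text) that evaluates each order at n = 16; the integer `log2` helper is a hypothetical utility written just for this illustration:

```java
public class OrderComparison {
    // Smallest x such that 2^x >= n (a simple integer log2,
    // exact when n is a power of two).
    static int log2(int n) {
        int x = 0;
        while ((1 << x) < n) x++;
        return x;
    }

    public static void main(String[] args) {
        int n = 16;
        System.out.println("O(1):     " + 1);              // 1
        System.out.println("O(logN):  " + log2(n));        // 4
        System.out.println("O(n):     " + n);              // 16
        System.out.println("O(nlogN): " + n * log2(n));    // 64
        System.out.println("O(n^2):   " + n * n);          // 256
        System.out.println("O(n^3):   " + n * n * n);      // 4096
    }
}
```

Even at a tiny n = 16, the gap between O(logN) and O(n³) is already three orders of magnitude, and it only widens as n grows.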
Constant order O(1)
int i = 1;
int j = 2;
++i;
j++;
int m = i + j;
No matter how large the input is, these statements execute a fixed number of times, so the time complexity is O(1).
Linear order O(n)
for (i = 1; i <= n; ++i) {
    j = i;
    j++;
}
Logarithmic order O(logN)
int i = 1;
while (i < n) {
    i = i * 2;
}
As the code above shows, each pass through the while loop multiplies i by 2, so after each multiplication i gets closer and closer to n. Let's work it out: suppose that after x iterations i becomes greater than or equal to n, at which point the loop exits. That means 2 to the power x equals n, so x = log₂n.
In other words, after the loop runs log₂n times, this code finishes. Therefore the time complexity of this code is O(logn).
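The textbook example of O(logN) is binary search: each comparison halves the remaining range of a sorted array, much like the doubling loop above in reverse. A minimal sketch (my own supplementary example, not from the original text):

```java
public class BinarySearch {
    // Returns the index of target in the sorted array, or -1 if absent.
    // The search range halves on every iteration, so the loop body
    // runs at most about log2(n) + 1 times -> O(log n).
    static int search(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(search(a, 7));   // 3
        System.out.println(search(a, 4));   // -1
    }
}
```

For an array of a million elements, this needs at most about 20 comparisons, which is the practical payoff of a logarithmic order.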
Linear logarithmic order O(nlogN)
Linear logarithmic order O(nlogN) is actually very easy to understand: if you loop code whose time complexity is O(logn) n times, its time complexity is n * O(logN), i.e. O(nlogN).
Take the code above with a small modification as an example:
for (m = 1; m < n; m++) {
    i = 1;
    while (i < n) {
        i = i * 2;
    }
}
Square order O(n²)
Square order O(n²) is even easier to understand: if you nest O(n) code inside another loop over n, its time complexity becomes O(n²).
For example:
for (x = 1; x <= n; x++) {
    for (i = 1; i <= n; i++) {
        j = i;
        j++;
    }
}
This code simply nests two loops over n, so its time complexity is O(n*n), i.e. O(n²).
If the n in one of the loops is changed to m, i.e.:
for (x = 1; x <= m; x++) {
    for (i = 1; i <= n; i++) {
        j = i;
        j++;
    }
}
then its time complexity becomes O(m*n).
Cubic order O(n³), k-th power order O(n^k)
These follow by analogy with O(n²) above: O(n³) is equivalent to three nested loops over n, and the others are similar.
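A classic concrete case of O(n³) is naive multiplication of two n x n matrices: three nested loops over n, hence n * n * n inner steps. This sketch is my own supplementary illustration, not part of the original text:

```java
public class CubicOrder {
    // Naive n x n matrix multiplication: three nested loops over n,
    // so the innermost statement runs n^3 times -> O(n³).
    static int[][] multiply(int[][] a, int[][] b) {
        int n = a.length;
        int[][] c = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        int[][] id = {{1, 0}, {0, 1}};  // identity matrix
        int[][] m  = {{2, 3}, {4, 5}};
        // Multiplying by the identity returns the matrix unchanged.
        System.out.println(java.util.Arrays.deepToString(multiply(id, m)));
    }
}
```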
Beyond this, there are also average-case, amortized, worst-case, and best-case methods of time complexity analysis. They are somewhat more involved, so we won't expand on them here.
Since time complexity is not used to compute a program's exact running time, you should likewise understand that space complexity is not used to compute the actual memory a program occupies.
Space complexity is a measure of the temporary storage an algorithm occupies while running. It likewise reflects a trend, and we denote it by S(n).
The most common space complexities are O(1), O(n), and O(n²). Let's look at them below:
Space complexity O(1)
If the temporary space an algorithm needs does not vary with the size of some variable n, i.e. its space complexity is a constant, it can be expressed as O(1).
For example:
int i = 1;
int j = 2;
++i;
j++;
int m = i + j;
The space allocated for i, j, and m in this code does not vary with the amount of data being processed, so its space complexity is S(n) = O(1).
Space complexity O(n)
Let's first look at some code:
int[] m = new int[n];
for (i = 1; i <= n; ++i) {
    j = i;
    j++;
}
In this code, the first line allocates a new array whose size is n. Lines 2-6 contain a loop, but it allocates no new space, so the space complexity of this code is determined mainly by the first line: S(n) = O(n).
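Note that explicit arrays are not the only source of O(n) space: recursion also consumes memory, because every pending call keeps a stack frame alive. The following supplementary sketch (my addition, not from the original text) contrasts an O(n)-space recursive sum with an O(1)-space iterative one:

```java
public class RecursiveSpace {
    // Sums 1..n recursively. At the deepest point there are n stack
    // frames alive at once, so the space complexity is S(n) = O(n).
    static long sumRecursive(int n) {
        if (n == 0) return 0;
        return n + sumRecursive(n - 1);
    }

    // The iterative version computes the same sum with a fixed number
    // of local variables: S(n) = O(1).
    static long sumIterative(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumRecursive(100)); // 5050
        System.out.println(sumIterative(100)); // 5050
    }
}
```

Same result, same O(n) time, but very different space trends — another instance of the time/space balance discussed at the start.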
That concludes this basic analysis of algorithm time complexity and space complexity. Everyone is welcome to discuss it together.
The above is the detailed content of An article to talk about the time complexity and space complexity of the algorithm. For more information, please follow other related articles on the PHP Chinese website!