Sorting Algorithms
In Bubble Sort, we compare each element with its neighbor and swap them if they are out of order, moving from the start of the array to the end. After one pass, the largest element is in the last position, so we shrink the boundary of the unsorted portion by one from the end and repeat. In the worst case, n passes are required to sort the array.
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        # Last i elements are already in place, so we don't need to check them
        for j in range(0, n - i - 1):
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # If no swaps happened in this pass, the array is already sorted
        if not swapped:
            break
    return arr
Algorithm -
Compare each pair of adjacent elements and swap them if the left one is greater than the right.
After each pass, shrink the boundary by one, since the largest remaining element is now in place.
Stop early if a full pass completes without a single swap, because the array is already sorted.
Time Complexity :
- Best Case - If the array is already sorted, the swapped flag ends the algorithm after one pass. Time complexity is O(n).
- Average Case - If the array is randomly ordered, the time complexity is O(n²).
- Worst Case - If the array is sorted in decreasing order, the time complexity is O(n²).
Space Complexity - O(1), No extra memory required.
Advantages -
No extra memory required.
Stable as elements keep their relative order.
Disadvantages -
Time complexity - O(n²), which is very high for large datasets.
Performs many more swaps than Selection Sort for the same input.
Applications -
Mainly used for teaching, and for small or nearly sorted datasets where its early-exit check makes it acceptable.
In Selection Sort, we find the smallest element in the unsorted part of the array and swap it with the first unsorted element. Then, we move the boundary forward by 1 and repeat the same steps until we reach the end of the array.
def selectionSort(a):
    i = 0
    while i < len(a):
        # Find the smallest element in the unsorted part of the array
        smallest = min(a[i:])
        # Search from index i onwards, so that an equal element that is
        # already in the sorted part is not matched by mistake
        index_of_smallest = a.index(smallest, i)
        a[i], a[index_of_smallest] = a[index_of_smallest], a[i]
        i = i + 1
    return a
Algorithm -
Iterate over the array and find the minimum element.
Swap it with the first element and increase the pointer by 1.
Repeat this process until we reach the end of the array.
Time Complexity : It has a time complexity of O(n²) in all three cases: best, average, and worst. This is because we have to select the minimum element and swap it every time, regardless of whether the array is already sorted or not.
Space Complexity - O(1), No extra memory required.
Advantages -
No extra memory required.
Fewer swaps are done than in bubble sort.
Disadvantages -
Time complexity - O(n²), which is very high for large datasets.
Not Stable, as it does not maintain the relative order of equal elements.
Applications -
It can be used in systems with limited memory as it does not require additional storage.
It is used in systems where minimizing the number of swaps is critical, such as in systems with slow write operations.
Insertion Sort works by inserting each unsorted element into its correct position, iteratively checking backwards from the element's position to the start of the array.
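A minimal Python sketch of this idea (the function name insertion_sort is illustrative):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]  # the element to insert into the sorted portion
        j = i - 1
        # Shift elements of the sorted portion that are greater than key
        # one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # insert key into its correct position
    return arr

Note that shifting, rather than repeatedly swapping as in the steps below, does the same work with fewer writes.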
Algorithm -
Start from the second element of the array and compare it with the first element. If the current element is smaller than the first element, then swap them.
Now increase the pointer and do this for the third element: compare it with the second and first elements.
Repeat the same process for the rest of the elements, comparing them with all the previous elements, and insert them at the suitable position.
Time Complexity :
- Best Case - If the array is already sorted, only one pass is required. Time complexity is O(n).
- Average Case - If the array is randomly ordered, the time complexity is O(n²).
- Worst Case - If the array is in decreasing order, about n² comparisons and shifts are required, so the time complexity is O(n²).
Space Complexity - O(1), No extra memory required.
Advantages -
No extra memory required.
Stable, as elements keep their relative order.
Efficient for small or nearly sorted datasets.
Disadvantages -
Time complexity - O(n²) in the average and worst cases, which is very high for large datasets.
Many elements may have to be shifted for every single insertion.
Applications -
Suitable for small datasets and for online sorting, where elements arrive one at a time.
Merge Sort is an algorithm that follows the divide and conquer approach. It has two main steps: first, dividing the array recursively, and second, merging the divided arrays in sorted order.
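A minimal recursive sketch in Python (the names merge_sort and merge are illustrative):

def merge_sort(arr):
    # Base case: an array of length 0 or 1 is already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Compare the front of each half and append the smaller element
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append whatever remains in either half
    result.extend(left[i:])
    result.extend(right[j:])
    return result

The temporary result list built in merge is where the O(n) extra space mentioned below comes from.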
Algorithm -
Divide the array into two halves by calculating the mid-point.
Continue dividing until the length of each sub-array is 1.
Call the merge function on both halves: the left half and the right half.
Use three pointers for the merging process: one for each half and one for the merged output.
Iterate through both halves and compare their elements. Insert the smaller element into the sorted array and increment the corresponding pointer by 1.
Repeat this process recursively until the entire array is sorted.
Time Complexity : Merge Sort has a time complexity of O(n log n) in all three cases: best, average, and worst. This is because, irrespective of whether the array is already sorted or not, the same steps are followed for each division and merge.
O(log n) - The array size is halved at each step during the divide phase, giving log n levels.
O(n) - At each level, the merging process iterates over all the elements once.
So the total time complexity is O(n) * O(log n) = O(n log n).
Space Complexity - O(n), Extra memory is required during the merging process to store the temporary arrays.
Advantages -
Stable, as elements keep their relative order.
Time complexity is O(n log n), even for large datasets.
Suitable for parallel processing because sub-arrays are merged independently.
Disadvantages -
Requires O(n) extra memory for the temporary arrays used while merging.
Slower than in-place algorithms such as Quick Sort on small datasets.
Applications -
Sorting linked lists, where merging needs no extra arrays.
External sorting of large datasets that do not fit in memory.
Quick Sort is an algorithm that follows the divide-and-conquer approach. We choose a pivot element and partition the array around the pivot element after placing the pivot in its correct sorted position.
The first step is to choose the pivot element and then partition the array around the pivot. All elements smaller than the pivot will be on the left, and all elements greater than the pivot will be on its right. The pivot is then in its correct sorted position. Recursively, the same process is applied by dividing the array into two halves: the first half contains the elements before the pivot, and the second half contains the elements after the pivot. This process is repeated until the length of each sub-array reaches 1.
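A minimal in-place sketch in Python, assuming the common Lomuto partition scheme with the last element as the pivot (the names quick_sort and partition are illustrative):

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Place one pivot in its final position, then sort both sides
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)
    return arr

def partition(arr, low, high):
    pivot = arr[high]  # last element as the pivot
    i = low - 1
    for j in range(low, high):
        # Move elements smaller than the pivot to the left side
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Drop the pivot into its correct sorted position
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

Choosing the last element as the pivot keeps the sketch short, but it is exactly the choice that triggers the worst case on already sorted input, as noted below.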
Algorithm -
Choose a pivot element from the array.
Partition the array so that elements smaller than the pivot are on its left and elements greater than it are on its right; the pivot is now in its correct sorted position.
Recursively apply the same steps to the sub-array before the pivot and the sub-array after it, until each sub-array has length 1.
Time Complexity :
1. Best Case - Time complexity - O(n log n), when the pivot divides the array into two equal halves.
2. Average Case - Time complexity - O(n log n), when the pivot produces reasonably balanced, though not necessarily equal, halves.
3. Worst Case - Time complexity - O(n²), when -
The smallest element is chosen as the pivot in an already sorted array.
The largest element is chosen as the pivot in an array sorted in decreasing order.
O(log n) - The recursion depth, since the array is roughly halved at each step.
O(n) - The partitioning work done across all elements at each level.
So, the total time complexity is O(n) * O(log n) = O(n log n).
Space Complexity :
Best and Average Case - O(log n) - for the recursion stack.
Worst Case - O(n) - for the recursion stack.
Advantages -
Very fast in practice and cache-friendly, as it sorts in place.
No extra arrays are needed, unlike Merge Sort.
Disadvantages -
Worst-case time complexity is O(n²) when the pivots are chosen badly.
Not stable, as partitioning can change the relative order of equal elements.
Applications -
A common choice for general-purpose sorting; variants of it power many standard library sort routines.
Heap Sort is a comparison-based sorting algorithm that can be seen as an extension of Selection Sort. In Heap Sort, we build a binary max heap, swap the maximum element with the last element of the heap, and then reduce the heap size by 1. This process is repeated while the heap size is greater than 1.
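A minimal max-heap sketch in Python (the names heap_sort and heapify are illustrative):

def heapify(arr, n, i):
    # Sift the element at index i down until the subtree rooted at i
    # satisfies the max-heap property (n is the current heap size)
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Build a max heap, starting from the last non-leaf node
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Repeatedly move the maximum to the end and shrink the heap
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)
    return arr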
Algorithm -
1. Build a max heap from the array. For a node at index i:
a. Its left child is at index 2i + 1
b. Its right child is at index 2i + 2
2. Swap the root, the maximum element, with the last element of the heap.
3. Reduce the heap size by 1 and heapify the root to restore the max heap property.
4. Repeat steps 2-3 while the heap size is greater than 1.
Time Complexity : Heap Sort has a time complexity of O(n log n) in all three cases: best, average, and worst. This is because, irrespective of whether the array is already sorted or not, the same steps are followed each time a max heap is created and an element is swapped.
O(log n) - To heapify after each swap, since the height of the heap is log n.
O(n) - The swap-and-heapify step is repeated n times.
So the total time complexity is O(n) * O(log n) = O(n log n).
Space Complexity : For all cases - O(log n) - for the recursion stack used by heapify.
Advantages -
Time complexity is O(n log n), even in the worst case.
Sorts in place; no extra array is required.
Disadvantages -
Not stable, as heap operations can change the relative order of equal elements.
Poor cache locality compared to Quick Sort, so it is often slower in practice.
Applications -
Priority queues and schedulers.
Finding the k largest or smallest elements of a collection.
Counting Sort is a non-comparison-based sorting algorithm. It is particularly efficient when the range of input values is small compared to the number of elements to be sorted. The basic idea behind Counting Sort is to count the frequency of each distinct element in the input array and use that information to place the elements in their correct sorted positions.
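A minimal sketch of that idea in Python, assuming non-negative integer input (the name counting_sort is illustrative):

def counting_sort(arr):
    if not arr:
        return arr
    # One counter for every possible value in the range 0..max(arr)
    count = [0] * (max(arr) + 1)
    for num in arr:
        count[num] += 1
    # Rebuild the array in sorted order from the counts
    result = []
    for value, freq in enumerate(count):
        result.extend([value] * freq)
    return result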
Radix Sort uses Counting Sort as a subroutine. It applies Counting Sort to each digit place of a number and repeatedly sorts until it processes all the digits of the largest number in the array.
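A minimal sketch in Python of Radix Sort built on a stable, per-digit Counting Sort, assuming non-negative integers (the names radix_sort and counting_sort_by_digit are illustrative):

def counting_sort_by_digit(arr, exp):
    # Stable counting sort on the digit at place value exp (1, 10, 100, ...)
    output = [0] * len(arr)
    count = [0] * 10
    for num in arr:
        count[(num // exp) % 10] += 1
    # Prefix sums turn digit counts into final positions
    for d in range(1, 10):
        count[d] += count[d - 1]
    # Traverse in reverse so equal digits keep their order (stability)
    for num in reversed(arr):
        digit = (num // exp) % 10
        count[digit] -= 1
        output[count[digit]] = num
    return output

def radix_sort(arr):
    if not arr:
        return arr
    largest = max(arr)
    exp = 1
    # One stable counting-sort pass per digit of the largest number
    while largest // exp > 0:
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr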
Algorithm -
Find the maximum number in the array and determine the number of digits d in it; Counting Sort is then called d times on the array.
Call Counting Sort for each digit place in the array, starting from the ones place, then tens place, and so on.
In Counting Sort:
Count the occurrences of each digit (or value) in a count array.
Take prefix sums of the counts, so each entry gives the final position of that digit.
Traverse the input in reverse and place each element into the output array, which keeps the sort stable.
Time Complexity :
Counting Sort has a time complexity of O(n + k), where n is the number of elements to sort and k is the range of values (the size of the count array). This holds for all three cases: best, average, and worst, because the same steps are followed regardless of whether the array is already sorted or not.
Radix Sort's time complexity grows by a factor of d, where d is the number of digits in the largest number in the array.
So the total time complexity is O(d) * O(n + k) = O(d * (n + k)).
Space Complexity : For all cases - O(n + k), where n is the length of the input array and k is the range of values in the count array.
Advantages -
Faster than comparison-based sorts when the range of values k is small relative to n.
Stable, which is exactly what Radix Sort needs from its subroutine.
Disadvantages -
Only works for integers, or for keys that map to a small integer range.
Memory usage grows with the range of values, not just with the number of elements.
Applications -
Sorting integers with a limited range, such as ages, grades, or single digits.
Serving as the per-digit subroutine of Radix Sort.