In Part 1 of this series I detailed the workings and complexity of the O(n²) sorting algorithms - Bubble, Selection and Insertion - giving implementations in both Java and C#. Let's now look at some of the O(n log2 n) algorithms that perform better.
The mergesort is a divide-and-conquer algorithm. It splits the input in half, then sorts each half separately before merging the results back together. When it sorts each half, it employs the same technique of splitting and merging; hence it is a recursive algorithm. It is based on two important facts: (1) it's simpler to sort a smaller list than a larger list, and (2) merging two sorted lists is not computationally intensive. Here is the non-generic implementation in Java:
public static void mergeSortInPlace(int[] input, int startIndex, int endIndex) {
    if (endIndex - startIndex <= 0)
        return;

    // split the input into 2 halves
    int half = (endIndex - startIndex + 1) / 2;

    mergeSortInPlace(input, startIndex, startIndex + half - 1);
    mergeSortInPlace(input, startIndex + half, endIndex);
    mergeInPlace(input, startIndex, startIndex + half, endIndex);
}

private static void mergeInPlace(int[] input, int startA, int startB, int endB) {
    int[] c = new int[endB - startA + 1];
    int ap = startA, bp = startB, cp = 0;
    while (cp < c.length) {
        // take from the first run while the second is exhausted or the first's head is smaller
        if (bp > endB || (ap <= startB - 1 && input[ap] < input[bp]))
            c[cp++] = input[ap++];
        else
            c[cp++] = input[bp++];
    }
    // copy the merged result back over the original range
    for (int i = startA; i <= endB; i++)
        input[i] = c[i - startA];
}
Here's the generic C# implementation:
public static void MergeSortInPlace<T>(T[] input)
    where T : IComparable<T>
{
    MergeSortInPlace(input, 0, input.Length - 1);
}

public static void MergeSortInPlace<T>(T[] input, int startIndex, int endIndex)
    where T : IComparable<T>
{
    if (endIndex - startIndex <= 0)
        return;

    // split the input into 2 halves
    int half = (endIndex - startIndex + 1) / 2;

    MergeSortInPlace(input, startIndex, startIndex + half - 1);
    MergeSortInPlace(input, startIndex + half, endIndex);
    MergeInPlace(input, startIndex, startIndex + half, endIndex);
}

private static void MergeInPlace<T>(T[] input, int startA, int startB, int endB)
    where T : IComparable<T>
{
    var c = new T[endB - startA + 1];
    int ap = startA, bp = startB, cp = 0;
    while (cp < c.Length)
    {
        // take from the first run while the second is exhausted or the first's head is smaller
        if (bp > endB || (ap <= startB - 1 && input[ap].CompareTo(input[bp]) < 0))
            c[cp++] = input[ap++];
        else
            c[cp++] = input[bp++];
    }
    // copy the merged result back over the original range
    for (int i = startA; i <= endB; i++)
        input[i] = c[i - startA];
}
The merge routine is somewhat complicated by the array bounds checking, but in essence all it is doing is comparing the heads of two sequences and taking the smaller of the two until no items are left. When either input run is exhausted, the remainder of the other run is copied across.
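To see it in use, here's a minimal driver of my own (not from the original listings; it assumes the Java methods above live in the same class):

import java.util.Arrays;

public class MergeSortDemo {
    public static void main(String[] args) {
        int[] data = { 38, 27, 43, 3, 9, 82, 10 };
        mergeSortInPlace(data, 0, data.length - 1);   // bounds are inclusive
        System.out.println(Arrays.toString(data));    // prints [3, 9, 10, 27, 38, 43, 82]
    }
    // ... mergeSortInPlace and mergeInPlace as defined in the listing above ...
}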
Run Time Complexity
The best way to understand the run time complexity of the mergesort is to see the problem as a binary tree of sub-problems of decreasing size.
Since we are splitting in half at each branch, the depth of the tree is log2(n). Because no items have been removed from or added to the original array - it's merely been partitioned - the total number of comparisons done at each level of the tree is ~O(n). [In fact it's interesting to note that our sort procedure doesn't do any comparisons - only the merge procedure does.] For example, with n = 8 there are log2(8) = 3 levels of merging, and each level processes all 8 items, giving roughly 8 × 3 = 24 operations. Thus it follows that the worst case time complexity is ~O(n log2 n). This is also true for the average case.
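If you'd like to check that figure empirically, here's a throwaway harness of my own (the class and field names are made up) that repeats the Java implementation above but counts the element comparisons performed during the merges:

import java.util.Random;

public class MergeComparisonCount {
    static long comparisons = 0;

    static void sort(int[] a, int lo, int hi) {
        if (hi - lo <= 0) return;
        int half = (hi - lo + 1) / 2;
        sort(a, lo, lo + half - 1);
        sort(a, lo + half, hi);
        merge(a, lo, lo + half, hi);
    }

    static void merge(int[] a, int startA, int startB, int endB) {
        int[] c = new int[endB - startA + 1];
        int ap = startA, bp = startB, cp = 0;
        while (cp < c.length) {
            if (bp > endB) c[cp++] = a[ap++];           // second run exhausted
            else if (ap >= startB) c[cp++] = a[bp++];   // first run exhausted
            else {
                comparisons++;                          // one comparison per element emitted
                c[cp++] = (a[ap] < a[bp]) ? a[ap++] : a[bp++];
            }
        }
        for (int i = startA; i <= endB; i++) a[i] = c[i - startA];
    }

    public static void main(String[] args) {
        int n = 1 << 10;                                // 1024, so n * log2(n) = 10240
        int[] data = new Random(1).ints(n).toArray();
        sort(data, 0, n - 1);
        System.out.println(comparisons + " comparisons, n*log2(n) = " + (n * 10));
    }
}

On random input the count comes out a little under n log2(n), exactly as the tree argument predicts.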
For the mathematically minded, an alternative way to derive the run time complexity is to examine the recurrence relationship. This is possible because mergesort is a recursive algorithm. So if we let T(n) represent the time it takes to sort n items, the recipe of mergesort yields the following recurrence:

T(n) = 2T(n/2) + n
Note that the final term is n because the complexity of merging two sorted lists is proportional to the total number of elements across both lists. Of course, to solve a recurrence we must specify a base case. Because a list of size 1 is already sorted, we know T(1) = 0.
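Unrolling this recurrence by repeated substitution shows where the n log2 n comes from (assuming, for simplicity, that n is a power of 2):

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     ...
     = 2^k T(n/2^k) + k·n
     = n·T(1) + n·log2(n)    [stopping when 2^k = n, i.e. k = log2(n)]
     = n·log2(n)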
Now we can use the master theorem from Introduction to Algorithms [Cormen et al.] to derive the asymptotic complexity. Writing the recurrence in the generic form T(n) = aT(n/b) + f(n) given by Cormen et al., we have a = 2, b = 2 and f(n) = n. Since log_b(a) = log2(2) = 1 and f(n) ~ O(n) = O(n^1), this is in fact a case 2 recurrence, and thus T(n) is bounded by O(n log2 n). Check out pages 73-75 of Introduction to Algorithms for the full details. It's a fairly handy rule for making sense of recurrence relationships.
There are numerous implementations of the mergesort, so one has to be careful when defining the space complexity, since it is invariably implementation-specific. The implementations given above in both Java and C# sort the array in place; however, since mergesort is a recursive algorithm, we are going to be pushing activation records, or stack frames, onto the call stack repeatedly. The growth rate of call stack memory is O(log2(n)), since the depth of the nested recursive calls is proportional to the depth of the tree. Each of these activation records contains a fixed amount of data - some local variables and a return address - but nonetheless the total size consumed has a defined growth rate.

As well, the merge procedure uses auxiliary memory to do the merge. [Doing an in-place merge on contiguous memory in an array is not easy, but it is in a linked list, which is why mergesort works better on lists.] The extra memory required for the merge is, at worst, n, hence the total space needed for a mergesort is 2n + log2(n): the original input, the merge buffer and the call stack. Since n dominates log(n) in this function, and we need not worry about coefficients when stating asymptotic upper bounds, we say the space complexity is ~O(n).
Uses and Optimisations
Because it employs a divide-and-conquer approach, mergesort is an excellent algorithm to use in situations where the input cannot fit into main memory and must be stored in blocks on disk. Knuth, in the new edition of The Art of Computer Programming, Vol. 3: Sorting and Searching, identified a technique called "randomized striping" as the method of choice for sorting with parallel disks. This is a slightly modified version of mergesort optimised for use when the data to be sorted must be read from multiple disks.
If you have multiple cores to play with, mergesort can be optimised further. The conventional parallel algorithm performs poorly because the number of participating processors is successively halved, dropping to a single processor for the final merging stage. An optimised parallel merge sort would keep all processors busy throughout the computation, achieving a speedup of (P-1)/log(P), where P is the number of processors.
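For illustration, here's a basic fork/join sketch in Java (the class name and threshold are my own inventions, and it assumes the mergeInPlace routine from the listing above is accessible). Note that this is the conventional scheme: the merges are still sequential, so it suffers from exactly the diminishing-parallelism problem just described; an optimised version would parallelise the merge step as well.

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelMergeSort extends RecursiveAction {
    private static final int SEQUENTIAL_THRESHOLD = 1 << 13; // below this, don't bother forking
    private final int[] data;
    private final int lo, hi;

    ParallelMergeSort(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected void compute() {
        if (hi - lo + 1 <= SEQUENTIAL_THRESHOLD) {
            Arrays.sort(data, lo, hi + 1);   // small ranges: sort sequentially
            return;
        }
        int half = (hi - lo + 1) / 2;
        // sort the two halves as separate tasks, potentially on different cores
        invokeAll(new ParallelMergeSort(data, lo, lo + half - 1),
                  new ParallelMergeSort(data, lo + half, hi));
        mergeInPlace(data, lo, lo + half, hi); // the (sequential) merge from the listing above
    }

    public static void sort(int[] data) {
        ForkJoinPool.commonPool().invoke(new ParallelMergeSort(data, 0, data.length - 1));
    }
}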
Quicksort is another very popular divide-and-conquer sorting algorithm. It works by selecting a pivot item, shuffling "lesser" items to positions below the pivot and "greater" items to positions above it. Leaving the pivot in place, it then recursively applies the same procedure to the "lesser" and "greater" sub-lists created when the input was partitioned about the pivot.
Here is my version of quick sort in non-generic Java:
public static void quickSort(int[] input, int start, int finish) {
    if (finish - start < 1)
        return;

    // pick a pivot - capture its value, since the swaps below may move it
    int pivot = input[(finish + start) / 2];

    // partition about the pivot
    int i = start, j = finish;
    while (i <= j) {
        while (input[i] < pivot) i++;
        while (input[j] > pivot) j--;
        if (i <= j) {
            int temp = input[i];
            input[i] = input[j];
            input[j] = temp;
            i++; j--;
        }
    }

    if (start < i - 1)
        quickSort(input, start, i - 1);
    if (i < finish)
        quickSort(input, i, finish);
}
And here it is in a generic C# flavour:
public static void QuickSort<T>(T[] input)
    where T : IComparable<T>
{
    QuickSort(input, 0, input.Length - 1);
}

public static void QuickSort<T>(T[] input, int start, int finish)
    where T : IComparable<T>
{
    if (finish - start < 1)
        return;

    // pick a pivot - capture its value, since the swaps below may move it
    T pivot = input[(finish + start) / 2];

    // partition about the pivot
    int i = start, j = finish;
    while (i <= j)
    {
        while (input[i].CompareTo(pivot) < 0) i++;
        while (input[j].CompareTo(pivot) > 0) j--;
        if (i <= j)
        {
            T temp = input[i];
            input[i] = input[j];
            input[j] = temp;
            i++; j--;
        }
    }

    if (start < i - 1)
        QuickSort(input, start, i - 1);
    if (i < finish)
        QuickSort(input, i, finish);
}
Run Time Complexity
Like the mergesort, quicksort has two components: partitioning about a pivot, and recursion. The first of these, partitioning, is clearly an O(n) operation for a set of size n, since we need to scan through all items and perform (n-1) comparisons. If this partition splits the set in half each and every time, then there will be log2(n) levels of recursion, leading to a run time complexity of O(n log2 n). This is the best case run time complexity for quicksort. However, the partitioning phase might not split the items exactly in half. In the worst case it might select a pivot that has all other (n-1) values above or below it. If such a 1:(n-1) split were to happen on every pass, we would have recursion n levels deep rather than log2(n) deep, and the total number of comparisons performed would be (n-1) + (n-2) + (n-3) + ... + 1 which, by our knowledge of arithmetic progressions, results in a run time complexity of O(n²). That is the worst case run time complexity for quicksort. With the middle-element pivot used above, the chances of this partitioning happening are pretty remote: the pivot selected would have to be the maximum or minimum value in the set every single time. Other implementations just pick the first item in the set as the pivot, but in that case, if the data is already sorted, you will encounter this repeated 1:(n-1) partitioning and thus worst-case quadratic time complexity, as the sketch below demonstrates.
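To see the effect of the pivot choice, here's a small experiment of my own (names hypothetical) that uses the same partitioning scheme as above and measures the maximum recursion depth on already-sorted input, with either a middle-element or first-element pivot:

public class PivotDepthDemo {
    static int maxDepth;

    static void quickSort(int[] a, int start, int finish, boolean middlePivot, int depth) {
        maxDepth = Math.max(maxDepth, depth);
        if (finish - start < 1) return;
        int pivot = a[middlePivot ? (start + finish) / 2 : start];
        int i = start, j = finish;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++; j--;
            }
        }
        if (start < i - 1) quickSort(a, start, i - 1, middlePivot, depth + 1);
        if (i < finish) quickSort(a, i, finish, middlePivot, depth + 1);
    }

    public static void main(String[] args) {
        int n = 4096;
        for (boolean middlePivot : new boolean[] { true, false }) {
            int[] sorted = new int[n];
            for (int k = 0; k < n; k++) sorted[k] = k;   // already-sorted input
            maxDepth = 0;
            quickSort(sorted, 0, n - 1, middlePivot, 1);
            System.out.println((middlePivot ? "middle" : "first") + " pivot, max depth: " + maxDepth);
        }
    }
}

The middle pivot should report a depth of around log2(4096) = 12, while the first-element pivot degenerates to a depth of roughly n.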
So we've defined the best case and the worst case; what is the average case, I hear you ask? It turns out that the average case run time complexity is also ~O(n log2 n). To derive it you need to solve the recurrence relationship that considers all possible choices of pivot but, rather than detail that here, you can see the analysis on Wikipedia, or a more complete treatment in Section 7.4 of Introduction to Algorithms [Cormen et al.].
As with the mergesort, the space complexity of quicksort is determined by the amount of auxiliary memory needed for interim workings, plus the memory required for the recursive call stack. Again it's implementation-specific, but in the implementation above the partitioning is done in-place, so apart from the memory required to store the original input, the memory consumed is purely for the call stack. This implies a space complexity of O(log2(n)) for the best and average cases. In the worst case, as stated earlier, quicksort makes recursive calls n levels deep, which would indicate O(n) space. However, the second recursive call sits in tail position and can be transformed into a loop - some compilers perform this tail call elimination automatically. If, in addition, you always recurse into the smaller partition and iterate over the larger one, the worst case stack depth, and hence the space complexity, drops from O(n) to O(log2(n)), as sketched below.
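Here's an illustrative sketch of that transformation (my own variant of the Java version above, not from the original article): the larger partition is handled by the loop and only the smaller one is handled recursively, so the stack can never grow beyond O(log2(n)) frames:

public static void quickSortBoundedStack(int[] input, int start, int finish) {
    while (start < finish) {
        // partition exactly as before
        int pivot = input[(finish + start) / 2];
        int i = start, j = finish;
        while (i <= j) {
            while (input[i] < pivot) i++;
            while (input[j] > pivot) j--;
            if (i <= j) {
                int temp = input[i]; input[i] = input[j]; input[j] = temp;
                i++; j--;
            }
        }
        // recurse into the smaller partition; loop around for the larger one
        if (i - start < finish - i + 1) {
            quickSortBoundedStack(input, start, i - 1);
            start = i;                 // continue with [i, finish]
        } else {
            quickSortBoundedStack(input, i, finish);
            finish = i - 1;            // continue with [start, i-1]
        }
    }
}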
To be continued...
UPDATE (Oct 2009):
See these algorithms in F# here.