Modify the class Date given below so that it implements the Comparable interface in a reasonable way.
public class Date {
    private final int month, day, year;

    public Date(int m, int d, int y) {
        month = m; day = d; year = y;
    }
    ⋮
}
public class Date implements Comparable<Date> {
    private final int month, day, year;

    public Date(int m, int d, int y) {
        month = m; day = d; year = y;
    }
    ⋮
    public int compareTo(Date that) {
        if (this.year  < that.year)  return -1;
        if (this.year  > that.year)  return  1;
        if (this.month < that.month) return -1;
        if (this.month > that.month) return  1;
        if (this.day   < that.day)   return -1;
        if (this.day   > that.day)   return  1;
        return 0;
    }
}
Show the state of an array with elements {f,b,a,h,c,d,g,e} ...
after each exchange in a selection sort.
after each sequence of exchanges required to accomplish each insertion in an insertion sort.
Order before selection sort application:

f b a h c d g e
0 1 2 3 4 5 6 7
------------------------
a b f h c d g e
a b f h c d g e
a b c h f d g e
a b c d f h g e
a b c d e h g f
a b c d e f g h
a b c d e f g h
a b c d e f g h

Order after selection sort application: a b c d e f g h
Order before insertion sort application:

f b a h c d g e
0 1 2 3 4 5 6 7
------------------------
f b a h c d g e
b f a h c d g e
a b f h c d g e
a b f h c d g e
a b c f h d g e
a b c d f h g e
a b c d f g h e
a b c d e f g h

Order after insertion sort application: a b c d e f g h
What's the average runtime cost of insertion sort in Big-O notation?
$O(n^2)$
What is the best case and worst case time complexity in Big-O notation for the selection sort algorithm?
Both the best case and the worst case are $O(n^2)$, since selection sort always scans the entire unsorted portion of the array to find the minimum, regardless of the input order.
When using the element at index $0$ as the pivot and "ends-to-middle" partitioning, at what index is the pivot ultimately placed in an array of 15 equal values?
The pivot is ultimately placed at index $8$.
Which of the following is the least costly way to sort an already sorted array?
An insertion sort on an already sorted array uses only $\sim n$ comparisons, no exchanges, and accomplishes the sort in-place and in a stable way. Without pre-shuffling, an already sorted array is quicksort's worst case with $\sim n^2/2$ comparisons.
With pre-shuffling, quicksort will still need $\sim 2n \ln n$ comparisons on average, slightly fewer exchanges, and is not stable (although it is done in-place).
Merge sort similarly requires $\sim n \log_2 n$ comparisons, twice the memory space -- assuming an auxiliary array the size of the original is created -- and requires lots of copying of array elements (although it is a stable sort).
While for large arrays, quicksort and merge sort typically crush insertion sort in terms of performance, in this odd circumstance, insertion sort is the clear winner.
Create a class BubbleSorter that implements the Sorter interface, where the sorting done by the required sort() method is implemented with a bubble sort.
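A minimal sketch of what such a class might look like is shown below. Note that the Sorter interface here is an assumption -- the interface provided with the assignment may declare a different signature; this sketch assumes a single void sort(Comparable[] a) method.

```java
import java.util.Arrays;

// Hypothetical stand-in for the provided Sorter interface; the real
// interface may differ.
interface Sorter {
    void sort(Comparable[] a);
}

public class BubbleSorter implements Sorter {
    // Bubble sort: repeatedly sweep the array, exchanging adjacent
    // out-of-order pairs; after each pass the largest remaining element
    // has "bubbled" to the end of the unsorted region.
    public void sort(Comparable[] a) {
        int n = a.length;
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 1; i < n; i++) {
                if (a[i - 1].compareTo(a[i]) > 0) {
                    Comparable tmp = a[i - 1];
                    a[i - 1] = a[i];
                    a[i] = tmp;
                    swapped = true;
                }
            }
            n--;  // the last element of each pass is in its final position
        }
    }

    public static void main(String[] args) {
        Character[] a = {'f', 'b', 'a', 'h', 'c', 'd', 'g', 'e'};
        new BubbleSorter().sort(a);
        System.out.println(Arrays.toString(a));  // [a, b, c, d, e, f, g, h]
    }
}
```

The swapped flag lets the sort stop early once a full pass makes no exchanges, which is what makes bubble sort cheap on an already sorted array.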
Create a class MergeSortNR that implements the Sorter interface, where the sorting done by the required sort() method is implemented with a non-recursive merge sort.
The non-recursive and recursive versions of merge sort are very similar -- except that in a non-recursive merge sort, one starts with a pass of 1-by-1 merges (merge adjacent subarrays of size 1, to make subarrays of size 2), then makes a pass of 2-by-2 merges (merge adjacent subarrays of size 2 to make subarrays of size 4), then 4-by-4 merges, and so forth, until one does a merge that encompasses the whole array.
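The passes described above can be sketched as follows. This is one possible shape for the sort, not the assignment's required implementation; a static sort() method and a standard merge() helper are assumed for simplicity.

```java
import java.util.Arrays;

public class MergeSortNR {
    // Bottom-up (non-recursive) merge sort: merge adjacent subarrays of
    // size 1, then 2, then 4, ..., doubling until a single merge spans
    // the whole array.
    public static void sort(Comparable[] a) {
        int n = a.length;
        Comparable[] aux = new Comparable[n];
        for (int size = 1; size < n; size *= 2) {            // pass of size-by-size merges
            for (int lo = 0; lo < n - size; lo += 2 * size)  // lo: start of the left subarray
                merge(a, aux, lo, lo + size - 1, Math.min(lo + 2 * size - 1, n - 1));
        }
    }

    // Standard merge of sorted a[lo..mid] with sorted a[mid+1..hi],
    // using aux[] as scratch space.
    private static void merge(Comparable[] a, Comparable[] aux, int lo, int mid, int hi) {
        for (int k = lo; k <= hi; k++) aux[k] = a[k];
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if      (i > mid)                       a[k] = aux[j++];
            else if (j > hi)                        a[k] = aux[i++];
            else if (aux[j].compareTo(aux[i]) < 0)  a[k] = aux[j++];
            else                                    a[k] = aux[i++];
        }
    }

    public static void main(String[] args) {
        Character[] a = {'f', 'b', 'a', 'h', 'c', 'd', 'g', 'e'};
        sort(a);
        System.out.println(Arrays.toString(a));  // [a, b, c, d, e, f, g, h]
    }
}
```

The Math.min(...) in the inner loop handles the final, possibly shorter, subarray on the right when n is not a power of 2.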
The non-recursive merge sort should outperform the recursive merge sort, due to less overhead from recursive calls. Run the class SortTester to compare the MergeSortNR you just wrote to other sorting methods, as found in InsertionSorter, SelectionSorter, MergeSorter, and QuickSorter, to see how your implementation compares.
In the SortTester class, all of the sorting methods are applied to random arrays of various sizes (all powers of 2) one hundred times each, and the total times required for sorting by each method are reported for each array size.
Note: in order to collect execution times, SortTester uses the Stopwatch class.
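The Stopwatch class supplied with the assignment has its own API, but the idea behind such a timer can be sketched in a few lines; the elapsedTime() method name here is an assumption.

```java
// Minimal sketch of a Stopwatch-style timer: record a start time at
// construction, and report elapsed seconds on demand.
public class Stopwatch {
    private final long start = System.currentTimeMillis();

    // Elapsed time, in seconds, since this Stopwatch was created.
    public double elapsedTime() {
        return (System.currentTimeMillis() - start) / 1000.0;
    }

    public static void main(String[] args) {
        Stopwatch timer = new Stopwatch();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;  // some work to time
        System.out.println("sum = " + sum + ", elapsed = " + timer.elapsedTime() + "s");
    }
}
```

A tester would create a Stopwatch just before calling sort() and read elapsedTime() just after, accumulating the totals reported in the table below.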
Tweak the code in your MergeSortNR class until it generally does better than the MergeSorter class (at least for larger-sized arrays). Times seen will vary, depending on many factors -- including the type of machine on which your code is executed. However, a sample run is shown below.
$ java SortTester↵
Size Ins Sel Mrg MrgNR Quick
4 0.001 0.000 0.000 0.000 0.003
8 0.000 0.001 0.000 0.002 0.000
16 0.003 0.001 0.001 0.000 0.004
32 0.000 0.003 0.000 0.003 0.005
64 0.004 0.004 0.008 0.003 0.001
128 0.015 0.012 0.013 0.003 0.005
256 0.045 0.059 0.013 0.014 0.027
512 0.056 0.070 0.004 0.003 0.008
1024 0.103 0.094 0.009 0.015 0.013
2048 0.402 0.377 0.025 0.031 0.030
4096 1.650 1.480 0.073 0.065 0.048
8192 6.718 6.144 0.137 0.136 0.110
When you are done, plot the results in a graph. (You can use Google Sheets, Microsoft Excel, OpenOffice Calc, or other tools to this end.) Does what you see agree with the Big-O analysis of each of these algorithms' average running times?
Complete the class Quick3WaySorter, which implements the Sorter interface, uses 3-way quicksort partitioning to sort the array, and shows the state of the array after each partition, as suggested in the sample run provided below.
$ java Quick3WaySorter↵
Enter a string of characters to sort: PABXWPPVPDPCYZ
[A, B, C, D, P, P, P, P, P, V, W, Y, Z, X]
[A, C, D, B, P, P, P, P, P, V, W, Y, Z, X]
[A, B, C, D, P, P, P, P, P, V, W, Y, Z, X]
[A, B, C, D, P, P, P, P, P, V, Y, Z, X, W]
[A, B, C, D, P, P, P, P, P, V, W, X, Y, Z]
[A, B, C, D, P, P, P, P, P, V, W, X, Y, Z]
In addition to using lo and hi to indicate the positions of the left-most and right-most ends of the section of the array a[] that we are partitioning, keep track of 3 more positions:
Then, simply compare the pivot with the element of the array at this last position; make an appropriate exchange if warranted; and update these three values, as needed.
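As an illustration of what one such pass accomplishes, here is a sketch of Dijkstra-style 3-way partitioning using the common names lt, gt, and i for the three extra positions -- the names (and class name Quick3WayDemo) are assumptions, and your own code may organize things differently.

```java
import java.util.Arrays;

public class Quick3WayDemo {
    // 3-way partitioning of a[lo..hi] around the pivot v = a[lo].
    // Invariant maintained during the scan:
    //   a[lo..lt-1] < v,   a[lt..i-1] == v,   a[gt+1..hi] > v,
    //   a[i..gt] not yet examined.
    static void partition(Comparable[] a, int lo, int hi) {
        int lt = lo;       // left end of the "equal to pivot" region
        int gt = hi;       // right end of the "not yet examined" region
        int i  = lo + 1;   // position of the element being compared to the pivot
        Comparable v = a[lo];
        while (i <= gt) {
            int cmp = a[i].compareTo(v);
            if      (cmp < 0) exch(a, lt++, i++);  // a[i] < pivot: grow left region
            else if (cmp > 0) exch(a, i, gt--);    // a[i] > pivot: grow right region
            else              i++;                 // a[i] == pivot: leave in place
        }
    }

    static void exch(Comparable[] a, int i, int j) {
        Comparable t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        String s = "PABXWPPVPDPCYZ";
        Character[] a = new Character[s.length()];
        for (int k = 0; k < s.length(); k++) a[k] = s.charAt(k);
        partition(a, 0, a.length - 1);
        // Matches the first line of the sample run above:
        System.out.println(Arrays.toString(a));
        // [A, B, C, D, P, P, P, P, P, V, W, Y, Z, X]
    }
}
```

Each step compares a[i] with the pivot and either exchanges it toward one end or leaves it in place, updating lt, i, and gt as needed -- exactly the compare/exchange/update cycle described above.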