한글 문서 | English Document


GoSortingAlgorithms


Various Sorting Algorithms with golang :octocat:



🔰 Installation

Once you have installed Go, run the go get command to install the GoSortingAlgorithms package:

$ go get github.com/kdgyun/GoSortingAlgorithms



📖 Usage

It's simple!

```go
package main

import (
	"github.com/kdgyun/GoSortingAlgorithms/sorts"

	crand "crypto/rand"
	"fmt"
	"math"
	"math/big"
	"math/rand"
	"sort"
)

// SliceBuilder generates a slice of n random integers.
func SliceBuilder(n int) []int {
	seed, _ := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
	rand.Seed(seed.Int64())

	slice := make([]int, 0, n)
	for i := 0; i < n; i++ {
		slice = append(slice, rand.Int())
	}
	return slice
}

func main() {
	slice := SliceBuilder(1000000)

	sorts.QuickSort(slice) // sorts.____(slice []int)

	isSorted := sort.SliceIsSorted(slice, func(i, j int) bool {
		return slice[i] < slice[j]
	})
	fmt.Print(isSorted)
}
```



🧪 Simply Test

It's easy to run a test of all sorting algorithms with simplytest.

```go
package main

import (
	"github.com/kdgyun/GoSortingAlgorithms/simplytest"
)

func main() {
	simplytest.RunTest()
}
```


Example of Output


```
+==================================================================================+
|                                   Random Test                                    |
+==================================================================================+

...

[array length : 100000]
make arrays...
running bubble sort...
running cocktail sort...

...

running intro sort...
running parallel intro sort...
running cycle sort...

+-------------------------------------------------------------------------------------------------+
|                                name |                      ns |                 ms |     verify |      (err mag)
|-------------------------------------------------------------------------------------------------|
|                         bubble sort |          13016202738 ns |           13016 ms |       true |
|-------------------------------------------------------------------------------------------------|
|                       cocktail sort |           8922656474 ns |            8922 ms |       true |
|-------------------------------------------------------------------------------------------------|
|                            tim sort |             11419013 ns |              11 ms |       true |
|-------------------------------------------------------------------------------------------------|
|                        bitonic sort |             32333072 ns |              32 ms |       true |
|-------------------------------------------------------------------------------------------------|

...

|-------------------------------------------------------------------------------------------------|
|                          intro sort |              6665792 ns |               6 ms |       true |
|-------------------------------------------------------------------------------------------------|
|                 parallel intro sort |              2537508 ns |               2 ms |       true |
|-------------------------------------------------------------------------------------------------|
|                          cycle sort |          20209957747 ns |           20209 ms |       true |
+-------------------------------------------------------------------------------------------------+
```



Option

There is an option.go file so that users can easily adjust the tests. It exposes three major options.

  1. Select the sorting algorithms. To enable or disable a specific sorting algorithm, find its name and set the variable to true or false (true: enable, false: disable).

    ex) SHELL_SORT Activate = false

  2. Change the lengths of the slices to be tested. Set the 'lengths' variable to the desired values.

    ex) var lengths = [...]int{35, 50000, 100000}

  3. Enable or disable the data types of the slices to be tested. By default, slices of ascending, descending, and random data are all tested. To disable a specific data type, set its variable to false.

    ex) ASCENDING_TEST Activate = false



🔎 SUMMARY

Algorithms covered so far:

| Name | Function name |
|:---|:---|
| Bubble Sort | BubbleSort |
| Cocktail Sort | CocktailSort |
| Insertion Sort | InsertionSort |
| Selection Sort | SelectionSort |
| Shell Sort | ShellSort |
| Merge Sort (bottom-up) | BottomUpMergeSort |
| Merge Sort (top-down) | TopDownMergeSort |
| Merge Sort (parallel) | ParallelMergeSort |
| Heap Sort | HeapSort |
| Quick Sort (left-pivot) | QuickSortLP |
| Quick Sort (middle-pivot) | QuickSort |
| Quick Sort (right-pivot) | QuickSortRP |
| Quick Sort (left-pivot with parallel) | ParallelQuickSortLP |
| Quick Sort (middle-pivot with parallel) | ParallelQuickSort |
| Quick Sort (right-pivot with parallel) | ParallelQuickSortRP |
| Dual-pivot Quick Sort | DualPivotQuickSort |
| Dual-pivot Quick Sort (parallel) | ParallelDualPivotQuickSort |
| Binary Insertion Sort | BinarySort |
| Tim Sort | TimSort |
| Bitonic Sort | BitonicSort |
| Bitonic Sort (parallel) | ParallelBitonicSort |
| Intro Sort | IntroSort |
| Intro Sort (parallel) | ParallelIntroSort |
| Cycle Sort | CycleSort |
| Odd-Even Sort | OddEvenSort |
| Odd-Even Merge Sort | OddEvenMergeSort |
| Odd-Even Merge Sort (parallel) | ParallelOddEvenMergeSort |
| Comb Sort | CombSort |


Bubble Sort


Bubble sort repeatedly steps through the list, comparing and swapping adjacent elements.

This implementation is optimized: if a full pass makes no swap, the slice (array) is already sorted and the method exits early. If you don't want the optimization, delete the swapped variable in the bubbleSort method.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n) or O(n^2) | Yes | Yes | total: O(n), auxiliary: O(1) |
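The early-exit optimization mentioned above can be sketched as follows. This is a minimal illustration of the idea, not the package's BubbleSort:

```go
package main

import "fmt"

// bubbleSort repeatedly swaps adjacent out-of-order elements.
// The swapped flag is the optimization described above: if a full
// pass makes no swap, the slice is already sorted and we return early.
func bubbleSort(s []int) {
	for i := len(s) - 1; i > 0; i-- {
		swapped := false
		for j := 0; j < i; j++ {
			if s[j] > s[j+1] {
				s[j], s[j+1] = s[j+1], s[j]
				swapped = true
			}
		}
		if !swapped {
			return // no swaps: already sorted
		}
	}
}

func main() {
	s := []int{5, 1, 4, 2, 8}
	bubbleSort(s)
	fmt.Println(s) // [1 2 4 5 8]
}
```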


Cocktail Sort


Cocktail sort is a variation of bubble sort. It is also known as cocktail shaker sort, bidirectional bubble sort, or shaker sort.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n) | Yes | Yes | total: O(n), auxiliary: O(1) |
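The bidirectional pass can be sketched as below; a minimal illustration, not the package's CocktailSort:

```go
package main

import "fmt"

// cocktailSort is bubble sort run in both directions: a left-to-right
// pass bubbles the maximum to the right end, then a right-to-left pass
// bubbles the minimum to the left end, shrinking the window from both sides.
func cocktailSort(s []int) {
	left, right := 0, len(s)-1
	swapped := true
	for swapped {
		swapped = false
		for i := left; i < right; i++ { // forward pass
			if s[i] > s[i+1] {
				s[i], s[i+1] = s[i+1], s[i]
				swapped = true
			}
		}
		right--
		for i := right; i > left; i-- { // backward pass
			if s[i-1] > s[i] {
				s[i-1], s[i] = s[i], s[i-1]
				swapped = true
			}
		}
		left++
	}
}

func main() {
	s := []int{3, 1, 4, 1, 5}
	cocktailSort(s)
	fmt.Println(s) // [1 1 3 4 5]
}
```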


Insertion Sort


Insertion sort iterates through the list, taking one element per iteration and inserting it into its correct position within the sorted part.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n) | Yes | Yes | total: O(n), auxiliary: O(1) |
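The insert-into-sorted-prefix step can be sketched as below; a minimal illustration, not the package's InsertionSort:

```go
package main

import "fmt"

// insertionSort grows a sorted prefix one element at a time,
// shifting larger elements right to make room for each new key.
func insertionSort(s []int) {
	for i := 1; i < len(s); i++ {
		key := s[i]
		j := i - 1
		for j >= 0 && s[j] > key {
			s[j+1] = s[j] // shift right
			j--
		}
		s[j+1] = key // drop key into its slot
	}
}

func main() {
	s := []int{3, 7, 1, 2}
	insertionSort(s)
	fmt.Println(s) // [1 2 3 7]
}
```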


Selection Sort


Selection sort finds the minimum element in the unsorted part at each iteration and places it at the front of that part.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n^2) | Yes | No | total: O(n), auxiliary: O(1) |
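The select-minimum step can be sketched as below; a minimal illustration, not the package's SelectionSort:

```go
package main

import "fmt"

// selectionSort scans the unsorted suffix for the index of its
// minimum, then swaps that minimum to the front of the suffix.
func selectionSort(s []int) {
	for i := 0; i < len(s)-1; i++ {
		min := i
		for j := i + 1; j < len(s); j++ {
			if s[j] < s[min] {
				min = j
			}
		}
		s[i], s[min] = s[min], s[i]
	}
}

func main() {
	s := []int{64, 25, 12, 22, 11}
	selectionSort(s)
	fmt.Println(s) // [11 12 22 25 64]
}
```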


Shell Sort


Shell sort is an extended version of insertion sort. It first sorts elements that are far apart from each other, then progressively reduces the gap between them.

*The applied gap sequence is the Ciura sequence.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) or O(nlog^2_n) | depends on gap sequence | O(nlog_n) or O(nlog^2_n) | Yes | No | total: O(n), auxiliary: O(1) |
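A gapped insertion sort over a fixed gap sequence can be sketched as below. This is a minimal illustration using only the base Ciura gaps; the package's ShellSort may extend the sequence for longer slices:

```go
package main

import "fmt"

// ciuraGaps is the empirically derived Ciura gap sequence noted above.
var ciuraGaps = []int{701, 301, 132, 57, 23, 10, 4, 1}

// shellSort runs a gapped insertion sort for each gap; the final
// gap of 1 is a plain insertion sort over nearly-sorted data.
func shellSort(s []int) {
	for _, gap := range ciuraGaps {
		for i := gap; i < len(s); i++ {
			key := s[i]
			j := i
			for ; j >= gap && s[j-gap] > key; j -= gap {
				s[j] = s[j-gap]
			}
			s[j] = key
		}
	}
}

func main() {
	s := []int{9, 8, 3, 7, 5, 6, 4, 1}
	shellSort(s)
	fmt.Println(s) // [1 3 4 5 6 7 8 9]
}
```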


Merge Sort


Merge sort is based on the divide-and-conquer paradigm.

Conceptually, a merge sort works as follows:

  1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
  2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.


It supports 3 types of algorithms:

  • top-down
    The top-down merge sort recursively splits the list into sublists until the sublist size is 1, then merges those sublists to produce a sorted list.
  • bottom-up
    The bottom-up merge sort treats the slice (array) as a list of n sublists of size 1 and iteratively merges sublists back and forth between two buffers.
  • parallel
    Through parallelization of recursive calls, the parallel merge sort recursively splits the list into sublists until the sublist size is smaller than the threshold, then merges the sublists to create a sorted list; sublists smaller than the threshold are sorted with the bottom-up variant.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(nlog_n) | O(nlog_n) | O(nlog_n) | No | Yes | total: O(n), auxiliary: O(n) |
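The two conceptual steps above (split to size 1, then merge) can be sketched as a top-down variant; a minimal illustration, not the package's TopDownMergeSort:

```go
package main

import "fmt"

// mergeSort splits the slice in half, sorts each half recursively,
// and merges the two sorted halves into a fresh slice.
func mergeSort(s []int) []int {
	if len(s) <= 1 {
		return s // a one-element list is already sorted
	}
	mid := len(s) / 2
	return merge(mergeSort(s[:mid]), mergeSort(s[mid:]))
}

// merge combines two sorted slices into one sorted slice.
func merge(a, b []int) []int {
	out := make([]int, 0, len(a)+len(b))
	for len(a) > 0 && len(b) > 0 {
		if a[0] <= b[0] { // <= keeps equal elements in order (stable)
			out = append(out, a[0])
			a = a[1:]
		} else {
			out = append(out, b[0])
			b = b[1:]
		}
	}
	return append(append(out, a...), b...)
}

func main() {
	fmt.Println(mergeSort([]int{5, 2, 9, 1, 3})) // [1 2 3 5 9]
}
```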


Heap Sort


Heap sort repeatedly removes the maximum element from the heap part of the list and inserts it into the sorted part of the list.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(nlog_n) | O(nlog_n) | O(nlog_n) | Yes | No | total: O(n), auxiliary: O(1) |
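The build-heap / extract-max cycle can be sketched as below; a minimal illustration, not the package's HeapSort:

```go
package main

import "fmt"

// heapSort builds a max-heap in place, then repeatedly swaps the
// root (the maximum) into the sorted suffix and re-heapifies.
func heapSort(s []int) {
	n := len(s)
	for i := n/2 - 1; i >= 0; i-- { // build the max-heap
		siftDown(s, i, n)
	}
	for end := n - 1; end > 0; end-- {
		s[0], s[end] = s[end], s[0] // move current max to the end
		siftDown(s, 0, end)         // restore the heap property
	}
}

// siftDown pushes s[root] down until the subtree rooted there is a heap.
func siftDown(s []int, root, end int) {
	for {
		child := 2*root + 1
		if child >= end {
			return
		}
		if child+1 < end && s[child+1] > s[child] {
			child++ // pick the larger child
		}
		if s[root] >= s[child] {
			return
		}
		s[root], s[child] = s[child], s[root]
		root = child
	}
}

func main() {
	s := []int{4, 10, 3, 5, 1}
	heapSort(s)
	fmt.Println(s) // [1 3 4 5 10]
}
```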


Quick Sort


Quick sort is based on the divide-and-conquer approach: it selects a pivot element from the array, partitions the other elements into two sub-arrays according to whether they are less than or greater than the pivot, and then sorts the sub-arrays recursively.

It supports 3 types of algorithms:

  • left-pivot (+parallel)
    Left-pivot quick sort selects the leftmost element as the pivot.
  • middle-pivot (+parallel)
    Middle-pivot quick sort selects the middle element as the pivot.
  • right-pivot (+parallel)
    Right-pivot quick sort selects the rightmost element as the pivot.

Through parallelization of recursive calls, each parallel quick sort recursively splits the list into sublists until the sublist size is smaller than the threshold; sublists below the threshold are sorted with the corresponding basic quick sort.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(nlog_n) | O(nlog_n) | Yes | No | total: O(n), auxiliary: O(log_n) |
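The middle-pivot partition-and-recurse scheme can be sketched as below (a Hoare-style partition); a minimal illustration, not the package's QuickSort:

```go
package main

import "fmt"

// quickSort partitions around the middle element's value, then
// recurses into the two resulting sub-slices.
func quickSort(s []int) {
	if len(s) < 2 {
		return
	}
	pivot := s[len(s)/2] // middle-pivot variant
	left, right := 0, len(s)-1
	for left <= right {
		for s[left] < pivot {
			left++
		}
		for s[right] > pivot {
			right--
		}
		if left <= right {
			s[left], s[right] = s[right], s[left]
			left++
			right--
		}
	}
	quickSort(s[:right+1])
	quickSort(s[left:])
}

func main() {
	s := []int{9, 4, 7, 1, 3}
	quickSort(s)
	fmt.Println(s) // [1 3 4 7 9]
}
```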


Dual-Pivot Quick Sort


Dual-pivot quick sort is based on the divide-and-conquer approach of quick sort, but it differs by selecting two elements (pivots) from the list to sort.

Pick two elements (pivots) from the array to be sorted, partition the remaining elements into those less than the lesser pivot, those between the pivots, and those greater than the greater pivot, and recursively sort these partitions.

It is also known as 3-way quick sort.



It supports 2 types of algorithms:

  • non-parallel
    Pick two elements (pivots) from the array to be sorted, partition the remaining elements into those less than the lesser pivot, those between the pivots, and those greater than the greater pivot, and recursively sort these partitions.
  • parallel
    Through parallelization of recursive calls, the parallel dual-pivot quick sort recursively splits the list into sublists until the sublist size is smaller than the threshold; sublists below the threshold are sorted with the basic dual-pivot quick sort.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(nlog_n) | O(nlog_n) | Yes | No | total: O(n), auxiliary: O(log_n) |


Binary Insertion Sort


Binary insertion sort is an extended version of insertion sort. It is also known as binary sort.

Binary Insertion Sort uses binary search to find the proper location to insert the selected item at each iteration.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n) | Yes | Yes | total: O(n), auxiliary: O(1) |
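The binary-search insertion step can be sketched with the standard library's sort.Search; a minimal illustration, not the package's BinarySort:

```go
package main

import (
	"fmt"
	"sort"
)

// binaryInsertionSort locates each key's insertion point with binary
// search instead of a linear scan, then shifts the tail right by one.
func binaryInsertionSort(s []int) {
	for i := 1; i < len(s); i++ {
		key := s[i]
		// First position whose element is greater than key; inserting
		// there keeps equal elements in their original order (stable).
		pos := sort.Search(i, func(j int) bool { return s[j] > key })
		copy(s[pos+1:i+1], s[pos:i]) // shift the tail right
		s[pos] = key
	}
}

func main() {
	s := []int{4, 2, 7, 1, 2}
	binaryInsertionSort(s)
	fmt.Println(s) // [1 2 2 4 7]
}
```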


Tim Sort


Timsort is a hybrid stable sorting algorithm, derived from merge sort and insertion sort (or binary insertion sort), designed to perform well on many kinds of real-world data. It was implemented by Tim Peters in 2002 for use in the Python programming language. The algorithm finds subsequences of the data that are already ordered (runs) and uses them to sort the remainder more efficiently.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(nlog_n) | O(nlog_n) | O(n) | Yes | Yes | total: O(n), auxiliary: O(n) |


Bitonic Sort


Bitonic sort is a comparison-based sorting algorithm that can be run in parallel. It focuses on converting a random sequence of numbers into a bitonic sequence, one that monotonically increases, then decreases.

It supports 2 types of algorithms:

  • non-parallel
    The non-parallel bitonic sort recursively splits the list into sublists until the sublist size is 1, then merges those sublists according to the bitonic sequencing rule to produce a sorted list.
  • parallel
    Through parallelization of recursive calls, the parallel bitonic sort recursively splits the list into sublists until the sublist size is smaller than the threshold, then switches to the non-parallel version to split and merge the sublists according to the bitonic sequencing rule.

COMPLEXITY

| Type | Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| non-parallel | O(nlog^2_n) | O(nlog^2_n) | O(nlog^2_n) | Yes | No | total: O(nlog^2_n), auxiliary: O(nlog^2_n) |
| parallel | O(log^2_n) | O(log^2_n) | O(log^2_n) | Yes | No | total: O(nlog^2_n), auxiliary: O(nlog^2_n) |


Intro Sort


Introsort is a hybrid sorting algorithm that provides both fast average performance and (asymptotically) optimal worst-case performance. It begins with quicksort, switches to heapsort when the recursion depth exceeds a level based on the number of elements being sorted (maximum depth: 2 * ceil(log2(n))), and switches to insertion sort when the number of elements is below some threshold (16).



It supports 2 types of algorithms:

  • non-parallel
    Pick an element (pivot) as in quick sort, partition the other elements into two sub-arrays according to whether they are less than or greater than the pivot, and recursively sort these partitions until the recursion depth exceeds the 2 * ceil(log2(n)) limit, at which point it switches to heapsort.
  • parallel
    Through parallelization of recursive calls, the parallel intro sort recursively splits the list into sublists until the sublist size is smaller than the threshold; sublists below the threshold are sorted with the basic intro sort.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(nlog_n) | O(nlog_n) | O(nlog_n) | Yes | No | total: O(n), auxiliary: O(log_n) |


Cycle Sort


Cycle sort is an in-place, unstable comparison sort that is theoretically optimal in terms of the total number of writes to the original array.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n^2) | Yes | No | total: O(n), auxiliary: O(1) |
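The write-minimizing idea (rotate each cycle so every element lands directly in its final position) can be sketched as below; a minimal illustration, not the package's CycleSort:

```go
package main

import "fmt"

// finalPos counts how many elements after start are smaller than item,
// which gives item's final index; duplicates already in place are skipped.
func finalPos(s []int, start int, item int) int {
	pos := start
	for i := start + 1; i < len(s); i++ {
		if s[i] < item {
			pos++
		}
	}
	for pos < len(s)-1 && s[pos] == item {
		pos++ // skip over equal elements
	}
	return pos
}

// cycleSort writes each element directly into its final slot,
// rotating whole cycles, so each slot is written at most once.
func cycleSort(s []int) {
	for start := 0; start < len(s)-1; start++ {
		item := s[start]
		pos := finalPos(s, start, item)
		if pos == start {
			continue // already in place, zero writes
		}
		s[pos], item = item, s[pos]
		for pos != start { // rotate the rest of the cycle
			pos = finalPos(s, start, item)
			s[pos], item = item, s[pos]
		}
	}
}

func main() {
	s := []int{5, 2, 2, 1}
	cycleSort(s)
	fmt.Println(s) // [1 2 2 5]
}
```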


Odd-Even Sort


In computing, an odd–even sort or odd–even transposition sort (also known as brick sort or parity sort) is a relatively simple sorting algorithm, developed originally for use on parallel processors with local interconnections.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2) | O(n^2) | Yes | Yes | total: O(n), auxiliary: O(1) |
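The alternating odd/even compare-exchange passes can be sketched sequentially as below; a minimal illustration, not the package's OddEvenSort (and without the parallelism the algorithm was designed for):

```go
package main

import "fmt"

// oddEvenSort alternates a pass over odd-indexed pairs and a pass
// over even-indexed pairs until a full round makes no swap.
func oddEvenSort(s []int) {
	sorted := false
	for !sorted {
		sorted = true
		for start := 1; start >= 0; start-- { // odd pass, then even pass
			for i := start; i+1 < len(s); i += 2 {
				if s[i] > s[i+1] {
					s[i], s[i+1] = s[i+1], s[i]
					sorted = false
				}
			}
		}
	}
}

func main() {
	s := []int{7, 3, 8, 1}
	oddEvenSort(s)
	fmt.Println(s) // [1 3 7 8]
}
```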


Odd-Even Merge Sort


Odd-even mergesort is a generic construction devised by Ken Batcher for sorting networks of size O(n(log_n)^2) and depth O((log_n)^2), where n is the number of items to be sorted.


COMPLEXITY

| Type | Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| non-parallel | O(nlog^2_n) | O(nlog^2_n) | O(nlog^2_n) | Yes | Yes | total: O(nlog^2_n), auxiliary: O(nlog^2_n) |
| parallel | O(log^2_n) | O(log^2_n) | O(log^2_n) | Yes | Yes | total: O(nlog^2_n), auxiliary: O(nlog^2_n) |


Comb Sort


Comb sort is a relatively simple sorting algorithm originally designed by Włodzimierz Dobosiewicz and Artur Borowy in 1980, later rediscovered (and given the name "Combsort") by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort in the same way that Shellsort improves on insertion sort.

COMPLEXITY

| Worst-Case | Average-Case | Best-Case | In-place | Stable | Space Complexity |
|:---:|:---:|:---:|:---:|:---:|:---:|
| O(n^2) | O(n^2/p^2) | O(nlog_n) | Yes | No | total: O(n), auxiliary: O(1) |
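The gap-shrinking improvement over bubble sort can be sketched as below, using the classic shrink factor of about 1.3; a minimal illustration, not the package's CombSort:

```go
package main

import "fmt"

// combSort compares elements a shrinking gap apart, dividing the gap
// by ~1.3 each pass; once the gap reaches 1 it degenerates into
// bubble sort passes that run until no swap occurs.
func combSort(s []int) {
	gap := len(s)
	swapped := true
	for gap > 1 || swapped {
		gap = gap * 10 / 13 // shrink factor of about 1.3
		if gap < 1 {
			gap = 1
		}
		swapped = false
		for i := 0; i+gap < len(s); i++ {
			if s[i] > s[i+gap] {
				s[i], s[i+gap] = s[i+gap], s[i]
				swapped = true
			}
		}
	}
}

func main() {
	s := []int{8, 4, 1, 56, 3, -44}
	combSort(s)
	fmt.Println(s) // [-44 1 3 4 8 56]
}
```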