Posted on April 16, 2023 at 18:51 (GMT +00:00) by
Colin
Continuing from
Sort and search arrays in C / C++ - part 1. If you haven't already read it, I suggest you do that first.
In this article I will start to look at methods of partitioning arrays to speed up sorting values in ascending order. The basic idea of partitioning is to split the sorting of the array into smaller "partitions" that can be sorted independently.
First I will create a new sort procedure and name it array_dacsort, literally meaning divide-and-conquer sort.
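The original listing isn't reproduced here, but a minimal sketch of such a divide-and-conquer procedure might look like the following. The partition scheme (Lomuto, with the last element as the pivot) is my assumption for illustration:

```cpp
#include <utility> // std::swap

// Divide-and-conquer sort over arr[low..high] inclusive.
// Partition scheme here is an assumption (Lomuto, last element as pivot).
void array_dacsort(int arr[], int low, int high)
{
    if (low >= high)
        return; // zero or one element: nothing to sort

    int pivot = arr[high]; // take the last element as the pivot
    int i = low;           // boundary of the "less than pivot" region

    for (int j = low; j < high; ++j) {
        if (arr[j] < pivot)
            std::swap(arr[i++], arr[j]); // move smaller values to the left
    }
    std::swap(arr[i], arr[high]); // put the pivot in its final position

    array_dacsort(arr, low, i - 1);  // recurse into the left partition
    array_dacsort(arr, i + 1, high); // recurse into the right partition
}
```

Each call places one pivot in its final position and then recurses into the two partitions on either side of it.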
The procedure is fairly simple, especially as it just calls itself recursively. After writing this procedure and doing some research on my implementation, it turns out this algorithm is known as quicksort. I used the code from the previous article to test the speed under exactly the same conditions as the previous sort procedure, and this one came in at 0.006s across consecutive runs - a HUGE difference.
After reading about quicksort and looking at similar implementations, I read more about an algorithm known as heapsort, which is considered a better choice in embedded systems where memory usage may be limited.
After reading a few articles on heap sort and what it actually does, they all seemed to implement a variation of a re-arrangement loop, often called "heapify", so the first step was to write the code for this:
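The routine I ended up with looks roughly like this. It is the standard "sift-down" over a max-heap stored in the array; the exact signature is my sketch:

```cpp
#include <utility> // std::swap

// Sift the value at index i down until the subtree rooted at i
// satisfies the max-heap property. n is the size of the heap.
void heapify(int arr[], int n, int i)
{
    int largest = i;       // assume the parent holds the largest value
    int left = i * 2 + 1;  // left child index
    int right = i * 2 + 2; // right child index

    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;

    if (largest != i) {
        std::swap(arr[i], arr[largest]); // move the larger child up
        heapify(arr, n, largest);        // continue down the affected subtree
    }
}
```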
To understand this, you ideally need to understand binary trees and read up on heap operations. Here we treat the array as a binary tree: the left child of the node at index i is always at i*2+1, and the right child at i*2+2, where i is the index of the parent.
Now we need to start building and sorting the heap. We start from the last parent node, halfway down the array, and decrement towards the root, calling our heapify procedure at each step to re-arrange the values.
Next we actually sort the re-arranged array using the same heapify procedure: we repeatedly swap the root (the largest value) with the last element of the heap, shrink the heap by one, and heapify the root again, working from the upper end of the array down to the lower.
The final implementation of our heap sort with some slight changes looks like this:
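Since the original listing isn't shown here, this is a sketch of how the two phases described above fit together. The procedure name array_heapsort is my assumption, chosen to match the naming of the earlier sort:

```cpp
#include <utility> // std::swap

// Sift-down: restore the max-heap property for the subtree
// rooted at index i within a heap of size n.
void heapify(int arr[], int n, int i)
{
    int largest = i;
    int left = i * 2 + 1;
    int right = i * 2 + 2;

    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;

    if (largest != i) {
        std::swap(arr[i], arr[largest]);
        heapify(arr, n, largest);
    }
}

// Heap sort: build a max-heap, then repeatedly move the largest
// element to the end of the array and shrink the heap.
void array_heapsort(int arr[], int n)
{
    // Build phase: start from the last parent node and work backwards.
    for (int i = n / 2 - 1; i >= 0; --i)
        heapify(arr, n, i);

    // Sort phase: the root is always the current maximum.
    for (int i = n - 1; i > 0; --i) {
        std::swap(arr[0], arr[i]); // move the max to its final position
        heapify(arr, i, 0);        // re-heapify the reduced heap
    }
}
```

Unlike the recursive quicksort, this sorts in place with no extra partition bookkeeping, which is why it suits memory-constrained targets.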
This one took a little longer for me to wrap my head around, but so far it works! Now for speed testing with the same setup as earlier.
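The timing harness from part 1 isn't reproduced here, but a minimal equivalent might look like the sketch below. std::chrono and std::sort (standing in for the sort under test) are my choices, not necessarily what part 1 used:

```cpp
#include <algorithm> // std::sort
#include <chrono>    // std::chrono::steady_clock
#include <cstdlib>   // std::rand
#include <vector>

// Hypothetical timing helper; returns the elapsed wall-clock
// time in seconds for one run over n random values.
double time_sort(int n)
{
    std::vector<int> values(n);
    for (int &v : values)
        v = std::rand(); // fill with random values, as in part 1

    auto start = std::chrono::steady_clock::now();
    std::sort(values.begin(), values.end()); // stand-in for the sort under test
    auto end = std::chrono::steady_clock::now();

    return std::chrono::duration<double>(end - start).count();
}
```

Running this a handful of times with n = 50000 and averaging gives a reasonable like-for-like comparison between the sorts.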
This sorted the 50000 random values at 0.009s across 6 consecutive runs.
That wraps up this article for now. In the next follow-up I plan to move on to the searching algorithms, but I may also swing back to sorting to build some kind of hybrid algorithm, and maybe turn everything into a fully fledged class for re-usability.
Some additional notes for myself - I may also try writing sorting and searching algorithms for different data types.
Thank you for reading! Please send me any suggestions or recommendations you have, and if you've found this article interesting, drop me a comment below!
That's all for this entry into the Captain's log for now, keep checking back for more!