Doubling vs. incremental growth strategy when implementing a stack using a linked list or an array?


What is the difference between amortized and average complexity? Also, what are the doubling and incremental growth strategies when implementing a stack using a linked list or an array?

1 Answer

Answer by templatetypedef:

Typically, average-case complexity and amortized complexity refer to different concepts. Average-case complexity describes the expected runtime of an algorithm or data structure, where the expectation is taken either over the algorithm's own random choices or over a probability distribution on the inputs. For example, we can talk about the average-case runtime of randomized quicksort (where the pivot is chosen randomly), or the average-case runtime of a skip list.
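
As a concrete illustration of average-case analysis, here is a minimal randomized quicksort sketch in Python (the function name is mine, not from the original answer); its expected number of comparisons, taken over the random pivot choices, is O(n log n) for every input:

```python
import random

def randomized_quicksort(items):
    """Quicksort with a uniformly random pivot.

    The expected runtime, averaged over the random pivot choices,
    is O(n log n) for *every* input -- that is what "average-case"
    means for a randomized algorithm.
    """
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```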

Amortized complexity, on the other hand, usually refers to the ratio of the total work done by some series of operations to the total number of operations performed. For example, in a dynamic array that doubles its size whenever more space is needed, an individual append might do O(n) work copying the existing elements over. However, in doing so, it guarantees that the next n appends will each complete in O(1) time because space is already available. Consequently, any sequence of n appends takes O(n) time in total, so the amortized cost of each operation is O(1): the ratio of the total work (O(n)) to the number of operations (n).
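
To make the doubling behavior concrete, here is a minimal sketch of a dynamic-array stack in Python (the class and method names are illustrative, not from the original answer). A real Python list already resizes itself, so the fixed-capacity list of None slots below merely simulates a raw array:

```python
class ArrayStack:
    """Stack backed by a fixed-capacity array that doubles when full."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity  # simulated raw array

    def push(self, value):
        if self._size == self._capacity:
            self._grow(2 * self._capacity)    # O(n) copy, but rare
        self._data[self._size] = value
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("pop from empty stack")
        self._size -= 1
        value = self._data[self._size]
        self._data[self._size] = None
        return value

    def _grow(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self._size):           # copy all n elements over
            new_data[i] = self._data[i]
        self._data = new_data
        self._capacity = new_capacity
```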

The two may seem similar, but they are fundamentally different. Average-case complexity refers to the amount of work done on average, given an underlying probability distribution of runtimes. Amortized complexity refers to the average amount of work done by a series of operations, based on the fact that certain "expensive" operations can be paid for by "cheaper" ones. This is why some data structures (for example, chained hash tables with rehashing) have operations that take amortized O(1) time in the average case: on average, any sequence of n operations on the data structure takes O(n) time.

As to your next question about doubling versus incremental growth: the doubling strategy ensures O(1) amortized complexity for push operations, while the incremental growth strategy does not. Intuitively, if you double the size of the array whenever you need more space, then after doing the work to copy n elements into an array of size 2n, the next n pushes will be "cheap" because space is already available. Consequently, any series of n operations takes O(n) time, and each operation takes amortized O(1) time. With some trickier math, you can show that growing by any factor greater than one leads to O(1) amortized time.
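
One way to sketch that "trickier math" (an aggregate-method calculation of my own, not part of the original answer): starting from capacity 1 and doubling, the copies performed over n pushes form a geometric series.

```latex
% Copies happen when the capacity doubles from 2^i to 2^{i+1},
% and each such resize copies the 2^i elements currently stored.
\[
  \text{total work}
    \;\le\; \underbrace{n}_{\text{element writes}}
    \;+\; \underbrace{\sum_{i=0}^{\lfloor \log_2 n \rfloor} 2^i}_{\text{copies at each doubling}}
    \;\le\; n + 2n \;=\; O(n),
\]
\[
  \text{amortized cost per push} \;=\; \frac{O(n)}{n} \;=\; O(1).
\]
% Growing by any factor c > 1 instead of 2 gives a geometric series
% bounded by (c / (c - 1)) * n, which is still O(n) in total.
```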

Incremental growth, on the other hand, doesn't give this guarantee. Think of it this way: if you grow by adding k slots at a time and currently have an array of size 10000k, then growing costs 10000k work to make only the next k operations fast. After doing those k operations "cheaply," you have to do 10001k work to grow the array again. You can show that a series of n pushes will take Θ(n²) total work, which is why this strategy isn't recommended.
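
A quick empirical check of this asymmetry (an illustrative script of my own; the growth parameters are chosen arbitrarily) is to count how many element copies each strategy performs over n pushes:

```python
def copies_when_growing(n, grow):
    """Count element copies over n pushes for a given growth rule.

    `grow` maps the current capacity to the new capacity.
    """
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size                        # every element is copied on resize
            capacity = grow(capacity)
        size += 1
    return copies

n = 100_000
print(copies_when_growing(n, lambda c: 2 * c))    # doubling: on the order of 2n
print(copies_when_growing(n, lambda c: c + 100))  # add k = 100: on the order of n^2 / (2k)
```

For n = 100,000, the doubling rule performs on the order of 2n copies, while the add-k rule performs on the order of n²/(2k), several orders of magnitude more.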

Finally, you asked about arrays versus linked lists. With a linked list, you don't need to worry about doubling or incremental growth at all, because you can allocate a new cell and chain it on in O(1) time. Linked lists are worst-case efficient structures rather than amortized efficient structures, though their constant factors are higher.
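
For contrast, here is a minimal linked-list stack sketch (again with illustrative names of my own): every push allocates one node and relinks the head, so each operation is O(1) in the worst case, not just amortized.

```python
class _Node:
    __slots__ = ("value", "next")

    def __init__(self, value, next):
        self.value = value
        self.next = next

class LinkedStack:
    """Stack as a singly linked list; push and pop are worst-case O(1)."""

    def __init__(self):
        self._head = None

    def push(self, value):
        self._head = _Node(value, self._head)  # one allocation, one link

    def pop(self):
        if self._head is None:
            raise IndexError("pop from empty stack")
        value = self._head.value
        self._head = self._head.next
        return value
```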

Hope this helps!