Why and how does a bad pivot choice make Quicksort O(n^2)?

For example, when the pivot is the highest or lowest value in the array.
In the quicksort variant that uses two pointers, one starts at the left end and moves right while the other starts at the right end and moves left. A pointer stops when it finds an element on the wrong side with respect to the pivot; when both have stopped, the two elements are swapped and the pointers continue from those positions. But why and how does a bad pivot choice make Quicksort O(n^2)?

How does a bad pivot choice make Quicksort O(n^2)?
Let's say you always pick the smallest element as your pivot. The top-level iteration of quicksort will require n-1 comparisons and will split the array into two subarrays: one of size 1 and one of size n-1. The first one is already sorted, and you apply the quicksort recursively to the second one. Splitting the second one will require n-2 comparisons. And so on.
In total, you have (n-1) + (n-2) + ... + 1 = n * (n-1) / 2 = O(n^2) comparisons.
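To see the arithmetic concretely, here is a minimal sketch of my own (not the two-pointer variant from the question) that always picks the first element as the pivot and counts the pivot comparisons; on already-sorted input every level only peels off one element, so the count comes out to n * (n-1) / 2:

    import random

    def quicksort_comparisons(a):
        # Quicksort that always uses the first element as the pivot and returns
        # the number of pivot comparisons instead of the sorted list.
        if len(a) <= 1:
            return 0
        pivot, rest = a[0], a[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        # count one pivot comparison per remaining element at this level
        return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

    n = 500
    print(quicksort_comparisons(list(range(n))))              # sorted input: 124750 == n*(n-1)//2
    print(quicksort_comparisons(random.sample(range(n), n)))  # shuffled input: on the order of n*log(n)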

If your chosen pivot happened to be the maximal value in your subset on every recursion, the algorithm would simply move every record read into the subset below the pivot, and continue on with only one non-empty partition. This new subset's size would be only one less.
In that case, quicksort's operation would be similar to a selection sort: it would find a maximal value, put it where it belongs, and move on to the rest of the data in the next iteration. The difference is that selection sort deliberately searches for the maximal (or minimal) data point, whereas this worst-case quicksort would happen to select the maximal value and then discover that it is, indeed, the maximum.
To my knowledge, this is quite a rare case in practice.

Try a list that contains the same number n times.
Choose any way to pick a pivot.
Look at what happens.
(Edit, to give some hints:
The pivot does not depend on how it is chosen, because every element is the same.
So in every iteration, for a current list of n elements, you need n comparisons and you split the list into two sublists of 1 and n-1 elements.
You can quickly calculate the number of operations overall: you need n, n-1, n-2, ..., 2, 1 operations.
Formally, that is the sum from i = 1 to n of i, for which you should know the closed formula n * (n+1) / 2, showing that it is O(n^2).)
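As a small illustration of that hint (my own sketch, assuming a textbook Lomuto-style partition; the exact behaviour on duplicates depends on the partition scheme), counting comparisons on an all-equal list shows the quadratic blow-up:

    def lomuto_quicksort(a, lo, hi, counter):
        # In-place quicksort with a Lomuto partition (last element as pivot).
        if lo >= hi:
            return
        pivot, store = a[hi], lo
        for j in range(lo, hi):
            counter[0] += 1                  # one comparison per element in the range
            if a[j] <= pivot:                # equal elements all fall on the same side
                a[store], a[j] = a[j], a[store]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        lomuto_quicksort(a, lo, store - 1, counter)
        lomuto_quicksort(a, store + 1, hi, counter)

    n = 500
    data, comparisons = [7] * n, [0]
    lomuto_quicksort(data, 0, n - 1, comparisons)
    print(comparisons[0])                    # 124750 == n*(n-1)//2: quadratic on all-equal input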

Related

Given k words, determine words equality in constant time

I encountered this question while studying for an algorithms test:
Given a set of k words (strings) with a total character count of n (meaning the sum of all word lengths is n), perform some kind of preprocessing on the words in O(n) time, such that whenever two words are compared, the answer (whether they are identical or not) is returned in O(1) time.
It's an interesting question but I could not find any direction to deal with it...
Construct a trie of all of the words, and for each word store the index of the node at which its last character ends (the trie nodes being kept in an array). This is an O(n) operation.
Given two words, they are the same if and only if their stored end-node indices are the same.
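A minimal sketch of that idea (the function name and the use of Python dicts as trie nodes are mine): build the trie once in O(n), remember each word's end node, and answer each equality query by comparing end nodes.

    def trie_end_nodes(words):
        # Build a trie over all words in O(total characters) and return, per word,
        # the trie node at which that word ends.
        root = {}
        ends = []
        for word in words:
            node = root
            for ch in word:
                node = node.setdefault(ch, {})   # walk/create the path for this word
            ends.append(node)                    # identical words share an end node
        return ends

    words = ["apple", "old", "apple", "talk"]
    ends = trie_end_nodes(words)
    print(ends[0] is ends[2])   # True:  words 0 and 2 are identical, answered in O(1)
    print(ends[0] is ends[1])   # False: "apple" != "old"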

How to determine what character would be at a given index in a sorted string without hashing or sorting?

We are given a string and an integer. We have to tell which character would be at that position in the string if the characters were placed in sorted order.
For example:
String = LALIT
Index = 3
The sorted string is AILLT, and the character at position 3 is L.
Is it possible to solve this problem without sorting?
If yes, can someone provide pseudocode?
Yes, it's possible to do this. You're looking for something called a selection algorithm which, given a list of elements and a number k, returns what element would be in position k if the elements were to be in sorted order. Amazingly enough, it's possible to do this without sorting the entire list!
The simplest non-sorting algorithm for selection is called quickselect, which runs in expected time O(n) and, provided you're allowed to modify the original array, uses only O(1) auxiliary storage space. The idea behind quickselect is to do a single step of quicksort - pick a pivot element, partition the elements into elements less than the pivot, elements equal to the pivot, and elements greater than the pivot - then to see what happens based on that. If the pivot element ends up in position k after this step, then you're done - that's the element that would be at position k in the final sequence. If the pivot is at a position higher than k, recursively look to the left of the pivot (the kth smallest element is somewhere in there), and if the pivot is at a position lower than k, recursively look to the right of the pivot (the kth smallest element is somewhere in there).
Other approaches exist as well, such as the median-of-medians algorithm that always runs in worst-case O(n) time but is a classic "tricky algorithm to wrap your head around."
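Here is a minimal quickselect sketch applied to the character question above (my own code, using 1-based positions as in the example; unlike the in-place version described, it builds new lists, trading the O(1) extra space for simplicity):

    import random

    def quickselect(chars, k):
        # Return the element that would sit at 1-based position k in sorted
        # order, without fully sorting the list. Expected O(n) time.
        pivot = random.choice(chars)
        less    = [c for c in chars if c < pivot]
        equal   = [c for c in chars if c == pivot]
        greater = [c for c in chars if c > pivot]
        if k <= len(less):
            return quickselect(less, k)           # answer lies left of the pivot block
        if k <= len(less) + len(equal):
            return pivot                          # answer is the pivot value itself
        return quickselect(greater, k - len(less) - len(equal))  # answer lies to the right

    print(quickselect(list("LALIT"), 3))          # 'L'  (sorted order: A I L L T)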

special interleaving string coding

The interleaving rule is to form a new word by inserting one word into another in a letter-by-letter fashion, as shown below:
a p p l e
o l d
=
aoplpdle
It does not matter which word goes first. (oalpdple is also valid)
The problem: given a vector of strings {old, apple, talk, aoplpdle, otladlk}, find all the words that are valid interleavings of two words from the vector.
The simplest solution takes at least O(n^2) time: take every pair of words, form the interleaved word, and check whether it is in the vector.
Are there better solutions?
Sort by length. You only need to check pairs of entries (words) whose combined length equals the length of some existing entry.
This will reduce your average complexity. I didn't take the time to compute the worst-case complexity, but it's probably lower than O(n^2) as well.
You can also optimize the "inner loop" by rejecting matches early - you don't really need to construct the entire interleaved word to reject a match - iterate over the candidate word alongside the two input words until you find a mismatch. This won't reduce your worst-case complexity, but it will have a positive effect on overall performance.
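One way to code that early-rejection check (a sketch; the function names are mine, and it assumes the strict letter-by-letter alternation from the question, with the rest of the longer word appended once the shorter one runs out):

    def is_strict_interleave(candidate, a, b):
        # True if `candidate` is `a` and `b` interleaved letter by letter
        # (alternating while both have letters left; either word may go first).
        if len(candidate) != len(a) + len(b):
            return False

        def matches(first, second):
            i = j = 0
            turn_first = True
            for ch in candidate:
                # take from `first` when it is its turn, or when `second` is used up
                if i < len(first) and (turn_first or j >= len(second)):
                    if first[i] != ch:
                        return False          # reject at the first mismatch
                    i += 1
                else:
                    if second[j] != ch:
                        return False
                    j += 1
                turn_first = not turn_first
            return True

        return matches(a, b) or matches(b, a)

    print(is_strict_interleave("aoplpdle", "apple", "old"))  # True
    print(is_strict_interleave("oalpdple", "apple", "old"))  # True (the other word goes first)
    print(is_strict_interleave("otladlk", "old", "talk"))    # True
    print(is_strict_interleave("aoplpdle", "apple", "odd"))  # False, rejected at the first mismatch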

Permutation Tree for Combinatorial Search Problems?

I would like to generate a search tree for a permutation problem, and my requirement is to use a divide-and-conquer strategy to do so.
As an example, consider the tree for permutations of length 3.
Given a set of n numbers, divide the problem into n subproblems, each having one of the numbers from the set as its first number and that chosen number removed from the set. For each subproblem, repeat the process. If the set is empty, stop.
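A minimal recursive sketch of that scheme (the printing format is my own): each call is a node of the permutation tree, each choice of next number is a child, and a call with an empty remaining set is a leaf, i.e. one complete permutation.

    def permutation_tree(remaining, prefix=()):
        # Each call is one node: `prefix` is the path from the root, `remaining`
        # holds the numbers not yet placed. A call with an empty `remaining` is a leaf.
        if not remaining:
            print(prefix)                                  # leaf: one complete permutation
            return
        for i, x in enumerate(remaining):                  # divide: one subproblem per choice
            permutation_tree(remaining[:i] + remaining[i+1:], prefix + (x,))

    permutation_tree([1, 2, 3])
    # prints (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1), one per line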

Count no. of words in O(n)

I am on an interview ride here. One more interview question I had difficulties with.
“A rose is a rose is a rose.” Write an algorithm that prints the number of times a character/word occurs, e.g. A – 3, Rose – 3, Is – 2. Also ensure that when you are printing the results, they are in the order in which they appear in the original sentence. All this in order n.
I did get a solution that counts the number of occurrences of each word in the sentence, in the order in which they appear in the original sentence; I used a Dictionary<string,int> to do it. However, I did not understand what is meant by "order of n". That is something I need you to explain.
There are only 26 characters, so you can use counting sort to count them; in your counting sort you can keep an index that records when each character was first visited, to preserve the order of occurrence. [They can also be sorted by their count and their occurrence with a sort like radix sort.]
Edit: for words, the first thing everyone thinks of is using a hash table: insert the words into the hash and count them that way. They can then be sorted in O(n), because all the counts are within 1..n, so you can still sort them with counting sort in O(n); for the order of occurrence, you can traverse the string and track the positions of equal values.
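A small sketch of the character half of that idea (my own code, assuming only the letters a-z matter and case is ignored): one pass records both the counts and the first-seen order, a second pass reports them in that order.

    def char_counts_in_order(text):
        # Count letters in one pass, remembering the order in which each letter
        # first appeared, then report in that order. O(n) time, O(26) extra space.
        counts = [0] * 26
        first_seen = []                       # letters in order of first appearance
        for ch in text.lower():
            if 'a' <= ch <= 'z':
                idx = ord(ch) - ord('a')
                if counts[idx] == 0:
                    first_seen.append(ch)
                counts[idx] += 1
        for ch in first_seen:
            print(ch, counts[ord(ch) - ord('a')])

    char_counts_in_order("A rose is a rose is a rose")
    # prints: a 3, r 3, o 3, s 5, e 3, i 2 (one per line, in first-appearance order)
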
"Order of n" means you traverse the string only once, or some constant multiple of n times, where n is the number of characters in the string.
So your solution that stores each string and the number of its occurrences is O(n), order of n, since you loop through the complete string only once.
However, it uses extra space in the form of the list you created.
Order N refers to the Big O computational complexity analysis, where you get a good upper bound on algorithms. It is a theory we cover early in a Data Structures class, so we can torment, I mean help, the student gain facility with it as we traverse, in a balanced way, heaps of different trees of knowledge, all different. In your case they want your algorithm's compute time to grow proportionally to the size of the text as it grows.
It's a reference to Big O notation. Basically the interviewer means that you have to complete the task with an O(N) algorithm.
"Order n" is referring to Big O notation. Big O is a way for mathematicians and computer scientists to describe the behavior of a function. When someone specifies searching a string "in order n", that means that the time it takes for the function to execute grows linearly as the length of that string increases. In other words, if you plotted time of execution vs length of input, you would see a straight line.
Saying that your function must be of order n does not mean that it must equal O(n); a function with a Big O less than O(n) would also be considered acceptable. In your problem's case this is not possible, because in order to count a letter you must "touch" that letter, so there must be some operation dependent on the input size.
One possible method is to traverse the string linearly. Then create a hash and a list. The idea is to use the word as the hash key and increment its value for each occurrence. If the key does not yet exist in the hash, add the word to the end of the list. After traversing the string, go through the list in order, using the hash values as the counts.
The order of the algorithm is O(n). The hash lookup and list add operations are O(1) (or very close to it).
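A sketch of that method (my own code, assuming words are separated by spaces and case is ignored): one pass builds the hash and the first-seen list, then a pass over the list prints the counts.

    def word_counts_in_order(sentence):
        # Count each word and print the counts in order of first appearance.
        # One pass over the words plus one pass over the distinct words: O(n).
        counts = {}           # word -> number of occurrences
        order = []            # distinct words, in order of first appearance
        for word in sentence.lower().split():
            if word not in counts:
                counts[word] = 0
                order.append(word)
            counts[word] += 1
        for word in order:
            print(word, counts[word])

    word_counts_in_order("A rose is a rose is a rose")
    # prints: a 3, rose 3, is 2 (one per line)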
