Worst-case time complexity of a post-order traversal of a BST

Consider the time complexity of a post-order traversal on a binary search tree of N nodes. I know it takes O(N) to visit all the nodes in the general case, but what is the complexity in the worst case, when the BST degenerates into a list? I think it takes O(N^2), because it will traverse N nodes to reach the end and N nodes to get back to the start. That means N*N = N^2, so I think it is O(N^2). Is that right?

In your "worst case" scenario (which I don't understand, frankly) it's N + N = O(N), not N * N = O(N^2).

Related

How to make this Haskell program run faster

So I've been trying to learn Haskell by solving some problems on Codeforces.
And I am getting a lot of TLE (Time Limit Exceeded) verdicts even though I think my time complexity is optimal.
My question is: is it the way I wrote this program that makes it slow?
For example, here is the problem.
Basically the answer is to find a_n for a given n, where
a_n = 2*a_{n-1} + D(n), and D(n) = the difference of the number of divisors between n and n-1.
(Update: the upper limit for n is 10^6.)
Below is my program.
import qualified Data.Map.Strict as Map

main = do t <- read <$> getLine
          putStrLn . show $ solve t

solve :: Integer -> Integer
solve 0 = 1
solve 1 = 1
solve n = (2*(solve (n-1)) + (fact n) - (fact (n-1))) `mod` 998244353
  where fact n = foldl (\s -> \t -> s*(snd t + 1)) 1 (Map.toList . factorization $ n)

-- the number of divisors of a number
-- copied from the Internet, infinite prime list
primes :: [Integer]
primes = 2 : 3 : sieve (tail primes) [5,7..]
  where
    sieve (p:ps) xs = h ++ sieve ps [x | x <- t, x `rem` p /= 0]
      where (h, ~(_:t)) = span (< p*p) xs

-- make the factorization of a number
factorization :: Integer -> Map.Map Integer Integer
factorization 1 = Map.fromList []
factorization x = Map.insertWith (+) factor 1 (factorization (x `div` factor))
  where factor = head $ filter (\s -> (x `mod` s) == 0) ls
        ls = primes
This program failed to finish within the time limit.
Could anyone point out where I went wrong and how to fix it?
Or is it just impossible to solve this problem in Haskell within the time limit?
There are many ways in which your time complexity is not optimal. The most obvious one is a prime finder using trial division instead of, e.g., a sieve. Maybe it's fine because you only compute the primes once, but it does not inspire confidence.
factorization also has at least one glaring problem. Consider factoring a number like 78893012641, whose prime factorization is 280879^2. You will search each prime number up to 280879: expensive, but pretty much unavoidable. However, at this point you divide by 280879 and then try to factorize 280879, starting from 2 and scanning all the small primes again even though you just found out none of them are a factor!
As Li-yao Xia says in a comment, I would also be suspicious of the multiplication of very large Integers before taking their modulus, instead of taking a modulus after each multiplication.
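To illustrate those points, here is a rough sketch (my own, reusing the question's primes list and Map import) of a factorization that keeps scanning from the current prime instead of restarting from 2 after each division, and that stops as soon as p*p exceeds what is left:

factorization :: Integer -> Map.Map Integer Integer
factorization = go primes Map.empty
  where
    go _ acc 1 = acc
    go pps@(p:ps) acc x
      | p * p > x      = Map.insertWith (+) x 1 acc   -- what is left is prime
      | x `mod` p == 0 = go pps (Map.insertWith (+) p 1 acc) (x `div` p)
      | otherwise      = go ps acc x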
You haven't copied the right piece of code from the "Internet". You should've instead copied primesTMWE for the primes list, but more importantly, primeFactors for the factorization algorithm.
Your foldl based calculation of the number of divisors from a number's factorization is perfectly fine, except perhaps foldl' should be used instead.
Notice that both solve n and solve (n-1) calculate fact (n-1), so it is better to precalculate all of them. Perhaps a better algorithm exists to find the number of divisors for all numbers from 1 to n than calculating it for each number separately.
I suspect even with the right algorithms (which I link above) it's going to be tough, time-wise, if you're going to factorize each number independently (O(n) numbers, O(n^(1/2)) time to factorize each, at least for the primes among them).
Perhaps the thing to try here is the smallest-factor sieve which can be built in O(n log log n) time as usual with the sieve of Eratosthenes, and once it's built it lets you find the factorization of each number in O(log log n) time (it's the average number of prime factors for a number). It will have to be built up to n though (you can special-case the evens to halve the space requirements of course; or 6-coprimes to save another 1/6th). Probably as an STUArray (that link is an example; better codes can be found here on SO).
The smallest-factor sieve is just like the sieve of Eratosthenes, except it uses the smallest factor, not just a Boolean, as a mark.
To find a number's factorization then we just repeatedly delete by a number's smallest factor, n / sf(n) =: n1, repeating for n1 / sf(n1) =: n2, then n2, etc. until we hit a prime (which is any number which has itself as the smallest factor).
Since you only use those factors to calculate the number's total number of divisors, you can fuse the two calculations together into one joined loop, for extra efficiency.
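Here is a rough sketch of that idea (my own code, not from the linked answers): the smallest-prime-factor sieve built as an STUArray via runSTUArray, with the divisor count read off it by repeated division by the smallest factor, assuming the limit fits comfortably in memory.

import Control.Monad (forM_, when)
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, (!))

-- spf ! k is the smallest prime factor of k, for 2 <= k <= n.
spfSieve :: Int -> UArray Int Int
spfSieve n = runSTUArray $ do
    arr <- newListArray (2, n) [2 .. n]              -- start with spf k = k
    forM_ [2 .. isqrt n] $ \p -> do
      v <- readArray arr p
      when (v == p) $                                -- still unmarked: p is prime
        forM_ [p * p, p * p + p .. n] $ \m -> do
          cur <- readArray arr m
          when (cur == m) $ writeArray arr m p       -- first mark is the smallest
    return arr
  where
    isqrt m = floor (sqrt (fromIntegral m :: Double))

-- Number of divisors, fusing "divide by the smallest factor" with the
-- product over (exponent + 1).
numDivisors :: UArray Int Int -> Int -> Int
numDivisors spf = go 1
  where
    go acc 1 = acc
    go acc m = let p = spf ! m
                   (m', e) = strip m p 0
               in go (acc * (e + 1)) m'
    strip m p e
      | m `mod` p == 0 = strip (m `div` p) p (e + 1)
      | otherwise      = (m, e)

Used as, say, let spf = spfSieve 1000000 in map (numDivisors spf) [2 .. 1000000], this precomputes everything the recurrence needs in one pass.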

list is not being closed after finishing computation

For practice I wrote a Haskell program to find prime factors.
The code is the following:
getfactors :: Int -> [Int]
getfactors n = [x | x<-[1..n], n `mod` x == 0]
prime :: Int -> Bool
prime n | getfactors n == [1,n] = True
        | otherwise = False
primefactors :: Int -> [Int]
primefactors n = [x | x <- getfactors n, prime x == True]
Everything works fine for small numbers, but when I enter large numbers the computation stalls at the biggest prime factor and the closing bracket of the list never appears.
For example:
>primefactors 1263
[3,421]
>primefactors 1387781234
[2,7,2161,6553
An explanation is very much appreciated.
Could not reproduce:
> :set +s
> primefactors 1387781234
[2,7,2161,6553]
(368.04 secs, 288,660,869,072 bytes)
Your algorithm is just very slow. There are lots of ways to improve it:
You are checking primality by trial division of all numbers smaller than the candidate. You can improve (without changing algorithm) by checking only up to the square root of the candidate.
Besides trial division, there are a wide range of other primality checking algorithms running the full spectrum from "simple but slow" to "complicated as heck but blazing fast". Other Internet sources will have plenty of details.
If you want to factor many numbers, it may be beneficial to memoize your primality checks between calls -- e.g. by storing a list of primes and iterating over them instead of iterating over all numbers. Since this is the only consumer of your primality check, you may want to consider creating this list directly rather than implementing a primality-checking algorithm first; again there's a wide range of algorithms for this running the spectrum from simple to fast.
Once you find a factor, you can divide the number you are factoring by that to get a smaller, faster number to compute the remaining factors with.
There are probably other easy opportunities to speed things up, and lots of prior work to read about online. Enjoy!
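As an illustration of two of the points above (the square-root cutoff and dividing out each factor as it is found), here is a minimal sketch of my own; note it returns the prime factors with multiplicity, so it differs slightly from the original primefactors, which returns the distinct ones (nub recovers that behaviour):

import Data.List (nub)

-- Trial division up to the square root, dividing out each factor as found.
-- E.g. primeFactorsM 12 == [2,2,3].
primeFactorsM :: Int -> [Int]
primeFactorsM = go 2
  where
    go d n
      | n <= 1         = []
      | d * d > n      = [n]                  -- whatever remains is prime
      | n `mod` d == 0 = d : go d (n `div` d)
      | otherwise      = go (d + 1) n

-- Distinct prime factors, matching the original primefactors.
distinctPrimeFactors :: Int -> [Int]
distinctPrimeFactors = nub . primeFactorsM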

Is the computational complexity of this function O(2^n) or O(n)?

I want to make a function that takes two numbers and an operator as input and creates an infinite list, so that it can generate arithmetic and geometric sequences.
infiniteList :: (Floating a) => a -> (a -> a -> a) -> a -> [a]
infiniteList start operation changeby =
  start:[(operation x changeby) | x <- (infiniteList start operation changeby)]
The code compiles and works properly: infiniteList 1 (*) 2 generates a list starting from 1 in which each subsequent number is double its predecessor.
Now I'm having trouble figuring out the computational complexity of calculating the nth element of the list. Technically it is doing one operation to figure out each element of the list. However, if I were after the (2^k + 1)th term, I would have to wait for the computer to finish calculating 2^(k+1) elements first.
I hope I'm explaining this properly. Basically, I think the program produces the elements in 2^k batches, where k is an integer, so you could potentially be waiting (2^(k+1) - 2^k) time to calculate the (2^k + 1)th element. So what is the computational complexity of calculating the nth element of the list?
A key tool is the following rule:
When analyzing the performance (not the totality) of a binding, you are allowed to assume, when analyzing its right-hand-side, that the binding itself has been fully evaluated.
You are defining infiniteList, so you are allowed to assume that in the RHS, the infiniteList binding has been fully evaluated. That, unfortunately, isn't useful, because infiniteList is just a function, and fully evaluating it just gives you the function!
But you can use this reasoning tool to figure out a fix: you have to bind the right thing.
infiniteList :: a -> (a -> a -> a) -> a -> [a]
infiniteList start operation changeby =
  let result =
        start : [operation x changeby | x <- result]
  in result
Now you have a useful binding, result, which you can assume is fully evaluated! In the RHS, you now have, essentially,
start : map (\x -> operation x changeby) result
which is clearly O(n).
Indeed with the first definition,
> infiniteList 1 (*) 2 !! 10000
takes longer than I wish to wait, but with the modified definition, it takes a mere 0.04 seconds even in GHCi.
The run time depends a lot on how GHC decides to evaluate it.
To simplify things, consider this version of the function:
inf a f = a : [ f x | x <- inf a f ]
If GHC performed common sub-expression elimination on inf a f, it could decide to evaluate it as if it had been written:
inf a f = let r = a : [ f x | x <- r ]
          in r
and this runs in linear time.
I'm not sure where you are getting the "batches" idea from. Below is a transcript of the first few elements of the list. From that, I think you should be able to figure out the complexity.
What's the first element of the list? It is start, because infiniteList is defined as start:[something], and the first element of any list of that form is start.
What is the second element of the list? We certainly need to consult the [something] portion of the list above. The first element of that sublist is operation x changeby where x is the first element of infiniteList. We decided already that the first element is start, so the second element is operation start changeby, which is exactly what we wanted. What do we have to compute to get the second element? Just the first, plus the operation.
What is the third element of the list? It's the second element of [something], which is operation x changeby where x is the second element of infiniteList. Fortunately, we just calculated what that is...
What do we have to compute to get the third element? Just the first and second, plus the operation.
Although it doesn't directly answer the question, you should ask yourself what complexity you expect the function to have. How much work needs to be done to get the nth element? It's possible that your implementation in code is worse, but it might help you think about your code differently.
Just do some math. Assume calculating the nth item requires T(n) operations. As
[(operation x changeby)| x<-(infiniteList start operation changeby)]
suggests, we need to solve the subproblem T(n-1), the list comprehension itself performs n-1 operations to reach the element we want, and the final start : ... cons is cheap, one more operation, so
T(n) = T(n-1) + (n-1) + 1 = T(n-1) + n, which gives O(n^2).
Actually, you can "feel" the time complexity just by running some examples. Let f n = (infiniteList 0 (+) 1) !! n, then run f 10, f 100, f 1000, f 10000, and you can see the difference.
Usually, when n=1000 runs in no time, n=10000 takes a second or two, and n=100000 seems to run forever, it is O(n^2).
BTW, there is an O(n) approach:
infi :: a -> (a -> a -> a) -> a -> [a]
infi x f s = x : infi (f x s) f s
You can do some math and run some examples to feel the difference.
One strategy that sometimes helps with recursion is to expand it out a few times to get a better idea of what's going on. Let's try that:
infiniteList start operation changeby =
  start:[(operation x changeby) | x <-
    start:[(operation x changeby) | x <-
      start:[(operation x changeby) | x <-
        start:[(operation x changeby) | x <-
          start:[(operation x changeby) | x <- (infiniteList start operation changeby)]]]]]
We can see the first element in the list is going to be start as expected. Then the second element will be start from the first recursive call, passed through operation x changeby. What will the third item be? Well, it'll be the second item of the first recursive call, so it'll be start passed through two calls of operation x changeby. Now the pattern emerges! In general, the nth item of infiniteList will be start with operation x changeby applied to it n-1 times. This is rather unfortunate because, as any student of computer science knows, 1 + 2 + ... + (n-1) = n(n-1)/2 = O(n^2).
There is, of course, a much more efficient way to write this function. Instead of applying operation x changeby to start n-1 times to get the nth item, why don't we just apply it once to the previous item? This will give us an O(n) solution. For example, we can use unfoldr from Data.List:
import Data.List (unfoldr)

infiniteList start operation changeby =
  unfoldr (\x -> Just (x, operation x changeby)) start
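
As a side note (my own addition, not part of the answer above), iterate from the Prelude expresses the same O(n) idea even more directly, since each element is produced from its predecessor and shared:

infiniteList start operation changeby =
  iterate (\x -> operation x changeby) start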

Using non-deterministic list monad to find long Collatz sequences

I wrote the following code to solve Project Euler's No. 14:
The following iterative (Collatz) sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Q: Which starting number, under one million, produces the longest chain?
And my code:
collatz :: Integer -> [Integer]
collatz 1 = [1]
collatz n =
  filter (< 1000000) prev >>= poss
  where prev = collatz (n - 1)
        poss :: Integer -> [Integer]
        poss prev
          | even prev && prev `mod` 3 == 1 && (prev - 1) `div` 3 > 1 = [2 * prev, (prev - 1) `div` 3]
          | otherwise = [2 * prev]
Here collatz n returns a list of numbers that will generate a Collatz chain of length n. The problem is, I can either leave the result unrestricted or restrict the whole chain to be under 1,000,000, but I cannot restrict only the seed number. Is it possible to use this model to solve the problem at all?
I think that this approach - while interesting - is fundamentally doomed. Suppose I discover that all the seeds which result in a chain of length 500 are above 2,000,000. How can I know that I won't find that in three more steps there's a seed under 1,000,000 that gets me there? I see no way to know when you're done.
The only viable approach I see to this problem is to compute the collatz length for every number from 1 to 999,999 and then do something like:
import Data.Function (on)
import Data.List (maximumBy)

main :: IO ()
main = do
  let collatzMax = maximumBy (compare `on` collatzLength) [1..999999]
  print collatzMax
On the other hand, this provides a great opportunity to learn about CAFs since the function collatzLength could be naively defined as:
collatzLength 1 = 1
collatzLength n | n `mod` 2 == 0 = 1 + collatzLength (n `div` 2)
collatzLength n = 1 + collatzLength (3 * n + 1)
And that kind of recursion screams out for a CAF.
Sure, there are memoization modules that will go and build the CAF for you, but building one yourself is a useful exercise. It's a whole little mini-course in lazy infinitely-recursive data structures.
If that defeats you, you can glance at this spoiler of how to use a CAF and then rewrite it using a different data structure. (what about a 10-way tree instead of a binary tree? What about traversing the tree in a different order? Can you remove the call to showIntAtBase?)
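If you want a concrete starting point, here is a minimal sketch of my own using a flat Array rather than the spoiler's tree; the names and the 999,999 limit are just for illustration:

import Data.Array

collatzLimit :: Int
collatzLimit = 999999

-- collatzLength is a CAF: `table` is built lazily once and then shared by
-- every call, memoizing chain lengths for seeds 1..collatzLimit.
collatzLength :: Int -> Int
collatzLength = go
  where
    table = listArray (1, collatzLimit) [step n | n <- [1 .. collatzLimit]]
    go n | n <= collatzLimit = table ! n
         | otherwise         = step n
    step 1 = 1
    step n | even n    = 1 + go (n `div` 2)
           | otherwise = 1 + go (3 * n + 1)

With something like this in place, the maximumBy search above should finish in reasonable time.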
Your idea is interesting, although not the most efficient one. It could be worth trying, although it'll probably be memory intensive. Some thoughts:
Some chains can go over 1,000,000, so you can't just filter out everything below the limit in collatz. You need to keep all the numbers in each pass.
Calling collatz this way is inefficient, as it computes the sets all over again. Making it an infinite list that shares values would be more efficient:
collatz :: [[Integer]]
collatz = [1] : map (>>= poss) collatz
You need to figure out when you're done. For this you'd need to go through the number lists generated by collatz and count how many of them are below 1000000. When you have seen all the numbers below the limit, the last list will contain the numbers with the longest chain.
That said, I'm afraid this approach isn't computationally feasible. In particular, you'll generate exponentially many numbers and exponentially large ones. For example, if the longest chain would be 500, the result of collatz in that step would contain numbers up to 2^500. And as mentioned, there is no way to tell which of these huge numbers might be the one leading to the solution, so you can't just discard them.

Efficient Rational Resampling with lazy semantics

To change the sampling rate of a signal, one needs to upsample , filter, then downsample. Doing this naively means inserting zeros into the input signal, correlating with a filter's impulse response, then discarding all but every nth sample of the convolution.
The problem with the naive approach is that there is a lot of useless computation. When convolving with the filter, most of the filter taps are multiplied by zero, and computing the value of samples that will be discarded in the downsampling phase is useless. That's why efficient rational resampling uses polyphase filter banks, where only the computations that are needed are performed.
I wonder if it would be possible to use lazy computation to avoid the useless multiplications, while also avoiding explicitly constructing the polyphase filter banks. My ideal solution would be something that resembled the naive approach (upsample, then correlate, then downsample), but did the same computations as the explicit polyphase filter approach.
The downsampling is easy, since values that aren't needed won't be calculated. But I can't figure out how to avoid the multiplications-by-zero in the correlation part. The best I've come up with is to use the Maybe type and upsample with Nothings (instead of zeros):
import Data.List (tails)
import Data.Maybe (catMaybes)

upsample n xs = upsample' xs 0
  where upsample' [] _ = []
        upsample' (y:ys) 0 = Just y : upsample' ys (n - 1)
        upsample' ys counter = Nothing : upsample' ys (counter - 1)

correlate xs ys = sum $ catMaybes $ zipWith (fmap . (*)) xs ys

firFilter taps signal = map (correlate taps) (tails signal)

downsample _ [] = []
downsample n (x:xs) = x : downsample n (drop (n-1) xs)

upfirdn up down taps = downsample down . firFilter taps . upsample up
The upfirdn function is indeed just the straightforward approach, and laziness in the downsampling avoids computation, but I think the processor still needs to check whether values are Nothing in the correlation step.
Is there a way to use laziness to get the same computational savings as the polyphase filter approach? If not, is there a fundamental reason it can't be done?
I don't think laziness is helpful for this kind of problem for two reasons:
In Haskell, laziness is achieved by building unevaluated thunks in memory. This means laziness is not completely free: you still incur the cost of creating the thunk. This cost can be negligible if the evaluation of the thunk is expensive.
However, in your case, for every thunk you are saving yourself a multiplication and an addition, which is only a few CPU instructions. The cost of creating a thunk is probably of the same order of magnitude.
Laziness is helpful when you don't know a priori which elements will be used -- often because the choice depends on the input/environment in some complicated or unknown way, so you would rather defer the decision until later.
In your case, you know exactly which elements will be used: the elements must have indices divisible by n. Therefore, it's going to be more efficient to just iterate through [0, n, 2 * n, 3 * n, ...].
A naive way to add laziness would be to define a lazy multiply-add operation:
(+*) :: (Eq a, Num a) => a -> (a, a) -> a
z +* (_, 0) = z
z +* (x, y) = z + x * y
The operation is biased so that if y is zero the calculation is skipped.
Now, when generating the mask via upsample, there is no need to use Maybe: just yield zero instead of Nothing. Then, to calculate the sum, simply use foldl' from Data.List:
correlate xs ys = foldl' (+*) 0 (zip xs ys)
One does not need to upsample and downsample to resample.
If efficiency and performance are not important, you can resample by simple interpolation at each sample point in the new array of equally spaced samples, and just recompute the needed phases or values of a (low-pass/anti-alias) interpolation polynomial at every new interpolation point (instead of precomputing and caching in a poly-phase table).
This also allows "lazy resampling" by only computing the new samples as needed.
There is a "quick-and-dirty" example of how to do this, using a computed von-Hann-Windowed-Sinc interpolation kernel, in Basic, on my DSP blog here:
http://www.nicholson.com/rhn/dsp.html#3
Since this is just an array-function computation at each new sample point, it should not be too difficult to convert this procedural Basic into functional Haskell.
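
For a flavour of what that might look like, here is a rough Haskell sketch of the same idea using plain linear interpolation (not the blog's windowed-sinc kernel); ratio is the input-index step per output sample, and because the result comes from map over a list, an output sample is only computed when it is demanded:

-- Lazy resampling by linear interpolation; a sketch only (lists and (!!)
-- make each lookup O(i), so a real version would use an array).
resampleLinear :: Double -> [Double] -> [Double]
resampleLinear ratio xs = map sampleAt (takeWhile inRange [0 ..])
  where
    n = length xs
    inRange k = floor (fromIntegral k * ratio :: Double) + 1 < n
    sampleAt :: Int -> Double
    sampleAt k =
      let pos  = fromIntegral k * ratio
          i    = floor pos
          frac = pos - fromIntegral i
      in (1 - frac) * (xs !! i) + frac * (xs !! (i + 1))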
