Binary Search in Fortran 95

I am trying to implement a binary search in Fortran 95 with a recursive function.
My error message is: /usr/bin/timeout: the monitored command dumped core
Does anyone have an idea how I could solve this problem?
The return value should be 2 for this little program.
In addition: how can I expand my code to return the position in the array "ar" at which I could insert a new element (e.g. 2.9)? For 2.9 it should return 2. Another example: if I take 7.3 instead of 2.9, it should return 6 (between 7.0 and 8.0 of array "ar").
program h
    real, dimension(7) :: ar
    integer :: s
    real :: outp
    ar = (/2.0, 3.0, 4.0, 5.0, 7.0, 8.0, 9.0/)
    s = sizeof(ar)
    outp = findAr(ar, 1, s, 3.0)
    print*, outp
end program h

recursive function findAr(ar, l, r, x) result(a)
    real, dimension(size(ar)), intent(in) :: ar
    integer, intent(in) :: l, r
    real, intent(in) :: x
    integer :: midd
    real :: a
    if (r >= 1) then
        midd = l + (r - 1) / 2
        if (ar(midd) == x) then
            a = midd
        else if (ar(midd) > x) then
            a = findAr(ar, l, midd - 1, x)
        else
            a = findAr(ar, midd + 1, r, x)
        end if
    end if
end function findAr

The compiler error can be resolved as described in the comments on your question.
Concerning your second question, how to extend your code to search for values that lie between the array elements: one could ask the same for values outside of your array (e.g. smaller than its minimum).
The following code does not use recursive functions, nor is it an extension of your code.
You should keep in mind that unnecessary function calls create some overhead and could result in a performance loss.
module bin_search_m
    implicit none
contains

    function bin_search(x, x1) result(i)
        !! find index of nearest value in sorted array
        real, intent(in) :: x(:)
        !! sorted array
        real, intent(in) :: x1
        !! value to find
        integer :: i
        !! return array index
        integer :: i0, i1

        ! starting interval is the whole array
        i0 = 1
        i1 = size(x)

        ! test if x1 is outside of interval [x(1), x(end)]
        if (x1 <= x(i0)) then
            i = i0
            return
        end if
        if (x1 >= x(i1)) then
            i = i1
            return
        end if

        ! binary search
        do while (i1 > (i0 + 1))
            i = (i1 + i0) / 2
            if (x(i) < x1) then
                i0 = i
            else if (x(i) > x1) then
                i1 = i
            else
                return
            end if
        end do

        ! pick index of value that is closer
        i = merge(i0, i1, ((2 * x1) < (x(i0) + x(i1))))
    end function
end module

Related

Type mismatch error when using a function inside a sub

I've made a simple function which reads a string from an Excel cell and converts it into a formula:
Function f(x As Double) As Double
    Dim q As String
    q = Cells(2, 1)
    formula = Replace(q, "x", x)
    f = Evaluate(formula)
End Function
It works just fine when used in Excel. If, for example, the text in cell (2, 1) is "x^2-6*x+15", the command "=f(1)" returns 10. However, when I try to use this function inside a sub, it returns a type mismatch error. I've tried to use this function in the following code for a golden section search:
Sub goldenSectionSearch()
    Dim k As Double, a As Double, b As Double, epsilon As Double, x1 As Double, x2 As Double
    k = (Sqr(5) - 1) / 2
    a = Cells(2, 2).Value
    b = Cells(2, 3).Value
    epsilon = Cells(2, 4).Value
    x1 = b - k * (b - a)
    x2 = a + k * (b - a)
    Do While Abs(b - a) > epsilon
        If f(x1) > f(x2) Then
            a = x1
            x1 = x2
            x2 = a + k * (b - a)
        Else
            b = x2
            x2 = x1
            x1 = b - k * (b - a)
        End If
    Loop
    Cells(6, 2).Value = (a + b) / 2
    Cells(7, 2).Value = f((a + b) / 2)
End Sub
After running the code line by line, I seem to have pinpointed where the issue is: it happens right after the If, at the "a = x1" line. I'm afraid my lack of experience doesn't allow me to deduce why it's happening, though.

cross sum operation in Haskell

I need to define a recursive function crosssum :: Int -> Int in Haskell to calculate the cross sum (digit sum) of positive numbers. I am not allowed to use any functions from the hierarchical libraries besides (:), (>), (++), (<), (>=), (<=), div, mod, not, (&&), max, min, etc.
crosssum :: Int -> Int
crosssum x = if x > 0
             then x `mod` 10
                  + x `div` 10 + crosssum x
             else 0
So whenever I call e.g. crosssum 12, it says 'thread killed'. I do not understand how to get this right. I would appreciate any ideas. Thx
One of the problems with your code is that x is not reduced (or changed somehow) when it's passed as an argument to the recursive call of crosssum. That's why your program never stops.
The modified code:
crosssum :: Int -> Int
crosssum x = if x > 0
             then x `mod` 10 + crosssum (x `div` 10)
             else 0
is going to have the following logic
crosssum 12 = 2 + (crosssum 1) = 2 + (1 + (crosssum 0)) = 2 + 1 + 0 = 3
By the way, Haskell lets you avoid the if expression by using pattern matching, which gives more readable code:
crosssum :: Int -> Int
crosssum 0 = 0
crosssum x = (mod x 10) + (crosssum (div x 10))
divMod in Prelude is very handy, too: it is one operation for both div and mod. In fact, for all two-digit numbers,
dm n = sum . sequence [fst, snd] $ divMod n 10
The following cs will do numbers of any size:
cs 0 = 0; cs n = m + cs d where (d, m) = divMod n 10
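A runnable version of those two one-liners, for reference (a sketch; the type signatures and main are mine, not part of the answer):
import Prelude

-- dm only works for two-digit numbers; cs works for any non-negative Int
dm :: Int -> Int
dm n = sum . sequence [fst, snd] $ divMod n 10

cs :: Int -> Int
cs 0 = 0
cs n = m + cs d
  where (d, m) = divMod n 10

main :: IO ()
main = do
    print (dm 47)     -- 11
    print (cs 12345)  -- 15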

Haskell ways to the 3n+1 challenge

Here is a simple programming problem from SPOJ: http://www.spoj.com/problems/PROBTRES/.
Basically, you are asked to output the biggest Collatz cycle for numbers between i and j. (The Collatz cycle of a number $n$ is the number of steps it takes to eventually get from $n$ to 1.)
I have been looking for a Haskell way to solve the problem with performance comparable to that of Java or C++ (so that it fits in the allowed run-time limit). A simple Java solution that memoizes the cycle length of already computed numbers will work, but I haven't been successful at applying that idea to obtain a Haskell solution.
I have tried Data.Function.Memoize, as well as a home-brewed log-time memoization technique using the idea from this post: Memoization in Haskell?. Unfortunately, memoization actually makes the computation of cycle(n) even slower. I believe the slowdown comes from the overhead of the Haskell approach. (I tried running the compiled binary, instead of interpreting.)
I also suspect that simply iterating over the numbers from i to j can be costly ($i, j \le 10^6$). So I even tried to precompute everything for the range query, using the idea from http://blog.openendings.net/2013/10/range-trees-and-profiling-in-haskell.html. However, this still gives a "Time Limit Exceeded" error.
Can you suggest a neat, competitive Haskell program for this?
Thanks!
Using the approach below, I could submit an accepted answer to SPOJ. You may check the entire code here.
The problem has bounds 0 < n < 1,000,000. Pre-calculate all of them and store them inside an array; then freeze the array. The array can be used as its own cache / memoization space.
The problem then reduces to a range query problem over an array, which can be answered very efficiently using trees.
With the code below I can get the Collatz lengths of 1..1,000,000 in a fraction of a second:
$ time echo 1000000 | ./collatz
525
real 0m0.177s
user 0m0.173s
sys 0m0.003s
Note that the collatz function below uses a mutable STUArray internally, but is itself a pure function:
import Control.Monad.ST (ST)
import Control.Monad (mapM_)
import Control.Applicative ((<$>))
import Data.Array.Unboxed (UArray, elems)
import Data.Array.ST (STUArray, readArray, writeArray, runSTUArray, newArray)

collatz :: Int -> UArray Int Int
collatz size = out
  where
    next i = if odd i then 3 * i + 1 else i `div` 2

    loop :: STUArray s Int Int -> Int -> ST s Int
    loop arr k
      | size < k  = succ <$> loop arr (next k)
      | otherwise = do
          out <- readArray arr k
          if out /= 0 then return out
          else do
            out <- succ <$> loop arr (next k)
            writeArray arr k out
            return out

    out = runSTUArray $ do
      arr <- newArray (1, size) 0
      writeArray arr 1 1
      mapM_ (loop arr) [2..size]
      return arr

main = do
  size <- read <$> getLine
  print . maximum . elems $ collatz size
In order to perform range queries on this array, you may build a balanced tree as simple as the one below:
type Range = (Int, Int)
data Tree = Leaf Int | Node Tree Tree Range Int

build_tree :: Int -> Tree
build_tree size = loop 1 cnt
  where
    -- ctz ! lo below uses (!); add it to the Data.Array.Unboxed import above
    ctz = collatz size
    cnt = head . dropWhile (< size) $ iterate (*2) 1

    (Leaf a)       +: (Leaf b)       = max a b
    (Node _ _ _ a) +: (Node _ _ _ b) = max a b

    loop lo hi
      | lo == hi  = Leaf $ if size < lo then minBound else ctz ! lo
      | otherwise = Node left right (lo, hi) (left +: right)
      where
        i     = (lo + hi) `div` 2
        left  = loop lo i
        right = loop (i + 1) hi

query_tree :: Tree -> Int -> Int -> Int
query_tree (Leaf x) _ _ = x
query_tree (Node l r (lo, hi) x) i j
  | i <= lo && hi <= j = x
  | mid < i            = query_tree r i j
  | j < 1 + mid        = query_tree l i j
  | otherwise          = max (query_tree l i j) (query_tree r i j)
  where mid = (lo + hi) `div` 2
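A hypothetical usage sketch (the size and the query ranges are mine, and this main would replace the one above): build the tree once for the whole range, then answer each (i, j) query with a single traversal:
main :: IO ()
main = do
  let size = 1000000
      tree = build_tree size
  print (query_tree tree 1 10)            -- 20, the longest cycle among 1..10 (reached at n = 9)
  print (query_tree tree 999000 1000000)  -- longest cycle in the last thousand numbers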
Here is the same idea as in the other answer, but with an immutable, recursively defined array. It also leaks slightly (can someone say why?) and is therefore about two times slower:
import Data.Array
upper = 10^6
step :: Integer -> Int
step i = 1 + colAt (if odd i then 3 * i + 1 else i `div` 2)
colAt :: Integer -> Int
colAt i | i > upper = step i
colAt i = col!i
col :: Array Integer Int
col = array (1, upper) $ (1, 1) : [(i, step i) | i <- [2..upper]]
main = print $ maximum $ elems col

buffering calculations in haskell array initialization

Right now I am porting my mathematical solution from C# to Haskell, learning Haskell in the process. I have the following code for the Thompson algorithm:
xi[N] = a[N] / c[N];
eta[N] = f[N] / c[N];
for (int i = N - 1; i > 0; i--)
{
    var cbxip = (c[i] - b[i] * xi[i + 1]);
    xi[i] = a[i] / cbxip;
    eta[i] = (f[i] + b[i] * eta[i + 1]) / cbxip;
}
{
    int i = 0;
    var cbxip = (c[i] - b[i] * xi[i + 1]);
    eta[i] = (f[i] + b[i] * eta[i + 1]) / cbxip;
}
How do I do it in Haskell?
I found info on array initialization, but I have several problems with it.
Say, I wrote the following code:
xi = [a[i] / (c[i] - b[i] * xi[i + 1]) | i <- 1..N-1] ++ [a[N] / c[N]]
etha = [(f[i] + b[i] * etha[i + 1]) / (c[i] - b[i] * xi[i + 1]) | i <- 0..N-1] ++ [f[N] / c[N]]
The problems are following:
How do I specify that the array has to be initialized starting from the right? Do I even need to do so, or will Haskell work that out by itself? If the latter, how can it do that? Isn't it just a black box like [f(i) | i <- [a..b]] to the compiler?
(most problematic) For all i in [1..N-1] the part (c[i] - b[i] * xi[i + 1]) is going to be evaluated twice. How can I fix this? Mapping it to some other array beforehand will cost memory, and it is impossible anyway, as I don't have the xi array yet.
I thought of something like simultaneous mapping, but I am confused about how to apply it to array initialization.
I would probably avoid using list comprehensions until you become really familiar with solving problems through recursion. Haskell is very different from C# in that you don't have "arrays" as such that can be randomly accessed and inserted into; you can't pre-allocate this space up front, because allocation is a side effect. Instead, think of everything as linked lists, and use recursion to iterate through them.
If we start with a top-down approach, we have a bunch of lists of numbers, and we need a function to iterate through them. If we passed these separately we would end up with a function signature like [n] -> [n] -> [n] -> [n] -> [n] -> ... This is probably not a good idea considering they all seem to be the same size, N. Instead, we can use a tuple (or pair of tuples) to contain them, eg.
thompson :: Fractional n => [(n, n, n, n, n, n)] -> [(n, n)]
thompson [] = []  -- pattern to terminate recursion for empty lists
-- these variables are equivalent to your a[i], etc. in C#
thompson ((a, b, c, f, xi, eta):_) = ?
If we are duplicating your C# exactly, we probably want patterns for the cases of one and two remaining elements, since each iteration needs to access the current and the next element, plus a pattern for the regular case of two or more elements.
-- handle the final one or two elements
thompson ((a, _, c, f, xi, eta):[]) = [(a / c, f / c)]
thompson ((a0, b0, c0, f0, xi0, eta0):(_,_,_,_,xi1,eta1):[]) = ?
-- handle the regular case
thompson ((a0, b0, c0, f0, xi0, eta0):(a1,b1,c1,f1,xi1,eta1):tail) = ?
Once you have the overall iterative structure, it should become more obvious how to implement what's in the loop. The loop is basically a function which takes one of these tuples, plus a tuple for the next xi/eta and does some calculation, returning a new tuple for xi/eta (or in the final case, just eta). The a,b,c,f appear to not change.
doCalc1 :: Fractional n => (n, n, n, n, n, n) -> (n, n) -> (n, n)
doCalc1 (a, b, c, f, xi0, eta0) (xi1, eta1) = (a / cbxip, (f + b * eta1) / cbxip)
  where cbxip = c - b * xi1

doCalc2 :: Fractional n => (n, n, n, n, n, n) -> (n, n) -> n
doCalc2 (a, b, c, f, xi0, eta0) (xi1, eta1) = (f + b * eta1) / cbxip
  where cbxip = c - b * xi1
Now we just need to update thompson to call doCalc1/doCalc2, and recursively call itself with the tail.
thompson (head:next@(_,_,_,_,xi,eta):[])
    = (xi, doCalc2 head (xi, eta)) : thompson [next]
thompson (head:next@(_,_,_,_,xi,eta):tail)
    = doCalc1 head (xi, eta) : thompson (next:tail)
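For the second point in the question (the shared factor being evaluated twice), here is a minimal sketch of the same right-to-left sweep written as a single recursive pass over the zipped coefficient lists; the names sweep, as, bs, cs, fs and the use of Double are mine, and the special i = 0 step of the C# code is left out:
import Data.List (zip4)

-- xi/eta pairs for indices 1..N; cbxip is bound once per element
sweep :: [Double] -> [Double] -> [Double] -> [Double] -> [(Double, Double)]
sweep as bs cs fs = go (zip4 as bs cs fs)
  where
    go []             = []
    go [(a, _, c, f)] = [(a / c, f / c)]        -- xi[N] = a[N]/c[N], eta[N] = f[N]/c[N]
    go ((a, b, c, f) : rest) =
        (a / cbxip, (f + b * eta1) / cbxip) : rest'
      where
        rest'@((xi1, eta1) : _) = go rest       -- results for the elements to the right
        cbxip = c - b * xi1                     -- evaluated only once per element

main :: IO ()
main = print (sweep [1, 2, 3] [1, 1, 1] [4, 5, 6] [7, 8, 9])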

Haskell Int64 inconsistent?

I am trying to solve the problem 2's complement here (sorry, it requires login, but anyone can log in with a FB/Google account). The problem, in short, is to count the number of ones appearing in the 2's complement representations of all numbers in a given range [A, B], where A and B are within the 32-bit limits ($2^{31}$ in absolute value). I know my algorithm is correct (it's logarithmic in the bigger absolute value; I already solved the problem in another language).
I am testing the code below on my machine and it's giving perfectly correct results. When it runs on the Amazon server, it gives a few wrong answers (obvious overflows) and also some stack overflows. This is not a bug in the logic here, because I test the same code on my machine on the same test inputs and get different results. For example, for the range [-1548535525, 662630637] I get 35782216444 on my machine, while according to the tests, my result is some negative overflow value.
The only problem I can think of is that perhaps I am not using Int64 correctly, or I have a wrong assumption about its behaviour.
Any help is appreciated. Code is here.
The stack overflows are a bug in the logic.
countOnes !a !b | a == b = countOnes' a
countOnes' :: Int64 -> Integer
countOnes' !0 = 0
countOnes' !a = (fromIntegral (a .&. 1)) + (countOnes' (a `shiftR` 1))
Whenever you call countOnes' with a negative argument, you get a nonterminating computation, since the shiftR is an arithmetic shift and not a logical one, so you always shift in a 1-bit and never reach 0.
But even with a logical shift, for negative arguments, you'd get a result 32 too large, since the top 32 bits are all 1.
Solution: mask out the uninteresting bits before calling countOnes',
countOnes !a !b | a == b = countOnes' (a .&. 0xFFFFFFFF)
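A small self-contained check of that masking step (a sketch; countOnes' is copied from the answer with the bang patterns dropped, and the mask applied at the call site):
import Data.Bits ((.&.), shiftR)
import Data.Int (Int64)

countOnes' :: Int64 -> Integer
countOnes' 0 = 0
countOnes' a = fromIntegral (a .&. 1) + countOnes' (a `shiftR` 1)

main :: IO ()
main = do
    print (countOnes' ((-1) .&. 0xFFFFFFFF))  -- 32: only the low 32 bits remain set
    print (countOnes' 6)                      -- 2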
There are some superfluous guards in countOnes,
countOnes :: Int64 -> Int64 -> Integer
countOnes !a !b | a > b = 0
-- From here on we know a <= b
countOnes !a !b | a == b = countOnes' (a .&. 0xFFFFFFFF)
-- From here on, we know a < b
countOnes !0 !n = range + leading + (countOnes 0 (n - (1 `shiftL` m)))
  where
    range   = fromIntegral $ m * (1 `shiftL` (m - 1))
    leading = fromIntegral $ (n - (1 `shiftL` m) + 1)
    m       = (getLog n) - 1
-- From here on, we know a /= 0
countOnes !a !b | a > 0 = (countOnes 0 b) - (countOnes 0 (a - 1))
-- From here on, we know a < 0;
-- the guards in the next and the last equations are superfluous
countOnes !a !0 | a < 0 = countOnes (maxInt + a + 1) maxInt
countOnes !a !b | b < 0 = (countOnes a 0) - (countOnes (b + 1) 0)
countOnes !a !b | a < 0 = (countOnes a 0) + (countOnes 0 b)
The integer overflows on the server are caused by
getLog :: Int64 -> Int
--
countOnes !0 !n = range + leading + (countOnes 0 (n - (1 `shiftL` m)))
  where
    range   = fromIntegral $ m * (1 `shiftL` (m - 1))
    leading = fromIntegral $ (n - (1 `shiftL` m) + 1)
    m       = (getLog n) - 1
because the server has a 32-bit GHC, while you have a 64-bit one. The shift distance/bit width m is an Int (and because it's used as the shift distance, it has to be).
Therefore
m * (1 `shiftL` (m-1))
is an Int too. For m >= 28, that overflows a 32-bit Int.
Solution: remove a $
range = fromIntegral m * (1 `shiftL` (m - 1))
Then the 1 that is shifted is an Integer, hence no overflow.
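To see the difference, here is a small sketch that imitates the server's 32-bit Int with an explicit Int32 (the names bad and good are mine, not from the answer):
import Data.Bits (shiftL)
import Data.Int (Int32)

bad, good :: Int -> Integer
bad  m = fromIntegral (fromIntegral m * (1 `shiftL` (m - 1)) :: Int32)  -- product computed in Int32
good m = fromIntegral m * (1 `shiftL` (m - 1))                          -- product computed in Integer

main :: IO ()
main = do
    print (bad 28)   -- wraps around to a negative number
    print (good 28)  -- 3758096384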
