Instrument string tension calculator in Haskell

I'm completely new to Haskell, so please bear that in mind.
I'm working on a fairly simple (or so I think) project: a string instrument tension calculator. Here's what I've got so far:
data Operator = Metric | Imperial
  deriving Read

eval o l u p = case o of
  Imperial -> (((2 * l * p)^2) * u) / 386.4
  Metric   -> (((2 * (l * 2.54) * p)^2) * u) / 386.4

prompt txt = do
  putStrLn txt
  readLn

main = do
  o <- prompt "Metric or Imperial?"
  l <- prompt "Scale length?"
  u <- prompt "Gauge?" -- Unit Weight
  p <- prompt "Pitch? (In hertz)"
  putStrLn $ "The result is " ++ show (eval o l u p)
It lets you choose either Metric or Imperial for your scale length, and calculates the tension based on length, gauge and the desired pitch, giving you an output in pounds.
What I'm having a problem with: I want the calculator to fetch the unit weight based on the gauge number I input.
I want to change the "u" in
u <- prompt "Gauge?" -- Unit Weight
to a "g", and if "g" matches a number from 0.07 to 0.80, look up the unit weight "u" that corresponds to that gauge in a table. For example, the unit weight for a gauge of 0.80 would be 0.00115011, and I want that value to be used in the equation.
How would I go about that? What do I need to do to create a table/list of "g" values giving "u"s?
The table and the equation I'm using, by the way: http://www.daddario.com/upload/tension_chart_13934.pdf

I'll give you a hint for a basic solution.
Define your map as a list of pairs (gauge, unitWeight). Make it ordered according to gauge.
gw :: [(Double, Double)]
gw = [(1, 2), (2, 4), ... ]
Then, assuming you have a g to look up, you want to discard from gw all the initial pairs whose gauge is < g. For that, you can use
dropWhile (some predicate here) gw
and then take the first pair in the remaining list and extract the unit weight.
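For concreteness, here is a minimal sketch of that lookup, assuming gw is the (gauge, unit weight) table above, ordered by ascending gauge (unitWeight is just an illustrative name):

-- unit weight of the first table entry whose gauge is >= g;
-- Nothing if g is larger than every gauge in the table
unitWeight :: Double -> Maybe Double
unitWeight g =
  case dropWhile (\(gauge, _) -> gauge < g) gw of
    []           -> Nothing
    ((_, u) : _) -> Just u

In main you could then write g <- prompt "Gauge?" and pattern match on unitWeight g before calling eval.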
This is not terribly efficient since you scan almost the whole list every time, depending on your gauge. One could improve this with Data.Map.Map. Yet, for a beginner exercise, using a list should do. You probably won't have a huge amount of list items anyway, so the program should still be pretty fast.
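If you did reach for Data.Map, the whole lookup collapses into one call; a sketch under the same assumptions (lookupGE returns the entry with the smallest key >= g):

import qualified Data.Map as Map

unitWeight' :: Double -> Maybe Double
unitWeight' g = snd <$> Map.lookupGE g (Map.fromList gw)

Building the Map once and reusing it is the point, of course; rebuilding it on every lookup as written here is only for brevity.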

Related

Breadth-first search with spiral ordered list of coordinates in Haskell

edit: This is getting downvoted, but it's not been made clear what data structure I should be using instead of a list of coordinates. Unfortunately my data comes as a flat list and it needs to be distributed in an outwards clockwise spiral, then a BFS run on that to work out islands. I used coordinates, which is what the C++ tutorial seems to do (I have zero C++ experience though), but it seems that was a bad route to take in Haskell.
I'm trying to accumulate a list of touching land cells grouped by islands.
Looking at the image below I'd expect 5 islands, each with the cells on that island: [[Cell]].
My input is currently a flat list of cells ordered in a clockwise spiral (the red dotted line), each carrying the population of the cell: 0 makes it sea and anything >= 1 is the population.
data Cell = Cell
  { cellLoc :: (Int, Int)
  , cellPop :: Int -- 0 = sea, >= 1 = population of the land cell
  }

startingCellList :: [Cell]
startingCellList =
  [ Cell (1, 0)   0
  , Cell (1, -1)  0
  , Cell (0, -1)  0
  , Cell (-1, -1) 4
  ]
The cellLoc gives me the coordinates of a cell in an X/Y plane with (0,0) at the centre of the grid. Am I right in thinking I can use those coordinates to run my BFS?
Or do I need to rethink the use of coordinates to achieve my grid?
I've also found this great example, but I'm not grasping its use of vertices and how, or whether, I can relate it to using coordinates.
You can convert the list [(Int,Int)] into a Data.Set (Int,Int). Then you can quickly compute adjacency for your graph in the following way. Using this you can build your graph algorithm that finds components (in the complement of the graph, whatever).
import qualified Data.Set as Set

-- compute all possible neighbours as difference vectors
adjDiff :: [(Int, Int)]
adjDiff = [(dx, dy) | dx <- [-1..1], dy <- [-1..1], (dx, dy) /= (0, 0)]

-- given a cell, compute all potential neighbouring cells
adjFull :: (Int, Int) -> [(Int, Int)]
adjFull (x, y) = [(x', y') | (dx, dy) <- adjDiff, let x' = x + dx, let y' = y + dy]

-- given a set of valid cells and a cell, compute all valid neighbours of this cell
adj :: Set.Set (Int, Int) -> (Int, Int) -> [(Int, Int)]
adj validCells cell = [n | n <- adjFull cell, n `Set.member` validCells]
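Building on that, here is one way the component search itself could look: a flood fill (a level-by-level BFS) over the set of land coordinates, reusing adj from the snippet above. islands, grow and Coord are names introduced here, and the set is assumed to contain only land cells.

import qualified Data.Set as Set

type Coord = (Int, Int)

islands :: Set.Set Coord -> [[Coord]]
islands land
  | Set.null land = []
  | otherwise     = component : islands rest
  where
    seed              = Set.findMin land
    (component, rest) = grow [seed] (Set.delete seed land)

    -- expand the current frontier, remove what was reached,
    -- and keep going until the frontier is empty
    grow []       remaining = ([], remaining)
    grow frontier remaining = (frontier ++ more, remaining')
      where
        reached            = Set.fromList (concatMap (adj remaining) frontier)
        (more, remaining') = grow (Set.toList reached) (Set.difference remaining reached)

Feeding it something like Set.fromList [cellLoc c | c <- startingCellList, cellPop c >= 1] would give one list of coordinates per island, which can then be mapped back to the Cell values.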

Generating clustered spatstat marks for a ppp object

This question is very close to what has been asked here. The answer is great if we want to attach random marks to an already existing point pattern - we draw from a multivariate normal distribution and associate a value with each point.
However, I need to generate marks that follow the marks given in the lansing dataset that comes with spatstat, but for my own point pattern. In other words, I have a point pattern without marks and I want to simulate marks with a definite pattern (for example, to illustrate the concept of segregation for my own data). How do I make such marks? I understand the number of points could differ between lansing and my data set, but I am allowed to reduce the window or create more points. Thanks!
Here is another version of segregation in four different rectangular
regions.
library(spatstat)
p <- c(.6, .2, .1, .1)
prob <- rbind(p,
              p[c(4, 1:3)],
              p[c(3:4, 1:2)],
              p[c(2:4, 1)])
X <- unmark(spruces)
labels <- factor(LETTERS[1:4])
subwins <- quadrats(X, 2, 2)
Xsplit <- split(X, subwins)
rslt <- NULL
for(i in seq_along(Xsplit)){
  Y <- Xsplit[[i]]
  marks(Y) <- sample(labels, size = npoints(Y),
                     replace = TRUE, prob = prob[i,])
  rslt <- superimpose(rslt, Y)
}
plot(rslt, main = "", cols = 1:4)
plot(subwins, add = TRUE)
Segregation refers to the fact that one species predominates in a
specific part of the observation window. An extreme example would be to
segregate completely based on e.g. the x-coordinate. This would generate strips
of points of different types:
library(spatstat)
X <- lansing
Y <- cut(X, X$x, breaks = 6, labels = LETTERS[1:6])
plot(Y, cols = 1:6)
Without knowing more details about the desired type of segregation it is
hard to suggest something more useful.

Random effects modeling using mgcv and using lmer. Basically identical fits but VERY different likelihoods and DF. Which to use for testing?

I am aware that there is a duality between random effects and smooth curve estimation. At this link, Simon Wood describes how to specify random effects using mgcv. Of particular note is the following passage:
For example if g is a factor then s(g,bs="re") produces a random coefficient for each level of g, with the random coefficients all modelled as i.i.d. normal.
After a quick simulation, I can see this is correct, and that the model fits are almost identical. However, the likelihoods and degrees of freedom are VERY different. Can anyone explain the difference? Which one should be used for testing?
library(mgcv)
library(lme4)
set.seed(1)
x <- rnorm(1000)
ID <- rep(1:200,each=5)
y <- x
for(i in 1:200) y[which(ID==i)] <- y[which(ID==i)] + rnorm(1)
y <- y + rnorm(1000)
ID <- as.factor(ID)
# gam (mgcv)
m <- gam(y ~ x + s(ID,bs="re"))
gam.vcomp(m)
coef(m)[1:2]
logLik(m)
# lmer
m2 <- lmer(y ~ x + (1|ID))
sqrt(VarCorr(m2)$ID[1])
summary(m2)$coef[,1]
logLik(m2)
mean( abs( fitted(m)-fitted(m2) ) )
Full disclosure: I encountered this problem because I want to fit a GAM that also includes random effects (repeated measures), but need to know if I can trust likelihood-based tests under those models.

Computing recurrence relations in Haskell

Greetings, StackOverflow.
Let's say I have the two following recurrence relations for computing S(i,j).
I would like to compute the values S(0,0), S(0,1), S(1,0), S(2,0), etc. in an asymptotically optimal way. A few minutes with pencil and paper reveal that the computation unfolds into a tree-like structure which can be traversed in several ways. Now, it's unlikely the tree will be useful later on, so for now I'm looking to produce a nested list like [[S(00)],[S(10),S(01)],[S(20),S(21),S(12),S(02)],...]. I have created a function to produce a flat list of S(i,0) (or S(0,j), depending on the first argument):
osrr xpa p predexp = go
  where
    -- the flat list of values, defined in terms of itself
    go = os00 : os00 * (xpa + rp) : zipWith3 osrr' [1..] (tail go) go
    osrr' n a b = xpa * a + rp * n * b
    os00 = sqrt (pi / p) * predexp
    rp = recip (2 * p)
I am, however, at loss as how to proceed further.
I would suggest writing it in a direct recursive style and using memoization to create your traversal:
import qualified Data.MemoCombinators as Memo

osrr p = memoed
  where
    memoed = Memo.memo2 Memo.integral Memo.integral osrr'
    osrr' a b = ... -- recursive calls go to memoed (not osrr or osrr')
The library will create an infinite table to store values you have already computed. Because the memo constructors are under the p parameter, the table exists for the scope of p; i.e. osrr 1 2 3 will create a table for the purpose of computing S(2,3) with p = 1, and then clean it up. You can reuse the table for a specific p by partially applying:
osrr1 = osrr p
Now osrr1 will share the table between all its calls (which, depending on your situation, may or may not be what you want).
First, there must be some boundary conditions that you've not told us about.
Once you have those, try stating the solution as a recursively defined array. This works as long as you know an upper bound on i and j. Otherwise, use memo combinators.
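To illustrate what a recursively defined array can look like here, below is a minimal sketch. The actual S(i,j) recurrence isn't shown in the question, so the rules marked as placeholders are made up purely to show the self-referential array; sTable is an illustrative name.

import Data.Array

sTable :: Int -> Int -> Array (Int, Int) Double
sTable imax jmax = table
  where
    -- every entry is defined via `table` itself; laziness fills it in on demand
    table = array ((0, 0), (imax, jmax))
                  [ ((i, j), go i j) | i <- [0 .. imax], j <- [0 .. jmax] ]
    go 0 0 = 1                                       -- placeholder base case
    go i 0 = fromIntegral i * table ! (i - 1, 0)     -- placeholder rule
    go 0 j = fromIntegral j * table ! (0, j - 1)     -- placeholder rule
    go i j = table ! (i - 1, j) + table ! (i, j - 1) -- placeholder rule

Each entry is computed at most once, and the nested list from the question can then be read off by walking the anti-diagonals of the array, if that is the grouping you want.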

Problem detecting cyclic numbers in Haskell

I am doing problem 61 at Project Euler and came up with the following code (to test the case they give):
p3 n = n*(n+1) `div` 2
p4 n = n*n
p5 n = n*(3*n -1) `div` 2
p6 n = n*(2*n -1)
p7 n = n*(5*n -3) `div` 2
p8 n = n*(3*n -2)
x n = take 2 $ show n
x2 n = reverse $ take 2 $ reverse $ show n
pX p = dropWhile (< 999) $ takeWhile (< 10000) [p n|n<-[1..]]
isCyclic2 (a,b,c) = x2 b == x c && x2 c == x a && x2 a == x b
ns2 = [(a,b,c)|a <- pX p3 , b <- pX p4 , c <- pX p5 , isCyclic2 (a,b,c)]
All ns2 does is return an empty list, yet isCyclic2 succeeds on the example numbers given in the question, and that series doesn't come up in the solution. The problem must lie in the list comprehension ns2, but I can't see where. What have I done wrong?
Also, how can I make it so that pX only generates values up to the value used in the previous pX?
PS: in case you thought I completely missed the problem, I will get my final solution with this:
isCyclic (a,b,c,d,e,f) = x2 a == x b && x2 b == x c && x2 c == x d && x2 d == x e && x2 e == x f && x2 f == x a
ns = [[a,b,c,d,e,f]|a <- pX p3 , b <- pX p4 , c <- pX p5 , d <- pX p6 , e <- pX p7 , f <- pX p8 ,isCyclic (a,b,c,d,e,f)]
answer = sum $ head ns
The order is important. The cyclic numbers in the question are 8128, 2882, 8281, and these are not P3/127, P4/91, P5/44 but P3/127, P5/44, P4/91.
Your code is only checking in the order 8128, 8281, 2882, which is not cyclic.
You would get the result if you check for
isCyclic2 (a,c,b)
in your list comprehension.
EDIT: Wrong problem! I assumed you were talking about the circular number problem, sorry!
There is a more efficient way to do that check with something like this:
take (2 * l x - 1) . cycle $ show x
where l = length . show
Try that and see where it gets you.
If I understand you right here, you're no longer asking why your code doesn't work but how to make it faster. That's actually the whole fun of Project Euler: finding an efficient way to solve the problems. So proceed with care and first try to think of reducing your search space yourself. I suggest you let Haskell print out the three lists pX p3, pX p4, pX p5 and see how you'd go about looking for a cycle.
If you proceeded like your list comprehension, you'd start with the first element of each list: 1035, 1024, 1001. I'm pretty sure you would stop right after picking 1035 and 1024 and not test for cycles with any value from P5, let alone try all the permutations of the combinations involving these two numbers.
(I haven't actually worked on this problem yet, so this is how I would go about speeding it up. There may be some math wizardry out there that's even faster)
First, start looking at the numbers you get from pX. You can drop more than those. For example, P3 contains 6105 - there's no way you're going to find a number in the other sets starting with '05'. So you can also drop those numbers where the number modulo 100 is less than 10.
Then (for the case of 3 sets), we can sometimes see after drawing two numbers that there can't be any number in the last set that will give you a cycle, no matter how you permutate (e.g. 1035 from P3 and 3136 from P4 - there can't be a cycle here).
I'd probably try to build a chain by starting with the elements from one list, one by one, and for each element, find the elements from the remaining lists that are valid successors. For those that you've found, continue trying to find the next chain element from the remaining lists. When you've built a chain with one number from every list, you just have to check if the last two digits of the last number match the first two digits of the first number.
Note when looking for successors, you again don't have to traverse the entire lists. If you're looking for a successor to 3015 from P5, for example, you can stop when you hit a number that's 1600 or larger.
If that's too slow still, you could transform the lists other than the first one to maps where the map key is the first two digits and the associated values are lists of numbers that start with those digits. Saves you from going through the lists from the start again and again.
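A tiny sketch of that grouping with Data.Map (byPrefix is just an illustrative name; the key is the two leading digits):

import qualified Data.Map as Map

-- group a list by the first two digits of each number, so the successors of a
-- given number can be looked up directly via its last two digits
byPrefix :: [Int] -> Map.Map String [Int]
byPrefix ns = Map.fromListWith (++) [ (take 2 (show n), [n]) | n <- ns ]

Looking up the last two digits of the previous number in that map then replaces scanning the list from the start.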
I hope this helps a bit.
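As a rough illustration of the chain-building idea described above (not a full Euler 61 solution), it could look something like the following; chains, extend, firstTwo and lastTwo are names introduced here, and the input lists are assumed to be the four-digit lists produced by pX.

import Data.List (permutations)

firstTwo, lastTwo :: Int -> String
firstTwo n = take 2 (show n)
lastTwo  n = drop (length (show n) - 2) (show n)

-- one number from every list, consecutive picks overlapping by two digits,
-- and the last pick wrapping around to the first
chains :: [[Int]] -> [[Int]]
chains []             = []
chains (first : rest) =
  [ chain
  | order <- permutations rest          -- try the remaining lists in every order
  , start <- first
  , chain <- extend [start] order
  ]
  where
    extend picked [] =
      [ reverse picked | lastTwo (head picked) == firstTwo (last picked) ]
    extend picked (set : sets) =
      [ chain
      | n <- set
      , lastTwo (head picked) == firstTwo n   -- prune invalid successors early
      , chain <- extend (n : picked) sets
      ]

With the pX lists from the question, chains [pX p3, pX p4, pX p5] should turn up the 8128, 2882, 8281 cycle.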
By the way, I sense some repetition in your code.
You can unite your p3, p4, p5, p6, p7, p8 functions into one function that takes the 3 from p3 (and so on) as a parameter. To find the pattern, you can write all of them in the form
pX n = ... `div` 2
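For instance, the general s-gonal number formula ((s - 2)*n^2 - (s - 4)*n) / 2 covers all six cases; a small sketch (poly is just an illustrative name):

-- s-gonal number: poly 3 is p3 (triangular), poly 4 is p4 (square), ..., poly 8 is p8 (octagonal)
poly :: Integer -> Integer -> Integer
poly s n = ((s - 2) * n * n - (s - 4) * n) `div` 2

Then pX (poly 5), for example, would give the same pentagonal list as pX p5.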
