Solving Acceleration for Time with a limited velocity [closed] - excel

I'm working on a calculator in Excel for interstellar travel times. I'm currently solving the constant-acceleration equation for time to arrival like so:
=SQRT(distance*2/acceleration)
which seems to work fine for me, except that if I give it a large enough acceleration and a long enough distance, I get back a maximum velocity that is higher than the speed of light.
What I want to do is add some limiting factor into the formula that caps the velocity at some number, but I have no idea how to do this in the mathematics (disclaimer: I'm a writer and artist, so I suck badly at math). I think I need to incorporate something like V = min(C, d/t), where C is the speed of light, but I don't know how to work that into my function. Since the rest of this works without having to chart out periods of time, I'd prefer a solution in the formula itself rather than some roundabout recursive chart trickery. Any ideas?

The right solution is of course to use the relativistic equation for the velocity after "constant acceleration" (which doesn't exist when you get near the speed of light). I suspect you mean "constant apparent acceleration" (what the passengers in the rocket feel). In that case, relativistically,
v = c * tanh(asinh(F*t/(m*c)))
Where
v = velocity
F = force
t = time
m = mass
c = speed of light
Then you can write F = m * a, so F*t/(m*c) is just acceleration*time/c,
which you can write in Excel (after defining the constant C_ = 3E8 ) as
=C_ * TANH(ASINH(acceleration*time/C_))
This will initially give you linear acceleration as expected - then it will taper off and never quite reach the speed of light:
[plot omitted: velocity vs. time - the naive straight line versus the relativistic curve levelling off below c]
It seems to me that this equation is "the right one" for your particular application - you are not really trying to be super accurate, just have something that at least doesn't go faster than the speed of light, and transitions smoothly. In reality, what a rocket motor can do at these very high velocities, how the mass of the rocket is changing - all those things make the math a lot more complicated.
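To make the formula concrete, here is a quick sketch in Python (my illustration, not part of the original answer; the constant and function names are my own):

import math

C = 3e8  # speed of light in m/s (the answer's C_ constant)

def relativistic_velocity(acceleration, time):
    # v = c * tanh(asinh(a*t/c)) -- the same as the Excel formula above
    return C * math.tanh(math.asinh(acceleration * time / C))

for t in (1e4, 1e7, 1e9):        # seconds at 1 g of apparent acceleration
    naive = 9.81 * t             # the uncapped v = a*t
    capped = relativistic_velocity(9.81, t)
    print(f"t={t:.0e}s  naive={naive:.3e} m/s  relativistic={capped:.3e} m/s")

At small t the two values agree almost exactly; at large t the relativistic one pins just under 3e8 m/s.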
Update: if you want to achieve a result like the above but only have "distance" and "acceleration", we need to be a little bit tricky. Of course distance is something that depends on your frame of reference - it's different for the people in the rocket vs a stationary observer. So we are going to throw "real physics" out of the window for a minute and do something else. The straight red line in my plot represents "how fast you would be going if you kept accelerating" - this is the velocity of your initial calculation.
You can convert that to the "real" velocity with a simple
limitedVelocity = C_ * TANH(ASINH(calculatedVelocity / C_))
This is more in keeping with the question you asked, and allows you to stay in the framework you had (where you know "acceleration" and "distance" - whatever those mean in your world.)
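Continuing the Python sketch above (again my illustration, not the answerer's), the same conversion applied to the naive velocity:

def limited_velocity(calculated_velocity):
    # limitedVelocity = C_ * TANH(ASINH(calculatedVelocity / C_))
    return C * math.tanh(math.asinh(calculated_velocity / C))

print(limited_velocity(1e8))    # ~9.49e7 m/s - already noticeably reduced
print(limited_velocity(1e10))   # ~3.00e8 m/s - pinned just under c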
Relativity. Blows your mind.
Afterthought
An accelerating spaceship is in a non-inertial frame of reference. The clock on board runs at a different speed (slower) than the "clock in the universe". Inside the spaceship, the distance to the destination appears to shrink (Lorentz contraction) as they go faster. All this means that the "real" calculation depends on factors and assumptions that were not explicitly stated in the question. But since this is about an "interstellar travel calculator" by a self-professed non-physicist, I think it is better not to turn this into a second-year General Relativity course.

You can use an IF statement:
IF(logical_test, [value_if_true], [value_if_false])
As in:
=IF( SQRT(distance*2/acceleration) > C , C , SQRT(distance*2/acceleration) )
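A side note of mine, not part of the original answer: Excel's MIN function gives the same cap without writing the square root twice, and if I remember correctly Excel won't let you define the plain name C (C and R are reserved for R1C1-style references), which is why the first answer used C_ instead:
=MIN(C_, SQRT(distance*2/acceleration))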

Related

Surface Optimisation with Excel

I have to optimise the manufacturing of wood planks.
I have base planks in two possible dimensions: L1 x l1 and L2 x l2.
I was asked to produce planks of various known dimensions (some in more than one copy) from these base planks, the goal being to optimise the process and use as few base planks as possible. Note that each plank must be cut entirely from a single base plank, and thus cannot be made in 2 or more parts.
I first made a simple Excel calculation over a single variable dimension, but it did not take into account that one plank can be cut below another on the same base plank.
I then got interested in optimisation. If I'm not mistaken, I'm looking to minimise the difference:
(number of base planks x total area of a base plank) - (sum of the areas of the planks to make)
However, I'm having great difficulty writing mathematically the condition "each plank is made entirely from one and the same base plank, L1 x l1 or L2 x l2".
Would someone know how to do that? I've looked on the internet but found nothing. Also, do you think it's possible in basic Excel (I've never used VBA)?
Feel free to ask any questions (my English isn't the best; sorry if my explanation isn't clear enough :) )
Thanks and have a good day :)
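For what it's worth, here is one standard way to write that condition down (my sketch, not from the thread) - this is a 2D cutting-stock problem, and the "one and the same base plank" condition becomes a binary assignment variable:

x[i][j] = 1 if plank i is cut from base plank j, else 0
y[j]    = 1 if base plank j is used at all, else 0

minimise    sum over j of y[j]
subject to  sum over j of x[i][j] = 1                      for every plank i
            sum over i of area[i]*x[i][j] <= Area[j]*y[j]  for every base plank j

The first constraint says each plank comes from exactly one base plank. The area constraint is necessary but not sufficient - it does not guarantee the rectangles actually fit together on the board - so real cutting-stock methods add geometric (e.g. guillotine-cut) constraints or enumerate feasible cutting patterns. Excel's built-in Solver can handle small instances of a binary model like this without VBA.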

Haskell IdleCallback too slow

I just started designing some graphics in Haskell. I want to create an animated picture with a rotating sphere, so I created an IdleCallback function to constantly update the angle value:
idle :: IORef GLfloat -> IdleCallback
idle angle = do
  a <- get angle
  angle $= a + 1
  postRedisplay Nothing
I'm adding 1 each time to the angle because I want to make my sphere smoothly rotate, rather than just jump from here to there. The problem is that now it rotates TOO slow. Is there a way to keep the rotation smooth and make it faster??
Thanks a lot!
There's not a lot to go on here. I don't see an explicit delay anywhere, so I'm guessing it's slow just because of how long it takes to update?
It also doesn't look explicitly recursive, so it seems like the problem is outside the scope of this snippet.
Also I don't know which libraries you may be using.
In general, though, that IORef makes me feel unhappy.
While it may be common in other languages to have global variables, IORefs in Haskell have their place, but are often a bad sign.
Even in another language, I don't think I'd do this with a global variable.
If you want to do time-updating things in Haskell, one "common" approach is to use a Functional Reactive Programming library.
They are built to have chains of functions that trigger off of a signal coming from outside, modifying the state of something, which eventually renders an output.
I've used them in the past for (simple) games, and in your case you could construct a system that is fed a clock signal 24 times per second, or whatever, and uses that to update the counter and yield a new image to blit.
My answer is kind of vague, but the question is a little vague too, so hopefully I've at least given you something to look into.
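One concrete, framework-agnostic way to get "faster but still smooth" (my suggestion, not from the thread, and complementary to the FRP advice above): advance the angle by elapsed wall-clock time instead of a fixed step per callback, so the rotation speed no longer depends on how often the idle callback fires. A minimal Python sketch of the idea (the Spinner class and numbers are mine):

import time

class Spinner:
    def __init__(self, degrees_per_second=90.0):
        self.angle = 0.0
        self.speed = degrees_per_second
        self.last = time.monotonic()

    def idle(self):
        # advance by elapsed time: smoothness comes from small frequent
        # updates, speed comes from degrees_per_second, not callback rate
        now = time.monotonic()
        self.angle = (self.angle + self.speed * (now - self.last)) % 360.0
        self.last = now

s = Spinner()
for _ in range(3):
    time.sleep(0.01)
    s.idle()
    print(s.angle)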

Why is this an invalid Turing machine? [closed]

Whilst doing exam revision I am having trouble answering the following question from the book, "An Introduction to the Theory of Computation" by Sipser. Unfortunately there's no solution to this question in the book.
Explain why the following is not a legitimate Turing machine.
M = {
The input is a polynomial p over variables x1, ..., xn
Try all possible settings of x1, ..., xn to integer values
Evaluate p on all of these settings
If any of these settings evaluates to 0, accept; otherwise reject.
}
This is driving me crazy! I suspect it is because the set of integers is infinite? Does this somehow exceed the alphabet's allowable size?
Although this is quite an informal way of describing a Turing machine, I'd say the problem is one of the following:
"Otherwise reject" - I agree with Welbog on that. Since you have a countably infinite set of possible settings, the machine can never know whether a setting that evaluates to 0 is still to come, and it will loop forever if it doesn't find one - only when such a setting is encountered may the machine stop. That last statement is unreachable and will never execute, unless of course you limit the machine to a finite set of integers.
The order of the steps: I would read this pseudocode as "first write all possible settings down, then evaluate p on each one", and there's your problem: with an infinite set of possible settings, not even the first step ever terminates, because there is no last setting to write down before continuing with the next step. In that reading, the machine not only can never say "there is no 0 setting", it can never even start evaluating to find one. This, too, would be solved by limiting the integer set.
Anyway, I don't think the problem is the alphabet's size. You wouldn't need an infinite alphabet, since your integers can be written in decimal / binary / etc., and those use only a (very) finite alphabet.
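Coming back to the enumeration-order point above: here is a sketch in Python (my illustration, not from the answers) of how Z^n can be enumerated one tuple at a time by dovetailing, so a machine can interleave "generate next setting" and "evaluate p" - it accepts whenever a root exists, but it can never exhaust the settings and reject:

from itertools import count, product

def integer_settings(n):
    # enumerate all of Z^n grouped by maximum absolute value,
    # so every tuple shows up after finitely many steps
    for bound in count(0):
        for tup in product(range(-bound, bound + 1), repeat=n):
            if max(map(abs, tup), default=0) == bound:  # skip already-seen tuples
                yield tup

# semi-decide "p has an integer root" for p(x, y) = x^2 + y^2 - 25
for x, y in integer_settings(2):
    if x * x + y * y - 25 == 0:
        print("accept:", (x, y))  # finds (-5, 0) after finitely many steps
        break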
I'm a bit rusty on Turing machines, but I believe your reasoning is correct, i.e. the set of integers is infinite, therefore you cannot try them all. I am not sure how to prove this theoretically, though.
However, the easiest way to get your head around Turing machines is to remember that "anything a real computer can compute, a Turing machine can also compute". So if you could write a program that, given a polynomial, carries out these three steps, you would be able to find a Turing machine that does it too.
I think the problem is with the very last part: otherwise reject.
According to countable set basics, the set of n-tuples of integers is countable, since it is a finite product of countable sets. So the set of settings is countable, and it is therefore possible to try every combination of them. (That is to say, without missing any combination.)
Also, computing the result of p on a given set of inputs is possible.
And entering an accepting state when p evaluates to 0 is also possible.
However, since there is an infinite number of input vectors, you can never finish trying them all, and so you can never reject the input. Therefore no Turing machine can follow all of the rules defined in the question. Without that last rule, it is possible.

Finite questions

Are there a finite number of questions that can be asked about a specific language (and/or topic)? For example, for T-SQL, given that there are only so many commands, can there be a limited number of non-repetitive questions? And if so, could you use that to determine sizing for a site like Stack Overflow, and to determine the probability of a new question being a repeat of a prior one? If there is a finite number, how would you determine or calculate it? For instance, T-SQL has x commands, and each one has a set of relevant questions (syntax, example of use, etc.) - so could the number of questions = x times potential questions per command times some relevant variation? Or something like that?
No, since programs can theoretically be of arbitrary length, and this site is not just about language commands, but about programs developed with those languages.
I'm pretty sure Turing says no, and if you don't believe him then Gödel might have something to say about it.
A Stack Overflow question is expressed as a finite-length sequence of bytes. One could in principle consider the question body as an integer, expressed lowest digit first, in base 256 (or larger, if you wish to think of it as Unicode). This is a bijection between questions and whole numbers. Therefore the set of all Stack Overflow questions has a countably infinite cardinality. (How do I typeset \aleph_0 on SO?)
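As a toy illustration of that encoding (mine, not the answerer's; trailing zero bytes aside, it round-trips):

def question_to_int(body):
    # read the UTF-8 bytes as a base-256 integer, lowest digit first
    return sum(b * 256 ** i for i, b in enumerate(body.encode("utf-8")))

def int_to_question(n):
    digits = bytearray()
    while n:
        n, b = divmod(n, 256)
        digits.append(b)
    return digits.decode("utf-8")

n = question_to_int("Why?")
assert int_to_question(n) == "Why?"  # every question is a whole number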

Moore’s Law Problem [closed]

Suppose you need to run a program on the world’s fastest supercomputer, and it will take 10 years to complete. You could:
Spend $250M now and run it for 10 years, or
Program for 9 years, take the Moore’s Law speedup (4,000x faster), spend $1M in 10 years, and complete the run in 2 weeks.
What is the optimum strategy?
Question from "Long Term Storage Trends and You"
Moore's Law is not about speed; it's about the number of transistors in a given area of silicon. There is no guarantee that in 9 years the speed will increase 4,000x. If anything, GHz speeds have levelled off in recent years. What is increasing, currently, is the number of cores in a CPU.
So if your program does not lend itself to vectorisation (i.e. it cannot be split into distinct parts that can be computed in parallel), then waiting 9 years will provide little benefit: it won't be that much faster, as clock speeds are unlikely to rise much in the intervening years.
Assuming the program is infinitely parallelizable (so it can always take advantage of all cores of all CPUs available)...
Assuming the program cannot be paused and moved to a different machine in mid-run...
Assuming time is the only issue (maybe we have a big research grant and we always use the best computers available)...
We have four equations (well, actually two of them are functions):
endtime(startyear) = startyear + (calculations / speed(startyear))
speed(year) = speed(year-1.5)*4 (the problem assumes both hardware and software double in speed every 18 months; over 9 years that compounds to 4^6 = 4096 ≈ 4,000x)
endtime(0) = 0 + (calculations/speed(0)) = 10 years
speed(0) = calculations/(10 years) (implied by #3)
I started to use derivatives to minimize endtime, but I realized I can't remember my differential equations very well. So I transformed #2 into the equivalent exponential-growth formula:
speed(year) = speed(0)*4^(year/1.5) = (calculations/10)*4^(year/1.5)
Then I wrote this little BeanShell script:
calculations() {
    return 10000000; // random constant (gets cancelled out anyway)
}

speed(year) {
    speed0 = calculations()/10; // constant factor
    return speed0*Math.pow(4.0, year/1.5);
}

endtime(startyear) {
    return startyear + calculations()/speed(startyear);
}

findmin() {
    start = 0.0;
    finish = 10.0;
    result = 0.0;
    // home in on the best solution (there should only be one minimum)
    for (inc = 1; inc > 0.00000001; inc /= 2.0) {
        result = findmin(start, finish, inc);
        start = result - 2*inc;
        finish = result + inc;
    }
    print("Minimum value is " + result + ", taking a total of " +
          endtime(result) + " years");
}

findmin(start, finish, inc) {
    lastNum = 0;
    lastVal = Double.MAX_VALUE;
    for (i = start; i < finish; i += inc) {
        result = endtime(i);
        if (result > lastVal) {
            print("Minimum value between " + start + " and " + finish +
                  " is " + lastVal + ", occurring at " + lastNum);
            return i;
        }
        lastNum = i;
        lastVal = result;
    }
    return lastNum;
}
Output:
bsh % source("moore.bsh");
bsh % findmin();
Minimum value between 0.0 and 10.0 is 3.5749013123685915, occurring at 2.0
Minimum value between 1.0 and 4.0 is 3.4921256574801243, occurring at 2.5
Minimum value between 2.0 and 3.5 is 3.4921256574801243, occurring at 2.5
Minimum value between 2.25 and 3.0 is 3.4886233976754246, occurring at 2.375
Minimum value between 2.25 and 2.625 is 3.488620519067143, occurring at 2.4375
Minimum value between 2.375 and 2.5625 is 3.488170701257679, occurring at 2.40625
Minimum value between 2.375 and 2.46875 is 3.488170701257679, occurring at 2.40625
Minimum value between 2.390625 and 2.4375 is 3.488170701257679, occurring at 2.40625
(snip)
Minimum value between 2.406149387359619 and 2.4061494767665863 is 3.4881706965827037,
occurring at 2.4061494171619415
Minimum value is 2.4061494320631027, taking a total of 3.488170696582704 years
So, with the assumptions I stated before, the answer is to wait 2.406149... years (or approximately 2 years, 148 days, according to Google).
Edit: I noticed that with the second formula rewritten as above, solving this only requires simple calculus.
endtime(x) = x + c/speed(x) (where c = calculations)
speed(x) = speed(0) * 4^(x/1.5) = (c/10)*4^(2x/3)
=> endtime(x) = x + c/((c/10)*4^(2x/3))
= x + 10*(4^(-2x/3))
d/dx endtime(x) = 1 + 10*ln(4)*(-2/3)*(4^(-2x/3))
Critical point is when d/dx = 0, so
1 + 10*ln(4)*(-2/3)*(4^(-2x/3)) = 0
=> 4^(-2x/3) = 1/(10*ln(4)*(2/3))
Take log4 of both sides: (remember that log4(x) = ln(x)/ln(4), and that ln(1/x) = -ln(x))
-2x/3 = ln(1/(10*ln(4)*(2/3))) / ln(4)
= -ln(10*ln(4)*2/3) / ln(4)
=> x = (-3/2) * -ln(10*ln(4)*2/3) / ln(4)
= 3*ln(10*ln(4)*(2/3)) / (2*ln(4))
That looks like an awful mess (it doesn't help that there's no good way to show math formulas here). But if you plug it into your calculator, you should get 2.4061494159159814141268120293221 (at least if you use the Windows calculator, like I just did). So my previous answer was correct to seven decimal places (which are meaningless in a problem like this, of course).
(I should note that this is just a critical point, not necessarily a minimum. But the second derivative (which is of the form (some positive constant)*4^(-2x/3)) is always positive. So the function is always concave up, therefore the only critical point is the minimum.)
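A quick numeric check of that closed form (my sketch, in Python):

import math

K = 10 * math.log(4) * (2 / 3)
x = 1.5 * math.log(K) / math.log(4)   # the closed-form critical point
print(x)                              # 2.4061494159159813...
print(x + 10 * 4 ** (-2 * x / 3))     # total: 3.48817... years, matching the script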
Moore's Law is concerned with the number of transistors that will be placed into one single chip and does not relate to the speed of microprocessors in general.
That said, from the current trend we are seeing, we'll probably see more and more cores being fit into a single processor die, so concurrent programming is going to become more and more important to take advantage of the raw processing power available in a processor.
So, it's hard to say whether to do it now or wait -- however, either way, concurrent programming or distributed computing is going to come into play, as we won't be seeing a single core processor becoming exponentially faster (in terms of clock speed) due to the physical limitations of current semiconductor technology and the laws of nature.
Make sure your program can pause and continue, and then put it on faster and faster machines as they come along. Best of both worlds...
Spend the money now - comparing the price/value of the dollar now vs an estimate in 10 years is like trying to forecast the weather 3 months from now. Plus, this fails to consider factors like programming trends over the next 10 years, and whether things will actually be 4,000 times faster or rather 4,000 times more scalable/parallel, which seems to be the trend of late.
Also, according to the Mayans the world will end in 2012 so spend the loot now!
Simplify the model to make an estimate that you can run now. As more/better resources become available, refine the model for more accurate results.
The fastest way to complete it would be to:
Write a version for current technology that could be migrated to each new generation.
Alongside migrations, continue programming for any improvements algorithmically etc.
The cheapest way would obviously be to leave it for longer. You do need to factor in programming time (which would be near enough constant).
Also, I wouldn't want to stake too much on Moore's Law continuing.
Also remember that Moore's Law relates to the density of transistors, not to computing speed for a particular problem. Even if computing power in general improves by that much, it doesn't necessarily mean your application will benefit.
But Moore's Law does not speed up programming.
9 years of programming will never be condensed into 2 weeks.
Unless you successfully spend the 9 years programming an automated thought-reading machine, I suppose.
Program for 4 years and then run it in 2.5?
(I'm sure there's a "perfect" answer somewhere between 4 and 5 years...)
The optimum strategy depends on why you have to run the program.
In this scenario, the second option is the best one, because the moment you'll have the result (which is what actually matters) would be the same.
Actually, I believe that if everybody chose the first one (and had the money to do it), Moore's Law would be compromised. I assume that if all our computational needs were satisfied, we wouldn't be so committed to keeping technology development moving forward.
This makes a flawed assumption that Moore's Law is actually a Law. It would probably be better named Moore's Theory. The risk you run by waiting is that in 10 years, it may still take 10 years to run. Start the program now (with pause and restart built in if possible), start a team looking at other ways to solve the problem that will be faster. Once you have confidence that one or the other will provide a quicker solution, switch.
EDIT: As a problem I think the best value in this question is that it makes you examine whether your assumptions are valid. The obvious solution -- since according to the problem definition you get the same result in the same amount of time with less money -- is to wait, but it depends on some implicit assumptions. If those assumptions don't hold, then the obvious solution is not necessarily the best as many answers here attest.
It's specified in the question that the problem runs on a supercomputer, thus the problem must be vectorisable. The speed of supercomputers is going up vastly faster than Moore's Law, so depending on the actual problem space, one approach would be to hire hacker banditos to create a worldwide distributed Warhol Worm that acquires the resources of 85% of the computers on the net for a short time, in a massively distributed grid like the Mersenne prime search (GIMPS), and solve the problem in 20 minutes.
(Many ways to solve a problem, but I sure hope this is labeled as humor.)
