Why is it Ceiling() and Floor() not RoundUp() and RoundDown() - history

Most languages name the native rounding-up function Ceiling() or Ceil() and the rounding-down function Floor(). Yet as far as I know, this notation is pretty much never used outside of programming. Most people just refer to ceiling as "rounding up" and floor as "rounding down", so how did the ceiling/floor notation arise and become popular?

I guess the naming convention has its origins in mathematics and computer science:
http://en.wikipedia.org/wiki/Floor_and_ceiling_functions

Related

Are floating point SMT logics slower than real ones?

I wrote an application in Haskell that calls the Z3 solver to solve constraints with some complex formulas. Thanks to Haskell, I can quickly switch the data type I'm working with.
When using SBV's AlgReal type for computations, I get results in sensible time; however, switching to Float or Double types makes Z3 consume ~2 GB of RAM and it doesn't produce a result even after 30 minutes.
Is it expected that producing floating-point solutions requires much more time, or is it some mistake on my side?
As with any question regarding solver performance, it is impossible to make generalizations. Christoph Wintersteiger (https://stackoverflow.com/users/869966/christoph-wintersteiger) would be the expert on this to opine, but I'm not sure how closely he follows this group. Chris: If you're reading this, I'd love to hear your thoughts!
There's also the risk of comparing apples-to-oranges here: Reals and floats are two completely different logics, with different decision procedures, heuristics, algorithms, etc. I'm sure you can find problems where one outperforms the other, with no clear "performance" winner.
Having said all that, here are some things that make floating-point (FP) tricky:
Rewriting is almost impossible with FP, since rules like associativity simply don't hold for FP addition/multiplication. So, there are fewer opportunities for simplification before bit-blasting.
Similarly, a * 1/a == 1 doesn't hold for floats. Neither does x + 1 /= x, nor (x + a == x) -> (a == 0), nor many other "obvious" simplifications that you'd love to be able to make. All of this complicates reasoning.
The existence of NaN values makes equality non-reflexive: nothing compares equal to NaN, including itself. So, substitution of equals-for-equals is also problematic and requires side conditions.
Likewise, the existence of +0 and -0, which compare equal but behave differently due to rounding, complicates matters. The typical example is that x == 0 -> fma(a, b, x) == a * b doesn't hold (where fma is fused multiply-add), because depending on the sign of zero these two expressions can produce different values for different rounding modes.
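To make a few of these points concrete, here is a small plain-Python sketch (nothing solver-specific, just IEEE-754 double precision) showing how the "obvious" identities above break down:

import math

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: addition is not associative
print(1e16 + 1.0 == 1e16)                      # True: x + 1 == x has solutions
print(float("nan") == float("nan"))            # False: equality is not reflexive for NaN
print(0.0 == -0.0, math.copysign(1.0, -0.0))   # True -1.0: the zeros compare equal yet behave differently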
Combining floats with integers and reals introduces non-linearity, which is always a soft spot for solvers. So, if you're using FP, it is advisable not to mix it with other theories, as the combination itself creates extra complexity.
Operations like multiplication, division, and remainder (yes, there's a floating-point remainder operation!) are inherently very complex and lead to extremely large SAT formulas. In particular, multiplication is a known operation that challenges most SAT/BDD engines, due to the lack of any good variable ordering and splitting heuristics. Solvers end up bit-blasting fairly quickly, resulting in extremely large state-spaces. I have observed that solvers have a hard time dealing with FP division and remainder operations even when the input is completely concrete; imagine what happens when they are fully symbolic!
The logic of reals has a decision procedure with double-exponential complexity. However, techniques like Fourier-Motzkin elimination (https://en.wikipedia.org/wiki/Fourier%E2%80%93Motzkin_elimination), while remaining exponential, perform really well in practice.
FP solvers are relatively new, and it's a niche area with nascent research. So existing solvers tend to be quite conservative and quickly bit-blast and reduce the problem to bit-vector logic. I would expect them to improve over time, just like all the other theories did.
Again, I want to emphasize that comparing solver performance on these two different logics is misguided, as they are totally different beasts. But hopefully the above points illustrate why floating-point is tricky in practice.
A great paper to read about the treatment of IEEE754 floats in SMT solvers is: http://smtlib.cs.uiowa.edu/papers/BTRW14.pdf. You can see the myriad of operations it supports and get a sense of the complexity.

Is it more computationally strenuous to compute an exponential or a Bessel function?

It is often desirable to obtain the solution to a mathematical problem in closed form, that is, as an expression that contains generally accepted functions like polynomials, rational and irrational functions, roots, and exponentials and logarithms. One justification I often hear is that, when known functions are involved, it is easier to visualize the behavior of the function. Another justification is that it is less computationally demanding to evaluate the function at a set of points. While I certainly agree with the first justification, is the second justification reasonable? For example:
Does it take a longer time to compute a modified Bessel function of the first kind and fifth order for 10 points than to compute an exponential?
Does it take a longer time to compute an exponential integral than to compute an exponential?
My intuition is that in all three cases, a Taylor series expansion around the desired point is formed, so it comes down to evaluating a polynomial, some other polynomial, or its antiderivative.
"One justification I often hear is that, when known functions are involved, it is easier to visualize the behavior of the function."
"Another justification is that it is less computationally demanding to evaluate the function at a set of points."
The second justification is only true if the function is "simple", not merely in closed form. You can construct closed forms which are arbitrarily complex and computationally demanding.
I can imagine two genuine advantages of closed form solutions:
Since most programming languages have support for sqrt, sin and the like, closed form solutions are easily representable in code.
If a solution has a closed form then there will be a number of algebraic steps that you can follow to get to the solution – an algebraic (or perhaps trigonometric) solution algorithm. Those algorithms can also only contain "closed form" steps, so they'll be fairly easy to implement.
If you know that the solution of your problem probably doesn't have a closed form, you'll have to resort to an entirely different approach for solving it. This can get quite tricky: the Babylonians were able to solve quadratic equations around 2000 BC, and it took more than another 3000 years until roots of polynomials of arbitrary order could be found – with numerics, not algebra.
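Coming back to the timing part of the original question, a rough empirical check is easy to sketch. This assumes NumPy and SciPy are installed and uses scipy.special.iv (the modified Bessel function of the first kind) on ten illustrative sample points:

import timeit
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

xs = np.linspace(0.1, 10.0, 10)   # ten arbitrary sample points

t_exp = timeit.timeit(lambda: np.exp(xs), number=100000)
t_bessel = timeit.timeit(lambda: iv(5, xs), number=100000)
print("exp(x): %.3f s" % t_exp)
print("I_5(x): %.3f s" % t_bessel)

One would typically expect the Bessel evaluation to be somewhat slower per point than the exponential, but both are cheap enough that the difference rarely matters for a handful of points.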

Trigonometric Functions in Pseudo Code

I'm searching for trigonometric functions in pseudocode.
I'm not good at mathematics, so I can't do much with the formulas on Wikipedia.
Mainly I'm searching for sine, cosine, tangent and their inverse functions (sin⁻¹, cos⁻¹, tan⁻¹).
There are other trigonometric functions too, but for me the above are the most important.
If possible, I would be happy if the pseudocode used only variables, for, if, and the operators +, -, *, /, %, and sqrt(), because I do not have a library with advanced mathematical functions.
Trigonometric functions are transcendental: you cannot find an exact expression for them in terms of polynomial algebra.
You can approximate them, though.
The usual approach is to use periodicity and symmetry to reduce an angle α to an equivalent angle α′ such that sin(α) = sin(α′) but α′ is much smaller than α.
Simply put, you reduce any angle to an angle in the first quadrant or similar; this is easier than it looks.
Once you have a small angle, you can use a Taylor series expansion to compute the function up to a fixed error magnitude.
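As a rough illustration of that approach (a sketch, not taken from any particular reference), here is a sine routine in Python using only basic arithmetic; sine() is a hypothetical helper name:

def sine(x):
    # Range reduction: bring x into [-pi, pi] so the series converges quickly.
    PI = 3.141592653589793
    x = x % (2.0 * PI)
    if x > PI:
        x = x - 2.0 * PI
    # Taylor series: sin(x) = x - x^3/3! + x^5/5! - ...
    term = x
    result = x
    for n in range(1, 10):          # ten terms in total is plenty for |x| <= pi
        term = term * (-x * x) / ((2 * n) * (2 * n + 1))
        result = result + term
    return result

Cosine works the same way with the series 1 - x^2/2! + x^4/4! - ..., and tangent can be obtained as sine(x)/cosine(x); the inverse functions can be computed from their own series or by Newton iteration on the forward functions.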
Another approach is to use a lookup table.
This is especially useful when you can keep track of the required precision of the process, and it is very fast.
However, it takes more memory and may give rise to a step-like function.
Another approach is to use the CORDIC algorithm; it is especially suited for hardware that lacks multiplication support (like some MIPS and ARM chips).
From Wikipedia:
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller) [...]
On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC.
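For completeness, here is a small Python sketch of the CORDIC idea (rotation mode, using floating point rather than the fixed-point arithmetic real hardware would use). The angle table would normally be hard-coded constants; it is built with math.atan here only for brevity:

import math

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # table of atan(2^-i)
K = 1.0
for i in range(N):
    K = K / math.sqrt(1.0 + 2.0 ** (-2 * i))        # pre-compensate the CORDIC gain

def cordic_sin_cos(beta):
    # Rotation mode: drive the residual angle beta to zero with shifts and adds.
    # Assumes |beta| <= pi/2; larger angles need the quadrant reduction described above.
    x, y = K, 0.0
    for i in range(N):
        d = 1.0 if beta >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        beta = beta - d * ANGLES[i]
    return y, x   # (sin, cos) of the original angle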

CLPFD for real numbers

CLP(FD) allows the user to set the domain of every would-be integer variable, so it's able to solve equations.
So far so good.
However, you can't do the same in CLP(R) or similar languages (where you can only do simple inferences). And it's not hard to understand why: the fractional part of a number can range over an almost infinite set of values, bounded only by an implementation limit. This means the search space would be far too large for a solver that treats floats the way CLP(FD) treats integers to be of any practical use. So it's the user's task to write a generator in CLP(R) and set constraint guards where needed to get variables instantiated with numbers (if simple inference is not possible).
So my question is: is there any CLP(FD)-like language over the reals? I think it could be implemented using rounding, search, and successive incremental approximation.
There are at least some major CLP(FD) solvers that support real (decision) variables:
Gecode
JaCoP
ECLiPSe CLP (ic library)
Choco (using Ibex)
(The first three also support var float in MiniZinc.)
The answer to your question is yes. There are constraint-based solvers dedicated to floating-point numbers. I do not have a full list of solvers, but I know that Ibex (http://www.ibex-lib.org) is a library allowing the use of floats. You should also have a look at SMT solvers implementing the theory of reals (http://smtlib.cs.uiowa.edu/solvers.shtml).
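As a small illustration of the SMT route mentioned above, here is a sketch using Z3's Python bindings (the z3-solver package); the particular constraint is made up purely for illustration:

from z3 import Real, Solver, sat

x = Real('x')
y = Real('y')
s = Solver()
s.add(x * x + y * y == 1, y > x, x > 0)   # a point on the unit circle, stated symbolically
if s.check() == sat:
    print(s.model())                      # exact (algebraic) real values, no floating-point search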

Mathematica: what is symbolic programming?

I am a big fan of Stephen Wolfram, but he is definitely not shy about tooting his own horn. In many references, he extols Mathematica as a different symbolic programming paradigm. I am not a Mathematica user.
My questions are: what is this symbolic programming? And how does it compare to functional languages (such as Haskell)?
When I hear the phrase "symbolic programming", LISP, Prolog and (yes) Mathematica immediately leap to mind. I would characterize a symbolic programming environment as one in which the expressions used to represent program text also happen to be the primary data structure. As a result, it becomes very easy to build abstractions upon abstractions since data can easily be transformed into code and vice versa.
Mathematica exploits this capability heavily. Even more heavily than LISP and Prolog (IMHO).
As an example of symbolic programming, consider the following sequence of events. I have a CSV file that looks like this:
r,1,2
g,3,4
I read that file in:
Import["somefile.csv"]
--> {{r,1,2},{g,3,4}}
Is the result data or code? It is both. It is the data that results from reading the file, but it also happens to be the expression that will construct that data. As code goes, however, this expression is inert since the result of evaluating it is simply itself.
So now I apply a transformation to the result:
% /. {c_, x_, y_} :> {c, Disk[{x, y}]}
--> {{r,Disk[{1,2}]},{g,Disk[{3,4}]}}
Without dwelling on the details, all that has happened is that Disk[{...}] has been wrapped around the last two numbers from each input line. The result is still data/code, but still inert. Another transformation:
% /. {"r" -> Red, "g" -> Green}
--> {{Red,Disk[{1,2}]},{Green,Disk[{3,4}]}}
Yes, still inert. However, by a remarkable coincidence this last result just happens to be a list of valid directives in Mathematica's built-in domain-specific language for graphics. One last transformation, and things start to happen:
% /. x_ :> Graphics[x]
--> Graphics[{{Red,Disk[{1,2}]},{Green,Disk[{3,4}]}}]
Actually, you would not see that last result. In an epic display of syntactic sugar, Mathematica would show a picture of the red and green circles instead.
But the fun doesn't stop there. Underneath all that syntactic sugar we still have a symbolic expression. I can apply another transformation rule:
% /. Red -> Black
Presto! The red circle became black.
It is this kind of "symbol pushing" that characterizes symbolic programming. A great majority of Mathematica programming is of this nature.
Functional vs. Symbolic
I won't address the differences between symbolic and functional programming in detail, but I will contribute a few remarks.
One could view symbolic programming as an answer to the question: "What would happen if I tried to model everything using only expression transformations?" Functional programming, by contrast, can be seen as an answer to: "What would happen if I tried to model everything using only functions?" Just like symbolic programming, functional programming makes it easy to quickly build up layers of abstractions. The example I gave here could easily be reproduced in, say, Haskell using a functional reactive animation approach. Functional programming is all about function composition, higher-order functions, combinators -- all the nifty things that you can do with functions.
Mathematica is clearly optimized for symbolic programming. It is possible to write code in functional style, but the functional features in Mathematica are really just a thin veneer over transformations (and a leaky abstraction at that, see the footnote below).
Haskell is clearly optimized for functional programming. It is possible to write code in symbolic style, but I would argue that the syntactic representations of programs and data are quite distinct, making the experience suboptimal.
Concluding Remarks
In conclusion, I claim that there is a distinction between functional programming (as epitomized by Haskell) and symbolic programming (as epitomized by Mathematica). I think that if one studies both, then one will learn substantially more than by studying just one -- the ultimate test of distinctness.
Leaky Functional Abstraction in Mathematica?
Yup, leaky. Try this, for example:
f[x_] := g[Function[a, x]];
g[fn_] := Module[{h}, h[a_] := fn[a]; h[0]];
f[999]
Duly reported to, and acknowledged by, WRI. The response: avoid the use of Function[var, body] (Function[body] is okay).
You can think of Mathematica's symbolic programming as a search-and-replace system where you program by specifying search-and-replace rules.
For instance you could specify the following rule
area := Pi*radius^2;
Next time you use area, it'll be replaced with Pi*radius^2. Now, suppose you define a new rule
radius:=5
Now, whenever you use radius, it'll get rewritten into 5. If you evaluate area, it'll get rewritten into Pi*radius^2, which triggers the rewriting rule for radius, and you'll get Pi*5^2 as an intermediate result. This new form will trigger a built-in rewriting rule for the ^ operation, so the expression will get further rewritten into Pi*25. At this point rewriting stops because there are no applicable rules.
You can emulate functional programming by using your replacement rules as functions. For instance, if you want to define a function that adds, you could do
add[a_,b_]:=a+b
Now add[x,y] gets rewritten into x+y. If you want add to only apply for numeric a,b, you could instead do
add[a_?NumericQ, b_?NumericQ] := a + b
Now, add[2,3] gets rewritten into 2+3 using your rule and then into 5 using built-in rule for +, whereas add[test1,test2] remains unchanged.
Here's an example of an interactive replacement rule
a := ChoiceDialog["Pick one", {1, 2, 3, 4}]
a+1
Here, a gets replaced with ChoiceDialog, which then gets replaced with the number the user chose in the dialog that popped up, which makes both quantities numeric and triggers the replacement rule for +. Here, ChoiceDialog has a built-in replacement rule along the lines of "replace ChoiceDialog[some stuff] with the value of the button the user clicked".
Rules can be defined using conditions which themselves need to go through rule-rewriting in order to produce True or False. For instance suppose you invented a new equation solving method, but you think it only works when the final result of your method is positive. You could do the following rule
solve[x + 5 == b_] := (result = b - 5; result /; result > 0)
Here, solve[x+5==20] gets replaced with 15, but solve[x + 5 == -20] remains unchanged because there's no rule that applies. The condition that prevents this rule from applying is /;result>0. The evaluator essentially looks at the potential output of the rule application to decide whether to go ahead with it.
Mathematica's evaluator greedily rewrites every pattern with one of the rules that apply for that symbol. Sometimes you want finer control, and in such a case you can define your own rules and apply them manually like this
myrules={area->Pi radius^2,radius->5}
area//.myrules
This will apply rules defined in myrules until result stops changing. This is pretty similar to the default evaluator, but now you could have several sets of rules and apply them selectively. A more advanced example shows how to make a Prolog-like evaluator that searches over sequences of rule applications.
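To make the "apply rules until the result stops changing" idea concrete outside of Mathematica, here is a toy Python sketch (nested tuples stand in for expressions; this only illustrates the principle, it is not how Mathematica works internally):

def rewrite(expr, rules):
    # Repeatedly replace expr (and its sub-expressions) until nothing changes,
    # mimicking the behaviour of //. (ReplaceRepeated).
    while True:
        new = rules.get(expr, expr)
        if isinstance(new, tuple):
            new = tuple(rewrite(sub, rules) for sub in new)
        if new == expr:
            return expr
        expr = new

myrules = {"radius": 5, "area": ("*", "Pi", ("^", "radius", 2))}
print(rewrite("area", myrules))   # ('*', 'Pi', ('^', 5, 2))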
One drawback of the current Mathematica version comes up when you need to use Mathematica's default evaluator (to make use of Integrate, Solve, etc.) and want to change the default sequence of evaluation. That is possible but complicated, and I like to think that some future implementation of symbolic programming will have a more elegant way of controlling the evaluation sequence.
As others here already mentioned, Mathematica does a lot of term rewriting. Maybe Haskell isn't the best comparison though, but Pure is a nice functional term-rewriting language (that should feel familiar to people with a Haskell background). Maybe reading their Wiki page on term rewriting will clear up a few things for you:
http://code.google.com/p/pure-lang/wiki/Rewriting
Mathematica uses term rewriting heavily. The language provides special syntax for various forms of rewriting, and special support for rules and strategies. The paradigm is not that "new" and of course it's not unique, but they're definitely on the bleeding edge of this "symbolic programming" thing, alongside the other strong players such as Axiom.
As for comparison to Haskell, well, you could do rewriting there with a bit of help from the Scrap Your Boilerplate library, but it's not nearly as easy as in the dynamically typed Mathematica.
Symbolic programming shouldn't be contrasted with functional programming; it should be contrasted with numerical programming. Consider as an example MATLAB vs Mathematica. Suppose I want the characteristic polynomial of a matrix. If I wanted to do that in Mathematica, I could get an identity matrix (I) and the matrix (A) itself into Mathematica, then do this:
Det[A-lambda*I]
And I would get the characteristic polynomial (never mind that there's probably a built-in characteristic polynomial function). If I were in MATLAB, on the other hand, I couldn't do it with base MATLAB, because base MATLAB is only good at calculating finite-precision numbers, not expressions with a symbolic lambda (our symbol) in them. What you'd have to do is buy the Symbolic Math Toolbox add-on, define lambda on its own line of code, and then write the expression out (wherein it would convert your A matrix to a matrix of rational numbers rather than finite-precision decimals), and while the performance difference would probably be unnoticeable for a small case like this, it would probably be much slower than Mathematica in relative terms.
So that's the difference: symbolic languages are interested in doing calculations with perfect accuracy (often using rational numbers rather than floating-point ones), whereas numerical programming languages are very good at the vast majority of calculations you would need to do, tend to be faster at the numerical operations they're meant for (MATLAB is nearly unmatched in this regard among higher-level languages - excluding C++, etc.), and are piss-poor at symbolic operations.
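The same kind of symbolic computation can also be sketched in Python with SymPy, if that's more familiar; the 2x2 matrix below is just a made-up example:

import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2], [3, 4]])           # an arbitrary example matrix
char_poly = (A - lam * sp.eye(2)).det()   # Det[A - lambda*I], kept fully symbolic
print(sp.expand(char_poly))               # lambda**2 - 5*lambda - 2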
