What programming languages support arbitrary precision arithmetic? - programming-languages

What programming languages support arbitrary precision arithmetic and could you give a short example of how to print an arbitrary number of digits?

Some languages have this support built in. For example, take a look at java.math.BigDecimal in Java, or decimal.Decimal in Python.
Other languages frequently have a library available to provide this feature. For example, in C you could use GMP or other options.
The "Arbitrary-precision software" section of this article gives a good rundown of your options.

Mathematica.
N[Pi, 100]
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
Not only does Mathematica have arbitrary precision, but by default it has infinite precision. It keeps things like 1/3 as rationals, and it even maintains expressions involving things like Sqrt[2] symbolically until you ask for a numeric approximation, which you can have to any number of decimal places.

In Common Lisp,
(format t "~D~%" (expt 7 77))
"~D~%" in printf format would be "%d\n". Arbitrary precision arithmetic is built into Common Lisp.

Smalltalk has supported arbitrary precision Integers and Fractions from the beginning.
Note that the GNU Smalltalk implementation uses GMP under the hood.
I'm also developing ArbitraryPrecisionFloat for various dialects (Squeak/Pharo, VisualWorks and Dolphin), see http://www.squeaksource.com/ArbitraryPrecisionFl.html

Python has this ability. There is an excellent example here.
From the article:
from math import log as _flog
from decimal import getcontext, Decimal

def log(x):
    if x < 0:
        return Decimal("NaN")
    if x == 0:
        return Decimal("-inf")
    getcontext().prec += 3
    eps = Decimal("10")**(-getcontext().prec+2)
    # A good initial estimate is needed
    r = Decimal(repr(_flog(float(x))))
    # Newton iteration; exp() must be a Decimal-aware exponential (see below)
    while 1:
        r2 = r - 1 + x/exp(r)
        if abs(r2-r) < eps:
            break
        else:
            r = r2
    getcontext().prec -= 3
    return +r
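Note that the recipe above calls a Decimal-aware exp(); the decimal module's documentation ships a companion recipe along these lines (reproduced here as a sketch):

def exp(x):
    """Return e**x for a Decimal x, at the current context precision."""
    getcontext().prec += 2            # extra guard digits while summing the series
    i, lasts, s, fact, num = 0, 0, 1, 1, 1
    while s != lasts:                 # stop once the partial sums converge
        lasts = s
        i += 1
        fact *= i
        num *= x
        s += num / fact
    getcontext().prec -= 2
    return +s                         # unary plus rounds back to the restored precision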
Also, the Python decimal quick-start tutorial discusses arbitrary precision: http://docs.python.org/lib/decimal-tutorial.html
and describes getcontext:
the getcontext() function accesses the current context and allows the settings to be changed.
Edit: Added clarification on getcontext.

Many people recommended Python's decimal module, but I would recommend using mpmath over decimal for any serious numeric uses.
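For example, a quick sketch of what mpmath usage looks like (my own illustration, not from the original answer):

from mpmath import mp, sqrt

mp.dps = 50      # 50 decimal places of working precision
print(mp.pi)     # pi printed to 50 places
print(sqrt(2))   # sqrt(2) at the same precision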

COBOL
77 VALUE PIC S9(4)V9(4).
a signed variable with 4 integer digits and 4 decimal digits.
PL/1
DCL VALUE DEC FIXED (4,4);
:-) I can't remember the other old stuff...
Jokes aside, as my examples show, I think you shouldn't choose a programming language based on a single feature. Virtually all decent and recent languages support fixed precision in some dedicated classes.

Scheme (a variant of Lisp) has a capability called 'bignum'. There are many good Scheme implementations available, both full language environments and embeddable scripting options.
A few I can vouch for:
MIT Scheme (also referred to as GNU Scheme)
PLT Scheme
Chez Scheme
Guile (also a GNU project)
Scheme 48

Ruby integers are by default not strictly tied to the classical CPU-related limits: integer values are automatically and transparently switched to a "bignum" representation when they exceed the machine-word range, and the standard library adds Rational and BigDecimal for exact fractions and arbitrary-precision decimals.
One probably wants to use some reasonably optimized, "complete" and multifaceted math library that builds on those bignums. This is where Mathematica-like software truly shines with its capabilities.
As of 2011, Mathematica is extremely expensive and heavily restricted from a hacking and redistribution point of view, especially if one wants to ship the math software as a component of a small, low-priced web application or an open source project. If one only needs to do raw number crunching, with no visualizations required, then there exists a very viable alternative to Mathematica and Maple: the REDUCE Computer Algebra System, which is Lisp based, open source, mature (for decades) and under active development (in 2011). Like Mathematica, REDUCE uses symbolic calculation.
In Mathematica's favor, as of 2011 it seems to me that Mathematica is the best at interactive visualizations, but I think that from a programming point of view there are more convenient alternatives, even if Mathematica were an open source project. Mathematica also seems to me a bit slow and not suitable for working with huge data sets; its niche seems to be theoretical math, not real-life number crunching. On the other hand, Mathematica's publisher, Wolfram Research, hosts and maintains one of the highest quality, if not THE highest quality, free-to-use math reference sites on planet Earth: http://mathworld.wolfram.com/
The online documentation system that comes bundled with Mathematica is also truly good.
When talking about speed, it is worth mentioning that REDUCE is said to run even on a Linux router. REDUCE itself is written in Lisp, but it comes with two of its very own Lisp implementations: one implemented in Java and the other in C. Both of them work decently, at least from a math point of view. REDUCE has two modes: the traditional "math mode" and a "programmers mode" that allows full access to all of the internals through the language REDUCE itself is written in: Lisp.
So, my opinion is that if one looks at the amount of work it takes to write math routines, not to mention all of the symbolic calculations that are already mature in REDUCE, then one can save an enormous amount of time (decades, literally) by doing most of the math part in REDUCE, especially given that it has been tested and debugged by professional mathematicians over a long period of time, was used for symbolic calculations on old-era supercomputers for real professional tasks, and works wonderfully, truly fast, on modern low-end computers. Neither has it crashed on me, unlike at least one commercial package that I don't want to name here.
http://www.reduce-algebra.com/
To illustrate where symbolic calculation is essential in practice, take the example of solving a system of linear equations by matrix inversion. To invert a matrix, one needs to find determinants. The rounding that takes place with the directly CPU-supported floating point types can turn a matrix that theoretically has an inverse into a matrix that numerically does not have one. This in turn creates a situation where most of the time the software works just fine, but if the data is a bit "unfortunate" then the application crashes, despite the fact that algorithmically there's nothing wrong with the software other than the rounding of floating point numbers.
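As a small, hand-picked illustration of that rounding effect (my own example, in Python): the 2x2 matrix below has determinant exactly 1, but because 2**53 + 1 is not representable as a 64-bit float, the float computation sees two identical rows and reports a determinant of 0.

from fractions import Fraction

# matrix [[a, b], [c, d]]; the exact determinant is (2**53 + 1) - 2**53 = 1
a, b, c, d = 2**53 + 1, 1, 2**53, 1

exact   = Fraction(a) * d - Fraction(b) * c   # exact rational arithmetic
rounded = float(a) * d - float(b) * c         # 64-bit float arithmetic

print(exact)    # 1   -> the matrix is invertible
print(rounded)  # 0.0 -> after rounding it looks singular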
Absolute-precision rational numbers do have a serious limitation: the more computation is performed with them, the more memory they consume. As of 2011 I don't know of any solution to that problem other than being careful, keeping track of the number of operations that have been performed on the numbers, and then rounding them to save memory; but the rounding has to be done at a very precise stage of the calculation to avoid the aforementioned problems. If possible, the rounding should be done at the very end of the calculation, as the very last operation.
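To see that memory growth concretely, here is a small sketch using Python's exact Fraction type (my own example, an arbitrary recurrence): the number of digits in the denominator roughly doubles at every step.

from fractions import Fraction

x = Fraction(1, 3)
for step in range(6):
    x = x * x + Fraction(1, 7)   # an arbitrary exact recurrence
    print(step, len(str(x.denominator)), "digits in the denominator")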

In PHP you have BCMath. You do not need to load any DLL or compile any module.
It supports numbers of any size and precision, represented as strings:
<?php
$a = '1.234';
$b = '5';
echo bcadd($a, $b); // 6
echo bcadd($a, $b, 4); // 6.2340
?>

Apparently Tcl also has arbitrary-precision integers, from version 8.5 onwards, courtesy of LibTomMath:
http://wiki.tcl.tk/5193
http://www.tcl.tk/cgi-bin/tct/tip/237.html
http://math.libtomcrypt.com/

There are several Javascript libraries that handle arbitrary-precision arithmetic.
For example, using my big.js library:
Big.DP = 20; // Decimal Places
var pi = Big(355).div(113)
console.log( pi.toString() ); // '3.14159292035398230088'

In R you can use the Rmpfr package:
library(Rmpfr)
exp(mpfr(1, 120))
## 1 'mpfr' number of precision 120 bits
## [1] 2.7182818284590452353602874713526624979
You can find the vignette here: Arbitrarily Accurate Computation with R: The Rmpfr Package.

Java can natively do bignum operations with BigDecimal (and BigInteger). GMP is the de facto standard bignum library for C/C++.

If you want to work in the .NET world you can still use the java.math.BigDecimal class. Just add a reference to vjslib (in the framework) and then you can use the Java classes.
The great thing is, they can be used from any .NET language. For example, in C#:
using java.math;
using System;

namespace MyNamespace
{
    class Program
    {
        static void Main(string[] args)
        {
            BigDecimal bd = new BigDecimal("12345678901234567890.1234567890123456789");
            Console.WriteLine(bd.ToString());
        }
    }
}

The (free) BASIC implementation X11-Basic (http://x11-basic.sourceforge.net/) has arbitrary precision for integers (and some useful commands as well, e.g. nextprime(abcd...pqrs)).

IBM's interpreted scripting language Rexx provides a custom precision setting via its NUMERIC instruction: https://www.ibm.com/docs/en/zos/2.1.0?topic=instructions-numeric.
The language runs on mainframes and PC operating systems and has very powerful parsing and variable handling, as well as extension packages. Object Rexx is the most recent implementation. Links from https://en.wikipedia.org/wiki/Rexx

Haskell has excellent support for arbitrary-precision arithmetic built in, and using it is the default behavior. At the REPL, with no imports or setup required:
Prelude> 2 ^ 2 ^ 12
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
(try this yourself at https://tryhaskell.org/)
If you're writing code stored in a file and you want to print a number, you have to convert it to a string first. The show function does that.
module Test where

main = do
  let x = 2 ^ 2 ^ 12
  let xStr = show x
  putStrLn xStr
(try this yourself at code.world: https://www.code.world/haskell#Pb_gPCQuqY7r77v1IHH_vWg)
What's more, Haskell's Num abstraction lets you defer deciding what type to use as long as possible.
-- Define a function to make big numbers. The (inferred) type is generic.
Prelude> superbig n = 2 ^ 2 ^ n
-- We can call this function with different concrete types and get different results.
Prelude> superbig 5 :: Int
4294967296
Prelude> superbig 5 :: Float
4.2949673e9
-- The `Int` type is not arbitrary precision, and we might overflow.
Prelude> superbig 6 :: Int
0
-- `Double` can hold bigger numbers.
Prelude> superbig 6 :: Double
1.8446744073709552e19
Prelude> superbig 9 :: Double
1.3407807929942597e154
-- But it is also not arbitrary precision, and can still overflow.
Prelude> superbig 10 :: Double
Infinity
-- The Integer type is arbitrary-precision though, and can go as big as we have memory for and patience to wait for the result.
Prelude> superbig 12 :: Integer
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
-- If we don't specify a type, Haskell will infer one with arbitrary precision.
Prelude> superbig 12
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336

Related

CLPFD for real numbers

CLP(FD) allows the user to set the domain of every would-be-integer variable, so it's able to solve equations.
So far so good.
However, you can't do the same in CLP(R) or similar languages (where you can only do simple inferences). And it's not hard to understand why: the fractional part of a number can range over an almost infinite region, bounded only by an implementation limit. This means the search space would be too large for a solver that treats floats the way it treats integers to be of any practical use. So it's the user's task to write a generator in CLP(R) and set constraint guards where needed to get variables instantiated with numbers (if simple inference is not possible).
So my question here: is there any CLP(FD)-like language over the reals? I think it could be implemented by means of number rounding, searching and successive incremental approximation.
There are at least some major CLP(FD) solvers that support real (decision) variables:
Gecode
JaCoP
ECLiPSe CLP (ic library)
Choco (using Ibex)
(The first three also support var float in MiniZinc.)
The answer to your question is yes. There are constraint-based solvers dedicated to floating-point numbers. I do not have a full list of solvers, but I know that Ibex (http://www.ibex-lib.org) is a library allowing the use of floats. You should also have a look at SMT solvers implementing the theory of reals (http://smtlib.cs.uiowa.edu/solvers.shtml).

High precision floating point numbers in Haskell?

I know Haskell has native data types which allow you to have really big integers so things like
>> let x = 131242358045284502395482305
>> x
131242358045284502395482305
work as expected. I was wondering if there was a similar "large precision float" native structure I could be using, so things like
>> let x = 5.0000000000000000000000001
>> x
5.0000000000000000000000001
could be possible. If I enter this in Haskell, it truncates down to 5 if I go beyond 15 decimal places (double precision).
Depending on exactly what you are looking for:
Float and Double - pretty much what you know and "love" from Floats and Doubles in all other languages.
Rational which is a Ratio of Integers
FixedPoint - This package provides arbitrary sized fixed point values. For example, if you want a number that is represented by 64 integral bits and 64 fractional bits you can use FixedPoint6464. If you want a number that is 1024 integral bits and 8 fractional bits then use $(mkFixedPoint 1024 8) to generate type FixedPoint1024_8.
EDIT: And yes, I just learned about the numbers package mentioned above - very cool.
Haskell does not have high-precision floating-point numbers natively.
For a package/module/library for this purpose, I'd refer to this answer to another post. There's also an example which shows how to use this package, called numbers.
If you need high-precision, fast floating point calculations, you may need to use FFI and long doubles, as the native Haskell type is not implemented yet (see https://ghc.haskell.org/trac/ghc/ticket/3353).
I believe the standard package for arbitrary precision floating point numbers is now https://hackage.haskell.org/package/scientific

A language in which everything compiles

I'm trying to do some research for a new project, and I need to create objects dynamically from random data.
For this to work, I need a language / compiler that doesn't have problems with weird uncompilable code lying around.
Basically, I need the random code to compile (or be interpreted) as much as possible - meaning that the uncompilable parts will be ignored, and only the compilable parts will create the objects (which could be run).
Object Oriented-ness is not a must, but is a very strong advantage.
I thought of ASM, but it's very messy, and I'd probably need more readable code
Thanks!
It sounds like you're doing something very much like genetic programming; even if you aren't, GP has to solve some of the same problems: using randomness to generate valid programs. The approach typically used is to work with a syntax tree: rather than storing the text x + y * 3 - 2, you store a tree whose root is the subtraction, with the sum x + y * 3 as its left child and the literal 2 as its right child.
Then, instead of randomly changing the syntax, one can randomly change nodes in the tree instead. And if x should randomly change to, say, +, you can statically know that this means you need to insert two children (or not, depending on how you define +).
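A rough sketch of that idea in Python (my own illustration; the names and the mutation rule are made up): the expression is held as nested tuples, evaluated recursively, and mutated by swapping out randomly chosen subtrees rather than editing text.

import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

# ('-', ('+', 'x', ('*', 'y', 3)), 2)  is the tree for  x + y * 3 - 2
tree = ('-', ('+', 'x', ('*', 'y', 3)), 2)

def evaluate(node, env):
    if isinstance(node, tuple):   # an operator node: (op, left, right)
        op, left, right = node
        return OPS[op](evaluate(left, env), evaluate(right, env))
    if isinstance(node, str):     # a variable name
        return env[node]
    return node                   # a literal number

def mutate(node):
    """Randomly replace one subtree with a fresh random leaf."""
    if not isinstance(node, tuple) or random.random() < 0.3:
        return random.choice(['x', 'y', random.randint(0, 9)])
    op, left, right = node
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

print(evaluate(tree, {'x': 10, 'y': 4}))   # 10 + 4*3 - 2 = 20
print(mutate(tree))                        # a randomly tweaked tree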
A good choice for a language to work with for this would be any Lisp dialect. In a Lisp, the above program would be written (- (+ x (* y 3)) 2), which is just a linearization of the syntax tree using parentheses to show depth. And in fact, Lisps expose this feature: you can just as easily work with the object '(- (+ x (* y 3)) 2) (note the leading quote). This is a three-element list, whose first element is -, second element is another list, and third element is 2. And, though you might or might not want it for your particular application, there's an eval function, such that (eval '(- (+ x (* y 3)) 2)) will take in the given list, treat it as a Lisp syntax tree/program, and evaluate it. This is what makes Lisps so attractive for doing this sort of work; Lisp syntax is basically a reification of the syntax-tree, and if you operate at the syntax-tree level, you can work on code as though it was a value. Lisp won't help you read /dev/random as a program directly, but with a little interpretation layered on top, you should be able to get what you want.
I should also mention, though I don't know anything about it (not that I know much about ordinary genetic programming) the existence of linear genetic programming. This is sort of like the assembly model that you mentioned—a linear stream of very, very simple instructions. The advantage here would seem to be that if you are working with /dev/random or something like it, the amount of interpretation needed is very small; the disadvantage would be, as you mentioned, the low-level nature of the code.
I'm not sure if this is what you're looking for, but any programming language can be made to function this way. For any programming language P, define the language P_always as follows:
If p is a valid program in P, then p is a valid program in P_always whose meaning is the same as its meaning in P.
If p is not a valid program in P, then p is a valid program in P_always whose meaning is the same as a program that immediately terminates.
For example, I could make the language C++_always so that this program:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello, world!" << endl;
}
would compile and print "Hello, world!", while this program:
Hahaha! This isn't legal C++ code!
Would be a legal program that just does absolutely nothing.
To solve your original problem, just take any OOP language like Java, Smalltalk, etc. and construct the appropriate Java_always, Smalltalk_always, etc. language from it. Again, I'm not sure if this is at all what you're looking for, but it could be done very easily.
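As a toy rendering of that construction (my own sketch, using Python in place of P): wrap compilation in a guard so that anything that fails to parse simply behaves like an empty program.

def run_always(source):
    # "Python_always": invalid programs are treated as programs that do nothing
    try:
        code = compile(source, '<random>', 'exec')
    except SyntaxError:
        return
    exec(code, {})

run_always('print("Hello, world!")')              # runs normally
run_always('Hahaha! This is not legal Python!')   # silently does nothing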
Alternatively, consider finding a grammar for any OOP language and then using that grammar to produce random syntactically valid programs. You could then filter those programs down by using the P_always language for that language to eliminate programs that are syntactically but not semantically valid.
Divide the ASCII byte values into 9 classes (reduction modulo 9 would help). Then assign them to Brainfuck codewords (see http://en.wikipedia.org/wiki/Brainfuck). Then interpret the result as Brainfuck.
There you go, any sequence of ASCII characters is a program. Not that it's going to do anything sensible... This approach has a much better chance, compared to templatetypedef's answer, of getting a nontrivial program from a random byte sequence.
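A sketch of that mapping in Python (my own; I treat the ninth class as "ignore", since Brainfuck has only eight commands, and unbalanced brackets would still need filtering before execution):

from os import urandom

COMMANDS = '><+-.,[]'              # Brainfuck's eight commands

def bytes_to_brainfuck(data):
    out = []
    for b in data:
        k = b % 9                  # nine classes via modulo 9
        if k < len(COMMANDS):      # the ninth class is simply skipped
            out.append(COMMANDS[k])
    return ''.join(out)

print(bytes_to_brainfuck(urandom(64)))   # random bytes as Brainfuck source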
Text Editors
You could try feeding random character strings to an editor like Emacs or VI. Many (most?) characters will perform an editing action but some will do nothing (other than beep, perhaps). You would have to ensure that the random code mutator never generates the character sequence that exits the editor. However, this experience would be much like programming a Turing machine -- the code is not too readable.
Mathematica
In Mathematica, undefined symbols and other expressions evaluate to themselves, without error. So, that language might be a viable choice if you can arrange for the random code mutator to always generate well-formed expressions. This would be readily achievable since the basic Mathematica syntax is trivial, making it easy to operate on syntactic units rather than at the character level. It would be even easier if the mutator were written in Mathematica itself since expression-munging is Mathematica's forte. You could define a mini-language of valid operations within a Mathematica package that does not import the system-defined symbols. This would allow you to generate well-formed expressions to your heart's content without fear of generating a dangerous expression, like DeleteFile[FileNames["*.*", "/", Infinity]].
I believe Common Lisp should suit your needs. I always have some code in my SLIME/Emacs session that wouldn't compile. You can always tweak things, redefine functions in run-time. It is actually very good for prototyping.
A few years ago it took me quite a while to learn. But nowadays we have quicklisp and everything is so much easier.
Here I describe my development environment:
Install lisp on my linux machine
PS: I want to give an example, where Common Lisp was useful for me:
Up to maybe 2004 I used to write small programs in C (the keep it simple Unix way).
The last 3 years I had to get lots of different hardware running. Motorized stages, scientific cameras, IO cards.
The cameras turned out to be quite annoying. Usually you have to cool them down to -50 degrees Celsius or so, and (in some SDKs) they don't like it when you close them. But this is exactly how my C development cycle worked: write (30s), compile (1s), run (0.1s), repeat.
Eventually I decided to just use Common Lisp. Often it is straightforward to define the foreign function interfaces to talk to the SDKs, and I can do this without ever leaving the running Lisp image. I start the editor in the morning, define the open-device function to talk to the device, and after 3 hours I have enough of the functions implemented to set gain, temperature and region of interest and to obtain video.
Then I can often put the SDK manual away and just use the camera.
I use the same interactive programming approach when I have to parse some webpage or some weird XML.

Strategies for parallel implementation of Lua numbers and a 64bit integer

Lua by default uses a double precision floating point (double) type as its only numeric type. That's nice and useful. However, I'm working on software that expects to see 64-bit integers, and I can't get around using actual 64-bit integers one way or another.
The place where the integer type becomes relevant is for file sizes. Although I don't truly expect to see file sizes beyond what Lua can represent with full "integer" precision using a double, I want to be prepared.
What strategies can you recommend when using a 64bit integer type in parallel with the default numeric type of Lua? I don't really want to throw the default implementation overboard (and I'm not worried of its performance compared to integer arithmetics), but I need some way of representing 64bit integers up to their full precision without too much of a performance penalty.
My problem is that I'm unsure where to modify the behavior. Should I modify the syntax and extend the parser (numbers with an appended LL or ULL come to mind, which to my knowledge don't exist in default Lua), or should I instead write my own C module and define a userdata type that represents the 64-bit integer, along with library functions able to manipulate the values? ...
Note: yes, I am embedding Lua, so I am free to extend it whichever way I please.
As part of LuaJIT's port to ARM CPUs (which often have poor floating-point), LuaJIT implemented a "Dual-number VM", which allows it to switch between integers and floats dynamically as needed. You could use this yourself, just switch between 64-bit integers and doubles instead of 32-bit integers and floats.
It's currently live in builds, so you may want to consider using LuaJIT as your Lua "interpreter." Or you could use it as a way to learn how to do this sort of thing.
However, I do agree with Marcelo; the 53-bit mantissa should be plenty. You shouldn't really need this for a good 10 years or so.
I'd suggest storing your data outside of Lua and using some type of reference to retrieve it when calling your other libraries. You can then push various results onto the Lua stack for the user to see; you can even retrieve the value as a string to be precise, but I would avoid modifying them in Lua and relying on the Lua values when calling your external library.
If you're not going to need floating-point precision at any point in the program, you can just redefine LUA_NUMBER to __int64 (or whatever 64-bit int may be in your environment) in luaconf.h.
Otherwise, you can just bring in another library to handle your integers- for infinite precision, you can use a bignum library such as lhf's lbn.

Do 64-bit floating point numbers behave identically on all modern PCs?

I would like to know whether I can assume that the same operations on the same 64-bit floating point numbers give exactly the same results on any modern PC and in most common programming languages (C++, Java, C#, etc.). We can assume that we are operating on numbers and the result is also a number (no NaNs, INFs and so on).
I know there are two very similar standards of computation using floating point numbers (IEEE 854-1987 and IEEE 754-2008). However, I don't know how it is in practice.
Modern processors that implement 64-bit floating-point typically implement something that is close to the IEEE 754-1985 standard, recently superseded by the 754-2008 standard.
The 754 standard specifies what result you should get from certain basic operations, notably addition, subtraction, multiplication, division, square root, and negation. In most cases, the numeric result is specified precisely: The result must be the representable number that is closest to the exact mathematical result in the direction specified by the rounding mode (to nearest, toward infinity, toward zero, or toward negative infinity). In "to nearest" mode, the standard also specifies how ties are broken.
Because of this, operations that do not involve exception conditions such as overflow will get the same results on different processors that conform to the standard.
However, there are several issues that interfere with getting identical results on different processors. One of them is that the compiler is often free to implement sequences of floating-point operations in a variety of ways. For example, if you write "a = b*c + d" in C, where all variables are declared double, the compiler is free to compute "b*c" in either double-precision arithmetic or something with more range or precision. If, for example, the processor has registers capable of holding extended-precision floating-point numbers and doing arithmetic with extended-precision does not take any more CPU time than doing arithmetic with double-precision, a compiler is likely to generate code using extended-precision. On such a processor, you might not get the same results as you would on another processor. Even if the compiler does this regularly, it might not in some circumstances because the registers are full during a complicated sequence, so it stores the intermediate results in memory temporarily. When it does that, it might write just the 64-bit double rather than the extended-precision number. So a routine containing floating-point arithmetic might give different results just because it was compiled with different code, perhaps inlined in one place, and the compiler needed registers for something else.
Some processors have instructions to compute a multiply and an add in one instruction, so "b*c + d" might be computed with no intermediate rounding and get a more accurate result than on a processor that first computes b*c and then adds d.
Your compiler might have switches to control behavior like this.
There are some places where the 754-1985 standard does not require a unique result. For example, when determining whether underflow has occurred (a result is too small to be represented accurately), the standard allows an implementation to make the determination either before or after it rounds the significand (the fraction bits) to the target precision. So some implementations will tell you underflow has occurred when other implementations will not.
A common feature in processors is to have an "almost IEEE 754" mode that eliminates the difficulty of dealing with underflow by substituting zero instead of returning the very small number that the standard requires. Naturally, you will get different numbers when executing in such a mode than when executing in the more compliant mode. The non-compliant mode may be the default set by your compiler and/or operating system, for reasons of performance.
Note that an IEEE 754 implementation is typically not provided just by hardware but by a combination of hardware and software. The processor may do the bulk of the work but rely on the software to handle certain exceptions, set certain modes, and so on.
When you move beyond the basic arithmetic operations to things like sine and cosine, you are very dependent on the library you use. Transcendental functions are generally calculated with carefully engineered approximations. The implementations are developed independently by various engineers and get different results from each other. On one system, the sin function may give results accurate within an ULP (unit of least precision) for small arguments (less than pi or so) but larger errors for large arguments. On another system, the sin function might give results accurate within several ULP for all arguments. No current math library is known to produce correctly rounded results for all inputs. There is a project, crlibm (Correctly Rounded Libm), that has done some good work toward this goal, and they have developed implementations for significant parts of the math library that are correctly rounded and have good performance, but not all of the math library yet.
In summary, if you have a manageable set of calculations, understand your compiler implementation, and are very careful, you can rely on identical results on different processors. Otherwise, getting completely identical results is not something you can rely on.
If you mean getting exactly the same result, then the answer is no.
You might even get different results for debug (non-optimized) builds vs. release builds (optimized) on the same machine in some cases, so don't even assume that the results might be always identical on different machines.
(This can happen e.g. on a computer with an Intel processor, if the optimizer keeps a variable holding an intermediate result in a register that is stored in memory in the unoptimized build. Since Intel FPU registers are 80 bits and double variables are 64 bits, the intermediate result will be stored with greater precision in the optimized build, causing different values in later results.)
In practice, however, you may often get the same results, but you shouldn't rely on it.
Modern FPUs all implement IEEE754 floats in single and double formats, and some in extended format. A certain set of operations are supported (pretty much anything in math.h), with some special instructions floating around out there.
Assuming you are talking about applying multiple operations, I do not think you will get exactly the same numbers. CPU architecture, compiler used and optimization settings will change the results of your computations.
Even if you mean the exact order of operations (at the assembly level), I think you will still get variations. For example, Intel chips use extended precision (80 bits) internally, which may not be the case for other CPUs. (I do not think extended precision is mandated.)
The same C# program can produce different numerical results on the same PC when compiled once in debug mode without optimization and a second time in release mode with optimization enabled. That's my personal experience. We did not account for this when we set up an automatic regression test suite for one of our programs for the first time, and were completely surprised that a lot of our tests failed without any apparent reason.
For C# on x86, 80-bit FP registers are used.
The C# standard says that the processor must operate at the same precision as, or greater than, the type itself (i.e. 64-bit in the case of a 'double'). Promotions are allowed, except for storage. That means that locals and parameters could be at greater than 64-bit precision.
In other words, assigning a member variable to a local variable could (and in fact will under certain circumstances) be enough to give an inequality.
See also: Float/double precision in debug/release modes
For the 64-bit data type, I only know of IEEE 754 "double precision" / "binary64" being used (the 1985 and 2008 revisions don't differ much here for common cases).
Note: the radix-independent types defined in IEEE 854-1987 were superseded by IEEE 754-2008 anyway.
