I have been working on a multi-GPU project where I have had problems with non-deterministic results. I was surprised when it turned out that the non-determinism was caused by a reduction clause executed on the CPU.
In the book Using OpenMP - The Next Step it is written that
"[...] the order in which threads combine their value to construct the
value for the shared result is non-deterministic."
Maybe I just don't understand how the reduction clauses are implemented. Does it mean that if I use schedule(monotonic:static) in combination with a reduction clause each thread will execute its chunk of the iterations in a deterministic order, but that the order in which the partial results are combined at the end of the parallel region is non-deterministic?
It is known that the end result is non-deterministic; detailed information can be found in What Every Computer Scientist Should Know About Floating-Point Arithmetic. For instance:
Another grey area concerns the interpretation of parentheses. Due to roundoff errors, the associative laws of algebra do not necessarily hold for floating-point numbers. For example, the expression (x+y)+z has a totally different answer than x+(y+z) when x = 1e30, y = -1e30 and z = 1 (it is 1 in the former case, 0 in the latter).
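To make that concrete, here is a tiny C++ sketch of exactly the example from the quote; with 64-bit IEEE-754 doubles the two expressions print 1 and 0 respectively:

#include <cstdio>

int main() {
    double x = 1e30, y = -1e30, z = 1.0;
    std::printf("(x+y)+z = %g\n", (x + y) + z);   // x+y is exactly 0, so this prints 1
    std::printf("x+(y+z) = %g\n", x + (y + z));   // y+z rounds back to -1e30, so this prints 0
}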
Now regarding the order in which the threads perform the reduction, as far as I know the OpenMP standard does not enforce any order, nor does it require the order to be deterministic. Hence, this is an implementation detail left to the compiler implementing the OpenMP standard, and consequently it is something your code should not rely upon.
Programming language semantics usually declares that a+b+c+d is evaluated as ((a+b)+c)+d. This is not parallel, so an OpenMP reduction is probably evaluated as (a+b)+(c+d). And so on for larger numbers of summands.
So you immediately have that, because of the non-associativity of floating point arithmetic, the result may be subtly different from the sequential value.
But more importantly, the exact value will depend on precisely how the combination is done. Is it a+(b+c) (on 2 threads) or (a+b)+c? So the result is at least "indeterministic" in the sense that you cannot reconstruct how it was formed. It could probably even be done in two different ways if you run the same reduction twice. That's what I would call "non-deterministic", but look in the standard for the exact definition of the term.
By the way, if you want to get some idea of how OpenMP actually does it, write your own reduction operator, and let each invocation print out what it computes. Here is a decent illustration: https://victoreijkhout.github.io/pcse/omp-reduction.html#Initialvalueforreductions
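For illustration, here is a minimal sketch of such a logging reduction in C++ with OpenMP (my own example, not taken from that page; the identifier logsum and the helper log_add are made up for the illustration):

#include <cstdio>
#include <omp.h>

// Helper called by the combiner so that every combination step is visible.
double log_add(double out, double in) {
    std::printf("thread %d combines %g + %g\n", omp_get_thread_num(), out, in);
    return out + in;
}

#pragma omp declare reduction(logsum : double : omp_out = log_add(omp_out, omp_in)) initializer(omp_priv = 0.0)

int main() {
    double s = 0.0;
    #pragma omp parallel for schedule(static) reduction(logsum : s)
    for (int i = 1; i <= 16; ++i)
        s += i * 0.1;                       // each thread accumulates a private partial sum
    std::printf("result = %.17g\n", s);     // the combination order is up to the runtime
}

Running it with different values of OMP_NUM_THREADS shows how the partial sums and the order in which they are combined change from one configuration to the next.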
By the way, the standard actually doesn't use the word "non-deterministic" for this case. The following passage explains the issue:
Furthermore, using different numbers of threads may result in
different numeric results because of changes in the association of
numeric operations. For example, a serial addition reduction may have
a different pattern of addition associations than a parallel
reduction.
Related
Suppose I have a function that runs calculations, an example being something like a dot product - I pass in arrays A, B of vectors and a float array C, and the function assigns:
C[i] = dot(A[i], B[i]);
If I create and start two threads that will run this function, and pass in the same three arrays to both threads, under what circumstances is this type of action (perhaps using a different non-random mathematical operation etc.) not guaranteed to give the same result (running the same application without any recompilation, and on the same machine)? I'm only interested in the context of a consumer PC.
I know that float operations are in general deterministic, but I do wonder whether perhaps something weird could happen and maybe the calculations will use an intermediate 80-bit register on one thread but not on the other.
I would assume it's pretty much guaranteed that the same binary code should run in both threads (is there some way this could not happen? The function being compiled multiple times for some reason, the compiler somehow figuring out it will run in multiple threads and compiling it again for the second thread?).
But I'm a bit more worried that CPU cores might not have the same instruction sets, even on consumer-level PCs.
Side question - what about GPUs in a similar scenario?
//
I'm assuming x86_64, Windows, C++, and dot is a.x * b.x + a.y * b.y. I can't give more info than that - it's using Unity IL2CPP, and I don't know how it compiles or with what options.
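For concreteness, here is a minimal sketch (plain C++ with std::thread, not the actual IL2CPP output; the types and data are just placeholders) of the kind of computation and bitwise comparison I have in mind:

#include <cstdio>
#include <cstring>
#include <functional>
#include <thread>
#include <vector>

struct Vec2 { float x, y; };

float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

void run(const std::vector<Vec2>& A, const std::vector<Vec2>& B, std::vector<float>& C) {
    for (std::size_t i = 0; i < A.size(); ++i)
        C[i] = dot(A[i], B[i]);
}

int main() {
    std::vector<Vec2> A(1000, Vec2{0.1f, 0.2f}), B(1000, Vec2{0.3f, 0.4f});
    std::vector<float> C1(1000), C2(1000);

    std::thread t1(run, std::cref(A), std::cref(B), std::ref(C1));
    std::thread t2(run, std::cref(A), std::cref(B), std::ref(C2));
    t1.join();
    t2.join();

    // Bitwise comparison: do both threads produce identical results?
    std::printf("identical: %d\n", std::memcmp(C1.data(), C2.data(), C1.size() * sizeof(float)) == 0);
}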
Motivation for the question: I'm writing a computational geometry procedure that modifies a mesh - I'll call this the "geometric mesh". The issue is that it could happen that the "rendering mesh" has multiple vertices for certain geometric positions - it's needed for flat shading for example - you have multiple vertices with different normals. However, the actual computational geometry procedure only uses purely geometric data of the positions in space.
So I see two options:
Create a map from the rendering mesh to the geometric mesh (example - duplicate vertices being mapped to one unique vertex), run the procedure on the geometric mesh, then somehow modify the rendering mesh based on the result.
Work with the rendering mesh directly. Slightly more inefficient as the procedure does calculations for all vertices, but much easier from a code perspective. But most of all I'm a bit worried that I could get two different values for two vertices that actually have the same position and that shouldn't happen. Only the position is used, and the position would be the same for both such vertices.
Floating-point (FP) operations are not associative (though addition and multiplication are commutative). As a result, (x+y)+z can give different results than x+(y+z). For example, (1e-13 + (1 - 1e-13)) == ((1e-13 + 1) - 1e-13) is false with 64-bit IEEE-754 floats. The C++ standard is not very restrictive about floating-point numbers. However, the widely-used IEEE-754 standard is: it specifies the precision of 32-bit and 64-bit number operations, including rounding modes. x86-64 processors are IEEE-754 compliant, and mainstream compilers (e.g. GCC, Clang and MSVC) are also IEEE-754 compliant by default. ICC is not compliant by default, since it assumes FP operations are associative for the sake of performance. Mainstream compilers have compilation flags to make such assumptions in order to speed up code. These are generally combined with other assumptions, such as that no FP value is NaN (e.g. -ffast-math). Such flags break IEEE-754 compliance, but they are often used in the 3D and video game industries to speed up code. IEEE-754 is not required by the C++ standard, but you can check for it with std::numeric_limits<T>::is_iec559.
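As a minimal sketch of both points (the non-associativity example above and the is_iec559 check), assuming a typical x86-64 compiler with default options:

#include <iostream>
#include <limits>

int main() {
    // The non-associativity example from above; false with 64-bit IEEE-754 doubles.
    double a = 1e-13, b = 1.0;
    std::cout << std::boolalpha << ((a + (b - a)) == ((a + b) - a)) << '\n';

    // Does the implementation claim IEEE-754 (IEC 559) conformance?
    std::cout << std::numeric_limits<float>::is_iec559 << ' '
              << std::numeric_limits<double>::is_iec559 << '\n';
}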
Threads can have different rounding modes by default. However, you can set the rounding mode explicitly (e.g. with the standard <cfenv> facilities). Also, please note that denormal numbers are sometimes disabled on some platforms because of their very high overhead.
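A minimal sketch of setting the rounding mode with <cfenv> (strictly speaking, #pragma STDC FENV_ACCESS ON is required for fully defined behavior; the volatile variables are only there to keep the division at run time):

#include <cfenv>
#include <cstdio>

int main() {
    volatile double x = 1.0, y = 3.0;   // volatile: prevent compile-time evaluation

    std::fesetround(FE_DOWNWARD);       // round toward -infinity
    std::printf("%.17g\n", x / y);

    std::fesetround(FE_TONEAREST);      // restore the default rounding mode
    std::printf("%.17g\n", x / y);
}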
Assuming IEEE-754 compliance is not broken, the rounding mode is the same, and the threads do the operations in the same order, then the results should be identical to within 1 ULP. In practice, if they are compiled using the same mainstream compiler, the results should be exactly the same.
The thing is, using multiple threads often results in a non-deterministic order of the applied FP operations, which causes non-deterministic results. More specifically, atomic operations on FP variables often cause such an issue because the order of the operations often changes at runtime. If you want deterministic results, you need to use a static partitioning and avoid atomic operations on FP variables, or more generally any atomic operations that could result in a different ordering. The same thing applies to locks and any other synchronization mechanisms.
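As a minimal sketch of a reduction that stays deterministic for a fixed thread count (static partitioning, no atomics, and a fixed combination order), here in C++ with OpenMP:

#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const int n = 1000000;
    std::vector<double> data(n, 0.1);

    const int nthreads = omp_get_max_threads();
    std::vector<double> partial(nthreads, 0.0);

    #pragma omp parallel
    {
        const int tid = omp_get_thread_num();
        double local = 0.0;
        #pragma omp for schedule(static)      // fixed iteration-to-thread mapping
        for (int i = 0; i < n; ++i)
            local += data[i];
        partial[tid] = local;                 // no atomics, each thread writes its own slot
    }

    double sum = 0.0;
    for (int t = 0; t < nthreads; ++t)        // fixed combination order
        sum += partial[t];
    std::printf("%.17g\n", sum);
}

The result still depends on the thread count (the partition changes), but for a given thread count every run produces the same bits.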
The same thing is true for GPUs. In fact, such problems are very frequent when developers use atomic FP operations, for example to sum values. They often do that because implementing a fast reduction is complex (though it is more deterministic) and atomic operations are pretty fast on modern GPUs (since they use dedicated, efficient units).
According to the accepted answer to floating point processor non-determinism?, C++ floating point is not non-deterministic. The same sequence of instructions will give the same results.
There are a few things to take into account though:
Firstly, the behavior (i.e. the result) of a particular piece of C++ source code doing a FP calculation may depend on the compiler and the chosen compiler options. For example, it may depend on whether the compiler chooses to emit 64-bit or 80-bit FP instructions. But this is deterministic.
Secondly, similar C++ source code may give different results; e.g. due to non-associative behavior of certain FP instructions. This also is deterministic.
Determinism won't be affected by multi-threading by default. The C++ compiler will probably be unaware of whether the code is multi-threaded or not. And it definitely has no reason to emit different FP code.
Admittedly, FP behavior depends on the rounding mode selected, and that can be set on a per-thread basis. However, for this to happen, something (application code) would have to explicitly set different rounding modes for different threads. Once again, that is deterministic. (And a pretty daft thing for the application code to do, IMO.)
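For what it's worth, here is a minimal sketch of what that setup would look like: two threads explicitly selecting different rounding modes and, deterministically, getting different results for the same expression (the floating-point environment is per-thread state):

#include <cfenv>
#include <cstdio>
#include <thread>

void divide(int mode, const char* name) {
    std::fesetround(mode);                    // affects only this thread's FP environment
    volatile double x = 1.0, y = 3.0;         // volatile: force a run-time division
    std::printf("%s: %.17g\n", name, x / y);
}

int main() {
    std::thread a(divide, FE_UPWARD,   "FE_UPWARD");
    std::thread b(divide, FE_DOWNWARD, "FE_DOWNWARD");
    a.join();
    b.join();
}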
The idea that a PC would use different FP hardware with different behavior for different threads seems far-fetched to me. Sure, a PC could have (say) an Intel chipset and an ARM chipset, but it is not plausible that different threads of the same C++ application (executable) would simultaneously run on both chipsets.
Likewise for GPUs. Indeed, given that you need to program GPUs in a way that is radically different to ordinary (or threaded) C++, I would doubt that they could even share the same source code.
In short, I think that you are worrying about a hypothetical problem that you are unlikely to encounter in reality ... given the current state of the art in hardware and C++ compilers.
I wrote an application in Haskell that calls the Z3 solver to solve constraints with some complex formulas. Thanks to Haskell, I can quickly switch the data type I'm working with.
When using SBV's AlgReal type for computations, I get results in sensible time; however, switching to Float or Double types makes Z3 consume ~2 GB of RAM and it doesn't produce a result even after 30 minutes.
Is it expected that producing floating-point solutions requires much more time, or is it some mistake on my side?
As with any question regarding solver performance, it is impossible to make generalizations. Christoph Wintersteiger (https://stackoverflow.com/users/869966/christoph-wintersteiger) would be the expert on this to opine, but I'm not sure how closely he follows this group. Chris: If you're reading this, I'd love to hear your thoughts!
There's also the risk of comparing apples-to-oranges here: Reals and floats are two completely different logics, with different decision procedures, heuristics, algorithms, etc. I'm sure you can find problems where one outperforms the other, with no clear "performance" winner.
Having said all that, here are some things that make floating-point (FP) tricky:
Rewriting is almost impossible with FP, since rules like associativity simply don't hold for FP addition/multiplication. So, there are fewer opportunities for simplification before bit-blasting.
Similarly a * 1/a == 1 doesn't hold for floats. Neither does x + 1 /= x or (x + a == x) -> (a == 0) and many other "obvious" simplifications that you'd love to be able to make. All of this complicates reasoning.
Existence of NaN values make equality non-reflexive: Nothing compares equal to NaN including itself. So, substitution of equals-for-equals is also problematic and requires side conditions.
Likewise, the existence of +0 and -0, which compare equal but behave differently due to rounding complicate matters. The typical example is x == 0 -> fma(a, b, x) == a * b doesn't hold (where fma is fused multiply-add), because depending on the sign of zero these two expressions can produce different values for different rounding modes.
Combination of floats with integers and reals introduce non-linearity, which is always a soft-spot for solvers. So, if you're using FP, it is advisable not to mix it with other theories as the combination itself creates extra complexity.
Operations like multiplication, division, and remainder (yes, there's a floating-point remainder operation!) are inherently very complex and lead to extremely large SAT formulas. In particular, multiplication is a known operation that challenges most SAT/BDD engines, due to lack of any good variable ordering and splitting heuristics. Solvers end up bit-blasting fairly quickly, resulting in extremely large state-spaces. I have observed that solvers have a hard time dealing with FP division and remainder operations even when the input is completely concrete; imagine what happens when they are fully symbolic!
The logic of reals has a decision procedure with a double-exponential complexity. However, techniques like Fourier-Motzkin elimination (https://en.wikipedia.org/wiki/Fourier%E2%80%93Motzkin_elimination), while remaining exponential, perform really well in practice.
FP solvers are relatively new, and it's a niche area with nascent research. So existing solvers tend to be quite conservative and quickly bit-blast and reduce the problem to bit-vector logic. I would expect them to improve over time, just like all the other theories did.
Again, I want to emphasize comparing solver performance on these two different logics is misguided as they are totally different beasts. But hopefully, the above points illustrate why floating-point is tricky in practice.
A great paper to read about the treatment of IEEE754 floats in SMT solvers is: http://smtlib.cs.uiowa.edu/papers/BTRW14.pdf. You can see the myriad of operations it supports and get a sense of the complexity.
I have an interesting question, but I'm not sure exactly how to phrase it...
Consider the lambda calculus. For a given lambda expression, there are several possible reduction orders. But some of these don't terminate, while others do.
In the lambda calculus, it turns out that there is one particular reduction order which is guaranteed to always terminate with an irreducible solution if one actually exists. It's called Normal Order.
I've written a simple logic solver. But the trouble is, the order in which it processes the constraints seems to have a huge effect on whether it finds any solutions or not. Basically, I'm wondering whether something like a normal order exists for my logic programming language. (Or whether it's impossible for a mere machine to deterministically solve this problem.)
So that's what I'm after. Presumably the answer critically depends on exactly what the "simple logic solver" is. So I will attempt to briefly describe it.
My program is closely based on the system of combinators in chapter 9 of The Fun of Programming (Jeremy Gibbons & Oege de Moor). The language has the following structure:
The input to the solver is a single predicate. Predicates may involve variables. The output from the solver is zero or more solutions. A solution is a set of variable assignments which make the predicate become true.
Variables hold expressions. An expression is an integer, a variable name, or a tuple of subexpressions.
There is an equality predicate, which compares expressions (not predicates) for equality. It is satisfied if substituting every (bound) variable with its value makes the two expressions identical. (In particular, every variable equals itself, bound or not.) This predicate is solved using unification.
There are also operators for AND and OR, which work in the obvious way. There is no NOT operator.
There is an "exists" operator, which essentially creates local variables.
The facility to define named predicates enables recursive looping.
One of the "interesting things" about logic programming is that once you write a named predicate, it typically works forwards and backwards (and sometimes even sideways). Canonical example: a predicate to concatenate two lists can also be used to split a list into all possible pairs.
But sometimes running a predicate backwards results in an infinite search, unless you rearrange the order of the terms. (E.g., swap the LHS and RHS of an AND or an OR somewhere.) I'm wondering whether there's some automated way to detect the best order to run the predicates in, to ensure prompt termination in all cases where the solution set is actually finite.
Any suggestions?
Relevant paper, I think: http://www.cs.technion.ac.il/~shaulm/papers/abstracts/Ledeniov-1998-DCS.html
Also take a look at this: http://en.wikipedia.org/wiki/Constraint_logic_programming#Bottom-up_evaluation
I don't quite understand determinism in the context of concurrency and parallelism in Haskell. Some examples would be helpful.
Thanks
When dealing with pure values, the order of evaluation does not matter. That is essentially what parallelism does: Evaluating pure values in parallel. As opposed to pure values, order usually matters for actions with side-effects. Running actions simultaneously is called concurrency.
As an example, consider the two actions putStr "foo" and putStr "bar". Depending on the order in which those two actions get evaluated, the output is either "foobar", "barfoo" or any state in between. The output is indeterministic as it depends on the specific order of evaluation.
As another example, consider the two values sum [1..10] and 5 * 3. Regardless of the order in which those two get evaluated, they always reduce to the same results. This determinism is something you can usually only guarantee with pure values.
Concurrency and parallelism are two different things.
Concurrency means that you have multiple threads interacting non-deterministically. For example, you might have a chat server where each client is handled by one thread. The non-determinism is essential to the system you're trying to model.
Parallelism is about using multiple threads for simply making your program run faster. However, the end result should be exactly the same as if you run the algorithm sequentially.
Many languages don't have primitives for parallelism, so you have to implement it using concurrency primitives like threads and locks. However, this means that you the programmer have to be careful to ensure that you don't accidentally introduce unwanted non-determinism or other concurrency issues. With explicit parallelism primitives like par and pseq, many of these concerns simply go away.
I'm very new to GPGPU and programming. I'm interested to know whether statistical hypothesis tests like the one-sample Kolmogorov-Smirnov test (K-S test) and Levene's test could be implemented on a GPGPU (SIMD) using CUDA. If so, what would the limitations be?
I just read web definitions about these tests, but, if I understood correctly, they can be properly accelerated by the kind of parallelism expressed by SIMD (in particular as implemented by CUDA).
In the K-S test, one has to compute the difference between a function and an estimate at N samples, then take the maximum difference. In other words, one has to perform the same operation on N different values, which is exactly SIMD (single instruction, multiple data). In Levene's test, there is again the same pattern: differences, squares and multiplications over N different values.
What SIMD can do is a sort of FOR statement over N value sets, provided that the iterations are independent from each other. Thus, in CUDA for example, the compiler can allocate the iterations to the processing elements of the graphic device, so that, executing in parallel, the FOR loop is run for all the data in the time of a single iteration.
The CUDA toolkit provides a specific C/C++ compiler (NVCC) with which special instructions are dispatched to the GPGPU rather than to the CPU, and therefore distributed to its parallel processing elements.
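As a minimal sketch of that data-parallel pattern for the K-S statistic, here in portable C++17 parallel algorithms rather than CUDA (a CUDA kernel would follow the same shape; the normal reference CDF and the toy data are only for illustration):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

double normal_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

int main() {
    std::vector<double> sample = {-1.2, -0.3, 0.1, 0.8, 1.5};   // toy data
    std::sort(sample.begin(), sample.end());

    const std::size_t n = sample.size();
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), std::size_t{0});

    // D = max_i max( (i+1)/n - F(x_i), F(x_i) - i/n ): one independent computation
    // per sample (the "FOR statement" above), followed by a max-reduction.
    double D = std::transform_reduce(
        std::execution::par, idx.begin(), idx.end(), 0.0,
        [](double a, double b) { return std::max(a, b); },
        [&](std::size_t i) {
            double F = normal_cdf(sample[i]);
            return std::max(double(i + 1) / n - F, F - double(i) / n);
        });
    std::printf("K-S statistic D = %g\n", D);
}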