Floating decimal point type in Haxe

Is there a floating decimal point type in Haxe (similar to decimal.Decimal in Python, i.e. http://en.wikipedia.org/wiki/IEEE_854-1987)? For the application I have, Float is not an option due to possible precision problems.

I don't think anything like that exists in the standard library, apart from target-specific externs such as java.math.BigDecimal. So if you want a cross-platform solution, you would need a third-party library.
One that comes to mind is thx.Decimal from the thx.core library. Having a look at the associated test cases may be helpful since there's not much documentation.

Related

Webassembly trig functions possible?

Does WebAssembly have support for trig functions? I mean built-in support, because it seems we have to import those from JavaScript. It would be great if we had things like:
f32.sin
f32.cos
If that makes any sense. If they don't exist, I assume it's because the implementation is very system-dependent.
The problem:
Imagine we had some really complex and computationally expensive formula that involves these math functions. I would like to compute everything in WebAssembly without relying on imports or splitting my code so that one part runs in WebAssembly and the other in JavaScript.
Apart from that, I believe things would look semantically neater if trig functions were built in.
Trigonometric functions aren't available in WebAssembly though it's been discussed before 1, 2, 3. In general, WebAssembly provides opcodes for things that exist efficiently as CPU instructions, and trigonometric functions just don't have efficient CPU versions nowadays (even the x86 sin / cos is slow).
Further, we don't want to mandate specific precision bounds at this point in time. It's an art-form to specify trigonometric functions with the right precision and there hasn't been strong interest thus far.
In the future, we'd like to see code sharing of something like libm.wasm, which would remove the code download burden.
An argument could be made that implementations could be more efficient if one had ISA information, but we'd need someone championing that use case with data to standardize it. I expect the first such case to be fast sqrt and inverse-sqrt approximations, which are often used in games and do have good ISA support. I haven't otherwise heard of real performance-sensitive trigonometric uses in the context of WebAssembly. Not that I don't think they'd exist: it's simply a question of priorities for the Community Group.

Cereal versus Binary

The two major competing packages for serializing and deserializing data in Haskell that I am aware of are binary and cereal. When should one choose one of these packages over the other? Or are there other choices that I am neglecting?
They aren't competing; they are complementary. cereal works on strict bytestrings, while binary works on lazy ones. Because of its lazy nature, binary resorts to throwing an exception on a parse error, while cereal can fail via Either.
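To make the difference concrete, here is a minimal sketch (assuming recent versions of both packages) of how a parse failure surfaces in each: cereal returns an Either you can pattern-match on, while binary's plain decode throws once the result is forced.

    import qualified Data.Binary as Binary
    import qualified Data.Serialize as Cereal
    import qualified Data.ByteString as BS

    main :: IO ()
    main = do
      -- cereal: a parse error is an ordinary Left value
      let tooShort = BS.pack [1, 2]  -- not enough bytes for an encoded Int
      case Cereal.decode tooShort :: Either String Int of
        Left err -> putStrLn ("cereal failed purely: " ++ err)
        Right n  -> print n
      -- binary: decode works on lazy ByteStrings and throws an exception
      -- on malformed input; on well-formed input it just works
      print (Binary.decode (Binary.encode (42 :: Int)) :: Int)

(Newer versions of binary also provide decodeOrFail, which returns an Either-style result instead of throwing.)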
Also, to imply there are "only" two main packages is a misrepresentation. At the very least you should look at blaze-builder too.
For one thing, binary has a questionable default encoding for floating point, instead of simply using the IEEE-754 encoding. For example, NaN does not round-trip properly. cereal has no such known issue. The problem shows no sign of being resolved; it can be circumvented by explicitly using functions like getFloatle, but that means generically derived Binary instances still have the issue.
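As a sketch of that workaround (assuming binary >= 0.8.5, which added the raw IEEE-754 primitives), one can bypass the default Float instance entirely:

    import Data.Binary.Get (getFloatle, runGet)
    import Data.Binary.Put (putFloatle, runPut)

    -- Serialize as raw little-endian IEEE-754 bits rather than via the
    -- default Binary Float instance
    roundTrip :: Float -> Float
    roundTrip = runGet getFloatle . runPut . putFloatle

    main :: IO ()
    main = print (isNaN (roundTrip (0 / 0)))  -- True: NaN survives the trip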
On the flip side, though, binary seems to be more popular than cereal. There are currently 345 packages on Hackage that depend on cereal vs. 821 that depend on binary. So you may find the related libraries you need more easily if you choose binary.

What would be involved in calling ARPACK++ (a C++ library) from Haskell?

I've spent a couple of days developing a program in Haskell, while learning the language. Now I realize that I'll need to call Arpack (a Fortran library) or Arpack++ (a C++ wrapper to Arpack) -- I can't find a good implementation of the Lanczos method with Haskell bindings. Do any more experienced Haskell programmers have an opinion on how difficult this would be?
I've been able to get ".so" ("shared object") versions of libarpack and libarpack++ installed through Ubuntu's repository, but I'm not sure that will suffice. I suspect I'm going to ultimately need to build Arpack++ from source code, which is possible, but I'm getting a lot of build errors, so it will take time. Is there any way to use just the ".so" files, without knowing exactly which version of the header files were used to generate them?
I'm considering using GreenCard, because it looks like the most well-maintained Haskell/C bridge. I can't find much documentation though, so I'm wondering whether it will support C++ too.
I'm also starting to wonder whether I should rewrite my program in Python, and use scipy to call Arpack, but I've already sunk a couple of days into writing Haskell. I really like Haskell too, so I'm hoping I can make this work. I guess my overall question is this: What would be involved in making this work with Haskell?
Thanks much.
The ELF format is the standard format for executables and shared libraries, so accessing the code in these compiled modules is only a matter of knowing the function names. If I understand correctly, Fortran is interoperable with C. As a consequence, Fortran should be interoperable with any language that can use C bindings, including Haskell. FYI, you can find all names exported by a module (executable, shared object, or simple object archive) using the nm tool (it is usually available in all Linux distros by default). This of course only works if the binary has not been "stripped", but AFAIK stripping is not common practice.
However, Haskell cannot use C++ bindings in a sane way, since C++'s polymorphic features require name mangling, and the method of this name transformation is highly compiler-dependent. This is a well-known problem that is not specific to Haskell. Of course, you could try to get a list of exported symbols from the C++ shared object and then bind them using the FFI, but... it isn't worth it.
As dsign said, you can use GHC's Foreign Function Interface to create bindings to foreign code. All you require is the library headers (and the library itself, of course). In the case of C, those would be the header files (*.h); but since your library is written in Fortran, you have to find the header files' analogue in the library sources, refer to this page to match Fortran and C types, and then use this information to write the FFI bindings. It is helpful to first write a C header for the Fortran routines; then you can even use automatic FFI binding generators like c2hs.
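For a feel of what this ends up looking like, here is a hedged sketch binding a simple Fortran routine. It uses the BLAS function DDOT rather than a full Arpack driver (whose routines take a dozen arguments but follow the same conventions): gfortran-style compilers append an underscore to the symbol name, and every Fortran argument is passed by reference. Linking would need something like -lblas.

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign (Ptr, with, withArray)
    import Foreign.C.Types (CDouble, CInt)

    -- Fortran: DOUBLE PRECISION FUNCTION DDOT(N, DX, INCX, DY, INCY)
    foreign import ccall unsafe "ddot_"
      c_ddot :: Ptr CInt -> Ptr CDouble -> Ptr CInt
             -> Ptr CDouble -> Ptr CInt -> IO CDouble

    -- Marshal Haskell lists to C arrays and call the Fortran routine
    dot :: [Double] -> [Double] -> IO Double
    dot xs ys =
      withArray (map realToFrac xs) $ \px ->
      withArray (map realToFrac ys) $ \py ->
      with (fromIntegral (length xs)) $ \pn ->
      with 1 $ \pinc ->  -- unit stride through both vectors
        realToFrac <$> c_ddot pn px pinc py pinc

    main :: IO ()
    main = dot [1, 2, 3] [4, 5, 6] >>= print  -- 32.0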
It may also be helpful to look through the C++ bindings. It is possible that they include the kind of header file I've described above. If they do, then writing FFI bindings will be no more difficult than writing them for any other library.
So it is not entirely impossible, but it may require some thorough work. Writing bindings to scientific/purely computational libraries is much easier than writing them for some system library that does a lot of IO and keeps its own internal state; but since this library is not written in C... well, it may be advisable to invest your time in easier alternatives. I cannot say anything about scipy, as I've never used it, but since Python as a language is much simpler than Haskell, it may be a good alternative.
I can tell you that using a C/Fortran library from Haskell, with the help of the Foreign Function Interface, would certainly be possible and not terribly complicated. Here is an introduction. In my understanding, you should be able to call anything with a C calling convention, and perhaps even Fortran, without needing to recompile the code. The only exception is things that look like function calls but are in fact macros, in which case you will have to figure out what the macros do and reproduce them in Haskell.
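To illustrate the macro caveat with a small hypothetical example (the macro name here is invented, not taken from Arpack): a C header might define a convenience macro such as IS_CONVERGED. A macro leaves no symbol in the shared object, so it cannot be imported; its logic simply has to be re-implemented on the Haskell side:

    -- Hypothetical re-implementation of a C convenience macro,
    --   #define IS_CONVERGED(info) ((info) == 0)
    -- which has no linkable symbol and so cannot be bound via the FFI.
    isConverged :: Int -> Bool
    isConverged info = info == 0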
As for GreenCard, I have never used it, so I cannot vouch for it.
Your second idea of using Python could potentially save you more than a couple of days. Sad as it is, I have never managed to get Haskell code to adapt easily to my changing requirements, while I find that trivial in Python. Of course, that could be a limitation of my skills with Haskell or of my thinking process, rather than something to blame on the language.

Which Haskell library for interpolated strings

There are many different libraries on Hackage dealing with interpolated strings. Some are of poor quality, while others vary in the number of features they support.
Which ones are worth using?
Examples of libraries (in no particular order): shakespeare, interpolatedstring-qq, Interpolation
I took a look at all the interpolation quasiquoter libraries I could find on Hackage.
Interpolation libraries worth using:
interpolatedstring-perl6: Supports interpolating arbitrary Haskell code with reasonable syntax, but requires haskell-src-exts. If you just want a general string interpolation syntax, I'd use this; a short usage sketch follows at the end of this answer.
shakespeare-text: Based on the Shakespeare family of templates, and has minimal dependencies; most other interpolated string packages depend on haskell-src-exts, which is quite a heavy package and can take a lot of time and resources to compile. If you use any other Shakespeare templates, I'd suggest going with this one.
However, it doesn't support interpolating arbitrary Haskell code, although it supports more than simple variable expansion: it also handles function application, operators, etc. I think it also uses Text rather than String; looking at the source code, I'm not sure whether it can be used with Strings, though there is support code to suggest it can be.
Interpolation: Supports arbitrary expressions (again with haskell-src-exts), and also has built-in looping facilities. If you want more "template"-like features than just plain interpolation, it's worth considering, although I personally find the syntax quite ugly.
Interpolation libraries probably not worth using:
interpolatedstring-qq: Seems to be based on interpolatedstring-perl6; it hasn't been updated for over a year, and seems to have less functionality than interpolatedstring-perl6. Unless you're really attached to the #{expr} syntax, I wouldn't consider this one.
interpol: Implemented as a preprocessor, gives {foo} special meaning in strings; IMO too heavyweight a solution, and with such lightweight syntax, likely to break existing code.
In summary, I'd suggest interpolatedstring-perl6 if you don't mind the haskell-src-exts dependency, and shakespeare-text if you do (or are already using Shakespeare templates).
Another option might be to use the string-qq package with a more general template engine; it supports String, Text and ByteString, which should cover every use. However, this obviously doesn't support embedding Haskell code, and you'll need to specify the variables separately, so it's probably only useful in certain situations.
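For a taste of what the recommended option looks like in practice, here is a minimal sketch using interpolatedstring-perl6 (the other packages differ mainly in syntax and dependencies):

    {-# LANGUAGE QuasiQuotes #-}
    import Text.InterpolatedString.Perl6 (qq)

    main :: IO ()
    main = do
      let name = "world"
      -- {...} splices variables and arbitrary Haskell expressions
      putStrLn [qq|Hello, {name}! 2 + 2 = {2 + 2}|]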

Rationale for no primitive SIMD data types

(Sorry if this sounds like a rant, but it's a real question and I'd appreciate real answers)
I understand that since C is so old, it might not have made sense to add it back then (MMX didn't even exist at the time). But since then there has been C99, and there is still no standard for SIMD variables (as far as I know).
By "SIMD variables", I mean something like:
vec2_int a = {2, 2};
vec2_int b = {3, 3};
a += b;
I also understand that this can be done with structs and (in theory) the compiler should optimize it to use SIMD when appropriate anyway.
But I recently saw a post from Qt Labs that includes an example with types like __m128i (which look clearly non-standard), instead of relying on optimizations. Considering Qt advertises this as greatly improving Qt's speed, I'm guessing compiler optimizations are insufficient, at least for some programmers.
If it were just C, I'd think C was being stupid. But, as far as I know, newer languages such as C++, Java and C# don't include these either. C# has Mono.SIMD, but it's not a primitive type (and since C# has a "decimal" keyword, I don't think they were trying to economize on types).
So here's what I'm noticing: Languages with vector primitive types seem to be the exception and not the rule.
Because vector primitive types look so obvious, I'm guessing there's got to be some decent reasons NOT to include these types.
Does anyone here know why these types are so frequently excluded? Some links to rationales against adding them?
Because not all processors support SIMD instructions. Languages like C and C++ (and even Java and C#) are designed to be used on different kinds of hardware, like microcontrollers, in addition to desktop computers.
Currently, vectorization of algorithms is not automatic (although that is being actively researched). Algorithms that are "vectorizable" must be explicitly written to take advantage of any SIMD capabilities of the execution environment.
