Porting duktape, getting duk_create_heap error during JS compilation of builtin initjs - duktape

This question might be too detailed for this forum, but I could not find a mailing list for duktape. Maybe this question will be useful for others trying to get duktape running on more obscure hardware.
I am trying to get duktape to work on an old ColdFire CPU, using an OLD gcc compiler (2.95.3). The board has limited resources (flash/RAM) but I seem to have enough of both. I must live with the old compiler.
I believe duk_config.h is detecting the right options regarding endianness, etc. I am using a number of the Duktape options to reduce code and data size. I have successfully used the same configuration on 64-bit and 32-bit Ubuntu and it works fine.
The "properties string" that is formed and set in duk_hthread_create_builtin_objects() is:
"bb u pnRHSBOL p2 a8 generic linux gcc" which seems correct (not sure of the effect of the "generic" tag for architecture).
I am getting a failure when calling duk_create_heap(). I have isolated the problem to what I believe is a JS compile error related to duk_initjs. If I undef DUK_USE_BUILTIN_INITJS, initialization works. The error is a syntax error (not sure where yet). By running "strings" on my executable, I can see that the JavaScript source string is there. As a side issue, when this error occurs the longjmp doesn't work (setjmp never called?), so my fatal handler gets called, but I don't care about that for now.
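For reference, the heap creation itself is nothing special; roughly what I'm doing (simplified, the fatal handler name is mine):

#include <stdio.h>
#include <stdlib.h>
#include "duktape.h"

/* With DUK_USE_BUILTIN_INITJS enabled this handler ends up being called
   instead of the error being caught by the usual longjmp path. */
static void my_fatal(duk_context *ctx, duk_errcode_t code, const char *msg) {
    (void) ctx;
    fprintf(stderr, "fatal error %d: %s\n", (int) code, msg ? msg : "(null)");
    abort();
}

int main(void) {
    duk_context *ctx = duk_create_heap(NULL, NULL, NULL, NULL, my_fatal);
    if (ctx == NULL) {
        fprintf(stderr, "duk_create_heap() failed\n");
        return 1;
    }
    duk_destroy_heap(ctx);
    return 0;
}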
I thought it might be my small C stack (as it appears the JS compiler uses recursion), but making the stack much larger didn't help.
I am starting to dig into the JS compiler, but this must be an issue with the architecture or my environment. Any suggestions appreciated!
EDIT: I just now noticed a post about a similar issue, where there was a request to repeat with "-DDUK_OPT_DEBUG -DDUK_OPT_DPRINT -DDUK_OPT_ASSERTIONS -DDUK_OPT_SELF_TESTS". I will try these options (if possible; I am very close to a relocation limit on my executable).

There was a bug in the 1.4.0 release (https://github.com/svaarala/duktape/pull/550) which caused duk_config.h to incorrectly end up with an unpacked value representation even when the architecture supported the packed representation. This might be the issue in your case - try adding an explicit -DDUK_OPT_PACKED_TVAL (which forces Duktape to use the packed representation) to see if it helps.
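For example, add it to the flags used to compile duktape.c (and any of your own sources that include duktape.h); everything here besides the new define is just a placeholder:

gcc -DDUK_OPT_PACKED_TVAL -Os -c duktape.c -o duktape.o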

Related

Rcpp: Platform differences in output

I have the following problem (and cannot really produce a minimal test case).
I am porting a package from C++ to R via Rcpp.
The tests (I check whether the output matrix is exactly what I would get by calling the C++ code directly) give identical results under Linux and OS X, no difference.
But when testing either via build_win() or on a Windows 8.1 virtual machine I get different results (the results from the two Windows setups are consistent with each other, so it is Linux/OS X results vs. Windows results).
I already replaced the one rand() call with the corresponding Rcpp sugar, so that should not be a problem (I hope, at least).
Since running the tests via "R -d valgrind" also produces no errors, I am a bit puzzled about how to proceed.
All tests are done with R 3.2.0 (local machines) and the latest unstable version (via build_win()).
So my questions are:
Are there any known Rcpp differences between platforms when compiling? E.g. is the compiler provided by Rtools on Windows old enough that numeric computations (using the STL, no other libraries like Boost/Eigen) are expected to be slightly different?
Is there a good way to debug the problem? I would basically need to trace the C++ code line by line, and I am not even sure how to do that except with heavy use of std::cout.
Thanks.
The truth about the 32-bit/64-bit problem is indeed written up here:
Different behaviour of sqrt when compiled with 64 or 32 bits
Adding the -ffloat-store option did fix my problem.
I never expected that; I thought the problem was in the source code.
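For anyone else hitting this, the mechanism is x87 excess precision: on a 32-bit build intermediate results can live in 80-bit registers and are only rounded to 64 bits when spilled to memory, which is exactly the kind of difference -ffloat-store (or compiling for SSE2, as 64-bit builds do) suppresses. A minimal sketch of my own (not from the package) that can make the effect visible, depending on compiler, target and flags:

#include <stdio.h>

/* On 32-bit x87 builds the division result can stay in an 80-bit register;
   it is only rounded to a 64-bit double when stored to memory.  On 64-bit
   SSE2 builds every operation is rounded to 64 bits immediately. */
static double third(double x) {
    return x / 3.0;
}

int main(void) {
    double stored = third(0.1);          /* rounded to 64 bits in memory */
    int same = (stored == third(0.1));   /* second result may still carry extra precision */
    printf("consistent rounding: %d\n", same);
    return 0;
}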

Linux: How to find out which (sub) dependency of my library needs a specific library?

The title may seem complicated.
I made a library to be loaded within a Tcl script. Now I need to transfer it to Ubuntu 12.04.
Tclsh gives the following error:
couldn't load file "/apollo/applications/Linux-PORT/i586/lib/libapmntwraptcl.so":
libgeos-3.4.2.so:
cannot open shared object file: No such file or directory
while executing "load $::env(ACCLIB)/libapmntwraptcl[info sharedlibextension]"
The libgeos library is not available in version 3.4.2 under Ubuntu 12.04. So I need to know which (sub)dependency of my library needs the famous libgeos-3.4.2.so, so that I can rebuild it or find an alternative.
Many thanks in advance.
Edit:
Thank you for your USEFUL answers. I already tried ldd -v and ldd -r. I get 200+ dependencies when I do ldd -r. The worst part is that in the result list I see libgeos-3.3.8.so => /usr/lib/libgeos-3.3.8.so (0xb3ea9000) (the version I have), but when I execute, tclsh says
libgeos-3.4.2.so is missing.
That's why I need something able to tell me the complete dependency tree of my library.
Could anyone give me a hint (not some useless showoff)?
Thank you so much.
You've accidentally (probably through no fault of your own) wandered into “DLL Hell”; the problem is that something that libapmntwraptcl.so depends on, possibly indirectly, does not have its dependencies satisfied. This sort of thing can be very difficult to solve, precisely because the tools that know what went wrong (in particular, the system dynamic linker) produce so little informative output by default.
What's even worse is that you apparently have multiple versions about. That's where DLL Hell reaches its worst incarnation. You need to be a detective to solve this; it's too hard to do sensibly remotely, as many of the things you poke your fingers at are determined by what the previous steps said.
You need to identify exactly what versions you're loading, with ldd libapmntwraptcl.so (in your shell, not in Tcl). You also need to double check what your environment variables are immediately before the offending load command, as several of them can affect the loading process. The easiest way to do that is to put parray env just before the offending load, which will produce a dump of everything in the context where things could be failing; reading the manual page for ld.so will tell you a lot more about each of the possible candidates for trouble (there's many!).
You might also need to go through the list of libraries identified by the ldd program above and check whether each of those also has all their dependencies satisfied and in a way that you expect, and you should also bear in mind that failing to locate with ldd might not mean that the code actually fails. (That would be too easy.)
You can also try setting the LD_DEBUG environment variable to all before doing the load. That will produce quite a lot of information on standard error; maybe it will give you enough to figure out what is going wrong?
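For example (the script name is a placeholder; the dynamic linker writes its trace to standard error):

LD_DEBUG=all tclsh yourscript.tcl 2> ld_debug.log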
Finally, on Linux you need to bear in mind that there can be an RPATH set for a particular library (which can affect where it is found) and there's a system library cache which can also affect things.
I'm really sorry the error message isn't better. All I can really say is that it's exactly as much as Tcl is told about what went wrong, and it's hardly anything.

How to inspect Haskell bytecode

I am trying to figure out a bug (a serious performance regression). Unfortunately, I wasn't able to figure out why by going back through many different versions of my code.
I suspect it could be some modifications to libraries that I've updated; in the meanwhile I've also updated from GHC 7.4 to 7.6 (and if anybody knows whether some laziness behaviour has changed, I would greatly appreciate hearing about it!).
I have an older executable of this code that does not have this bug, so I wonder whether there are any tools that can tell me which library versions I was linking against back then, e.g. by figuring out the symbols.
GHC creates executables, which are notoriously hard to understand... On my Linux box I can view the assembly code by typing in
objdump -d <executable filename>
but I get back over 100K lines of code from just a simple "Hello, World!" program written in Haskell.
If you happen to have the GHC .hi files, you can get some information about the executable by typing in
ghc --show-iface <hi filename>
This won't give you the assembly code, but you can get some extra information that may prove useful.
As I mentioned in the comment above, on Linux you can use "ldd" to see what C-system libraries you used in the compile, but that is also probably less than useful.
You can try to use a decompiler, but those are generally written to decompile to C, not anything higher level and certainly not Haskell. That being said, GHC compiles via C as an intermediate step (at least it used to; has that changed?), so you might be able to learn something.
Personally I often find viewing system calls in action much more interesting than viewing pure assembly. On my Linux box, I can view all system calls by using strace (use Wireshark for the network-traffic equivalent):
strace <program executable>
This also will generate a lot of data, so it might only be useful if you know of some specific place where direct real world communication (i.e., changes to a file on the hard disk drive) goes wrong.
In all honesty, you are probably better off just debugging the problem from source, although, depending on the actual problem, some of these techniques may help you pinpoint something.
Most of these tools have Mac and Windows equivalents.
Since much has changed in the last 9 years, and apparently this is still the first result a search engine gives on this question (like for me, again), an updated answer is in order:
First of all, yes, while Haskell does not specify a bytecode format, bytecode is also just a kind of machine code, for a virtual machine. So for the rest of this answer I will treat them as the same thing. GHC's “Core”, the LLVM intermediate representation, or even WASM could be considered equivalent too.
Secondly, if your old binary is statically linked, then of course, no matter what format your program is in, no symbols will be available to check out, because that is what linking does. This holds even with bytecode, and even with just classic static #include in simple languages. So your old binary will be no good, no matter what. And given the optimisations compilers do, a classic decompiler will very likely never be able to figure out which optimised bits used to belong to which libraries. Especially with stream fusion and similar “magic”.
Third, you can do the things you asked with a modern Haskell program, but you need to have your binaries compiled with -dynamic and -rdynamic, so that not only the C-calling-convention libraries (e.g. .so files) and the Haskell libraries, but also the runtime itself are dynamically linked. That way you end up with a very small binary, consisting of only your actual code, dynamic linking instructions, and the exact data about which libraries and runtime were used to build it. And since the runtime is compiler-dependent, you will know the compiler too. So it would give you everything you need, but only if you compiled it right. (I recommend using such dynamic linking by default in any case, as it saves memory.)
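For example (Main.hs is a placeholder):

ghc -dynamic -rdynamic Main.hs
ldd Main   # now lists the exact Haskell libraries and RTS the binary was linked against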
The last factor that one might forget is that even the exact same compiler version might behave vastly differently depending on what IT was compiled with. (E.g. if somebody put a backdoor in the very first version of GHC, and all GHCs after that were compiled with that first GHC, and nobody ever checked, then that backdoor could still be in the code today, with no traces in any source or libraries whatsoever. … Or, for a less extreme case, the version of GHC your old binary was built with might have been compiled with different architecture options, leading to it emitting more optimised instructions in the binaries it produces, unless told to cross-compile.)
Finally, of course, you can profile even compiled binaries, by profiling their system calls. This will give you clues about which part of the code acted differently and how. (E.g. if you notice that your new binary floods the system with some slow system calls where the old one just used a single fast one. A classic OpenGL example would be using fast display lists versus slow direct calls to draw triangles. Or using a different sorting algorithm, or having switched to a different kind of data structure that fits your work load badly and thrashes a lot of memory.)
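For a quick first comparison, a per-syscall summary of the old and the new binary is often enough (program names are placeholders):

strace -c ./old-binary
strace -c ./new-binary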

Detecting the reason for EXCEPTION_FLT_STACK_CHECK

I have a complicated C and C++ codebase with heavy mathematical calculations. I use Intel C++ (the latest update) to compile. With optimizations enabled the application does not give the expected answer. After a long time I managed to reduce the problem to getting EXCEPTION_FLT_STACK_CHECK (0xc0000092). If I compile without optimization, the program works as expected.
It's single-threaded code on WinXP x64 (the application is 32-bit).
MSVC 2010 gives the same results with Debug or Release builds (I mean good, i.e. expected, results).
Can someone help me with where to look? Currently I suspect a compiler bug, since I have no assembly code of my own, only compiler-generated code. I looked at the assembly and it's mixed SSE/x87 code.
I'm looking for directions to investigate. Since I'm on a trial version of the Intel compiler I don't have much time for the investigation.
I will try to use /Qfp-stack-check tomorrow to see if I can find something wrong with my code.
* Update *
I just found a bug in the Intel compiler. A function returns a value in st(0) but the calling function does not pop it. That way I get the stack exception. The workaround is to use the returned value even though I don't always need it. I will try to reproduce it with code that I can share.
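Roughly the shape of the problem and of the workaround (simplified, the names are made up):

/* The callee returns a double in st(0); when the caller ignores the value
   the buggy code never pops it, so after enough calls the x87 register
   stack overflows and EXCEPTION_FLT_STACK_CHECK (0xc0000092) is raised. */
double compute_term(void);

void broken_caller(void) {
    compute_term();                /* return value ignored, st(0) never popped */
}

void workaround_caller(void) {
    volatile double dummy = compute_term();  /* consuming the value forces the pop */
    (void) dummy;
}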
After this workaround the Intel build was 35% faster than MSVC 2010 on the same code - that's the main result.
mordy

MissingMethodException when using zxing.Monotouch on iOS6

I have updated my development system to the new MonoTouch (6.0.1) and now, whenever I reference zxing.Monotouch types, I get a MissingMethodException on the constructor.
System.MissingMethodException: Method not found: 'MyClass..ctor'.
It's been 3 days now...
Anyone got any idea? I'm even willing to give up zxing if that's what it takes (even though it's a wonderful library).
Edit
When I include zxing.Monotouch in the solution and reference it as a project, the problem does not reproduce. If that's a clue, I've missed it...
It's likely that the binary version of zxing.Monotouch is trying to access something that does not exist in 6.0.1. That's uncommon, as we try to maintain source/binary compatibility unless the code is really broken (e.g. it would cause a crash anyway). I cannot be more precise without more data (e.g. a full build log).
If you include zxing.Monotouch as a reference then it will be rebuilt. If it works then it really looks like source compatibility was preserved (but not binary compatibility).
Whenever you have the source code available I encourage you to use .csproj (not .dll) references. It has a few advantages, including the source/binary compatibility issue mentioned above and the fact that it makes things easier to debug from your project.
