I just upgraded my Haskell Platform to the latest (64-bit 2014.2.0.0) from an older one (32-bit 2013.2.0.0). I reinstalled the various packages my project requires via cabal and rebuilt the main executable. However, this time around the OpenCL (1.0.3.4) package crashes in the first foreign call it makes (clGetPlatformIDs).
The crash is an access violation.
I can't get a stack trace through any of the Windows debugging tools.
I set a breakpoint in Visual Studio on clGetPlatformIDs, but it is never hit, so the crash happens before execution reaches that call. I don't have symbols for OpenCL.dll, but Visual Studio did resolve the breakpoint, so it can at least find the exported symbol.
The KHR OpenCL.dll does get loaded.
None of the registers contains an address that lands in any module I can see.
I can get further through ghci. The calls work, but my OpenCL implementation doesn't show up.
The 32-bit platform works fine.
This sort of smells like a linking issue. (I am not getting out of the Haskell runtime cleanly.) How can I fix this or debug it? How does one generally debug binding issues like this?
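For what it's worth, a minimal standalone C check like the following would take the Haskell runtime out of the picture entirely (built against the same Intel SDK paths as in NOTE 2; the file name and build line are mine and not verified on this machine):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint n = 0;
    /* the same first foreign call the Haskell binding makes */
    cl_int err = clGetPlatformIDs(0, NULL, &n);
    printf("clGetPlatformIDs returned %d, found %u platform(s)\n", (int)err, (unsigned)n);
    return 0;
}

If this 64-bit program also crashes, the fault lies with the SDK import library or the installed ICD loader rather than with the Haskell binding; if it runs, the problem is in how cabal linked the OpenCL package.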
NOTE 1: upon re-installation of various packages I see warnings like:
Util.hs:291:1: Warning:
the 'stdcall' calling convention is unsupported on this platform,
treating as ccall
When checking declaration:
foreign import stdcall unsafe "static windows.h GetCurrentProcessId" getCurrentProcessId
:: IO WIN32.DWORD
I ignored these warnings because they appear in great quantities for packages that work correctly; minimal testing shows those packages still function, and I couldn't find anything significant about the warning online. The Win32 API is heavily used from Haskell, and I don't believe it is totally broken on the 64-bit Haskell Platform; indeed, the getCurrentProcessId import above works fine. I am assuming GHC picks the right convention for x64 and ignores stdcall, which as far as I know does not exist on 64-bit Windows (x64 has a single calling convention).
NOTE 2: I installed the OpenCL module with the following command:
% cabal install OpenCL \
--extra-include-dirs="C:\Program Files (x86)\Intel\OpenCL SDK\4.0\include" \
--extra-lib-dirs="C:\Program Files (x86)\Intel\OpenCL SDK\4.0\lib\x64"
(The \ newlines are inserted here for readability; this was one line.)
Related
I have developed a library which I have been testing on an x86-64 machine; it works and passes its tests. When I put it into my Android application, the code stops in a constructor that just initializes all of its members to their default values (pointers to null, booleans to false, ...). I have set the target ABI to x86_64, so I am sure it is not a problem of deploying to a different architecture. How can I find the root of the problem? If I comment out the initialization in the constructor, it executes a good amount of code before hitting SIGILL again. I am using the Android 8 x86_64 Intel image in the emulator. Logcat doesn't show anything either; the only error is the SIGILL.
It seems that most of the time, doing some pointer manipulation causes the problem. Simply initializing pointers with null or new causes the app to crash.
It turned out that instead of enabling SSE I had enabled AVX, which is not supported by the Android x86_64 image, so Clang optimized some parts of the code with AVX instructions and those produced the SIGILL.
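For anyone hitting the same thing, the fix is just a codegen flag; a rough illustration (the file names are placeholders and the exact build-system syntax will vary, but -mavx and -msse4.2 are the relevant Clang/GCC options):

clang++ -O2 -mavx -c mylib.cpp -o mylib.o        # emits AVX instructions: SIGILL on an image without AVX
clang++ -O2 -msse4.2 -c mylib.cpp -o mylib.o     # restricts codegen to SSE, which the x86_64 image supports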
I am trying to package some native libraries for inclusion into a java natives .jar. Right now, we are targeting 32-bit and 64-bit linux and windows, with macosx upcoming (which would yield a total of 6 variations). In addition, we have some naming problems which would be resolved if we could roll up several small libraries into one big one.
My goal is to convert
my_library.so dependencyA-55.so dependencyB-50.so
into
my_library_without_dependencies.so
I have full (C and C++) sources for dependencyA and dependencyB; however, I would much rather not meddle in their compilation, as it is quite complex (ffmpeg). I am trying to pull this off using gcc 4.6 (Ubuntu 12.04 64-bit), and the solution, if found, should ideally work for 64-bit and 32-bit Linux as well as 64-bit and 32-bit Windows (cross-compiling via mingw32).
Is there any magic combination of linker options that would cause GCC to subsume the dependencies into a single final shared library? I have looked intently at the linker options without success, and related SO questions do not address this use case.
It's not possible to merge them directly: a shared object is already the output of the linker and is in ready-to-execute form.
Instead, since you have the source code, build the dependencies as static libraries (dependencyA.a and dependencyB.a) and use the --whole-archive linker switch when creating my_library.so, as sketched below.
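A sketch of what that looks like (object and archive names are placeholders, and the dependency objects must themselves be compiled with -fPIC for the shared build to work on Linux):

gcc -fPIC -c my_library.c -o my_library.o
ar rcs libdependencyA.a depA_*.o     # archives built from the dependency sources, compiled with -fPIC
ar rcs libdependencyB.a depB_*.o
gcc -shared -o my_library_without_dependencies.so my_library.o \
    -Wl,--whole-archive libdependencyA.a libdependencyB.a -Wl,--no-whole-archive

--whole-archive forces every object in the listed archives into the output instead of only those that resolve currently undefined symbols; remember to turn it off again with --no-whole-archive so that libraries named later on the link line are not pulled in wholesale.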
Error
Currently using this to compile my C++ program:
x86_64-w64-mingw32-g++ -std=c++11 -c main.cpp -o main.o -I../include
x86_64-w64-mingw32-g++ main.o -o mainWin.exe -L/usr/lib/x86_64-linux-gnu/ -L/usr/local/lib -lopengl32 -lglfw32 -lGLEW -lX11 -lXxf86vm -lXrandr -lpthread -lXi -DGLEW_STATIC
I am using Mingw to compile my C++ program from Linux (Ubuntu) to a Windows executable. I am relatively new to compiling via command line, but I would like to switch my work environment completely over to Linux.
When I attempt to compile the program, I get the following error:
*** Error in `/usr/bin/x86_64-w64-mingw32-ld`: free(): invalid pointer: [removed]***
ld terminated with signal 6 [Aborted], core dumped
I believe this is because of my build of GLEW. Whenever I run make for it, it wants to use a mingw32msvc variant of MinGW; I think I need it to use x86_64-w64-mingw32-gcc, but I cannot figure out how to make it do this (if it is even possible).
Extra
It's also worth noting that I only get this error with GLEW_STATIC defined at the top of main.cpp. Without it, I get undefined references to GLEW.
It seems that you were using the -lGLEW flag when you're supposed to use -lglew32s (static) or -lglew32 (dynamic)! Make sure to #define GLEW_STATIC if you are statically linking, and get the appropriate Windows binaries from the GLEW website.
If the loader (or any program) is crashing, then check whether you are using the most recent version. If not, get hold of the newest version and try again. If that doesn't resolve it, can you find an older version that works and use that? If you can't easily find a version that works, you need to report the bug to the relevant team — at MinGW or the binutils team at GNU. Is 32-bit compilation an option? If so, try that. You're in a hole; it will probably take some digging to get yourself out.
This problem seems to occur in 2016, even though the question is from 2014. It is a little surprising that the problem has not been fixed yet — assuming that the flawed free being run into in 2016 is the same as the one that occurred in 2014. If the loader now in use dates from (say) 2013-early 2015, then there's probably an update and you should investigate it. If the loader now in use dates from mid-2015 onwards, it is more likely (or, if that's too aggressive, then it is at least possible) that it is a different bug that manifests itself similarly.
The advice to "try an upgrade if there is one available; if that doesn't work, see whether you can find a working downgrade" remains valid. It would be worth trying to create an MCVE (Minimal, Complete, and Verifiable Example) and reporting the bug to the maintenance teams — as was suggested by nodakai in 2014. The smaller the code you use, and the fewer libraries you need, the easier it will be for the maintenance teams to discover the problem and fix it. If it is a cross-compiler running on Linux for MinGW, then you still need to minimize the code and report the issue.
Note that if you can find a 'known-to-work' version, that will probably be of interest to the maintainers. It localizes where they need to look a bit.
I should note that even if the library in use is the wrong library, the loader still shouldn't crash with the free error. It can report a problem and stop under control. It should not crash. It may still be worth reporting that it does crash.
In many ways, this is just generic advice on what to do when you encounter a bug in software.
You are (and I was) using the -lGLEW flag when you're supposed to use -lglew32s (static) or -lglew32 (dynamic)! Make sure to #define GLEW_STATIC if you are statically linking, and get the appropriate Windows binaries from the GLEW website.
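For completeness, a sketch of the static-GLEW setup (the library paths and the -lglfw3 name are assumptions based on the question's command line). GLEW_STATIC has to be visible when main.cpp is compiled, i.e. before the GLEW header is included; passing -DGLEW_STATIC only on the link step, as in the original command, has no effect:

// main.cpp
#define GLEW_STATIC        // must appear before the GLEW header when linking against glew32s
#include <GL/glew.h>

x86_64-w64-mingw32-g++ -std=c++11 -DGLEW_STATIC -c main.cpp -o main.o -I../include
x86_64-w64-mingw32-g++ main.o -o mainWin.exe -L../lib -lglew32s -lglfw3 -lopengl32 -lgdi32

Note also that the X11 libraries on the original link line are Linux-only and are neither needed nor available when targeting Windows with MinGW.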
The title is pretty straightforward - I can't get anything at all to run when building in x64 and I get a message box with this error code. Do you know what may be the problem here?
This is STATUS_INVALID_IMAGE_FORMAT; you can find these error codes listed in the ntstatus.h SDK header file.
It is certainly strongly correlated with building x64 code. You'll get this status code whenever your program has a dependency on 32-bit code, particularly in a DLL: the program fails to start when it tries to load that DLL, because a 64-bit process cannot contain any 32-bit code. The same happens the other way around, when a 32-bit process tries to load a 64-bit DLL.
Review all the dependencies for your program, particularly the import libraries you link. Everything must be built to target x64. You can use SysInternals' ProcMon utility to find the DLL that fails to load, useful in case this is a DLL Hell problem.
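For example, the dumpbin tool from a Visual Studio developer prompt can confirm the bitness of each module (file names here are placeholders):

dumpbin /dependents myprogram.exe                  # lists the DLLs the executable imports
dumpbin /headers some_dependency.dll | findstr machine

The second command prints "8664 machine (x64)" for a 64-bit module or "14C machine (x86)" for a 32-bit one; any x86 module in an x64 build is the likely culprit.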
Just an addition to the correct answer above: also check your .manifest files (or the corresponding #pragma comment(linker, "/manifestdependency:...") directives) and make sure that you have processorArchitecture='x86' for 32-bit code and processorArchitecture='amd64' for x64 code.
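As an illustration, the stock Common-Controls dependency declared from source looks like this for an x64 build (your own dependency's name, version, and publicKeyToken will of course differ):

#pragma comment(linker, "\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='amd64' publicKeyToken='6595b64144ccf1df' language='*'\"")

For a 32-bit build the same line would carry processorArchitecture='x86'.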
I have a very complex cross-platform application. Recently my team and I have been running stress tests and have encountered several crashes (and core dumps accompanying them). Some of these core dumps are very precise, and show me the exact location where the crash occurred with around 10 or more stack frames. Others sometimes have just one stack frame with ?? being the only symbol!
What I'd like to know is:
Is there a way to increase the probability of core dumps pointing in the right direction?
Why isn't the number of stack frames reported consistent?
Any best-practice advice for managing core dumps?
Here's how I compile the binaries (in release mode):
Compiler and platform: g++ with glibc-2.3.2-95.50 on CentOS 3.6 x86_64 -- This helps me maintain compatibility with older versions of Linux.
All files are compiled with the -g flag.
Debug symbols are stripped from the final binary and saved in a separate file (the usual objcopy workflow for this is sketched below).
When I have a core dump, I use GDB with the executable which created the core, and the symbols file. GDB never complains that there's a mismatch between the core/binary/symbols.
Yet I sometimes get core dumps with no symbols at all! It's understandable that I'm linking against non-debug versions of libstdc++ and libgcc, but it would be nice if the stack trace at least showed where in my code the faulty call originated (even if it ultimately ends in ??).
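For reference, the separate-debug-file step mentioned above usually looks roughly like this (binary names are placeholders; this is the standard objcopy recipe, not necessarily what the original build scripts do):

objcopy --only-keep-debug myapp myapp.debug      # keep the full debug info in a side file
objcopy --strip-debug myapp                      # ship the stripped binary
objcopy --add-gnu-debuglink=myapp.debug myapp    # record where gdb can find the debug file

With the debuglink in place, gdb myapp core picks up myapp.debug automatically; otherwise it can be loaded by hand with gdb's symbol-file command.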
Others sometimes have just one stack frame with "??" being the only symbol!
There can be many reasons for that, among others:
the stack frame was trashed (overwritten)
EBP/RBP (on x86/x64) is currently not holding any meaningful value — this can happen e.g. in units compiled with -fomit-frame-pointer or asm units that do so
Note that the second point can occur simply because, for example, glibc was compiled that way. Having the debug info for such system libraries installed can mitigate this (for example, the glibc-debug{info,source} packages on openSUSE).
gdb has more control over the program than glibc, so glibc's backtrace call would naturally be unable to print a backtrace if gdb cannot do so either.
But shipping the source would be much easier :-)
As an alternative, on a glibc system, you could use the backtrace function call (or backtrace_symbols or backtrace_symbols_fd) and filter out the results yourself, so only the symbols belonging to your own code are displayed. It's a bit more work, but then, you can really tailor it to your needs.
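A minimal sketch of that approach (the handler, signal choice, and frame count are mine; link with -rdynamic so the executable's own function names appear, and note that backtrace is only informally safe to call from a signal handler):

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void crash_handler(int sig)
{
    void *frames[64];
    int depth = backtrace(frames, 64);                   // capture raw return addresses
    backtrace_symbols_fd(frames, depth, STDERR_FILENO);  // write symbolised frames to stderr
    signal(sig, SIG_DFL);                                // restore default handling...
    raise(sig);                                          // ...and re-raise so a core dump is still produced
}

int main()
{
    signal(SIGSEGV, crash_handler);
    signal(SIGABRT, crash_handler);
    // ... application code ...
    return 0;
}

Filtering the output down to your own modules, as suggested above, can then be done either in the handler or as a post-processing step on the logged addresses.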
Have you tried installing debugging symbols of the various libraries that you are using? For example, my distribution (Ubuntu) provides libc6-dbg, libstdc++6-4.5-dbg, libgcc1-dbg etc.
If you're building with optimisation enabled (e.g. -O2), then the compiler can blur the boundaries between stack frames, for example by inlining. I'm not sure that this alone would produce backtraces with just one stack frame, but in general the rule is to expect debugging to be much harder, since the code you are looking at in the core dump has been modified and so does not necessarily correspond to your source.