Analyzing crash dumps from a cross-compiled Linux target

I am facing a problem with the analysis of a crash dump that was generated on a Linux machine I do not have access to. The situation looks like the following:
Development happens on Linux machines running distributions like Ubuntu 13.10 and 14.04.
The target is an embedded x86-based system that runs a stripped-down Debian 5.
The build for the target happens on one of the development machines, depending on who does the release. We use a chroot environment to do the cross-build, and we are pretty sure the chroot environments are the same (revision-controlled via git).
By the way, the software is written in C++.
Now, from time to time the software crashes in a situation we cannot reproduce, so our users send us a core file via email. The plan looks like the following:
Compile the same version of the software with debug symbols from within the chroot environment.
Look at the core file with GDB, also inside the chroot environment.
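A minimal sketch of that second step, assuming the unstripped build is called myapp-dbg and the user's core file is core.1234 (both names are placeholders); the set sysroot line is the part that matters when the core was produced on the Debian 5 target rather than on the build machine:

    # inside the chroot used for the release build
    gdb ./myapp-dbg core.1234

    # useful commands at the (gdb) prompt:
    #   set sysroot /srv/target-root   # point GDB at a copy of the target's root fs / libraries
    #   info sharedlibrary             # check that the expected .so files were matched
    #   bt full                        # stack trace of the crashing thread, with locals
    #   thread apply all bt            # stacks of all threads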
This normally works fine, except for one problem: it only works if debugging happens on the same machine the stripped and released binary was built on. On other machines the debugger seems to get confused; a stack trace may consist of completely unrelated calls that do not make any sense. This is something we have been wondering about for some time now without reaching a conclusion. Still, it was a situation we could deal with easily.
But then some mindless upgrading to a new distribution took place on my machine, rendering all the core files from the targets I did the builds for useless...
Now I am looking for a way to (a) understand what is happening and (b) cross-debug core files that were generated on machines running different Linux distributions, with no possibility of remote access. Oh, and (c) are we maybe doing something fundamentally wrong?

It only works if debugging happens on the same machine the stripped and released binary was built on.
(a) understand what is happening
It's pretty obvious that despite your belief that you have a hermetic build environment, you in fact don't. If you did have a completely hermetic build environment, building on a different machine wouldn't matter.
Therefore, your first step should be to find and eliminate everything non-hermetic, until you can build bit-identical releases on every one of the machines you are building your releases on.
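One hedged way to verify that once you start hunting: build the same tagged revision inside the chroot on two different development machines and compare checksums of everything you ship (the tag, build commands and paths below are placeholders):

    # run inside the chroot on each build machine
    git checkout v1.2.3
    make clean all
    sha256sum build/myapp build/*.so > release-$(hostname).sha256

    # any difference between the two manifests points at a non-hermetic input
    diff release-machineA.sha256 release-machineB.sha256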
Once you have achieved that, it should also solve your (b) problem.
(c) are we maybe doing something fundamentally wrong?
What you appear to be doing fundamentally wrong is trusting that your chroot environment is version-controlled when it clearly is not.

Related

Deploying a C++ game on Linux

I am an indie game developer working on the Windows platform, but I have little to no experience with Linux and deploying apps for it. I am polishing my game, written in C++11 and based on SDL 2.0 with several other cross-platform dependencies (like AngelScript or PugiXML), on Windows, and I want to distribute it on Linux too, so I have a few questions about that. The game is commercial and closed source, currently on Steam Greenlight, but I want to distribute a free alpha version downloadable from my website regardless of Greenlight status.
1.) Are the main Linux distributions ABI (application binary interface) compatible? Or do I need to compile my game on every supported distribution/platform?
2.) If so, which distributions/platforms are reasonable choices to support?
3.) What is the best way to install an app and its dependencies on Linux? I've read about the deb and rpm systems, but it's still confusing - is there any way to automatically generate setup packages for various distributions?
4.) How does Steam work with Linux? How should I prepare my app for distribution via it?
Excuse me if I'm asking the wrong questions; the whole world of Linux is pretty new to me and I got lost reading various articles and manual pages...
This depends on what the distribution is derived from. Generally, there is no need to recompile a program built on something like Ubuntu to run it under Fedora, as long as the code remains unchanged: Ubuntu and Fedora ship largely the same libraries (perhaps in different locations), and anything OpenGL-related is a driver issue, so recompiling is hardly a requirement. I would be extremely surprised if you had to recompile your software, since all distros use pretty much the same set of libraries, ship bash and run the Linux kernel; they differ mainly in their package managers. That last part is where it gets complex:
The aforementioned distributions have different package managers, which requires you to repackage your software accordingly. You could release pre-compiled binaries in a tar.gz file and simply have distro maintainers package the software for you, although if you want to control how your software is distributed you had better do that yourself. Because of the many package managers out there, people still resort to recompiling from source through a makefile, which can be generated with CMake. This mostly matters when certain dependencies are, for whatever reason, renamed, in which case a simple name change means the program suddenly cannot find the dependency. Since there is also no common naming convention, life gets even harder. The most important virtue here is trust: trust that developers follow naming conventions so everyone can reference the same package by the same name.
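For illustration, a minimal sketch of hand-rolling a .deb with dpkg-deb (package name, paths and control fields are invented; a real package would also declare its dependencies and probably ship maintainer scripts):

    # stage the files as they should appear on the target system
    mkdir -p mygame_1.0-1/usr/bin mygame_1.0-1/DEBIAN
    cp build/mygame mygame_1.0-1/usr/bin/

    # minimal control file describing the package
    cat > mygame_1.0-1/DEBIAN/control <<'EOF'
    Package: mygame
    Version: 1.0-1
    Architecture: amd64
    Maintainer: You <you@example.com>
    Description: Closed-source game built on SDL 2.0
    EOF

    dpkg-deb --build mygame_1.0-1    # produces mygame_1.0-1.deb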
The best distros to support are the most popular ones: Ubuntu and openSUSE would be great starting points. Linux Mint, Fedora, Red Hat and Debian use package managers similar to those already mentioned.
Meanwhile, you should know that you can't statically link GPL'd code into your software without also making your software GPL. A way to work around this is to resolve dependencies by either including the relevant shared libraries in the same folder as the executable (much like .dlls on Windows), or relying on the system your program runs on to provide the same libraries it was compiled and linked against. The latter is riskier, since it assumes the user will have those libraries, unmodified. The former increases the size of your package, but it ensures consistency across all systems.
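A common way to make the bundled-libraries option work in practice is a small launcher script shipped next to the binary (all names below are hypothetical) that points the dynamic linker at the bundled copies first:

    #!/bin/sh
    # run.sh - shipped next to the game binary and a ./lib directory holding
    # the bundled .so files (SDL2, AngelScript, and so on)
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/mygame" "$@"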
As for installation, you would need a shell script that moves the contents of your directory to the right locations. Generally, the main app goes into /usr/bin and any app-related data goes into the home folder. This again depends on the distro; look for a .local directory, or create a hidden directory dedicated to your app (hidden folders are prefixed with a period). Why put this stuff in the home folder, and what exactly? The why: the user has read and write permissions in their home folder by default. The what: anything that needs to be available to the app without the user having to authorise it. Dependencies should thus live in the home folder, preferably under your own directory. Conventions differ, and some may disagree with me on this one.
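As a rough sketch of such an installer (every path and name here is invented, and a real script should do more error checking):

    #!/bin/sh
    # install.sh - copy the binary system-wide and the game data into the user's home
    set -e
    sudo install -m 755 mygame /usr/bin/mygame   # the executable
    mkdir -p "$HOME/.mygame"                     # hidden per-user data directory
    cp -r data "$HOME/.mygame/"                  # assets, saves, bundled dependencies
    echo "Installed. Run 'mygame' to play."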
You might also want to use Steam's API, which does most of this work for you. Under Steam, your app lives in Steam's own directory and functions as a Steam app with all the functionality that brings.
http://www.steampowered.com/steamworks/
That page explains how to get your app on Steam. I have to say I was really impressed, and they even include code samples. The best part is that this API is available on Linux as well. I don't know much beyond the fact that Steam handles the execution of your app through its own layer, in which case there is no need to distribute your app independently via the previous steps.
Note that you can also distribute your software through the Ubuntu Software Centre if you are interested.
http://developer.ubuntu.com/apps/
Ubuntu, though, focuses more on getting apps running regardless of platform.
Know that Linux has no single convention; its conventions derive from pragmatism, not theory. At the end of the day, how your software runs on Linux is up to you.
I'm not a game developer and I come from the open source community, so I can't really advise on delivering binaries. I'll try to answer some of your questions, though:
Valve has a Steam runtime you can target on Linux (https://github.com/ValveSoftware/steam-runtime) - this would be the best way to port your game. I saw it mentioned in one of their Linux dev videos on YouTube; my understanding is that it bundles a number of libraries, including SDL, and is set up to emulate a specific version of Ubuntu. So if you write your game against the Steam runtime, it will run on any Linux distro that Steam has been ported to.
As for natively packaging your game, one thing to consider is that if you package it as a deb or RPM and make it depend on distro-provided libraries, your app may break when those libraries are updated (some distros update libs quite often; others are more stable). Dynamically linking against system libraries works well for open source, since people can patch the code when libraries change, but it is not ideal for closed-source software.
You can statically link your binary at build time, which means a larger binary, but then you don't have to worry about the app breaking when libs are updated.
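A hedged example of what that can look like at link time (standard GCC/ld flags; fully static glibc is often more trouble than it is worth, so many projects only link the C++ runtime statically):

    # everything static (pulls in libc too; beware of NSS and dlopen issues)
    g++ -o mygame main.o engine.o -static -lSDL2 -lpthread

    # middle ground: keep glibc dynamic but stop depending on the system C++ runtime
    g++ -o mygame main.o engine.o -static-libstdc++ -static-libgcc -lSDL2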
Some programs like Chrome bundle their own libs (which are essentially forks of the system libs); again, this makes the download much larger and also has the potential to cause security problems, so people tend to frown on it (see http://lwn.net/Articles/378865/).
1.) No, the ABIs of the main Linux distributions are not fully compatible. In a narrow sense they are mostly compatible, but in a broader sense there are many differences in how parts of the system work, how they are configured, and so on. OTOH, making packages is not a big problem per se, just some work. Also, what works for a "major" distribution (like Debian) pretty much always works on all of its derivatives (like Ubuntu, Mint...).
2.) A good starting list to support: Debian (.deb), plus Red Hat and Fedora (both .rpm). Their package formats and tools are mature and well known, and this will "cover" a lot of derivatives.
3.) There are some cross-distribution package builders, but they are mostly not up to the task. Writing a definition script for most package formats is not hard once you get the hang of it. Also, some commercial tools have installation-script generators for Linux which take care of things for you (they don't generate a .deb or .rpm, rather a complex shell script, similar to a Windows EXE installer).
4.) Sorry, I don't know much about Steam. O:)
So, from my experience, the best way is to accept the ugly truth that you have to select a few major distributions and their versions ('cause things can change a lot between versions), make packages for them, and test often on all of them. And be happy that you're not developing some long-lived kernel module/driver, because if you add tracking kernel API changes to the whole picture... :)

How do I cross-compile a Haskell program on a Linux machine with a Windows (PE) target?

I'd like to develop Haskell code that will run on Windows and interact with Windows OS APIs, but I would like to do it on a Linux machine. How do I accomplish this? I can compile on a Windows machine and that works, but not on a Linux machine. Haskell can use an LLVM backend, can't it? Can I use LLVM to accomplish this? Or work with MinGW somehow?
I tried many possibilities, including GHC on Wine (didn't work for me, despite many notices advertising that it works "out of the box").
For cross-compilation, one problem lies in making GHC find your C libraries and DLLs (for Windows). Template Haskell will also give you headaches (because it needs to load Linux libraries but then compile for Windows).
I never managed to get around those problems properly.
In the end, I opted for installing GHC on a Windows VM, and now I use a script to push my work to a repo, connect to the Windows machine via SSH, pull, clean, recompile and test, all executed from the Linux CLI and giving me feedback about what's happening on Windows.
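The script is roughly of the following shape (host name, paths and build commands are placeholders, and the remote shell you land in depends on which SSH server the Windows VM runs - Cygwin's sshd in my case):

    #!/bin/sh
    # build-on-windows.sh - run from the Linux CLI
    set -e
    git push origin master

    ssh build@windows-vm '
      cd /cygdrive/c/work/myproject &&
      git pull &&
      cabal clean &&
      cabal configure &&
      cabal build &&
      cabal test
    '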
EDIT: I'm not providing this answer to discourage anyone from trying something smarter. I too am interested in real cross-compilation, and if someone has a good solution I'm all ears. My alternative method always works, but it really is a pain having to start a VM just for this. Furthermore, it implies using one VM per OS per architecture, which is quite heavy.

How to verify cross platform installation steps

I have to check the installation steps of my application on different production machines. I want to check how I can install my application on HP-UX, but I only have Linux/Windows machines and no real physical HP-UX machine. Is there any way I can check the installation steps for HP-UX? I am thinking of a virtual environment or some flavour that runs on Linux or Windows and gives the accessibility and functionality of HP-UX.
I am looking for something to cross-check the platform installation steps.
The short answer is no. HP-UX is as different from Linux as Linux is from Windows (almost). There would be many differences in libraries, patches, installed utilities, build tools, etc.
A few examples:
HP-UX does not come pre-installed with the bash shell
HP-UX uses a proprietary software packager and installer called swinstall (analogous to RPM but completely different)
Partition layout is different
Many common utilities behave differently; "echo" is one of many examples. This will affect things if your build process uses shell utilities.
Even if you can test the install, don't you need to test the product's operation on HP-UX?
Not saying it's impossible. If your application uses basic, nonspecific utilities for install, it might work. There is no way to know without a running installation. Unfortunately you need Itanium hardware and the O/S.
My recommendation would be to get your application working on Solaris and any other Unixes first. The more platforms you test on, the more portable your code will become on all of them. Then, put out some feelers and find someone with a system you can borrow time on.
Worst case, find an Itanium server like an rx2620 on eBay; it should not cost too much. Even better if the seller forgets to wipe the O/S :). You'll need a terminal and possibly a null modem cable. 11.31 (11i v3) is the latest version of the O/S.

Best way to build cross toolchains on Mac OS X

I spent the last three weeks researching cross-development under Mac OS X. I want to achieve two separate results, but I believe they can be reached through the same path.
I want to
set up distcc to help my old Gentoo laptop using the iMac I recently got at home (OS X 10.6, 64-bit native), which I also use for iOS development, so the Xcode 4 tools are already there;
develop my pet project, which is an ELF kernel for x86, x86_64 and ARM (and I'll stop here as it's off-topic).
So, after a lot of that thinking thing we all do in these cases, I came up with the idea that to reach the first goal I need to set up an i686-pc-linux-gnu toolchain (or is it i686-unknown-linux-gnu?) with all the appropriate versions (e.g. gcc-4.4) and make it callable by distcc. It seems like a reasonable task, but unfortunately there seem to be clearer tools and instructions for building toolchains for obscure archs like SPARC or MIPS, and not a single reasonably up-to-date resource on the best way to go about it for x86. Therefore, first question: is there anybody who has successfully built such a toolchain and feels like sharing the pain? :)
Second goal. My current workbench is Gentoo on an i686 laptop (yes, the same as in the first goal) with all the regular development stuff, and I use QEMU to test my kernel (its GDB integration is awesome). What I'd really like to do is keep using the laptop while travelling (I do a lot of commuting) and continue to work and test on the iMac when I'm home (git is awesome in this respect). Hence, second question: has anybody done something like this and wants to share?
I'd really appreciate any input. Seriously.
EDIT: I know about MacPorts, crosstool, and crosstool-ng. I tried installing i386-elf-binutils 2.18 from MacPorts, just to discover I have 2.20 on my laptop. Also, I couldn't get gcc44 to produce i686-pc-linux-gnu ELF objects, and using i386-elf-gcc is not an option as I need 4.4 and the packaged one is 4.3.
This is no easy task, especially because you want to cross-compile for so many different platforms.
The most common approach is to run a virtual machine with the desired OS (e.g. VirtualBox, Parallels, VMware Fusion) and install your workbench tools there. This approach is widely used because it is not complex to set up, and it also makes it easier to write, test and debug code for/from the target system.
Of course, if you search enough you'll find all sorts of hacks/tricks to setup a toolchain on Mac OS X and compile code for other architectures:
One of these uses Buildroot, but that means there is no official support for Mac OS X.
Another one, also interesting, offers a .dmg package with the tools needed to compile for Linux on Mac OS X.
You already mentioned Gentoo, so I think you should take a look at Gentoo Prefix. Gentoo Prefix lets you install a small Gentoo system in a user-defined directory (= the prefix). From there, you can start a shell that lets you use Portage (= Gentoo's package system), which should enable you to install the necessary tools.
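From what I remember of that installation, the rough shape was the following (the script name and steps are from memory, so treat them as an assumption and check the Gentoo Prefix documentation for the current procedure):

    # fetch bootstrap-prefix.sh from the Gentoo Prefix project, pick a prefix
    # directory, and let it bootstrap a minimal Gentoo into it (takes hours)
    export EPREFIX="$HOME/gentoo"
    ./bootstrap-prefix.sh

    # afterwards, enter the prefixed environment and install tools via Portage
    "$EPREFIX/startprefix"
    emerge --ask sys-devel/gcc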
I do not know what shape Prefix on OS X is in today, but I was able to install it on a friend's MacBook a year or so ago. If you are interested, I can give further details about the installation process, which can be a bit tricky.

What could prevent a binary compiled on one platform from running on a different Linux distribution?

We have 2 different compilation machines: Red Hat AS4 and AS5. Our architects require us developers to compile our program on both platforms each time before copying the binaries onto their respective production machines.
What could prevent us from compiling our application on one machine only (say the Red Hat AS4, for instance) and deploying that binary on all target platforms instead?
What is your point of view, and could you pinpoint specific issues you've encountered with doing so? What issues might I face?
You could run into shared-library incompatibilities by building on one system and running on another. It's not especially likely between successive versions of Red Hat, but it is possible.
Another problem you could run into is if one system is 32-bit and the other 64-bit. In that case, the app compiled on the 64-bit machine wouldn't run on the 32-bit machine.
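Both failure modes are easy to check up front (the binary name here is a placeholder):

    file ./myapp    # reports 32-bit vs 64-bit ELF, among other things
    ldd ./myapp     # lists the shared libraries the binary needs; run it on the
                    # target machine to spot any "not found" entries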
Prevent? Nothing. An application that runs on EL4 should run on EL5 as well, barring things like external applications being different versions or libraries being obsoleted. However, Red Hat makes all sorts of tweaks to gcc involving security and code optimization, and you'll miss out on any improvements in EL5 if you just copy the EL4-compiled binary.
Besides, everyone needs a break.
Whoa, compiling stuff and then having developers copy the binaries into production? That's a pretty whacky process you've got there.
Yes, of course you can run the same binaries on RHEL 4 and 5, provided they are the same architecture and you have the dependencies installed.
I would strongly recommend that you build your binaries into an RPM; then you can have dependencies declared for the package, which will ensure it can only be installed when those dependencies are satisfied.
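A hedged outline of that workflow, assuming you have written a spec file for your application (file names below are illustrative; rpmdev-setuptree comes from the rpmdevtools package):

    rpmdev-setuptree                             # creates ~/rpmbuild/{SPECS,SOURCES,...}
    cp myapp.spec ~/rpmbuild/SPECS/
    cp myapp-1.0.tar.gz ~/rpmbuild/SOURCES/

    rpmbuild -ba ~/rpmbuild/SPECS/myapp.spec     # build binary and source RPMs

    # inspect the generated dependencies before handing the package to QA
    rpm -qpR ~/rpmbuild/RPMS/x86_64/myapp-1.0-1.el5.x86_64.rpm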
Moreover, it will enable your QA team** to install the same binary and carry out testing on a non-production system and the like, which they will definitely want to do before they let the software anywhere near the Ops team***, who will then deploy it knowing that the relevant processes, including testing, have been carried out.
** You have one of those, right?
*** surely you have one of those!
