I have recently been experimenting with building a cluster system for computing tasks. My server runs SUSE Linux Enterprise 11, and one of the clients runs Red Hat Enterprise Linux 5. I compiled Open MPI on the SUSE machine, and it runs fine there on its own. Then I shared /usr/local (where Open MPI was installed) to my Red Hat client and tried to run mpirun, but the following error popped up: error while loading shared libraries: /usr/local/openmpi/lib/libopen-rte.so.0: ELF file OS ABI invalid. Does this mean I have to compile Open MPI on Red Hat separately? Thanks!
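For reference, the OS/ABI marking that the loader is complaining about can be inspected directly (a sketch; the library path is taken from the error message, and the check should be run on both machines for comparison):
$ file /usr/local/openmpi/lib/libopen-rte.so.0
$ readelf -h /usr/local/openmpi/lib/libopen-rte.so.0 | grep 'OS/ABI'
# If the SUSE-built library reports "UNIX - GNU" (file shows "GNU/Linux") while
# binaries native to the RHEL 5 client report "UNIX - System V" (file shows
# "SYSV"), the older RHEL 5 loader is rejecting the newer OS ABI marking.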
Related
I have an application that uses unit tests written with gtest. The application works fine on Windows; a few days ago I tried running it on Linux. The Linux machine has the following configuration:
cat /etc/*-release
Cluster Manager v7.2
slave
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Red Hat Enterprise Linux Server release 6.7 (Santiago)
Red Hat Enterprise Linux Server release 6.7 (Santiago)
I then built my application. We use CMake for the build, which also builds gtest from its own CMake project:
add_subdirectory(/path/gtest/ ${PROJECT_BINARY_DIR})
add_executable(myApplication ${SOURCES})
add_dependencies(myApplication tinyxml2 gtest)
target_link_libraries(myApplication tinyxml2 gtest ${CMAKE_DL_LIBS})
The build completes without any errors. But when I try to execute the application, it fails with: Segmentation fault
While analyzing the problem I found that myApplication and gtest have different OS/ABI values:
myApplication: UNIX - System V
gtest: UNIX - Linux
tinyXML2: UNIX - System V
all other binaries and libraries: UNIX - System V
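These OS/ABI values come from the ELF header; one way to reproduce the comparison (a sketch, with placeholder paths for the actual build outputs):
for f in ./myApplication ./libgtest.a ./libtinyxml2.a; do
    echo "== $f"
    readelf -h "$f" | grep 'OS/ABI'   # an archive prints one header per member
done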
Do you know how I can fix this problem?
GCC version: 4.8.2
After migrating a Debian 7.6 system from i386 to amd64, I ran into problems using some older ELF 32-bit LSB executables. Of course, I could set up a secondary 32-bit system (in fact I could reuse my old one) and work through chroot(8) or schroot(1). But I would rather avoid the additional administrative effort, the performance loss of a wrapper program, and a different command sequence.
I am wondering whether there is really no way to run a 32-bit application directly on the x86_64 architecture (as is possible, e.g., on HP-UX 11.0). Both
$ /home/alf/prog32
and
$ /usr/bin/linux32 /home/alf/prog32
just lead to the failure message /home/alf/prog32: file or directory not found. This behavior occurs for all ELF 32-bit LSB executables (Debian i386 packages, downloaded binaries, and self-written, compiled C programs).
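For what it's worth, that failure message usually comes from the kernel not finding the 32-bit ELF interpreter rather than the program itself; a sketch of how to check, and of the usual Debian multiarch route (assumed acceptable here), using the prog32 path from above:
$ readelf -l /home/alf/prog32 | grep interpreter   # usually /lib/ld-linux.so.2
$ ls -l /lib/ld-linux.so.2                         # absent on a pure amd64 install
# Enabling multiarch and installing the 32-bit C library provides that loader:
# dpkg --add-architecture i386
# apt-get update && apt-get install libc6:i386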
I would like to load a very simple hello-world program onto an embedded ARM processor. For this I want to install a toolchain so I can cross-compile my code. I am currently working on a 64-bit Linux OS. Does anyone know of a GCC ARM embedded toolchain I can download? I've downloaded a pre-built version of Linaro GCC, but it only runs on a 32-bit Linux machine, and I can't install the ia32-libs package because my Linux machine has no internet connection.
The gcc-arm toolchain I'm using for ARM Cortex-M processors can be found here:
https://launchpad.net/gcc-arm-embedded
It also builds for Cortex-A targets, which should cover the majority of embedded ARM systems.
You can download standalone distributions for many operating systems, including Linux.
There are also 64-bit builds of the Linaro toolchain here. Just download the x86_64 version rather than the i686 one.
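As a concrete example, a typical install of one of those Linux tarballs looks roughly like this (a sketch; the archive name and version are placeholders that vary by release):
$ tar xjf gcc-arm-none-eabi-<version>-linux.tar.bz2 -C /opt
$ export PATH=/opt/gcc-arm-none-eabi-<version>/bin:$PATH
$ arm-none-eabi-gcc --version
# Cross-compile a bare-metal object for a Cortex-M part; linking into a final
# image additionally needs a startup file and linker script for the board:
$ arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -O2 -c hello.c -o hello.o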
I have created an NPAPI plugin, which works fine on the Linux box where I built the .so file. But when I deployed the plugin on our production device, which runs a Linux environment with limited resources (due to performance constraints), the following error was thrown:
'/usr/lib/mozilla/plugins/npPluginTest.so' is not an ELF executable for sh
FYI: the .so file was created on a 32-bit Linux box.
How can I resolve this issue?
I had built the plugin with the wrong toolchain/compiler: the ELF header showed Machine: Intel 80386 when it should have been Machine: ST40. I rebuilt my shared library with the ST GCC cross-compiler, and now my NPAPI plugin is working fine.
Thanks for your suggestions.
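For reference, the target machine of a plugin can be checked before deploying it (a sketch; the cross-compiler name below is a placeholder, since the exact prefix depends on the installed ST toolchain):
$ readelf -h npPluginTest.so | grep -E 'Class|Machine'
# A desktop build shows "Intel 80386"; the production device needs an SH/ST40
# build produced by the ST cross-compiler, e.g. (placeholder prefix):
$ sh4-linux-gcc -shared -fPIC -o npPluginTest.so plugin.c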
I am using a piece of software for graph mining.
I have the binaries of that software in two folders, one for Linux and one for SunOS, but I don't have the source.
I am able to run the binary on a Linux machine.
But when I try to run the binary on a Mac, I get "command not found" for the binaries in both the Linux and SunOS folders.
Could someone suggest whether there is any way to run this on a Mac, for example by using a Linux shell or something similar?
Gaurav
EDIT: I get a "cannot execute binary" error when I chmod the binary to u+x.
You'll need to recompile it for OS X or use a VM.
"Command not found" just means you're not invoking it correctly: make sure the file is chmod u+x and that it is either on your PATH or that you specify the path explicitly.
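For example (the binary name here is a placeholder):
$ chmod u+x ./graphminer
$ ./graphminer      # explicit path, so PATH lookup is not involved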
If you use the file command you will see the difference; for the Linux executable you'll get something like:
ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.15, not stripped
and something like this for OS X executables:
command: Mach-O universal binary with 2 architectures
command (for architecture x86_64): Mach-O 64-bit executable x86_64
command (for architecture i386): Mach-O executable i386
Operating systems generally don't support executing object code in formats other than their own. If Mac OS X had descended from Solaris or Linux, there might be some incentive for legacy support, but in general you should assume that anything compiled for a different architecture and platform is binary-incompatible. There are a few places where you inherit backwards compatibility, such as running 32-bit code on 64-bit OSes, or PPC support on Intel Macs, but I suspect that both of those, especially the latter, were non-trivial engineering tasks.
Here are your options:
Get the source and compile it on the Mac; if it compiles on Linux and Solaris, there is a good chance it will compile and run fine on the Mac.
Run it through an emulator or Boot Camp.