I am trying to run cargo on a RockPro64 (aarch64). I installed the toolchain using curl https://sh.rustup.rs -sSf | sh with the output:
info: default toolchain set to 'stable-aarch64-unknown-linux-gnu'
stable-aarch64-unknown-linux-gnu installed - (error reading rustc version)
Rust is installed now. Great!
The toolchain seems to be correct, as this Reddit post suggests. However, when running cargo (or any other binary in $HOME/.cargo/bin/), I get this error:
error: command failed: 'cargo': No such file or directory (os error 2)
I tried to investigate further by following "Rust musl Docker image cannot find Cargo", without success. Here is the output of ldd $HOME/.cargo/bin/cargo:
libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xf6cc7000)
librt.so.1 => /lib/arm-linux-gnueabihf/librt.so.1 (0xf6cb1000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xf6c8d000)
libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0xf6c15000)
libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0xf6c02000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xf6b14000)
/lib/ld-linux-armhf.so.3 (0xf7508000)
And the output of strace -f -e trace=execve cargo:
execve("/home/alexandre/.cargo/bin/cargo", ["cargo"], [/* 15 vars */]) = 0
--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPC, si_addr=0xab460186} ---
syscall_397(0, 0, 0, 0xfff, 0, 0x5) = -1 (errno 38)
execve("/home/alexandre/.rustup/toolchains/stable-aarch64-unknown-linux-gnu/bin/cargo", ["/home/alexandre/.rustup/toolchai"...], [/* 20 vars */]) = -1 ENOENT (No such file or directory)
error: command failed: 'cargo': No such file or directory (os error 2)
+++ exited with 1 +++
Is this the correct toolchain or am I missing something?
Looks like you are doing a NATIVE aarch64 build, that is, you are building on an aarch64 machine. I'm not sure whether Rust supports this at the moment.
Please try to set up a build environment on an x86_64 host and CROSS build for aarch64.
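If you do end up cross building, a minimal sketch from an x86_64 host might look like the following (the gcc-aarch64-linux-gnu package name and the ~/.cargo/config.toml location are assumptions for a Debian-style setup; adjust for your distribution):
sudo apt install gcc-aarch64-linux-gnu        # cross toolchain used as the linker
rustup target add aarch64-unknown-linux-gnu   # Rust standard library for the target
cat >> ~/.cargo/config.toml <<'EOF'
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
cargo build --release --target aarch64-unknown-linux-gnu
It may also be worth running file $HOME/.cargo/bin/cargo on the RockPro64 itself: the ldd output above lists arm-linux-gnueabihf (32-bit armhf) libraries, so checking whether the installed userland is really 64-bit would confirm whether aarch64 binaries can run there at all.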
Related
I have a container built from base image alpine:3.11
Now I have a binary my_bin that I copied into the running container. From within the running container I moved to /usr/local/bin and confirmed that the binary is there with the right permissions, e.g.:
/ # ls -l /usr/local/bin/my_bin
-rwxr-xr-x 1 root root 55662376 Jun 12 18:52 /usr/local/bin/my_bin
But when I attempt to execute/run this binary I get the following:
/ # my_bin init
/bin/sh: my_bin: not found
This is also the case if I switch into /usr/local/bin/ and run it via ./my_bin, and also if I try using the full path:
/# /usr/local/bin/my_bin init
/bin/sh: /usr/local/bin/my_bin: not found
Why am I seeing this behavior, and how can I get the binary to execute?
EDIT 1
I installed file and can also confirm that the binary was copied and is an executable:
file /usr/local/bin/my_bin
/usr/local/bin/my_bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=b36f0aad307c3229850d8db8c52e00033eae900c, for GNU/Linux 3.2.0, not stripped
Maybe this gives some extra clues?
Edit 2
As suggested by @BMitch in the answer, I also ran ldd; here is the output:
# ldd /usr/local/bin/my_bin
/lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
Edit 3
Based on the output of ldd and more googling, I found that running apk add libc6-compat installed the missing libraries, and I could then run the binary.
For a binary, this most likely indicates a missing dynamic library. You can run ldd /usr/local/bin/my_bin to see all the libraries that binary uses. With Alpine, the most common library missing from an externally compiled program is the C library: Alpine is built with musl instead of glibc, and therefore you'll want to compile programs specifically for Alpine.
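As a concrete sketch (assuming readelf is available; on Alpine it comes from the binutils package), you can also check which dynamic loader the binary asks for:
/ # apk add binutils
/ # readelf -l /usr/local/bin/my_bin | grep interpreter
If the requested interpreter is /lib64/ld-linux-x86-64.so.2 (the glibc loader), the libc6-compat shim may be enough for a simple binary, as the question's Edit 3 found:
/ # apk add libc6-compat
Programs that rely on more of glibc will still fail with missing symbols and need to be rebuilt against musl.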
For others that may encounter this error in Docker containers, I cover various issues in my FAQ presentation and other questions on the site.
/ # my_bin init
/bin/sh: my_bin: not found
When you execute the above line, it says the file you are trying to execute cannot be found; my_bin is that file in your case.
Check whether the file was copied properly and with the same name, or whether you are trying to execute it from a different location.
E.g., try /usr/local/bin/my_bin init if you are not doing cd /usr/local/bin after the ls -l /usr/local/bin/my_bin command.
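For example (a minimal check; the path and name are taken from the question):
/ # command -v my_bin            # shows whether my_bin is found on PATH at all
/ # /usr/local/bin/my_bin init   # run with the full path to rule out PATH problems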
Background:
We have Node.js version 8 installed and working fine in a Jenkins Alpine-based Docker image (running in AWS ECS). Node 8 was installed directly in the jenkins-alpine Docker image.
Then another requirement came up: install the NodeJS Jenkins plugin, so that a custom version can be installed and applied as needed through the global tool configuration. We installed Node.js 10 as shown in the image below:
[image: Nodejs Plugin failed to run in jenkins]
I then tried using the jenkins nodejs 10 plugin in jenkins pipeline as follows:
#!groovy
pipeline {
    options {
        buildDiscarder(logRotator(daysToKeepStr: '5'))
        timeout(time: 5, unit: 'MINUTES')
        ansiColor('xterm')
    }
    agent {
        label 'jenkins-slave'
    }
    stages {
        stage('Nodejs test') {
            steps {
                nodejs('NodeJS 10.19.0') {
                    sh "which node; which npm"
                    sh "ls -l /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node"
                    sh "node -v"
                }
            }
        }
    }
}
The jenkins job failed as it could not find node even though it did exist and was executable:
11:00:31 + which node
11:00:31 /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node
11:00:31 + which npm
11:00:31 /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/npm
[Pipeline] sh
11:00:31 + ls -l /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node
11:00:31 -rwxrwxr-x 1 jenkins jenkins 41122344 Feb 5 23:36 /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node
11:00:32 + /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node -v
11:00:32 /var/jenkins_home/workspace/test-jerald-nodejs-plugin#tmp/durable-55482f4f/script.sh: line 1: /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: not found
Tests inside jenkins slave docker container
I tried executing the node version command directly in the Jenkins slave Docker container; however, the output was the same.
Then I googled and found the following thread, which mentioned that this is because of missing libraries needed by Node.js:
Jenkins NodeJSPlugin node command not found
The following was the initial output of ldd on the Node.js binary installed by the Jenkins plugin:
bash-4.4$ ldd /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node
/lib64/ld-linux-x86-64.so.2 (0x7fcbe2e7e000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7fcbe2e7e000)
librt.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7fcbe2e7e000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7fcbe2d29000)
libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fcbe2e7e000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7fcbe2d15000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7fcbe2e7e000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fcbe2e7e000)
Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node)
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: gnu_get_libc_version: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: __register_atfork: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: setcontext: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: makecontext: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: backtrace: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: getcontext: symbol not found
I then installed the missing libraries with the following command:
apk add libc6-compat gcompat
After installing those packages, the error about the missing shared library was gone; however, there were still "symbol not found" errors, and node was still not executable.
bash-4.4# ldd /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node
/lib64/ld-linux-x86-64.so.2 (0x7f0e698f6000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f0e698f6000)
librt.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f0e698f6000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f0e697a1000)
libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f0e698f6000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f0e6978d000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f0e698f6000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f0e698f6000)
ld-linux-x86-64.so.2 => /lib/ld-linux-x86-64.so.2 (0x7f0e69787000)
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: gnu_get_libc_version: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: __register_atfork: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: setcontext: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: makecontext: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: backtrace: symbol not found
Error relocating /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node: getcontext: symbol not found
bash-4.4#
I also checked the shared libraries of existing node v8, and it had no issues:
bash-4.4# which node
/usr/local/bin/node
bash-4.4# ldd /usr/local/bin/node
/lib/ld-musl-x86_64.so.1 (0x7f1e07118000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f1e0539f000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f1e0538b000)
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f1e07118000)
bash-4.4# /usr/local/bin/node -v
v8.11.3
So could anyone assist me in getting the Jenkins NodeJS plugin to work?
Using the Jenkins GUI you have installed the "mainstream" Linux NodeJS plugin. It is clear from the above outputs that this plugin is not Alpine Linux compatible.
Alpine Linux builds upon musl-libc - the musl standard C library, whereas the majority of Linux distributions are built around glibc - GNU's standard C library. The libc library provides basic facilities to any native Linux program, including the standard C and POSIX APIs, and is an intrinsic part of the operating system. Therefore, binaries built on different operating systems with different libc implementations, such as Alpine's musl and Debian's glibc, usually don't mix, since the implementations are not fully compatible.
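A quick way to see which libc a particular binary was built against is to look at the interpreter it requests (this assumes readelf is available, e.g. from the binutils package):
bash-4.4$ readelf -l /var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_10.19.0/bin/node | grep interpreter
A glibc binary requests /lib64/ld-linux-x86-64.so.2, while a musl binary (like your working node v8) requests /lib/ld-musl-x86_64.so.1, which matches the two ldd outputs above.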
The packages you mention, libc6-compat and gcompat, add a glibc compatibility layer around Alpine's musl, which allows simple glibc programs to run. However, it does not provide all of the glibc APIs - hence the missing symbols.
For Node.js on Alpine, you would normally install the nodejs package from the Alpine repository, but that unfortunately does not provide the Jenkins plugin. You would need a musl-libc-compatible NodeJS Jenkins plugin - and I'm not sure whether one is available.
There are several options:
You can go "full glibc" on Alpine Linux by installing proper glibc on your Alpine container (example). However, this will require restructuring of your current image, and you loose the "purity" of the Alpine image.
If a musl-compatible plugin is not to be found, consider switching to a less compact, glibc-compatible base image, such as Debian.
Or, try to build the Jenkins plugin from source on Alpine Linux, then install it manually.
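As a point of comparison, Alpine's own musl-linked Node.js build runs without any compatibility shims, so if per-job version switching through the plugin is not a hard requirement, something along these lines (package names as in the Alpine repositories) sidesteps the libc problem entirely:
bash-4.4# apk add --no-cache nodejs npm
bash-4.4# node -v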
I have a Flask application that I'm trying to deploy to Heroku. I'm doing this from a VirtualBox VM running Ubuntu 18.04.
Upon running the command git push heroku master,
I see a warning:
WARNING: The Python installation you are using does not appear to have
been installed with a shared library, or in the case of MacOS X, as a
framework. Where these are not present, the compilation of mod_wsgi may
fail, or if it does succeed, will result in extra memory being used by all
processes at run time as a result of the static library needing to
be loaded in its entirety to every process. It is highly recommended
that you reinstall the Python installation being used from source code,
supplying the '--enable-shared' option to the 'configure' script when
configuring the source code prior to building and installing it.
and it fails with an error message:
/usr/bin/ld: final link failed: Bad value
remote: collect2: error: ld returned 1 exit status
remote: error: command 'gcc' failed with exit status 1
I understand that I need to build Python with the --enable-shared flag passed to the configure script, but I can't find the configure script anywhere in my Python installation, under bin or the python folder. I have been stuck here for 3 days and it's frustrating; I don't really know how to reinstall Python with such a flag. After going through many posts I tried the following steps:
1) Downloaded python from the url: https://www.python.org/downloads/source/
2) Ran the command: ./configure --prefix=/opt/python --enable-shared
followed by make
and make install
3) Copied the shared object files to my original python directory
The output of the command ldd ** path **/anaconda3/bin/python is:
libpython3.6m.so.1.0 => /usr/anaconda3/bin/../lib/libpython3.6m.so.1.0 (0x00007f902dd2e000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f902db0e000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f902d906000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f902d6fe000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f902d4f6000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f902d156000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f902cd5e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f902e26e000)
Yet when I try to deploy the app, I see the same warning about --enable-shared and the same gcc error message. I can't figure out how to properly build my Python installation with a shared library. Please note that I'm a total newbie to Ubuntu; I'd really appreciate any leads on solving this issue.
P.S: My mod_wsgi installation was successful and I was able to run the app locally through mod_wsgi.
If you don't need to use a custom build of Python 3, you can try installing the python3.6-dev package through apt; it added the shared libs for me.
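For example (the package name assumes Ubuntu 18.04's default Python 3.6, and the check below queries the system interpreter, not a conda environment):
sudo apt-get update
sudo apt-get install python3.6-dev
python3.6 -c "import sysconfig; print(sysconfig.get_config_var('Py_ENABLE_SHARED'))"   # prints 1 for an interpreter built with --enable-shared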
I know this question has been asked a lot, but the problem remains for me:
I have a 64-bit ELF executable that I am trying to run on my Kali VM, but it keeps telling me that the file doesn't exist.
Most of the time the cause of this problem is an architecture mismatch, but my Kali is x86-64:
$ uname -m
x86_64
as is the file (named '8') that I am trying to execute:
file 8
8: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.27, BuildID[sha1]=0xf3b096c69086131b091d1805894fde4fae0537a0, stripped
EDIT: Error:
$ chmod +x 8
$ ./8
bash: ./8: No such file or directory
EDIT 2: ldd:
linux-vdso.so.1 => (0x00007fffe37fe000)
libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f680fac8000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f680f73c000)
libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f680f343000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f680f13f000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f680ef28000)
/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f680fd49000)
I tried installing the 32-bit libraries to be sure, but that didn't solve anything either. I also tried it on my Ubuntu machine, with the same issue.
Does anybody have an idea how to run it? Here is a link to it if some of you want to try it on other architectures: https://www.dropbox.com/s/s3ucka4ufd00zmy/8?dl=0
bash: ./8: No such file or directory
This is caused by the file having an ELF interpreter which is not installed on your system.
You can find out which ELF interpreter your file is compiled with by running:
readelf -l ./8 | grep interpreter
I am guessing that you have /lib/ld-linux-x86-64.so.2 compiled in, whereas the standard 64-bit ELF interpreter is /lib64/ld-linux-x86-64.so.2.
The best fix is to correct the build script for your executable (it has something like -Wl,--dynamic-linker=/lib/ld-linux-x86-64.so.2 in it).
Alternatively, creating a symlink:
sudo ln -s /lib64/ld-linux-x86-64.so.2 /lib
will also fix the problem.
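If you would rather not touch /lib, another option (assuming the patchelf utility is installed, e.g. via apt install patchelf) is to rewrite the interpreter path stored in the binary itself:
$ patchelf --set-interpreter /lib64/ld-linux-x86-64.so.2 ./8
$ ./8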
Short question:
How does llvm-ld locate libstdc++?
Details:
I am getting the following error message:
llvm-ld: error: Cannot find library 'stdc++'
while running llvm-ld. I am trying to understand how llvm-ld searches for libstdc++.
I am setting up a new system, following compilation steps that work on a different system. Eventually I noticed a difference in the LD_LIBRARY_PATH set in my .bashrc on the old system, which included a large number of directories for Cadence and other miscellaneous software. I don't want to rely on LD_LIBRARY_PATH; I want to be able to link against libstdc++ by supplying the appropriate command-line parameters to llvm-ld.
The command I am running is:
llvm-ld -disable-internalize -native -o foo foo.bc4 -L/usr/lib/x86_64-linux-gnu -lpthread -lrt -lstdc++ -lm -v
which results in the following output:
Linking bitcode file 'foo.bc4'
Linked in file 'foo.bc4'
Linking archive file '/usr/lib/x86_64-linux-gnu/libpthread.a'
Linking archive file '/usr/lib/x86_64-linux-gnu/librt.a'
llvm-ld: error: Cannot find library 'stdc++'
However running ls -l /usr/lib/x86_64-linux-gnu/libstdc++* results in:
lrwxrwxrwx 1 root root 19 Apr 15 16:34 /usr/lib/x86_64-linux-gnu/libstdc++.so.6 -> libstdc++.so.6.0.16
-rw-r--r-- 1 root root 962656 Apr 15 16:36 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16
So I don't understand why llvm-ld is not finding this file, especially since, when I compile with LD_LIBRARY_PATH set and then run ldd on the resulting executable, I get the following output:
linux-vdso.so.1 => (0x00007ffff7ffe000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff7dc1000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ffff7ac0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ffff77c6000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ffff75b0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff71f0000)
/lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
This seems to indicate that the version of libstdc++ I want is /usr/lib/x86_64-linux-gnu/libstdc++.so.6, but I can't figure out why llvm-ld is not locating it with the search path -L/usr/lib/x86_64-linux-gnu.
For reference: uname -a results in: Linux FOO 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
And llvm-ld --version:
LLVM (http://llvm.org/):
LLVM version 3.1svn
Optimized build.
Built Sep 14 2012 (13:22:38).
Default target: x86_64-unknown-linux-gnu
Host CPU: core2
It looks like llvm-ld doesn't look for .so.N files. According to the man page:
When looking for a library specified with the -l option, llvm-ld first attempts to load a file with that name from the current directory. If that fails, it looks for liblibrary.bc, liblibrary.a, or liblibrary.<shared library extension>, in that order, in each directory added to the library search path with the -L option. These directories are searched in the order they are specified. If the library cannot be located, then llvm-ld looks in the directory specified by the LLVM_LIB_SEARCH_PATH environment variable. If it does not find a library there, it fails.
You can make this work by creating a symlink /usr/lib/x86_64-linux-gnu/libstdc++.so -> libstdc++.so.6.
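For example:
sudo ln -s libstdc++.so.6 /usr/lib/x86_64-linux-gnu/libstdc++.so
After that, the original llvm-ld command should be able to resolve -lstdc++ through the -L/usr/lib/x86_64-linux-gnu search path.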
I usually link with clang directly, since it handles searching for the C++ libraries better.