I have built and installed the shared library from https://github.com/Xilinx-CNS/onload.
Then I tried:
onload ping 8.8.8.8
Got this error:
ERROR: ld.so: object 'libonload.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=110 time=29.2 ms
But it works with sudo onload ping 8.8.8.8:
oo:ping[724989]: netif_tcp_helper_alloc_u: ENODEV. This error can occur if:
- no Solarflare network interfaces are active/UP, or they are running packed stream firmware or are disabled, and
- there are no AF_XDP interfaces registered with sfc_resource Please check your configuration.
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=110 time=50.4 ms
Can someone help me get this command working without sudo? For example, onload nc -l $PORT works without sudo, but ping does not.
Some debug information:
sudo find / -name libonload.so:
/usr/lib/x86_64-linux-gnu/libonload.so
cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf
cat /etc/ld.so.conf.d/*.conf
/usr/lib/x86_64-linux-gnu/libfakeroot
/usr/local/lib
/usr/local/lib/x86_64-linux-gnu
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu
/lib32
/usr/lib32
sudo ldconfig -v:
...
/usr/lib/x86_64-linux-gnu/libfakeroot:
libfakeroot-0.so -> libfakeroot-tcp.so
/usr/local/lib:
/lib/x86_64-linux-gnu:
...
libonload_ext.so.2 -> libonload_ext.so.2.0.0
libonload.so -> libonload.so
...
...
ls -l /usr/lib/x86_64-linux-gnu | grep onload
-rwxr-xr-x 1 root root 9528312 Mar 3 01:17 libonload.so
-rw-r--r-- 1 root root 106222 Mar 3 01:17 libonload_ext.a
lrwxrwxrwx 1 root root 18 Mar 3 01:17 libonload_ext.so -> libonload_ext.so.2
lrwxrwxrwx 1 root root 22 Mar 3 01:17 libonload_ext.so.2 -> libonload_ext.so.2.0.0
-rwxr-xr-x 1 root root 31344 Mar 3 01:17 libonload_ext.so.2.0.0
ls -l /lib/x86_64-linux-gnu | grep onload
-rwxr-xr-x 1 root root 9528312 Mar 3 01:17 libonload.so
-rw-r--r-- 1 root root 106222 Mar 3 01:17 libonload_ext.a
lrwxrwxrwx 1 root root 18 Mar 3 01:17 libonload_ext.so -> libonload_ext.so.2
lrwxrwxrwx 1 root root 22 Mar 3 01:17 libonload_ext.so.2 -> libonload_ext.so.2.0.0
-rwxr-xr-x 1 root root 31344 Mar 3 01:17 libonload_ext.so.2.0.0
/lib$ ls -l | grep x86_64-linux-gnu
drwxr-xr-x 35 root root 36864 Mar 3 01:17 x86_64-linux-gnu
/usr/lib$ ls -l | grep x86_64-linux-gnu
drwxr-xr-x 35 root root 36864 Mar 3 01:17 x86_64-linux-gnu
The most likely reason is that the ping application is setuid but the Onload library isn't installed with the setuid bit. Running ping effectively promotes the process to the root user to allow the creation of a raw socket, but the library would be loaded as the regular user, so the dynamic loader refuses to use it. Running as the root user avoids this, since the library is loaded as the root user to start with.
I don't think the GitHub version of Onload supports loading with setuid for the Onload library, but you can set this yourself using chmod +s <libpath>. It's worth pointing out that ping isn't accelerated by Onload anyway, since the library only accelerates UDP and TCP sockets and pipes, so you wouldn't see any benefit from this.
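As a concrete sketch of what that permission change involves: on the real system you would run `sudo chmod u+s /usr/lib/x86_64-linux-gnu/libonload.so` (the path from the find output in the question). The scratch file below just demonstrates the effect of the bit without touching a real library:

```shell
# Demonstrate the setuid bit on a throwaway file (GNU coreutils assumed)
f=$(mktemp)
chmod 755 "$f"
chmod u+s "$f"
mode=$(stat -c '%A' "$f")
echo "$mode"    # → -rwsr-xr-x  (the 's' in place of the owner 'x' is the setuid bit)
rm -f "$f"
```

Note that on newer distros ping often carries the cap_net_raw file capability instead of being setuid; `ls -l "$(command -v ping)"` and `getcap "$(command -v ping)"` will tell you which mechanism your system uses.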
Related
I'm curious about how dockerized processes interact with the terminal from which you run docker run.
From some research I've done, I found that when you run a container without -t or -i, the file descriptors of the process are:
// the PID of the containerized process is 16198
~$ sudo ls -l /proc/16198/fd
total 0
lrwx------ 1 root root 64 Jan 18 09:28 0 -> /dev/null
l-wx------ 1 root root 64 Jan 18 09:28 1 -> 'pipe:[242758]'
l-wx------ 1 root root 64 Jan 18 09:28 2 -> 'pipe:[242759]'
I see that the other ends of those pipes belong to the containerd-shim process that spawned the containerized process. We know that once the containerized process writes something to its stdout, it shows up on the terminal from which you ran docker run. Further, when you run a container with -t and look at the open FDs of the process:
~$ sudo ls -l /proc/17317/fd
total 0
lrwx------ 1 root root 64 Jan 18 09:45 0 -> /dev/pts/0
lrwx------ 1 root root 64 Jan 18 09:45 1 -> /dev/pts/0
lrwx------ 1 root root 64 Jan 18 09:45 2 -> /dev/pts/0
So now the container has a pseudo-tty slave as STDIN, STDOUT and STDERR. Who has the master side of that slave? Listing the FDs of the parent containerd-shim we can now see that it has a /dev/ptmx open:
$ sudo ls -l /proc/17299/fd
total 0
lr-x------ 1 root root 64 Jan 18 09:50 0 -> /dev/null
l-wx------ 1 root root 64 Jan 18 09:50 1 -> /dev/null
lrwx------ 1 root root 64 Jan 18 09:50 10 -> 'socket:[331340]'
l--------- 1 root root 64 Jan 18 09:50 12 -> /run/docker/containerd/afb8b7a1573c8da16943adb6f482764bb27c0973cf4f51279db895c6c6003cff/init-stdin
l--------- 1 root root 64 Jan 18 09:50 13 -> /run/docker/containerd/afb8b7a1573c8da16943adb6f482764bb27c0973cf4f51279db895c6c6003cff/init-stdin
lrwx------ 1 root root 64 Jan 18 09:50 14 -> /dev/pts/ptmx
...
So I suppose the containerd-shim process interacts with the container process through this pseudo-terminal system. By the way, even in this case you can't interact with the process, since I didn't run the container with -i.
So one question is: what difference does it make whether containerd-shim interacts with the process through a pipe or through the pseudo-terminal subsystem?
Another question is: how does containerd-shim relay this data to the terminal from which I ran docker run?
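One practical difference between the two setups is visible to the containerized program itself: isatty(3) on its stdio descriptors reports a terminal only in the PTY case, and programs key behavior like line buffering and colour output off that check. The docker experiment can be mimicked with plain shell, no container needed:

```shell
# `test -t 1` asks whether fd 1 (stdout) is a terminal, i.e. isatty(1).
# Piping through cat replaces stdout with a pipe, the same situation a
# container started without -t is in:
sh -c 'test -t 1 && echo "stdout is a tty" || echo "stdout is a pipe"' | cat
# → stdout is a pipe
```

With docker run -t, the same check inside the container reports a TTY, because fd 1 is the PTY slave shown in the listing above.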
I created a user whose login shell is /bin/rbash, so it cannot execute some commands, like cd and ls. But it can still browse other directories: when the user enters a path like /bin/ and presses Tab, the shell shows the files under /bin. This user is only allowed to log in through the serial port. How can I restrict the user to work only in its home directory, and not read other directories?
Doing a quick search, I found a couple of questions that I think may fit your requirements:
Create ssh user which can only access home directory
Give user read/write access to only one directory
Put
set disable-completion on
in ~/.inputrc and restart your shell. It will disable completion entirely.
This solved my problem.
It is possible to use chroot to implement a user that does not see other directories.
This might be quite a crazy solution, and not the recommended way to do it.
Create a script that performs the chroot:
#!/bin/sh
exec /usr/sbin/chroot /home/test /bin/sh
Use the script as login shell (/etc/passwd):
test:x:0:0:Linux User,,,:/:/usr/sbin/chrootsh.sh
Copy all needed files to the home directory of the user. You need at least the shell and the libraries the shell depends on:
~ # ls -lR /home/test/
/home/test/:
total 2
drwxr-xr-x 2 root test 1024 Aug 21 13:54 bin
drwxr-xr-x 2 root test 1024 Aug 21 13:54 lib
/home/test/bin:
total 1776
-rwxr-xr-x 1 root test 908672 Aug 21 13:54 ls
-rwxr-xr-x 1 root test 908672 Aug 21 13:54 sh
/home/test/lib:
total 1972
-rwxr-xr-x 1 root test 134316 Aug 21 13:54 ld-linux.so.3
-rwxr-xr-x 1 root test 1242640 Aug 21 13:54 libc.so.6
-rwxr-xr-x 1 root test 640480 Aug 21 13:54 libm.so.6
~ #
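Rather than hunting for the shell's libraries by hand, you can let ldd list them and copy each one. A rough sketch, assuming a glibc-based system (library names and paths vary by distro and architecture; a temp directory stands in for /home/test here so the script is harmless to run):

```shell
#!/bin/sh
# Populate the chroot with the binaries plus every shared object they need.
# On the real system set CHROOT=/home/test.
CHROOT=$(mktemp -d)
mkdir -p "$CHROOT/bin"
cp /bin/sh /bin/ls "$CHROOT/bin/"

for bin in "$CHROOT"/bin/*; do
    # ldd prints lines like "libc.so.6 => /lib/... (0x...)"; take the path field
    ldd "$bin" | awk '/\//{print $(NF-1)}' | while read -r lib; do
        cp --parents "$lib" "$CHROOT/"   # preserve the original directory layout
    done
done

ls -R "$CHROOT"
```

The --parents flag keeps each library at the same relative path inside the chroot as on the host, which is what the dynamic loader expects once you've chrooted in.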
Ready. Then log in as the user:
~ # su - test
/ # pwd
/
/ # ls -lR /
/:
total 2
drwxr-xr-x 2 0 1000 1024 Aug 21 13:54 bin
drwxr-xr-x 2 0 1000 1024 Aug 21 13:54 lib
/bin:
total 1776
-rwxr-xr-x 1 0 1000 908672 Aug 21 13:54 ls
-rwxr-xr-x 1 0 1000 908672 Aug 21 13:54 sh
/lib:
total 1972
-rwxr-xr-x 1 0 1000 134316 Aug 21 13:54 ld-linux.so.3
-rwxr-xr-x 1 0 1000 1242640 Aug 21 13:54 libc.so.6
-rwxr-xr-x 1 0 1000 640480 Aug 21 13:54 libm.so.6
/ #
I am trying to run the command dmidecode in my Docker container:
docker run --device /dev/mem:/dev/mem -it jin/ubu1604
However, it claims there is no permission:
root@bd1062dfd8ab:/# dmidecode
# dmidecode 3.0
Scanning /dev/mem for entry point.
/dev/mem: Operation not permitted
root@bd1062dfd8ab:/# ls -l /dev
total 0
crw--w---- 1 root tty 136, 0 Jan 7 03:21 console
lrwxrwxrwx 1 root root 11 Jan 7 03:20 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 Jan 7 03:20 fd -> /proc/self/fd
crw-rw-rw- 1 root root 1, 7 Jan 7 03:20 full
crw-r----- 1 root kmem 1, 1 Jan 7 03:20 mem
drwxrwxrwt 2 root root 40 Jan 7 03:20 mqueue
crw-rw-rw- 1 root root 1, 3 Jan 7 03:20 null
lrwxrwxrwx 1 root root 8 Jan 7 03:20 ptmx -> pts/ptmx
drwxr-xr-x 2 root root 0 Jan 7 03:20 pts
crw-rw-rw- 1 root root 1, 8 Jan 7 03:20 random
drwxrwxrwt 2 root root 40 Jan 7 03:20 shm
lrwxrwxrwx 1 root root 15 Jan 7 03:20 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Jan 7 03:20 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Jan 7 03:20 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root 5, 0 Jan 7 03:20 tty
crw-rw-rw- 1 root root 1, 9 Jan 7 03:20 urandom
crw-rw-rw- 1 root root 1, 5 Jan 7 03:20 zero
This confused me, since I was able to run dmidecode -t system on the host (Ubuntu 14.04) just fine.
I even followed some advice and set a capability on the dmidecode executable:
setcap cap_sys_rawio+ep /usr/sbin/dmidecode
It still doesn't work.
Any ideas?
UPDATE
Based on David Maze's answer, the command should be:
docker run --device /dev/mem:/dev/mem --cap-add SYS_RAWIO -it my/ubu1604a
Do this only when you trust what runs in the container, for example if you are testing an installation procedure on a pristine OS.
Docker provides an isolation layer, and one of the major goals of Docker is to hide details of the host's hardware from containers. The easiest, most appropriate way to query low-level details of the host's hardware is from a root shell on the host, ignoring Docker entirely.
The actual mechanism of this is by restricting Linux capabilities. capabilities(7) documents that you need CAP_SYS_RAWIO to access /dev/mem, so in principle you can launch your container with --cap-add SYS_RAWIO. You might need other capabilities and/or device access to make this actually work, because Docker is hiding the details of what you're trying to access as a design goal.
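To check whether a given shell or container actually holds CAP_SYS_RAWIO, you can decode the CapEff bitmask the kernel publishes in /proc; CAP_SYS_RAWIO is capability number 17. A small sketch:

```shell
# Read the effective capability bitmask (hex) of the current process.
capeff=$(awk '/^CapEff/{print $2}' /proc/self/status)
echo "CapEff: $capeff"

# CAP_SYS_RAWIO is capability number 17, so test bit 17 of the mask.
if [ $(( 0x$capeff >> 17 & 1 )) -eq 1 ]; then
    echo "CAP_SYS_RAWIO is in the effective set"
else
    echo "CAP_SYS_RAWIO is NOT in the effective set"
fi
```

Running this inside a container started with and without --cap-add SYS_RAWIO makes the difference directly visible.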
I have a docker container running Ubuntu Server. I am running Docker for Windows and I have the following version of Docker and Docker Compose respectively installed:
> docker-compose -v
docker-compose version 1.11.2, build f963d76f
> docker -v
Docker version 17.03.1-ce-rc1, build 3476dbf
This is what I have tried so far without success:
// The dojo linked file exists so I've tried to update it as per this answer (http://stackoverflow.com/a/1951752/719427)
> docker exec -it dockeramp_webserver_1 ln -sf /var/www/html/externals/dojo /var/www/html/externals/public_html/js/dojo
ln: failed to create symbolic link '/var/www/html/externals/public_html/js/dojo': No such file or directory
// I have deleted the previous linked file and then I tried to create a new one
> docker exec -it dockeramp_webserver_1 ln -s /var/www/html/externals/dojo /var/www/html/externals/public_html/js/dojo
ln: failed to create symbolic link '/var/www/html/externals/public_html/js/dojo': No such file or directory
// removed the directory name from the link name
> docker exec -it dockeramp_webserver_1 ln -s /var/www/html/externals/dojo /var/www/html/externals/public_html/js
ln: failed to create symbolic link '/var/www/html/externals/public_html/js': No such file or directory
Because the error keeps saying the directory doesn't exist, I checked whether the error is right or wrong:
> docker exec -u www-data -it dockeramp_webserver_1 ls -la /var/www/html/externals/dojo
total 80
drwxr-xr-x 2 root root 0 Mar 25 15:09 .
drwxr-xr-x 2 root root 4096 Mar 25 15:09 ..
drwxr-xr-x 2 root root 0 Mar 25 15:09 dijit
drwxr-xr-x 2 root root 0 Mar 25 15:09 dojo
drwxr-xr-x 2 root root 0 Mar 25 15:09 dojox
drwxr-xr-x 2 root root 0 Mar 25 15:09 mmi
-rwxr-xr-x 1 root root 74047 Mar 25 15:09 tundra.css
> docker exec -u www-data -it dockeramp_webserver_1 ls -la /var/www/html/public_html/js
total 24
drwxr-xr-x 2 root root 4096 Mar 26 14:40 .
drwxr-xr-x 2 root root 4096 Mar 25 15:11 ..
-rwxr-xr-x 1 root root 7123 Mar 25 15:09 jquery.PrintArea.js
-rwxr-xr-x 1 root root 6141 Mar 25 15:11 quoteit_delegate_search.js
They both exist, so... what am I missing here? Is it not supported on Windows just yet? I found that the development team added something called mfsymlinks in a version earlier than mine.
The command is telling you that /var/www/html/externals/public_html does not exist. You only showed that the /var/www/html/externals/dojo and /var/www/html/public_html/js folders exist. I believe this is a simple typo in your commands.
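The ENOENT from ln refers to the parent directory of the link name, not to the target, which is why the checks on the target looked fine. A quick local illustration of the same failure mode (throwaway temp paths, nothing to do with the container):

```shell
# ln fails when the directory that should contain the link doesn't exist.
base=$(mktemp -d)
mkdir -p "$base/dojo" "$base/public_html/js"

# Fails: "$base/externals/public_html/js" does not exist
ln -s "$base/dojo" "$base/externals/public_html/js/dojo" || echo "ln failed as expected"

# Succeeds: the parent directory of the link name exists
ln -s "$base/dojo" "$base/public_html/js/dojo"
ls -l "$base/public_html/js/dojo"
```

Applied to the question, dropping the extra externals/ component from the link name should make the original command work.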
I am using Fedora (actually Pidora, since I am trying to set up Hadoop on a cluster of Raspberry Pis). I installed OpenJDK on all of the nodes using Ansible. However, when I tried to set the JAVA_HOME environment variable, I got really confused looking at the contents of /usr/lib/jvm:
[root@datafireball1 jvm]# ls
java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.arm jre jre-1.7.0 jre-1.7.0-openjdk jre-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.arm jre-openjdk
[root@datafireball1 jvm]# ls -alth
total 80K
drwxr-xr-x 3 root root 4.0K Jun 7 21:07 .
lrwxrwxrwx 1 root root 35 Jun 7 21:07 jre-1.7.0-openjdk -> /etc/alternatives/jre_1.7.0_openjdk
lrwxrwxrwx 1 root root 27 Jun 7 21:07 jre-1.7.0 -> /etc/alternatives/jre_1.7.0
lrwxrwxrwx 1 root root 29 Jun 7 21:07 jre-openjdk -> /etc/alternatives/jre_openjdk
lrwxrwxrwx 1 root root 21 Jun 7 21:07 jre -> /etc/alternatives/jre
lrwxrwxrwx 1 root root 48 Jun 7 21:07 jre-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.arm -> java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.arm/jre
drwxr-xr-x 4 root root 4.0K Jun 7 21:06 java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.arm
Why are there so many folders for Java, and which one should I use as JAVA_HOME?
[root@datafireball1 bin]# which java
/usr/bin/java
[root@datafireball1 bin]# ls -alSh /usr/bin/ | grep java
lrwxrwxrwx 1 root root 22 Jun 7 21:07 java -> /etc/alternatives/java
Thanks!
You can add this to your .bashrc file:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
and it will dynamically change when you update your packages.
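To see what that readlink | sed pipeline actually does, here is a self-contained sketch with a fake alternatives chain under a temp directory (on a real Fedora box, /usr/bin/java resolves through /etc/alternatives the same way):

```shell
# Mimic the chain /usr/bin/java -> /etc/alternatives/java -> <jdk>/bin/java
root=$(mktemp -d)
mkdir -p "$root/jdk/bin"
touch "$root/jdk/bin/java"
ln -s "$root/jdk/bin/java" "$root/alternatives-java"
ln -s "$root/alternatives-java" "$root/usr-bin-java"

# Resolve the whole symlink chain, then strip the trailing "bin/java"
JAVA_HOME=$(readlink -f "$root/usr-bin-java" | sed "s:bin/java::")
echo "$JAVA_HOME"    # the <jdk> directory, here "$root/jdk/"
```

Because the value is computed from wherever /usr/bin/java currently points, it tracks package upgrades and `alternatives --config java` changes automatically.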
Best solution, tested with Fedora 26:
echo "JAVA_HOME=/etc/alternatives/jre" >> ~/.profile
source ~/.profile
echo $JAVA_HOME
Use the following command to find the exact path of the java executable under UNIX/Linux:
$ which java (suppose it returns /usr/java/jdk1.5.0_07/bin/java)
Then set JAVA_HOME to the JDK root, not to the binary itself: export JAVA_HOME=/usr/java/jdk1.5.0_07