I am running Jenkins in a Docker container and would like to start another instance. Whenever I try to use a docker command such as
docker run -t -i ap/dashboard /bin/bash
I get this error:
bash: line 61: docker: command not found
How do I navigate to another container or solve this error?
By reading /root/.ash_history I can clearly see that the VM's creator was able to use the docker command.
Here are some details about the system:
[-] Specific release information:
3.3.1
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.3.1
PRETTY_NAME="Alpine Linux v3.3"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
Hostname:
b51cdbb7eebd
ENVIRONMENTAL #######################################
Environment information:
JENKINS_VOL=/var/lib/jenkins
JAVA_VERSION_BUILD=17
HOSTNAME=b51cdbb7eebd
JAVA_VERSION_MAJOR=8
JENKINS_HOME=/opt/jenkins
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/java/jre/bin
JAVA_BASE=/usr/local/java
PWD=/
JAVA_HOME=/usr/local/java/jre
JAVA_PKG=server-jre
LANG=C.UTF-8
XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
SHLVL=2
HOME=/root
PKG_URL=https://circle-artifacts.com/gh/andyshinn/alpine-pkg-glibc/6/artifacts/0/home/ubuntu/alpine-pkg-glibc/packages/x86_64
JENKINS_VERSION=1.637
JAVA_VERSION_MINOR=66
_=/usr/bin/env
[-] Available shells:
# valid login shells
/bin/sh
/bin/ash
/bin/bash
[+] We can read root's home directory!
total 76
drwx------ 5 root root 4.0K Aug 28 2018 .
drwxr-xr-x 1 root root 4.0K Nov 24 10:55 ..
-rw------- 1 root root 3.1K Aug 29 2018 .ash_history
-rw------- 1 root root 155 May 16 2016 .bash_history
drwxr-xr-x 2 root root 4.0K May 12 2016 .oracle_jre_usage
drwx------ 2 root root 4.0K Aug 28 2018 .ssh
-rwxr-xr-x 1 root root 46.0K Aug 28 2018 LinEnum.sh
drwxr-xr-x 3 root root 4.0K May 12 2016 dockerfiles
-rw-r--r-- 1 root root 0 Aug 28 2018 foo
Looks like we're in a Docker container:
10:net_prio:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
9:net_cls:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
8:freezer:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
7:devices:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
6:memory:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
5:blkio:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
4:cpuacct:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
3:cpu:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
2:cpuset:/docker/b51cdbb7eebd806431ee4120d9b3ae050dbefe4a835bf2063446724572e45e30
1:name=openrc:/docker
-rwxr-xr-x 1 root root 0 May 16 2016 /.dockerenv
[-] Anything juicy in the Dockerfile:
-rw-r--r-- 1 root root 617 May 12 2016 /root/dockerfiles/jenkins/Dockerfile
I tried many docker commands, without success.
Is it because I'm already inside the container?
Try with docker run -t -i ap/dashboard /bin/ash.
Maybe your container does not have bash, so you should try /bin/sh, which is a symbolic link to the default installed shell.
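For example, since the system appears to be Alpine-based, one of these should give you a shell (ap/dashboard is the image name from the question):
# ash is Alpine's default shell; /bin/sh is a safe fallback on any image
docker run -t -i ap/dashboard /bin/ash
docker run -t -i ap/dashboard /bin/sh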
I have a folder in /media on Ubuntu, shared from Windows via fstab and cifs-utils. Can I share this folder with another user, "miki" (not root)?
root@localhost:/media#
drwxr-xrwx 4 root root 4096 Nov 15 12:21 .
drwxr-xr-x 23 root root 4096 Nov 14 06:34 ..
drwxr-xr-x 2 padm root 0 Nov 15 09:34 Archive
drwxr-xrwx 2 root root 4096 Feb 25 2019 kekik
I tried:
root@localhost:~# sudo chmod -R 757 /media/Archive/
but get:
chmod: changing permissions of '/media/Archive/': Permission denied
I found a solution:
I needed to modify /etc/fstab, changing the mount line to:
//windowsServer/Archive /media/Archive cifs username=windowsuser,password=somepass,uid=1000,iocharset=iso8859-1,rw,file_mode=0777,dir_mode=0777,vers=1.0 0 0
and change the ownership of the folder (it must be unmounted first!):
sudo umount -l /media/Archive
sudo chown miki:miki /media/Archive/
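Putting the whole fix together (this assumes miki's uid is 1000, which is what the uid=1000 option in the fstab line maps files to):
# ownership changes on a mounted CIFS share will not stick, so unmount first
sudo umount -l /media/Archive
sudo chown miki:miki /media/Archive/
# remount using the updated /etc/fstab entry
sudo mount /media/Archive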
The mkdir command creates a file rather than a directory on a network drive mounted on a Windows 10 system under the Windows Subsystem for Linux, using the Ubuntu app.
After installing the Ubuntu app and putting the Windows machine into developer mode, I successfully mounted a remote network drive using the command:
sudo mount -t drvfs '\\networkdrive\sharename' /mnt/U
which mounts the network drive at the mount point, and I can see the files on the remote drive. However, when I am in a directory on the remote machine and issue the command
mkdir Source
a file called Source is created on the remote drive rather than a directory.
I tried this on two completely different laptops running Windows 10, which I set up in exactly the same way, and the same problem happens on both. The Windows 10 machines are in developer mode and running the latest version of the Ubuntu app. This is a pretty fundamental thing to have go wrong, so I'm guessing it's a bug of some sort.
The snippet below is the terminal output which illustrates the problem.
username@~$ pwd
/home/username
username@~$ sudo mount -t drvfs '\\networkdrive.host\sharename\' /mnt/U
[sudo] password for username:
username@~$ cd /mnt/U/People/username/projects/Vesiform
username@Vesiform$ ls -al
total 0
drwxrwxrwx 0 root root 512 Mar 29 2018 .
drwxrwxrwx 0 root root 512 Mar 28 12:04 ..
drwxrwxrwx 0 root root 512 Mar 28 11:12 Builder
drwxrwxrwx 0 root root 512 Mar 28 11:42 Library
drwxrwxrwx 0 root root 512 Mar 28 11:42 NPack
drwxrwxrwx 0 root root 512 Mar 28 11:42 PDBProc
drwxrwxrwx 0 root root 512 Mar 28 11:55 Projects
drwxrwxrwx 0 root root 512 Mar 28 11:55 SpacePack
drwxrwxrwx 0 root root 512 Mar 28 11:55 Utilities
username@Vesiform$ mkdir Source
username@Vesiform$ ls -al
total 0
drwxrwxrwx 0 root root 512 Mar 29 2018 .
drwxrwxrwx 0 root root 512 Mar 28 12:04 ..
drwxrwxrwx 0 root root 512 Mar 28 11:12 Builder
drwxrwxrwx 0 root root 512 Mar 28 11:42 Library
drwxrwxrwx 0 root root 512 Mar 28 11:42 NPack
drwxrwxrwx 0 root root 512 Mar 28 11:42 PDBProc
drwxrwxrwx 0 root root 512 Mar 28 11:55 Projects
-rwxrwxrwx 1 root root 0 Mar 29 2018 Source
drwxrwxrwx 0 root root 512 Mar 28 11:55 SpacePack
drwxrwxrwx 0 root root 512 Mar 28 11:55 Utilities
username@Vesiform$ cd Source
-bash: cd: Source: Not a directory
username@Vesiform$
I'd like to think I'm not a Linux noob, but I am having an issue I can't explain; I hope it's something stupid. I have an external drive that I am trying to set up for Plex. It was originally formatted NTFS, but I shrunk the partition and made another ext4 partition. Plex can't look into the drive and see the folders, so I have been trying to change the perms, but they aren't sticking, even with sudo:
myUser@mint /media/myUser $ ls -lah
total 44K
drwxr-x---+ 6 root root 4.0K Oct 24 11:21 .
drwxr-xr-x 3 root root 4.0K Oct 24 10:50 ..
drwx------ 1 myUser myUser 20K Oct 14 07:27 DataDisk
myUser@mint /media/myUser $ sudo chmod -R 766 DataDisk/
[sudo] password for myUser:
myUser@mint /media/myUser $ ls -lah
total 44K
drwxr-x---+ 6 root root 4.0K Oct 24 11:21 .
drwxr-xr-x 3 root root 4.0K Oct 24 10:50 ..
drwx------ 1 myUser myUser 20K Oct 14 07:27 DataDisk
myUser@mint /media/myUser $
Am I missing something obvious or is this just weird?
I'm trying to provide my Docker container with an encrypted filesystem volume for internal use.
The idea is that the container will write to the volume as usual, but in fact the host will be encrypting the data before writing it to the filesystem.
I'm trying to use EncFS, and it works well on the host, e.g.:
encfs /encrypted /visible
I can write files to /visible, and those get encrypted.
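For instance, a quick check that the encryption is working (test.txt is just an illustrative name):
# a file written through the decrypted view...
echo hello > /visible/test.txt
# ...appears under /encrypted with an encrypted name and contents
ls /encrypted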
However, when trying to run a container with /visible as the volume, e.g.:
docker run -i -t --privileged -v /visible:/myvolume imagename bash
I do get a volume in the container, but it's on the original /encrypted folder, not going through the EncFS. If I unmount the EncFS from /visible, I can see the files written by the container. Needless to say /encrypted is empty.
Is there a way to have docker mount the volume through EncFS, and not write directly to the folder?
In contrast, docker works fine when I use an NFS mount as a volume. It writes to the network device, and not to the local folder on which I mounted the device.
Thanks
I am unable to duplicate your problem locally. If I try to expose an encfs filesystem as a Docker volume, I get an error trying to start the container:
FATA[0003] Error response from daemon: Cannot start container <cid>:
setup mount namespace stat /visible: permission denied
So it's possible you have something different going on. In any case, this is what solved my problem:
By default, FUSE only permits the user who mounted a filesystem to have access to that filesystem. When you are running a Docker container, that container is initially running as root.
You can use the allow_root or allow_other mount options when you mount the FUSE filesystem. For example:
$ encfs -o allow_root /encrypted /other
Here, allow_root will permit the root user to have access to the mountpoint, while allow_other will permit anyone to have access to the mountpoint (provided that the Unix permissions on the directory allow them access).
If I mount my encfs filesystem using allow_root, I can then expose that filesystem as a Docker volume, and the contents of that filesystem are correctly visible from inside the container.
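A minimal end-to-end sketch, reusing the paths and image name from the question:
# allow_root lets the container's root user traverse the FUSE mount
encfs -o allow_root /encrypted /visible
# the decrypted contents should now be visible inside the container
docker run -i -t --privileged -v /visible:/myvolume imagename ls /myvolume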
This is definitely because you started the docker daemon before the host mounted the mountpoint. In this case the inode for the directory name is still pointing at the host's local disk:
ls -i /mounts/
1048579 s3-data-mnt
then if you mount using a fuse daemon like s3fs:
/usr/local/bin/s3fs -o rw -o allow_other -o iam_role=ecsInstanceRole /mounts/s3-data-mnt
ls -i
1 s3-data-mnt
My guess is that docker does some bootstrap caching of the directory names to inodes (someone who has more knowledge of this than I do can fill in this blank).
Your comment is correct. If you simply restart docker after the mounting has finished, your volume will be correctly shared from the host to your containers. (Or you can simply delay starting docker until after all your mounts have finished mounting.)
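In other words, the order of operations matters. A sketch of the working sequence (assuming a systemd host; the s3fs command is the one from above):
# 1. finish all FUSE mounts first
/usr/local/bin/s3fs -o rw -o allow_other -o iam_role=ecsInstanceRole /mounts/s3-data-mnt
# 2. then (re)start the docker daemon so it picks up the mounted inode
sudo systemctl restart docker
# 3. containers started from now on see the real mount
docker run -ti -v /mounts/s3-data-mnt:/s3-data busybox sh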
What is interesting (but makes complete sense to me now) is that upon exiting the container and unmounting the mountpoint on the host, all of my writes from within the container to the shared volume magically appeared (they were being stored at the inode on the host machine's local disk):
[root@host s3-data-mnt]# echo foo > bar
[root@host s3-data-mnt]# ls /mounts/s3-data-mnt
total 6
1 drwxrwxrwx 1 root root 0 Jan 1 1970 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 bar
[root@host s3-data-mnt]# docker run -ti -v /mounts/s3-data-mnt:/s3-data busybox /bin/bash
root@5592454f9f4d:/s3-data# ls -als
total 8
4 drwxr-xr-x 3 root root 4096 Sep 16 16:05 .
4 drwxr-xr-x 12 root root 4096 Sep 16 16:45 ..
root@5592454f9f4d:/s3-data# echo baz > beef
root@5592454f9f4d:/s3-data# ls -als
total 9
4 drwxr-xr-x 3 root root 4096 Sep 16 16:05 .
4 drwxr-xr-x 12 root root 4096 Sep 16 16:45 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 beef
root@5592454f9f4d:/s3-data# exit
exit
[root@host s3-data-mnt]# ls /mounts/s3-data-mnt
total 6
1 drwxrwxrwx 1 root root 0 Jan 1 1970 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 bar
[root@host /]# umount -l s3-data-mnt
[root@host /]# ls -als
[root@ip-10-0-3-233 /]# ls -als /s3-stn-jira-data-mnt/
total 8
4 drwxr-xr-x 2 root root 4096 Sep 16 17:28 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 bar
You might be able to work around this by wrapping the mount call in nsenter to mount it in the same Linux mount namespace as the docker daemon, e.g.:
nsenter -t "$PID_OF_DOCKER_DAEMON" encfs ...
The question is whether this approach will survive a daemon restart itself. ;-)
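For example, a sketch assuming a modern Docker install where the daemon binary is dockerd (the encfs paths are the ones from the question):
# -t targets the daemon's PID; -m enters its mount namespace
PID_OF_DOCKER_DAEMON="$(pidof dockerd)"
sudo nsenter -t "$PID_OF_DOCKER_DAEMON" -m encfs /encrypted /visible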