How to connect from Jenkins running in a Docker container to a remote host running in a Docker container using SSH (Linux)

I have tried these steps a number of times but keep failing. I did a lot of research but could not fix the issue.
I am using CentOS running on an Oracle VM.
I am trying to connect from the CentOS host -> Jenkins -> remote host using SSH.
My present working directory is /root/connect2nareshc/jenkins_0_to_hero/jenkins_image.
I ran ssh-keygen -f remote-key to generate the public and private keys.
In the directory /root/connect2nareshc/jenkins_0_to_hero/jenkins_image I have a Dockerfile as follows:
FROM centos:7
RUN yum -y install openssh-server
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
    chown 600 /home/remote_user/.ssh/authorized_keys
RUN ssh-keygen -A
CMD /usr/sbin/sshd -D
One directory above, in /root/connect2nareshc/jenkins_0_to_hero, I have docker-compose.yml as follows:
version: '3'
services:
  jenkins:
    container_name: jenkins_yml
    image: "jenkins/jenkins:lts"
    ports:
      - 8080:8080
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote_host
    build:
      context: /root/connect2nareshc/jenkins_0_to_hero/jenkins_image
    networks:
      - net
networks:
  net:
I execute the following commands and they work fine, i.e. from the host I connect to Jenkins, and from Jenkins I connect to remote_host using the password:
docker-compose build
docker-compose up -d
docker exec -it jenkins_yml bash
ssh remote_user@remote_host
# Enter password 1234 when prompted.
When I try to connect using keys, I am not able to:
docker cp remote-key jenkins_yml:/tmp/remote-key
docker exec -it jenkins_yml bash
cd /tmp
ssh -i remote-key remote_user@remote_host
Instead, it prompts me for the password.
While on remote_host, I ran ls -altr on /var/log and got the following output. I cannot find auth.log:
drwxr-xr-x. 1 root root 4096 May 4 15:36 ..
-rw-r--r--. 1 root root 193 May 4 15:36 grubby_prune_debug
-rw-------. 1 root utmp 0 May 4 15:36 btmp
drwxr-xr-x. 1 root root 4096 May 4 15:37 .
-rw-------. 1 root root 1751 Jul 28 16:35 yum.log
-rw-------. 1 root root 64064 Jul 28 16:36 tallylog
-rw-rw-r--. 1 root utmp 1152 Jul 29 03:17 wtmp
-rw-r--r--. 1 root root 292292 Jul 29 03:17 lastlog
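For what it's worth, here is a minimal diagnostic sketch (the container names and paths are the ones above; the commands themselves are not from the original post). sshd falls back to password authentication when it rejects the key or the authorized_keys file, so the verbose client output and the ownership of authorized_keys are the first things to check. Note that the Dockerfile above runs chown 600 on authorized_keys, which changes its owner to UID 600 rather than its permissions; chmod 600 is presumably what was intended.
# inside the jenkins_yml container
chmod 600 /tmp/remote-key                              # the private key must not be readable by others
ssh -vvv -i /tmp/remote-key remote_user@remote_host    # look for why the key is skipped or rejected
# back on the Docker host
docker exec -it remote-host ls -l /home/remote_user/.ssh/authorized_keys   # should be owned by remote_user with mode 600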

Related

Writing to mounted storage fails with permission denied

Here is ls -ld on the folder in question on the host:
lrwxrwxrwx 1 xxx xxx 22 Nov 11 06:40 ../doc -> /mnt/nfs/2600/data/doc
Here is ls -ld on the same folder from a container:
drwxrwxr-x 3 1002 1002 4096 Nov 11 03:31 /home/xxx/data
I can create and edit files from the host, yet when I try, say,
FROM openjdk:9-jre-slim
#...
RUN adduser --disabled-password --gecos '' xxx
RUN mkdir /home/xxx/data
RUN chown -R xxx:xxx /home/xxx/data
#...
CMD echo "hello" >> /home/xxx/data/test
and run the container as my user (as well as root, the default), I get:
sudo docker run -u xxx -it -v /home/xxx/doc:/home/xxx/data x/o:latest
/bin/sh: 1: cannot create /home/xxx/data/test: Permission denied
The same happens if I pass the direct path to Docker's -v.
How can such a problem be solved, and what steps should I take?
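No answer is quoted above, so here is a hedged sketch (the image name, user, and paths are the question's; the fix itself is an assumption): the bind-mounted directory is owned by UID/GID 1002 on the host, while the xxx user created by adduser inside the image most likely received a different UID, so writes from that user are denied. Aligning the two UIDs is one common workaround:
# see which UID the container user actually got
sudo docker run --rm x/o:latest id xxx
# then either recreate the user with a matching UID in the Dockerfile,
#   RUN adduser --disabled-password --gecos '' --uid 1002 xxx
# or change ownership of the host directory to that UID
sudo chown -R 1002:1002 /home/xxx/doc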

Run each Docker container in a specific user namespace configuration

Problem:
I am trying to mount a directory as a Docker volume in such a way
that a user created inside the container can write
to a file in that volume. At the same time, the file should
be at least readable by my user lape outside the container.
Essentially, I need to remap a user UID from the container user namespace to a specific UID in the host user namespace.
How can I do that?
I would prefer answers that:
do not involve changing the way the Docker daemon is run;
allow configuring the container user namespace for each container separately;
do not require rebuilding the image.
I would accept an answer that shows a nice solution using Access Control Lists as well.
Setup:
This is how the situation can be replicated.
I have my Linux user lape, assigned to the docker group, so I
can run Docker containers without being root.
lape@localhost ~ $ id
uid=1000(lape) gid=1000(lape) groups=1000(lape),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),121(lpadmin),131(sambashare),999(docker)
Dockerfile:
FROM alpine
RUN apk add --update su-exec && rm -rf /var/cache/apk/*
# I create a user inside the image which I want to be mapped to my `lape`
RUN adduser -D -u 800 -g 801 insider
VOLUME /data
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["sh", "/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
chmod 755 /data
chown insider:insider /data
# This will run as `insider` and will touch a file in the shared volume
# (the name of the file will be the current timestamp)
su-exec insider:insider sh -c 'touch /data/$(date +%s)'
# Show permissions of created files
ls -las /data
Once the image is built with:
docker build -t nstest .
I run the container:
docker run --rm -v $(pwd)/data:/data nstest
The output looks like:
total 8
4 drwxr-xr-x 2 insider insider 4096 Aug 26 08:44 .
4 drwxr-xr-x 31 root root 4096 Aug 26 08:44 ..
0 -rw-r--r-- 1 insider insider 0 Aug 26 08:44 1503737079
So the file seems to be created as user insider.
From my host the permissions look like this:
lape@localhost ~ $ ls -las ./data
total 8
4 drwxr-xr-x 2 800 800 4096 Aug 26 09:44 .
4 drwxrwxr-x 3 lape lape 4096 Aug 26 09:43 ..
0 -rw-r--r-- 1 800 800 0 Aug 26 09:44 1503737079
This indicates that the file belongs to uid=800 (that is, the insider user, which does not even exist outside the Docker namespace).
Things I tried already:
I tried specifying the --user parameter to docker run, but it seems it can only control which host user is mapped to uid=0 (root) inside the Docker namespace; in my case insider is not root, so it did not really help here.
The only way I got insider (uid=800) inside the container to be seen as lape (uid=1000) on the host was by adding --userns-remap="default" to the dockerd startup script and adding dockremap:200:100000 to /etc/subuid and /etc/subgid, as suggested in the documentation for --userns-remap. Coincidentally this worked for me, but it is not a sufficient solution, because:
it requires reconfiguring the way the Docker daemon runs;
it requires some arithmetic on user IDs: 200 = 1000 - 800, where 1000 is the UID of my user on the host and 800 is the UID of the insider user;
it would not even work if the insider user needed a higher UID than my host user;
it can only configure how user namespaces are mapped globally, with no way to have a unique configuration per container;
this solution kind of works, but it is a bit too ugly for practical usage.
If you just need read access for your user, the simplest option is to add read permissions for all files and subdirectories in /data with ACLs outside of Docker.
Add default acl: setfacl -d -m u:lape:-rx /data.
You will also need to give access to the directory itself: setfacl -m u:lape:-rx /data.
Are there any obstacles for such a solution?
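As a hedged follow-up (same user and path as above, not part of the original answer), a default ACL only applies to files created afterwards; existing files can be covered, and the result inspected, like this:
setfacl -R -m u:lape:rX /data    # grant read (and traverse on directories) on what already exists
getfacl /data                    # verify that a user:lape entry is present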

Create symbolic link fails in Docker for Windows, it's not supported yet?

I have a Docker container running Ubuntu Server. I am running Docker for Windows and have the following versions of Docker Compose and Docker installed:
> docker-compose -v
docker-compose version 1.11.2, build f963d76f
> docker -v
Docker version 17.03.1-ce-rc1, build 3476dbf
This is what I have tried so far without success:
// The dojo linked file exists so I've tried to update it as per this answer (http://stackoverflow.com/a/1951752/719427)
> docker exec -it dockeramp_webserver_1 ln -sf /var/www/html/externals/dojo /var/www/html/externals/public_html/js/dojo
ln: failed to create symbolic link '/var/www/html/externals/public_html/js/dojo': No such file or directory
// I have deleted the previous linked file and then I tried to create a new one
> docker exec -it dockeramp_webserver_1 ln -s /var/www/html/externals/dojo /var/www/html/externals/public_html/js/dojo
ln: failed to create symbolic link '/var/www/html/externals/public_html/js/dojo': No such file or directory
// removed the directory name from the link name
> docker exec -it dockeramp_webserver_1 ln -s /var/www/html/externals/dojo /var/www/html/externals/public_html/js
ln: failed to create symbolic link '/var/www/html/externals/public_html/js': No such file or directory
Because the error keeps saying the directory doesn't exist, I checked whether the error is right or wrong:
> docker exec -u www-data -it dockeramp_webserver_1 ls -la /var/www/html/externals/dojo
total 80
drwxr-xr-x 2 root root 0 Mar 25 15:09 .
drwxr-xr-x 2 root root 4096 Mar 25 15:09 ..
drwxr-xr-x 2 root root 0 Mar 25 15:09 dijit
drwxr-xr-x 2 root root 0 Mar 25 15:09 dojo
drwxr-xr-x 2 root root 0 Mar 25 15:09 dojox
drwxr-xr-x 2 root root 0 Mar 25 15:09 mmi
-rwxr-xr-x 1 root root 74047 Mar 25 15:09 tundra.css
> docker exec -u www-data -it dockeramp_webserver_1 ls -la /var/www/html/public_html/js
total 24
drwxr-xr-x 2 root root 4096 Mar 26 14:40 .
drwxr-xr-x 2 root root 4096 Mar 25 15:11 ..
-rwxr-xr-x 1 root root 7123 Mar 25 15:09 jquery.PrintArea.js
-rwxr-xr-x 1 root root 6141 Mar 25 15:11 quoteit_delegate_search.js
They both exist, so... what am I missing here? Is it not supported on Windows just yet? I found that the development team added something called mfsymlinks in a version prior to mine.
The command is telling you that /var/www/html/externals/public_html does not exist. You only showed that the /var/www/html/externals/dojo and /var/www/html/public_html/js folders exist. I believe this is a simple typo in your commands.
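For illustration, a hedged version of the corrected command, assuming (per the answer above) that the intended link location is the existing /var/www/html/public_html/js directory:
> docker exec -it dockeramp_webserver_1 ln -sf /var/www/html/externals/dojo /var/www/html/public_html/js/dojo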

Cannot add a new user in a Docker container with mounted /etc/passwd and /etc/shadow

Example of the problem:
docker run -ti -v my_passwd:/etc/passwd -v my_shadow:/etc/shadow --rm centos
[root@681a5489f3b0 /]# useradd test # does not work !?
useradd: failure while writing changes to /etc/passwd
[root@681a5489f3b0 /]# ll /etc/passwd /etc/shadow # permission check
-rw-r--r-- 1 root root 157 Oct 8 10:17 /etc/passwd
-rw-r----- 1 root root 100 Oct 7 18:02 /etc/shadow
A similar problem arises when using passwd:
[root@681a5489f3b0 /]# passwd test
Changing password for user test.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: Authentication token manipulation error
I have tried using the ubuntu image, but the same problem arises.
I can manually edit the passwd and shadow files from within the container.
I am getting the same problem on the following two machines:
Host OS: CentOS 7 - SELinux Disabled
Docker Version: 1.8.2, build 0a8c2e3
Host OS: CoreOS 766.4.0
Docker version: 1.7.1, build df2f73d-dirty
I've also opened issue on GitHub: https://github.com/docker/docker/issues/16857
It's failing because passwd manipulates a temporary file, and then attempts to rename it to /etc/shadow. This fails because /etc/shadow is a mountpoint -- which cannot be replaced -- which results in this error (captured using strace):
102 rename("/etc/nshadow", "/etc/shadow") = -1 EBUSY (Device or resource busy)
You can reproduce this trivially from the command line:
# cd /etc
# touch foo
# mv foo shadow
mv: cannot move 'foo' to 'shadow': Device or resource busy
You could work around this by mounting a directory containing my_shadow and my_passwd somewhere else, and then symlinking /etc/passwd and /etc/shadow in the container appropriately:
$ docker run -it --rm -v $PWD/my_etc:/my_etc centos
[root@afbc739f588c /]# ln -sf /my_etc/my_passwd /etc/passwd
[root@afbc739f588c /]# ln -sf /my_etc/my_shadow /etc/shadow
[root@afbc739f588c /]# ls -l /etc/{shadow,passwd}
lrwxrwxrwx. 1 root root 17 Oct 8 17:48 /etc/passwd -> /my_etc/my_passwd
lrwxrwxrwx. 1 root root 17 Oct 8 17:48 /etc/shadow -> /my_etc/my_shadow
[root@afbc739f588c /]# passwd root
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@afbc739f588c /]#

Linux permissions issue on sftp server

Good day!
I have a Linux SFTP server located in a VM. This VM has access to GlusterFS storage, where the SFTP directories are located. SFTP works via the OpenSSH server and chroots the sftpusers group to the SFTP directories on the GlusterFS storage. Everything worked well... then at some point I ran into an issue...
Trying to create user:
# useradd -d /mnt/cluster-data/repositories/masters/test-user -G masters,sftpusers -m -s /bin/nologin test-user
Checking:
# cat /etc/passwd | grep test-user
test-user:x:1029:1032::/mnt/cluster-data/repositories/masters/test-user:/bin/nologin
# cat /etc/group | grep test-user
masters:x:1000:test-user
sftpusers:x:1005:test-user
test-user:x:1032:
Doing chown and chmod for home dir by hand:
# chown -R test-user:test-user /mnt/cluster-data/repositories/masters/test-user
# chmod -R 770 /mnt/cluster-data/repositories/masters/test-user
Checking:
# ls -la /mnt/cluster-data/repositories/masters/test-user
total 16
drwxrwx--- 2 test-user test-user 4096 Oct 27 2013 .
drwxr-xr-x 13 root masters 4096 Oct 27 2013 ..
Adding another user to test-user's group:
# usermod -G test-user -a tarasov-af
# cat /etc/passwd | grep tarasov-af
tarasov-af:x:1028:1006::/mnt/cluster-data/repositories/lecturers/tarasov-af/:/bin/nologin
# cat /etc/group | grep tarasov-af
masters:x:1000:tarasov-af,test-user
sftpusers:x:1005:tarasov-af,test-user
lecturers:x:1006:tarasov-af
specialists:x:1008:tarasov-af
test-user:x:1032:tarasov-af
Login as tarasov-af:
sftp> cd masters/test-user
sftp> ls
remote readdir("/masters/test-user"): Permission denied
sftp> ls -la ..
drwxr-xr-x 13 0 1000 4096 Oct 26 21:30 .
drwxr-xr-x 6 0 0 4096 Oct 2 15:53 ..
drwxrwx--- 2 1029 1032 4096 Oct 26 21:53 test-user
I tried to log in as tarasov-af with bash (usermod -s /bin/bash tarasov-af):
$ id
uid=1028 gid=1006
groups=1000,1005,1006,1008,1032
P.S. I guess this issue began after the VM disk failed and /etc/passwd and /etc/group got corrupted; I restored them from backups and all previous accounts work well. I have this issue only with new accounts.
I've found the reason for this issue: user tarasov-af has more than 16 secondary groups; the first 15 groups work, the rest don't. I set kernel.ngroups_max = 65535 in sysctl.conf on every computer in the cluster (GlusterFS) and on the SFTP VM, but nothing changed.
The issue lies in the GlusterFS client: it cannot handle more than 15 secondary groups.
# glusterfs --version
glusterfs 3.2.7 built on Sep 29 2013 03:28:05
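As an illustrative check (hedged, not from the original post), the number of group IDs a user carries can be counted to confirm it exceeds the 15/16-group limit mentioned above:
id -G tarasov-af | wc -w    # prints the number of group IDs for the user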
