Writing to mounted storage fails with permission denied - linux

Here is ls -ld folder_in_question on the host:
lrwxrwxrwx 1 xxx xxx 22 Nov 11 06:40 ../doc -> /mnt/nfs/2600/data/doc
Here is ls -ld folder_in_question from a container:
drwxrwxr-x 3 1002 1002 4096 Nov 11 03:31 /home/xxx/data
I can create and edit files from the host, yet when I build, say:
FROM openjdk:9-jre-slim
#...
RUN adduser --disabled-password --gecos '' xxx
RUN mkdir /home/xxx/data
RUN chown -R xxx:xxx /home/xxx/data
#...
CMD echo "hello" >> /home/xxx/data/test
and run the container as my user (as well as root, the default), I get:
sudo docker run -u xxx -it -v /home/xxx/doc:/home/xxx/data x/o:latest
/bin/sh: 1: cannot create /home/xxx/data/test: Permission denied
The same happens if I pass the direct path to docker -v.
How can this problem be solved? What steps should I take?
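From the two listings, the bind-mounted directory appears inside the container as uid/gid 1002, while the xxx user created by adduser in the image may well have been assigned a different uid; the RUN chown has no effect here, because -v replaces /home/xxx/data at run time with the host directory and its host ownership. Since the host path also resolves to an NFS mount, server-side squashing could play a role too, but the uid mismatch is the first thing to rule out. A minimal diagnostic sketch under that assumption (the 1002 values are taken from the listing above):
# On the host: numeric owner of the real directory behind the symlink
stat -c '%u:%g' /mnt/nfs/2600/data/doc
# In the image: the uid/gid that adduser actually gave xxx
sudo docker run --rm x/o:latest id xxx
# If they differ, running with the host's uid/gid usually permits the write:
sudo docker run -u 1002:1002 -it -v /home/xxx/doc:/home/xxx/data x/o:latest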

Related

how to connect to remote host running in docker container from jenkins running in docker container using ssh

I have tried these steps a number of times but keep failing. I did a lot of R&D but could not fix the issue.
I am using CentOS running on Oracle VM.
I am trying to connect from the CentOS host -> Jenkins -> remote host using SSH.
My present working directory is : /root/connect2nareshc/jenkins_0_to_hero/jenkins_image
I did ssh-keygen -f remote-key to generate public and private keys.
In the directory /root/connect2nareshc/jenkins_0_to_hero/jenkins_image I have Dockerfile as follows:
FROM centos:7
RUN yum -y install openssh-server
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
    chown 600 /home/remote_user/.ssh/authorized_keys
RUN ssh-keygen -A
CMD /usr/sbin/sshd -D
One directory above, in /root/connect2nareshc/jenkins_0_to_hero, I have docker-compose.yml as follows:
version: '3'
services:
  jenkins:
    container_name: jenkins_yml
    image: "jenkins/jenkins:lts"
    ports:
      - 8080:8080
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote_host
    build:
      context: /root/connect2nareshc/jenkins_0_to_hero/jenkins_image
    networks:
      - net
networks:
  net:
I execute the following commands and they work fine, i.e. from the host I connect to Jenkins, and from Jenkins I connect to remote_host using the password:
docker-compose build
docker-compose up -d
docker exec -it jenkins_yml bash
ssh remote_user@remote_host
#Enter password 1234 when prompted.
When I try to connect using keys, I am not able to:
docker cp remote-key jenkins_yml:/tmp/remote-key
docker exec -it jenkins_yml bash
cd /tmp
ssh -i remote-key remote_user@remote_host
Instead, it prompts me for the password.
On remote_host I ran ls -altr on /var/log and got the following output; I cannot find auth.log:
drwxr-xr-x. 1 root root 4096 May 4 15:36 ..
-rw-r--r--. 1 root root 193 May 4 15:36 grubby_prune_debug
-rw-------. 1 root utmp 0 May 4 15:36 btmp
drwxr-xr-x. 1 root root 4096 May 4 15:37 .
-rw-------. 1 root root 1751 Jul 28 16:35 yum.log
-rw-------. 1 root root 64064 Jul 28 16:36 tallylog
-rw-rw-r--. 1 root utmp 1152 Jul 29 03:17 wtmp
-rw-r--r--. 1 root root 292292 Jul 29 03:17 lastlog
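One detail stands out in the Dockerfile itself: the line chown 600 /home/remote_user/.ssh/authorized_keys sets the file's owner to uid 600 rather than its mode, and sshd (with the default StrictModes yes) ignores an authorized_keys file with wrong ownership, falling back to password authentication exactly as described. A sketch of the likely fix, plus the private-key permissions the ssh client itself requires:
# In the Dockerfile: chmod, not chown, to set mode 600
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
# Inside the jenkins container: ssh refuses a private key readable by others
chmod 600 /tmp/remote-key
ssh -i /tmp/remote-key remote_user@remote_host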

Let's Encrypt SSL couldn't start due to "Error: EACCES: permission denied, open '/etc/letsencrypt/live/domain.net/privkey.pem'"

I tried to use SSL with Node.js, but it doesn't work because of a permission error.
try {
  var TLSoptions = {
    key: fs.readFileSync("/etc/letsencrypt/live/domain.work/privkey.pem"),
    cert: fs.readFileSync("/etc/letsencrypt/live/domain.work/cert.pem")
  };
  https.createServer(TLSoptions, app).listen(port, host, function() {
    console.log("TLS Website started.");
  });
} catch(e) {
  console.log(e);
}
=>
{ Error: EACCES: permission denied, open '/etc/letsencrypt/live/domain.work/privkey.pem'
at Object.fs.openSync (fs.js:663:18)
... (library stack trace)
errno: -13,
code: 'EACCES',
syscall: 'open',
path: '/etc/letsencrypt/live/domain.work/privkey.pem' }
So I tried re-creating the *.pem files:
rm -f /etc/letsencrypt/live
rm -f /etc/letsencrypt/archive
rm -f /etc/letsencrypt/renewal
sudo ./letsencrypt-auto certonly -a standalone -d domain.work
and checked the file permissions:
/etc/letsencrypt/live/domain.work$ ls -lsa
total 12
4 drwxr-xr-x 2 root root 4096 Jan 3 21:56 .
4 drwx------ 3 root root 4096 Jan 3 21:56 ..
0 lrwxrwxrwx 1 root root 37 Jan 3 21:56 cert.pem -> ../../archive/domain.work/cert1.pem
0 lrwxrwxrwx 1 root root 38 Jan 3 21:56 chain.pem -> ../../archive/domain.work/chain1.pem
0 lrwxrwxrwx 1 root root 42 Jan 3 21:56 fullchain.pem -> ../../archive/domain.work/fullchain1.pem
0 lrwxrwxrwx 1 root root 40 Jan 3 21:56 privkey.pem -> ../../archive/domain.work/privkey1.pem
/etc/letsencrypt/archive/domain.work$ ls -lsa
total 24
4 drwxr-xr-x 2 root root 4096 Jan 3 21:56 .
4 drwx------ 3 root root 4096 Jan 3 21:56 ..
4 -rw-r--r-- 1 root root 1789 Jan 3 21:56 cert1.pem
4 -rw-r--r-- 1 root root 1647 Jan 3 21:56 chain1.pem
4 -rw-r--r-- 1 root root 3436 Jan 3 21:56 fullchain1.pem
4 -rw-r--r-- 1 root root 1708 Jan 3 21:56 privkey1.pem
but the problem is not resolved, and I cannot find any mistakes.
How can I resolve this problem?
When you use sudo to issue the certificates, they will be owned by root.
Since node is not run as root, and the permissions on the certificate folder do not allow them to be opened by anyone except the owner, your node app cannot see them.
To understand the solution, let us assume node is running as the user nodeuser
You can find the user node runs as on Ubuntu with whoami or ps aux | grep node.
Solution #1 (temporary):
You could switch the owner of the certificates to your node user.
$ sudo chown nodeuser -R /etc/letsencrypt
However, this may break any other items that look at the cert, such as Nginx or Apache.
It will also only last till your next update, which is no more than 90 days.
On the other hand, whatever script you have that renews the cert can also set the owner.
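For instance, if renewal goes through certbot, its --deploy-hook option runs a command after every successful renewal, so the ownership fix can be reapplied automatically (nodeuser is the assumed account name from above):
sudo certbot renew --deploy-hook "chown -R nodeuser /etc/letsencrypt/live /etc/letsencrypt/archive"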
Solution #2 (do not do this):
Run node as root.
sudo node index.js
This will run node as a root user, which means that the terribly insecure surface of node can access everything on your system. Please don't do this.
Solution #3 (do not do this either):
Open the certificates to everyone.
The certificates are stored in /etc/letsencrypt/archive/${domain}/cert1.pem, and are linked to from /etc/letsencrypt/live/${domain}/cert.pem.
All folders in both of these paths are +x, meaning that all users on the system can open the folders, with the exception of the "live" and "archive" folders themselves.
You can make those open as well by changing their permissions.
$ sudo chmod +x /etc/letsencrypt/live
$ sudo chmod +x /etc/letsencrypt/archive
This is bad as it allows access from other unexpected sources. Generally opening folders to everyone is a bad idea.
Solution #4 (do this):
On the other hand, you can create a limited group, and allow the permissions to only be opened for them.
// Create group with root and nodeuser as members
$ sudo addgroup nodecert
$ sudo adduser nodeuser nodecert
$ sudo adduser root nodecert
// Make the relevant letsencrypt folders owned by said group.
$ sudo chgrp -R nodecert /etc/letsencrypt/live
$ sudo chgrp -R nodecert /etc/letsencrypt/archive
// Allow group to open relevant folders
$ sudo chmod -R 750 /etc/letsencrypt/live
$ sudo chmod -R 750 /etc/letsencrypt/archive
That should allow node to access the folders with the certs, while not opening them to anyone else.
You should then reboot or at least logout and in after these changes.
(Many changes to permission and groups require a new session, and we had issues with PM2 until reboot.)
On an EC2 instance you can do sudo reboot.
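To check that the group change actually took effect, you can try reading the key as the node user in a fresh session (domain.work and nodeuser are the assumed names from above):
# Should print the PEM header rather than "Permission denied"
sudo -u nodeuser head -n 1 /etc/letsencrypt/live/domain.work/privkey.pem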
Should something go wrong and you want to revert to the original settings, follow this:
// Delete Group
$ sudo groupdel nodecert
// Reset Permission
$ sudo chown -R :root /etc/letsencrypt/live
$ sudo chown -R :root /etc/letsencrypt/archive
// Check Permissions
$ sudo ll /etc/letsencrypt/
I'm not familiar with Node.js, but it's clearly the same permissions problem as with PostgreSQL, so the same solution should work fine. This allows you to leave the permissions on /etc/letsencrypt as they are:
copy the certificates to your Node.js directory
chown the copied files to your "node" user
You can have a script doing that in /etc/letsencrypt/renewal-hooks/deploy, which will be called every time you renew your certificates.
Example /etc/letsencrypt/renewal-hooks/deploy/10-certbot-copy-certs:
#!/bin/bash
domain=domain.work # using your example name
node_dir=/path/to/cert_copies
node_user=nodeuser
cp /etc/letsencrypt/live/$domain/{fullchain,privkey}.pem "$node_dir"/
chown $node_user "$node_dir"/*.pem
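Certbot only executes deploy hooks that are marked executable, so presumably the script also needs:
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/10-certbot-copy-certs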
This worked for me:
Copy all pem files that you need into the root folder of your project:
sudo cp /etc/letsencrypt/live/www.your-domain.com/privkey.pem /home/your-username/your-server-directory/privkey.pem
Read the files like so:
https.createServer(
  {
    key: fs.readFileSync("privkey.pem"),
    cert: fs.readFileSync("cert.pem"),
  },
Grant permissions:
sudo chown your-username -R privkey.pem
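Since the snippet also reads cert.pem, the same copy and ownership steps presumably apply to it:
sudo cp /etc/letsencrypt/live/www.your-domain.com/cert.pem /home/your-username/your-server-directory/cert.pem
sudo chown your-username /home/your-username/your-server-directory/cert.pem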
I was using ec2-user on Amazon Linux 2 instance and had the same problem. This worked for me:
sudo chown ec2-user -R /etc/letsencrypt
The top answer above by @SamGoody didn't work for me, since it didn't set all the group permissions. It worked after I set up the nodecert group as he suggested, like this:
$ sudo addgroup nodecert
$ sudo adduser nodeuser nodecert
$ sudo adduser root nodecert
and then did
$ sudo nautilus
and clicked down to /etc/letsencrypt, right-clicked "Properties", and manually changed the group to nodecert with "Access files" permission on the following two folders and their domain-name subfolders:
/etc/letsencrypt/live
/etc/letsencrypt/archive
I also manually changed the group permissions to nodecert "Read-only" for all contained files and symlinks.

Slackware creating directory when adding new user

I'm using Slackware 14.2, and I want to create a public_html directory in /home/*/ when I create a user. I saw there's a useradd file in /etc/default/, but I don't know if that file should be edited.
Like this, using /etc/skel (useradd -m copies its contents into each new home directory):
# mkdir /etc/skel/public_html
# useradd -s /bin/bash -m -d /home/user1 user1
# ls -Al ~user1
total 4
drwxr-xr-x 2 user1 user1 4096 Dec 9 11:43 public_html
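As for /etc/default/useradd: on shadow-utils systems that file stores useradd's defaults, including the skeleton directory, so it only needs editing if you want a skeleton other than /etc/skel; a sketch:
# Inspect the current defaults (SKEL among them)
useradd -D
# /etc/default/useradd (excerpt): point SKEL elsewhere if desired
SKEL=/etc/skel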

Cannot add new user in docker container with mounted /etc/passwd and /etc/shadow

Example of the problem:
docker run -ti -v my_passwd:/etc/passwd -v my_shadow:/etc/shadow --rm centos
[root@681a5489f3b0 /]# useradd test # does not work !?
useradd: failure while writing changes to /etc/passwd
[root@681a5489f3b0 /]# ll /etc/passwd /etc/shadow # permission check
-rw-r--r-- 1 root root 157 Oct 8 10:17 /etc/passwd
-rw-r----- 1 root root 100 Oct 7 18:02 /etc/shadow
A similar problem arises when using passwd:
[root@681a5489f3b0 /]# passwd test
Changing password for user test.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: Authentication token manipulation error
I have tried the ubuntu image as well, but the same problem arises.
I can manually edit the passwd and shadow files from within the container.
I am getting the same problem on the following two machines:
Host OS: CentOS 7 - SELinux Disabled
Docker Version: 1.8.2, build 0a8c2e3
Host OS: CoreOS 766.4.0
Docker version: 1.7.1, build df2f73d-dirty
I've also opened issue on GitHub: https://github.com/docker/docker/issues/16857
It's failing because passwd manipulates a temporary file, and then attempts to rename it to /etc/shadow. This fails because /etc/shadow is a mountpoint -- which cannot be replaced -- which results in this error (captured using strace):
102 rename("/etc/nshadow", "/etc/shadow") = -1 EBUSY (Device or resource busy)
You can reproduce this trivially from the command line:
# cd /etc
# touch foo
# mv foo shadow
mv: cannot move 'foo' to 'shadow': Device or resource busy
You could work around this by mounting a directory containing my_shadow and my_passwd somewhere else, and then symlinking /etc/passwd and /etc/shadow in the container appropriately:
$ docker run -it --rm -v $PWD/my_etc:/my_etc centos
[root@afbc739f588c /]# ln -sf /my_etc/my_passwd /etc/passwd
[root@afbc739f588c /]# ln -sf /my_etc/my_shadow /etc/shadow
[root@afbc739f588c /]# ls -l /etc/{shadow,passwd}
lrwxrwxrwx. 1 root root 17 Oct 8 17:48 /etc/passwd -> /my_etc/my_passwd
lrwxrwxrwx. 1 root root 17 Oct 8 17:48 /etc/shadow -> /my_etc/my_shadow
[root@afbc739f588c /]# passwd root
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@afbc739f588c /]#
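The same workaround can be folded into the docker run invocation so the symlinks are in place before anything touches the account database; a minimal sketch, assuming my_passwd and my_shadow live in ./my_etc on the host:
docker run -it --rm -v "$PWD/my_etc:/my_etc" centos \
    sh -c 'ln -sf /my_etc/my_passwd /etc/passwd &&
           ln -sf /my_etc/my_shadow /etc/shadow &&
           exec bash'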

Linux permissions issue on sftp server

Good day!
I have a Linux SFTP server located in a VM. The VM has access to GlusterFS storage, where the SFTP directories are located. SFTP runs via the OpenSSH server and chroots the sftpusers group into the SFTP directories on the GlusterFS storage. All worked well... then at some point I hit an issue...
Trying to create user:
# useradd -d /mnt/cluster-data/repositories/masters/test-user -G masters,sftpusers -m -s /bin/nologin test-user
Checking:
# cat /etc/passwd | grep test-user
test-user:x:1029:1032::/mnt/cluster-data/repositories/masters/test-user:/bin/nologin
# cat /etc/group | grep test-user
masters:x:1000:test-user
sftpusers:x:1005:test-user
test-user:x:1032:
Doing chown and chmod for home dir by hand:
# chown -R test-user:test-user /mnt/cluster-data/repositories/masters/test-user
# chmod -R 770 /mnt/cluster-data/repositories/masters/test-user
Checking:
# ls -la /mnt/cluster-data/repositories/masters/test-user
total 16
drwxrwx--- 2 test-user test-user 4096 Oct 27 2013 .
drwxr-xr-x 13 root masters 4096 Oct 27 2013 ..
Adding another user to test-user's group:
# usermod -G test-user -a tarasov-af
# cat /etc/passwd | grep tarasov-af
tarasov-af:x:1028:1006::/mnt/cluster-data/repositories/lecturers/tarasov-af/:/bin/nologin
# cat /etc/group | grep tarasov-af
masters:x:1000:tarasov-af,test-user
sftpusers:x:1005:tarasov-af,test-user
lecturers:x:1006:tarasov-af
specialists:x:1008:tarasov-af
test-user:x:1032:tarasov-af
Login as tarasov-af:
sftp> cd masters/test-user
sftp> ls
remote readdir("/masters/test-user"): Permission denied
sftp> ls -la ..
drwxr-xr-x 13 0 1000 4096 Oct 26 21:30 .
drwxr-xr-x 6 0 0 4096 Oct 2 15:53 ..
drwxrwx--- 2 1029 1032 4096 Oct 26 21:53 test-user
I tried to log in as tarasov-af with bash (usermod -s /bin/bash tarasov-af):
$ id
uid=1028 gid=1006
groups=1000,1005,1006,1008,1032
P.S. I guess this issue began after a VM disk failure that left /etc/passwd and /etc/group broken; I restored them from backups and all previous accounts work well. I have this issue only with new accounts.
I've found the reason for this issue: user tarasov-af has more than 16 secondary groups; the first 15 groups work, the others don't. I set kernel.ngroups_max = 65535 in sysctl.conf on every computer in the cluster (GlusterFS) and on the SFTP VM, but nothing changed.
The issue lies in the GlusterFS client, which cannot handle more than 15 secondary groups.
# glusterfs --version
glusterfs 3.2.7 built on Sep 29 2013 03:28:05
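Given that finding, a pragmatic workaround (a sketch under the constraint above, not a GlusterFS fix) is to keep each new SFTP account below the 16-group limit and verify the count when creating users:
# Count the user's groups (primary + secondary)
id -G tarasov-af | wc -w
# Trim the secondary group list to only what the account needs (15 max)
usermod -G masters,sftpusers,test-user tarasov-af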
