Unable to run command chsh -s /bin/bash ${USERNAME} - linux

I have a Dockerfile where I build a customized image, myimage, derived from some-debian-image (which is itself derived from upstream Debian).
FROM some-debian-image AS myimage
USER root:root
...........................
RUN chsh -s /bin/bash ${USERNAME}
docker build fails, saying:
Password: chsh: PAM: Authentication failure
However, it does not fail when building from the upstream image:
FROM debian:bullseye AS myimage
USER root:root
...........................
RUN chsh -s /bin/bash ${USERNAME}
The developers who built some-debian-image made some changes to /etc/passwd, which now contains:
root:x:0:0:root:/root:/usr/sbin/nologin
May I please know how to successfully run this command:
RUN chsh -s /bin/bash ${USERNAME}
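For what it's worth, chsh authenticates through PAM (hence the Password: prompt in the error above), whereas usermod writes the shell field of /etc/passwd directly and does not prompt. A minimal sketch of that alternative, assuming the build stage runs as root:
RUN usermod -s /bin/bash ${USERNAME}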
I am comparing the Docker image setups where the command works and where it does not, and I found that in the working setup, sudo su succeeds without any password:
$ sudo su
#
In contrast, the setup where I am facing the issue asks for a password when I run sudo su.
May I please know what changes I should make so that sudo su does not ask for a password?
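For reference, passwordless sudo is normally granted with a NOPASSWD rule; a minimal sketch using a sudoers drop-in (the file name and the %sudo group are my assumptions, not taken from the working image):
RUN echo '%sudo ALL=(ALL:ALL) NOPASSWD:ALL' > /etc/sudoers.d/nopasswd && chmod 0440 /etc/sudoers.d/nopasswd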

Related

Using SSH inside docker with correct file permissions?

There are a few posts on how to use Docker + SSH. There are also posts on how to edit files mounted in a Docker container such that editing them won't change their ownership to root.
I'm trying to combine the two, so I can SSH into a Docker container and edit files without messing up their permissions.
For the correct file permissions, I use:
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
in my docker-compose.yml and
docker compose -f commands/dev/docker-compose.yml run \
--service-ports \
--user $(id -u) \
develop \
bash
so that when I start the docker container, my user is the same user as my local computer.
However, this breaks my SSH setup inside the Docker container:
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
# passwd -d ubuntu
apt install -y --no-install-recommends openssh-server vim-tiny sudo
# See: https://stackoverflow.com/questions/22886470/start-sshd-automatically-with-docker-container
sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
mkdir /var/run/sshd
bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
ex +'%s/^#\(ListenAddress\)/\1/g' -scwq /etc/ssh/sshd_config
ex +'%s/^#\(HostKey .*ssh_host_.*_key\)/\1/g' -scwq /etc/ssh/sshd_config
RUNLEVEL=1 dpkg-reconfigure openssh-server
ssh-keygen -A -v
update-rc.d ssh defaults
# Configure sudo
ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
Here I'm creating a user called ubuntu with password ubuntu for SSH-ing. This lets me SSH in as ubuntu@localhost using the password ubuntu.
The issue is that by mounting the /etc/passwd file into my container, I erase the ubuntu user inside the container. This means that when I try to SSH in with ssh -p 9002 ubuntu@localhost, the authentication fails (9002 is what I bind port 22 in the container to on the host).
Does anyone have a solution?
Here's a first-pass answer.
I can use:
useradd -rm -d /home/yourusername -s /bin/bash -g root -G sudo yourusername
instead of
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
Then, I run the SSH server in the container with:
su root
/usr/sbin/sshd -D -o ListenAddress=0.0.0.0 -o PermitRootLogin=yes
I can ssh into the container as root (using the root password "root", which I set with RUN echo 'root:root' | chpasswd in the Dockerfile).
Then, I can do su yourusername to switch users.
While this works, it is pretty annoying since I need to bake the user name into the Docker container.
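A sketch of how the name could be parameterized instead, via a build argument (USERNAME and its dev default are placeholders of mine, not part of the setup above):
ARG USERNAME=dev
RUN useradd -rm -d /home/${USERNAME} -s /bin/bash -g root -G sudo ${USERNAME}
Building with docker build --build-arg USERNAME="$(id -un)" . would then make the container user match the local one.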

Script or tool for adding users to server with ssh keypair login

Is there a tool or a common script for adding users to a Linux server that also configures their SSH keys?
For example, I found I can automate creation of users with useradd or adduser, and it is even possible to set up an account with password login with e.g. adduser --password my_password. However, that still leaves me having to add the .ssh folders and files and set the correct permissions, which in my case leaves plenty of room for typos.
What I am looking for is something like
adduser --ssh user_public_key
where user_public_key is key provided to me by the new user.
I imagine there might be an existing tool for this, but my duckducking didn't turn up anything useful.
Try this (for CentOS; it also enables Docker):
#!/usr/bin/env bash
set -euo pipefail

DEV_GROUP="somegroup"
sudo groupadd --force "${DEV_GROUP}"

function adduser() {
    local var_user="$1"
    shift
    local var_ssh_pub_key="$*"
    id --user "${var_user}" &>/dev/null || sudo useradd --gid "${DEV_GROUP}" --groups wheel,docker "${var_user}"
    echo "${var_user} ALL=(ALL) NOPASSWD:ALL" | sudo tee "/etc/sudoers.d/${var_user}"
    sudo --user "${var_user}" mkdir -p "/home/${var_user}/.ssh"
    sudo --user "${var_user}" touch "/home/${var_user}/.ssh/authorized_keys"
    echo "${var_ssh_pub_key}" | sudo --user "${var_user}" tee "/home/${var_user}/.ssh/authorized_keys"
    # sshd ignores authorized_keys unless these permissions are strict
    sudo --user "${var_user}" chmod 700 "/home/${var_user}/.ssh"
    sudo --user "${var_user}" chmod 600 "/home/${var_user}/.ssh/authorized_keys"
}

adduser someuser ssh-rsa AAAAB3NzaC1.... user@host
You can do this in a script as root:
# mkdir ~username/.ssh
# cat user_public_key >> ~username/.ssh/authorized_keys
# chown -R username ~username/.ssh
# chmod 700 ~username/.ssh
# chmod 600 ~username/.ssh/authorized_keys
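If you prefer, install(1) collapses the mkdir/chown/chmod steps into single commands; a sketch with the same placeholder names, assuming a per-user group of the same name exists:
# install -d -m 700 -o username -g username ~username/.ssh
# install -m 600 -o username -g username user_public_key ~username/.ssh/authorized_keys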

Commands will not pass to CLI after logging into new user with sudo su - user

Obligatory 'first post' tag. Issue: commands will not pass to the command line after entering the password for sudo su - userB.
I am writing a bash script that needs to be run as a specific user. Ideally we would like this script to be runnable on our local workstations for ease of use. Here is the command I am running to test:
ssh -qt -p22 userA@hostname "whoami; sudo su - userB; whoami"
Expected:
userA
[sudo] password for userA:
userB
With this command I get the prompt for the sudo password, but once it is entered I am presented with a normal terminal where I can manually run commands. Once I Ctrl-D/exit, it runs the second whoami as userA and closes. I work in an extremely jailed environment, so sudo su -c and similar "run-as-root" commands do not work, and I cannot SSH directly as userB.
Is there any way to send the commands to userB by logging in with sudo su - userB?
su creates a subshell that reads commands from standard input by default; the second whoami executes only after that subshell exits. You can use the -c option to pass a command to it instead.
ssh -qt -p22 userA@hostname "whoami; sudo su - userB -c 'whoami'"
You can also use the -u option to sudo instead of using su:
ssh -qt -p22 userA@hostname "whoami; sudo -u userB whoami"
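If several commands must run as userB, handing them to a single shell keeps the quoting manageable; a sketch (the command list is illustrative):
ssh -qt -p22 userA@hostname "sudo -u userB bash -c 'whoami; id'"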

correct method to create user in alpine docker container so that sudo works correctly

When attempting to execute sudo in a Docker container using Alpine 3.8, I get the following output.
I am logged into the container using docker exec -i -t MYIMAGE /bin/bash
bash-4.4$ whoami
payara
bash-4.4$ sudo -s
bash-4.4$ whoami
payara
bash-4.4$ su root
su: incorrect password
bash-4.4$
My Dockerfile contains the following user-related commands to try and set up a user specifically for payara. I want sudo to work correctly though, if possible.
Dockerfile
FROM "alpine:latest"
ENV LANG C.UTF-8
ENV http_proxy 'http://u:p#160.48.234.129:80'
ENV https_proxy 'http://u:p#160.48.234.129:80'
RUN apk add --no-cache bash gawk sed grep bc coreutils git openssh-client libarchive libarchive-tools busybox-suid sudo
RUN addgroup -S payara && adduser -S -G payara payara
RUN echo "payara ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# CHANGE TO PAYARA USER
USER payara
... rest of setup.
From man sudo:
-s, --shell
Run the shell specified by the SHELL environment variable if it is set or the shell specified by the invoking user's password database entry.
You have neither the SHELL variable set nor a correct (interactive) default shell set in /etc/passwd for the payara user. This is because you are creating a system user (-S): such a user gets the default shell /bin/false, which just exits with code 1 (you can check with echo $? after an unsuccessful sudo -s).
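For illustration, the failure mode described above would look like this (transcript is illustrative):
bash-4.4$ sudo -s
bash-4.4$ echo $?
1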
You may overcome this in different ways:
a) specify the SHELL variable:
bash-4.4$ SHELL=/bin/bash sudo -s
bed662af470d:~#
b) use su, which will use the default root's shell:
bash-4.4$ sudo su -
bed662af470d:~#
c) just run the required privileged commands with sudo directly, without spawning an interactive shell.
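A fourth option, not in the original answer, is to create the user with an interactive shell in the first place; BusyBox adduser accepts -s, so a sketch of the Dockerfile line would be:
RUN addgroup -S payara && adduser -S -s /bin/bash -G payara payara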

Incorrect $HOME env variable for a newly created user

On my Ubuntu machine, I logged in as "olduser" and created "newuser" using the following command:
adduser --system --home /usr/share/newuser --no-create-home --ingroup newgroup --disabled-password --shell /bin/false newuser
This adds a new line:
newuser:x:104:1001::/usr/share/newuser:/bin/false
to my /etc/passwd file. But when I log into the machine as newuser, my home directory is set to /home/olduser.
echo $HOME
gives
/home/olduser
The same command mentioned above works as expected on a Debian machine but not on the Ubuntu machine.
Why could this be happening?
Edit
I tried changing the home directory using the command
usermod -m -d /usr/share/newuser newuser
This also didn't help.
Instead of changing the directory in /etc/passwd by hand, try usermod this way:
usermod -m -d /newhome/username username
Since you've already changed the file, log out and back in again for the change to take effect.
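To verify the change took effect, a quick check (a sketch; -s /bin/bash is needed here because the account's shell is /bin/false):
getent passwd newuser
sudo su - newuser -s /bin/bash -c 'echo $HOME'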
