Unable to log in as root in Oracle Linux Docker container

I am trying to edit some files in a Docker container using
docker exec -it container_id bash
I am able to access the command line and the files, but I can't log in as the root
user. I tried all these commands:
root@Linux-Vostro-3250:~# docker exec -it MS1 bash
[oracle@b1c48eff3e2e base_domain]$ yum install nano
Loaded plugins: ovl
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Requirename'
You need to be root to perform this command.
[oracle@b1c48eff3e2e base_domain]$ su
bash: su: command not found
[oracle@b1c48eff3e2e base_domain]$ sudo
bash: sudo: command not found
[oracle@b1c48eff3e2e base_domain]$ su -
bash: su: command not found
[oracle@b1c48eff3e2e base_domain]$ su-
bash: su-: command not found
[oracle@b1c48eff3e2e base_domain]$
Can someone help me with this?
Thanks a lot!

docker exec supports a -u / --user option:
docker exec -it -u root MS1 bash
Source: the docker exec documentation.

I had to include the --workdir (-w) flag when running an Oracle Linux 7 (OL7) container:
docker exec -it -u root -w /root CONTAINER /bin/bash
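As a quick check that the exec session really is root (the container name follows the question above; id -u prints 0 for root):
$ docker exec -it -u root MS1 bash
[root@b1c48eff3e2e base_domain]# id -u
0
[root@b1c48eff3e2e base_domain]# yum install nano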

Perform the following commands:
1. docker exec -it containername bash
2. su - oracle
3. sqlplus
4. At the login prompts, enter:
Username: / as sysdba
Password: sys as sysdba
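For reference, the same flow as a single transcript (the container name and prompts are illustrative; sqlplus / as sysdba performs the equivalent OS-authenticated login without the username/password prompts):
$ docker exec -it containername bash
[oracle@container ~]$ su - oracle
[oracle@container ~]$ sqlplus / as sysdba
SQL>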

Related

Using SSH inside docker with correct file permissions?

There are a few posts on how to use Docker + SSH. There are also posts on how to edit files mounted in a docker container, such that editing them won't cause the permissions to become root.
I'm trying to combine the 2 things, so I can SSH into a docker container and edit files without messing up their permissions.
For the correct file permissions, I use:
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
in my docker-compose.yml and
docker compose -f commands/dev/docker-compose.yml run \
--service-ports \
--user $(id -u) \
develop \
bash
so that when I start the docker container, my user is the same user as my local computer.
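Pieced together, the relevant part of the compose service might look like this (a sketch; the service name develop and the 9002-to-22 port binding come from this question, while the image name is hypothetical):
services:
  develop:
    image: my-dev-image
    ports:
      - "9002:22"
    volumes:
      - /etc/passwd:/etc/passwd:ro
      - /etc/group:/etc/group:ro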
However, this breaks my SSH setup inside the Docker container:
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
# passwd -d ubuntu
apt install -y --no-install-recommends openssh-server vim-tiny sudo
# See: https://stackoverflow.com/questions/22886470/start-sshd-automatically-with-docker-container
sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
mkdir /var/run/sshd
bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
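# uncomment the ListenAddress and HostKey lines in sshd_config: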
ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUNLEVEL=1 dpkg-reconfigure openssh-server
ssh-keygen -A -v
update-rc.d ssh defaults
# Configure sudo
ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
Here I'm creating a user called ubuntu with password ubuntu for SSH-ing. This lets me ssh ubuntu@localhost using the password ubuntu.
The issue is that by mounting the /etc/passwd file into my container, I erase the ubuntu user inside the container. This means that when I try to SSH in with ssh -p 9002 ubuntu@localhost, the authentication fails (9002 is what I bind port 22 in the container to on the host).
Does anyone have a solution?
Here's a first-pass answer.
I can use:
useradd -rm -d /home/yourusername -s /bin/bash -g root -G sudo yourusername
instead of
useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
echo 'ubuntu:ubuntu' | chpasswd
Then I run the SSH server in the container with:
su root
/usr/sbin/sshd -D -o ListenAddress=0.0.0.0 -o PermitRootLogin=yes
I can ssh into the container as root (using the root password "root", which I set with RUN echo 'root:root' | chpasswd in the Dockerfile).
Then I can run su yourusername to switch to my user.
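Put together, the login flow looks like this (a sketch; 9002 is the host port bound to the container's port 22, as in the question):
$ ssh -p 9002 root@localhost     # password: root
# su yourusername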
While this works, it is pretty annoying, since I need to bake the user name into the Docker image.
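One way to avoid hard-coding the name (not from this thread; a sketch using a Dockerfile build argument) is to pass it in at build time:
ARG USERNAME=ubuntu
RUN useradd -rm -d /home/${USERNAME} -s /bin/bash -g root -G sudo ${USERNAME}
and then build with:
docker build --build-arg USERNAME=$(id -un) .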

Unable to run command chsh -s /bin/bash ${USERNAME}

I have a Dockerfile for a customized image myimage derived from some-debian-image (which in turn derives from upstream Debian):
FROM some-debian-image AS myimage
USER root:root
...........................
RUN chsh -s /bin/bash ${USERNAME}
docker build fails with:
Password: chsh: PAM: Authentication failure
However, it does not fail with the upstream image:
FROM debian:bullseye AS myimage
USER root:root
...........................
RUN chsh -s /bin/bash ${USERNAME}
The developers who built some-debian-image have done something extra with /etc/passwd, and it has the content:
root:x:0:0:root:/root:/usr/sbin/nologin
May I please know how to successfully run this command:
RUN chsh -s /bin/bash ${USERNAME}
Comparing the image setups where this command works and where it does not, I found that in the working setup, sudo su succeeds without any password:
$ sudo su
#
In contrast, the setup where I am facing the issue asks for a password when I run sudo su.
May I please know what changes I should make so that sudo su does not ask for a password?
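One workaround to consider (an assumption, not from this thread): change the shell without going through chsh's PAM stack at all, e.g. with usermod, which does not prompt for authentication when run as root:
RUN usermod -s /bin/bash ${USERNAME}
For the password-less sudo su, the usual change is a NOPASSWD rule for the sudo group, much like the ex edit of /etc/sudoers shown earlier on this page:
RUN echo '%sudo ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers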

How to enter a pod as root?

Currently I enter the pod as the mysql user using the command:
kubectl exec -it PODNAME -n NAMESPACE bash
I want to enter a container as root.
I've tried the following commands:
kubectl exec -it PODNAME -n NAMESPACE -u root ID /bin/bash
kubectl exec -it PODNAME -n NAMESPACE -u root ID bash
There must be a way.
:-)
I found the answer.
You cannot log into the pod directly as root via kubectl.
You can do it via the following steps:
1) Find out what node it is running on: kubectl get po -n [NAMESPACE] -o wide
2) SSH to that node
3) Find the Docker container: sudo docker ps | grep [NAMESPACE]
4) Log into the container as root: sudo docker exec -it -u root [DOCKER ID] /bin/bash
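Spelled out as a transcript (placeholders in brackets; this assumes the node's container runtime is Docker, as in the steps above):
$ kubectl get po -n [NAMESPACE] -o wide    # the NODE column shows where the pod runs
$ ssh [NODE]
$ sudo docker ps | grep [NAMESPACE]
$ sudo docker exec -it -u root [DOCKER ID] /bin/bash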
Actually, there is already a way to connect via the kubectl-plugins addon for kubectl. I found this solution in a reply to a related question:
git clone https://github.com/jordanwilson230/kubectl-plugins.git
cd kubectl-plugins
./install-plugins.sh
source ~/.bash_profile
kubectl ssh -u root suse
Connecting...
Pod: suse
Namespace: NONE
User: root
Container: NONE
Command: /bin/sh
If you don't see a command prompt, try pressing enter.
sh-5.0#
This gives you SSH access as root to a Kubernetes pod.
For those on Windows using minikube:
First you need to SSH into minikube:
minikube ssh --user root
Then you need to find the desired Docker container:
docker ps | grep NAME_POD
Copy the fully qualified Docker container name, then use docker exec:
sudo docker exec -it -u root FQDN_CONTAINER bash
In my case it was:
sudo docker exec -it -u root k8s_jupyter_my-jupyter-0_default_f05e2913-f1fd-4084-a8e8-e783519d4a71_0 bash
After that I had full root access in bash inside the pod.

How to docker exec a shell builtin in a Docker container, specifically on an Ubuntu image/container

Thank you for reading my post.
Problem:
# docker ps
CONTAINER ID IMAGE COMMAND
35c8b832403a ubuntu1604:1 "sh -c /bin/sh"
# docker exec -i -t 35c8b832403a type type
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"type\": executable file not found in $PATH"
# Dockerfile
FROM ubuntu:16.04
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && apt-get -y upgrade
ENTRYPOINT ["sh", "-c"]
CMD ["/bin/bash"]
Description:
My objective is to get the type shell builtin to execute by writing docker exec as below:
docker exec -i -t 35c8b832403a type type (FAILED)
NOT
docker exec -i -t 35c8b832403a sh -c "type type" (PASSED)
I have googled around and made some modifications in the container (changing /etc/profile, /etc/environment, bashrc) but failed.
From the Docker documentation itself, it states that:
COMMAND will run in the default directory of the container. If the
underlying image has a custom directory specified with the WORKDIR
directive in its Dockerfile, this will be used instead.
COMMAND should be an executable, a chained or a quoted command will
not work. Example: docker exec -ti my_container "echo a && echo
b" will not work, but docker exec -ti my_container sh -c "echo a &&
echo b" will.
But it seems it IS POSSIBLE, since I am able to get the right output FROM DOCKER FEDORA (Dockerfile: FROM fedora:25):
# docker ps
CONTAINER ID IMAGE COMMAND
2a17b2338518 fedora25:1 "sh -c /bin/sh"
# docker exec -i -t 2a17b2338518 type type
type is a shell builtin
Question:
Is there any way to enable this on Ubuntu docker? Image/Container tweaks? Vagrantfile Configuration? Please help.
Others:
Using docker run, I am able to get the right output because of the ENTRYPOINT in the Dockerfile. However, the image needs to be saved instead of exported.
Just in case: to be able to execute type as you expect, it would need to be an executable on the PATH. Being a shell builtin wouldn't help because, as you said, you don't want to execute /bin/bash -c 'type type'.
If you want to have type executed as a builtin shell command, this means you need to execute a shell (/bin/bash or /bin/sh) and then run type type in it, making it /bin/bash -c 'type type'.
After all, as @Henry said, docker exec takes the full command that will be executed, and there is no place for CMD or ENTRYPOINT in it.
CMD and ENTRYPOINT are meaningless if you run docker exec. The remaining arguments are taken as the command and executed inside the already existing container.
Maybe you wanted to use docker run?
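For what it's worth, the quoted-shell form from the question is the reliable way to reach a builtin on the Ubuntu image, since it starts a shell first (output as reported in the question for the Fedora case):
# docker exec -i -t 35c8b832403a sh -c 'type type'
type is a shell builtin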

Docker tool in Jenkins container (with mounted Docker socket) is not finding a Docker daemon to connect to

I just started a Jenkins docker container with a mounted docker socket like the following:
docker run -d \
--publish 8080:8080 \
--publish 50000:50000 \
--volume /my_jenkins_home:/var/jenkins_home \
--volume /var/run/docker.sock:/var/run/docker.sock \
--restart unless-stopped \
--name my_jenkins_container \
company/my_jenkins:latest
Then I bash into the container like this:
docker exec -it my_jenkins_container bash
A tool 'docker' command in a Jenkins pipeline script has automatically installed a Docker binary at the following path: /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker
However, when I try to run Docker commands from that Docker binary (assuming that it will connect with the Docker socket that has been mounted at /var/run/docker.sock) it returns the following error:
$ /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
How can I ensure that this Docker binary (the binary that has been automatically installed via the Jenkins' tool 'docker' command) runs its Docker commands by connecting to the mounted Docker socket at /var/run/docker.sock?
Short Answer:
The file permissions of the mounted Docker socket file had to be revised.
Long Answer:
When I simply tried to execute /path/to/dockerTool/bin/docker ps -a on the Docker container, it was producing an error.
$ docker exec -it my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Then, when I tried to execute /path/to/dockerTool/bin/docker ps -a with user=root, it worked fine.
$ docker exec -it --user=root my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9dd56411efe company/my_jenkins:latest "/bin/tini -- /usr/lo" 49 seconds ago Up 49 seconds 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp my_jenkins_container
So it means I just needed to set the right permissions on the Docker socket. All I had to do was chgrp the socket file to the jenkins group so that jenkins group users can read/write to it (the before & after of the chgrp command is included here):
$ docker exec -it my_jenkins_container bash -c "ls -l /var/run/docker.sock"
srw-rw---- 1 root 999 0 Jan 15 08:29 /var/run/docker.sock
$ docker exec -it --user=root my_jenkins_container bash -c "chgrp jenkins /var/run/docker.sock"
$ docker exec -it my_jenkins_container bash -c "ls -l /var/run/docker.sock"
srw-rw---- 1 root jenkins 0 Jan 15 08:29 /var/run/docker.sock
After that, executing /path/to/dockerTool/bin/docker ps -a as a non-root user worked fine
$ docker exec -it my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9dd56411efe company/my_jenkins:latest "/bin/tini -- /usr/lo" 3 minutes ago Up 3 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp my_jenkins_container
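A related approach (an assumption, not part of the answer above) is to avoid the in-container chgrp entirely by granting the container the socket's group at run time with docker run --group-add:
$ stat -c '%g' /var/run/docker.sock    # on the host, prints the socket's gid, e.g. 999
$ docker run -d --group-add 999 ... company/my_jenkins:latest
This leaves the socket's permissions untouched and survives container re-creation.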
