How to run sudo su - in a Dockerfile - linux

I am trying to RUN sudo su - inside the Dockerfile and I get this error:
/bin/sh: 1: sudo: not found
This is what my Dockerfile looks like:
FROM ubuntu:18.04
RUN sudo su -
RUN apt update && install openjdk-8-jdk
RUN wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add - && sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
RUN apt update && apt install jenkins
RUN curl -fsSL get.docker.com | /bin/bash
RUN usermod -aG docker jenkins && systemctl restart jenkins
This error appears when I try to build it:
docker build -t jenkins .
Can someone help me?

A Dockerfile runs as the root user by default, so there is no need for any sudo command.
Since the commands have no -y flags to answer the install prompts, it looks like you have simply typed the steps of a manual installation into a script; that will never work non-interactively. Also, in a container the application needs to run as PID 1, which systemctl will not do.
After going through a basic tutorial on Docker you will find out why.
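For illustration, here is a minimal sketch of a Dockerfile that installs Jenkins without sudo or systemctl; the exact package names and the jenkins.war path are assumptions that vary by package version, so verify them for your setup:
FROM ubuntu:18.04
# The build runs as root by default, so no sudo is needed
RUN apt-get update && apt-get install -y openjdk-8-jdk wget gnupg
RUN wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | apt-key add - \
    && echo "deb http://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list
RUN apt-get update && apt-get install -y jenkins
# Run Jenkins in the foreground as PID 1 instead of via systemctl
USER jenkins
CMD ["java", "-jar", "/usr/share/java/jenkins.war"]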

The RUN sudo su - instruction does nothing except create an extra layer with no useful effect.

$ cat Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install openjdk-8-jdk -y
If you want to change the user privileges, use the USER instruction in the Dockerfile.
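For example, a minimal sketch (the appuser name is hypothetical; the useradd line creates it in the image):
FROM ubuntu:18.04
RUN apt-get update && apt-get install openjdk-8-jdk -y
RUN useradd -m appuser
# All later instructions, and the container itself, run as this user
USER appuser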

Related

Unable to install package using docker : kpt not found or ambiguous repo/dir#version specify '.git' in argument

I am using the script below and it gives me the error /bin/sh: 1: kpt: not found
FROM nginx
RUN apt update
RUN apt -y install git
RUN apt -y install curl
# install kpt package
RUN mkdir -p ~/bin
RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output ~/bin/kpt && chmod u+x ~/bin/kpt
RUN export PATH=${HOME}/bin:${PATH}
RUN SRC_REPO=https://github.com/kubeflow/manifests
RUN kpt pkg get $SRC_REPO/tf-training#v1.1.0 tf-training
But if I create the image using
FROM nginx
RUN apt update
RUN apt -y install git
RUN apt -y install curl
and perform
docker exec -it container_name bash
and do the task manually, then I am able to install the kpt package.
The error changes if I provide the full path ~/bin/kpt:
Error: ambiguous repo/dir#version specify '.git' in argument
FROM nginx
RUN apt update
RUN apt -y install git
RUN apt -y install curl
RUN mkdir -p ~/bin
RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output ~/bin/kpt && chmod u+x ~/bin/kpt
RUN export PATH=${HOME}/bin:${PATH}
# Below line of code is to ensure that kpt is installed and working fine
RUN ~/bin/kpt pkg get https://github.com/ajinkya101/kpt-demo-repo.git/Packages/Nginx
RUN SRC_REPO=https://github.com/kubeflow/manifests
RUN ~/bin/kpt pkg get $SRC_REPO/tf-training#v1.1.0 tf-training
What is happening during docker build that prevents kpt from being found or run?
First, make sure SRC_REPO is declared as a Dockerfile environment variable. Each RUN instruction executes in its own shell, so RUN export ... and RUN SRC_REPO=... do not persist into later instructions.
ENV SRC_REPO=https://github.com/kubeflow/manifests.git
Note the ENV instruction, and make sure the URL ends with .git.
As mentioned in kpt get:
In most cases the .git suffix should be specified to delimit the REPO_URI from the PKG_PATH, but this is not required for widely recognized repo prefixes.
Second, to be sure, specify the full path of kpt, without ~ or ${HOME}.
/root/bin/kpt
For testing, add a RUN id -a && pwd to be sure who and where you are when using the nginx image.
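Putting this together, a corrected sketch of the Dockerfile might look like the following; the kpt download URL and the #v1.1.0 version specifier are kept exactly as in the question:
FROM nginx
RUN apt update && apt -y install git curl
RUN mkdir -p /root/bin
RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output /root/bin/kpt && chmod u+x /root/bin/kpt
# ENV persists across instructions; RUN export and RUN VAR=... do not
ENV SRC_REPO=https://github.com/kubeflow/manifests.git
RUN /root/bin/kpt pkg get ${SRC_REPO}/tf-training#v1.1.0 tf-training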

Nextcloud docker install with SSH access enabled

I'm trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work on reboot. Having run through many Dockerfile and docker-compose combinations, I can't seem to get this to work. I've tried using an entrypoint.sh script with the Dockerfile, but it wants a CMD at the end, and then it doesn't execute the "normal" Nextcloud startup.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$@"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then - test just the required parts of your Dockerfile addressing the issue - simplify it just to the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container with port 8080 published, in detached mode, and give the associated container the name nextcloud.
Then - with the container running - you should be able to enter into it using the following command
docker container exec -u 0 -it nextcloud bash
as root.
Now that you are in, you should be able to startup the ssh server via the command
service ssh start
Having followed a set of steps like this to confirm that you can indeed start up an ssh server in the nextcloud container, begin adding back in your additional logic (beginning with the original Dockerfile).
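Once ssh is confirmed working, one way to wire it back in is an entrypoint wrapper that starts sshd and then hands off to the image's normal startup. The sketch below assumes the nextcloud base image keeps its entrypoint at /entrypoint.sh with apache2-foreground as the default command; verify both against the base image you use.
entrypoint-ssh.sh:
#!/bin/sh
# Start the ssh server in the background
service ssh start
# Hand off to the original nextcloud entrypoint with the original CMD
exec /entrypoint.sh "$@"
Dockerfile:
FROM nextcloud:latest
RUN apt update && apt install -y openssh-server
COPY entrypoint-ssh.sh /entrypoint-ssh.sh
RUN chmod +x /entrypoint-ssh.sh
ENTRYPOINT ["/entrypoint-ssh.sh"]
# Keep the base image's default command so Nextcloud still starts
CMD ["apache2-foreground"]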

Docker run doesn't work as part of a terraform startup script

I'm using terraform to provision a bunch of machines at once. Each one should run the same docker container. The startup script looks like this:
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt-get update -y
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common -y
curl https://get.docker.com | sh && sudo systemctl --now enable docker
sudo docker build -t dockertest /path/to/dockerfile
sudo docker run --gpus all -it -v /path/to/mount:/usr/src/app dockertest script.py -b 03
Basically it installs Docker, builds the image, and then runs the container.
Only the last line doesn't work. If I ssh into the machine, it works fine. But not as part of the startup script.
How can I get it to work as part of the startup script? It's a hassle to ssh into each of a swarm of machines.
If anyone else encounters this problem: the solution is simply to take -it out of the docker run command. The -t flag asks for a TTY, and there is no terminal attached when the script runs at boot.
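The last line of the startup script then becomes (unchanged from the question apart from removing -it):
sudo docker run --gpus all -v /path/to/mount:/usr/src/app dockertest script.py -b 03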

Docker run only works after build

I can build and run a container with
docker build -t hopperweb:v5-full -f Dockerfile . &&
docker run -p 127.0.0.1:3000:8080 --rm -ti hopperweb:v5-full
However when I run the container I get this error: standard_init_linux.go:211: exec user process caused "exec format error"
docker run -p 127.0.0.1:3000:8080 --rm -ti hopperweb:v5-full
Why does it work when it is run after &&?
I can run the image with bash: docker run -p 127.0.0.1:3000:8080 --rm -ti hopperweb:v5-full bash without issue.
This is my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install --yes curl
RUN apt-get install --yes sudo ## maybe not necessary, but helpful
RUN apt-get install --yes gnupg
RUN apt-get install --yes git ## not necessary, but helpful
RUN apt-get install --yes vim ## not necessary, but helpful
## INSTALL NPM
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo 'deb https://dl.yarnpkg.com/debian/ stable main' | sudo tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update
RUN apt-get install --yes yarn
RUN apt-get install --yes npm
## COPY IN APP FILES
RUN mkdir /app
COPY hopperweb/ /app/hopperweb/
RUN chmod +x /app/hopperweb/start.sh
RUN /app/hopperweb/start.sh
The contents of start.sh:
#!/bin/bash
cd /app/hopperweb/
yarn start
In your first command, docker run is never executed: the last instruction, RUN /app/hopperweb/start.sh, runs during the build and never terminates. So you were still running docker build.
Change the following line
RUN /app/hopperweb/start.sh
to
CMD /app/hopperweb/start.sh
Do not confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
See: https://docs.docker.com/engine/reference/builder/#cmd
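For reference, the tail of the corrected Dockerfile would look like this (exec form shown; the shell form above works as well):
## COPY IN APP FILES
RUN mkdir /app
COPY hopperweb/ /app/hopperweb/
RUN chmod +x /app/hopperweb/start.sh
# CMD records the container's start command instead of running it at build time
CMD ["/app/hopperweb/start.sh"]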

Installing Docker.io on Ubuntu 14.04LTS

I'm running a virtual machine in Windows Azure with the prebuild image for Ubuntu 14.04 LTS.
When I want to install Docker.io as described here:
http://blog.docker.io/2014/04/docker-in-ubuntu-ubuntu-in-docker/
The installation works, but when I'm running:
sudo docker.io pull ubuntu
an error is thrown:
Cannot connect to the Docker daemon. Is docker -d running on this host?
Can anyone help, or has anyone had a similar problem?
P.S.: Can anyone with a high reputation create a Tag for Ubuntu-14.04?
Evidently the docker daemon is not running. You will want to check /etc/default/docker.conf for proper configuration and issue
sudo service docker.io start
or
sudo service docker start
depending on what the service is called.
Adding myself to the docker group:
sudo usermod -a -G docker myuser
and rebooting the machine worked for me. This solution is discussed in: https://github.com/docker/docker/issues/5314
On Ubuntu 14.04, the docker.io package installs Docker 0.9.1.
According to the documentation, to install the current version use these commands:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker
There is also a simple script available to help with this process:
$ curl -s https://get.docker.io/ubuntu/ | sudo sh
Alternatively, check the azure-docker-registry project for an example of how to automate Azure provisioning and Docker container deployment. For instance, this Ansible playbook:
- name: create docker data directory
  file: path=/mnt/data/docker state=directory
- name: store docker files in data disk
  file: src=/mnt/data/docker dest=/var/lib/docker state=link
- name: add repository key
  command: creates=/etc/apt/sources.list.d/docker.list apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
- name: copy repository source file
  copy: src=docker.list dest=/etc/apt/sources.list.d/docker.list
- name: install docker package
  apt: name=lxc-docker update_cache=yes state=present
Also make sure to symlink the docker.io binary to docker to use the tutorials/documentation without rewriting every command.
ln -s /usr/bin/docker.io /usr/bin/docker
Run docker -d to see if it shows any error messages.
If apparmor is missing, install it with sudo apt-get install apparmor.
Then run sudo service docker start.
Hard to say, but sometimes the official Docker installation procedure fails on Ubuntu 14.04. In that case, one can simply install Docker using the commands below [quick and dirty]:
sudo apt-get update
sudo apt-get -y install docker.io
