I'm pretty new to Docker, and completely baffled as to why my container exits upon start.
I've built an Ubuntu image that starts Apache and fail2ban on boot. I'm unsure whether it's an issue with the Dockerfile or with the command I'm running to start the container.
I've tried:
docker run -d -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image /bin/bash
The Dockerfile is as follows:
FROM ubuntu:latest
RUN \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y iptables && \
apt-get install -y software-properties-common && \
apt-get install -y apache2 fail2ban && \
rm -rf /etc/fail2ban/jail.conf
ADD index.html /var/www/html/
ADD jail.conf /etc/fail2ban/
ENV HOME /root
WORKDIR /root
EXPOSE 80 443
ENTRYPOINT service apache2 start && service fail2ban start
CMD ["bash"]
I can jump into the container itself with:
docker exec -it image /bin/bash
But the moment I try to run it whilst staying within the host, it fails. Help?
Considering your question, where you mention "upon boot", I think it would be useful to read https://docs.docker.com/config/containers/multi-service_container/.
In a nutshell, Docker containers do not "boot" like a normal system; they start a single process and run it until it exits.
So, if you want to start two processes, you can use a wrapper script as explained at the link above.
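A minimal sketch of such a wrapper, assuming you save it as start.sh and COPY it into the image (service names taken from your Dockerfile, paths may need adjusting):
#!/bin/bash
# start.sh - start both services, then keep a foreground process running
# so the container does not exit immediately.
set -e

service apache2 start
service fail2ban start

# Follow the Apache logs in the foreground; when this process ends, the container stops.
tail -f /var/log/apache2/*.log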
Remove the following line from your Dockerfile:
CMD ["bash"]
Also, when you want to get a shell in the container instead, you have to override the ENTRYPOINT defined in your Dockerfile:
docker run -it --entrypoint "/bin/bash" image
See Dockerfile "ENTRYPOINT" documentation for more details
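Putting it together, a sketch of how the end of the Dockerfile could look with the wrapper from above (here assumed to be saved as start.sh next to the Dockerfile):
COPY start.sh /start.sh
RUN chmod +x /start.sh
EXPOSE 80 443
ENTRYPOINT ["/start.sh"]
With this in place the container stays up as long as the tail process runs, and the --entrypoint override above still drops you into a shell when you need one.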
I'm trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work on reboot. Having run through many Dockerfile and docker-compose combinations, I can't seem to get this to work. I've tried using entrypoint.sh scripts with a Dockerfile, but it wants a CMD at the end and then it doesn't execute the "normal" Nextcloud start-up.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$#"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then test just the part of your Dockerfile that addresses the issue; simplify it to the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container on port 8080 in detached mode, and give the associated container the name of nextcloud.
Then, with the container running, you should be able to enter it as root using the following command:
docker container exec -u 0 -it nextcloud bash
Now that you are in, you should be able to start up the ssh server via the command
service ssh start
Having followed a set of steps like this to confirm that you can indeed start up an ssh server in the nextcloud container, begin adding your additional logic back in (beginning with the original Dockerfile).
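Once that works, one way to wire it back in is a wrapper entrypoint that starts sshd and then hands off to the image's own entrypoint. A sketch follows; the file name custom-entrypoint.sh is mine, and /entrypoint.sh with apache2-foreground are what the official nextcloud image uses as its default entrypoint and command at the time of writing, which is worth confirming with docker image inspect nextcloud.
custom-entrypoint.sh:
#!/bin/sh
# Start the ssh server, then exec the image's original entrypoint with the original CMD
service ssh start
exec /entrypoint.sh "$@"
Dockerfile:
FROM nextcloud:latest
RUN apt-get update && apt-get install -y openssh-server
COPY custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]
This keeps the normal Nextcloud start-up, because the original entrypoint and command still run after sshd is started.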
I'm using terraform to provision a bunch of machines at once. Each one should run the same docker container. The startup script looks like this:
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt-get update -y
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common -y
curl https://get.docker.com | sh && sudo systemctl --now enable docker
sudo docker build -t dockertest /path/to/dockerfile
sudo docker run --gpus all -it -v /path/to/mount:/usr/src/app dockertest script.py -b 03
Basically it installs docker and then builds the container and then runs it.
Only the last line doesn't work. If I ssh into the machine, it works fine. But not as part of the startup script.
How can I get it to work as part of the startup script? It's a hassle to ssh into each of a swarm of machines.
If anyone else encounters this problem: the solution is simply to take -it out of the docker run command.
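With the rest of the script from the question unchanged, the last line would then become:
sudo docker run --gpus all -v /path/to/mount:/usr/src/app dockertest script.py -b 03
The -t flag asks Docker to allocate a TTY, which isn't available when the script runs unattended during provisioning, so the command fails there but works in an interactive ssh session.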
I can build and run a container with
docker build -t hopperweb:v5-full -f Dockerfile . &&
docker run -p 127.0.0.1:3000:8080 --rm -ti hopperweb:v5-full
However when I run the container I get this error: standard_init_linux.go:211: exec user process caused "exec format error"
docker run -p 127.0.0.1:3000:8080 --rm -ti hopperweb:v5-full
Why is it working when it's run after &&??
I can run the image with bash: docker run -p 127.0.0.1:3000:8080 --rm -ti hopperweb:v5-full bash without issue.
This is my Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install --yes curl
RUN apt-get install --yes sudo ## maybe not necessary, but helpful
RUN apt-get install --yes gnupg
RUN apt-get install --yes git ## not necessary, but helpful
RUN apt-get install --yes vim ## not necessary, but helpful
## INSTALL NPM
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo 'deb https://dl.yarnpkg.com/debian/ stable main' | sudo tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update
RUN apt-get install --yes yarn
RUN apt-get install --yes npm
## COPY IN APP FILES
RUN mkdir /app
COPY hopperweb/ /app/hopperweb/
RUN chmod +x /app/hopperweb/start.sh
RUN /app/hopperweb/start.sh
The contents of start.sh:
#!/bin/bash
cd /app/hopperweb/
yarn start
In your first command, the docker run is never executed, as the last command (start.sh) is run during your build and it will never terminate. So you were still running docker build.
Change the following line
RUN /app/hopperweb/start.sh
to
CMD /app/hopperweb/start.sh
Do not confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
See: https://docs.docker.com/engine/reference/builder/#cmd
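For example, a minimal sketch of the end of that Dockerfile with the change applied (exec form shown; the shell form above works too):
COPY hopperweb/ /app/hopperweb/
RUN chmod +x /app/hopperweb/start.sh
# RUN executes at build time and commits the result into the image;
# CMD only records the command to run when a container is started.
CMD ["/app/hopperweb/start.sh"]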
I'm completely new to Linux and Docker concepts.
On my Windows machine I boot up CentOS 7 in VirtualBox.
While running docker-compose build I get:
/bin/sh: /usr/sbin/sshd-keygen: No such file or directory
How do I rectify it?
I tried to create a remote user
docker-compose.yml
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - "$PWD/jenkins_home:/var/jenkins_home"
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    networks:
      - net
networks:
  net:
Dockerfile
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user && \
echo "Thevenus987$" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
In the Dockerfile, change
RUN /usr/sbin/sshd-keygen
(CentOS 8 doesn't accept this command) to
RUN ssh-keygen -A
This works. I hope this solution works fine.
Change the FROM to centos:7.
Alternatively, replace RUN /usr/sbin/sshd-keygen with RUN ssh-keygen -A.
The Dockerfile should be like this:
FROM centos:7
# passwd and initscripts added
RUN yum -y install openssh-server && \
    yum install -y passwd && \
    yum install -y initscripts
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
#CMD /usr/sbin/sshd -D
CMD ["/usr/sbin/sshd", "-D"]
just use FROM centos:7 (instead of using centos8 base image)
and
yum install -y initscripts
Note: an updated initscripts bug fix and enhancement package, which fixes several bugs and adds one enhancement, is available for Red Hat Enterprise Linux 6/7.
You don't need to remove or tweak the line below at all:
RUN /usr/sbin/sshd-keygen
It will work.
To learn more about initscripts bug fix enhancement:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.5_technical_notes/initscripts
Change the base image FROM centos to FROM centos:7 and it will work
The problem is with this line in your Dockerfile:
RUN /usr/sbin/sshd-keygen
This is what you get when this line gets executed: /etc/rc.d/init.d/functions: No such file or directory.
/usr/sbin/sshd-keygen: command not found.
The init.d/functions file is specific to whatever distribution you're running; it contains functions used by most or all of the shell scripts stored in the /etc/init.d directory.
To try this yourself, simply pull the centos:7 image from Docker Hub and test the RUN steps from your Dockerfile as follows:
docker container run -i -t -d --name test centos:7
docker exec -it test bash
cd /etc/rc.d/init.d
ls -a
There is no file called functions in this directory.
In the centos:7 Docker image you simply have to install the initscripts package for this script to be available, so add these lines to your Dockerfile:
FROM centos:7
RUN yum install -y initscripts
FROM centos pulls the latest by default which does not include sshd-keygen.
You need to change your Dockerfile to:
FROM centos:7
...
&& yum install -y initscripts \
&& /usr/sbin/sshd-keygen
CMD ["/usr/sbin/sshd", "-D"]
Just change FROM centos to
FROM centos:7
That error happens because FROM centos used to give you CentOS 7 and now gives you CentOS 8.
Try the command below instead of RUN /usr/sbin/sshd-keygen,
and also, as others pointed out, use:
FROM centos:7
RUN ssh-keygen -A
1)
in Dockerfile change:
RUN /usr/sbin/sshd-keygen
to
RUN /usr/bin/ssh-keygen -A
2) or try
RUN sshd-keygen
if that is included and exists anywhere in your $PATH, it will execute.
I want to create a Docker image that starts the nano editor when run and lets users continue working in the container after closing nano.
For that I wrote the following Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nano
RUN mkdir /home/working
ENV EDITOR /bin/nano
WORKDIR /home/working
ENTRYPOINT /bin/nano
After running the container (docker run -it --rm test) nano starts, but after exiting nano the container closes. I want to keep working in the Ubuntu container after closing nano. What should I change in my Dockerfile?
I wouldn't set the ENTRYPOINT to nano; better to use /bin/bash.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nano
RUN mkdir /home/working
ENV EDITOR /bin/nano
WORKDIR /home/working
ENTRYPOINT /bin/bash
Now you can (if the container is running) access the container with
docker exec -it <containername> /bin/bash
and use nano as often as you want, for example to edit multiple files. After you close nano, /bin/bash is still running and the container doesn't exit.
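A quick usage sketch with the image name test from the question (notes.txt is just an example file name):
docker run -it --rm test      # the bash ENTRYPOINT drops you into /home/working
nano notes.txt                # edit a file, save, and quit nano
ls -l                         # still inside the container; keep working
exit                          # only now does the container stop (and --rm removes it)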