I created a Flask application and am deploying it with Gunicorn in Docker.
I want to write a shell script that creates a Linux service inside the Docker container, so I can start, stop, and restart my Flask app inside the container.
Dockerfile:
FROM python
COPY ./app /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 9003
CMD ["gunicorn", "-b", "0.0.0.0:9003", "main:app"]
Build it and run it with: docker run -p 9003:9003 myapp.
What I have tried: I opened the container's CLI with docker exec -it <container_id> bash (you can find the container id with docker ps).
Step 1: changed directory to /etc/init.d and created a myservice.service file:
[Unit]
Description=Gunicorn instance to serve app
[Service]
WorkingDirectory=/home/app
ExecStart=gunicorn --workers 3 --bind 0.0.0.0:9003 unix:app.sock -m 007 main:app
Restart=always
[Install]
WantedBy=multi-user.target
Step 3: enabled the service by first running cd /etc/rc3.d and then ln -s ../init.d/{SERVICENAME} S95{SERVICENAME}
Step 4: gave it permission: chmod +x /etc/init.d/{SERVICENAME}
Step 5: started the service: service myservice start
But this gives me an error:
Docker container OS: Debian 10
Why is it giving me "Unit not found"? Any idea how to resolve this?
You are creating the file in the wrong directory and creating unrelated links. You seem to be mixing rc-scripts with systemd; they are not alike, they are completely different. Toss the rc-scripts aside and forget /etc/rc.d ever existed. Re-read and follow a good tutorial on how to add a systemd service file to your system.
want to write shell script to create linux service inside docker container
This will make no difference, because the container you are running does not run systemd - you run only CMD ["gunicorn"...], so your service will never be running. Typically in Docker, multiple services are run with a shell script or, in more complicated cases, with supervisord. In your case there is no need to set anything up - just let Docker run the service as the container's PID 1 process; it's only one process. Re-research how Docker works and research how services are written in Docker.
What you may want to create is a service file on host, not inside docker container.
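As a rough sketch of that host-side approach, the unit below starts the questioner's image on boot (the image name myapp, container name, and port are taken from the question; the file path and exact options are my assumptions):

```ini
# /etc/systemd/system/myapp.service -- on the HOST, not inside the container
[Unit]
Description=Flask app in Docker
Requires=docker.service
After=docker.service

[Service]
Restart=always
# --rm so a stale stopped container does not block the next start
ExecStart=/usr/bin/docker run --rm --name myapp -p 9003:9003 myapp
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload` and `systemctl enable --now myapp.service` on the host; the container itself keeps running Gunicorn as PID 1.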
Thanks for the answer.
I am using Windows 10.
Please see the references for what I was doing.
I created myproject.service in /etc/systemd/system inside the Docker container:
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
This newly created service is not picked up, and when I try to start it with service myproject start, it gives the error unrecognized service: myproject.
How will the OS know that a new service has been created, and how do I reload it?
In Linux, running daemon-reload picks up all newly created services. But in the case of Docker, how do I do this?
Reference: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04
For rc script, Reference: https://unix.stackexchange.com/questions/106656/how-do-services-in-debian-work-and-how-can-i-manage-them
Related
I have a container with a docker-compose file like this:
version: '3.5'
services:
  app:
    build:
      context: app
    restart: always
It launches a node app with docker-compose run -d --name my-app app node myapp.js.
The app is made to either run to completion or throw, and the goal is to have Docker restart it in an infinite loop, regardless of the exit code. I'm unsure why, but it doesn't restart it.
How can I debug this? I have no clue what exit code node is sending, nor do I know which exit codes Docker uses to decide whether to restart or not.
I am also on Mac; I haven't tested on Linux yet. Edit: it does restart on Linux; I don't have another Mac to see if the behavior is isolated to mine.
It is important to understand the following two concepts:
Ending your Node app doesn't mean the end of your container. Your container runs a shared process from your OS, and your Node app is only a subprocess of that (assuming your application runs as a daemon).
The restart option indicates the "starting" policy - it will never terminate and start your container again.
Having said that, what you need is a way to actually restart your container from within the application. The best way to do this is via Docker healthchecks:
https://docs.docker.com/engine/reference/builder/#healthcheck
Or, here are some answers on restarting a container from within the application.
Stopping docker container from inside
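As a sketch of the healthcheck route (the port and the /health endpoint are my assumptions, not something from the question):

```dockerfile
# Mark the container unhealthy once the app stops responding. Note that
# Docker itself does not restart unhealthy containers; something watching
# health status (e.g. an autoheal-style sidecar, or swarm) has to act on it.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
```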
From the GitHub issue it seems like it does not respect `--restart`, and from @Charlie's comment it seems like the behavior varies from platform to platform.
The docker-compose run command is for running "one-off" or "adhoc" tasks. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
docker-compose run
Also, if it is like docker run -it, then I am not seeing an option for restart=always, but it should then respect the `restart` option in compose.
Usage:
run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] [-l KEY=VALUE...]
SERVICE [COMMAND] [ARGS...]
Options:
-d, --detach Detached mode: Run container in the background, print
new container name.
--name NAME Assign a name to the container
--entrypoint CMD Override the entrypoint of the image.
-e KEY=VAL Set an environment variable (can be used multiple times)
-l, --label KEY=VAL Add or override a label (can be used multiple times)
-u, --user="" Run as specified username or uid
--no-deps Don't start linked services.
--rm Remove container after run. Ignored in detached mode.
-p, --publish=[] Publish a container's port(s) to the host
--service-ports Run command with the service's ports enabled and mapped
to the host.
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
-v, --volume=[] Bind mount a volume (default [])
-T Disable pseudo-tty allocation. By default `docker-compose run`
allocates a TTY.
-w, --workdir="" Working directory inside the container
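By contrast, containers started with docker-compose up -d do get the restart policy applied. A minimal sketch of the same service started that way (baking the command into the compose file instead of passing it to run is my suggestion, not from the question):

```yaml
# docker-compose.yml
version: '3.5'
services:
  app:
    build:
      context: app
    restart: always
    command: node myapp.js   # instead of passing it to `docker-compose run`
```

Then `docker-compose up -d app` starts it with restart: always in effect.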
I'm trying to build a Dockerfile in which I first download and install the Cloud SQL Proxy, before running nodejs.
FROM node:13
WORKDIR /usr/src/app
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
COPY . .
RUN npm install
EXPOSE 8000
RUN cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
CMD node index.js
When building the Dockerfile, I don't get any errors. Also, the file serviceaccount.json is included and is found.
When running the image and checking the logs, I see that the connection in my nodejs app is refused, so there must be a problem with the Cloud SQL Proxy. Also, I don't see any output from the Cloud SQL Proxy in the logs, only from the nodejs app. When I create a VM and install both packages separately, it works; I get output like "ready for connections".
So somehow my Dockerfile isn't correct, because the Cloud SQL Proxy is not installed or running. What am I missing?
Edit:
I got it working, but I'm not sure this is the correct way to do it.
This is my Dockerfile now:
FROM node:13
WORKDIR /usr/src/app
COPY . .
RUN chmod +x wrapper.sh
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN npm install
EXPOSE 8000
CMD ./wrapper.sh
And this is my wrapper.sh file:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
sleep 5
node index.js
fg %1
When I remove the sleep 5, it does not work, because the server is already running before the cloud_sql_proxy connection is established. With sleep 5, it works.
Is there any other/better way to wait until the first command is completely done?
RUN commands are used to do things that change the file system of the image, like installing packages. They are not meant to run a process when you start a container from the resulting image, as you are trying to do. A Dockerfile is only used to build a static container image. When you run this image, only the arguments you give to the CMD instruction (node index.js) are executed inside the container.
If you need to run both cloud_sql_proxy and node inside your container, put them in a shell script and run that shell script as part of CMD instruction.
See Run multiple services in a container
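On the "better than sleep 5" point from the question: the fixed delay can be replaced with a helper that polls until the dependency is actually reachable. A minimal sketch (the helper name wait_for is mine, not a standard tool):

```shell
#!/bin/bash
# wait_for TIMEOUT CMD...: run CMD once per second until it succeeds
# or TIMEOUT seconds have passed; returns non-zero on timeout.
wait_for() {
  local timeout=$1; shift
  local waited=0
  until "$@" 2>/dev/null; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
}

# In wrapper.sh this could replace `sleep 5`, e.g.:
#   ./cloud_sql_proxy -instances=... -credential_file=serviceaccount.json &
#   wait_for 30 nc -z 127.0.0.1 5432
#   node index.js
```

This way the app starts as soon as the proxy's port accepts connections, rather than after an arbitrary delay.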
You should ideally have a separate container per process. I'm not sure what cloud_sql_proxy does, but probably you can run it in its own container and run your node process in its own container and link them using docker network if required.
You can use docker-compose to manage, start and stop these multiple containers with single command. docker-compose also takes care of setting up the network between the containers automatically. You can also declare that your node app depends on cloud_sql_proxy container so that docker-compose starts cloud_sql_proxy container first and then it starts the node app.
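A sketch of that compose layout (the proxy image name and paths are my assumptions; check the Cloud SQL Proxy docs for the current image and flags):

```yaml
# docker-compose.yml: proxy and app as separate containers
version: '3'
services:
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy   # assumed official proxy image
    command: /cloud_sql_proxy -instances=[project]:[region]:[instance]=tcp:0.0.0.0:5432 -credential_file=/config/serviceaccount.json
    volumes:
      - ./serviceaccount.json:/config/serviceaccount.json
  app:
    build: .
    depends_on:
      - cloud-sql-proxy
    ports:
      - "8000:8000"
```

Note that depends_on only orders container startup; the app should still retry its database connection (or poll the proxy's port) rather than assume the proxy is ready.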
I've downloaded two docker containers and already configured them.
So, now all I want is to start them on system startup.
They are in a path like
/home/user/docker-mailserver
/home/user/docker-webserver
Hosted on Ubuntu 18.04.01 (x64).
On boot those docker containers are not running.
On login, those docker containers are starting.
I already tried to do something like
docker run -it --restart unless-stopped fancydockercontainer:latest
docker run -dit --restart unless-stopped fancydockercontainer:latest
But then when I do docker ps, there were new containers added to the pool.
Is there a way to "re-route" the start process of those containers to system start without completely deleting/removing them?
Addition:
I started them with docker-compose up -d mailserver.
After #KamilCuk gave the hint to solve this with a service, this was a possible solution.
It looks like this:
Create the service file with:
nano /etc/systemd/system/docker-mail.service
and put something like this in the file:
[Unit]
Description=Docker Mailserver
Requires=docker.service
After=docker.service
[Service]
Restart=always
RemainAfterExit=yes
WorkingDirectory=/home/user/docker-mailserver
ExecStart=/usr/bin/docker-compose up -d mail
ExecStop=/usr/bin/docker-compose stop mail
[Install]
WantedBy=default.target
Add the new service to systemd with systemctl enable docker-mail.service.
After rebooting the server, the mailserver is available.
At this point, I was able to see the startup log with journalctl -u docker-mail.service -b (-b is just "boot").
I have followed some posts and tutorials to create a script that starts a Meteor project when the server restarts. I followed the answer mentioned in: How to run meteor on startup on Ubuntu server
Then I gave the script executable permission with chmod +x meteor-server.sh.
I have tried to put this script in the /etc/init.d and /etc/init folders, but the Meteor project does not start at reboot. I'm using Ubuntu 16.04.
I would be grateful if you can show me the fault I have made. The following code is my meteor-server.sh script file:
# meteorjs - meteorjs job file
description "MeteorJS"
author "Jc"
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect fork
# Run before process
pre-start script
cd /home/me/projects/cricket
echo ""
end script
# Start the process
exec meteor run -p 4000 --help -- production
First of all, there's already a very good tool, mupx, that allows you to deploy Meteor projects to your own architecture, so there's really no need to do it yourself unless you have a very good reason.
If you really need to deploy manually, it will take several steps. I am not going to cover all the details, because you're specifically asking about the startup script, and the remaining instructions should be easily accessible on the Internet.
1. Prepare your server
Install MongoDB unless you are planning to use a remote database.
Install NodeJS.
Install reverse proxy server, e.g. Nginx, or haproxy.
Install Meteor, which we will use as the build tool only.
2. Build your app
I am assuming here that you already have the source code of your app put on the server to which you're planning to deploy. Go to your project root and run the following command:
meteor build /path/to/your/build --directory
Please note that if /path/to/your/build exists it will be recursively deleted first, so be careful with that.
3. Install app dependencies
Go to /path/to/your/build/bundle/programs/server and run:
npm install
4. Prepare a run.sh script
The file can be of the following form:
export MONGO_URL="mongodb://127.0.0.1:27017/appName"
export ROOT_URL="http://myapp.example.com"
export PORT=3000
export METEOR_SETTINGS="{}"
/usr/bin/env node /path/to/your/build/bundle/main.js
I will assume you put it in /path/to/your/run.sh. A couple of notes here:
This form of MONGO_URL assumes you have MongoDB installed locally.
You will need to instruct your reverse proxy server to point your app traffic to port 3000.
METEOR_SETTINGS should be the output of JSON.stringify(settings) of whatever settings object you may have.
5. Prepare upstart script
With all the preparations we've made so far, the script can be as simple as
description "node.js server"
start on (net-device-up and local-filesystems and runlevel [2345])
stop on runlevel [016]
respawn
script
exec /path/to/your/run.sh
end script
This file should go to /etc/init/appName.conf.
Finally I got it to work. I used the following two scripts to run Meteor on startup.
First I put this service file (meteor.service) in /etc/systemd/system:
[Unit]
Description = My Meteor Application
[Service]
ExecStart=/etc/init.d/meteor.sh
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=meteor
[Install]
WantedBy=multi-user.target
This service calls a script. I put the following script (meteor.sh) in /etc/init.d:
#!/bin/sh -
description "Meteor Projects"
author "Janitha"
#start service on following run levels
start on runlevel [2345]
#stop service on following run levels
stop on runlevel [016]
#restart service if crashed
respawn
#set user/group to run as
setuid janitha
setgid janitha
chdir /home/janitha/projects/cricket_app
#export HOME (for meteor), change dir to plex requests dir, and run meteor
script
export HOME=/home/janitha
exec meteor
end script
I made both of these files executable using:
chmod +x meteor.service
chmod +x meteor.sh
And I used the following two commands to enable the service:
systemctl daemon-reload
systemctl enable meteor.service
I used this configuration successfully.
In /etc/init.d, add a file called meteor.sh:
#!/bin/sh
export HOME="/home/user"
cd /home/user/meteor/sparql-fedquest
meteor --allow-superuser
You must give execution permissions to meteor.sh:
sudo chmod +x meteor.sh
Also you must create meteor.service in /etc/systemd/system
[Unit]
Description =Portal of bibliographic resources of University of Cuenca
Author = Freddy Sumba
[Service]
ExecStart=/etc/init.d/meteor.sh
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=meteor
[Install]
WantedBy=multi-user.target
Also you must give permissions to meteor.service
$ sudo chmod 644 meteor.service
Then we need to add the service so that it starts each time the server reboots:
$ systemctl enable meteor.service
And finally start the service
$ service meteor start
I'm doing some initial tests with Docker. At the moment I have my images and I can get some containers running. With
docker ps
I find the container id, then I do docker attach container_id and start the apache2 service.
Then, from the main console, I commit the container to the image.
After exiting the container, if I try to start the container or run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services started, for example apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and if you want to automatically start apache2 when the container is started, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) to a newline at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
An option that you could use would to be use a process manager such as Supervisord to run multiple processes. Someone accomplished this with sshd and mongodb: https://github.com/justone/docker-mongodb
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in that, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
You don't need to automate this using a Dockerfile. You could also create the image via a manual commit as you do, and run it from the command line. Then, you supply the command it should run (which is exactly what the Dockerfile CMD actually does). You can also override the Dockerfile's CMD in this way: only the latest CMD will be executed, which is the command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you could of course create a convenience script.
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start a Apache service in FOREGROUND mode.
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
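The same idea baked into a Dockerfile, so the manual attach-and-commit workflow isn't needed at all (the official httpd image and paths here are illustrative, not from the question):

```dockerfile
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
# Apache runs in the foreground as the container's PID 1, so the
# container lives exactly as long as the Apache process does.
CMD ["httpd-foreground"]
```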
Try adding the start script to the entrypoint in the Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash