I have a Flask project that runs inside a Docker container. I have managed to build my application and run it successfully. However, I would also like to build the Sphinx documentation so its static files can be served. The documentation is normally built by running make html in the docs/ directory. I've found a Docker image for Sphinx and have set up a docker-compose config that runs successfully; however, I am not able to pass the make html command to Sphinx. I believe this is because I am running the command a level up: make html needs to be run from within docs/, not from the base directory.
I get the following error when I try to build the sphinx documentation:
docker-compose run --rm sphinx make html
Starting web_project
Pulling sphinx (nickjer/docker-sphinx:latest)...
latest: Pulling from nickjer/docker-sphinx
c62795f78da9: Pull complete
d4fceeeb758e: Pull complete
5c9125a401ae: Pull complete
0062f774e994: Pull complete
6b33fd031fac: Pull complete
aac5b231ab1e: Pull complete
97be0ae484bc: Pull complete
ec7c8cca5e46: Pull complete
82cc981959eb: Pull complete
151a33a826a1: Pull complete
Digest: sha256:8125ca919069235278a5da631c002926cc57d741fa041b59c758183ebd48121f
Status: Downloaded newer image for nickjer/docker-sphinx:latest
make: *** No rule to make target 'html'. Stop.
My project has the following directory structure:
docs/
web/
    Dockerfile
    run.py
    requirements.txt
    ...
docker-compose.yml
README.md
And the following docker-compose configuration:
version: '2'
services:
  web:
    restart: always
    build: ./web
    ports:
      - "7000:7000"
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn -w 2 -b :7000 run:app
  sphinx:
    image: "nickjer/docker-sphinx"
    volumes:
      - "${PWD}:/docs"
    user: "1000:1000"
    depends_on:
      - web
How do I build my Sphinx documentation within the Docker container? Do I need to add another Dockerfile to my docs directory?
You wrote: "I believe because I am running the command a level up, since make html needs to be run from within docs/ and not from within the base directory."
To test this theory, could you try something like this command?
docker-compose run --rm sphinx bash -c "cd docs; make html"
or possibly
docker-compose exec sphinx bash -c "cd docs; make html"
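If that works, an alternative is to mount only the docs folder, so make html finds the Makefile without any cd. This is a sketch, assuming the nickjer/docker-sphinx image uses /docs as its working directory (which your original volume mapping suggests):
sphinx:
  image: "nickjer/docker-sphinx"
  volumes:
    # mount the docs folder itself at /docs instead of the project root
    - "${PWD}/docs:/docs"
  user: "1000:1000"
  depends_on:
    - web
Then docker-compose run --rm sphinx make html should work from the project root.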
I had success with the following approach to build and deploy my Sphinx docs to be served as static files by the Flask app.
# build from the application directory
WORKDIR /pathapp/app
ENV PYTHON /pathapp/app
# build the HTML docs via the setuptools build_sphinx command
RUN python /pathapp/app/setup.py build_sphinx -b html
# copy the built HTML to where the Flask app serves it
RUN python /pathapp/app/scripts/script_to_copy_build_sphinx_html_to_docs.py
The move script just does a simple directory copy.
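For reference, such a copy script can be a few lines of Python; this is only a hypothetical sketch, since the original paths and script are specific to that project:
# Hypothetical sketch of script_to_copy_build_sphinx_html_to_docs.py;
# the source and destination paths are assumptions, not the original script.
import shutil

SRC = "/pathapp/app/build/sphinx/html"  # default build_sphinx output location
DST = "/pathapp/app/docs"               # where the Flask app reads static docs

shutil.rmtree(DST, ignore_errors=True)  # clear any previous build
shutil.copytree(SRC, DST)               # copy the freshly built HTML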
Related
I'm trying to create a Node.js based Docker image. For that, I'm looking at options for the parent image. Security is one of the main considerations for the image, and we want to harden it by not allowing a shell or bash in the container.
Google Distroless does provide this option, but Distroless-NodeJS is in the experimental stage and not recommended for production.
Possible options I could think of (compromising on the Distroless feature) are:
The official Node image (https://hub.docker.com/_/node/), or an Alpine or CentOS based image (but all of these would have a shell, I believe).
With that being said,
Is there any alternative to Distroless?
What are the best options for the parent image of a Node.js based Docker image?
Any pointers would be helpful.
One option would be to start with a Node image that meets your requirements, then delete anything that you don't want (sh, bash, etc.)
At the extreme end you could add the following to your Dockerfile:
RUN /bin/rm -R /bin/*
Although I am not certain that this wouldn't interfere with the running of node.
On the official Node images (excluding Alpine) you have /bin/bash, /bin/dash and /bin/sh (a symlink to /bin/dash). Just deleting these three files would be sufficient to prevent shell access.
The Alpine version has a symlink /bin/sh -> /bin/busybox. You could delete this symlink, but the image may not run without busybox.
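As a rough sketch (the image tag and entry file are placeholders), the deletion can even happen in a RUN layer, since the already-running shell process survives its own file being unlinked:
# Hypothetical hardening sketch; node:18-slim and server.js are assumptions
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
# remove the shells last; no later RUN step can need them
RUN rm -f /bin/bash /bin/dash /bin/sh
# exec-form CMD starts node directly, without a shell
CMD ["node", "server.js"]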
I think you can build an image from scratch which contains only your Node application and its required dependencies, nothing more: not even ls or pwd.
# Build stage: install production dependencies on a full Node image
FROM node as builder
WORKDIR /app
COPY . ./
RUN npm install --prod
# Final stage: a scratch-based image containing only the node binary
FROM astefanutti/scratch-node
COPY --from=builder /app /app
WORKDIR /app
ENTRYPOINT ["node", "bin/www"]
See the scratch-node project for the base image.
So if someone tries to get a shell, like:
docker run --entrypoint bash -it my_node_scratch
they will get an error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"bash\": executable file not found in $PATH": unknown.
I am taking this from the official Node.js Docker image documentation.
Create a Dockerfile in your project.
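The Dockerfile itself is not shown in this answer; a minimal sketch in the spirit of the official Node.js guide might look like this (the port and start script are assumptions):
# Hypothetical Dockerfile following the official Node.js Docker guide
FROM node:8
WORKDIR /usr/src/app
# install dependencies first so the layer is cached between builds
COPY package*.json ./
RUN npm install
# copy the application source
COPY . .
EXPOSE 8081
CMD ["npm", "start"]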
Then build and run the Docker image:
docker build -t test-nodejs-app .
docker run -it --rm --name running-app test-nodejs-app
If you prefer docker compose:
Version: "2"
Services:
node:
image: "node:8"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=production
volumes:
- ./:/home/node/app
expose:
- "8081"
command: "npm start"
Run the compose file:
docker-compose up -d
I have a problem when I run a mongo image with docker-compose. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db: OK, no problem there. But when I build and bring the image up, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
I check the logs and see:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I'm thinking of making a Dockerfile based on an Ubuntu (or other Linux) image and installing Mongo with all the necessary configuration, or of finding another way to solve it.
Please help me, thx.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the enterprise version, the documentation says you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
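Once the enterprise image is built and pushed under your own name, the original compose file should only need the image name swapped; a sketch, assuming the tag from the steps above:
version: '3'
services:
  mongo:
    # your self-built enterprise image from the steps above
    image: "$DOCKER_USERNAME/mongo-enterprise:4.0"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db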
I created a .NET Core application with Linux Docker support using Visual Studio 2017 on a Windows 10 PC with Docker for Windows installed. I can use the following command to run it (a console application):
docker run MyApp
I have another Linux machine with Docker installed. How do I publish the .NET Core application to the Linux machine? I need to publish and run the dockerized application there.
The Linux machine has the following Docker packages installed:
$ sudo yum list installed "*docker*"
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
docker-engine.x86_64 17.05.0.ce-1.el7.centos #dockerrepo
docker-engine-selinux.noarch 17.05.0.ce-1.el7.centos #dockerrepo
There are many ways to do this; just search for any CI/CD tool.
The easiest way is to do it manually: connect to your Linux server, do a git pull of the code, and then run the same commands that you run locally.
Another option is to push your Docker image to a container registry, then pull it on your Docker server and you are ready to go.
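For the registry route, the flow is roughly this sketch (the registry and image names are placeholders, not from the original setup):
# On the development machine
docker tag myapp myregistry.example.com/myapp:latest
docker push myregistry.example.com/myapp:latest
# On the Linux server
docker pull myregistry.example.com/myapp:latest
docker run -d myregistry.example.com/myapp:latest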
Edit:
You should really take a look at a CI service. For example, in our environment we use GitLab: when we push to master, a .gitlab-ci.yml builds the project and then pushes the image:
image: docker:latest
services:
  - docker:dind
stages:
  - build
api:
  variables:
    IMAGE_NAME: git.lagersoft.com:4567/gumbo/vtae/api:${CI_BUILD_REF}
  stage: build
  only:
    - master
  script:
    - docker build -t ${IMAGE_NAME} -f vtae.api/Dockerfile .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN ${IMAGE_NAME}
    - docker push ${IMAGE_NAME}
With this, we only need to do a pull on our server to get the latest version.
It's worth noting that Docker by itself does not handle the publication part, so you need to do it manually or with some tool (any CI tool like GitLab, Jenkins, CircleCI, AWS CodePipeline...). If you are just starting to learn, I would recommend starting manually and then integrating a CI tool.
Edit 2
About the Visual Studio tooling, I would not recommend using it for anything other than local development, since it only works on Windows and only in Visual Studio (Rider added integration only very recently). To deploy to a Linux environment we use our own Docker and docker-compose files; they are based on the defaults anyway, and look something like this:
# Runtime base image
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

# Build stage with the SDK
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY lagersoft.common/lagersoft.common.csproj lagersoft.common/
COPY vtae.redirect/vtae.redirect.csproj vtae.redirect/
COPY vtae.data/vtae.data.csproj vtae.data/
COPY vtae.common/vtae.common.csproj vtae.common/
RUN dotnet restore vtae.redirect/vtae.redirect.csproj
COPY . .
WORKDIR /src/vtae.redirect
RUN dotnet build vtae.redirect.csproj -c Release -o /app

# Publish stage
FROM build AS publish
RUN dotnet publish vtae.redirect.csproj -c Release -o /app

# Final image: runtime base plus the published output
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "vtae.redirect.dll"]
This Dockerfile copies all the related projects (I hate the copying part, but it is the same thing Microsoft does in their default file), then builds and publishes the app. On top of that we have a docker-compose file to add some services (these files must be in the solution folder so they can access all the related projects):
version: '3.4'
services:
  vtae.redirect.redis:
    image: redis
    volumes:
      - "./volumes/redirect/redis/data:/data"
    container_name: vtae.redirect.redis
  vtae.redirect:
    image: vtae.redirect
    depends_on:
      - vtae.redirect.redis
    build:
      context: .
      dockerfile: vtae.redirect/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./volumes/redirect/data:/data"
    container_name: vtae.redirect
    entrypoint: dotnet /app/vtae.redirect.dll
With these parts in place, all that is left is to commit, pull on the server, and run docker-compose up to run our app (you could do it from the Dockerfile directly, but it is easier and more manageable with docker-compose).
Edit 3
To deploy to the server we use two tools.
First, GitLab CI runs after the commit is done.
It performs the build specified in the Dockerfile and pushes the image to our GitLab container registry (the same would apply if it were the container registry of Amazon, Google, Azure, etc.).
Then it makes a POST request to the production server, which is running a special tool on a separate port.
The server receives the POST request and validates it; for this we use this tool (a friend is the repo owner).
The script receives the request, checks the login, and if it is valid, it simply pulls from our GitLab container registry and runs docker-compose up.
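The script the tool runs on a valid request is essentially just a pull-and-restart; a hypothetical sketch (the registry host is taken from the CI file above, the credentials are placeholders):
#!/bin/sh
# Hypothetical deploy script executed by the webhook tool
docker login -u deploy -p "$REGISTRY_TOKEN" git.lagersoft.com:4567
docker-compose pull
docker-compose up -d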
Notes
The tool is not perfect. We are moving from plain Docker to Kubernetes, where you can connect to your cluster directly from your machine or from a CI integration and do the deploys directly. Whatever solution you choose, I recommend you start looking at how Kubernetes can help you; sadly it is one more layer to learn, but it is very promising, and it will let you publish to almost any cloud or bare metal painlessly, with fallbacks, scaling, and more.
Also
If you do not want to, or cannot, use a container registry (I strongly recommend that route), you can use the same tool: in the .sh that it executes, just do a git pull and then a docker build or docker-compose.
The simplest scenario would be to create a script yourself that does ssh to the server, uploads the files as a zip, and then runs them on the server. Remember, Ubuntu is in the Microsoft Store and could run this script, but the other solutions are more independent and scalable, so make your choice!
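Such a script could be as small as this sketch (the server address and paths are placeholders):
#!/bin/sh
# Hypothetical manual deploy: zip the app, upload it, and rebuild on the server
zip -r app.zip .
scp app.zip user@myserver:/opt/myapp/app.zip
ssh user@myserver "cd /opt/myapp && unzip -o app.zip && docker-compose up -d --build"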
I am currently trying out this tutorial for Node/Express with MongoDB:
https://medium.com/@sunnykay/docker-development-workflow-node-express-mongo-4bb3b1f7eb1e
The first part, building the docker-compose.yml, works fine.
It works totally fine building it locally, so I tried to tag it and push it to my Docker Hub to learn and try more.
This is what was originally in the yml file, following the tutorial:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
This works like a charm when I use docker-compose build and docker-compose up, so I pushed it to my Docker Hub and also tagged it as node-test.
I then changed the yml file to:
version: "2"
services:
web:
image: "et4891/node-test"
volumes:
- ./:/app
ports:
- "3000:3000"
Then I removed all the images I had previously, to make sure this also works... but when I run docker-compose build I see the message web uses an image, skipping and nothing happens.
I tried googling the error but couldn't find much.
Can someone please give me a hand?
I found out I was being stupid.
I didn't need to run docker-compose build; I can just run docker-compose up directly, since it will pull the images down. The build command is just for building locally.
In my case, the command below worked:
docker-compose up --force-recreate
I hope this helps!
Clarification: this message (<service> uses an image, skipping) is NOT an error. It informs the user that the service uses an image and is therefore pre-built, so the build command skips it.
In other words: you don't need build, you need to up the service.
Solution:
run sudo docker-compose up <your-service>
PS: In case you changed some configuration in your docker-compose file, use the --force-recreate flag to apply the changes by recreating the containers:
sudo docker-compose up --force-recreate <your-service>
My problem was that I wanted to upgrade the image, so I tried:
docker build --no-cache
docker-compose up --force-recreate
docker-compose up --build
None of these rebuilt the image.
What was missing (from this post) is:
docker-compose stop
docker-compose rm -f # remove old images
docker-compose pull # download new images
docker-compose up -d
I'm having an interesting problem... would love any tips, suggestions, or pointers in the right direction. Not sure where to start with this one, really.
Basically, we have a docker-compose.yml and Dockerfile.
Dockerfile:
FROM hypriot/rpi-node:7
# Create app directory
RUN mkdir -p /usr/src/rrp-database
WORKDIR /usr/src/rrp-database
# Install app dependencies
COPY package.json /usr/src/rrp-database
RUN npm install
# Bundle app source
COPY . /usr/src/rrp-database
docker-compose.yml:
mysql:
image: hypriot/rpi-mysql
environment:
- MYSQL_ROOT_PASSWORD=sqltest
- MYSQL_DATABASE=rrplocal
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- 3306:3306
application:
build: .
working_dir: /opt/rrp/src/rrp-database
ports:
- 8080:8080
links:
- mysql
command: bash -c "sleep 15 && node createTables.js && sleep 5 && node provisionDB.js && node server.js"
You don't need to delve into most of this, so here's the problem: when I run the setup via docker-compose build, our machine (a Raspberry Pi, hence the hypriot versions) hangs completely while pulling the hypriot/rpi-node image.
$ docker-compose build
mysql uses an image, skipping
Building application
Step 1 : FROM hypriot/rpi-node:7
7: Pulling from hypriot/rpi-node
395823d8c49b: Extracting [====>              ] 4.129 MB/45.86 MB
298 B/298 B Download complete
44f82080e2cc: Download complete
a3ed95caeb02: Download complete
f23aeb340745: Download complete
466adec6a1f2: Download complete
281ed5189bce: Download complete
95c0246ab315: Download complete
0a596801c90f: Downloading [=======================>  ] 51.89 MB/111.9 MB
e1613bd476c1: Download complete
It stays like this forever and hangs the machine. However, when building from the Dockerfile alone with docker build -t rrp-database . (which I thought was essentially what docker-compose does anyway...) the image pulls and builds without a hitch.
It's very much worth noting that this was tested on two separate machines, with the exact same result.
I'd really like to use docker-compose, but I'm not sure where to start with this issue. Any thoughts?
Much appreciation to anyone who has some answers for me! Cheers!