Docker Compose In Production - node.js

I'm trying to use docker-compose to build and bring up a simple Node.js application. I ran into the same problem with a Django application, so I think I'm just missing some vital step. Here is my Dockerfile:
FROM node:4.2.1
CMD mkdir -p /var/app
COPY . /var/app
EXPOSE 3000
CMD node /var/app/index.js
When I run docker-compose up pointed at a DigitalOcean machine, it throws a Node error suggesting it can't find the code in /var/app. Is there some other mechanism I am supposed to use to get my code onto the machine, other than Docker?

The line CMD mkdir -p /var/app is wrong. There should be only one CMD in a Dockerfile, usually at the end; only the last CMD directive in a chain of inherited Docker images will be executed. You should use RUN instead.
From the Dockerfile reference:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The main purpose of a CMD is to provide defaults for an executing container.

Try taking out the mkdir step. You also need to set the working directory.
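A minimal corrected version of the Dockerfile above, assuming index.js sits at the root of the build context as the original CMD implies (COPY and WORKDIR create /var/app, so the mkdir isn't needed):
FROM node:4.2.1
# WORKDIR creates the directory and makes it the default working directory
WORKDIR /var/app
# Copy the application code into the image at build time
COPY . /var/app
EXPOSE 3000
# The single CMD provides the container's default command
CMD ["node", "index.js"]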

Related

Why does my Docker entrypoint script fail, when it runs perfectly inside the container manually?

I have an Ubuntu:20.04 image with my software installed by Dockerfile RUN commands. The script I want to execute is built by a Dockerfile RUN call to buildmyscripts.sh.
The program installs perfectly, and if I run the container (with the default entrypoint of /bin/sh or /bin/bash)
and manually execute /root/build/script.sh -i arg1 -m arg2, it works.
However, the same doesn't work with ENTRYPOINT set to /root/build/script.sh and CMD set to the arguments. I get the following error when running the image:
Error: cannot load shared library xyz.so
xyz.so is a common shared library installed by RUN beforehand.
Please assist, thanks.
Note: I run as USER root because I have a self-hosted runner on a hardened server, so security is not an issue.
Apparently the script that sets the required environment variables needs to be sourced, by prepending it to the ENTRYPOINT/CMD in the Dockerfile. Since the variables were set by sourcing another script, setting them with ENV alone wasn't working.
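A minimal sketch of that pattern, assuming the environment is prepared by a hypothetical /root/build/env.sh that must be sourced before the real entrypoint runs:
# Source the env script, then exec the real entrypoint; the trailing "--"
# becomes $0 inside bash -c, so the CMD arguments land in "$@"
ENTRYPOINT ["/bin/bash", "-c", "source /root/build/env.sh && exec /root/build/script.sh \"$@\"", "--"]
CMD ["-i", "arg1", "-m", "arg2"]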

Error 137 when building a Docker image locally from a Node app on Ubuntu 20.04

I'm having trouble when I build a Docker image locally. This is the build:
FROM node
WORKDIR /home/node/unqfy
COPY package.json .
COPY package-lock.json .
RUN ["npm", "install"]
EXPOSE 5001
COPY . /home/node/unqfy/
RUN chown -R 777 /home/node/unqfy
CMD [ "node", "./src/API/requests/router.js" ]
Obviously it's a Node app. But when it comes to the npm install line it returns a 137 error, which, from what I've investigated, implies that it hit a RAM limit. The thing is that I use Ubuntu 20.04 and, unlike Windows, it is not necessary to specify how much memory Docker can use; it is dynamic, and therefore there is no config for that on Ubuntu. I couldn't find anything referring to this error during the image build process; by contrast, when a container is set up, this error is much less common. I ran the docker info command and saw that the Total Memory field shows all the RAM I have on my system.
Previously I changed the default directory where Docker stores images to a completely different partition: I am using the /mnt/UNQ/docker-final/ directory, and I configured the root-dir parameter in the daemon.json file with that path. I did that because I was running out of storage in the partition where Docker stores images by default.
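For reference, relocating Docker's storage as described above is done with the data-root key in /etc/docker/daemon.json on current daemons (older releases called it graph); a sketch using the path from the question:
{
  "data-root": "/mnt/UNQ/docker-final"
}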

Python Script to run commands in a Docker container

I am currently working on automating commands for a Docker container with a Python script on the host machine. For now, this Python script builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.
What I want is for the Python script to issue all the commands to run within the container, so that if I have different scripts to run, I am not changing the container. I have tried two ways.
The first was to run an os.system() command from within the Python script; however, this only gets as far as opening the shell for the container, and the os.system() command does not execute code in the Docker container itself.
The second way uses CMD within the Dockerfile; however, this is limited and hard-coded into the container. If I have multiple scripts, I have to change the Dockerfile, and I don't want that. What I want is to build a default container with all services running, then run Python scripts on the host to run a sequence of commands in the container.
I am fairly new to Docker and think there must be something I am overlooking to run scripted commands on the container. One possible solution I have come across is nsenter. Is this a reliable solution, and how does it work? Or is there a much simpler way? I have also used a Docker volume to copy the Python files into the container to be run on build; however, I still cannot find a way to automate accessing and running these Python scripts from the host machine.
If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:/working/dir.
Once the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer /working/dir/myscript.sh.
Note, this isn't a common practice. Typically the script(s) you need would be built (not copied) into the container image(s). Then when you want to execute a script within a container, you would run the container via a docker run command, e.g. docker run -it mycontainerimage /working/dir/myscript.sh.
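Since the host-side automation here is a Python script, the same two commands can be driven with subprocess instead of os.system(); a minimal sketch, where the container name and script path are placeholders:
import subprocess

container = "mycontainer"   # placeholder: name of the running container
script = "myscript.sh"      # placeholder: script on the host to run inside it

# Copy the script into the container's filesystem
subprocess.run(["docker", "cp", script, f"{container}:/working/dir/{script}"], check=True)

# Execute it inside the container and capture the output
result = subprocess.run(
    ["docker", "exec", container, f"/working/dir/{script}"],
    check=True, capture_output=True, text=True,
)
print(result.stdout)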

Haskell Stack Image Container Execute On Docker run

I am following the tutorials from Stackage and Docker to run a Haskell build via Docker.
Building and image creation work well, and I can run the app via docker run -p 5000:5000 {imagename} {app-exe}.
I am using the built-in features of the latest Stack to create the Docker image with this minimal configuration:
image:
  container:
    base: "fpco/ubuntu-with-libgmp"
How can I make the image launch the executable automatically, so that I can just type docker run -p 5000:5000 {imagename}? I know how to do it in a Dockerfile, but not with Stack. I was thinking that I have to use:
entrypoints:
  - appname-exe
No success, no matter whether I use just the name of the executable or the absolute path to it. Maybe I don't understand what the entrypoint is for.
I am using Docker for Mac.
Any suggestions appreciated.
Cheers
Bjorn
I figured it out myself. Everything is working correctly; I just didn't understand that Stack creates two separate images: one just for the environment, and one with the entrypoint.
So I checked docker images and indeed found two images. I was simply running the wrong one. This is correct:
docker run -p 5000:5000 {imagename-app-exe}
Man, sometimes you don't see the forest for the trees.
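For reference, a sketch of the stack.yaml section that produces both images, combining the base and entrypoints settings from the question (the executable name is a placeholder); the second image, suffixed with the entrypoint name, is the one to run:
image:
  container:
    base: "fpco/ubuntu-with-libgmp"
    entrypoints:
      - appname-exe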

Why does my Dockerfile CMD not work?

So at the end of my Dockerfile I have this:
WORKDIR /home
CMD django-admin startproject whattt
CMD /bin/bash
When I create the image and then run the container, everything works as expected: there are no errors, and no errors in the Docker log. However, there are still some issues that I cannot figure out.
The first and most important problem is that CMD django-admin startproject is not actually creating any project. After I run the container, I can manually run django-admin startproject and it works as expected. When I issue this as a CMD from the Dockerfile, though, no project gets created.
The second issue: after the django-admin line, I put a second CMD with /bin/bash, so that when I run the container it opens a shell (and I can go in and check whether my Django project was created). Will this conflict with the previous django-admin line? If I remove this line, then when I run the container I have no way to open a shell and check whether my Django project is there, do I?
Any help would be appreciated, thanks.
“There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect” (Dockerfile reference). So your first CMD will not take effect.
If you want a shell in your container, try the docker exec command; the documentation provides example commands you can follow.
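One way to get both behaviors, sketched under the assumption that the project should be created once at build time and the shell should be the runtime default:
WORKDIR /home
# RUN executes at build time, so the generated project is baked into the image
RUN django-admin startproject whattt
# The single CMD supplies the runtime default command
CMD ["/bin/bash"]
With the project created by RUN, running the image interactively drops into bash in /home, where the whattt directory can be inspected.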
