I am trying to deploy my Go app with Alpine in Docker. I was able to run it on my Mac, but when going to production on CentOS 8 I ran into issues.
Here is my Dockerfile:
FROM golang:alpine
RUN apk add --no-cache postgresql
RUN apk update && apk add --no-cache gcc && apk add --no-cache libc-dev && apk add --no-cache --update make
# Set the current working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. They will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the WORKDIR inside the container
COPY . .
# Build the Go app
RUN go build .
RUN rm -rf /usr/local/var/postgres/postmaster.pid
# make setup runs commands like: psql -c 'DROP DATABASE IF EXISTS prod'
# and: psql -c 'CREATE USER prod'
RUN make setup
# Expose port 3000 or 8000 to the outside world
EXPOSE 3000
CMD ["make", "run" ]
Then I got this error:
psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
In my make setup I run the migration and create the user and database.
Can I also create a SUPERUSER in psql on that Alpine image?
Looking at the syntax above, is anything wrong, and how do I correct it? I have been stuck since yesterday.
Delete lines 8 through 20 of your original Dockerfile and add these.
If your folder structure looks like this:
- directory
  |
  -> Dockerfile
  -> go.mod
  -> go.sum
  -> go source files
# Copy the source into the container
COPY . /app
# Set the current working Directory inside the container
WORKDIR /app
RUN go mod download
RUN go build .
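Putting that fragment together with the rest of the original file, the build portion of the Dockerfile would look roughly like this (a sketch only; the postgresql packages and database steps are left out, for the reasons given in the next answer):
FROM golang:alpine
RUN apk update && apk add --no-cache gcc libc-dev make
# Copy the source into the container
COPY . /app
# Set the current working directory inside the container
WORKDIR /app
# Download and cache dependencies, then build
RUN go mod download
RUN go build .
EXPOSE 3000
CMD ["make", "run"]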
You cannot run database commands in a Dockerfile.
By analogy, consider the go generate command: you can embed special comments in your Go source code that ask the Go toolchain to run programs for you, typically to generate other source files. Say you put a //go:generate psql ... directive in your source code and run go generate ... && go install . Now you run that compiled binary on a different system. Since you're no longer pointing at the same database, the database setup is lost.
In the same way, a Dockerfile produces a compiled artifact (in this case the Docker image) and it needs to run independently of its host environment. In your example you could docker push the image you built on MacOS to a registry, and docker run it from the CentOS host without rebuilding it (and that's probably better practice for a production system).
For the specific commands you show in the question, you could put them in a database container's /docker-entrypoint-initdb.d directory, or otherwise just run them once pointing at your database. For more general-purpose database setup you might look at running a database migration tool at application startup, either in your program's main() function or in a wrapper entrypoint script.
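For illustration, a minimal sketch of that first option with the official postgres image, run alongside the Go app via docker-compose (the service names, credentials and the DATABASE_URL variable are made-up placeholders, not something from the question):
# docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_USER=prod
      - POSTGRES_PASSWORD=change-me
    volumes:
      # any *.sql or *.sh file in this directory runs once,
      # when the database is first initialized
      - ./db-init:/docker-entrypoint-initdb.d
  app:
    build: .
    environment:
      # point the app at the db container instead of a local Unix socket
      - DATABASE_URL=postgres://prod:change-me@db:5432/prod?sslmode=disable
    depends_on:
      - db
    ports:
      - 3000:3000
The one-off SQL from your make setup (CREATE USER, CREATE DATABASE and so on) would then live in ./db-init/*.sql files, while migrations run at application startup as described above.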
Related
I am currently using Docker Desktop for Windows and following this tutorial for using Docker and VSCode (https://scotch.io/tutorials/docker-and-visual-studio-code). When I attempt to build the image, the daemon is able to complete the first step of the Dockerfile, but then hangs indefinitely on the second step. Sometimes, though very rarely, after an indeterminate amount of time, it will error out and give me this error:
failed to start service utility VM (createreadwrite): CreateComputeSystem 97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm: The operation could not be started because a required feature is not installed.
(extra info: {"SystemType":"container","Name":"97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm","Layers":null,"HvPartition":true,"HvRuntime":{"ImagePath":"C:\\Program Files\\Linux Containers","LinuxInitrdFile":"initrd.img","LinuxKernelFile":"kernel"},"ContainerType":"linux","TerminateOnLastHandleClosed":true})
I have made sure that virtualization is enabled on my machine, uninstalled and reinstalled Docker, uninstalled Docker and deleted all files related to it before reinstalling, as well as making sure that the experimental features are enabled. These are fixes that I have found from various forums while trying to find others who have had the same issue.
Here is the Dockerfile that I am trying to build from. I have double-checked with the tutorial that it is correct, though it's still possible that I missed something (outside of the version number in the FROM line).
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD npm start
I would expect the image to build correctly as I have followed the tutorial to a T. I have even fully reset and started the tutorial over again, and I'm still getting this same issue where it hangs indefinitely.
Well, you copy some files twice. I would not do that.
So, for the minimal change to your Dockerfile, I would try:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent && mv node_modules ../
EXPOSE 3000
CMD npm start
I would also think about the && mv node_modules ../ part, if it is really needed.
If you don't do it already, I advise you to write a .dockerignore file right next to your Dockerfile with at least this content:
/node_modules
so that your local node_modules directory does not also get copied while building the image (this saves time).
Hope this helps.
I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker and it runs without any trouble.
The problem comes when an app dependency (raspicam) requires raspistill to make use of the camera to take a photo. The Raspberry Pi is running Debian Stretch and the Pi camera is configured and tested, but I can't access it when running the app via Docker.
Basically, I build the image with Docker Desktop on a win10 64bit machine using this Dockerfile:
FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Then on the Raspberry Pi, if I pull the image and run it with:
docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]
I get:
Error: spawn /opt/vc/bin/raspistill ENOENT
After some research, I also tried running with:
docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]
And with that command, I get:
stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directory
Can someone share some thoughts on what changes I have to make to the Dockerfile so that I can access the Pi camera from inside the Docker container? Thanks in advance.
I've had the same problem trying to work with the camera interface from a Docker container. With the suggestions in this thread I've managed to get it working with the Dockerfile below.
FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.js
There are 3 main points here:
Add /opt/vc/bin to your PATH so that you can call raspistill without referencing the full path.
Add /opt/vc/lib to the linker config file so that raspistill can find all the dependencies it needs.
Reload the linker config (ldconfig) at the container's runtime rather than at build time.
The last point is the main reason why Anton's solution didn't work. ldconfig needs to be executed in a running container, so either use a similar approach to mine or go with an entrypoint.sh file instead.
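For example, the entrypoint variant might look roughly like this (paths follow the Dockerfile above; the script name is of course up to you):
entrypoint.sh:
#!/bin/sh
# reload the linker cache at runtime, once /opt/vc is available from the host
ldconfig
exec node /usr/src/app/app.js
and in the Dockerfile:
COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]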
Try replacing this in the Dockerfile:
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
With the following:
ADD 00-vmcs.conf /etc/ld.so.conf.d/
RUN ldconfig
And create the file 00-vmcs.conf:
/opt/vc/lib
Edit:
If it still doesn't work, try a Raspbian-based Docker image, for example balenalib/rpi-raspbian:
FROM balenalib/rpi-raspbian
I have a Meteor application. This app works well on a CentOS 7 VM.
I need to create a Docker container for this app and install or import this container on other virtual machines.
What does the Dockerfile need in order to save and load the container on another VM?
Node.js?
MongoDB?
MeteorJS?
Shouldn't I store the MongoDB files in the Docker container?
This is my Dockerfile:
# Pull base image.
FROM node:8.11.4
# Install build tools to compile native npm modules
RUN npm install -g node-gyp
RUN apt-get install curl -y
RUN curl https://install.meteor.com/ | sh
# Create app directory
RUN mkdir -p /usr/app
COPY . /usr/app
RUN cd /usr/app/programs/server
RUN npm install
WORKDIR /usr/app
CMD ["node", "main.js"]
EXPOSE 3000
There are many ways to skin this cat ... let's assume you have researched the alternatives for executing a Meteor app using containers with tools that automate the setup below - Meteor calls their version of this automation Galaxy.
I suggest you run the meteor commands outside the container intended to run your app. A meteor install is huge and slow, and some of the libraries you pull in, or the libraries your libraries pull in, may need C or C++ compilers, so meteor and its friends do not need to be installed into your app container every time you want to recompile your app. Your app container only needs Node.js and your bundle. When you execute a Meteor app it does not use meteor; the app is executed by Node.js directly, since at this point your code has been compiled into a bundle which is pure Node.js.
Yes, you would do well to put MongoDB into its own container.
No, there is no need to put MeteorJS inside your app container. Just like meteor itself, those compile-time tools are not needed at execution time, so install MeteorJS as well as all other tools needed for a successful Meteor build on your host machine, which is where you execute your meteor build command.
In your Dockerfile above, move the last statement, EXPOSE 3000, so it comes before your CMD node line.
So outside your container, get meteor installed, then issue:
cd /your/webapp/src
meteor build --server https://example.com --verbose --directory /webapp --server-only
The above will compile your Meteor project into a bundle directory living at:
ls -la /webapp/bundle/
Then copy your Dockerfile etc. into that freshly cut bundle directory:
.bashrc
Dockerfile
bundle/
Then build and push your container image:
docker build --tag localhost:5000/hygge/loudweb-admin --no-cache .
docker push localhost:5000/hygge/loudweb-admin
Here is a stripped-down Dockerfile:
cat Dockerfile
# normal mode - raw ubuntu run has finished and base image exists so run in epoc mode
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
ENV NODE_VER=v8.11.4
ENV NODE_NAME=node-${NODE_VER}
ENV OS_ARCH=linux-x64
ENV COMSUFFIX=tar.gz
ENV NODE_PARENT=/${NODE_NAME}-${OS_ARCH}
ENV PATH=${NODE_PARENT}/bin:${PATH}
ENV NODE_PATH=${NODE_PARENT}/lib/node_modules
RUN apt-get update && apt-get install -y wget && \
wget -q https://nodejs.org/download/release/${NODE_VER}/${NODE_NAME}-${OS_ARCH}.${COMSUFFIX} && \
tar -xf ${NODE_NAME}-${OS_ARCH}.${COMSUFFIX}
ENV MONGO_URL='mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT/meteor'
ENV ROOT_URL=https://example.com
ENV PORT 3000
EXPOSE 3000
RUN which node
WORKDIR /tmp
# CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf" ]
# I strongly suggest you wrap below using supervisord
CMD ["node", "main.js"]
To launch your containers, issue:
docker-compose -f /devopsmicro/docker-compose.yml pull loudmail loud-devops nodejs-enduser
docker-compose -f /devopsmicro/docker-compose.yml up -d
Here is a stripped-down docker-compose YAML file:
version: '3'
services:
  nodejs-enduser:
    image: ${GKE_APP_IMAGE_ENDUSER}
    container_name: loud_enduser
    restart: always
    depends_on:
      - nodejs-admin
      - loudmongo
      - loudmail
    volumes:
      - /cryptdata6/var/log/loudlog-enduser:/loudlog-enduser
      - ${TMPDIR_GRAND_PARENT}/curr/loud-build/${PROJECT_ID}/webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support#${GKE_DOMAIN_NAME}:blah#loudmail:587/
    links:
      - loudmongo
      - loudmail
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
Once you have your app executing in containers, you can work toward dropping Ubuntu as your container base and using a smaller, simpler Docker base image like node, busybox, etc. However, Ubuntu is easier initially, since it lets you install packages from inside a running container, which is nice during development.
The machinations surrounding the above are vast ... the above is a quick copy-and-paste plucked from the devops side of the house, with hundreds of helper binaries + scripts, config templates, TLS certs ... this is a tiny glimpse into the world of getting an app to execute.
Scott Stensland's answer is good, in that it explains how to manually create a Docker container for Meteor.
There is a simpler way: use Meteor Up (mup), http://meteor-up.com/
EASILY DEPLOY YOUR APP
Meteor Up is a production quality Meteor app deployment tool.
Install with one command:
$ npm install --global mup
You set up a simple config file, and it looks after creating the container, running npm install, setting up SSL certs, etc. It is much less work than doing it by hand.
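After installing, the basic workflow is roughly this (per the mup docs; the generated mup.js and settings.json still need to be edited for your servers and environment):
$ mup init      # generates mup.js and settings.json in the current directory
$ mup setup     # prepares the server: installs Docker, MongoDB, etc., as configured
$ mup deploy    # builds the Meteor bundle and starts it in a container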
I have an AngularJS application that I'm running using Docker.
The Dockerfile looks like this:
FROM node:6.2.2
RUN npm install --global gulp-cli && \
npm install --global bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
COPY bower.json /usr/src/app/
RUN npm install && \
bower install -F --allow-root --config.interactive=false
COPY . /usr/src/app
ENV GULP_COMMAND serve:dist
ENTRYPOINT ["sh", "-c"]
CMD ["gulp $GULP_COMMAND"]
Now when I make any changes, say in an HTML file, they don't dynamically load up on the web page. I have to stop the container, remove it, build the image again, remove the earlier image, and then restart the container from the new image. Do I have to do this every time? (I'm new to Docker, and I guess this issue is because my source code is not put into a volume, but I don't know how to do that using the Dockerfile.)
You are correct, you should use volumes for stuff like this. During development, mount volumes over the same paths as the COPY directories. The image's content is then overridden with whatever is on your machine: no need to rebuild the image, or even restart the container. Perfect for development.
When actually baking your images for production, you remove the volumes, leave the COPY in, and you'll get a deterministic container. I would recommend you read through this article here: https://docs.docker.com/storage/volumes/.
In general, there are 3 ways to do volumes.
Define them in your dockerfile using VOLUME.
Personally, I've never done this. I don't really see the benefits of this against the other two methods. I believe it would be more common to do this when your volume is meant to act as a permanent data-store. Not so much when you're just trying to use your live dev environment.
Define them when calling docker run.
docker run ... -v $(pwd)/src:/usr/src/app ...
This is great, because if the COPY in your Dockerfile is COPY ./src /usr/src/app, the mount temporarily overrides that directory while running the image, but the copied code is still there for deployment when you don't use -v.
Use docker-compose.
My personal recommendation. Docker Compose massively simplifies running containers. For the sake of simplicity, think of it as just calling docker run ... but automating the arguments based on a given docker-compose.yml config.
Create a dev service specifying the volumes you want to mount, other containers you want it linked to, etc. Then bring it up using docker-compose up ... or docker-compose run ... depending on what you need.
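For the Dockerfile in this question, a development docker-compose.yml could look roughly like this (the service name and the decision to mount the whole project directory are assumptions):
version: '3'
services:
  web:
    build: .
    volumes:
      # override the code baked in by COPY . /usr/src/app with your working copy
      - .:/usr/src/app
    ports:
      - 3000:3000
Note that this mount also hides anything npm or bower installed into /usr/src/app at image build time, so you may need those dependencies installed on the host as well, or kept outside that directory. Whether an edited HTML file then shows up without a refresh depends on your gulp serve task, but at least no rebuild is needed.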
Smart use of volumes will DRAMATICALLY reduce your development cycle. Would really recommend looking into them.
Yes, you need to rebuild every time the files change, since you only modify the files that are outside of the container. In order to apply the changes to the files IN the container, you need to rebuild the container.
Depending on the use case, you could either make the Docker Container dynamically load the files from another repository, or you could mount an external volume to use in the container, but there are some pitfalls associated with either solution.
If you want to keep your container running as you add your files, you could also use a variation:
Mount a volume to any other location e.g. /usr/src/staging.
While the container is running, if you need to copy new files into the container, copy them into the location of the mounted volume.
Run docker exec -it <container-name> bash to open a bash shell inside the running container.
Run a cp /usr/src/staging/* /usr/src/app command to copy all new files into the target folder.
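Put together, that variation looks roughly like this (container name, image name and host path are placeholders):
# start the container with a staging volume mounted next to the app directory
docker run -d --name web -v $(pwd)/staging:/usr/src/staging my-image
# on the host, drop new files into the mounted directory
cp newfile.html staging/
# then, inside the running container, copy them into place
docker exec -it web bash
cp /usr/src/staging/* /usr/src/app/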
Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and start the container without the -v option.
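For example, assuming the app directory inside the image is /usr/src/app, as in the Dockerfiles elsewhere on this page:
# development: mount the working copy over the app directory baked into the image
docker run -v $(pwd):/usr/src/app -p 3000:3000 my-image
# production: run the image as built, without the mount
docker run -p 3000:3000 my-image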
What I ended up doing was:
1) Using volumes with the docker run command, so I could change the code without rebuilding the Docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount; I fixed it with Node's module path traversal (node_modules installed in a parent directory is still found by require).
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist node_modules in the parent directory,
# even after the volume mount overwrites /usr/src/app
COPY package.json /usr/src/
RUN cd /usr/src && npm install
#Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also change the instructions so that Docker looks in the directory specified by the build context argument of docker build, finds the package.json file, copies just that into the working directory of the container, runs npm install, and only afterwards COPYs over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over the rest of the source
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes to your source files as you want, and it will not invalidate the cache for the package.json COPY or the npm install steps.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, npm install will not be executed again.
We can test this by running docker build -t <tagname>/<project-name> . again.
Now I have made a change to the Dockerfile, so you will see some steps re-run and, eventually, our successfully tagged and built image.
Docker detected the change and re-ran that step and every step after it, but not the npm install step.
The lesson here is that the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment these operations out to ensure each step copies only the bare minimum it needs.
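As a quick way to see the cache at work (the image tag is a placeholder):
# first build: every step runs
docker build -t myuser/myapp .
# edit a source file (but not package.json), then rebuild
docker build -t myuser/myapp .
# the COPY ./package.json and RUN npm install steps now report "Using cache";
# only the steps from the final COPY onward are re-run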