I just started with Docker and I am creating a Docker image from my code. Here is the directory structure:
project
  /deployment
    /Dockerfile.project1
  /services
    /ui
      /project1
and here is the code in Dockerfile.project1:
FROM node:14
# arguments
ARG BUILD_COMMIT
ARG BUILD_BRANCH
ARG BUILD_TAG
# port number for app
ARG PORT=3000
ARG APP=adam_website_ui
LABEL build.tag=${BUILD_TAG}
LABEL app=${APP}
# set the env
ENV BUILD_BRANCH=${BUILD_BRANCH}
ENV BUILD_COMMIT=${BUILD_COMMIT}
WORKDIR /app
# Assigning user
USER root
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
&& groupadd -g 1001 appusr \
&& useradd -r -u 1001 -g appusr appusr \
&& mkdir /home/appusr/ \
&& chown -R appusr:appusr /home/appusr/ \
&& chown -R appusr:appusr /app
# copy the relevant code
COPY ../services/ui/project1 /app/
# installing deps
RUN npm install
RUN npm run build
RUN SET PORT=${PORT} && npm start
USER appusr:appusr
but this is showing:
=> ERROR [4/7] COPY ../services/ui/project1 /app/ 0.0s
------
> [4/7] COPY ../services/ui/project1 /app/:
------
failed to compute cache key: "/services/ui/project1" not found: not found
and I am building using this command from the deployment folder:
docker build -t website_ui -f Dockerfile.project1 .
What can be the issue?
If you run docker build within the directory project/deployment with build context ., then Docker is not able to find the files in project/services.
Try running docker build -t website_ui -f deployment/Dockerfile.project1 . from the project directory instead (the last argument is the build context).
From the docs:
The docker build command builds Docker images from a Dockerfile and a "context". A build's context is the set of files located in the specified PATH or URL.
When you build a Docker image you must specify a path to a directory which will be the build context; this is the dot at the end of your docker build command.
The content of this directory will then be copied by Docker (probably into an internal Docker directory), which is why you can't COPY paths outside the context.
You should see a message like "Uploading context of X bytes" when running docker build.
Change your COPY instruction to COPY services/ui/project1 /app/ and build your image from the project's root directory:
docker build -t website_ui -f deployment/Dockerfile.project1 .
Read more about the build context in the docker build documentation.
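Putting both changes together, a minimal sketch using the layout from the question (the image tag is the one you already chose):
# in deployment/Dockerfile.project1, the COPY path is now relative to the build context
COPY services/ui/project1 /app/
# build from the project root so services/ is inside the context
cd project
docker build -t website_ui -f deployment/Dockerfile.project1 .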
I am using an OpenFaaS Python3 function with Terraform to create a bucket in my AWS account. I am trying it locally: I created a local cluster using k3s and installed the OpenFaaS CLI.
Steps:
Created a local cluster and deployed faas-netes.
Created a new project using the OpenFaaS python3 template.
Exposed the port.
Changed my project YAML file with the new host for the gateway key in provider.
I am creating my .tf files with Python only, and I am sure that the files are created with the proper info, as I checked by executing ls and printing the file data after closing the file.
Checked that Terraform is installed in the image and working by executing terraform --version.
I have already changed my Dockerfile to have Terraform installed while building the image.
OS: Ubuntu 20.04, Docker: 20.10.15, Faas-CLI: 0.14.2
Docker File
FROM --platform=${TARGETPLATFORM:-linux/amd64} ghcr.io/openfaas/classic-watchdog:0.2.0 as watchdog
FROM --platform=${TARGETPLATFORM:-linux/amd64} python:3-alpine
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Allows you to add additional packages via build-arg
ARG ADDITIONAL_PACKAGE
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog
RUN apk --no-cache add ca-certificates ${ADDITIONAL_PACKAGE}
RUN apk add --no-cache wget
RUN apk add --no-cache unzip
RUN wget https://releases.hashicorp.com/terraform/1.1.9/terraform_1.1.9_linux_amd64.zip
RUN unzip terraform_1.1.9_linux_amd64.zip
RUN mv terraform /usr/bin/
RUN rm terraform_1.1.9_linux_amd64.zip
# Add non root user
RUN addgroup -S app && adduser app -S -G app
WORKDIR /home/app/
COPY index.py .
COPY requirements.txt .
RUN chown -R app /home/app && \
mkdir -p /home/app/python && chown -R app /home/app
USER app
ENV PATH=$PATH:/home/app/.local/bin:/home/app/python/bin/
ENV PYTHONPATH=$PYTHONPATH:/home/app/python
RUN pip install -r requirements.txt --target=/home/app/python
RUN mkdir -p function
RUN touch ./function/__init__.py
WORKDIR /home/app/function/
COPY function/requirements.txt .
RUN pip install -r requirements.txt --target=/home/app/python
WORKDIR /home/app/
USER root
COPY function function
# Allow any user-id for OpenShift users.
RUN chown -R app:app ./ && \
chmod -R 777 /home/app/python
USER app
ENV fprocess="python3 index.py"
EXPOSE 8080
HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
CMD ["fwatchdog"]
Handler.py
import os
import subprocess

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    print(os.system("terraform --version"))
    with open('terraform-local/bucket.tf', 'w') as resourcefile:
        resourcefile.write('resource "aws_s3_bucket" "ABHINAV" { bucket = "new"}')
        resourcefile.close()
    with open('terraform-local/main.tf', 'w') as mainfile:
        mainfile.write('provider "aws" {' + '\n')
        mainfile.write('access_key = ""' + '\n')
        mainfile.write('secret_key = ""' + '\n')
        mainfile.write('region = "ap-south-1"' + '\n')
        mainfile.write('}')
        mainfile.close()
    print(os.system("ls"))
    os.system("terraform init")
    os.system("terraform plan")
    os.system("terraform apply -auto-approve")
    return req
But I am still not able to use Terraform commands like terraform init, and the bucket is not created on AWS.
I have an Angular app that I am trying to dockerize. With the Dockerfile below it builds an image; how do I now run this app locally on the port I exposed, 4200? I am new to Docker, so any help will be appreciated. This will be without nginx.
Dockerfile
# --------------------------------------------------------------------------
FROM node:14 as builder
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install --production
# --------------------------------------------------------------------------
FROM gcr.io/distroless/nodejs:14
USER 9000:9000
# create the base directory
WORKDIR /apps/nodejs/gcp/
ENV HOME=/apps/nodejs/gcp/
# set the home directory
COPY --from=builder node_modules ./node_modules
COPY package.json ./
# copy readme.md
COPY README.md ./
# copy the dist to the home dir
COPY dist ./dist
# DO NOT COPY THE CERTS AND CONFIG FOLDER IN THIS IMAGE. THESE WILL BE INJECTED BY KUBERNETES.
# IN ORDER TO RUN THIS IMAGE IN LOCAL MOUNT THE HOST NODECERT AND CONFIG FOLDER TO THE DOCKER
# docker run -p 9082:9082 --rm \
#--env "NO_UPDATE_NOTIFIER=true NODE_ENV=production PORT=9082 \
#LOGCONSOLE=true CONFIGBASEPATH=/apps/nodejs/gcp/config/ CERTSBASEPATH=/apps/nodejs/gcp/nodecerts" \
#-v /apps/nodejs/gcp/nodecerts:/apps/nodejs/gcp/nodecerts -v /apps/nodejs/gcp/config/:/apps/nodejs/gcp/config/ <image name>
# TO GO INSIDE THE RUNNING CONTAINER
# docker container exec -it <container id> sh
#BUILDING Docker
# docker build -t <image name> .
# <image name>: all lowercase and if needed separated by hypen(-). eg redis-service
# port the server will be listening to.
EXPOSE 4200
CMD ng serve --host 0.0.0.0 --port 4200
Generate the build of the application
Use this command:
npm run build
Create a Dockerfile inside the build output folder and add this code to the Dockerfile:
FROM nginx:latest
LABEL maintainer="yournick#winter.com"
COPY ./ /usr/share/nginx/html/
EXPOSE 80
Finally build the docker image using this command:
docker build -t angular-dist-project:v1 .
Now run the image using this command:
docker run -d --name angular-app-container -p 2021:80 angular-dist-project:v1
Now go to the browser and navigate to http://your-ip:2021, for example:
http://localhost:2021
Result: the Angular app is successfully dockerized.
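If the page does not load, two standard Docker commands help check the container (using the container name from the run command above):
docker ps                           # confirm angular-app-container is up
docker logs angular-app-container   # inspect nginx output for errors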
NOTE: Remember that this is only one alternative; many other ways exist!
I hope this helps. All the best 🌟
I am trying to run a webserver (right now still locally) out of a docker container. I am currently going step by step to understand the different parts.
Dockerfile:
FROM node:12.2.0-alpine as build
ENV environment development
WORKDIR /app
COPY . /app
RUN cd /app/client && yarn && yarn build
RUN cd /app/server && yarn
EXPOSE 5000
CMD ["sh", "-c","NODE_ENV=${environment}", "node", "server/server.js"]
Explanation:
I have the "sh", "-c" part in the CMD command due to the fact that without it I was getting this error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:346: starting container process caused "exec:
\"NODE_ENV=${environment}\": executable file not found in $PATH":
unknown.
Building the container:
Building the container works just fine with:
docker build -t auth_example .
It takes a little while since the build context is (even after excluding all the node_modules) roughly 37MB, but that's okay.
Running the container:
Running the container and the app inside works like a charm if I do:
MyZSH: docker run -it -p 5000:5000 auth_example /bin/sh
/app # NODE_ENV=development node server/server.js
However, when running the container via the CMD command like this:
MyZSH: docker run -p 5000:5000 auth_example
Nothing happens: no errors, no nothing. The logs are empty and docker ps -a reveals that the container exited right upon start. I did some googling and tried different combinations of -t, -i, and -d, but that didn't solve it either.
Can anybody shed some light on this or point me into the right direction?
The problem is you're passing three arguments to sh -c, whereas you'd usually pass one (sh -c "... ... ...").
It's likely you don't need the sh -c invocation at all; use /usr/bin/env to set that environment variable instead (or just pass NODE_ENV in directly instead of environment):
FROM node:12.2.0-alpine as build
ENV environment development
WORKDIR /app
COPY . /app
RUN cd /app/client && yarn && yarn build
RUN cd /app/server && yarn
EXPOSE 5000
CMD /usr/bin/env NODE_ENV=${environment} node server/server.js
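Alternatively, if you do want to keep sh -c, the whole command must be a single string in the exec-form array; a sketch based on the question's original CMD:
CMD ["sh", "-c", "NODE_ENV=${environment} node server/server.js"]
Here sh expands ${environment} at container start, which the plain exec form cannot do.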
When you look at the Dockerfile for a maven build it contains the line:
VOLUME /root/.m2
Now this would be great if this were where my .m2 repository was on my Mac, but it isn't; it's in
/Users/myname/.m2
Now I could do:
But then the Linux implementation in Docker wouldn't know to look there. I want to map the Linux location to the Mac location, and have that as part of my vagrant init. Kind of like:
ln /root/.m2 /Users/myname/.m2
My question is: How do I point a docker image to my .m2 directory for running maven in docker on a mac?
How do I point a docker image to my .m2 directory for running maven in docker on a mac?
You would rather point a host folder (like /Users/myname/.m2) to a container folder (not an image).
See "Mount a host directory as a data volume":
In addition to creating a volume using the -v flag you can also mount a directory from your Docker daemon’s host into a container.
$ docker run -d -P --name web -v /Users/myname/.m2:/root/.m2 training/webapp python app.py
This command mounts the host directory, /Users/myname/.m2, into the container at /root/.m2.
If the path /root/.m2 already exists inside the container’s image, the /Users/myname/.m2 mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
This is consistent with the expected behavior of the mount command.
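For the Maven case specifically, a minimal run sketch (the image tag matches the one used in the next answer; the project path is an assumption, adjust it to your checkout):
docker run -it --rm \
  -v /Users/myname/.m2:/root/.m2 \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  maven:3.5-jdk-8 mvn clean package
This mounts your Mac's .m2 as the container's /root/.m2, so Maven inside the container reuses your local artifact cache.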
To share the .m2 folder in the build step you can overwrite the localRepository value in settings.xml.
Here is the Dockerfile snippet I used to share my local .m2 repository in Docker.
FROM maven:3.5-jdk-8 as BUILD
RUN echo \
"<settings xmlns='http://maven.apache.org/SETTINGS/1.0.0\' \
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' \
xsi:schemaLocation='http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd'> \
<localRepository>/root/Users/myname/.m2/repository</localRepository> \
<interactiveMode>true</interactiveMode> \
<usePluginRegistry>false</usePluginRegistry> \
<offline>false</offline> \
</settings>" \
> /usr/share/maven/conf/settings.xml;
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jre
EXPOSE 8080 5005
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
ENV _JAVA_OPTIONS '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005'
ENV swarm.http.port 8080
CMD ["java", "-jar", "app-swarm.jar"]
Here are the Dockerfile and docker-compose files for an example project containing one Spring service and any other services.
Spring-service Dockerfile
FROM maven:3.5-jdk-8-alpine
WORKDIR /app
COPY . src
CMD cd src ; mvn spring-boot:run
docker-compose.yml
version: '3'
services:
  account-service:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - "${HOME}/.m2:/root/.m2"
Here in docker-compose we create a volume mapping our local .m2 repo into the container's one.
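To start it, run the standard Compose command from the folder containing docker-compose.yml:
docker-compose up --build
With the volume in place, Maven inside the container resolves dependencies from the host's ~/.m2 cache instead of downloading them again on every build.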
I don't know the specifics of why the Node application does not run. Basically I added a Dockerfile to a Node.js app, and here is my Dockerfile:
FROM node:0.10-onbuild
RUN mv /usr/src/app /ghost && useradd ghost --home /ghost && \
cd /ghost
ENV NODE_ENV production
VOLUME ["/ghost/content"]
WORKDIR /ghost
EXPOSE 2368
CMD ["bash", "start.bash"]
Where start.bash looks like this:
#!/bin/bash
GHOST="/ghost"
chown -R ghost:ghost /ghost
su ghost << EOF
cd "$GHOST"
NODE_ENV=${NODE_ENV:-production} npm start
EOF
I usually run docker like so:
docker run --name ghost -d -p 80:2368 user/ghost
With that I cannot see what is going on, so I decided to run it like this:
docker run --name ghost -it -p 80:2368 user/ghost
And I got this output:
> ghost#0.5.2 start /ghost
> node index
It seems like it's starting, but when I check the status of the container with docker ps -a, it is stopped.
Here is the repo for that, but the start.bash and Dockerfile are different because I haven't committed the latest, since both are not working:
JoeyHipolito/Ghost
I managed to make it work. There is no error in the start.bash file nor in the Dockerfile; it's just that I had failed to build the image again.
With that said, you can checkout the final Dockerfile and start.bash file in my repository:
Ghost-blog__Docker (https://github.com/joeyhipolito/ghost)
At the time I write this answer, you can see it in the feature branch, feature/dockerize.
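For completeness, the rebuild that fixed it would look like this (a sketch reusing the image name from the question):
docker build -t user/ghost .
docker run --name ghost -d -p 80:2368 user/ghost
docker run always starts from the image as it existed at the last docker build, so changes to start.bash on the host are not picked up until the image is rebuilt.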