How to add files to the application path while creating a Docker image - node.js

I have deployed an HTTP trigger function in Azure Kubernetes Service. I need some files to be kept on the application path. How do I add those files while creating the Docker image?
Steps I have followed:
1. Created an HTTP trigger function locally
2. Created a Docker image of that function
3. Pushed the Docker image through the Azure portal
4. Deployed it in AKS
Here is my file structure:
[1]: https://i.stack.imgur.com/jkLLO.png
Here is my Dockerfile:
[2]: https://i.stack.imgur.com/hE4Ny.png
The myfirstfunc folder contains the index.js and function.json files. I have to add the music.mp3 and thankyou.mp4 files to my Docker container.
My Dockerfile:
FROM mcr.microsoft.com/azure-functions/node:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
WORKDIR /home/site/wwwroot
COPY package*.json ./
RUN npm install
COPY . .
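Since COPY . copies everything in the build context into the image, placing music.mp3 and thankyou.mp4 next to the Dockerfile is enough. To be explicit about the files, a sketch (file names taken from the question; the layout next to the Dockerfile is an assumption):

```dockerfile
FROM mcr.microsoft.com/azure-functions/node:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
WORKDIR /home/site/wwwroot
COPY package*.json ./
RUN npm install
# Copy the function folder and the media files into the app root
COPY myfirstfunc/ ./myfirstfunc/
COPY music.mp3 thankyou.mp4 ./
```

Inside the function, the files can then be read relative to the script root, e.g. path.join('/home/site/wwwroot', 'music.mp3').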

Related

Deploy reactjs application on the container instance

I am very new to containers, so I am unable to deploy the application on the container instance. I tried the steps below but could not resolve the issue. Please help me out. Thanks in advance.
Steps:
1. Created the ReactJS build.
2. Created the Docker image, created the container registry, and pushed the image to a container instance.
My Dockerfile is:
FROM nginx:<version>
COPY /build /usr/share/nginx/html
I built the Docker image successfully and created the container registry, but after pushing the image to the container instance I am not able to access the application on the web.
docker build -t image_name .
Can anyone help me with how to access the application through the UI?
Thanks in advance!
I tried to reproduce the same issue in my environment and got the results below.
I created the build using the command below:
npm run build
I created an example Dockerfile and built it:
# docker build -t image_name .
FROM node:16.13.1-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install react-scripts -g --silent
COPY . ./
RUN npm run build
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I created the container registry and enabled access to push the image to it.
I logged in to the container registry using the login server credentials:
docker login server_name
username: XXXX
password: XXXXX
I tagged and pushed the image to the container registry:
docker tag image_name login-server/image_name
docker push login-server/image_name
I created a container instance to deploy the container image. While creating it, under Networking I set the DNS name label.
Using the FQDN link I was able to access the application through the UI.
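The container-instance creation above can also be done from the CLI. A sketch with assumed resource names (the --dns-name-label flag sets the DNS label mentioned above):

```shell
# Assumed names: myResourceGroup, mycontainer; image as tagged above
# (a private registry additionally needs --registry-username/--registry-password)
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image login-server/image_name \
  --dns-name-label my-react-app \
  --ports 80
# The app is then reachable at http://<dns-label>.<region>.azurecontainer.io
```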

Does GCP Cloud Build Docker remove files created during the Dockerfile execution?

I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), the generation happens not in Cloud Build itself but in the Dockerfile (a simple node script executed via npx). Locally this works perfectly fine and my Docker image does contain those generated files. But whenever I throw this Dockerfile into Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and so on but found no error (such as a permission error or something similar).
Is there any flag or something I am missing here that prevents my Dockerfile from generating those files and storing them in the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the cloudbuild.yaml in which the docker build command is located. Afterwards the image is pushed to the Artifact Registry and to Cloud Run. On Cloud Run itself the files are gone. The steps in between I can't check, but when building locally the files are generated and persist in the image.
Dockerfile
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build (stuff before and after is just normal deployment to Cloud Run stuff)
...
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
secretEnv: ['GIT_ACCESS_TOKEN']
...
I figured it out. Somehow the build process does not fail when a RUN statement crashes. This led me to think there was no problem, when in fact the generation script could not authorize. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
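Applied to the build step above, the fix is a single extra flag; a sketch of the adjusted step, with everything else copied unchanged from the original cloudbuild.yaml:

```yaml
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  # --network=cloudbuild lets RUN steps authenticate with the build's service account
  args: [ '-c', 'docker build --network=cloudbuild --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']
```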

Docker container bound to local volume doesn't update

I created a new docker container for a Node.js app.
My Dockerfile is:
FROM node:14
# app directory
WORKDIR /home/my-username/my-proj-name
# Install app dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 3016
CMD ["node", "src/app.js"]
After this I ran:
docker build . -t my-username/node-web-app
Then I ran:
docker run -p 8160:3016 -d -v /home/my-username/my-proj-name:/my-proj-name my-username/node-web-app
The app is successfully hosted at my-public-ip:8160.
However, any changes I make on my server do not propagate to the Docker container. For example, if I touch test.txt on my server, I will not be able to GET /test.txt online or see it in the container. The only way I can pick up changes is to rebuild the image, which is quite tedious.
Did I miss something when binding the volume? How can I make the changes I make locally also appear in the container?
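No answer is recorded here, but note the path mismatch in the commands above (an observation, not a confirmed fix): the Dockerfile's WORKDIR is /home/my-username/my-proj-name, while the volume is mounted at /my-proj-name, so the running app reads the files baked into the image, never the mounted directory. Mounting over the WORKDIR would look like:

```shell
# Mount the host project directory over the container path the app
# actually runs from (the WORKDIR in the Dockerfile)
docker run -p 8160:3016 -d \
  -v /home/my-username/my-proj-name:/home/my-username/my-proj-name \
  my-username/node-web-app
```

Static files served from disk would then appear immediately; code changes would still need a process restart (e.g. a watcher like nodemon), since node loads src/app.js once at startup.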

Unable to mount volume in azure app service?

I am trying to deploy a Node SDK on Azure App Service through a Docker container. In my SDK I have mounted a connection file and written a docker-compose file for this. But when I deploy it to Azure I get the error below.
InnerException: Docker.DotNet.DockerApiException, Docker API responded
with status code=InternalServerError, response={"message":"invalid
volume specification: ':/usr/src/app/connection.json'"}
docker-compose.yml
version: '2'
services:
  node:
    container_name: node
    image: dhiraj1990/node-app:latest
    command: [ "npm", "start" ]
    ports:
      - "3000:3000"
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot/connection.json:/usr/src/app/connection.json
connection.json is present at this path /site/wwwroot.
Dockerfile
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 3000
Please tell me what the issue is.
Update:
The problem is that you cannot mount a single file into persistent storage; the mount source must be a directory. So the correct volumes should be set like below:
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/usr/src/app
And you also need to enable the persistent storage in your Web app for the container by setting the environment variable WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE. For more details, see Add persistent storage.
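Enabling that setting can also be done from the CLI; a sketch with assumed app and resource-group names:

```shell
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name my-node-app \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
```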
Persistent storage is just used to persist data from your container. If you want to share files with the container, I suggest mounting an Azure File share to it instead. But you need to pay attention to the caution here:
Linking an existing directory in a web app to a storage account will delete the directory contents.
So you need to mount the Azure File share to a new directory that contains no necessary files. You can find the detailed steps in Configure Azure Files in a Container on App Service; it supports both Windows and Linux containers.

Project dependency not found when running DockerFile build command on Azure DevOps

This is the Dockerfile generated by VS2017.
I changed it a little for use on Azure DevOps.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["MyProject.WebApi/MyProject.WebApi.csproj", "MyProject.WebApi/"]
COPY ["MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"
COPY . .
WORKDIR "/src/MyProject.WebApi"
RUN dotnet build "MyProject.WebApi.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.WebApi.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Solution structure:
MyProject.sln
- MyProject.Common
  ...
- MyProject.WebApi
  ...
  Dockerfile
I have created a build pipeline in Azure DevOps to run the Docker build with these steps:
1. Get Sources step from Azure Repos Git
2. Agent Job (Hosted Ubuntu 1604)
3. Command Line script: docker build -t WebApi .
I have this error
2019-02-02T18:14:33.4984638Z ---> 9af3faec3d9e
2019-02-02T18:14:33.4985440Z Step 7/17 : COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
2019-02-02T18:14:33.4999594Z COPY failed: stat /var/lib/docker/tmp/docker-builder671248463/MyProject.Common/MyProject.Common.csproj: no such file or directory
2019-02-02T18:14:33.5327830Z ##[error]Bash exited with code '1'.
2019-02-02T18:14:33.5705235Z ##[section]Finishing: Command Line Script
The attached screenshot shows the working directory used.
I don't understand whether I have to change something inside the Dockerfile or in the Command Line script step on DevOps.
This is just a hunch, but considering your Dockerfile is located under MyProject.WebApi and you want to copy files from MyProject.Common, which is on the same level, you might need to specify a different context root directory when running docker build:
docker build -t WebApi -f Dockerfile ../
When Docker builds an image it collects a context: the set of files that are accessible during the build and can be copied into the image.
When you run docker build -t WebApi . inside the MyProject.WebApi directory, all files under . (unless you have a .dockerignore file), which here means MyProject.WebApi, are included in the context. But MyProject.Common is not part of the context, and thus you can't copy anything from it.
Hope this helps
EDIT: Perhaps you don't need to specify the Working Directory (shown in the screenshot); then the command changes to:
docker build -t WebApi -f MyProject.WebApi/Dockerfile .
In this case Docker will use Dockerfile located inside MyProject.WebApi and include all files belonging to the solution into the context.
You can also read about context in the Extended description for the docker build command in the official documentation.
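As an alternative to a Command Line script, the Docker task in Azure Pipelines lets you set the build context explicitly. A sketch (input names are from the built-in Docker@2 task; the repository name is an assumption):

```yaml
- task: Docker@2
  inputs:
    command: build
    repository: webapi
    # Dockerfile inside the project folder, context at the solution root
    Dockerfile: MyProject.WebApi/Dockerfile
    buildContext: '$(Build.SourcesDirectory)'
```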
