I created an image with this Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
My WORKDIR is /usr/src/app, and I copy my project using COPY . .
I created a container and uploaded it to Azure Web Service. I searched the Kudu bash console for the workdir, or any other file, and I can't find it. The container runs and does what it should, but the files are nowhere to be seen. How can I find them?
For your question, the first thing you need to know is that the Kudu bash console does not run inside your container; they are two separate systems, which is why you cannot find your files there. If you want to connect to your container, you need to enable SSH in the container and then SSH into it; there you will find the files in the working directory.
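For reference, a minimal sketch of what enabling SSH might look like for the Dockerfile above, following Azure App Service's documented convention of a fixed root password "Docker!" and sshd listening on port 2222 (the sshd_config and init.sh files here are illustrative and would need to be provided alongside the Dockerfile):
FROM node:12
# Install the SSH server and set the fixed root password Azure's SSH bridge expects
RUN apt-get update \
 && apt-get install -y --no-install-recommends openssh-server \
 && echo "root:Docker!" | chpasswd
# Your sshd_config must set "Port 2222" for the App Service SSH feature
COPY sshd_config /etc/ssh/
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# init.sh (illustrative) runs "service ssh start" and then "npm start"
COPY init.sh /usr/local/bin/init.sh
RUN chmod +x /usr/local/bin/init.sh
EXPOSE 3000 2222
CMD ["/usr/local/bin/init.sh"]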
Related
Problem
Currently I can only create simple Dockerfiles, and I have no experience with docker-compose. I have taken over a project where a static page was
built with Nuxt.js. I want to build a development environment where I can work
with hot reload, or at least a setup that immediately transfers changes to the nginx service.
Currently the dist folder is only copied into the nginx/html folder when I build a container; only then can I view the page.
Question:
What can I do so that the nginx/html folder is overwritten (hot reload) when I save a Vue file?
I found an interesting approach on SO (option 4): https://stackoverflow.com/a/64010631/8466673
But I don't know how to set up docker-compose from my single Dockerfile.
My dockerfile
# build project
FROM node:14 AS build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
# create nginx server
FROM nginx:1.19.0-alpine AS prod-stage
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
I created a new docker container for a Node.js app.
My Dockerfile is:
FROM node:14
# app directory
WORKDIR /home/my-username/my-proj-name
# Install app dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 3016
CMD ["node", "src/app.js"]
After this I ran:
docker build . -t my-username/node-web-app
Then I ran: docker run -p 8160:3016 -d -v /home/my-username/my-proj-name:/my-proj-name my-username/node-web-app
The app is successfully hosted at my-public-ip:8160.
However, any changes I make on my server do not propagate to the Docker container. For example, if I touch test.txt on my server, I will not be able to GET /test.txt online or see it in the container. The only way I can apply changes is to rebuild the image, which is quite tedious.
Did I miss something when binding the volume? How can I make it so that the changes I make locally also appear in the container?
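As a hedged sketch of where this can go wrong: the -v flag above mounts the host directory to /my-proj-name, while the image's WORKDIR (and thus the running app) uses /home/my-username/my-proj-name, so the app never sees the mounted files. Aligning the mount target with the path the app reads from might look like this:
# Mount the host project onto the same path the image's WORKDIR uses,
# so files touched on the host are the ones the app actually serves
docker run -p 8160:3016 -d \
  -v /home/my-username/my-proj-name:/home/my-username/my-proj-name \
  my-username/node-web-app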
This is the Dockerfile generated by VS2017. I changed it a little to use it on Azure DevOps.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApi.csproj", "WebApi/"]
COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProj.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"
COPY . .
WORKDIR "/src/MyProject.WebApi"
RUN dotnet build "MyProject.WebApi.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.WebApi.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Solution structure
MyProject.sln
- MyProject.Common
  ...
- MyProject.WebApi
  ...
  Dockerfile
I have created a Build Pipeline under Azure DevOps to run the Docker build with these steps:
Get Sources Step from Azure Repos Git
Agent Job (Hosted Ubuntu 1604)
Command Line script docker build -t WebApi .
I get this error:
2019-02-02T18:14:33.4984638Z ---> 9af3faec3d9e
2019-02-02T18:14:33.4985440Z Step 7/17 : COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
2019-02-02T18:14:33.4999594Z COPY failed: stat /var/lib/docker/tmp/docker-builder671248463/MyProject.Common/MyProject.Common.csproj: no such file or directory
2019-02-02T18:14:33.5327830Z ##[error]Bash exited with code '1'.
2019-02-02T18:14:33.5705235Z ##[section]Finishing: Command Line Script
Attached is a screenshot of the working directory used.
I don't understand whether I have to change something inside the Dockerfile or in the Command Line script step on DevOps.
This is just a hunch, but considering that your Dockerfile is located under MyProject.WebApi and you want to copy files from MyProject.Common, which is on the same level, you might need to specify a different context root directory when running docker build:
docker build -t WebApi -f Dockerfile ../
When Docker builds an image it collects a context: the set of files that are accessible during the build and can be copied into the image.
When you run docker build -t WebApi . from inside the MyProject.WebApi directory, everything under . (that is, under MyProject.WebApi, minus anything excluded by a .dockerignore file) is included in the context. MyProject.Common is not part of that context, so you can't copy anything from it.
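As an aside on the .dockerignore mention: a small illustrative .dockerignore at the context root keeps build output and VCS data out of the context, for example:
# Illustrative .dockerignore for a .NET solution
**/bin/
**/obj/
.git/
.vs/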
Hope this helps
EDIT: Perhaps you don't need to specify the Working Directory (shown in the screenshot); then the command would change to:
docker build -t WebApi -f MyProject.WebApi/Dockerfile .
In this case Docker will use the Dockerfile located inside MyProject.WebApi and include all files belonging to the solution in the context.
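For illustration, assuming the solution root becomes the build context as above, the restore-related lines of the Dockerfile would then resolve consistently like this (paths mirror the solution structure from the question):
# Context = solution root (the directory containing MyProject.sln)
COPY ["MyProject.WebApi/MyProject.WebApi.csproj", "MyProject.WebApi/"]
COPY ["MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"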
You can also read about context in the Extended description for the docker build command in the official documentation.
How do I execute an SSHFS mount to mount a volume from a different server into my Docker image / Docker container?
The Docker container contains a simple Node.js web server. This web page displays pictures, and I have to get those image files from a different server with a different IP.
So far I had this working without a Docker container: a cron job executed the SSHFS mount on my system, the Node.js server then had the files, and I was able to display the pictures.
Now I have to do the same with a Docker container. I'd like to have the volume inside the container, but I don't think docker run -v /path/ [...] works here, because that would require the files to be on the host the container runs on.
Is it possible to add the SSHFS mount to the docker run command or the Dockerfile? Are there any other alternatives?
~ cat Dockerfile
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 80
CMD [ "npm", "start" ]
I'm using AWS Elastic Beanstalk Multicontainer environment.
The problem is that when using the image node:0.12 or node:argon, the container starts and then exits immediately:
node:argon "node" About a minute ago Exited (0)
After investigating we found that we must build our own image with commands that start the app when the container is initialized.
My question is: is there any public Node image that does this? We don't want to create a private repository or build any images.
The following is a simple tutorial using the argon Docker image:
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
Sample Dockerfile:
FROM node:argon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]