Unable to mount volume in Azure App Service?

I am trying to deploy a Node SDK on Azure App Service through a Docker container. In my SDK I mount a connection file, and I have written a docker-compose file for this. But when I deploy it to Azure I get the error below.
InnerException: Docker.DotNet.DockerApiException, Docker API responded
with status code=InternalServerError, response={"message":"invalid
volume specification: ':/usr/src/app/connection.json'"}
docker-compose.yml
version: '2'
services:
  node:
    container_name: node
    image: dhiraj1990/node-app:latest
    command: [ "npm", "start" ]
    ports:
      - "3000:3000"
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot/connection.json:/usr/src/app/connection.json
connection.json is present at the path /site/wwwroot.
Dockerfile
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 3000
Please tell me what the issue is.

Update:
The problem is that you cannot mount a single file from the persistent storage; it has to be a directory. So the volumes should be set like below:
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/usr/src/app
You also need to enable persistent storage in your Web App for Containers by setting the app setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE. For more details, see Add persistent storage.
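As a quick sketch (the resource group and app name are placeholders, not values from the question), this setting can be applied with the Azure CLI:

az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE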
The persistent storage is just used to persist your own data from the container. If you instead want to share existing files with the container, I suggest you mount an Azure Files share into the container. But you need to pay attention to the caution here:
Linking an existing directory in a web app to a storage account will
delete the directory contents.
So you need to mount the Azure Files share to a new directory that does not contain necessary files. You can get more details about the steps in Configure Azure Files in a Container on App Service; it supports not only Windows containers but also Linux containers.
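If you go the Azure Files route, the share can be mounted with az webapp config storage-account add; a rough sketch with placeholder names (the mount path here is an example, not taken from the question):

az webapp config storage-account add \
  --resource-group <resource-group> \
  --name <app-name> \
  --custom-id connection-files \
  --storage-type AzureFiles \
  --account-name <storage-account> \
  --share-name <file-share> \
  --access-key <storage-key> \
  --mount-path /usr/src/app/config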

Related

Deploy reactjs application on the container instance

I am very new to containers, so I am unable to deploy the application on the container instance.
I tried the steps below but could not resolve the issue. Please help me out. Thanks in advance.
Steps:
1. I created the React.js build.
2. Created the Docker image, created the container registry, and pushed the image to deploy it on a container instance.
My Dockerfile is:
FROM nginx:<version>
COPY /build /usr/share/nginx/html
I built that image and the Docker image was created successfully.
I created the container registry, but after pushing the Docker image and deploying it to a container instance, I am not able to access the application through the web.
docker build -t image_name .
Can anyone help me with how to access the application through the UI?
Thanks in advance!
I tried to reproduce the same issue in my environment and got the results below.
I created the production build using the command below:
npm run build
I created an example Dockerfile and built the image with the command below:
# docker build -t image_name .
FROM node:16.13.1-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install react-scripts -g --silent
COPY . ./
RUN npm run build
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx","-g","daemon off"]
I created the container registry.
Enabled admin access so that the image could be pushed to the container registry.
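For reference, a registry like this can also be created from the Azure CLI; a sketch with placeholder names, not the poster's actual values:

az acr create \
  --resource-group <resource-group> \
  --name <registry-name> \
  --sku Basic \
  --admin-enabled true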
I logged in to the container registry using the login server credentials:
docker login server_name
username:XXXX
password:XXXXX
I tagged and pushed the image to the container registry:
docker tag image_name login-server/image_name
docker push login-server/image_name
I created a container instance to deploy the container image.
While creating it, I provided a DNS name label under Networking and created the instance.
Using the FQDN link I was able to access the application through the UI.
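The same deployment can also be done from the CLI; a rough sketch with placeholder names (registry, image, and credentials are assumptions, substitute your own):

az container create \
  --resource-group <resource-group> \
  --name react-app \
  --image <registry-name>.azurecr.io/image_name:latest \
  --registry-login-server <registry-name>.azurecr.io \
  --registry-username <username> \
  --registry-password <password> \
  --dns-name-label <dns-label> \
  --ports 80

The application is then reachable at http://<dns-label>.<region>.azurecontainer.io.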

Docker - volumes explanation

As far as I know, a volume in Docker is persistent data for the container, which can map a local folder to a container folder.
Early on, I was facing the Error: Cannot find module 'winston' issue in Docker, which is mentioned in:
docker - Error: Cannot find module 'winston'
Someone told me in this post:
Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory contains node_modules in the container.
After I removed volumes: - ./:/server, the above problem was solved.
However, another problem occurred.
[solved but want explanation]nodemon --legacy-watch src/ not working in Docker
I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason for it.
Question
What is the cause and explanation for the above two issues?
What happens between build and volumes, and what is the relationship between build and volumes in docker-compose.yml?
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: .                 # <------ it takes the Dockerfile in the current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server           # <------ how and when does this line work?
    ports:
      - "3030:3030"
    depends_on:
      - test-db
When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.
If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.
If you also mount an anonymous volume over the node_modules directory (/server/node_modules) then only the first time you run the container the node_modules directory from the image gets copied into the volume, and then the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
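As an illustration of that pattern (a sketch based on the compose file above, not something from the original question), the two mounts together look like this:

services:
  test-web:
    build: .
    volumes:
      - ./:/server             # bind mount: host source replaces the image's /server
      - /server/node_modules   # anonymous volume: keeps the node_modules installed in the image
    ports:
      - "3030:3030"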
When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
The upshot of this is that if you don't have volumes at all:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.
If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build, just bind-mounting the host directory over an unmodified node image. There's not really a benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.

How to use node.js on container that has a "WORKDIR /app" in its image to access a shared volume (volumes_from)?

I have two containers: the first ('client') writes files to a volume while the second ('server') needs to read them (this is a simplified version of my requirement). My problem is that I don't know how to access the files from the second container using Node.js when its image sets WORKDIR /app.
(I've seen examples of how to access a volume using volumes_from, like this one: https://phoenixnap.com/kb/how-to-share-data-between-docker-containers which works in my tests, but it doesn't demonstrate my settings.)
this is my docker-compose file (simplified):
volumes:
  aca_storage:

services:
  server-test:
    image: aca-server:0.0.1
    container_name: aca-server-test
    command: sh -c "npm install && npm run start:dev"
    ports:
      - 8080:8080
    volumes_from:
      - client-test:ro
    environment:
      NODE_ENV: development

  client-test:
    image: aca-client:0.0.1
    container_name: aca-client-test
    ports:
      - 81:80
    volumes:
      - aca_storage:/app/files_to_share
This is the Dockerfile for the aca-server image:
FROM node:alpine
WORKDIR /app
COPY ["./package*.json", "./"]
RUN npm install -g nodemon
RUN npm install --production
COPY . .
CMD [ "node", "main.js"]
On my server's Node app I'm trying to read files like this:
const fs = require('fs');

fs.readdir(PATH_TO_SHARED_VOLUME, function (err, files) {
  if (err) {
    return console.log('Unable to scan directory: ' + err);
  }
  files.forEach(function (file) {
    console.log(file);
  });
});
but all my attempts to fill PATH_TO_SHARED_VOLUME with a valid path failed. For example:
/aca_storage
/aca_storage/_data
/acaproject_aca_storage
/acaproject_aca_storage/_data
(acaproject is the VS Code workspace name, which I noticed is added automatically)
Using the Docker CLI on the aca-server-test container, I get:
/app #
and ls shows only the files/folders of my Node.js app, but does not give access to the aca_storage volume the way the examples I can find on the internet do.
If relevant, my environment is:
Windows 10 Home with WSL2
Docker Desktop set as Linux Containers
I'm a noob with Linux and Docker, so as many details as possible would be appreciated.
You can mount the same storage into different containers on different paths. I'd avoid using volumes_from:, which is inflexible and a little bit opaque.
version: '3.8'
volumes:
  aca_storage:
services:
  client:
    volumes:
      - aca_storage:/app/data
  server:
    volumes:
      - aca_storage:/app/files_to_share
In each container the mount path needs to match what the application code is expecting, but they don't necessarily need to be the same path. With this configuration, in the server code, you'd set PATH_TO_SHARED_VOLUME = '/app/files_to_share', for example.
If VSCode is adding a bind mount to replace the server image's /app directory with your local development code, volumes_from: will also copy this mount into the client container. That could result in odd behavior.
Sharing files between containers adds many complications; it makes it hard to scale the setup and to move it to a clustered setup like Docker Swarm or Kubernetes. A simpler approach could be for the client container to HTTP POST the data to the server container, which can then manage its own (unshared) storage.
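A minimal sketch of that alternative, assuming an Express server and a hypothetical /files endpoint (none of these names come from the original project):

// main.js in the server container -- hypothetical endpoint; the client POSTs JSON instead of sharing a volume
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
app.use(express.json());

// The server keeps its own (unshared) storage
const DATA_DIR = '/app/data';
fs.mkdirSync(DATA_DIR, { recursive: true });

app.post('/files', (req, res) => {
  const filename = path.join(DATA_DIR, `${Date.now()}.json`);
  fs.writeFile(filename, JSON.stringify(req.body), (err) => {
    if (err) return res.status(500).send(err.message);
    res.status(201).send({ stored: filename });
  });
});

app.listen(8080);

The client container would then call http://server-test:8080/files over the compose network instead of writing to aca_storage.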

Error passing docker secrets to azure web app 'No such file or directory: '/run/secrets/'

I am relatively new to Docker and am currently building a multi-container Dockerized Azure web app (in Flask). However, I am having some difficulty with secret management. I had successfully built a version that stored app secrets in environment variables, but based on some recent reading it has come to my attention that this is not a good idea. I've been attempting to update my app to use Docker secrets, but have had no luck.
I have successfully created the secrets based on this post:
how do you manage secret values with docker-compose v3.1?
I have deployed the stack and verified that the secrets are available in both containers in /run/secrets/. However, when I run the app in Azure I get an error.
Here are the steps I've taken to launch the app in azure.
docker swarm init --advertise-addr XXXXXX
$ echo "This is an external secret" | docker secret create my_external_secret
docker-compose build
docker push
docker stack deploy -c *path-to*/docker-compose.yml webapp
Next, I restart the Azure web app to pull the latest images.
The basic structure of the docker-compose file is below.
version: '3.1'
services:
  webapp:
    build: .
    secrets:
      - my_external_secret
    image: some_azure_registry/flask_site:latest
  celery:
    build: .
    command: celery worker -A tasks.celery --loglevel=INFO -P gevent
    secrets:
      - my_external_secret
    image: some_azure_registry.azurecr.io/flask_site_celery:latest

secrets: # top-level secrets block
  my_external_secret:
    external: true
However, when I run the app in Azure I get:
No such file or directory: '/run/secrets/my_external_secret'
I can attach a shell to the container and successfully run:
python
open('/run/secrets/my_external_secret', 'r').read().strip()
But when the above line is executed by the webapp it fails with the no file or directory error. Any help would be greatly appreciated.
Unfortunately, top-level secrets in docker-compose are not supported in Azure Web App for Containers. Take a look below:
Supported options
command
entrypoint
environment
image
ports
restart
services
volumes
Unsupported options
build (not allowed)
depends_on (ignored)
networks (ignored)
secrets (ignored)
ports other than 80 and 8080 (ignored)
For more details, see Docker Compose options.

How to set up a Node.js development environment using Docker Compose

I want to create a complete Node.js environment for developing any kind of application (script, API service, website, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker in order to have a portable, multi-OS environment.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask what the best patterns are to achieve these goals:
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the Docker container, so that it is also debuggable from an IDE on the host.
Since I want a development environment suitable for every project, I would like a container where, once it has started, I can log in using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Use the -v volume option (a volumes: entry in docker-compose.yml) to share the host directory inside the Docker container.
Having the node_modules directory visible on both the host and the Docker container, so that it is also debuggable from an IDE on the host.
Same as above.
Since I want a development environment suitable for every project, I would like a container where, once it has started, I can log in using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these for interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it <container-name> bash.
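Putting it together, a sketch of what the app service could look like for this workflow (the anonymous node_modules volume mirrors the compose file in the question; the bash override is an assumption, adjust as needed):

services:
  app:
    build: ./
    stdin_open: true          # keep STDIN open
    tty: true                 # allocate a pseudo-TTY
    command: bash             # assumption: override "yarn start" so the container idles in a shell
    volumes:
      - "./app:/app"          # share the working directory with the host
      - "/app/node_modules"   # keep the container's installed node_modules
    ports:
      - "8080:80"

You can then open a shell with docker-compose exec app bash.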