How to host a docker-compose app on Azure App Service?

I have a very basic web app that I am trying to host on Azure App Service.
The only error log I am seeing so far is "stopping site because it failed during startup".
It is a very basic Express app. I was able to get the WordPress tutorial to work, but I'm not sure what I am doing wrong with this app or what is different about it.
I deployed it from the Azure CLI with the config file pointed at the docker-compose.yml,
and I set the multicontainer-config-type to "compose".
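For reference, the deployment command looked roughly like this (the resource names are placeholders, not the ones I actually used):
az webapp create \
  --resource-group <my-resource-group> \
  --plan <my-app-service-plan> \
  --name <my-app-name> \
  --multicontainer-config-type COMPOSE \
  --multicontainer-config-file docker-compose.yml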

I tried to reproduce the same in my environment and host a docker-compose app on Azure App Service.
Follow the steps below to host a docker-compose app on Azure App Service.
I developed the source code on my local machine with the VM configuration below and wrote a Dockerfile to produce the image from the source code.
VM configuration:
Ubuntu: 20.04
Docker version: 23.0.1
Docker Compose version: 1.29.2
I have created a simple Node.js hello-world application.
Make sure the application is exposed on port 8080. Below is my server.js file:
'use strict';
const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();
app.get('/', (req, res) => {
  res.send('<h2 style="color: purple">Welcome to Docker Compose on Azure web app v2.0</h2>');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
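Note that the Dockerfile below runs npm start, so package.json needs a matching start script. A minimal sketch (the express version here is an assumption; any recent 4.x works):
package.json:
{
  "name": "testnodewebapp",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}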
I have used the Dockerfile and docker-compose.yaml below.
Dockerfile:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
docker-compose.yaml:
version: '3.8'
services:
  app:
    build: .
    container_name: testnodewebapp
    restart: always
    ports:
      - '8080:8080'
    volumes:
      - .:/app
      - /app/node_modules
This "build ." command will be used to find the docker file in the current working directory(represented by ".").
the resulting container will be named "testnodewebapp" as we have mentioned the container name in "docker-composed.yaml" file.
If the docker file is not in the referenced path in the docker-compse.yaml it will throw an error while executing the docker-composed.yaml file as below.
If the file is in the current directory, you are able to execute docker-composed .yaml file without any issues as below.
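To verify the stack locally before pushing anything, you can build and run it and hit the endpoint; something like:
docker-compose up -d --build
curl http://localhost:8080/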
I then created the Azure Web App.
Note: Make sure that you select the Docker Compose (Preview) option while creating the web app.
Next, I created an Azure Container Registry.
Note: Make sure to enable the Admin user on the registry.
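If you prefer the CLI over the portal, the registry can be created with the Admin user already enabled; a sketch with placeholder names:
az acr create \
  --resource-group <my-resource-group> \
  --name <myregistryname> \
  --sku Basic \
  --admin-enabled true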
Push the images to the Azure Container Registry using the commands below.
docker tag <image-name>:<tag> <acr-login-server>/<image-name>:<tag>
docker login <acr-login-server>
docker push <acr-login-server>/<image-name>:<tag>
Once you push the images to ACR, verify that they appear in the portal.
Then change the web app's deployment settings as below.
Make sure to add the Config file in the Azure Web App:
version: '3.8'
services:
  app:
    image: acrvijay123.azurecr.io/nodewebapp:latest
    ports:
      - "8000:8080"
    restart: always
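The same configuration can also be applied from the CLI instead of the portal; a sketch with placeholder names:
az webapp config container set \
  --resource-group <my-resource-group> \
  --name <my-app-name> \
  --multicontainer-config-type COMPOSE \
  --multicontainer-config-file docker-compose.yml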
The docker-compose app is now running successfully on App Service.

Related

How to configure port of React app fetch when deploying to ECS fargate cluster with two tasks

I have two docker images that communicate fine when deployed locally, but I'm not sure how to set up my app to correctly make fetch() calls from the React app to the correct port on the other app when they're both deployed as tasks on the same ECS cluster.
My react app uses a simple fetch('/fooService/' + barStr) type call, and my other app exposes a /fooService/{barStr} endpoint on port 8081.
For local deployment and local docker, I used setupProxy.js to specify a proxy:
const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function(app) {
  app.use(createProxyMiddleware('/fooService', {
    target: 'http://fooImage:8081',
    changeOrigin: true
  }));
}
In ECS this seems to do nothing, though. I see the setupProxy run when the image starts up, but the requests from my react app just go directly to {sameIPaddress}/fooService/{barStr}, ignoring the proxy specification entirely. I can see in the browser that the requests are being made over port 80. If these requests are made on port 8081 manually, they complete just fine, so the port is available and the service is running.
I've exposed port 8081 on the other task, and I can access it externally with no problem; I am just unclear on how to design my react app to point to it, since I won't necessarily know what IP address I will be assigned until the task launches. If I use a relative path, I cannot specify the port.
What's the idiomatic way to specify the destination of my fetch requests in this context?
Edit: If it is relevant, here is how the docker images are configured. They are built automatically on dockerhub and work fine if I deploy them in my local docker instance.
docker-compose.yaml
version: "3.8"
services:
fooImage:
image: myname/foo-image:0.1
build: ./
container_name: server
ports:
- '8081:8081'
barImage:
image: myname/bar-image:0.1
build: ./bar
container_name: front
ports:
- '80:80'
stdin_open: true
tty: true
Dockerfile - foo image
#
# Build stage
#
FROM maven:3.8.5-openjdk-18-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:18-alpine
COPY --from=build /home/app/target/*.jar /usr/local/lib/app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "/usr/local/lib/app.jar"]
Dockerfile - bar image
FROM node:17-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
(Screenshots of the ECS Foo and Bar task port mappings omitted.)
The solution to this issue was to change the proxy target to "http://localhost:8081". Per Amazon support:
For the quickest resolution of your issue, you can try to change your proxy
configuration from "http://server:8081" to "http://localhost:8081" and
the proxy should work.
That's because when using Fargate with awsvpc network mode, containers running in a Task share the same network namespace, which means containers can communicate with each other through localhost (e.g. when the back-end container listens on port 8081, the front-end container can access it via localhost:8081). When using docker-compose [2], you can use the hostname of another container specified in the same docker-compose file. So proxying back-end traffic with "http://server:8081" in Fargate won't work and should be changed to "http://localhost:8081".
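With that change, the setupProxy.js from the question would look roughly like this for Fargate's awsvpc mode (same middleware, only the target swapped):
const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function(app) {
  // In awsvpc mode the containers in a task share a network namespace,
  // so the back-end is reachable on localhost rather than its service name.
  app.use(createProxyMiddleware('/fooService', {
    target: 'http://localhost:8081',
    changeOrigin: true
  }));
}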

How to expose port of Docker container for localhost

I have dockerized a simple Node.js REST API. I'm building the container with this API on my Raspberry Pi. This Node.js app needs to be installed on a few Raspberry Pi devices; on every device, the user will be using an Angular app hosted on my server, and this client app will send requests to the API.
I can connect to the API by using the IP of the container, which can be obtained inside the container VM by
using ifconfig: http://{containerIp}:2710. The problem is that my Angular app is using HTTPS and the API is using HTTP, so there will be a Mixed Content error. This approach may also create an additional problem with the container IP, because every machine can have a different container IP, but there is one client app with one config.
I suppose that if I configure access to this API via http://localhost:2710 there will be no error, but I don't know how I can make this container visible on the Raspberry Pi's localhost.
This is how I'm building my Docker environment.
Dockerfile
FROM arm32v7/node
RUN mkdir -p /usr/src/app/back
WORKDIR /usr/src/app/back
COPY package.json /usr/src/app/back/
RUN npm install
COPY . /usr/src/app/back/
EXPOSE 2710
CMD [ "node", "index.js" ]
Compose
services:
  backend:
    image: // path to my repo with image
    ports:
      - 2710:2710
    container_name: backend
    privileged: true
    restart: always
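For what it's worth, the ports mapping above already publishes the container port on the host, so on the Raspberry Pi itself the API should be reachable via localhost without extra configuration. A quick check (assuming the container is running):
curl http://localhost:2710/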

Unable to deploy node image (NestJS) on AWS Elastic Beanstalk

I am quite new to AWS as well as containerization technology. What I am trying to achieve is deploying a Node image to AWS.
As I am working with NestJS, my main.ts bootstrap method is:
async function bootstrap() {
  const port = 5000;
  const app = await NestFactory.create(AppModule);
  app.setGlobalPrefix('api/v1');
  await app.listen(port);
  Logger.log(`Server is running on port ${port}`, "Bootstrap");
}
bootstrap();
I am also using Travis CI to ship my container to AWS
My Dockerfile:
# Download base image
FROM node:alpine as builder
# Define Base Directory
WORKDIR /usr/app/Api
# Copy and restore packages
COPY ./package*.json ./
RUN npm install
# Copy all other directories
COPY ./ ./
# Setup base command
CMD ["npm", "run", "start"]
My .travis.yml file, which is the Travis CI config:
sudo: required
services:
  - docker
branches:
  only:
    - master
before_install:
  - docker build -t xx/api .
script:
  - docker run xx/api npm run test
deploy:
  provider: elasticbeanstalk
  region: "us-east-2"
  app: "api"
  env: "api-env"
  bucket_name: "name"
  bucket_path: "api"
  on:
    branch: master
  access_key_id: "$AWS_ACCESS_KEY"
  secret_access_key: "$AWS_SECRET_KEY"
Every time code is pushed from Travis CI, my Elastic Beanstalk environment starts building and then fails.
So I started googling to solve the issue. What I could find is that I need to expose port 80 using NGINX:
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My question is: how should I incorporate NGINX into my Dockerfile? My application is not serving static content, so if I move all my build artifacts to /usr/share/nginx/html it will simply not work. What I need is to run my Node server for the app and, at the same time, run another container with NGINX that exposes port 80 and proxies my requests.
How should I do that? Any help?
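One common pattern matching what is described above, sketched with assumed names (the api service name, port 5000 from the bootstrap code, and the file paths are all assumptions, not a tested Beanstalk config):
nginx.conf:
server {
  listen 80;
  location / {
    # Proxy everything to the Node container; "api" is the
    # compose service name defined below.
    proxy_pass http://api:5000;
  }
}
docker-compose.yml:
version: '3'
services:
  api:
    build: .
  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    depends_on:
      - api
This keeps the Node app untouched and lets NGINX own port 80, forwarding requests instead of serving static files.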

Azure Web App For Containers - Correct Port Setup and Logging Issue

I have an R Shiny server set up in a docker container, and I am trying to deploy it as an Azure multi-container web app together with some other services. I've already succeeded in deploying it in the single-container setting, for which I set the WEBSITES_PORT app setting to 3838.
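For reference, that setting can be applied from the CLI; a sketch with placeholder names:
az webapp config appsettings set \
  --resource-group <my-resource-group> \
  --name <my-app-name> \
  --settings WEBSITES_PORT=3838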
In my attempt to move to the multi-container setup, I am using the following docker-compose script:
version: '3.7'
services:
  shiny-app:
    image: my-shiny-image
    restart: always
    ports:
      - "80:3838"
My understanding is that the "80:3838" mapping is what "informs" Azure of the website port? This does not work when deployed, and I cannot retrieve any docker logs; I get the error message "Error retrieving logs". If I run the
az webapp log tail
command, it just hangs and then fails.
What am I doing wrong?

Dockerized NodeJS application is unable to invoke another dockerized SpringBoot API

I am running a SpringBoot application in a docker container and another VueJS application in another docker container using docker-compose.yml as follows:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke the SpringBoot REST API from my VueJS application using http://backend:8080/hello, and it is failing with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container and ping backend, it is able to resolve the hostname backend, and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another SpringBoot application to the docker-compose file, and from that application I am able to invoke http://backend:8080/hello using RestTemplate!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped script "start": "node server.js" and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()

app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)

router.get('/*', (req, res, next) => {
  res.sendFile(`${__dirname}/dist/index.html`)
})

app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why is it not able to resolve hostname from the application but can resolve from the terminal? Am I missing any docker or NodeJS configuration?
Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a docker container and access it from the browser, the HTML and JS files are downloaded to the browser machine, which is my host, and the REST API call goes out from the host machine. So from my host, the docker container hostname (backend) is not resolved.
The solution: instead of using the internal docker hostname and port (backend:8080), I need to use the host name and mapped port (localhost:28080) when making REST calls.
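In other words, the browser-side code should target the host-mapped port, along the lines of:
// This runs in the browser on the host machine, so it must use the
// published host port (28080), not the container-internal port (8080).
fetch('http://localhost:28080/hello')
  .then(res => res.text())
  .then(console.log)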
I would suggest:
docker ps to get the names/IDs of the running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get the backend container's IP address from the host.
Now put this IP in the front app, and it should be able to connect to your backend.
