Unable to access a Docker image running in Azure Web App for Containers

I've created a Docker image and added it to the Azure Container Registry. I've then deployed it to a Web App but am having trouble accessing the site. When I browse to it, the page loads for a long time (minutes) and then I get a 503 'Service Unavailable' error.
I thought it might be a port issue, as my Dockerfile has:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 15973
EXPOSE 44369
I've also added the key/value pair WEBSITES_PORT = 15973 to the app settings, but the site still doesn't return anything.
I thought I'd go back to basics and use the quickstart image from Web App for Containers (appsvc/sample-hello-world:latest), but that does the same thing; the app never actually starts.
What could be the issue? Should it be as simple as pointing the Web App at an image to get a website up and running? Is there another public image I can use as a basic test to see a container running?
Update
So I've tried running the image locally as suggested by @Charles. When it starts, it definitely runs, because if I don't pass environment variables (such as the connection string) I get the related error message.
The issue running locally is that after it has started I'm not sure how it's exposed. I've tried http://localhost:15973, http://localhost:44369 and http://localhost/, and I've tried passing -p 8080:80 and then browsing http://localhost:8080, but none of them respond.
The response of starting the image is:
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
I think this is the problem with it being hosted in Azure as well: the requests aren't being routed through to the container.
What do I need to do to call the api in the image?
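For reference, since the startup log above shows the app listening on container port 80, the local check would be to publish a host port against container port 80 and confirm the mapping; the image and container names below are placeholders:

docker run --rm -d --name myapp-test -p 8080:80 myregistry.azurecr.io/myapp:latest
docker port myapp-test          # should print: 80/tcp -> 0.0.0.0:8080
curl -i http://localhost:8080/  # or whatever route the API exposes
docker logs myapp-test          # confirm the request actually reached the app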

For your issue, I tried a test with Azure Web App for Containers, following the steps in Deploy a Docker/Go web app in Web App for Containers, and it works well.
So I suggest you test your image on your local machine first and confirm it behaves as you expect there before deploying it to Azure.
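Building on that: once the image responds locally on container port 80, the Azure side then needs WEBSITES_PORT to match that port (80) rather than the ports in the EXPOSE lines. A hedged sketch using the Azure CLI, with placeholder resource group and app names:

az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --settings WEBSITES_PORT=80
az webapp restart --resource-group myResourceGroup --name myWebApp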

Related

Connecting React Frontend to Node JS backend within Kubernetes

What is the recommended way to make API requests from a React frontend application to a Node.js backend application within Kubernetes? They are both hosted in the same container, with a Dockerfile as follows:
# Base image and initial working directory assumed (not shown in the original post)
FROM node:16
WORKDIR /app/frontend
# Install frontend dependencies and copy the frontend source
COPY frontend/package.json .
COPY frontend/yarn.lock .
RUN npm install
COPY frontend/tsconfig.json .
COPY frontend/src ./src
COPY frontend/public/ ./public
# Install backend dependencies and copy the backend source
WORKDIR /app/backend
COPY backend/package.json .
RUN npm install
COPY backend/index.js .
COPY backend/controllers ./controllers
COPY backend/models ./models
COPY backend/routes ./routes
# Ports for the frontend and backend
EXPOSE 3000
EXPOSE 3001
# Build the frontend and install a static file server
WORKDIR /app/frontend
RUN npm run build
RUN npm install -g serve
# Data directories, seed images and the startup script
WORKDIR /app
RUN mkdir newImages
RUN mkdir exportedData
COPY image/* newImages/
COPY startup.sh /app
CMD /app/startup.sh
startup.sh changes into the frontend and backend folders and starts both servers. Running each of these locally on my machine works fine, and API requests go through correctly. However, when I deploy to Kubernetes, API requests made from my browser all time out. I'm using the friendly name of the pod, so these requests go to http://podname:PORT/api/XXX. If I exec into the pod and run the requests through curl, the API requests work fine. Similarly, if I expose the API port with a LoadBalancer and replace "podname" with a hardcoded external IP for the service, everything works fine. However, that is not a viable long-term solution, as external IPs can change easily if pods go down or are restarted.
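For context, a rough sketch of what a startup.sh along these lines might contain; the commands and ports are assumptions based on the Dockerfile above, not the actual script:

#!/bin/sh
# Start the Node backend in the background (assuming it listens on 3001)
cd /app/backend && node index.js &
# Serve the built frontend in the foreground so the container keeps running (assuming port 3000)
cd /app/frontend && exec serve -s build -l 3000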
Is there some easy way to route these requests properly through a proxy within the frontend, or will I need to set up an NGINX controller to handle everything here? I assume the issue is that React runs directly in the browser I access the frontend from, so the requests come from my computer, meaning the names are not properly resolved (or even reachable, given that the friendly name resolves to the internal cluster IP).
Any help would be appreciated!
TL;DR Two Things
You cannot use pod names to connect to services from outside the cluster. Pod names are DNS names that resolve to IP addresses internal to your cluster, and that subnet cannot be reached from the outside. You also shouldn't use pod names to reach your pods from the outside; that is what Services are for. Once your application grows to hundreds of pods, maintaining pod names on the client side becomes impossible.
With a LoadBalancer Service type, your design will work fine. You no longer have to worry about IP addresses changing and no longer 'mapping to pods', because your pods are fronted by a Service, and the connectivity between that Service and your pods stays consistent even if the LoadBalancer's IP address changes. All that changes is how you connect to the Service object; the internal connectivity between a Service and its pods is not your concern at all.
Also, please note that it is a bad idea to design an application this way, with two separate logical components in the same container. It is a purely monolithic design and is very hard to scale and maintain. You shouldn't even stop at separate containers; use separate pods for the frontend and backend. That causes no issues, since pods can communicate with each other transparently in Kubernetes with no extra configuration.
Ideally, you would set up two different Services, one for each component, and have them talk to each other through those Services.
The bottom line is that there is no need to set up a proxy to connect your clients directly to your pods. Instead, simply use a Service of type LoadBalancer that exposes an endpoint your clients can use, and stop worrying about changing external IPs. If you are using the application internally, you can pin a private IP address that can easily be used. If you are deploying this application at a larger scale, you really need to rethink your design and invest in static public IP addresses.
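To make that concrete, a minimal sketch of such a LoadBalancer Service, assuming the backend pods carry the label app: backend and listen on container port 3001 (both are assumptions to adapt to your manifests):

apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: LoadBalancer
  selector:
    app: backend        # must match the labels on the backend pods
  ports:
    - port: 80          # port clients use on the load balancer endpoint
      targetPort: 3001  # container port the backend listens on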

AWS ALB returns an empty page when used with ECS

I am working on containerizing a React app and provisioning it on an ECS cluster behind an ALB.
Everything looks fine, but whenever I access the ALB DNS name in the browser it returns an empty page containing only the two words "react app".
I checked the health checks; all the backend instances are healthy and returning a 200 status code.
When I use the EC2 instances' IP addresses directly in the browser, the page loads completely.
It seems to be an issue with the ALB. Why isn't the complete page loading?
It is most likely a problem with how you have configured the Dockerfile.
You may have left a default route in the nginx configuration that serves a page displaying 'react app'.
To debug, try running the image built from that Dockerfile locally using Docker Desktop and see whether you can browse the React app.
If it works fine locally, it will work in ECS in the same way.
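A quick way to run that local check, assuming the image builds from the Dockerfile in the repository root and nginx listens on port 80 inside the container (names are placeholders):

docker build -t my-react-app .
docker run --rm -p 8080:80 my-react-app
# then browse http://localhost:8080 and compare it with what the ALB returns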

HTTP requests between two Docker services work fine but I don't know why [duplicate]

I have a ReactJS project with its own Dockerfile, exposing port 3000:3000.
I also have a PHP project with its own Dockerfile, exposing port 80:80. The PHP app also has containers for MySQL, Redis and Nginx.
For the PHP app, I have a docker-compose file that creates a network (my-net) for PHP, Nginx, MySQL and Redis to communicate on. However, I now want the ReactJS (which is in a separate project) to be able to communicate with the PHP app.
I added a docker-compose file to the React project, and added it to the network from the PHP project my-net and declared it as external so that it doesn't try to create it.
This seems to work: from the ReactJS container I can ping app (the name of my backend service) and it resolves properly. However, from the ReactJS code, if I use something like axios to try to hit the backend API, it can't resolve app or http://app or any variation. It can, however, reach the backend if I substitute the underlying IP address into the axios call.
So there seems to be some issue with hostname resolution, and presumably it is on the axios/JavaScript end. Is there something I'm missing, or a reason this isn't working?
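For context, the cross-project wiring described above would typically look something like this in the ReactJS project's docker-compose file; the service name and ports are assumptions, while my-net and app come from the question:

services:
  react:
    build: .
    ports:
      - "3000:3000"
    networks:
      - my-net          # join the network created by the PHP project

networks:
  my-net:
    external: true      # don't create it, it already exists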
When the JavaScript runs in a browser (outside of Docker), you cannot use app, because that name is only resolvable inside the Docker network (via the embedded DNS server).
To access your PHP server from outside, use localhost and the exposed port (80) instead.
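In other words, where the request originates determines which names resolve; a quick way to see the difference, assuming the PHP app publishes port 80:80 as described:

# Inside the ReactJS container, Docker's embedded DNS resolves the service name:
curl http://app/
# On the host machine, where the browser actually runs, use the published port:
curl http://localhost:80/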

Persistent CORS error with React/Node running in docker-compose on Amazon EC2 instance

I am trying to get a streaming service running from a modified version of an open source repo https://github.com/nabendu82/streams.
I have a frontend client in React, an RTMP server for the stream, and a backend API, with a docker-compose file to host them all together. If I run docker-compose up on my local computer, everything works perfectly. I can visit http://localhost:3000/matches/view and see two stream windows that aren't loaded until I open the streaming software OBS (Settings -> Stream -> Server: rtmp://localhost/live, Stream Key: 7), at which point the right stream window starts.
To host this repo on the internet, I've created a basic EC2 instance on AWS (http://13.54.200.18:3000/matches/view), installed docker-compose and copied all the repo files up to it.
However, when running on the AWS box the stream does not load, and the console error is always the spectacularly unhelpful:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://server:3002/streams/6. (Reason: CORS request did not succeed).
So for some reason CORS is preventing the React frontend from reading the server backend while it is hosted on AWS.
Here is the catch. I can actually get the streaming on the AWS-hosted site to work, but only by running docker-compose up on my LOCAL computer at the same time. For some unknown reason, the AWS-hosted version is able to pick up on the backend server running on my local machine (rather than the one running alongside it in docker-compose on AWS) and connect that way. I can even stream to the website via OBS at rtmp://13.54.200.18/live and everything works. But it only works while my local computer is running the docker-compose infrastructure (and only if I use calls to 'localhost' instead of the docker-compose service 'server'); if anyone else tries to see the stream on the live site, they just get "Loading..." perpetually and the CORS error.
Why is the AWS hosted code not looking at its own docker-compose file and its own server:3002 service? For the rest of the world, and for me if I'm not running a local server, it throws a CORS error. For just my local computer, and only if I'm running a local server and making requests to 'localhost:3002', it works perfectly.
If I ssh on to the AWS image, then docker-compose run client curl localhost:3002/streams will fail, but docker-compose run client curl server:3002/streams will give me back the correct JSON data. From everything I understand about docker compose, my services should be able to access each other and it appears they can, everything works great locally, and the services can talk to each other on the AWS box too, but just somehow this CORS error appears out of nowhere only on the AWS hosted version.
I've tried everything under the sun I can think of. I was originally using json-server, but I thought that might be the issue (as it has to specifically bind to -H 0.0.0.0), so I wrote my own Express server using the cors package to replace it and there has been no change. I've tried every configuration of docker-compose variables I can imagine. As far as I can understand I've done everything right, but somehow the AWS box wants to talk to my own computer's localhost aka "server service" aka 0.0.0.0 instead of its own. What is going on?
Repository here: https://github.com/JeremyEllingham/streams
Any help much appreciated.
I figured out how to get it working by posting directly to the Linux box's IP address in production, instead of trying to get it working with "localhost" or the Docker service names. I'm somewhat disappointed that docker-compose doesn't work quite the way I thought it did, but it's perfectly functional to just conditionally alter the base URL.
See also this answer: React app (in a docker container) cannot access API (in a docker container) on AWS EC2
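One hedged way to express that conditional base URL is to hand it to the client from docker-compose; the variable name and API port here are assumptions (the address is the EC2 instance from the question), and it only works if the client reads the variable when it starts or builds:

services:
  client:
    build: ./client
    environment:
      # On the EC2 box: an address the viewer's browser can actually reach
      - REACT_APP_API_BASE=http://13.54.200.18:3002
      # Locally this would be overridden with http://localhost:3002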

Unable to have Docker containers communicate with each other

I have two Docker containers, one running a React app (built using create-react-app) and another with a Node app API. I have a docker-compose file set up, and according to the documentation I should be able to use the names of the services to communicate between containers.
However, when I try to send a request to the /login endpoint of my API from the React app, I receive a net::ERR_NAME_NOT_RESOLVED error. I'm using Unirest to send the request.
I've done a bunch of digging around online and have come across a few posts describing similar issues, but still haven't been able to find a solution. When I run cat /etc/resolv.conf (see this issue) in my React container, the container with my API doesn't show up, but Docker is still fairly new to me, so I'm not sure whether that's part of the issue. I've also tried using links and user-defined networks in my compose file, but to no avail.
I've included gists of my docker-compose.yml file as well as the code snippet of my request. Any help is much appreciated!
docker-compose.yml
Unirest request to /login
As discussed in the comments on the original post, this was a DNS issue. Configuring the DNS was a little too involved for the use case of this project, so I solved my problem by using an environment variable to set the URL that is used to make calls to my API container, based on whether I'm running in a dev or prod environment:
(process.env.REACT_APP_URL_ROOT || '/app')
I set REACT_APP_URL_ROOT to my localhost address when running locally, and have an nginx container configured to proxy to /app when I build and deploy the React app.
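A rough sketch of that split in docker-compose terms; the service names and ports are placeholders, and note that create-react-app only reads REACT_APP_* variables when the dev server starts or at build time:

services:
  react:
    build: ./client
    environment:
      # Local development: call the API via the port published on the host
      - REACT_APP_URL_ROOT=http://localhost:5000
  api:
    build: ./api
    ports:
      - "5000:5000"
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # In the deployed setup this config proxies /app to the api service,
      # so the client code falls back to '/app'
      - ./nginx.conf:/etc/nginx/conf.d/default.conf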
