Connecting a React frontend to a Node.js backend within Kubernetes

What is the recommended way to make API requests from a React frontend application to a Node.js backend application within Kubernetes? They are both hosted in the same container, with a Dockerfile like this:
COPY frontend/package.json .
COPY frontend/yarn.lock .
RUN npm install
COPY frontend/tsconfig.json .
COPY frontend/src ./src
COPY frontend/public/ ./public
WORKDIR /app/backend
COPY backend/package.json .
RUN npm install
COPY backend/index.js .
COPY backend/controllers ./controllers
COPY backend/models ./models
COPY backend/routes ./routes
EXPOSE 3000
EXPOSE 3001
WORKDIR /app/frontend
RUN npm run build
RUN npm install -g serve
WORKDIR /app
RUN mkdir newImages
RUN mkdir exportedData
COPY image/* newImages/
COPY startup.sh /app
CMD /app/startup.sh
startup.sh navigates to both the frontend and backend folders and starts both servers. Running each of these locally on my machine works fine, and API requests go through correctly. However, when I deploy to Kubernetes, API requests made from my browser all time out. I use the friendly name of the pod, so these requests are made to http://podname:PORT/api/XXX. If I exec into the pod and run the requests through curl, the API requests work fine. Similarly, if I expose the API port with a LoadBalancer and replace "podname" with a hardcoded external IP for the service, everything works fine. However, that is not a viable long-term solution, as external IPs can change easily if pods go down or are restarted.
Is there some easy way to route these requests properly through a proxy within the frontend, or will I need to set up an NGINX ingress controller to handle everything here? I assume the issue is that React is running directly within the browser I access the frontend from, causing requests to come from my computer, meaning that the IPs are not properly resolved (or even accessible, given that the friendly name resolves to the internal cluster IP).
Any help would be appreciated!

TL;DR Two Things
You cannot use pod names to connect to your services from outside the cluster. Pod names are DNS names that resolve to IP addresses internal to your cluster, and that subnet cannot be reached from the outside. And you SHOULDN'T use pod names to connect to your pods from the outside anyway; that's what services are for. Once your application grows to a larger size, hundreds of pods, it will be impossible to maintain pod names on the client side.
With a LoadBalancer service type, your design will work perfectly fine. You don't have to worry about IP addresses changing and no longer 'mapping to pods'. That is because your pods are fronted by a service, and the connectivity between this service and your pods stays consistent even if the LoadBalancer's external IP address changes. All that changes is how you connect to your Service object; the internal connectivity between a service and its pods is not your concern at all.
Also, please note that it is a very bad idea to design an application this way, with two separate logical components sitting in the same container. It's a purely monolithic design that is very hard to scale and maintain. Ideally you wouldn't even stop at separate containers, but would use separate pods for this. There will be no issues, since pods can communicate with each other transparently in Kubernetes and no extra configuration is needed.
Ideally, you would set up two different services, one per component, that connect with each other.
The bottom line is that there is no need to set up a proxy to connect your pods directly to your clients. Instead, simply use a service of type LoadBalancer that exposes an endpoint your clients can use, and stop worrying about changing external IPs. If you are using the application internally, you can fix a private IP address which can easily be used. If you are deploying this application at a larger scale, you really need to rethink your design and also invest in static public IP addresses.
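For example, here is a minimal sketch of what such Service objects could look like. The names, labels, and ports below are assumptions based on the EXPOSE lines in your Dockerfile; adjust them to match the labels and ports of your actual pods:
# Hypothetical manifests; the selectors must match the labels on your pods.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80          # port clients connect to
      targetPort: 3000  # port the React server listens on inside the pod
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer    # or ClusterIP behind an Ingress
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 3001  # port the Node API listens on inside the pod
Clients then talk to the services' stable endpoints, and Kubernetes keeps routing the traffic to whichever pods currently match the selectors, no matter how often the pods are restarted.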

Related

Run nginx as its own container separate from client app container

I'm trying to decide whether it's better to run nginx as its own container, without the client app in it, then run another container that is the client app, and a third container serving a backend API.
I found that trying to combine nginx with a client app's code in a Dockerfile, or on the same nginx container instance, is not very clean. You end up having to use multi-stage builds in Docker, or some other way to first dockerize your client app before you run nginx over it. So I'm thinking about splitting this out into 2 separate containers to keep it clean.
So I'll end up with 3 Dockerfiles and then use Docker Compose to run them:
nginx.Dockerfile (will use the nginx image from Docker Hub. Is it possible to tell nginx.conf to point to my client container's index.html via localhost resolution somehow, and not look in `/usr/share/nginx/html`?)
client.Dockerfile (ReactJS front-end, dist code from webpack)
backend.Dockerfile (server - RESTful or GraphQL Data API)
Goal:
I'd prefer not to mix my client app code into the nginx container; I'd like to decouple that from the nginx container, which is why I'm saying I want 3 containers here, to keep things decoupled from one another. I realize my nginx.conf will need to find my client app's index.html, but I want to do so without putting them in the same container.
Put another way, I'm wondering if there's a way to tell nginx.conf, via localhost, to find index.html over, say, port 80, which is the port my separate client container will be exposing that file from. So instead of having nginx find index.html in /usr/share/nginx/html (the same container as the client app code), can I tell it to look through localhost:80, which is served from my client app container? I want my nginx container to run independently of the client code. Or is that completely impossible, and I must copy my client app code into /usr/share/nginx/html, so that they must live together in the same image and my dreams of decoupling don't make sense?
My client app will still have an Express server, but just for serving static assets. My backend will be another Express API for serving data back up to the client.
This setup is fine, assuming running the extra Web server doesn't bother you. I'd probably also base the "client" image off of Nginx, but the only thing it would do is to serve static content from the dist directory that Webpack builds (so it would be a multi-stage build where the final stage doesn't include Node or any of the build tools).
In the "nginx" container, you'd use the proxy_pass directive to forward requests to one container or the other based on URL matching.
location = / {
    return 302 /ui/;             # redirect the root to the UI
}
location /ui/ {
    proxy_pass http://client/;   # host name matches a Compose service
}
location /api/ {
    proxy_pass http://backend/;  # host name matches a Compose service
}
The further corollary to this is that you don't actually need to host the "client" code in Docker. Webpack generates a bunch of static files, and can be configured to give them unique names. If you have some sort of non-Docker file-serving service (Amazon S3 will work here) you can just upload the results of the Webpack build to that service, and either proxy_pass or redirect to the UI in the same way as here.
(If you were running this application in Kubernetes, with an Nginx-based ingress controller, it would have exactly the layout you describe, and it would be totally routine.)
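For reference, a minimal docker-compose.yml sketch of this three-container layout might look like the following; the build contexts and the single published port are assumptions, not taken from the original post:
# Hypothetical compose file; service names match the proxy_pass targets above.
services:
  nginx:
    build: ./nginx        # image containing only nginx and the config above
    ports:
      - "80:80"           # the only port published to the host
  client:
    build: ./client       # multi-stage build whose final stage serves the Webpack dist/
  backend:
    build: ./backend      # Express (REST or GraphQL) data API
Only the nginx service is published to the host; client and backend are reached over the Compose network by their service names.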

Run two applications in the same AWS Cloud9 instance (front-end and API)

I'm trying to run json-server on port 8082 to serve some mock data to the front-end I'm developing using create-react-app (via yarn start to get the dev server on port 8080). Although both servers run correctly in their respective terminals, I can only access the first one I run from the AWS-provided URL, thus making it impossible to perform any kind of HTTP request from the React app to the json-server.
How should I go about this? Probably running two EC2 instances would work, but that seems awfully inefficient...

Unable to access a docker image running in Azure Web App for Containers

I've created a docker image and added it to the Azure Container Registry. I've then deployed it to a Web App but am having trouble accessing the site. When I browse to it, it loads for a long time (minutes) and then I get a 503 'Service Unavailable' error.
I thought it might be a port issue as my docker file has:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 15973
EXPOSE 44369
I have also added the following key/value pair to the app settings: WEBSITES_PORT: 15973, but it still doesn't return anything.
I thought I'd go back to basics and use the quickstart image from Web App for Containers (appsvc/sample-hello-world:latest), but that does the same; it can't actually start the app.
What could be the issue? Should it be as simple as pointing it at an image to get a website up and running? Is there another public template I can use as a basic image to see a container running?
Update
So I've tried running the image locally as suggested by Charles. When it starts it definitely runs, because if I don't pass environment variables (such as the connection string) then I get the related error message.
The issue running locally is that after it's started, I'm not sure how it's exposed. I've tried http://localhost:15973, http://localhost:44369 and http://localhost/, and tried passing -p 8080:80 and then http://localhost:8080, but they don't respond.
The response of starting the image is:
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
I think this is the problem with it being hosted in Azure as well: the requests aren't being routed through to the container.
What do I need to do to call the api in the image?
For your issue, I ran a test with Azure Web App for Containers, following the steps in Deploy a Docker/Go web app in Web App for Containers, and it worked well.
So I suggest you test your image on your local machine first and confirm it works there as you expect.

Unable to have Docker containers communicate with each other

I have two Docker containers, one running a React app (built using create-react-app) and another with a Node app API. I have a docker-compose file set up, and according to the documentation I should be able to use the names of the services to communicate between containers.
However, when I try to send a request to the /login endpoint of my API from the React app, I receive a net::ERR_NAME_NOT_RESOLVED error. I'm using Unirest to send the request.
I've done a bunch of digging around online and have come across a few things describing similar issues, but still haven't been able to find a solution. When I run cat /etc/resolv.conf (see this issue) in my React container, the container with my API doesn't show up, but Docker is still fairly new to me so I'm not sure if that's part of the issue. I've also tried using links and user-defined networks in my compose file, but to no avail.
I've included gists of my docker-compose.yml file as well as the code snippet of my request. Any help is much appreciated!
docker-compose.yml
Unirest request to /login
As discussed in the comments on the original post, this was a DNS issue. Configuring the DNS was a little too involved for the use case of this project, so I've solved my problem by using an environment variable to set the URL that is used to make calls to my API container, based on whether I'm running in a dev or prod environment:
(process.env.REACT_APP_URL_ROOT || '/app')
I set REACT_APP_URL_ROOT to my localhost address when running locally, and have an nginx container configured to proxy to /app when I build and deploy the React app.
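As a rough illustration only, a dev-environment compose fragment along these lines could inject that variable; the service names, ports, and build paths below are assumptions rather than the poster's actual files:
# Hypothetical docker-compose.yml fragment; names and ports are assumptions.
services:
  react-app:
    build: ./client
    environment:
      - REACT_APP_URL_ROOT=http://localhost:5000   # dev: call the API on its host-mapped port
    ports:
      - "3000:3000"
  api:
    build: ./api
    ports:
      - "5000:5000"   # published so the browser can reach it during development
In production the variable is left unset, so the code falls back to '/app' and the nginx container proxies those requests to the API.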

Docker app and database on different containers

I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the "curl" command I found out they're connected and I can access them using their relative IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's dockerfile using iptables? I want the container of my app to be able to access mongoDB using "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve what you want:
You can use --net=host, which will make the container use your host's network instead of the default bridge. Note that this can raise some security issues (because the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT.
As stated in the comments, you can create a script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}') and then expose it as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect your services.
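For the first option, a minimal docker-compose.yml sketch (the service name and build context are assumptions) might look like this:
# Hypothetical compose file illustrating the host-network approach.
# With network_mode: host the app shares the host's network stack, so a
# mongod or redis-server listening on the host is reachable at localhost:27017 etc.
services:
  app:
    build: .
    network_mode: "host"   # no port mappings are needed (or allowed) in this mode
Note that on Docker for Mac, host networking applies to the underlying Linux VM rather than to macOS itself, so this approach is mainly useful on the Ubuntu server.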
