Deploy a Docker image with a frontend + backend Node.js application on Kubernetes

I have a simple application with an Express.js backend API and a React.js frontend client.
I built a single container image containing both the frontend and the backend.
Application repo: https://github.com/vitorvr/list-users-kubernetes
Dockerfile:
FROM node:13
WORKDIR /usr/app/listusers
COPY . .
RUN yarn
RUN yarn client-install
RUN yarn client-build
EXPOSE 8080
CMD ["node", "server.js"]
server.js
const express = require('express');
const cors = require('cors');
const path = require('path');
const app = express();
const ip = process.env.IP || '0.0.0.0';
const port = process.env.PORT || 8080;
app.use(express.json());
app.use(cors());
app.use(express.static(path.join(__dirname, 'public')));
app.get('/users', (req, res) => {
res.json([
{ name: 'Jhon', id: 1 },
{ name: 'Ashe', id: 2 }
]);
});
app.listen(port, ip, () =>
console.log(`Server is running at http://${ip}:${port}`)
);
React call:
const api = axios.create({
baseURL: 'http://0.0.0.0:8080'
});
useEffect(() => {
async function loadUsers() {
const response = await api.get('/users');
if (response.data) {
setUsers(response.data);
}
}
loadUsers();
}, []);
To deploy and run this image in minikube I use the following commands:
kubectl run list-users-kubernetes --image=list-users-kubernetes:1.0 --image-pull-policy=Never
kubectl expose pod list-users-kubernetes --type=LoadBalancer --port=8080
minikube service list-users-kubernetes
The issue occurs when the frontend tries to access localhost:
I don't know where this needs to be fixed: whether I have to change something in React, adjust some settings in Kubernetes, or whether this is even the best practice for deploying small applications as a container image on Kubernetes.
Thanks in advance.

Your Kubernetes node, assuming it is running as a virtual machine on your local development machine, has an IP address assigned to it. Similarly, an IP address is assigned to the pod where your "list-users-kubernetes" service is running. You can view it by running the following command: kubectl get pod list-users-kubernetes; to view more information, add -o wide at the end of the command, e.g. kubectl get pod list-users-kubernetes -o wide.
Alternatively, you can do port forwarding to your localhost using kubectl port-forward pod/POD_NAME LOCAL_PORT:POD_PORT. Example below:
kubectl port-forward pod/list-users-kubernetes 8080:8080
Note: You should run this as a background process or in a different terminal tab, as the port forwarding is only available while the command is running.
I would recommend the second approach, as the pod's external IP can change across deployments, but mapping it to localhost allows you to run your app without making code changes.
Link to port forwarding documentation

Related

The user-provided container failed to start and listen on the port defined provided by the PORT=8080

I am very new to cloud run. I created a very simple express server as shown below with no Dockerfile as I decided to deploy from source.
import dotenv from 'dotenv';
dotenv.config();
import express from 'express';
const app = express();
const port = process.env.PORT || 8000;
app.get('/test', (req, res) => {
return res.json({ message: 'test' });
})
app.listen(port, async function () {
console.log(`Sample Service running on port ${port} in ${process.env.NODE_ENV} mode`);
});
Please note that I am deploying from source, hence no Dockerfile in my directory.
Here is the command I used to deploy
gcloud run deploy --source .
And then the error I keep getting back is:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. Logs for this revision might contain more information.
I have no idea where PORT 8080 is coming from as I am listening on PORT 8000 and not 8080.
How can this be resolved?
Thanks
The issue most likely is not the port but some other problem that is causing the container to fail at startup. I suggest the following:
Visit Cloud Run in the Google Cloud console and, for this specific service, check the logs from the Cloud Run service detail page. They should tell you the exact reason the container startup is failing. At times it could be a missing dependency, a missing command, etc.
As for port 8080 being used instead of 8000: Cloud Run injects a default port, which is 8080. Check out the container contract documentation. You can override it by specifying the --port parameter in the gcloud command, but that may not be necessary at this point.

localhost didn’t send any data on Docker and Nodejs app

I've searched for this answer on the StackOverflow community and none of the results helped, so I'm asking here.
I have a pretty simple Node.js app with a server.js file containing the following.
'use strict'
require('dotenv').config();
const app = require('./app/app');
const main = async () => {
try {
const server = await app.build({
logger: true,
shopify: './Shopify',
shopifyToken: process.env.SHOPIFY_TOKEN,
shopifyUrl: process.env.SHOPIFY_URL
});
await server.listen(process.env.PORT || 3000);
} catch (err) {
console.log(err)
process.exit(1)
}
}
main();
If I boot the server locally it works perfectly and I am able to see JSON in the web browser.
Log of the working server when running locally:
{"level":30,"time":1648676240097,"pid":40331,"hostname":"Erick-Macbook-Air.local","msg":"Server listening at http://127.0.0.1:3000"}
When I run my container, and I go to localhost:3000 I see a blank page with the error message:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
I have my Dockerfile like this:
FROM node:16
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["node", "server.js"]
This is how I run my container:
docker run -d -it --name proxyservice -p 3000:3000 proxyserver:1.0
And when I run it I see the container log working:
{"level":30,"time":1648758470430,"pid":1,"hostname":"03f5d00d762b","msg":"Server listening at http://127.0.0.1:3000"}
As you can see it boots up fine, but when going to localhost:3000 I see that error message. Any idea what I am missing/doing wrong?
Thanks!
Can you add 0.0.0.0 as the host argument of your listen call,
something like this?
server.listen(3000, '0.0.0.0');
Give it a try.
Since you want your service to be accessible from outside the container, you should bind to the address 0.0.0.0, not the loopback address 127.0.0.1 that your log shows.

CRA Socket.io request returns net::ERR_CONNECTION_TIMED_OUT

Hello. I've spent some time without luck trying to understand the problem here.
I've looked through every question on StackOverflow that seems to deal with the same problem, but nothing has worked so far.
I have a simple chat app built using Create React App and Socket.io (it runs fine on localhost), but when deployed to my Node server I'm receiving ERR_CONNECTION_TIMED_OUT errors and no response. The website itself runs fine, but calls to my Socket.io server error out.
I'm guessing this is down to my lack of knowledge with how Node and Socket.io want to work.
Some info:
server.js
const path = require("path");
const express = require("express");
const app = express();
const http = require("http").createServer(app);
const port = 8080;
http.listen(port, () => console.log(`http: Listening on port ${port}`));
const io = require("socket.io")(http, { cookie: false });
app.use(express.static(path.join(__dirname, "build")));
app.get("/*", function (req, res) {
res.sendFile(path.join(__dirname, "build", "index.html"));
});
io.on("connection", (socket) => {
console.log("New client connected");
// Emitting a new message. Will be consumed by the client
socket.on("messages", (data) => {
socket.broadcast.emit("messages", data);
});
//A special namespace "disconnect" for when a client disconnects
socket.on("disconnect", () => console.log("Client disconnected"));
});
client.js
....
const socket =
process.env.NODE_ENV === "development"
? io("http://localhost:4001")
: io("https://my-test-site:8080");
socket.on("messages", (msgs: string[]) => {
setMessages(msgs);
});
....
docker-compose.yml
version: "X.X"
services:
app:
image: "my-docker-image"
build:
context: .
dockerfile: Dockerfile
args:
DEPENDENCY: "my-deps"
ports:
- 8080:8080
Dockerfile
...
RUN yarn build
CMD ["node", "server.js"]
...
UPDATE: I got around this problem by making sure my main port was only used to run Express (with socket.io); in my setup that was port 8080. When running in the same Docker container, I don't think I needed to create and use the https version of Express's createServer.
This looks like you forgot to map the port of your Docker container. The EXPOSE statement in your Dockerfile only advertises to other Docker containers sharing a Docker network with yours that they can connect to port 4001 of your container.
The port mapping can be configured with the -p flag for docker run commands. In your case the full command would look something like this:
docker run -p 4001:4001 your_image_name
Also, do you have a signed certificate? Browsers will likely block the connection if they do not trust your server's certificate.
I got around this problem by keeping just one port open (in my case :8080). This port is what Express/socket.io uses (originally I had two different ports: one for my site, one for Express). Also, in my case, when running in the same Docker container, I didn't need the require("https").createServer(app) (https) version of the server, as http was sufficient.
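With the single-port setup described above, the client no longer needs environment-specific hosts; it can derive the socket URL from the page's own location. A sketch, where buildSocketUrl is a hypothetical helper (with socket.io served from the same origin, a bare io() call also works):

```javascript
// `loc` stands in for window.location; passing it as an argument keeps the
// helper testable outside a browser.
function buildSocketUrl(loc) {
  // Default ports when the URL omits one: 443 for https, 80 for http.
  const port = loc.port || (loc.protocol === 'https:' ? '443' : '80');
  return `${loc.protocol}//${loc.hostname}:${port}`;
}

console.log(buildSocketUrl({ protocol: 'https:', hostname: 'my-test-site', port: '8080' }));
```

This removes the hard-coded NODE_ENV branch from client.js, since dev and production both target whatever origin served the page.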

How to tell azure to run node server.js

I have an Angular 6 app running fine on Azure, but I need to add a server.js file for some server variables. I added a server.js file at the root, like this:
Server.js
const express = require('express');
const bodyParser = require('body-parser');
const path = require('path');
const http = require('http');
const app = express();
var router = express.Router();
const publicIp = require('public-ip');
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false}));
// Angular DIST output folder
app.use(express.static(path.join(__dirname, 'dist/projectname')));
app.get('/getip', (req, res) => {
var ipcheck = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
(async () => {
console.log(await publicIp.v4());
res.json({
ip: await publicIp.v4(),
ip2: req.headers['x-forwarded-for'],
ip3: req.connection.remoteAddress,
ip4: req.ip
});
})();
});
// Send all other requests to the Angular app
app.get('/*', (req, res) => {
res.sendFile(path.join(__dirname, 'dist/projectname/index.html'));
});
app.get('/', (req, res) => {
res.sendFile(path.join(__dirname, 'dist/prjectname/index.html'));
});
//Set Port
const port = process.env.PORT || '3000';
app.set('port', port);
const server = http.createServer(app);
server.listen(port, () => console.log(`Running on localhost:${port}`));
Then I go to cmd and run node server.js, and it works great on localhost.
But now I want to do the same on the Azure server. How do I deploy it,
or how do I tell Azure to always run node server.js?
I assume that you already know how to create an Azure WebApp in the Azure portal or with the Azure CLI. I see you developed your Node.js app in Visual Studio 2017 and want to deploy it to Azure using the same tool, so I suggest you refer to the NTVS wiki page Publish to Azure Website using Web Deploy to see how.
Next, here are four notes you should know before deploying.
Please confirm the Node.js version of your created WebApp; the default version is 0.10+. You can access the Kudu console of your webapp via the url https://<your webapp name>.scm.azurewebsites.net/DebugConsole and run node -v to get it. You can set it with the Azure CLI by running a command like az webapp config appsettings set --resource-group <your ResourceGroup> --name <app_name> --settings WEBSITE_NODE_DEFAULT_VERSION=10.14.1, or set it in the Azure portal. For more details about Node versions on Azure WebApp, please refer to my answer to the SO thread azure webapp webjob node version.
In your project, there must be a Web.config file that tells IIS on Azure WebApp how to start your Node app. The content of the Web.config file is the same as the sample in the Kudu wiki page Using a custom web.config for Node apps; you can copy it directly because your bootstrap JS file is also named server.js.
The listening port on Azure WebApp is randomly assigned by the environment; you can get it from the environment variable HTTP_PLATFORM_PORT, which is also exposed via process.env.PORT. So you do not need to change anything in your code for the port.
Your Node app will be deployed to the path home/site/wwwroot of the Azure WebApp, which can also be accessed via the Kudu console. Therefore, even without any tools, you can use the Kudu console to deploy your app; just make the file structure in wwwroot the same as your project, including all JS files and directories, node_modules, and Web.config.
Just build your project with ng build --prod, then open the project folder inside the dist folder in VS 2017 and add:
package.json
index.js
web.config
You can get these 3 files from here, and you can edit index.js as needed. I just needed to add this:
app.get('/getip', (req, res) => {
res.json({
ip: req.headers['x-forwarded-for'].split(":")[0]
});
});
app.get('*', (req, res) => {
res.sendFile(path.join(__dirname, 'index.html'));
});
This runs my app through Node.js.
When I call /getip it gives me the server IP; otherwise it returns the Angular index.html, which handles all routing.

Can't communicate with simple Docker Node.js web app [duplicate]

This question already has answers here:
Containerized Node server inaccessible with server.listen(port, '127.0.0.1')
(2 answers)
Closed 9 months ago.
I'm just trying to learn Node.js and Docker at the same time. I have a very simple Node.js app that listens on a port and returns a string. The Node app itself runs fine when running locally. I'm now trying to get it running in a Docker container but I can't seem to reach it.
Here's my Node app:
const http = require('http');
const hostname = '127.0.0.1';
const port = 3000;
var count = 0;
var server = http.createServer(function(req, res) {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end("Here's the current value: " + count);
console.log('Got a request: ', req.url);
count++;
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
My Dockerfile:
FROM node:latest
MAINTAINER Jason
ENV PORT=3000
COPY . /var/www
WORKDIR /var/www
EXPOSE $PORT
ENTRYPOINT ["node", "app.js"]
My build command:
docker build -t jason/node .
And my run command:
docker run -p 3000:3000 jason/node
The app.js file and Dockerfile live in the same directory where I'm running the commands. Doing a docker ps shows the app running but I just get a site cannot be reached error when navigating to 127.0.0.1:3000 in the browser. I've also confirmed that app.js was properly added to the image and I get the message "Server running at http://127.0.0.1:3000/" after running.
I think I'm missing something really simple, any ideas?
Omit the hostname or use '0.0.0.0' in the listen call. Make it server.listen(port, '0.0.0.0', () => { console.log('Server running..'); });
If you use Docker on Windows 7/8, you most probably have a docker-machine running; then you would need to access the app on something like 192.168.99.100, or whatever IP your docker-machine has.
To see if you are running a docker-machine, just issue the command
docker-machine ls