Error: The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable - node.js

I'm trying to deploy my API to Cloud Run, but I'm stuck with this error:
ERROR: (gcloud.run.deploy) The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. Logs for this revision might contain more information.
This is my Dockerfile
FROM node:lts
WORKDIR /src
COPY package.json package*.json ./
RUN npm install --omit=dev
COPY . .
CMD [ "npm", "execute" ]
These are my package.json scripts:
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "env-cmd -x -f ./src/config/env/.env.local nodemon ./src/index.js",
"deploy:dev": "env-cmd -x -f ./src/config/env/.env.dev ./deploy.sh",
"execute": "env-cmd -x -f ./src/config/env/.env.dev node ./src/index.js"
},
This is my index.js
const api = require("../src/config/config");
const port = process.env.PORT || 8080;
console.log("Port => ", port);
api.listen(port, () => {
  console.log(`Rest API started successfully`);
});
This is my config file (I'm working with Firebase):
const express = require("express");
// Config
const api = express();
api.use(express.json());
// Routes
api.use(require("../routes/start.routes.js"));
module.exports = api;
And I have a .env file with the PORT variable
PORT=8080
And these are the commands I execute
gcloud builds submit --tag gcr.io/$GOOGLE_PROJECT_ID/api --project=$GOOGLE_PROJECT_ID
gcloud run deploy api --image gcr.io/$GOOGLE_PROJECT_ID/api --port 8080 --platform managed --region us-central1 --allow-unauthenticated --project=$GOOGLE_PROJECT_ID
I have followed every tip from similar questions, but none has worked for me.
I checked the logs, and they only show the message I quoted at the beginning.
I tried to run my project locally with the Cloud Run Emulator; it does not work either, and I don't get enough info to figure out what's wrong. I don't even understand why the Docker container shows several ports except 8080, which is the one the app should listen on, before reporting that the deploy process failed.
I'm using Windows 11.
My API works fine locally if I run npm run start.

The error indicates that there is an issue with the port for incoming HTTP requests: the container is failing to listen on the expected port. The official Cloud Run container runtime contract documents the requirements a container must meet in order to operate properly.
In Node.js, your server should be defined as below:
const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log('Hello listening port', port);
});
You may check whether your container is listening on all network interfaces by binding to the address 0.0.0.0 and see if that works. You may also want to confirm that your container image is compiled for 64-bit Linux, as expected by the container runtime contract.
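As a minimal sketch (assuming the same Express app variable as above), binding explicitly to all interfaces looks like this:
const port = process.env.PORT || 8080;
// '0.0.0.0' binds to every network interface, not just loopback
app.listen(port, '0.0.0.0', () => {
  console.log('Listening on 0.0.0.0, port', port);
});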

You need to troubleshoot the issue:
1. Check the logs of your Cloud Run service using this command in Cloud Shell:
gcloud logging read 'resource.type="cloud_run_revision" AND resource.labels.service_name="api"' --project $GOOGLE_PROJECT_ID --limit 100
You can adjust the --limit flag to show more or fewer log entries.
2. Check the logs of your container. If there is any problem with the container itself, you will see it by running the image locally:
docker run -p 8080:8080 gcr.io/$GOOGLE_PROJECT_ID/api
That command starts your container and maps port 8080 inside the container to port 8080 on your local machine. Access your app at http://localhost:8080 and check the console output for any errors from the container.
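To mimic the Cloud Run environment more closely, you can also inject the PORT variable explicitly; a sketch using the same image name as above:
docker run -e PORT=8080 -p 8080:8080 gcr.io/$GOOGLE_PROJECT_ID/api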
3. Check your Cloud Run configuration.
4. Check your application code. In your index.js file, make sure the api.listen() call uses a port variable that is set to the value of the PORT environment variable:
const port = process.env.PORT || 8080;
api.listen(port, () => {
  console.log(`Rest API started successfully`);
});
5. Check your firewall settings. Sometimes a firewall blocks traffic to port 8080.
You can check the firewall rules in the Google Cloud Console.
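If you prefer the command line over the Console, a sketch of the equivalent check (assuming the same project variable as above):
gcloud compute firewall-rules list --project $GOOGLE_PROJECT_ID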
Following these steps you should find the issue; if you can't, reach out to the Cloud Run support team for further assistance.

Related

The user-provided container failed to start and listen on the port defined provided by the PORT=8080

I am very new to Cloud Run. I created a very simple Express server, as shown below, with no Dockerfile, as I decided to deploy from source.
import dotenv from 'dotenv';
dotenv.config();
import express from 'express';
const app = express();
const port = process.env.PORT || 8000;
app.get('/test', (req, res) => {
  return res.json({ message: 'test' });
})
app.listen(port, async function () {
  console.log(`Sample Service running on port ${port} in ${process.env.NODE_ENV} mode`);
});
Please note that I am deploying from source, hence no Dockerfile in my directory.
Here is the command I used to deploy
gcloud run deploy --source .
And then the error I keep getting back is:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. Logs for this revision might contain more information.
I have no idea where PORT 8080 is coming from as I am listening on PORT 8000 and not 8080.
How can this be resolved?
Thanks
The issue most likely is not with the port but with some other problem that is causing the container to fail at startup. I suggest the following:
Visit Cloud Run in the Google Cloud console and, for this specific service, go to the logs from the Cloud Run service detail page. It should tell you the exact reason why the container startup is failing. At times, it could be a dependency, a missing command, etc.
As for port 8080 being used instead of 8000 -- Cloud Run injects a default port, which is 8080. Check out the container contract documentation. You can override it by specifying the --port parameter in the gcloud command, but that may not be necessary at this point.
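For completeness, a sketch of that override (the service name here is illustrative):
gcloud run deploy my-service --source . --port 8000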

localhost didn’t send any data on Docker and Nodejs app

I've searched for this on the Stack Overflow community and none of the existing answers worked, so I'm asking here.
I have a pretty simple Node.js app that has a server.js file containing the following:
'use strict'
require('dotenv').config();
const app = require('./app/app');
const main = async () => {
  try {
    const server = await app.build({
      logger: true,
      shopify: './Shopify',
      shopifyToken: process.env.SHOPIFY_TOKEN,
      shopifyUrl: process.env.SHOPIFY_URL
    });
    await server.listen(process.env.PORT || 3000);
  } catch (err) {
    console.log(err)
    process.exit(1)
  }
}
main();
If I boot the server locally it works perfectly and I am able to see JSON in the web browser.
Log of the working server when running locally:
{"level":30,"time":1648676240097,"pid":40331,"hostname":"Erick-Macbook-Air.local","msg":"Server listening at http://127.0.0.1:3000"}
When I run my container and go to localhost:3000, I see a blank page with the error message:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
I have my Dockerfile like this:
FROM node:16
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["node", "server.js"]
This is how I run my container:
docker run -d -it --name proxyservice -p 3000:3000 proxyserver:1.0
And when I run it I see the container log working:
{"level":30,"time":1648758470430,"pid":1,"hostname":"03f5d00d762b","msg":"Server listening at http://127.0.0.1:3000"}
As you can see it boots up fine, but when going to localhost:3000 I see that error message. Any idea of what I am missing/doing wrong?
Thanks!
Can you add 0.0.0.0 in the host section of your service, something like this?
server.listen(3000, '0.0.0.0');
Give it a try.
Since you want your service to be accessible from outside the container, you should give the address as 0.0.0.0.
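Applied to the server.js above, a minimal sketch (assuming the listen(port, address) signature shown in the snippet):
// Bind to all interfaces so the port mapped by Docker is reachable from the host
await server.listen(process.env.PORT || 3000, '0.0.0.0');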

ENV variables within Cloud Run server are not accessible

So,
I am using Nuxt.
I am deploying to Google Cloud Run.
I am using the dotenv package with a .env file in development, and it works fine.
I use process.env.VARIABLE_NAME within my dev server on Nuxt and it works great. I make sure that the .env file is in .gitignore so that it doesn't get uploaded.
However, I then deploy my application using Google Cloud Run. I make sure I go to the Environment variables tab and add exactly the same variables that are in the .env file.
However, the variables are coming back as "UNDEFINED".
I have tried all sorts of ways of fixing this, but the only way that works is to upload my .env file with the project - which I do not wish to do, as Nuxt exposes this file in the client-side JS.
Has anyone come across this issue and know how to sort it out?
DOCKERFILE:
# base node image
FROM node:10
WORKDIR /user/src/app
ENV PORT 8080
ENV HOST 0.0.0.0
COPY package*.json ./
RUN npm install
# Copy local nuxt code to the container
COPY . .
# Build production app
RUN npm run build
# Start the service
CMD npm start
Kind Regards,
Josh
Finally I found a solution.
I was using Nuxt v2.11.x.
From version 2.13 onwards, Nuxt comes with Runtime Config, and this is what you need.
In your nuxt.config.js:
export default {
  publicRuntimeConfig: {
    BASE_URL: 'some'
  },
  privateRuntimeConfig: {
    TOKEN: 'some'
  }
}
Then you can access them like:
this.$config.BASE_URL || context.$config.TOKEN
More details are in the Nuxt runtime config documentation.
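To tie this back to the Cloud Run environment variables, the usual pattern (a sketch; the fallback values are illustrative) is to read process.env inside nuxt.config.js so the deployment-time values flow into the runtime config:
export default {
  publicRuntimeConfig: {
    // safe to expose to the client
    BASE_URL: process.env.BASE_URL || 'http://localhost:3000'
  },
  privateRuntimeConfig: {
    // server-only, never shipped to the browser
    TOKEN: process.env.TOKEN
  }
}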
Setting environment variable values does not have to be done in the Dockerfile. You can do it through the command line at deployment time.
For example, here is the Dockerfile that I used:
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm","start"]
This is the app.js file:
const express = require('express')
const app = express()
const port = 8080
app.get('/', (req, res) => {
  const envtest = process.env.ENV_TEST;
  res.json({ message: 'Hello world', envtest });
});
app.listen(port, () => console.log(`Example app listening on port ${port}`))
To deploy, use a command like this:
gcloud run deploy [SERVICE] --image gcr.io/[PROJECT-ID]/[IMAGE] --update-env-vars ENV_TEST=TESTVARIABLE
And the output will be like the following:
{"message":"Hello world","envtest":"TESTVARIABLE"}
You can check more details in the official documentation:
https://cloud.google.com/run/docs/configuring/environment-variables#command-line
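If the service is already deployed, the variables can also be changed without rebuilding the image; a sketch using the same placeholder names:
gcloud run services update [SERVICE] --update-env-vars ENV_TEST=TESTVARIABLE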

Deploying two Express servers in one application

I have two Express servers opened in a single Node.js application.
The use case is to open 2 separate sockets to run 2 simultaneous services.
The issue is that Google Cloud and Heroku do not provide access to multiple ports -> no separate ports = no separate sockets.
Is there a way to open 2 server ports in a single deployment (to get two separate sockets) on any cloud service (like Google Cloud/AWS/Heroku)?
Or is there any other workaround?
It's possible. In typical MERN stacks I've used concurrently to run both my React.js application and my Express.js application in parallel, on different ports and on the same deployment. A similar setup could be used in this scenario.
The following example has been tested using Google Cloud Platform's App Engine (Flexible environment)
Suppose you have a project structure like this:
~/project
|- Dockerfile
|- app.yaml
|
|- service1
|  |- index.js
|  |- package.json
|  |  ...
|
|- service2
|  |- index.js
|  |- package.json
|  |  ...
Assuming your Express applications were created using express-generator, both applications as generated listen on the same port, as denoted by this line in ./service1/bin/www and ./service2/bin/www: var port = normalizePort(process.env.PORT || '3000');. You can leave service1's bin/www as-is so that it keeps listening on process.env.PORT, which in this case would be 8080, and instead edit ./service2/bin/www so that it runs on any specific port other than 8080 or 3000 - say, port 3005, as sketched below.
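A minimal sketch of that edit in ./service2/bin/www (port 3005 is just the example value chosen above):
// ./service2/bin/www - pin service2 to its own port instead of process.env.PORT
var port = normalizePort('3005');
app.set('port', port);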
Next, you would have to edit service1's package.json so that npm start runs both applications on their respective ports, concurrently. This can be done like so:
"scripts": {
  "start": "concurrently \"node ./bin/www\" \"node ../service2/bin/www\""
}
Unfortunately, concurrently does not come packaged by default, so this command would fail with an unrecognized-command error. As such, you can use a Dockerfile whose purpose is to specify the runtime image as well as the build steps for deployment (npm install both applications and npm install -g concurrently, followed by npm start):
FROM gcr.io/google_appengine/nodejs
COPY . /app/
RUN cd ./service1 && npm install
RUN cd ./service2 && npm install
RUN npm install -g concurrently
CMD cd ./service1 && npm start
Finally, provide an app.yaml file which specifies the runtime used as well as the environment:
runtime: custom
env: flex
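With these files in place, the deployment itself would be (assuming gcloud is already configured for your project):
gcloud app deploy app.yaml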

process.env.VARIABLE is undefined in webpack/docker/heroku deployment

I am having some issues getting my app to read the Heroku environment variables. The application itself is a Node.js React/Redux web application run with webpack-dev-server, deployed in a Docker container hosted on Heroku.
I'm setting the variables in a config.js file, which are to be used throughout the app:
module.exports = {
  baseUri: process.env.BASE_URI || "http://localhost:8000",
  baseApiUri: process.env.BASE_API_URI || "http://localhost:8080",
  port: process.env.PORT || 8000
}
In my index.tsx file I am trying to make a simple GET request to the baseApiUri, but I am finding that it defaults to localhost even though I have these variables set in Heroku.
I am getting these results both:
locally, when I run:
set BASE_URI='http://test.com/'&&set BASE_API_URI='http://testapi.com/'&&set PORT=5000&&webpack-dev-server --host 0.0.0.0
and remotely, when I run:
webpack-dev-server --host 0.0.0.0
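One point worth noting about the setup above: code bundled by webpack for the browser does not read process.env at runtime; values are normally baked in at build time, for example with webpack's DefinePlugin. A sketch under that assumption, mirroring the variable names from config.js:
// webpack.config.js (excerpt)
const webpack = require('webpack');

module.exports = {
  // ...rest of the webpack configuration
  plugins: [
    new webpack.DefinePlugin({
      // Replaces these expressions with literal strings at build time
      'process.env.BASE_URI': JSON.stringify(process.env.BASE_URI),
      'process.env.BASE_API_URI': JSON.stringify(process.env.BASE_API_URI)
    })
  ]
};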
