OpenShift Layer4 connection, App Won't Start - Node.js

I recently pushed a set of Node.js changes to an app on OpenShift. The app runs fine on my local machine and is pretty close to the vanilla example deployed by OpenShift. The OpenShift haproxy log has this final line:
[fbaradar-hydrasale.rhcloud.com logs]> [WARNING] 169/002631 (93881) :
Server express/local-gear is DOWN, reason: Layer4 connection problem,
info: "Connection refused", check duration: 0ms. 0 active and 0 backup
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
The nodejs.log has this final line, with no error messages before it: DEBUG: Program node server.js exited with code 8
I have searched high and low and can't seem to find anyone with a similar problem or hints at how to resolve this issue. Obviously, the above results in a 503 Service Unavailable when trying to access the app over the web.

Looking at the question, I think this is happening because you don't have any route configured at root '/'. OpenShift uses HAProxy as the load balancer in scalable applications, and HAProxy pings the root '/' URL as a health check to determine whether your application is up or down. Your application has nothing configured at the root URL, so when HAProxy pings '/' it gets a 503, hence your application behaves like this. There are two ways you can fix this problem:
1. Create an index.html and push it to your OpenShift application, or serve something at '/' from the app itself (see the sketch below).
2. The better solution is to edit the HAProxy configuration file. SSH into the main gear using the rhc ssh --app command, change directory to haproxy/conf, update option httpchk GET / to option httpchk GET /valid_location, and finally restart HAProxy using rhc cartridge-restart --cartridge haproxy. You can check the status of your gears at http://myapp-myusername.rhcloud.com/haproxy-status.
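To illustrate option 1 on the app side, here is a minimal sketch of a root route, assuming the app uses Express (the express/local-gear name in the log suggests it does); the environment variables are the standard OpenShift Node.js cartridge ones:

const express = require('express');
const app = express();

// HAProxy health-checks GET /; answering with a 200 marks the gear as up
app.get('/', (req, res) => {
  res.send('ok');
});

// the OpenShift Node.js cartridge supplies the bind address and port
const port = process.env.OPENSHIFT_NODEJS_PORT || 8080;
const ip = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';
app.listen(port, ip, () => console.log('listening on ' + ip + ':' + port));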
Hope this will help you.

Thanks for the response! However, I just discovered what the issue was by rolling back and making one change at a time. There was a buried npm dependency required down in a subfile. This dependency had not been added to the package.json file, so OpenShift was failing to rebuild the node modules appropriately. Once the dependency was added, everything started to run again. The log errors were a bit of a red herring and simply a side effect of not having a good application to start!
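For anyone who hits the same thing, the fix is simply declaring the module in package.json so OpenShift installs it during the build. A hypothetical example; "some-buried-module" stands in for the actual missing dependency:

{
  "name": "my-app",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.0.0",
    "some-buried-module": "^1.2.3"
  }
}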

Related

node-opcua-samples - simple_client.js does not connect to simple_server.js

I'm trying to gain deeper knowledge of using OPC, so I installed Node.js and then node-opcua-samples via npm.
Afterwards I went to PowerShell, changed the working directory to the bin directory of node-opcua-samples, and started simple_server.js with
node simple_server.js
The server is starting and prints
server now waiting for connections. CTRL+C to stop
to the console. Then it prints
contacting discovery server backoff opc.tcp://localhost:4840 attempt # 0 retrying in 2 seconds
server registration is still pending (is Local Discovery Server up and running ?)
From the output I expect to be able to connect to the running server, even though it shows the warning concerning the discovery server. Am I right?
The next step is to start simple_client.js in a second PowerShell by changing the working directory to the bin directory of node-opcua-samples and then using
node simple_client.js <endpointUrl printed by server>
At this point I'm expecting the client to connect to the started server and complete the built-in test cases. But the client seemingly is not able to connect to the server and prints
backoff attempt # 0 retrying in 2 seconds
Following the hint given inside of simple_client.js and running simple_client_ts.ts with ts-node results in the same behavior.
So where is my mistake?
Any hints or questions will be appreciated.
Regards
Gregor
System details for reproduction:
Windows 10
Node Version 12.13.0
node-opcua-samples Version 2.5.7
OK, I solved the problem.
Instead of using the endpointUrl printed by the server, I had to start the client with the endpoint opc.tcp://localhost:26543. That port is the default port set in simple_server.js.
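Following the invocation pattern from the question, that is:

node simple_client.js opc.tcp://localhost:26543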
The warning about the discovery server vanished after changing registerServerMethod in simple_server.js from RegisterServerMethod.LDS to RegisterServerMethod.HIDDEN.
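For illustration, the relevant part of the server construction looks roughly like this (a sketch based on the node-opcua API, not a verbatim copy of simple_server.js):

const { OPCUAServer, RegisterServerMethod } = require("node-opcua");

const server = new OPCUAServer({
    port: 26543, // the default port used by simple_server.js
    // HIDDEN skips registration with a Local Discovery Server, so the
    // "server registration is still pending" warning goes away
    registerServerMethod: RegisterServerMethod.HIDDEN // was RegisterServerMethod.LDS
});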
Best regards
Gregor

ExpressJS Server Goes Offline Every Night - 502 Bad Gateway

I have a website with Nginx installed as a reverse proxy for an ExpressJS server (proxying to port 3001). It uses Node and ReactJS for my frontend application.
This is simply a testing website currently, and isn't known or used by any users. I have this installed on a Digital Ocean Droplet with Ubuntu.
Every morning when I wake up, I load my website and see 502 Bad Gateway. The problem is, I don't know how to find out how this happened. I have PM2 installed, which should automatically restart my ExpressJS server, but it hasn't done so, and when I run pm2 list my application still shows as online.
When I run pm2 logs, I get an error (I am running this as an Administrator).
So I'll run pm2 restart all to restart the app, but then I don't see any crash information. On the occasion in question there were a couple of unusual requests, /robots.txt, /sitemap.xml and /.well-known/security.txt, but nothing indicating a crash.
My Nginx error.log file doesn't give me anything I can interpret either.
There is, however, something obscure within my access.log ([09/Oct/2018:06:33:19 +0000]), but I have no idea what it means.
If I run curl localhost:3001 whilst the server is offline, I will receive a connection error message. This works fine after I run pm2 restart all.
I'm completely stuck with this and even the smallest bit of help would be appreciated greatly, even if it's just to tell me I'm barking up the wrong tree completely and need to look elsewhere - thank you.
I think you should check this GitHub thread; it seems like it could help you.
Basically, after a few hours, a Node.js server stops functioning, and poor nginx cannot forward its requests, as the service listening on the forwarded port is dead, so it returns a 502 error.
In that case it was all due to a memory leak that led to massive garbage collection and then to the server crashing. Check your memory consumption; you could have some surprises. And try to debug your app code, one piece (dependency) at a time.
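One simple way to watch for that (a sketch; the interval and output format are arbitrary choices) is to log the process's memory use periodically and see whether the heap keeps climbing without ever coming back down:

// log memory usage once a minute; a heap that only grows is the classic leak signature
setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  console.log('rss=' + (rss / 1048576).toFixed(1) + 'MB heap=' +
    (heapUsed / 1048576).toFixed(1) + '/' + (heapTotal / 1048576).toFixed(1) + 'MB');
}, 60 * 1000);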
Updated answer:
So, I will add another branch to my answer, as it seems it has not helped you so far.
You could try to get rid of pm2 and use systemd to manage your app's life cycle.
Create a service file
sudo vim /lib/systemd/system/appname.service
This is a simple file I used myself for a random ExpressJS app:
[Unit]
Description=YourApp Site Server
[Service]
# run the app through node explicitly; pointing ExecStart at index.js alone
# would require a shebang line and the execute bit on the file
ExecStart=/usr/bin/node /home/user/appname/index.js
Restart=always
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/home/user/appname
[Install]
WantedBy=multi-user.target
Note that, thanks to Restart=always, systemd will try to restart the app if it fails somehow.
Manage it with systemd
Register the new service with:
sudo systemctl daemon-reload
Now start your app from systemd with:
sudo systemctl start appname
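Since the unit file has an [Install] section (WantedBy=multi-user.target), you can also enable it so the app starts on boot:

sudo systemctl enable appname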
From now on you should be able to manage your app's life cycle with the usual systemd commands.
You could also send stdout and stderr to syslog to understand what your app is doing:
StandardOutput=syslog
StandardError=syslog
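With those two lines in the [Service] section, the output lands in syslog/the journal, and you can follow it with:

sudo journalctl -u appname -f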
Hope it helps more
You cannot say exactly when Node.js will crash, do a big GC, or stall for some other reason.
The easiest way to cover such issues is to run a health check and restart the app when the check fails. This should not be an issue when working with a cluster.
Have a look at a health-check module implementation; you may try to use one, or write some simple shell script to do the check.
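A minimal sketch of such a check in Node.js (the URL, timeout, and the pm2 restart all recovery command are assumptions for illustration; run it from cron or a systemd timer):

// health-check.js: poll the app and restart it if it stops responding
const http = require("http");
const { exec } = require("child_process");

function restart(reason) {
  console.error("app is down (" + reason + "), restarting");
  exec("pm2 restart all", (err) => process.exit(err ? 1 : 0));
}

const req = http.get("http://localhost:3001/", { timeout: 5000 }, (res) => {
  res.resume(); // drain the response so the socket is released
  if (res.statusCode === 200) {
    console.log("app is up");
  } else {
    restart("unexpected status " + res.statusCode);
  }
});

req.on("timeout", () => req.destroy(new Error("timed out")));
req.on("error", (err) => restart(err.message));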

How to increase deployment timeout for an OpenShift Node.js app?

When I do git push and run rhc tail <appname> in another terminal, I can see that my server still has not started, but I get Application '<appname>' failed to start (port 8080 not available) in the git push output.
When I used a non-scaling app this wasn't a problem – everything worked fine – but now, with a scalable app, I have to manually restart haproxy after my server has started (which I can see via rhc tail).
I know that a solution exists, at least for JBoss applications. But can I use it for my case, or what else should I use?
Thanks for your attention.

Deploying Sails App on Openshift

Can anyone help me with deploying a SailsJS app on OpenShift?
I followed How you get Sail.js running on Openshift
After making my changes and pushing it to the repo I get the status as successful but when I go to my link it says
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Apache/2.2.15 (Red Hat) Server at kittylogintest-kittygame.rhcloud.com Port 80
I finally managed to deploy my Sails application on OpenShift.
Most of the steps in the link mentioned in my question will help.
After following the steps I was getting an error stating permission denied for grunt-cli.
I solved it by SSHing into my app and then running npm install in app-root/runtime/repo.
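In terms of commands, that amounts to roughly the following (the app name is a placeholder; the path is the standard OpenShift v2 gear layout):

rhc ssh <appname>
cd app-root/runtime/repo
npm install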
Hope this helps if someone stumbles on the same stair.

What does "app started" in cloudfoundry mean

I read the Cloud Foundry documents on how to push/start an application:
Starting Applications
Deploy an Application
But neither of them tells me when the cf push or cf start command exits.
Starting app locally
For example, I have a Node.js application that can be started with npm start, which keeps blocking when run on my local machine:
$ npm start
> my-app@0.0.1 start /example-deployment-cf
> node index.js
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
I never know when Node.js starts listening on the TCP port unless I print something out in my application with some kind of event handler for a "start" event.
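For example, with a plain Node.js http server the only reliable signal is the listen callback (equivalently the 'listening' event); the port here is illustrative:

const http = require("http");

const server = http.createServer((req, res) => res.end("ok"));
server.listen(3000, () => {
  // this is the moment the TCP port is actually open
  console.log("server is now listening on port 3000");
});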
Pushing app to Cloud Foundry
$ cf push
...
1 of 1 instances running
App started
Showing health and status for app shawnzhu-test-node in org me@example.com / space dev as me@example.com...
OK
requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: shawnzhu-test-node.pivotal.io
state since cpu memory disk
#0 running 2014-05-12 02:32:05 PM 0.0% 56.1M of 512M 26.4M of 1G
It just simply (and somewhat magically) says "App started", but doesn't tell when (or why).
In the real world there will be lots of work inside npm start, like page generation, and the app may not be ready by the time the cf push command exits. So I still need event handlers in my code to print out log entries like "finished generating pages".
So my simple question is: what is the actual meaning of "App started" in Cloud Foundry (short of reading its code)?
If a route is assigned to the pushed app (like shawnzhu-test-node.pivotal.io in your case), then Cloud Foundry will check that the URL returns a 200 and report that it is running as soon as it does.
You can also do "cf logs shawnzhu-test-node" to see more detailed output from the app startup process.
