active or backup DOWN + OpenShift + Node.js

I deployed my Node.js application on OpenShift. It starts and works fine: when I check the app status with ctl_app, it shows the app as started and running.
But in haproxy-status it never comes up; it shows "active or backup DOWN", so I can't access my app via its URL.
I checked the logs of the Node application and it is running perfectly.
Please help me. Does anyone know how I can solve this?
Thanks.

I found the solution: my app has no route for the root path (/), and HAProxy checks the app's status with a GET on /.
Since that request returns nothing, HAProxy assumes the app still hasn't come up.

Related

What could be the reason for the status code 503 Service Unavailable

I am running an Ubuntu server (DigitalOcean droplet) with two services: a React (Create React App) frontend on port 3000 and a Node.js backend/API on port 8765, managed with PM2 (pm2.keymetrics.io).
Screenshot of PM2 running 2 services
The ports are open (3000 and 8765). I checked.
lsof command showing the open ports
Problem: When the frontend app (in my browser) tries to access the backend it returns the '503 Service Unavailable' status code.
Screenshot of the browser developer tools showing 503
Question: What could be the reason? Can you suggest any steps to try?
Note: It worked fine during the past few weeks (I used to pull new changes from Bitbucket), but today I got this issue.
What I have tried so far
Restarted the server. Same error.
Changed the backend port from 9000 to 8765.
Ran the same code on a fresh AWS EC2 instance; there it works fine.
Forced the server to listen on IPv4 (based on adel's comment).
Screenshot using IPv4
I called the backend (via the frontend) multiple times and observed the running processes with the Ubuntu 'top' utility. See the screenshot below.
I noted that the process ID of the backend Node.js application is 12225, and that this process is hit repeatedly even though '503 Service Unavailable' is returned.
It turned out that the backend application code returns this status code programmatically (from a try-catch block, whenever an error occurs):
try {
  ...
}
catch (e) {
  res.send(503, ...)  // Express 3 style; in Express 4+ this would be res.status(503).send(...)
}
I struggled with this for a few days. Hope this will help someone.
(With this finding, the new problem is why the error happens on this particular server only; it works fine on the EC2 instance. I think that is a different question.)
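The pattern described above can be sketched framework-agnostically. withErrorStatus and the plain-object response shape below are hypothetical, but they illustrate the one change worth making: log the caught error before answering 503, since that log line is what would explain why only one server fails.

```javascript
// Wrap a request handler so any thrown error becomes a 503 response,
// logging the underlying cause first.
function withErrorStatus(handler) {
  return (req) => {
    try {
      return { status: 200, body: handler(req) };
    } catch (e) {
      // On the misbehaving server, this line reveals *why* the 503 happens
      // (a missing env var, an unreachable database, etc.).
      console.error('handler failed:', e.message);
      return { status: 503, body: 'Service Unavailable' };
    }
  };
}

const wrapped = withErrorStatus(() => { throw new Error('db down'); });
console.log(wrapped({}).status); // → 503
```

With the error logged, diffing the logs between the droplet and the EC2 instance should point at the environment difference.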

Node app fails to hit API after deployment to Heroku; console shows net::ERR_CONNECTION_REFUSED

I've deployed my Node app to Heroku. It worked fine at first, but later it failed to hit any API endpoint; the console shows net::ERR_CONNECTION_REFUSED.
Here's the screenshot.
I tried re-deploying, but the problem persists.
Looking at your screenshot, it seems that you are making HTTP calls to localhost:8080, not to your Heroku instance.
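One common fix, sketched under the assumption that you set a config var on the Heroku app (API_URL below is a hypothetical name): derive the API base URL from the environment instead of hardcoding localhost:8080.

```javascript
// Pick the API base URL from the environment, falling back to localhost
// only for local development.
const API_BASE = process.env.API_URL || 'http://localhost:8080';

function apiUrl(path) {
  // Resolve an endpoint path against the configured base URL.
  return new URL(path, API_BASE).toString();
}

console.log(apiUrl('/users')); // with API_URL unset: http://localhost:8080/users
```

On Heroku the config var would then carry the real host, and local runs keep working unchanged.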

How to connect node app to node api with Nginx

I built a Node app using this tutorial. Then I built a Node API using this tutorial. The app connects to the API on port 4000, which in turn connects to a MongoDB instance to store the data on the server.
This setup works great on my local machine, but I'm trying to deploy it on a DigitalOcean droplet. I have Nginx set up to listen on port 8080 for the main app, and I'm able to navigate to the app. But when I try to register a user and submit the data to the API, I get the following error in my browser: OPTIONS http://localhost:4000/users/register net::ERR_CONNECTION_REFUSED.
I suspect I have to specify something in the Nginx config files. Or could it be a ufw issue? Any help would be much appreciated.
The error is clear: the application tries to fetch from localhost:4000, which means you are expecting every visitor of your web app to have the API running on their own computer.
Change your code to point to your server's actual host and port.
Then, as you guessed, you will need a small Nginx configuration telling it what to proxy to the app and what to proxy to the API.
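A sketch of the kind of Nginx server block meant here, using the ports from the question (Nginx on 8080, API on 4000); the app's own port 3000 and the /api/ prefix are assumptions. The frontend would then call /api/users/register on the same origin instead of localhost:4000.

```nginx
server {
    listen 8080;
    server_name _;

    # Main Node app (port 3000 here is an assumption)
    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # Node API on port 4000, exposed under /api/ on the same origin;
    # the trailing slash strips the /api/ prefix before proxying.
    location /api/ {
        proxy_pass http://127.0.0.1:4000/;
    }
}
```

Serving both through one origin also sidesteps the CORS preflight (the OPTIONS request in the error) entirely.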

Sails.js deployment issue on Azure VM

I am deploying my Sails.js app with forever on a Windows Azure VM, and I can't get the server up and running. Initially it worked fine over HTTP, but after I switched to SSL with a self-signed certificate, nothing works. forever list shows my server as up and running, but I can't reach it at all: curl reports that the connection to port 443 is refused. Can anyone help?
I found the issue: deploying as a superuser solves the problem. On Linux, binding to a port below 1024 requires superuser privileges.
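Two ways around the privileged-port restriction, sketched as commands; the setcap path is Linux-specific and assumes the node binary resolved by command -v is the one forever launches:

```shell
# Option A: what the answer did - start the app with superuser privileges:
#   sudo forever start app.js
# Option B: grant only the bind-privileged-ports capability to node itself:
#   sudo setcap 'cap_net_bind_service=+ep' "$(command -v node)"
# Either way, you can confirm which user a process runs as:
id -u   # prints 0 for root, a higher uid otherwise
```

Option B avoids running the whole app as root, which is generally safer than Option A.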

Getting HTTP Status 503 on glassfish after deployment

Here's the scenario: I have a GlassFish server running my app on EC2. I configured a virtual server on GlassFish for one of my domains (let's say mydomain.com), and this virtual server has a default web module (let's say "myapp").
It works like a charm: when I access www.mydomain.com I get the login screen for my app, as it should be, with no need to access www.mydomain.com/myapp (/myapp is the default context path for myapp).
But here's the thing: after I deploy a new build of my WAR file I can't access my app. If I type www.mydomain.com in the browser and press Enter, the server gives me an "HTTP Status 503"; however, if I access www.mydomain.com/myapp I can see my login page.
The problem goes away after a "sudo service glassfish restart", but as you might imagine, restarting the app server after every deployment is a pain. Besides, this is not the only app I'm running here, so restarting GlassFish shuts down all apps and annoys all users.
I'm deploying from NetBeans, but I get the same result deploying from the command line (asadmin).
I tried Google, but the notes I found didn't help.
Is this a GlassFish config problem?
Am I missing a step after deployment?
For reference, I'm using JSF 2.1, PrimeFaces 3.2, JasperReports 4.6 (with required dependencies), the MySQL connector, and GlassFish Server OSE 3.1.2.2.
I'll appreciate any help.
Thanks.
It looks like I'm shooting in the dark here, but here it goes.
In most cases, HTTP Error 503: Service Unavailable indicates that the app server (GlassFish/Tomcat) is not communicating well with the web server in front of it, normally Apache.
What's interesting is that it works after you restart with sudo; that's a good indication of a filesystem permissions issue. If you deployed as a normal user but set up the server as root, this might just be the problem.
Yes, it's a config problem and more. Try checking the file permissions, and make sure each process runs as the right user.
If you could share some errors/logs, that would be helpful.
