When I run my Node.js app in development I intermittently see "connection refused" on about every 2nd or 3rd request. I am not even sending the requests very quickly (about 1 per second), and they should complete very quickly, since this is an Express app with an endpoint that just checks whether the Content-Type is set correctly. Is it likely that I am seeing the issue because I am not proxying the requests through nginx? Nginx would queue the requests, whereas without nginx I am hitting my Node.js app directly. I don't see anything in my Node.js app's logs that would indicate an error.
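For reference, here is a minimal sketch of the kind of endpoint described above; the route path, the expected content type, and the port are assumptions, not taken from the question:

const express = require('express');
const app = express();

// Endpoint that only checks whether the Content-Type header is set as expected.
app.post('/check', (req, res) => {
  // req.is() checks the request's Content-Type header against the given type
  // (it returns null for requests without a body).
  if (req.is('application/json')) {
    return res.sendStatus(200);
  }
  return res.status(415).send('Unsupported Media Type');
});

app.listen(3000);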
Background:
I use an Nginx + Node.js setup to run the website. The server gets quite a lot of traffic, around 300 concurrent users visiting all kinds of pages, and I use pm2 to manage my Node apps.
Problem:
However, when I restart the Node server with pm2 restart xxx, for a short period (about 15 seconds) users encounter a 502 error, and accordingly there is “connect() failed (111: Connection refused)” in the log.
According to another question on SO:
A 502 Bad Gateway error usually suggests that the proxy (Nginx in NodeJS's case) can't find a destination to route the traffic to.
So I guess the error occurs when a user requests the server at a moment when the Node process isn't ready yet, so Nginx can't "contact" my Node.js app and throws a 502 error.
Is there any way to fix this?
If you want to continue serving your users while you restart your Node.js server, you need a second Node.js server. More precisely:
Before the restart, node myapp.js is running and listening on port A. Nginx routes traffic to port A.
Now you can start a second Node.js server, probably on a newer version of your app, node mynewapp.js, that listens on port B. While you do that, traffic is still routed to port A.
Once node mynewapp.js is up and running, you switch Nginx so that it routes traffic to port B.
Allow a grace period for requests on port A to finish, then you can shut down the node myapp.js process.
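On the Node.js side, the shutdown of the "old" server can be made graceful with server.close(), which stops accepting new connections but lets in-flight requests finish. A minimal sketch, assuming a plain Express server, a SIGTERM-triggered shutdown, and an illustrative 30-second grace period:

const express = require('express');

const app = express();
app.get('/', (req, res) => res.send('hello from the old app'));

const server = app.listen(3000); // "port A"

process.on('SIGTERM', () => {
  // Stop accepting new connections; requests already in flight are allowed to finish.
  server.close(() => {
    console.log('all open requests finished, exiting');
    process.exit(0);
  });

  // Safety net: force an exit if requests take longer than the grace period.
  setTimeout(() => process.exit(1), 30000).unref();
});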
Note two potential pitfalls with this approach:
Long-running requests on port A would prevent you from shutting down the "old" Node.js server.
Requests that leave state in the Node.js server (in global JavaScript variables, say) would lose that state when you switch over to the other Node.js server. But (session) state that you write to a database will survive.
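As a sketch of what "writing state to a database" can look like, here is a per-user counter kept in Redis instead of in a global JavaScript object; the ioredis client and the key naming are illustrative choices, not part of the answer above:

const Redis = require('ioredis');
const redis = new Redis(); // connects to 127.0.0.1:6379 by default

// Instead of a global object like `const visits = {}`, which would be lost on
// switch-over, keep the counter in Redis where both old and new servers can see it.
async function recordVisit(userId) {
  return redis.incr(`visits:${userId}`);
}

async function getVisits(userId) {
  const value = await redis.get(`visits:${userId}`);
  return Number(value) || 0;
}

module.exports = { recordVisit, getVisits };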
I have a Node.js app on an Ubuntu server, and I use Plesk for server management.
I am using a URL to catch requests from an external API: it sends a webhook to my URL to deliver information.
I see a 499 error in my server log when this URL is requested.
I cannot find any Nginx configuration that fixes this problem.
Any ideas?
Kind regards
I had similar issues with NGINX recently.
The reason was a server timeout: NGINX forwarded the request to a backend service (Spring in my case), and the backend service timed out.
In my case it was due to an "out of memory" exception in the Spring Boot app.
So, most probably it's an issue in your Node service.
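For what it's worth, nginx logs 499 when the client closes the connection before a response is sent, which usually points to a slow backend. A common mitigation for webhook endpoints is to acknowledge the request immediately and do the slow work afterwards; a minimal sketch, where the route path and the processing step are assumptions:

const express = require('express');
const app = express();

app.use(express.json());

app.post('/webhook', (req, res) => {
  // Respond right away so the external API does not give up and drop the connection.
  res.sendStatus(200);

  // Do the slower processing after the response has been sent.
  setImmediate(() => {
    try {
      handlePayload(req.body);
    } catch (err) {
      console.error('webhook processing failed', err);
    }
  });
});

// Placeholder for whatever the webhook actually needs to do.
function handlePayload(payload) {
  console.log('received webhook payload', payload);
}

app.listen(3000);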
I have a Node.js API using the Express framework.
I use Nginx for load balancing between my Node.js instances, and PM2 to spawn the instances.
I noticed in the log that Nginx makes some "dummy/internal" requests, probably to check whether an instance is up ("heartbeat requests" might be the appropriate name for these requests).
My question is: what is the right way to identify these "dummy/internal" requests in my API?
I'm fairly certain that nginx only uses passive health checks for upstream servers. In other words – because all HTTP requests are assumed to result in a response, nginx says "If I send this server a bunch of requests and don't get responses for them, I'll consider the server to be unhealthy".
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.
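One way to settle it empirically is to log enough of each incoming request to see where it originates; a minimal Express middleware sketch, where the logged fields are just examples:

const express = require('express');
const app = express();

// Log a few identifying fields for every request hitting this instance.
app.use((req, res, next) => {
  console.log(JSON.stringify({
    time: new Date().toISOString(),
    method: req.method,
    url: req.originalUrl,
    userAgent: req.headers['user-agent'],
    forwardedFor: req.headers['x-forwarded-for'], // set by nginx if configured
    remoteAddress: req.socket.remoteAddress,
  }));
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);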
After a few weeks of working fine, our Socket.io setup started spewing errors on some browsers. I've tried updating to the latest Socket.io version and I've tried our setup on different machines; it seems to work on most browsers, with no clear pattern of which ones work.
These errors appear at one-second intervals:
OPTIONS https://website.com/socket.io/?EIO=2&transport=polling&t=1409760272713-52&sid=Dkp1cq0lpKV75IO8AdA3 socket.io-1.0.6.js:2
XMLHttpRequest cannot load https://website.com/socket.io/?EIO=2&transport=polling&t=1409760272713-52&sid=Dkp1cq0lpKV75IO8AdA3. Invalid HTTP status code 400
We're behind Amazon's ELB, with Socket.io on polling because the ELB router doesn't support WebSockets.
I found the problem that has been causing this, and it's really unexpected...
This problem comes from using load-balanced services like AWS ELB (an independent EC2 instance should be fine, though) and Heroku: their infrastructure doesn't fully support Socket.io's features. AWS ELB flat out won't support WebSockets, and Heroku's router is trash for Socket.io, even in conjunction with socket.io-redis.
The problem is hidden when you use a single server, but as soon as you start clustering, you will get issues. A single Heroku dyno on my application worked fine, and the problems started appearing in production rather than in development, where we weren't using more than one server. We tried ELB with sticky load balancing, and even then we still had the same issues.
When Socket.io returns 400 errors like this, it is effectively saying "this session doesn't exist and you never completed the handshake", because you completed the handshake on a different server in your cluster.
The solution for me was just dedicating an EC2 instance to handle Socket.io for my web app.
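For reference, the socket.io-redis wiring mentioned above typically looks like the sketch below (the Redis host and port are assumptions); note that with polling transports you still need sticky routing so a client keeps reaching the server that holds its handshake:

const http = require('http');
const socketio = require('socket.io');
const redisAdapter = require('socket.io-redis');

const server = http.createServer();
const io = socketio(server);

// Share events across instances via Redis; this does not remove the need for
// sticky routing when the polling transport is used.
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

io.on('connection', (socket) => {
  socket.emit('hello', { ok: true });
});

server.listen(3000);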
I am looking at node-http-proxy and nodejs-proxy to build a DIY reverse proxy/load balancer in Node.js. After coding a small version, I set up 2 WEBrick servers for the same Rails app so I could load balance (round robin) between them. However, each HTTP request is sent to one server or the other, which is very inefficient, since loading the CSS and JavaScript files for the home page alone takes more than 25 GET requests.
I tried to play a bit with socket events, but I didn't get anywhere, because keep-alive connections are used by default (possibly this is why nginx defaults to HTTP/1.0 for proxied connections).
OK, so I am wondering how my proxy can send a block of HTTP requests (for instance, everything needed to load a webpage) to only one server, so I can send the next block to another server.
You need to consider stickiness or session persistence. This ensures that connections after the first inbound connection get 'stuck' to the chosen server for the duration of the session, or until the persistence times out.
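A minimal sketch of IP-based stickiness on top of node-http-proxy, where the backend addresses and the hash function are illustrative assumptions:

const http = require('http');
const crypto = require('crypto');
const httpProxy = require('http-proxy');

const targets = [
  'http://127.0.0.1:3001',
  'http://127.0.0.1:3002',
];

const proxy = httpProxy.createProxyServer({});

// Hash the client IP so the same client keeps hitting the same backend.
function pickTarget(req) {
  const ip = req.socket.remoteAddress || '';
  const hash = crypto.createHash('md5').update(ip).digest();
  return targets[hash.readUInt32BE(0) % targets.length];
}

http.createServer((req, res) => {
  proxy.web(req, res, { target: pickTarget(req) }, (err) => {
    res.writeHead(502);
    res.end('Bad gateway: ' + err.message);
  });
}).listen(8080);

Hashing the IP is the simplest form of stickiness; a cookie-based approach is the usual alternative when many clients share one address.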