I recently created an e-commerce site using Express, and the Node server worked fine on my local machine. When I uploaded it to my VPS and ran it with pm2 and with nodemon, the server stopped responding to requests after a few minutes, even though the number of requests was low.
All of the internal functionality other than request handling kept working, though. I use a lot of console.log() calls in my code; could this problem be due to the excessive use of console.log()?
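One thing I can try cheaply is silencing the logging in production to rule it out. A minimal sketch of what I have in mind (logger.js and debug are illustrative names, not my actual code); console.log can be synchronous depending on where stdout is attached, and pm2 captures stdout into log files, so heavy logging does add overhead even if it is rarely the whole story:

```js
// logger.js — illustrative only: route noisy debug output through one guarded
// helper so it can be switched off in production without touching call sites.
const isProd = process.env.NODE_ENV === 'production';

function debug(...args) {
  if (!isProd) {
    console.log(new Date().toISOString(), ...args);
  }
}

module.exports = { debug };
```

If the hangs continue with logging silenced, the cause is presumably elsewhere and worth profiling on the VPS itself.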
Related
So I am in a very weird situation.
I have a React app with a Node.js backend that was working just fine a couple of days ago.
I don't know what happened, but the client stopped connecting to the server and now returns the message "Proxy error: Could not proxy request".
I then tried testing just the server side and sent a few different requests via Postman. I had these requests saved in Postman because I have used them a thousand times before, and they always worked fine. But now Postman comes back with "Client sent an HTTP request to an HTTPS server".
I have checked a hundred times that my server is running; I can see "Listening on port 5001..." in the console.
Furthermore, I tried running the app on another machine, my laptop, and everything works fine there.
I have spent two days on this issue and I have no clue what's going on.
Things I tried:
all proxy-related suggestions on Stack Overflow, such as changing localhost to 127.0.0.1 or adding a trailing "/", etc.
deleted node_modules and reinstalled
deleted the repo and recloned
tried running with node instead of nodemon
I have no clue what happened to my desktop machine that it literally stopped connecting to the server.
I don't know whether you'd need to see any piece of code, but I'm happy to share anything you need; I just don't know what would be helpful to show.
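One check I can do is a raw probe of port 5001 to see whether whatever answers there is actually speaking HTTP or expecting TLS, since "Client sent an HTTP request to an HTTPS server" suggests the latter (a different process holding the port, a local proxy, or the app accidentally starting an HTTPS listener). Roughly this; the port is from my log line above, the rest is just a sketch:

```js
// probe-5001.js — send one plain HTTP request to the port and dump whatever answers.
// An Express app started with app.listen(5001) should reply with an HTTP response;
// a TLS listener will typically reset the connection or send back TLS alert bytes.
const net = require('net');

const socket = net.connect(5001, '127.0.0.1', () => {
  socket.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n');
});
socket.on('data', (chunk) => process.stdout.write(chunk.toString()));
socket.on('error', (err) => console.error('connection failed:', err.message));
socket.on('end', () => console.log('\n--- connection closed ---'));
```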
I'm hosting a Node application on IIS using iisnode.
Every once in a while, usually about once a day, the application stops receiving requests even though the frontend keeps sending them. When I restart the application, it accepts requests smoothly again.
Note: this application is in production and has heavy traffic.
Any ideas?
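One thing I'm considering is logging event-loop delay from inside the app, to see whether the process is blocked rather than crashed when it goes deaf. This is just an assumption about the cause, and the sketch below uses Node's built-in perf_hooks, nothing iisnode-specific:

```js
// Log event-loop delay every 10 seconds; a mean in the hundreds of milliseconds
// (or a max in the seconds) means the process is busy or blocked and will look
// unresponsive to IIS/iisnode even though it is still running.
const { monitorEventLoopDelay } = require('perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  console.log(
    'event loop delay ms: mean=%d max=%d',
    Math.round(histogram.mean / 1e6),
    Math.round(histogram.max / 1e6)
  );
  histogram.reset();
}, 10000).unref();
```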
I've been setting up my Raspberry Pi as a web server and everything worked fine until yesterday, when every Node process started timing out when making external requests.
For example:
npm i -g n (the Node version manager) times out while fetching the ideal tree, and my Express applications can't make calls to external APIs, but requests to the server work perfectly. When I make the same outgoing calls with curl, everything works fine. Only Node.js processes seem to have this problem.
Any tips on where to look for the problem? Is it firewall related? (See the probe sketch after the lists below.)
Not working:
Expected npm i -g n to install the package; instead it times out.
All external calls made by a node process time out with reason connect ETIMEDOUT
Working:
curl to external resources works, so the connection is active
Requests to an Express server from clients work as usual
Responses from the Express server work if there are no external API calls involved
Cloudflare Tunnel and Access for SSH work.
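One guess I want to test is that Node is preferring an IPv6 address the Pi can't actually route, while curl falls back to IPv4. The probe below compares Node's default lookup with a connection forced onto IPv4 (example.com is just a placeholder host):

```js
// probe-egress.js — compare a default request with one forced to IPv4 (family: 4).
// If only the forced-IPv4 request succeeds, the timeouts look like a DNS/IPv6
// routing problem rather than a firewall blocking Node specifically.
const https = require('https');

function probe(label, extra) {
  const req = https.get(
    { host: 'example.com', path: '/', timeout: 5000, ...extra },
    (res) => {
      console.log(`${label}: HTTP ${res.statusCode}`);
      res.resume();
    }
  );
  req.on('timeout', () => { console.log(`${label}: timed out`); req.destroy(); });
  req.on('error', (err) => console.log(`${label}: ${err.code || err.message}`));
}

probe('default lookup', {});
probe('forced IPv4', { family: 4 });
```

If the forced-IPv4 probe works and the default one times out, dns.setDefaultResultOrder('ipv4first') (available in current Node releases) or fixing the Pi's IPv6 configuration would be the next things to try.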
We are using an ELB to load balance between two Node.js servers.
Yesterday the service suddenly started receiving errors while both servers were under the ELB.
When we remove one of the servers and keep only one under the ELB, the service works fine.
I don't have any logs of how traffic is directed between the servers, and it seems that the system works fine with one server (no matter which of the two) but doesn't work with more than one.
Any suggestions on what we should check?
Thanks!
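One thing I'm wondering about is whether we keep any state in a single Node process's memory (sessions, handshakes, caches), since that would explain "fine with one server, broken with two". A deliberately simplified illustration, not our actual code:

```js
// Why per-process state breaks behind a load balancer: anything stored in this
// Map exists only on the instance that created it.
const express = require('express');
const app = express();

const sessions = new Map(); // lives only in THIS process

app.post('/login', (req, res) => {
  const id = Math.random().toString(36).slice(2);
  sessions.set(id, { user: 'demo' });
  res.json({ sessionId: id });
});

app.get('/me', (req, res) => {
  const session = sessions.get(req.query.sessionId);
  // If the ELB routes this request to the OTHER instance, the Map there is
  // empty and the request fails even though login "worked".
  if (!session) return res.status(401).json({ error: 'unknown session' });
  res.json(session);
});

app.listen(3000);
```

If something like this is in play, a shared store for that state or sticky sessions on the ELB would be the usual fixes.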
Just after a few weeks of working fine, our Socket.io setup started spewing errors on some browsers. I've tried updating to the latest Socket.io version, and I've tried our setup on all sorts of machines; it seems to work on most browsers, with no clear pattern of which ones work.
These errors appear on a second interval:
OPTIONS https://website.com/socket.io/?EIO=2&transport=polling&t=1409760272713-52&sid=Dkp1cq0lpKV75IO8AdA3 socket.io-1.0.6.js:2
XMLHttpRequest cannot load https://website.com/socket.io/?EIO=2&transport=polling&t=1409760272713-52&sid=Dkp1cq0lpKV75IO8AdA3. Invalid HTTP status code 400
We're behind Amazon's ELB, with Socket.io on polling because the ELB router doesn't support WebSockets.
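For reference, the client is pinned to polling roughly like this (website.com is the placeholder domain from the errors above; transports is a standard Socket.io 1.x client option):

```js
// Browser-side sketch: force long-polling so the client never attempts the
// WebSocket upgrade that the classic ELB can't carry.
var socket = io('https://website.com', {
  transports: ['polling']
});

socket.on('connect', function () {
  console.log('connected via polling, id:', socket.id);
});
socket.on('error', function (err) {
  console.log('socket error:', err);
});
```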
I found the problem that has been causing this, and it's really unexpected...
This problem comes from using load-balanced services like AWS ELB (an independent EC2 instance should be fine, though) and Heroku; their infrastructure doesn't fully support Socket.io. AWS ELB flat out won't support WebSockets, and Heroku's router is trash for Socket.io, even in conjunction with socket.io-redis.
The problem is hidden when you use a single server, but as soon as you start clustering, you will get issues. A single Heroku dyno worked fine for my application, and the problems only started appearing once we moved out of development into production, where we were no longer running just one server. We tried ELB with sticky load balancing, and even then we still had the same issues.
When Socket.io returns 400 errors like these, it is essentially saying "this session doesn't exist and you never completed the handshake", because you completed the handshake on a different server in your cluster.
The solution for me was just dedicating an EC2 instance for my web app to handle Socket.io.
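For anyone else hitting this, the socket.io-redis setup mentioned above looks roughly like the sketch below (host and port are placeholders, not our production values). The adapter only shares events between the nodes; it does not make a polling session portable, so every request carrying a given sid still has to reach the node that performed that handshake, which is exactly what broke behind the ELB.

```js
// Rough sketch of wiring the socket.io-redis adapter into a Socket.io 1.x server.
var http = require('http');
var socketio = require('socket.io');
var redisAdapter = require('socket.io-redis');

var server = http.createServer();
var io = socketio(server);

// Lets multiple Socket.io nodes broadcast to each other via Redis pub/sub.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', function (socket) {
  socket.emit('hello', 'connected to node ' + process.pid);
});

server.listen(3000);
```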