We are running Node.js in the App Engine standard environment, and while we try to be perfect programmers, we sometimes have a bug. The issue we're running into is that App Engine completely crashes the server every time and throws a 203 error.
We've tried all the standard error-handling approaches for Node, but it seems like App Engine is a special case. Has anyone seen or handled this issue before?
As stated in the answer https://stackoverflow.com/a/51769527/10301041:
"The error 203 means that Google App Engine detected that the RPC channel has closed unexpectedly and shuts down the instance. The request failure is caused by the instance shutting down."
An error in your code can be the cause of that. Another possible cause is hitting one of the project quotas.
If you're still running into the issue and can't identify the source of the error, I would suggest contacting GCP support, as is also suggested in the answer above.
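If the crash is being triggered by an unhandled exception or promise rejection in your code, it can also help to log it at the process level before the instance goes down, so the stack trace at least shows up in the App Engine logs. This is only a diagnostic sketch using standard Node.js process hooks, not an App Engine-specific API:

```js
// Diagnostic sketch: log fatal errors before the instance shuts down.
// These are standard Node.js process hooks, not App Engine-specific APIs.
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err && err.stack ? err.stack : err);
  // Exit explicitly so App Engine replaces the instance cleanly instead of
  // leaving it in an undefined state.
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled promise rejection:', reason);
});
```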
I'm currently seeing many dependency errors in Azure Application Insights, and I'm having trouble determining the root cause.
I currently have an API deployed as an App Service within Azure. The API is connected to a Cosmos DB account for basic CRUD operations. While monitoring the default Application Insights, I've run across several dependency errors:
Type: Azure DocumentDB
Name: Create/Querydocument
Call status: false
res: undefined
This behavior seems to be very intermittent (maybe a concurrency problem), but it does not seem to actually cause API errors, as the query itself still appears to complete successfully. Any thoughts on the root cause of the issue, or on how to get more detail about the error, would be greatly appreciated.
Here is a screenshot of the end-to-end transaction for reference:
[screenshot: Dependency Error]
Is your app running on Windows? Is it compiled as X64/Release?
The "failure" is related to this: https://learn.microsoft.com/azure/cosmos-db/sql/performance-tips-query-sdk?tabs=v3&pivots=programming-language-csharp#use-local-query-plan-generation
Your app seems to be performing cross-partition queries. When the SDK is either not running on Windows, not built as x64, or missing some of the DLLs that come with the NuGet package, it needs to make an HTTP request to obtain the query plan.
What you are seeing is the SDK retrying the query plan request because, for some reason, you are experiencing high latency (500 ms is quite high for an HTTP request).
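For completeness, the same idea of avoiding cross-partition fan-out applies in other SDKs too. Below is only a sketch with the JavaScript @azure/cosmos SDK (an assumption on my part, since the discussion above concerns the .NET SDK; the endpoint, key, container, and property names are placeholders): passing the partition key with the query keeps it scoped to a single partition instead of fanning out.

```js
// Sketch with the JavaScript @azure/cosmos SDK (names/credentials are placeholders).
const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY,
});
const container = client.database("mydb").container("orders");

async function getOrdersForCustomer(customerId) {
  const querySpec = {
    query: "SELECT * FROM c WHERE c.customerId = @customerId",
    parameters: [{ name: "@customerId", value: customerId }],
  };
  // Supplying the partition key keeps the query on a single partition
  // rather than doing a cross-partition fan-out.
  const { resources } = await container.items
    .query(querySpec, { partitionKey: customerId })
    .fetchAll();
  return resources;
}
```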
The company I work for has been having this issue for years; we get this error almost every other day. After the application pool crashes, it does reboot. However, customers submitting their applications/deals to our website encounter errors while the app pool is crashing/rebooting. I've searched extensively online and on YouTube trying to fix this issue, but I've had no luck finding a fix.
Has anyone had to deal with this issue before, and do you happen to have any ideas?
After a few days, sometimes my previously running application will flake out with this error and no longer load. I have searched, but have not found the cause. I have a hunch that it may be because I'm giving my Node.js app only 256MB memory, but cannot confirm. Any recommendations?
I was facing the same issue, and it was related to the port: I had hard-coded the port value, which was the problem. I switched to the cfenv module and bound to the port it reports (getAppEnv().port).
This solved my issue.
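A minimal sketch of that fix, assuming an Express app running on Cloud Foundry (the route and names are placeholders):

```js
// Bind to the platform-assigned port via cfenv instead of hard-coding it.
const express = require("express");
const cfenv = require("cfenv");

const app = express();
const appEnv = cfenv.getAppEnv(); // falls back to local defaults when run outside Cloud Foundry

app.get("/", (req, res) => res.send("ok"));

app.listen(appEnv.port, appEnv.bind, () => {
  console.log(`listening on ${appEnv.url}`);
});
```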
A 502 error can mean a variety of things, but without any logs, it is difficult to say for sure.
I agree that there is a good chance that your app is potentially running out of memory given the symptoms (or there could be a code bug which is causing some kind of unresponsive behavior).
There could be internal Cloud Foundry GoRouter / networking problems, but to debug those, you would probably need to cf ssh into a running container and trace packets. When you push the app again, you will most likely get deployed to different machines, so network issues often resolve when that happens. (Given the seemingly consistent behavior of your error, I think this is unlikely.)
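If memory is the suspect, one cheap check (just a diagnostic sketch using plain Node.js, nothing Cloud Foundry-specific) is to log the process's own memory usage periodically and watch whether it creeps toward the 256 MB limit before the 502s start:

```js
// Diagnostic sketch: log Node's memory usage once a minute so you can see
// whether the app approaches the container's memory limit before failing.
setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const mb = (n) => Math.round(n / 1024 / 1024);
  console.log(`memory rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB`);
}, 60 * 1000);
```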
I'm trying to figure out why my Node.js app becomes unresponsive after 11h 20min. It happens every time, no matter whether I run it on Amazon Linux or Red Hat.
My Stack:
Node.js (v6.9.4)
MongoDB (3.2)
PM2 process manager
AWS EC2 T2 medium instance
Every time I'm running the app it becomes unresponsive with an error returned to the browser:
net::ERR_CONNECTION_RESET
PM2 doesn't restart the app, so I suspect it has nothing to do with Node.js. I also analysed the app and it doesn't have memory leaks. The DB logs also look alright.
The only constant factor is the fact that the app crashes after it runs for 11h 20min.
I'm handling all possible errors in the Node.js app, but no errors show up in the log files, so I suspect it has to be something else.
I also checked /var/log/messages and /home/centos/messages, but there's nothing related to the crash of the app there either.
/var/log/mongodb/mongo.log doesn't show anything specific either.
What would be the best way to approach the problem?
Any clues as to how I can debug it, or what the reason could be?
Thanks
Copied from the comment since it apparently led to the solution:
You're leaking something other than memory is my guess, maybe file descriptors. Try using netstat or lsof to see if there are a lot more open connections or files than you expect.
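One way to watch for that from inside the app (a Linux-only diagnostic sketch that reads the process's own /proc entry, complementing the netstat/lsof check above) is to log the number of open file descriptors over time and see whether it climbs steadily toward the ~11h 20min mark:

```js
// Diagnostic sketch (Linux only): count this process's open file descriptors
// by listing /proc/self/fd and log the count every minute. A steadily growing
// number points to a descriptor/socket leak.
const fs = require("fs");

setInterval(() => {
  fs.readdir("/proc/self/fd", (err, fds) => {
    if (err) {
      console.error("could not read /proc/self/fd:", err.message);
      return;
    }
    console.log(`open file descriptors: ${fds.length}`);
  });
}, 60 * 1000);
```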
I've been developing publishing features for my website using socket.io on a Node server. I've been having issues over the past month or so with the socket connections becoming painfully slow or altogether unresponsive after only a couple of days running. The server is not out of memory. I'm not very familiar with debugging this kind of issue.
The socket.io logs were not telling me much beyond "websocket connection invalid" or "client not handshaken 'client should reconnect'"
I googled around and eventually saw a thread recommending running netstat on the command line, and I saw a large number of connections in FIN_WAIT2 and CLOSE_WAIT, so I figured that was the cause of my issue. After looking at some related threads on the socket.io GitHub, the recommendation was to upgrade to branch 0.9.14 (I had been running 0.9.13 at the time).
I have since done so and am still having periods of 'downtime' when the server has only been running for a few days straight. My site does not get anywhere near the amount of traffic where this should be an issue.
A new error has started popping up in my logs (websocket parser error: no handler for opcode 10), but my googling has turned up squat on the issue. I am not sure where to turn to resolve this issue, or whether I am simply chasing a red herring and the real issue is something else that one of you may be able to shed some light on.
I am running Node.js v0.10.10 and socket.io v0.9.14. A hard reboot of the Linux server resolves the issue 100% of the time, whereas a restart of the Node service does not, which is what has led me to believe it is an issue related to open sockets on the server.
You are probably experiencing a known bug in Node.js that was recently fixed; see issue #5504.
Is the problem still there after upgrading to node v0.10.11?