I am developing a web app with Nest and a gRPC microservice.
At the moment I don't have a gRPC server, only a client, so I get error 14 UNAVAILABLE. I want to handle this error.
I don't want to send a 500 Internal Server Error.
How could I do this with my own exception?
I have tried creating exceptions at the controller level that inherit from GrpcExceptionFilter, but they do not work, since the error is never captured.
I have also created a filter that captures the HttpStatus.SERVICE_UNAVAILABLE exception, and that does not work either.
I need to be able to control the error that occurs when the microservice is not available.
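One way to approach this, sketched below with my own function name and mapping (this is not a Nest API, just the translation step such a filter would perform): detect the numeric gRPC status code on the client error and map code 14 to HTTP 503 instead of a generic 500.

```javascript
// Sketch (grpcErrorToHttpStatus is my own name, not from Nest or grpc-js).
// gRPC status 14 (UNAVAILABLE) is translated to HTTP 503 instead of 500.
const GRPC_UNAVAILABLE = 14;

function grpcErrorToHttpStatus(err) {
  // gRPC client errors carry a numeric `code` following the gRPC
  // status-code convention.
  if (err && err.code === GRPC_UNAVAILABLE) {
    return 503; // Service Unavailable: the microservice is down
  }
  return 500; // fall back to Internal Server Error for other codes
}

// Example: the rough shape of the error a gRPC client produces when
// no server is listening.
const example = { code: 14, details: 'No connection established' };
console.log(grpcErrorToHttpStatus(example)); // 503
```

A filter bound to the controller could then call this mapping and respond with the resulting status instead of letting the raw error bubble up as a 500.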
This is my first Node.js application I designed and developed.
When I run the app in staging or production it crashes when fatal errors occur.
For instance, trying to access error[0] when error is empty. This takes the entire service down and the client receives 503 errors. I am used to PHP & C# where this doesn't happen. I mean, the end-user will get an error, but the server for PHP or C# doesn't go down. With Node.js the entire Web service is no longer available.
I am working through the code to catch everything that could be a fatal error, but I still don't have confidence in this app, knowing that one mistake means my clients are unable to work. To restart the services, I created a health check system that expects a 200 status code.
Here is my environment:
React.js with Ant Design & Node.js (14.21.1)
This is what I run to start the services on production and staging for the web service and client respectively:
nohup node app.js > nohup.out &
nohup node node_modules/@craco/craco/scripts/start.js > nohup.out &
The web service uses Apollo GraphQL & Sequelize with a pool
{
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
Running Apache2 on AWS EC2 instances with Aurora/MySQL, but without using an RDS proxy.
PHP 8 to make health checks every 10 seconds and restart services when anything >= 300 is returned.
Here is what I want to know:
Why does the entire web service go down when an error happens? When PHP or C# hit a fatal error, it only affects the one request, not the entire site. What I am experiencing is the equivalent of a PHP web service having a fatal error and the entire service crashing and going offline.
Is this normal for Node.js services?
If this is not normal, what am I doing wrong?
How can I stop errors from taking down the entire service?
When you use Node.js, your application is the server. On the other hand, when you use PHP, the server is Apache and your code is just a script being executed by Apache's mod_php module. (at least, these are the typical configurations for Node.js and PHP, though not the only ones)
So when your Node.js application has an uncaught error, it's equivalent to your HTTP server having an uncaught error. It will crash. While with PHP, Apache mod_php will catch it and handle it in a specific way.
But that doesn't mean it's acceptable for a run-of-the-mill error to cause a Node.js HTTP server to crash. If that happens, it just means your error handling needs improvement. A well-coded Node.js server will catch an error, log it, respond with an error response, and keep chugging. You have to write that code yourself though, so it's not as forgiving as PHP in that regard.
As for what to do about it, it depends on what kind of errors you're facing exactly, but the general idea is that there should be a top-level error handler that catches any error during a request, logs the error, does whatever else should be done, and returns a code 500 response. There are at least a couple of gotchas in addition:
That kind of error handler cannot catch an "uncaught promise rejection". This sort of error happens when you are not awaiting or catch()ing your promises. You should always do so, but if you want to stop your server from crashing while you diagnose where those are, you can subscribe to the unhandledRejection event. This will prevent them from crashing the process. Make sure to log them and fix them the right way.
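As a sketch of that safety net (to be treated as a diagnostic stopgap, not a fix):

```javascript
// Sketch: a process-wide net while you track down promises that are
// missing an await or a .catch(). With a listener attached, Node logs
// instead of crashing; the real fix is still to handle each promise.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection (fix me):', reason);
});

// Without the listener above, recent Node versions would terminate
// the process, because nothing ever handles this rejection:
Promise.reject(new Error('forgot to catch me'));
```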
If you are using something that inherits from EventEmitter, and it fires an error event which you have not subscribed to, this will be re-thrown as an unhandled error and crash the application. If you're using anything that fires events (not super common for an HTTP server), make sure you subscribe to its error event.
If you are using callback APIs (there's rarely a good reason to anymore), you need to be careful about throwing an error from within a callback; doing so can crash the application.
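The safe pattern there is the error-first callback convention: report errors through the callback's first argument instead of throwing, since a try/catch further up the stack can no longer see an error thrown inside an asynchronous callback. A sketch (the function names are hypothetical):

```javascript
// Sketch of the error-first callback convention.
function readConfig(raw, callback) {
  let parsed;
  try {
    parsed = JSON.parse(raw); // may throw on bad input
  } catch (err) {
    return callback(err);     // report the error, don't throw it
  }
  callback(null, parsed);     // success: the error slot is null
}

readConfig('{"port": 3000}', (err, cfg) => {
  if (err) return console.error('config error:', err.message);
  console.log('port is', cfg.port); // prints "port is 3000"
});
```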
I am running the Angular application in Microsoft Teams and deploying it on Azure.
But it's not starting up, as it is looking for the robots933456.txt route.
This wasn't the case before today.
Please guide me on how to proceed further.
On running the app, the logs show:
The error was actually on my side and had nothing to do with the above, but since this message is commonly seen and raised by others, I thought I'd share what I found:
After doing some research, I figured out that I can safely ignore this message. /robots933456.txt is a dummy URL path that App Service uses to check whether the container is capable of serving requests. A 404 response simply indicates that the path doesn't exist, but it lets App Service know that the container is healthy and ready to respond to requests.
https://github.com/MicrosoftDocs/azure-docs/issues/32472
I have built a simple Python/Flask app for sending automatic messages in Slack and Telegram after receiving a post request in the form of:
response = requests.post(url='https://my-cool-endpoint.a.run.app/my-app/api/v1.0/',
json={'message': msg_body, 'urgency': urgency, 'app_name': app_name},
auth=(username, password))
, or even a similar curl request. It works well on localhost, as well as in a containerized application. However, after deploying it to Cloud Run, the requests keep resulting in the following 503 error:
POST 503 663 B 10.1 s python-requests/2.24.0 The request failed because either the HTTP response was malformed or connection to the instance had an error.
Does it have anything to do with a Flask timeout or something like that? I really don't understand what is happening, because the response doesn't take (and shouldn't) take more than a few seconds (usually less than 5s).
Thank you all.
--EDIT
Problem solved after thinking about AhmetB's reply. I found that I was setting the host to the public IP address of the SQL instance, and that is not how it works when you deploy to Cloud Run. For it to work, you must replace the host with a unix_socket and set its path.
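For anyone hitting the same thing, a sketch of the difference (the project, region, and instance names below are hypothetical placeholders; substitute your own Cloud SQL instance connection name):

```
# Local development: connect over TCP to the instance's public IP
host = 203.0.113.10

# Cloud Run: connect over the Cloud SQL unix socket mounted at
# /cloudsql/<INSTANCE_CONNECTION_NAME> instead
unix_socket = /cloudsql/MY_PROJECT:MY_REGION:MY_INSTANCE
```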
Thank you all! This question is closed.
Hi,
We have a NestJS application that is reachable both by HTTP and gRPC (using a custom RPC strategy).
Whilst they don't share the same routes, they do share some code. For example, a service to look up resources from the database. We initially followed the documentation and used exception filters to manage HTTP response status codes. This means that the service throws an instance of NotFoundException when a resource cannot be found.
However, now that we're trying to integrate with the RPC exception handler, we have found that any exception that isn't an instance of RpcException will be treated as an Internal Server Error by the RPC microservice, including HttpExceptions.
After looking at the documentation and the source code I cannot find a way to share exceptions correctly between those two microservices, but I can be totally mistaken.
Is there a way to share code between HTTP and RPC services while still reporting exceptions accurately through both protocols?
You can bind a global exception filter or interceptor to the gRPC microservice that would transform every thrown HttpException. At the moment there's no generic exception class shared across transport types.
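As a sketch of the translation such a filter would perform (the function name, the fallback choice, and the exact set of mappings are mine; the numeric codes are the standard gRPC status codes):

```javascript
// Sketch: translate common HTTP statuses to numeric gRPC status codes,
// so an HttpException thrown by shared code can be reported accurately
// over the RPC transport.
const HTTP_TO_GRPC = {
  400: 3,  // Bad Request         -> INVALID_ARGUMENT
  401: 16, // Unauthorized        -> UNAUTHENTICATED
  403: 7,  // Forbidden           -> PERMISSION_DENIED
  404: 5,  // Not Found           -> NOT_FOUND
  409: 6,  // Conflict            -> ALREADY_EXISTS
  503: 14, // Service Unavailable -> UNAVAILABLE
};

function httpStatusToGrpcCode(status) {
  return HTTP_TO_GRPC[status] ?? 2; // default: UNKNOWN
}

console.log(httpStatusToGrpcCode(404)); // 5
```

A global filter bound to the microservice would call something like this on the caught HttpException's status and rethrow an RpcException carrying the result.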
Replying to my own question: based on a reported issue, this is not supported in the context of a hybrid application.
My project requirements have changed however so I am unlikely to investigate further.
Inside a Node.js connect middleware, the default way of reporting errors is to call next(err), usually followed by return if no further message should be shown to the user. The error handler may then show a vanilla HTTP 500 page, for example.
However, some errors may result in exceptions, including those thrown by third-party libraries in use. The connect (or Express?) middleware stack, however, catches those and redirects them to the error handler as well.
I have followed some discussions saying Node.js should be restarted on exceptions, as some state may be corrupted. However, the connect (or Express) makers don't seem to share this view?
Given this, is it feasible to just throw exceptions inside middleware? Or might this bypass some connect-internal operation?
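For reference, the conventional pattern being asked about looks roughly like this (the handler names and the fake lookup are hypothetical; only the next(err) convention and the four-argument error-handler signature come from connect/Express):

```javascript
// Sketch: forward errors with next(err) rather than throwing,
// especially from asynchronous callbacks, where a thrown error
// escapes the middleware stack entirely.
function loadUser(req, res, next) {
  fakeDbLookup(req.url, (err, user) => {
    if (err) return next(err); // hand off to the error handler; don't throw
    req.user = user;
    next();
  });
}

// Error-handling middleware is recognized by its four-argument signature.
function errorHandler(err, req, res, next) {
  res.statusCode = 500;
  res.end('Internal Server Error');
}

// Stand-in for a real database call.
function fakeDbLookup(key, cb) {
  if (key === '/missing') return cb(new Error('not found'));
  cb(null, { name: 'demo' });
}
```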