Shared Exceptions between HTTP and RPC - nestjs

Hi,
We have a NestJS application that is reachable both by HTTP and gRPC (using a custom RPC strategy).
Whilst they don't share the same routes, they do share some code; for example, a service to look up resources from the database. We initially followed the documentation and used exception filters to manage HTTP response status codes. This means the service throws an instance of NotFoundException when a resource cannot be found.
However, now that we're trying to integrate with the RPC exception handler, we found that any exception that isn't an instance of RpcException is treated as an Internal Server Error by the RPC microservice, including HttpExceptions.
After looking at the documentation and the source code I cannot find a way to share exceptions correctly between those two microservices, but I may be mistaken.
Is there a way to share code between HTTP and RPC services while still reporting exceptions accurately through both protocols?

You can bind a global exception filter (or interceptor) to the gRPC microservice that transforms every thrown HttpException. At the moment there's no generic exception class shared across transport types.

Replying to my own question: based on the reported issue, this is not supported in the context of a Hybrid Application.
My project requirements have changed however so I am unlikely to investigate further.

Related

Data Aggregator/composition service in Microservices

I am developing an application where there is a dashboard for data insights.
The backend is a set of microservices written in Node.js with the Express framework, with a MySQL backend. The pattern used is the Database-Per-Service pattern, with a message broker in between.
The problem I am facing is that this dashboard derives data from multiple backend services (different databases altogether; some are SQL, some are NoSQL, and some are graph databases).
I want to avoid multiple queries between the front end and the backend for this screen. However, I want to avoid a single point of failure as well. I have come up with the following solutions.
Use an API gateway aggregator/composer that makes multiple calls to backend services on behalf of a single frontend request, then composes all the responses together and sends them to the client. However, scaling even one service would require scaling the gateway itself. It also makes the gateway a single point of contact.
Create a facade service, maybe called dashboard service, that issues calls to multiple services in the backend, composes the responses together, and sends a single payload back to the client. However, this creates a synchronous dependency.
I favor approach 2. However, I have a question there as well. Since the services are written in Node.js, is there a way to enforce time-bound SLAs for each service, so that if a service doesn't respond to the facade aggregator, the client is returned partial or cached data? Is there any mechanism for this?
GraphQL has been designed for this.
You start by defining a global GraphQL schema that covers all the schemas of your microservices. Then you implement the fetchers that will "populate" the response by querying the appropriate microservices. You can run several instances so that you don't have a single point of failure. You can return partial responses on a timeout (your answer will include resolver errors). GraphQL also knows how to manage caching.
Honestly, it is a bit confusing at first, but once you get it, it is really simple to extend the schema and include new microservices in it.
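The partial-response behaviour described above can be sketched without any GraphQL library: a resolver wrapper returns `null` for a field whose backing microservice is too slow, and records the failure much like GraphQL records resolver errors. The helper and all names below are hypothetical.

```typescript
// Hypothetical helper: wraps a field resolver so a slow microservice yields
// `null` for that field (plus an entry in `errors`) instead of failing the
// whole query -- the "partial response" behaviour mentioned above.
type FieldResolver<T> = () => Promise<T>;

async function resolveWithTimeout<T>(
  resolver: FieldResolver<T>,
  ms: number,
  errors: string[],
  fieldName: string,
): Promise<T | null> {
  // Race the real resolver against a timer that resolves to null.
  const timeout = new Promise<null>((resolve) => setTimeout(() => resolve(null), ms));
  const result = await Promise.race([resolver(), timeout]);
  if (result === null) errors.push(`${fieldName}: timed out after ${ms}ms`);
  return result;
}
```

A real GraphQL server would surface the `errors` array alongside the partial `data` in the response envelope.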
I can’t answer on Node’s technical implementation, but indeed the second approach allows you to model the calls to remote services so that the answer is expected within some time boundary.
It depends on the way you interconnect the services. The easiest approach is to spawn an HTTP request from the aggregator service to the service that actually brings the data.
This HTTP request can be set up so that it won’t wait longer than X seconds for a response. So you spawn multiple HTTP requests to different services simultaneously and wait for the responses. I come from the Java world, where these settings can be set at the level of the HTTP client making those connections; I’m sure the Node ecosystem has something similar.
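A sketch of that synchronous fan-out in Node 18+ (global `fetch` and `AbortSignal.timeout`); the service names, URLs, and the cache are hypothetical, and the cache fallback addresses the partial/cached-data question:

```typescript
// Last known good responses, used as a fallback when a service is too slow.
const cache = new Map<string, unknown>();

async function fetchWithFallback(name: string, url: string, timeoutMs: number) {
  try {
    // Abort the request if the service does not answer within the SLA.
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    if (!res.ok) throw new Error(`${name} returned ${res.status}`);
    const data = await res.json();
    cache.set(name, data); // refresh the cache on success
    return { name, data, fromCache: false };
  } catch {
    // Timed out or failed: fall back to the last known good value, if any.
    return { name, data: cache.get(name) ?? null, fromCache: true };
  }
}

// Fan out to all backends simultaneously; the dashboard payload is composed
// from whatever arrived in time, partial data included.
async function composeDashboard(services: Record<string, string>, timeoutMs = 2000) {
  const parts = await Promise.all(
    Object.entries(services).map(([name, url]) => fetchWithFallback(name, url, timeoutMs)),
  );
  return Object.fromEntries(parts.map((p) => [p.name, p.data]));
}
```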
If you prefer an asynchronous style of communication between the services, the situation is somewhat more complicated. In this case you can design some kind of ‘transactionId’ into the message protocol. The requests from the aggregator service would include such a ‘transactionId’ (a UUID might work) and “demand” that the answer include the same transactionId. The sender, having sent the messages, should wait for the responses for a certain amount of time and then “quit waiting” after X seconds/milliseconds. Any responses that arrive after that time are discarded, because no one at the aggregator side is expecting them anymore.
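The transactionId correlation described above can be sketched as follows; `publish` stands in for whichever message-broker client is in use, and all names are hypothetical:

```typescript
import { randomUUID } from "node:crypto";

type Pending = { resolve: (msg: unknown) => void; timer: NodeJS.Timeout };

// Requests still waiting for a correlated response, keyed by transactionId.
const pending = new Map<string, Pending>();

function request(
  publish: (msg: object) => void, // hypothetical broker "send" function
  payload: object,
  timeoutMs: number,
): Promise<unknown> {
  const transactionId = randomUUID();
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      pending.delete(transactionId); // quit waiting: late replies are discarded
      resolve(null);                 // caller proceeds with partial/empty data
    }, timeoutMs);
    pending.set(transactionId, { resolve, timer });
    publish({ transactionId, ...payload });
  });
}

// Called by the broker subscription whenever a response message arrives.
function onResponse(msg: { transactionId: string; data: unknown }) {
  const entry = pending.get(msg.transactionId);
  if (!entry) return; // arrived after the deadline: nobody is waiting, ignore
  clearTimeout(entry.timer);
  pending.delete(msg.transactionId);
  entry.resolve(msg.data);
}
```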
BTW, this “aggregator” approach is also good and simple from the front-end perspective, because the front end doesn’t have to deal with many requests to the backend as in the gateway approach, but only with one request. So I completely agree that the aggregator approach is better here.

Fallback to another service endpoint if the first is busy according to status code

How is it possible to fall back to another API service if the first try fails with a status code of 429 or 500?
Consider this situation:
The gateway should first try a microservice host, but if it times out or answers a non-2xx status code, it should try the next one.
This is not supported in Express Gateway out of the box. If a request fails, you'll get a failed request.
Right now you can specify multiple URLs for a serviceEndpoint, but they will be used in a round-robin manner; therefore you'll still get an error first, before trying again with the second service.
Express Gateway could, and maybe should, handle such a case. In the meantime I'd suggest you look at other alternatives offered by your infrastructure. Consul could be the way to go.
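As a stopgap outside the gateway, the desired behaviour can be sketched in Node 18+ (global `fetch`); the host list is hypothetical:

```typescript
// Try each host in order; move on to the next one when the call times out,
// fails at the network level, or answers 429 / 5xx.
async function fetchWithFailover(
  hosts: string[],
  path: string,
  timeoutMs = 3000,
): Promise<Response> {
  let lastError: unknown;
  for (const host of hosts) {
    try {
      const res = await fetch(host + path, { signal: AbortSignal.timeout(timeoutMs) });
      // 429 and 5xx trigger a failover; everything else is returned as-is.
      if (res.status !== 429 && res.status < 500) return res;
      lastError = new Error(`${host} answered ${res.status}`);
    } catch (err) {
      lastError = err; // timeout or connection error: try the next host
    }
  }
  throw lastError; // every host failed
}
```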

Error reading FrontendRequest message content error in Azure API Management

Sometimes we are getting "Error reading FrontendRequest message content" exceptions in API Management. The backend calls are actually not failing, and their responses are what we expect. This is not very frequent (a handful per day), but I would like to find the reason.
Thanks in advance,
Jose
This means that there were some problems (details should be in the logs) reading request content from the client, that is, the client that made the call to the APIM service. It's normal to have some of those, since you usually don't control where the calling clients are or what their connection quality is. But if you see this persistently, or you do control your clients and are sure there are no problems with their connections, you might want to file a support request.

grpc complete async java service request/response mapping

A Java service (let's call it portal) is both a gRPC client and a gRPC server. It serves millions of clients, each requesting some task/resource. Based on the incoming request, portal will figure out the backend services, talk to one or more of them, and send the returned response(s) to the originating client. Hence, the requirements here are:
Original millions of clients will have their own timeouts
The portal should not have a thread blocking for each of the millions of clients (async). It should also not have a thread blocking for each client's call to the backend services (async). We can use the same thread that received a client call to invoke the backend services.
If the original client times out, portal should be able to communicate it to the backend services or terminate the specific call to the backend services.
On error from backend services, portal should be able to communicate it back to the specific client whose call failed.
So the questions here are:
We have to use async unary calls here, correct?
How does the intermediate server (portal) match the original requests to the responses from the backend services?
In case of errors on backend services, how does the intermediate server propagate the error?
How does the intermediate server propagate the deadlines?
How does the intermediate server cancel the requests on the backend services, if the originating client terminates?
gRPC Java can make a proxy relatively easily, and using async stubs for such a proxy would be common. When the proxy creates its outgoing RPCs, it can save a reference to the original RPC in the callback of the outgoing RPC. When the outgoing RPC's callback fires, simply issue the same call on the original RPC. That solves both messages and errors.
Deadline and cancellation propagation are automatically handled by io.grpc.Context.
You may want to reference this grpc-level proxy example (which has not been merged to grpc/grpc-java). It uses ClientCall/ServerCall because it was convenient and because it did not want to parse the messages. It is possible to do the same thing using the StreamObserver API.
The main difficulty in such a proxy would be to observe flow control. The example I referenced does this. If using StreamObserver API you should cast the StreamObserver passed to the server to ServerCallStreamObserver and get a ClientCallStreamObserver by passing a ClientResponseObserver to the client stub.

Best practices to handle exception in SailsJS

I just started trying out SailsJS a few days ago.
I've realized that the Node process is terminated whenever I have an uncaught exception.
I have a list of controllers, and each of them calls a specific service file (containing logic and DB calls) in services/.
Can I write a global error handler for all services, so that any error thrown by these services is handled by it and an appropriate error response is communicated to the front end?
I tried using process.on('uncaughtException') and some basic exception handling, but it needs to be added to each service method.
Also, can I have one common point for all service calls made from client to server, through which all io.socket.post() and io.socket.get() calls go?
I would appreciate any pointer/article that would show me the common best practices for handling uncaught exceptions in SailsJS, using shorter code rather than writing redundant code in all services.
Best practice is using Domains in your controller. This will handle exceptions in async code, and it's relatively straightforward.
You can use something like trycatch to simplify things a little, but domain-based exceptions will be most effective. It'll ensure that exceptions do not crash your application. Just create a new domain in your controller and run your controller methods inside of that domain.
Sails.js being based on Express, you can use Connect middleware, and you can seamlessly create a new domain from middleware. There is such a thing as express-domain-middleware. This might be the most aesthetic option, and the most convenient.
Update:
As mentioned by Benjamin Gruenbaum, Domains are planned to become deprecated in v1 of Node. Perhaps you should read through Joyent's Error Handling Best Practices; it's agnostic to the framework you are using.
Additionally, you can still use Domains, as there isn't otherwise a way to globally handle errors in Node.js. Once they are deprecated, you could remove your dependence on Domains relatively easily. That said, it may be best not to rely solely on Domains.
StrongLoop also provides a library inspired by Domains called Zone. This is also an option.
It's OK to let a Node instance error out due to a programming error; otherwise it may continue in an inconsistent state and mess up business logic. In a production environment the server can be restarted on crash, which resets its state and keeps it available, as long as the error is not frequent. And in all of this, it's very important to log everything. This applies to most Node setups, including SailsJS.
The following approach can be taken:
Use a logger: a dedicated logger should be accessible to server components. It should be connected to a service that notifies the developer (email?) of very serious errors.
Propagate per-request errors to the end: carefully forward errors from any step of request processing. In ExpressJS/ConnectJS/middleware-based setups, next(err) can be used to pass an error down the middleware chain. An error-catching middleware at the end of the chain will receive this error, log it verbosely, and send a 500 status back. You can use Domains or Zones or Promises or async or whatever you like to process the request and catch errors.
Shut down on process.on('uncaughtException'): log the error, do the necessary clean-up, and rethrow the same error to shut the process down.
Use PM2/Forever or Upstart/init.d on Linux: when the process shuts down due to a bad exception, these tools will restart it and track how many times the server has crashed. If the server crashes way too many times, it's good to stop it and take immediate action.
I have not tried this, but I believe you should be able to set a catch-all exception handler in bootstrap.js using process.on('uncaughtException').
Personally, I use promises via the bluebird library, and put a catch statement that passes all errors to a global error handling function.