Catch all `error` events from any EventEmitter in node - node.js

Per the Node.js documentation, an unhandled EventEmitter 'error' event will crash a running process:
When an EventEmitter instance experiences an error, the typical action is to emit an 'error' event. Error events are treated as a special case in node. If there is no listener for it, then the default action is to print a stack trace and exit the program.
I would very much like for the process to not crash when this happens.[1] Ideally, I could catch every instance of an EventEmitter error like this:
emitter.on('error', function(err) { console.log(err); })
However, our application is large; a simple search of the node_modules folder reveals that there are lots of EventEmitters, and tracking them all down would be cumbersome.
Is there a global hook I can use to catch all instances of an EventEmitter failure?
I tried process.on('uncaughtException') but this doesn't catch EventEmitter errors. I also tried process.on('error') which catches errors emitted by the process, but does not catch errors emitted by other EventEmitters.
Other places say you should use domains; however, it sounds like you need to wrap specific function calls in one, at which point you might as well find every EventEmitter and attach .on('error') to it. My colleague also says domains are, if not deprecated, not going to be used going forward.
[1] I understand the logic behind the "processes should crash" argument. Partly I would like to keep processes alive because a) our server takes a long time to restart, and b) processes keep crashing with literally zero stack trace; I figure keeping the process alive will help with logging and tracking down errors.

process.on('uncaughtException', ...) does catch EventEmitter errors. Try this:
'use strict';

setInterval(function () {}, Number.MAX_VALUE); // keep process alive

var myEmitter = new (require('events').EventEmitter)();

// add this handler before emitting any events
process.on('uncaughtException', function (err) {
    console.log('UNCAUGHT EXCEPTION - keeping process alive:', err); // err.message is "foobar"
});

myEmitter.emit('error', new Error('foobar'));
Note that if you add the uncaughtException listener after your error event has fired, the exception won't get caught!
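To see why the ordering matters, here is a minimal sketch using only Node's built-in events module (not from the original answer):

const EventEmitter = require('events');
const emitter = new EventEmitter();

// No 'error' listener and no 'uncaughtException' handler exists yet,
// so this emit throws and the process dies right here...
emitter.emit('error', new Error('too early'));

// ...which means this line is never reached and the error is never caught.
process.on('uncaughtException', function (err) {
    console.log('never printed:', err.message);
});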

Related

Why are these promise rejections global?

We have a fairly complex code base in NodeJS that runs a lot of Promises synchronously. Some of them come from Firebase (firebase-admin), some from other Google Cloud libraries, some are local MongoDB requests. This code works mostly fine, millions of promises being fulfilled over the course of 5-8 hours.
But sometimes we get promises rejected due to external reasons like network timeouts. For this reason, we have try-catch blocks around all of the Firebase, Google Cloud, and MongoDB calls (the calls are awaited, so a rejected promise should be caught by the catch blocks). If a network timeout occurs, we just try again after a while. This works great most of the time; sometimes, the whole thing runs through without any real problems.
However, sometimes we still get unhandled promises being rejected, which then appear in the process.on('unhandledRejection', ...). The stack traces of these rejections look like this, for example:
Warn: Unhandled Rejection at: Promise [object Promise] reason: Error stack: Error:
at new ApiError ([repo-path]\node_modules\@google-cloud\common\build\src\util.js:59:15)
at Util.parseHttpRespBody ([repo-path]\node_modules\@google-cloud\common\build\src\util.js:194:38)
at Util.handleResp ([repo-path]\node_modules\@google-cloud\common\build\src\util.js:135:117)
at [repo-path]\node_modules\@google-cloud\common\build\src\util.js:434:22
at onResponse ([repo-path]\node_modules\retry-request\index.js:214:7)
at [repo-path]\node_modules\teeny-request\src\index.ts:325:11
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
This is a stack trace which is completely detached from my own code, so I have absolutely no idea where I could improve my code to make it more robust against errors (and the error message itself is not very helpful either).
Another example:
Warn: Unhandled Rejection at: Promise [object Promise] reason: MongoError: server instance pool was destroyed stack: MongoError: server instance pool was destroyed
at basicWriteValidations ([repo-path]\node_modules\mongodb\lib\core\topologies\server.js:574:41)
at Server.insert ([repo-path]\node_modules\mongodb\lib\core\topologies\server.js:688:16)
at Server.insert ([repo-path]\node_modules\mongodb\lib\topologies\topology_base.js:301:25)
at OrderedBulkOperation.finalOptionsHandler ([repo-path]\node_modules\mongodb\lib\bulk\common.js:1210:25)
at executeCommands ([repo-path]\node_modules\mongodb\lib\bulk\common.js:527:17)
at executeLegacyOperation ([repo-path]\node_modules\mongodb\lib\utils.js:390:24)
at OrderedBulkOperation.execute ([repo-path]\node_modules\mongodb\lib\bulk\common.js:1146:12)
at BulkWriteOperation.execute ([repo-path]\node_modules\mongodb\lib\operations\bulk_write.js:67:10)
at InsertManyOperation.execute ([repo-path]\node_modules\mongodb\lib\operations\insert_many.js:41:24)
at executeOperation ([repo-path]\node_modules\mongodb\lib\operations\execute_operation.js:77:17)
At least this error message says something.
All my Google Cloud or MongoDB calls have await and try-catch blocks around them (and the MongoDB reference is recreated in the catch block), so if the promise were rejected inside those calls, the error would be caught in the catch block.
A similar problem sometimes happens in the Firebase library. Some of the rejected promises (e.g. because of network errors) get caught by our try-catch blocks, but some don't, and I have no way to improve my code, because there is no stack trace in those cases.
Now, regardless of the specific causes of these problems: I find it very frustrating that the errors just happen on a global scale (in process.on('unhandledRejection', ...)) instead of at a location in my code where I could handle them with a try-catch. This makes us lose so much time, because we have to restart the whole process when we get into such a state.
How can I improve my code such that these global exceptions do not happen again? Why are these errors global unhandled rejections when I have try-catch blocks around all the promises?
It might be the case that these are the problems of the MongoDB / Firebase clients: however, more than one library is affected by this behavior, so I'm not sure.
a stacktrace which is completely detached from my own code
Yes, but does the function you call have proper error handling for what IT does?
Below is a simple example of why outside code with try/catch simply cannot prevent all promise rejections:
// If a function you don't control causes an error in the language itself, yikes.
// For rejections, the same can happen if an asynchronous function you call
// doesn't pass its rejection up properly. The example below shows a function
// returning a custom promise that hits a problem and then does `throw err`
// instead of `reject(err)`. There is usually some thisAPI.on('error', callback)
// available, but try/catch doesn't solve everything.
async function someFireBaseThing() {
    // An async function always returns a promise (on error it does the
    // equivalent of `Promise.reject(error)`). Yet if you return a promise,
    // THAT is the promise returned, and catch will only catch a
    // `Promise.reject(theError)`.
    return await new Promise((r, j) => {
        fetch('x').then(r).catch(e => { throw e })
        // An unhandled rejection occurs even though e gets thrown.
        // Ironically, this could be solved simply with `.catch(j)`.
    })
}

async function yourCode() {
    try { console.log(await someFireBaseThing()) }
    catch (e) { console.warn("successful handle:", e) }
}

yourCode()
async function yourCode(){
try{console.log(await someFireBaseThing())}
catch(e){console.warn("successful handle:",e)}
}
yourCode()
Upon reading your question once more, it looks like you can just set a time limit for a task and then manually throw to your waiting catch if it takes too long (because if the error stack doesn't include your code, the promise that gets shown to unhandledRejection would probably be invisible to your code in the first place):
function handler(promise, time) { // automatically rejects if it takes too long
    return new Promise(async (r, j) => {
        setTimeout(() => j('promise did not resolve in given time'), time)
        try { r(await promise) } catch (err) { j(err) }
    })
}

async function yourCode() {
    while (true) { // will break when the promise is successful (and returns)
        try { return await handler(someFireBaseThing(...someArguments), 1e4) }
        catch (err) { yourHandlingOn(err) }
    }
}
Elaborating on my comment, here's what I would bet is going on: you set up some sort of base instance to interact with the API, then use that instance moving forward in your calls. That base instance is likely an event emitter that can itself emit an 'error' event, which is a fatal unhandled error when no 'error' listener is set up.
I'll use postgres for an example since I'm unfamiliar with firebase or mongo.
// Pool is a pool of connections to the DB
const pool = new (require('pg')).Pool(...);

// Using pool we call an async function in a try catch
try {
    await pool.query('select foo from bar where id = $1', [92]);
}
catch (err) {
    // A SQL error like "no table named bar" would be caught here.
    // However a connection error would be emitted as an 'error'
    // event from pool itself, which would be unhandled.
}
The solution in the example would be to start with
const pool = new (require('pg')).Pool(...);
pool.on('error', (err) => { /* do whatever with error */ })

Catching an error in an async function in Node/Express

Is there any way to catch an error that occurs in an async callback after an Express next() or res.send() has been called from middleware or a route handler? Consider the following code:
app.use('/throw-error', (req, res) => {
    setTimeout(() => {
        throw new Error('Async error causes thread death')
    }, 500)
    res.send('This thread is going to die...')
})
It will execute and send "This thread is going to die..." to the browser. It will also, half a second later, crash the Node thread it is running in. If you happen to be running an app that uses Node's cluster module, maybe a new thread gets launched, but one died nonetheless. You might see something like this in your logs:
::1 [2019-07-17T18:54:55.142Z] - [71700] 4.740 ms "GET /throw-error" 200 -
/Users/moryl/Projects/crashtest/express.js:66
throw new Error('Async error causes thread death')
^
Error: Async error causes thread death
at Timeout.setTimeout [as _onTimeout] (/Users/moryl/Projects/InSight/sources/server/config/express.js:66:13)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
That thread is now dead.
My question is, how the heck do you handle a (possibly unknown) async error that is outside the scope of a normal request, whether by design or through bad code? How do I prevent the thread from dying?
I don't want to be told that I shouldn't do this kind of stuff in async calls to begin with. I know this. I'm trying to write defensive code to catch "bad stuff" written by others.
This is documented in the Express error handling docs:
You must catch errors that occur in asynchronous code invoked by route
handlers or middleware and pass them to Express for processing. For
example:
app.get('/', function (req, res, next) {
    setTimeout(function () {
        try {
            throw new Error('BROKEN')
        } catch (err) {
            next(err)
        }
    }, 100)
})
The above example uses a try...catch block to catch errors in the
asynchronous code and pass them to Express. If the try...catch block
were omitted, Express would not catch the error since it is not part
of the synchronous handler code.
So basically, you need to try...catch inside the asynchronous callback in the route. (The examples are basically the same, what a coincidence.)
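For promise-based code, a common equivalent is to forward rejections to Express via next (a sketch; someAsyncOperation is a hypothetical stand-in for any promise-returning call):

app.get('/', function (req, res, next) {
    someAsyncOperation()              // hypothetical promise-returning call
        .then(result => res.send(result))
        .catch(next);                 // rejections reach Express's error handling
});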
My question is, how the heck do you handle an async error that is outside the scope of a normal request, whether by design ...
You still want to handle errors in asynchronous code, even if it was fired and forgotten by design. Add a try { } catch { } or .catch to every independent task. With asynchronous code, Promises and async / await help you (as they group independent callbacks into tasks, so you can handle errors per task):
const timer = ms => new Promise(res => setTimeout(res, ms));

async function fireAndForgetThis() {
    await timer(500);
    throw new Error("Async error doesn't cause thread death, because it's handled properly");
}

fireAndForgetThis()
    .catch(console.error); // But always "handle" errors
... or through bad code?
Fix bad code.
How do I prevent the thread from dying?
That's not the thing you want to prevent. If an error occurs and was not handled, your application gets into an unplanned state. Continuing execution might create even more problems. You don't want that. You want to prevent the unhandled rejection / unhandled error itself (by handling it properly).
For sure there are cases you can't handle, e.g. if the connection to the backing database goes down. In that case, NodeJS crashes, the monitoring wakes up DevOps, and they get the database back up and running. Crashing is also a form of handling the error ;)
If you read this far, and you still want to handle unhandled errors: don't. Okay well, you probably have your reasons; there you go:
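A minimal sketch of such last-resort handlers (log them, but don't pretend the application can blindly resume):

process.on('unhandledRejection', (reason) => {
    console.error('Unhandled rejection:', reason);
});

process.on('uncaughtException', (err) => {
    console.error('Uncaught exception:', err);
    // The application is in an undefined state here;
    // exiting and letting a supervisor restart it is usually safer.
    process.exit(1);
});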

Firebase Function Deployment Possible EventEmitter memory leak [duplicate]

I am getting the following warning:
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace:
at EventEmitter.<anonymous> (events.js:139:15)
at EventEmitter.<anonymous> (node.js:385:29)
at Server.<anonymous> (server.js:20:17)
at Server.emit (events.js:70:17)
at HTTPParser.onIncoming (http.js:1514:12)
at HTTPParser.onHeadersComplete (http.js:102:31)
at Socket.ondata (http.js:1410:22)
at TCP.onread (net.js:354:27)
I wrote code like this in server.js:
http.createServer(function (req, res) {
    ...
}).listen(3013);
How do I fix this?
I'd like to point out here that that warning is there for a reason and there's a good chance the right fix is not increasing the limit but figuring out why you're adding so many listeners to the same event. Only increase the limit if you know why so many listeners are being added and are confident it's what you really want.
I found this page because I got this warning and in my case there was a bug in some code I was using that was turning the global object into an EventEmitter! I'd certainly advise against increasing the limit globally because you don't want these things to go unnoticed.
This is explained in the node eventEmitter documentation
What version of Node is this? What other code do you have? That isn't normal behavior.
In short, it's: process.setMaxListeners(0);
Also see: node.js - request - How to “emitter.setMaxListeners()”?
The accepted answer provides the semantics on how to increase the limit, but as @voltrevo pointed out, that warning is there for a reason and your code probably has a bug.
Consider the following buggy code:
// Assume Logger is a module that emits errors
var Logger = require('./Logger.js');

for (var i = 0; i < 11; i++) {
    // BUG: This will cause the warning
    // as the event listener is added in a loop
    Logger.on('error', function (err) {
        console.log('error writing log: ' + err)
    });
    Logger.writeLog('Hello');
}
Now observe the correct way of adding the listener:
// Good: event listener is not in a loop
Logger.on('error', function (err) {
    console.log('error writing log: ' + err)
});

for (var i = 0; i < 11; i++) {
    Logger.writeLog('Hello');
}
Search for similar issues in your code before changing the maxListeners (which is explained in other answers)
By default, a maximum of 10 listeners can be registered for any single event.
If it's your code, you can specify maxListeners via:
const EventEmitter = require('events')
const emitter = new EventEmitter()
emitter.setMaxListeners(100)
// or 0 to turn off the limit
emitter.setMaxListeners(0)
But if it's not your code, you can use this trick to increase the default limit globally:
require('events').EventEmitter.prototype._maxListeners = 100;
Of course you can turn off the limits but be careful:
// turn off limits by default (BE CAREFUL)
require('events').EventEmitter.prototype._maxListeners = 0;
BTW, this code should go at the very beginning of your app.
ADD: Since node 0.11 this code also works to change the default limit:
require('events').EventEmitter.defaultMaxListeners = 0
Replace .on() with once(). Using once() automatically removes the listener after the event has been handled, so listeners don't accumulate.
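A minimal sketch of the difference (plain EventEmitter, nothing restler-specific):

const EventEmitter = require('events');
const emitter = new EventEmitter();

// once() runs the handler at most one time, then removes it,
// so repeated subscriptions don't accumulate listeners.
emitter.once('response', (msg) => console.log('handled:', msg));
emitter.emit('response', 'first');  // logs "handled: first"
emitter.emit('response', 'second'); // listener already removed, nothing happens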
If this doesn't fix it, then reinstall restler with this in your package.json
"restler": "git://github.com/danwrong/restler.git#9d455ff14c57ddbe263dbbcd0289d76413bfe07d"
This has to do with restler 0.10 misbehaving with node. you can see the issue closed on git here: https://github.com/danwrong/restler/issues/112
However, npm has yet to update this, so that is why you have to refer to the git head.
Node version: v11.10.1
Warning message from the stack trace:
process.on('warning', e => console.warn(e.stack));
(node:17905) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 wakeup listeners added. Use emitter.setMaxListeners() to increase limit
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 wakeup listeners added. Use emitter.setMaxListeners() to increase limit
at _addListener (events.js:255:17)
at Connection.addListener (events.js:271:10)
at Connection.Readable.on (_stream_readable.js:826:35)
at Connection.once (events.js:300:8)
at Connection._send (/var/www/html/fleet-node-api/node_modules/http2/lib/protocol/connection.js:355:10)
at processImmediate (timers.js:637:19)
at process.topLevelDomainCallback (domain.js:126:23)
After searching through GitHub issues and documentation and reproducing similar event emitter memory leaks, I found that this issue was due to the node-apn module used for iOS push notifications.
This (from the node-apn documentation) resolved it:
You should only create one Provider per-process for each
certificate/key pair you have. You do not need to create a new
Provider for each notification. If you are only sending notifications
to one app then there is no need for more than one Provider.
If you are constantly creating Provider instances in your app, make
sure to call Provider.shutdown() when you are done with each provider
to release its resources and memory.
I was creating a provider object each time a notification was sent and expecting the GC to clear it.
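A rough sketch of the recommended pattern with node-apn (the options are placeholders):

const apn = require('apn');

// Create ONE provider per process for each certificate/key pair
// and reuse it for every notification you send.
const provider = new apn.Provider({ /* token or certificate options */ });

// ... send notifications via provider.send(notification, deviceToken) ...

// When the process is completely done with it, release its resources:
provider.shutdown();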
I got this warning too when installing aglio on my Mac OS X.
I fixed it with this command:
sudo npm install -g npm@next
https://github.com/npm/npm/issues/13806
In my case, it was child.stderr.pipe(process.stderr), which was being called while I was initiating 10 (or so) instances of the child. So anything that attaches an event handler to the same EventEmitter object in a LOOP causes nodejs to throw this warning.
Sometimes these warnings occur when it isn't something we've done, but something we've forgotten to do!
I encountered this warning when I installed the dotenv package with npm, but was interrupted before I got around to adding the require('dotenv').load() statement at the beginning of my app. When I returned to the project, I started getting the "Possible EventEmitter memory leak detected" warnings.
I assumed the problem was from something I had done, not something I had not done!
Once I discovered my oversight and added the require statement, the memory leak warning cleared.
I prefer to hunt down and fix problems instead of suppressing logs whenever possible. After a couple of days of observing this issue in my app, I realized I was setting listeners on the req.socket in an Express middleware to catch socket I/O errors that kept popping up. At some point I learned that this was not necessary, but I had kept the listeners around anyway. I just removed them and the error you are experiencing went away. I verified it was the cause by running requests to my server with and without the following middleware:
socketEventsHandler(req, res, next) {
    req.socket.on("error", function (err) {
        console.error('------REQ ERROR')
        console.error(err.stack)
    });
    res.socket.on("error", function (err) {
        console.error('------RES ERROR')
        console.error(err.stack)
    });
    next();
}
Removing that middleware stopped the warning you are seeing. I would look around your code and try to find anywhere you may be setting up listeners that you don't need.
Thanks to RLaaa for giving me an idea of how to solve the real problem / root cause of the warning. Well, in my case it was buggy MySQL code.
Say you wrote a Promise with code inside like this:
pool.getConnection((err, conn) => {
    if (err) reject(err)
    const q = 'SELECT * from `a_table`'
    conn.query(q, [], (err, rows) => {
        conn.release()
        if (err) reject(err)
        // do something
    })
    conn.on('error', (err) => {
        reject(err)
    })
})
Notice the conn.on('error') listener in the code. That code literally adds a listener over and over again, as many times as you call the query. Meanwhile, if (err) reject(err) already does the same thing.
So I removed the conn.on('error') listener and voila... solved!
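For reference, the fixed version looks roughly like this (the same code as above, just without the per-call listener):

pool.getConnection((err, conn) => {
    if (err) return reject(err)
    const q = 'SELECT * from `a_table`'
    conn.query(q, [], (err, rows) => {
        conn.release()
        if (err) return reject(err)
        resolve(rows) // no conn.on('error') needed; the reject(err) above covers it
    })
})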
Hope this helps you.
As pointed out by others, increasing the limit is not the best answer. I was facing the same issue, but in my code I was nowhere using any event listener. When I looked closely into the code, I saw I was creating a lot of promises at once. Each promise had some code for scraping the provided URL (using a third-party library). If you are doing something like that, then it may be the cause.
Refer to this thread on how to prevent that: What is the best way to limit concurrency when using ES6's Promise.all()?
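For illustration, a minimal batching approach (a sketch, not taken from the linked thread; scrape is a hypothetical worker function):

// Process items in fixed-size batches so only `limit` promises
// are in flight at any given moment.
async function inBatches(items, limit, worker) {
    const results = [];
    for (let i = 0; i < items.length; i += limit) {
        const batch = items.slice(i, i + limit).map(worker);
        results.push(...await Promise.all(batch));
    }
    return results;
}

// usage: await inBatches(urls, 5, url => scrape(url));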
I was having the same problem, and it was caused by listening on port 8080 with 2 listeners.
setMaxListeners() works fine, but I would not recommend it.
The correct way is to check your code for extra listeners and remove the listener, or to change the port number you are listening on; this fixed my problem.
I was having this until today whenever I started grunt watch. It was finally solved by:
watch: {
    options: {
        maxListeners: 99,
        livereload: true
    },
}
The annoying message is gone.
You need to clear all listeners before creating new ones using:
Client / Server
socket.removeAllListeners();
Assuming socket is your client socket or created server socket.
You can also unsubscribe from specific event listeners, for example removing the connect listener like this:
this.socket.removeAllListeners("connect");
I was facing the same issue, but I handled it successfully with async/await.
Please check if it helps.
let dataLength = 25;
Before:
for (let i = 0; i < dataLength; i++) {
    sftp.get(remotePath, fs.createWriteStream(`xyzProject/${data[i].name}`));
}
After:
for (let i = 0; i < dataLength; i++) {
    await sftp.get(remotePath, fs.createWriteStream(`xyzProject/${data[i].name}`));
}
In my case, it was due to not closing the Sequelize connections to the database while creating them inside the async function called with setInterval.
You said you are using process.on('uncaughtException', callback);
Where are you executing this statement? Is it within the callback passed to http.createServer? If yes, a different copy of the same callback will get attached to the uncaughtException event upon each new request, because the function (req, res) { ... } gets executed every time a new request comes in, and so will the statement process.on('uncaughtException', callback);.
Note that the process object is global to all your requests, and adding listeners to its events every time a new request comes in does not make any sense. You probably don't want that kind of behaviour. In case you do want to attach a new listener for each new request, you should first remove all previous listeners attached to the event, as they would no longer be required, using: process.removeAllListeners('uncaughtException');
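Concretely, a sketch of the two placements:

const http = require('http');
const callback = (err) => console.error('uncaught:', err);

// WRONG: a new process-level listener is added on EVERY request,
// which is exactly what triggers the MaxListenersExceededWarning.
// http.createServer(function (req, res) {
//     process.on('uncaughtException', callback);
//     res.end('ok');
// }).listen(3013);

// RIGHT: register once, outside the request handler.
process.on('uncaughtException', callback);
http.createServer(function (req, res) {
    res.end('ok');
}).listen(3013);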
Our team's fix for this was removing a registry path from our .npmrc. We had two path aliases in the rc file, and one was pointing to an Artifactory instance that had been deprecated.
The error had nothing to do with our App's actual code but everything to do with our development environment.
Adding EventEmitter.defaultMaxListeners = <MaxNumberOfClients> to node_modules\loopback-datasource-juggler\lib\datasource.js fixed my problem :)
Put this in the first line of your server.js (or whatever contains your main Node.js app):
require('events').EventEmitter.prototype._maxListeners = 0;
and the error goes away :)

How do I handle all errors on my server, so that it never crashes?

Let's say I have this server route (using expressjs):
app.get('/cards', function (req, res) {
    anUndefinedVariable // Server doesn't crash
    dbClient.query('select * from cards', function (err, result) {
        anUndefinedVariable // Server crashes
        res.send(result.rows)
    });
});
When I simply reference an undefined variable at the root of the /cards route callback, the server doesn't crash, but if I reference it in the nested callback it crashes.
Is it because Express is catching the error when it's at the root level? Why doesn't it also catch the errors in the nested functions?
I tried catching the error like this myself:
app.get('/cards', function (req, res) {
    try {
        dbClient.query('select * from cards', function (err, result) {
            anUndefinedVariable
            res.send(result.rows)
        });
    } catch (e) {
        console.log('...')
    }
});
But it never enters the catch block. Maybe this is the reason Express isn't able to catch the error. Is it because, in order to catch an error, you need to do it in the function that actually calls the callback? E.g. try { functionThatCallsTheQueryCallback() } catch (e) {...}? I don't think so, as query certainly calls the callback indirectly at some point.
How would I go about catching all errors so that my server never crashes?
try...catch only catches errors that occur in synchronous operations. It won't catch errors that occur in callbacks to async operations, like you have in your second example above.
As for the first example, express handles errors that are thrown synchronously and sends a 500 response to the client.
You can look into domains for handling errors across async operations. But be aware that they are pending deprecation. It's worth reading through the warnings in the docs about why they're being deprecated.
It can be done in node, although it is generally not recommended, by letting node handle the uncaughtException event:
https://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
process.on('uncaughtException', (err) => {
    console.log(`Caught exception: ${err}`);
});
Another, more preferable approach would be to just let it crash and have it restarted automatically afterward. There are tools available for this, such as nodemon, pm2, forever...

Nodejs exits on error, should I prevent it or not?

I'm using Nodejs on my Windows machine. The question is: Nodejs always terminates the process on errors, e.g. an empty MySQL insert statement.
So in production, and without manual error handling, how can I prevent NodeJs from exiting?
example code:
app.post('/api/accounts', function (req, res) {
    pool.getConnection(function (error, connection) {
        connection.query('insert into accounts set ?', req.body, function (err, results) {
            if (err) {
                throw err;
            } else {
                console.log(results);
            }
        });
    });
    console.log('post received');
    console.log(req.body);
});
Imagine I post an empty req.body. nodejs will exit with an error like this:
\node_modules\mysql\lib\protocol\Parser.js:77
throw err; // Rethrow non-MySQL errors
^
Is it possible to configure something in node to just show errors but not exit?
It's not really a good thing to continue execution after an unhandled exception has been thrown by the interpreter (as Ginden said in his answer). Anything could happen, and it could prove to be a mistake later; any sort of hole could easily be opened by stopping the process from cleaning up after something went so unexpectedly wrong in your code.
You could sensibly add an event handler for uncaughtException as the answer by Ginden points out; however, it seems you're using express, and it would make much more sense to actually handle the error with middleware when it happens, instead of using throw as in your code.
Replace throw err; with return next(err); and that should mean the request falls through to the next set of middleware, which should then handle the error, do some logging, tell the user, whatever you want it to do.
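Applied to the code from the question, that looks roughly like this (a sketch: the next parameter is added, and the connection.release() and res.send() lines are assumptions about what you would want to do):

app.post('/api/accounts', function (req, res, next) {
    pool.getConnection(function (error, connection) {
        if (error) return next(error); // pool errors go to the middleware too
        connection.query('insert into accounts set ?', req.body, function (err, results) {
            connection.release(); // assumption: return the connection to the pool
            if (err) return next(err); // forwarded instead of thrown
            console.log(results);
            res.send(results);
        });
    });
});

And the error-handling middleware itself: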
app.use(function (err, req, res, next) {
    // Maybe log the error for later reference?
    // If this is development, maybe show the stack here in this response?
    res.status(err.status || 500);
    res.send({
        'message': err.message
    });
});
Don't try to prevent process shutdown. If an error was thrown, anything could happen.
Warning: Using 'uncaughtException' correctly
Note that 'uncaughtException' is a crude mechanism for exception handling intended to be used only as a last resort. The event should not be used as an equivalent to On Error Resume Next. Unhandled exceptions inherently mean that an application is in an undefined state. Attempting to resume application code without properly recovering from the exception can cause additional unforeseen and unpredictable issues.
Exceptions thrown from within the event handler will not be caught. Instead the process will exit with a non zero exit code and the stack trace will be printed. This is to avoid infinite recursion.
Attempting to resume normally after an uncaught exception can be similar to pulling out of the power cord when upgrading a computer -- nine out of ten times nothing happens - but the 10th time, the system becomes corrupted.
Domain module: don't ignore errors.
By the very nature of how throw works in JavaScript, there is almost never any way to safely "pick up where you left off", without leaking references, or creating some other sort of undefined brittle state.
The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else.
The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
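In practice that pattern looks roughly like this (a sketch; the server setup is a placeholder for your real app):

const http = require('http');
const server = http.createServer(/* your request handler */).listen(3000);

process.on('uncaughtException', (err) => {
    console.error('Fatal error, shutting down this worker:', err);
    // Stop accepting new requests; in-flight requests may still finish.
    server.close(() => process.exit(1));
    // If connections linger, force the exit after a grace period.
    setTimeout(() => process.exit(1), 10000).unref();
});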
