AsyncLocalStorage not working for each request - node.js

I am using NestJS as a backend framework on Node.js 16+.
I am trying to implement:
https://medium.com/@sascha.wolff/advanced-nestjs-how-to-have-access-to-the-current-user-in-every-service-without-request-scope-2586665741f
My idea is to have an @Injectable() service that will have, among other things, methods like:
hasUserSomeStuff() {
  const user = UserStorage.get();
  if (user) {
    // do magic
  }
}
and then pass this service around as it is usually done in NestJS
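For reference, my UserStorage is essentially the one from the article; a simplified sketch of the idea (the middleware wiring is condensed here):

// user.storage.js — one AsyncLocalStorage instance shared by the whole app
const { AsyncLocalStorage } = require('async_hooks');

const storage = new AsyncLocalStorage();

const UserStorage = {
  // only returns a value while inside storage.run(), i.e. inside a request
  get: () => storage.getStore(),
};

// middleware: opens a fresh async context per request and stores the user,
// so everything downstream of next() can read it via UserStorage.get()
function userMiddleware(req, res, next) {
  storage.run(req.user, () => next());
}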
The goal is to avoid passing the request down the rabbit hole, and to avoid bubbling up to request scope (which would instantiate every dependency anew for each request), while also not having to call UserStorage directly everywhere I need the user from the current request.
I've gone through the docs many times. My understanding is that Node takes care of creating a new store for each async context (in my case, for each request). What actually happens is this: when I first run my backend it works just fine and I get the user from the current request; but once that first async context / promise has completed and the data has been returned to the consumer, on the next request UserStorage returns undefined (which the docs say happens when you are outside of the original async context, but that shouldn't apply here, since each request should get a brand new async context).
However, if I debug, what seems to happen is that UserStorage is loaded and a new AsyncLocalStorage is instantiated at init time, before the app is ready to be used, and then the very first request returns an undefined user.
I am failing to understand what is going on. Can anyone help me with this, or suggest a better approach to achieve my goal?
Thanks in advance

Related

How to call an event-handler after PUT/PATCHing data in json-server?

I have a json-server running in a small node.js project.
I can PATCH and PUT the existing data and will GET the updated values in return later. So far so good.
But for some of these operations, I also need to broadcast an MQTT message with the updated object.
Of course I could implement my own handlers all the way, like:
server.put('/somepath', (req, res) => {
  data.something = req.body;
  mqttClient.publish('somepath', JSON.stringify(data.something));
  res.status(200).send();
});
But I'd like to take advantage of the built-in logic of json-server that automatically mutates my data on POST/PATCH/PUT/DELETE requests, and still be able to broadcast the new document over MQTT after the mutation is done.
Is it possible to do this in a smarter, generic way instead of implementing a request handler for each single endpoint?
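For instance, I wonder whether json-server's router.render hook could work here, since it apparently runs after the data has been mutated. An untested sketch of what I mean (the MQTT wiring is assumed):

const jsonServer = require('json-server');
const mqtt = require('mqtt');

const server = jsonServer.create();
const router = jsonServer.router('db.json');
const mqttClient = mqtt.connect('mqtt://localhost');

server.use(jsonServer.defaults());
server.use(jsonServer.bodyParser);

// router.render runs after json-server has applied the mutation,
// so res.locals.data already holds the updated document
router.render = (req, res) => {
  if (['POST', 'PUT', 'PATCH', 'DELETE'].includes(req.method)) {
    mqttClient.publish(req.path, JSON.stringify(res.locals.data));
  }
  res.jsonp(res.locals.data);
};

server.use(router);
server.listen(3000);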
Thanks in advance for any tips :)

Why am I getting data from the database slowly?

I use NestJS with TypeORM and PostgreSQL. Normally I get data from the database in 150-200 ms, but if I wait about 20 seconds and send a request to the backend again, the data takes 1000-1500 ms, although in theory it should consistently be 150-200 ms. I tried Sequelize instead and the result was the same. It is as if the server falls asleep while idle and takes a long time to wake up when the next request reaches it.
This is code how I do request to database:
async getProducts() {
  const products = await this.productRepository.find();
  return products;
}
Any ideas, answers, or options would be appreciated.
It is very unlikely that NestJS is adding a significant delay here; the code you posted looks okay-ish.
Try removing the "surroundings" (as in moving parts) to get to the bottom of this, e.g. execute this method in main.ts.
Hint: if your service would be called MyService you could access it there like this:
const service = app.get(MyService);
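A fuller sketch of that idea (AppModule and MyService stand in for your own module and service):

// main.ts — time the service call without the HTTP pipeline involved
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { MyService } from './my.service';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const service = app.get(MyService);
  console.time('getProducts');
  await service.getProducts();
  console.timeEnd('getProducts');

  await app.listen(3000);
}
bootstrap();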
Another way would be to remove everything that is involved in the response of the request:
Your code from controller -> service -> repository
Code that could intercept: Middlewares, Pipes, Guards, Interceptors
By switching from TypeORM to Sequelize you kinda removed the DB communication layer as a suspect. If your investigation does not yield anything helpful, you should consider looking into the underlying DB and the specified connection options (if any, e.g. pool size). Most likely the causing code is in the application, though. For Sequelize and TypeORM you can also enable logging to get better insights. Good luck with your research!
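For example, TypeORM's connection options include query logging and a slow-query threshold; a sketch (option names as documented by TypeORM):

import { TypeOrmModule } from '@nestjs/typeorm';

// in AppModule's imports: log every query and flag the slow ones
TypeOrmModule.forRoot({
  type: 'postgres',
  // ...your connection settings (host, username, database, ...)
  logging: ['query', 'error'],
  maxQueryExecutionTime: 500, // logs queries that take longer than 500 ms
});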

Async function in NodeJS EventEmitter on AWS Lambda

I have an AWS Lambda application built upon an external library that contains an EventEmitter. On a certain event, I need to make a HTTP request. So I was using this code (simplified):
myEmitter.on("myEvent", async() => {
setup();
await doRequest();
finishingWork();
});
What I understand happens is this:
My handler is called, but as soon as the doRequest function is called, a Promise is returned and the EventEmitter continues with the next handlers. When all that is done, the work of the handler can continue (finishingWork).
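In other words, as I understand it:

// emit() invokes the async listener synchronously, but nothing awaits the
// promise the listener returns, so execution continues immediately
myEmitter.emit("myEvent");
console.log("this can log before finishingWork() has run");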
This works locally, because my NodeJS process keeps running and any remaining events on the eventloop are handled. The strange thing is that this doesn't seem to work on AWS Lambda. Even if context.callbackWaitsForEmptyEventLoop is set to true.
In my logging I can see my handler enters the doRequest function, but nothing is logged after I call the library that makes the HTTP call (request-promise, which uses request). And the code doesn't continue when I make another request (which I would expect if callbackWaitsForEmptyEventLoop were set to false, which it isn't).
Has anyone experienced something similar and knows how to perform an asynchronous HTTP request in the handler of a NodeJS event emitter on AWS Lambda?
I had a similar issue as well: my event emitter logs all events normally until it runs into an async function. It works fine in ECS but not in Lambda, since the event emitter runs its listeners synchronously while Lambda exits once the response is returned.
In the end, I used await-event-emitter to solve the problem.
await emitter.emit('onUpdate', ...);
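Roughly like this (a sketch; doRequest is the question's own function, and the import style may differ between versions of await-event-emitter):

const AwaitEventEmitter = require('await-event-emitter');

const emitter = new AwaitEventEmitter();

emitter.on('onUpdate', async () => {
  await doRequest(); // the emit() below resolves only after this settles
});

exports.handler = async (event) => {
  // awaiting emit keeps the invocation alive until all listeners finish
  await emitter.emit('onUpdate');
};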
If you know how to solve this, feel free to add another answer. But for now, the "solution" for us was to put the event handler code elsewhere in our codebase; this way, it is executed asynchronously.
We were able to do that because there is only one place where the event is emitted, but the event handler would have been the cleaner solution. Unfortunately, it doesn't seem to be possible.

Send request progress to client side via nodejs and express

I am using this (contentful-export) library in my express app like so
const express = require('express');
const app = express();
...
app.get('/export', (req, res, next) => {
  const contentfulExport = require('contentful-export');
  const options = {
    ...
  };
  contentfulExport(options).then((result) => {
    res.send(result);
  });
});
Now this does work, but the method takes a while and sends status / progress messages to the node console; I would like to keep the user updated as well. Is there a way I can forward those console progress messages to the client?
This is my first time using node / express, so any help would be appreciated. I'm not sure if this already has an answer, since I'm not entirely sure what to call it.
Looking at the documentation for contentful-export, I don't think this is possible. The way this usually works in Node is that you have an object (contentfulExport in this case); you call a method on this object, and the same object is also an EventEmitter. This way you'd get a hook to react to fired events.
// pseudo code
someLibrary.on('someEvent', (event) => { /* do something */ })
someLibrary.doLongRunningTask()
  .then(/* ... */)
This is not documented for contentful-export so I assume that there is no way to hook into the log messages that are sent to the console.
Your question has another tricky angle, though. In the code you shared there is a single endpoint (/export). If you would like to display updates or show progress, you'd probably need a second endpoint giving information about the progress of your long-running task (which you cannot get out of contentful-export anyway).
The way this is usually handled is that you kick off the long-running task via one HTTP endpoint and then serve progress information via another endpoint, using polling or a web socket connection.
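A sketch of the polling variant (in-memory and single-process only; app and options are from your snippet, and the job bookkeeping here is made up for illustration):

const jobs = {};

// kick off the export and return a job id right away
app.post('/export', (req, res) => {
  const contentfulExport = require('contentful-export');
  const jobId = Date.now().toString();
  jobs[jobId] = { done: false };

  contentfulExport(options)
    .then((result) => { jobs[jobId] = { done: true, result }; })
    .catch((error) => { jobs[jobId] = { done: true, error: error.message }; });

  res.status(202).send({ jobId });
});

// the client polls this endpoint until done is true
app.get('/export/:jobId/status', (req, res) => {
  res.send(jobs[req.params.jobId] || { error: 'unknown job' });
});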
Sorry that I can't give a proper solution, but due to the limitations of contentful-export I don't think there is a clean/easy way to show the progress of the exported data.
Hope that helps. :)

AWS Lambda function times out when I require aws-sdk module

I'm trying to query a DynamoDB table from a Lambda function (for an Alexa skill), but when I send the intent that calls require('aws-sdk'), the skill seems to hang and time out. The Alexa test page just says "There was a problem with the requested skill's response", and I don't see any errors in the CloudWatch logs. I have the skill set up to catch any errors and return them as a spoken response, so I'm sure it's not an uncaught exception. I've also tried wrapping the require in a try/catch block, but that didn't work either.
This is the module that gets loaded with require if the test database intent request is received:
const AWS = require('aws-sdk');

module.exports = () => {
  return 'Success!';
};
If I comment out require('aws-sdk'), the function works properly and Alexa responds with "Success".
Why does my skill break when all I'm doing is requiring the aws-sdk module?
I'm very new to AWS and this is my first experience trying to access a DynamoDB table in a Lambda function.
The Lambda function is uploaded as a zip that contains my source code, package.json (that includes aws-sdk as a dependency), and the node_modules folder.
After hours of debugging, I've found that changing import * as AWS from 'aws-sdk'; to import {DynamoDB} from 'aws-sdk'; (or {CloudFront} or whatever you actually use) made the timeout issue disappear. Mind you, the time to actually connect to DynamoDB was never an issue for me, it was always the import line where the timeout happened.
This can be fixed by either increasing the timeout or the memory allotted to the Lambda function.
It is probably because the SDK is too big to be imported within the default timeout of 3 seconds and the default memory of 128 MB.
This is also why it will probably work if you import only smaller components, like DynamoDB alone.
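For example, a sketch of the narrower import:

// importing only the DynamoDB client avoids loading the whole SDK at cold start
import { DynamoDB } from 'aws-sdk';

const dynamo = new DynamoDB.DocumentClient();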
Lambda, when using NodeJS, uses a callback continuation model. Your module should export a function that takes three parameters: event, context, and callback.
Event provides input parameters.
The other two are used for returning control from your handler function, depending on the NodeJS version you are using.
Try adding the three parameters that I mentioned and then, from within your exported handler function, call the callback:
module.exports = function(event, context, callback) {
  // first argument is an error (null on success), second is the result
  callback(null, 'success');
};
Bear in mind, I wrote this on mobile off the top of my head, so you may need to make small adjustments to the code, but the idea is the same. Don't return directly from the function, but call the callback to supply the response as a continuation. And note that in earlier versions of NodeJS, prior to version 4, you would have to use the context to set success or failure, rather than calling the callback.
For more details, see the Lambda with NodeJS tech docs on AWS.
The other thing to keep in mind is that for Alexa specifically, the response should be in the correct format. That is a JSON response that contains all the necessary elements as explained in the Alexa Skills Kit tech docs.
The Alexa ASK SDK that you've included generates those responses, but I thought I should point you to the actual docs in case you were going to try building the response by hand to understand how it works.
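For reference, a minimal hand-built response passed to the callback would look roughly like this (shape as described in the Alexa Skills Kit docs):

callback(null, {
  version: '1.0',
  response: {
    outputSpeech: { type: 'PlainText', text: 'Success!' },
    shouldEndSession: true,
  },
});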
