AWSXRay.captureAsyncFunc() from Lambda - am I missing something? - node.js

I'm trying to get a custom X-Ray segment reporting, but I'm not seeing anything in the trace. My code looks something like this:
var AWSXRay = require('aws-xray-sdk-core');

AWSXRay.captureAsyncFunc('callSoapService', subsegment => {
    doSomethingAsync(params, err => {
        if (err) {
            subsegment.close(err);
        } else {
            doSomethingElse().then(result => {
                console.info('all done, now close the segment');
                subsegment.close();
            }, err => subsegment.close(err));
        }
    });
});
Do I need to add it to the parent segment or something?

Ugh. There seems to be a bug with AWSXRay.captureHTTPs() - if I remove that call, captureAsyncFunc() starts working.

For the AWS X-Ray Node SDK, automatic mode is built on the continuation-local-storage (CLS) package, which has known compatibility issues with promise libraries. This is why your 'then' seems to be losing context. However, most of these libraries have CLS shims available that provide the necessary compatibility.
Which promise library are you using?
For Bluebird there's 'cls-bluebird', and for Q there's 'cls-q'; either will get it working.
They usually ask you to pass in the CLS namespace, which is available from xray.getNamespace().
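For example, with Bluebird, patching might look like this (a minimal sketch; it assumes your promises come from the bluebird package):

var AWSXRay = require('aws-xray-sdk-core');
var Promise = require('bluebird');
var clsBluebird = require('cls-bluebird');

// Patch this bluebird instance so it propagates the X-Ray CLS context
// across .then() callbacks
clsBluebird(AWSXRay.getNamespace(), Promise);

After patching, a subsegment opened before a .then() should still be in scope inside it.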
Hope this helps.

Related

pgtypes.fetcher is not a function

I have been trying to implement the pg-postgis-types npm package in my Express project for my internship. I'm using PostgreSQL and Sequelize.
Unfortunately, although I implemented the code from the documentation, our API returns pgtypes.fetcher is not a function. Has anyone encountered this issue? I checked the definition of the package in the node_modules folder, and the definition is there as it should be.
For reference, my code looks like this:
const getgeojson = async (mapID) => {
    try {
        postgis(pgtypes.fetcher(pg, connection), null, (err, oids) => {
            if (err) {
                throw err;
            } ...
I know this is not a very popular package, but maybe someone has encountered and solved this before, so I just wanted to ask. Sorry if this is a bad question :)
The package is quite old, so its usage differs slightly from the current documentation in both the npm and GitHub repos. You can fix the calls by changing postgis to postgis.default and pgtypes.fetcher to pgtypes.default.fetcher. This solved my stated issue above. Have a nice day y'all.
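Applied to the snippet in the question, the corrected call would look something like this (a sketch using the same pg and connection values as above):

// In older builds of the package the documented API lives on the default export
postgis.default(pgtypes.default.fetcher(pg, connection), null, (err, oids) => {
    // ...
});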

Can't load Features.Diagnostics

I'm creating a web client for joining Teams meetings with the ACS Calling SDK.
I'm having trouble loading the diagnostics API. Microsoft provides this page:
https://learn.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/call-diagnostics
You are supposed to get the diagnostics this way:
const callDiagnostics = call.api(Features.Diagnostics);
This does not work.
I am loading the Features like this:
import { Features } from '@azure/communication-calling'
A statement console.log(Features) shows only these four features:
DominantSpeakers: (...)
Recording: (...)
Transcription: (...)
Transfer: (...)
Where are the Diagnostics??
User Facing Diagnostics
For anyone, like me, looking now...
At the time of writing, using the latest version of the @azure/communication-calling SDK, the documented solution still doesn't work:
const callDiagnostics = call.api(Features.Diagnostics);
call.api is undefined.
TL;DR
However, once the call is instantiated, this allows you to subscribe to changes:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);

userFacingDiagnostics.media.on("diagnosticChanged", (diagnosticInfo) => {
    console.log(diagnosticInfo);
});

userFacingDiagnostics.network.on("diagnosticChanged", (diagnosticInfo) => {
    console.log(diagnosticInfo);
});
This isn't documented in the latest version, but it is documented under an alpha version.
Whether this will continue to work is anyone's guess ¯\_(ツ)_/¯
Accessing Pre-Call APIs
Confusingly, this doesn't currently work using the specified version, despite the docs saying it will...
Features.PreCallDiagnostics is undefined.
This is actually what I was looking for, but I can get what I want by setting up a test call and asking for the latest values, like this:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
console.log(userFacingDiagnostics.media.getLatest())
console.log(userFacingDiagnostics.network.getLatest())
Hope this helps :)
Currently, the User Facing Diagnostics API is only available in the public preview and npm beta packages. I confirmed this with a quick test comparing the 1.1.0 and beta packages.
Check the following link:
https://github.com/Azure-Samples/communication-services-web-calling-tutorial/
Features are imported from @azure/communication-calling, for example:
const { Features } = require('@azure/communication-calling');

Why has Node 10 made it mandatory to pass a callback to fs.writeFile()?

Suddenly, I started getting this error on my application when the node engine was upgraded to 10.7.0
TypeError [ERR_INVALID_CALLBACK]: Callback must be a function
Code which was working with node 4.5: fs.writeFile(target, content);
After a bit of debugging I found this in node_internal/fs.js:
function writeFile(path, data, options, callback) {
    callback = maybeCallback(callback || options);
    ...
}

function maybeCallback(cb) {
    if (typeof cb === 'function')
        return cb;
    throw new ERR_INVALID_CALLBACK();
}
Certainly, if I do not pass a third/fourth argument here, my code will fail. I want to know whether there is any way to mitigate this problem, and if not, what the motivation behind such a breaking change could be. After all, fs.writeFile() is such a basic operation that issues like this are a real pain when upgrading.
Node.js has documented the purpose for this change: https://github.com/nodejs/node/blob/master/doc/api/deprecations.md#dep0013-fs-asynchronous-function-without-callback
There is a lot more discussion here: https://github.com/nodejs/node/pull/12562#issuecomment-300734746
In fact, it seems like some developers agree with you; however, the decision has been made and the callback is now required.
There is no mitigation per se; you will just have to add a callback. Even an empty one will work okay:
fs.writeFile(target, content, () => {});
I understand this may require a lot of changes for currently working code, but in fact it might be a good opportunity for you to add error handling as well.
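For instance, a minimal sketch of a callback that at least logs failures (using the same target and content variables as above) could be:

const fs = require('fs');

fs.writeFile(target, content, (err) => {
    if (err) {
        // Surface the failure instead of silently dropping it
        console.error('Failed to write ' + target + ':', err);
    }
});

Node 10 also ships a promise-based API, fs.promises.writeFile(target, content), which you can combine with try/catch instead of a callback.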

Firebase Function Deployment Possible EventEmitter memory leak [duplicate]

I am getting following warning:
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace:
at EventEmitter.<anonymous> (events.js:139:15)
at EventEmitter.<anonymous> (node.js:385:29)
at Server.<anonymous> (server.js:20:17)
at Server.emit (events.js:70:17)
at HTTPParser.onIncoming (http.js:1514:12)
at HTTPParser.onHeadersComplete (http.js:102:31)
at Socket.ondata (http.js:1410:22)
at TCP.onread (net.js:354:27)
I wrote code like this in server.js:
http.createServer(function (req, res) {
    ...
}).listen(3013);
How do I fix this?
I'd like to point out here that that warning is there for a reason and there's a good chance the right fix is not increasing the limit but figuring out why you're adding so many listeners to the same event. Only increase the limit if you know why so many listeners are being added and are confident it's what you really want.
I found this page because I got this warning and in my case there was a bug in some code I was using that was turning the global object into an EventEmitter! I'd certainly advise against increasing the limit globally because you don't want these things to go unnoticed.
This is explained in the Node EventEmitter documentation.
What version of Node is this? What other code do you have? That isn't normal behavior.
In short, it's: process.setMaxListeners(0);
Also see: node.js - request - How to “emitter.setMaxListeners()”?
The accepted answer explains how to increase the limit, but as @voltrevo pointed out, that warning is there for a reason and your code probably has a bug.
Consider the following buggy code:
// Assume Logger is a module that emits errors
var Logger = require('./Logger.js');

for (var i = 0; i < 11; i++) {
    // BUG: this will cause the warning,
    // as the event listener is added in a loop
    Logger.on('error', function (err) {
        console.log('error writing log: ' + err)
    });
    Logger.writeLog('Hello');
}
Now observe the correct way of adding the listener:
// Good: the event listener is not in a loop
Logger.on('error', function (err) {
    console.log('error writing log: ' + err)
});

for (var i = 0; i < 11; i++) {
    Logger.writeLog('Hello');
}
Search for similar issues in your code before changing the maxListeners (which is explained in other answers)
By default, a maximum of 10 listeners can be registered for any single event.
If it's your code, you can specify maxListeners via:
const emitter = new EventEmitter()
emitter.setMaxListeners(100)
// or 0 to turn off the limit
emitter.setMaxListeners(0)
But if it's not your code, you can use this trick to increase the default limit globally:
require('events').EventEmitter.prototype._maxListeners = 100;
Of course you can turn off the limits but be careful:
// turn off limits by default (BE CAREFUL)
require('events').EventEmitter.prototype._maxListeners = 0;
By the way, this code should be at the very beginning of the app.
Addendum: since Node 0.11 this code also works to change the default limit:
require('events').EventEmitter.defaultMaxListeners = 0
Replace .on() with once(). Using once() removes the listener automatically after the event has been handled once.
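A minimal sketch of the difference, using a throwaway emitter:

const EventEmitter = require('events');
const emitter = new EventEmitter();

// on(): the listener stays registered and fires on every 'ready' event
emitter.on('ready', () => console.log('on fired'));

// once(): the listener is removed after the first 'ready' event
emitter.once('ready', () => console.log('once fired'));

emitter.emit('ready'); // both listeners fire
emitter.emit('ready'); // only the on() listener fires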
If this doesn't fix it, then reinstall restler with this in your package.json:
"restler": "git://github.com/danwrong/restler.git#9d455ff14c57ddbe263dbbcd0289d76413bfe07d"
This has to do with restler 0.10 misbehaving with Node. You can see the closed issue on GitHub here: https://github.com/danwrong/restler/issues/112
However, npm has yet to publish the update, which is why you have to refer to the git head.
Node version: v11.10.1
Warning message, with the stack trace captured via:
process.on('warning', e => console.warn(e.stack));
(node:17905) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 wakeup listeners added. Use emitter.setMaxListeners() to increase limit
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 wakeup listeners added. Use emitter.setMaxListeners() to increase limit
at _addListener (events.js:255:17)
at Connection.addListener (events.js:271:10)
at Connection.Readable.on (_stream_readable.js:826:35)
at Connection.once (events.js:300:8)
at Connection._send (/var/www/html/fleet-node-api/node_modules/http2/lib/protocol/connection.js:355:10)
at processImmediate (timers.js:637:19)
at process.topLevelDomainCallback (domain.js:126:23)
After searching GitHub issues and documentation and reproducing similar event emitter memory leaks, I traced this issue to the node-apn module used for iOS push notifications.
This resolved it:
You should only create one Provider per-process for each certificate/key pair you have. You do not need to create a new Provider for each notification. If you are only sending notifications to one app then there is no need for more than one Provider.
If you are constantly creating Provider instances in your app, make sure to call Provider.shutdown() when you are done with each provider to release its resources and memory.
I was creating a provider object each time a notification was sent and expected the GC to clear it.
I got this warning too when installing aglio on my Mac (OS X).
This command fixed it:
sudo npm install -g npm@next
https://github.com/npm/npm/issues/13806
In my case, it was child.stderr.pipe(process.stderr), which was being called when I was initiating 10 (or so) instances of the child. So anything that attaches an event handler to the same EventEmitter object in a loop causes Node.js to emit this warning.
Sometimes these warnings occur when it isn't something we've done, but something we've forgotten to do!
I encountered this warning when I installed the dotenv package with npm, but was interrupted before I got around to adding the require('dotenv').load() statement at the beginning of my app. When I returned to the project, I started getting the "Possible EventEmitter memory leak detected" warnings.
I assumed the problem was from something I had done, not something I had not done!
Once I discovered my oversight and added the require statement, the memory leak warning cleared.
I prefer to hunt down and fix problems instead of suppressing logs whenever possible. After a couple of days of observing this issue in my app, I realized I was setting listeners on req.socket in an Express middleware to catch socket errors that kept popping up. At some point I learned that this was not necessary, but I kept the listeners around anyway. I just removed them, and the error you are experiencing went away. I verified it was the cause by running requests against my server with and without the following middleware:
socketEventsHandler(req, res, next) {
    req.socket.on("error", function(err) {
        console.error('------REQ ERROR')
        console.error(err.stack)
    });

    res.socket.on("error", function(err) {
        console.error('------RES ERROR')
        console.error(err.stack)
    });

    next();
}
Removing that middleware stopped the warning you are seeing. I would look around your code and try to find anywhere you may be setting up listeners that you don't need.
Thanks to RLaaa for giving me an idea of how to solve the real problem/root cause of the warning. In my case it was buggy MySQL code.
Suppose you wrote a Promise with code like this inside:
pool.getConnection((err, conn) => {
    if (err) reject(err)

    const q = 'SELECT * from `a_table`'

    conn.query(q, [], (err, rows) => {
        conn.release()
        if (err) reject(err)
        // do something
    })

    conn.on('error', (err) => {
        reject(err)
    })
})
Notice the conn.on('error') listener in the code. That code adds a new listener every single time you call the query.
Meanwhile, if (err) reject(err) already does the same thing.
So I removed the conn.on('error') listener and voila... solved!
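For reference, a minimal sketch of the fixed version (same pool and surrounding Promise executor as above):

pool.getConnection((err, conn) => {
    if (err) reject(err)

    const q = 'SELECT * from `a_table`'

    // The query callback already receives errors, so no extra
    // conn.on('error') listener is needed
    conn.query(q, [], (err, rows) => {
        conn.release()
        if (err) reject(err)
        // do something
    })
})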
Hope this helps you.
As pointed out by others, increasing the limit is not the best answer. I was facing the same issue, but my code didn't use any event listeners anywhere. When I looked closely, I saw that I was at times creating a lot of promises, each of which scraped a provided URL (using a third-party library). If you are doing something like that, it may be the cause.
See this thread on how to prevent that: What is the best way to limit concurrency when using ES6's Promise.all()?
I was having the same problem, and it was caused by listening on port 8080 with 2 listeners.
setMaxListeners() works fine, but I would not recommend it.
The correct way is to check your code for extra listeners and remove the listener, or change the port number you are listening on; this fixed my problem.
I was getting this whenever I started grunt watch. I finally solved it with:
watch: {
    options: {
        maxListeners: 99,
        livereload: true
    },
}
The annoying message is gone.
You need to clear all listeners before creating new ones, using:
Client / server:
socket.removeAllListeners();
assuming socket is your client socket or created server socket.
You can also unsubscribe specific event listeners, for example removing the connect listener like this:
this.socket.removeAllListeners("connect");
I was facing the same issue, but I successfully handled it with async/await.
Please check if it helps.
let dataLength = 25;
Before:
for (let i = 0; i < dataLength; i++) {
    sftp.get(remotePath, fs.createWriteStream(`xyzProject/${data[i].name}`));
}
After:
for (let i = 0; i < dataLength; i++) {
    await sftp.get(remotePath, fs.createWriteStream(`xyzProject/${data[i].name}`));
}
In my case it was due to not closing the Sequelize database connections that I was creating inside an async function called with setInterval.
You said you are using process.on('uncaughtException', callback);
Where are you executing this statement? Is it within the callback passed to http.createServer? If yes, a different copy of the same callback will get attached to the uncaughtException event upon each new request, because function (req, res) { ... } gets executed every time a new request comes in, and so does the statement process.on('uncaughtException', callback);
Note that the process object is global to all your requests, so adding listeners to its events every time a new request comes in does not make sense. You probably don't want that behaviour.
If you really do want to attach a new listener for each new request, you should first remove all previous listeners attached to the event, as they are no longer required, using: process.removeAllListeners('uncaughtException');
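A minimal sketch of that fix, registering the handler once at module scope instead of per request (handleFatal is a hypothetical callback):

const http = require('http');

function handleFatal(err) {
    console.error('uncaught exception:', err);
}

// Registered once, when the module loads - not once per request
process.on('uncaughtException', handleFatal);

http.createServer(function (req, res) {
    // Handle the request; no process.on(...) calls in here
    res.end('ok');
}).listen(3013);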
Our team's fix for this was removing a registry path from our .npmrc. We had two path aliases in the rc file, and one was pointing to an Artifactory instance that had been deprecated.
The error had nothing to do with our app's actual code but everything to do with our development environment.
Adding EventEmitter.defaultMaxListeners = <MaxNumberOfClients> to node_modules\loopback-datasource-juggler\lib\datasource.js fixed my problem :)
Put this in the first line of your server.js (or whatever contains your main Node.js app):
require('events').EventEmitter.prototype._maxListeners = 0;
and the error goes away :)
