I am running mocha/chai (3.2.0/3.5.0) test cases for my Node.js (6.10.2) application on macOS 10.12.4, and I am running into a "Segmentation fault: 11" failure.
So far I have tried:
Erasing my node_modules folder and doing a new npm install
Checking for outdated dependencies and upgrading them
Upgrading the nodejs version (was using 6.7.0), via sudo port upgrade nodejs6
The code that is failing for me is as follows:
chai.request(url)
  .post(`/api/filestore?token=${token}`)
  .timeout(20000)
  .attach('file', fs.readFileSync(filepath), filename)
  .field('name', data.name)
  .field('description', data.description)
  .field('keywords', data.keywords)
  .end(function(err, res) {
    if (err) { return done(err); }
    res.should.have.status(200);
    res.should.be.json;
    res.body.should.have.property('name');
    res.body.should.have.property('description');
    res.body.should.have.property('categories');
    res.body.keywords.should.be.a('array');
    res.body.keywords.join(',').should.be.equal(data.keywords);
    done();
  });
The segmentation fault disappears when I remove the 'attach' line:
.attach('file', fs.readFileSync(filepath), filename)
I have tried running fs.readFileSync(filepath) on its own and I don't encounter the issue there.
This wasn't an issue in the past and only became one recently. I am wondering whether it coincides with the OS upgrade, but I can't be sure. The test file is only 34K in size.
Does anyone have any suggestions?
Edit: Failing on Ubuntu test machine as well with a segmentation fault.
It seems the chai code was a red herring. Using the segfault-handler node module, I was able to establish that the issue was actually being caused by code in the project being tested (it runs in the same process).
In my case, the stack trace suggested the issue was being caused by something in the sqlite3 code. Further investigation revealed it was due to not handling the return value in the function passed to Promise.mapSeries, which was performing a SQLite SQL operation via Sequelize.
Note I am using bluebird for my promises.
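For anyone hitting something similar, here is a minimal sketch of both pieces: registering segfault-handler to capture a native stack trace, and the class of bug we had. records and SomeModel are hypothetical stand-ins for the project's own data and Sequelize model.

const SegfaultHandler = require('segfault-handler');
SegfaultHandler.registerHandler('crash.log'); // writes a native stack trace on SIGSEGV

const Promise = require('bluebird');

// The bug: the Sequelize promise was not returned, so mapSeries moved on
// while the SQL operation was still in flight.
Promise.mapSeries(records, function (record) {
  return SomeModel.create(record); // returning the promise fixed the crash
});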
I'm calling the following Firebase function:
exports.getUserRecord = functions.https.onCall(async (data, context) => {
  try {
    // This successfully logs an existing uid in firestore, it should be retrievable
    console.log(context.auth.uid);
    const doc = admin.firestore().collection('user').doc(context.auth.uid);
    const res = await doc.get(); // Isolated it down to this line that is failing
    return res;
  } catch (err) {
    console.log(err);
    throw new functions.https.HttpsError('unavailable', 'some error message');
  }
});
When calling this function I receive the following error on the client:
POST https://us-central1-xxx-xxx.cloudfunctions.net/getUserRecord 500
Uncaught (in promise) Error: INTERNAL
On the server logs I see this error:
Unhandled error function error(...args) {
write(entryFromArgs('ERROR', args));
}
I am wondering how there can be an error that neither of my error-logging lines picks up, and also what is causing it?
EDIT: I have also tried logging other things within my catch block, but they do not appear; it seems there is an error, but the code somehow never enters the catch block.
I have also seen this post, which seems to suggest this was an issue that was patched in firebase-functions 3.9.1, but I have upgraded and still have the issue.
I walked through the firebase-functions code for onCall at v3.11.0 and I don't see any other issues in the code since that fix that could relate to this:
https://github.com/firebase/firebase-functions/issues/757
After discussing node_module versions with @Matt, we found that the issue was node_modules not having been updated to the latest version when the upgrade was initially done.
Notes for anyone running into this issue in the future
If you are updating this module to the latest version, make sure to do the following to cover all bases:
Look at the version attribute in node_modules/firebase-functions/package.json to make sure the proper version is actually installed.
Also take a look at your root folder's package.json and package-lock.json to make sure the proper versions are the latest.
If anything is not at v3.9.1 or higher, do the following:
rm -rf node_modules
npm i firebase-functions@latest --save
After that, double check everything again to make sure all is good.
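A quick way to confirm which version was actually resolved (as opposed to what package.json asks for) is to query npm, or to read the installed package's own manifest:

npm ls firebase-functions
node -e "console.log(require('firebase-functions/package.json').version)"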
I am using the pg package (Node.js), and for some reason the connect function gives me nothing: my code gets hung up on that line, and I'm unable to see any errors or any indication of what's wrong or what's happening.
i.e.
console.log("HERE");
await pgPool.connect()
console.log("NOW HERE") //this line never prints
I've tried a bunch of variations too:
console.log("HERE");
const client = await pgPool.connect()
console.log(client) //this line never prints
Does anyone know how to get a verbose stream from pg? My pg version is 7.15.0 and my npm version is 6.14.4
I've tried waiting it out for over an hour. For friends running the same code from the same branch on their local machines it connects in under a second. I've confirmed they have the same version of pg as me.
I am able to connect directly to the database using psql in a separate terminal without issues (it immediately connects in < 1 second)
Updating pg to 8.2.1 solved the problem. It must have been an incompatibility issue with the earlier version.
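Independent of the version bump, it can help to put a connect timeout on the pool so a stalled connection fails loudly instead of hanging forever. A minimal sketch, with placeholder connection details:

const { Pool } = require('pg');

const pgPool = new Pool({
  connectionString: 'postgres://user:pass@localhost:5432/mydb', // placeholder
  connectionTimeoutMillis: 5000, // surface a stalled connect as an error after 5s
});

(async () => {
  const client = await pgPool.connect();
  try {
    const { rows } = await client.query('SELECT 1');
    console.log(rows);
  } finally {
    client.release(); // always return the client to the pool
  }
})().catch(console.error);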
I'm testing a NodeJS app. I encountered this error when I ran the tests. The test script is below:
.expect((res) => {
  expect(res.headers['x-auth']).toExist();
  expect(res.body._id).toExist();
  expect(res.body.email).toBe(email);
})
The error showed:
TypeError: expect(...).toExist is not a function
How can I resolve this issue?
The expect assertion library has changed ownership. It was handed over to the Jest team, who, in their infinite wisdom, created a new API.
You must now use toBeTruthy() instead of toExist().
You can still install expect as before (npm install expect --save-dev), which is currently at version 21.2.1. Most method names remain unchanged, except for a few, including toExist().
If you are using Jest, you can also use toBeDefined().
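Under the newer API, the test script from the question would look roughly like this (toBeTruthy() for the existence checks, or toBeDefined() under Jest):

.expect((res) => {
  expect(res.headers['x-auth']).toBeTruthy();
  expect(res.body._id).toBeTruthy();
  expect(res.body.email).toBe(email);
})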
I do not know whether this is related to koa, a problem with some other npm module, or something else, so I am going to start from here.
So, to the problem. I have a REST API written in koa v1. We run the Node server in a Docker image. One of our endpoints starts an import and returns status 200 with the message "import started"; when the import finishes, we send a Slack message to notify us.
First I tested the server on my local machine, and everything works (the endpoint does not throw any errors). Then I built the Docker image and ran the container locally; everything works. I deployed my image to the Mesos environment, and everything works so far: the container runs and every endpoint works, except the import endpoint. When I call it, after a few seconds (5 to 10), I get an ECONNRESET error, the running container gets killed, and a new instance is started, so the import is terminated.
At the beginning we assigned 128 MB of RAM to the Docker container, and that seemed to be enough. After the import error occurred, we thought maybe the OOM killer had killed the process, so we checked dmesg, but we could not find any log entries related to OOM or the container's process. Then we checked the container's RAM usage locally (with htop) and found it uses approximately 250+ MB, so we added more RAM in the Marathon config (512 MB). That, however, did not help; the same error occurred.
Because the error was not explicit enough, we installed the longjohn module so we could get a more detailed error message. That got us a little more information, but not as much as we had hoped:
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
---------------------------------------------
at Application.app.callback (/src/node_modules/koa/lib/application.js:130:45)
at Application.app.listen (/src/node_modules/koa/lib/application.js:73:39)
at Promise.then.result (/src/server.js:97:13)
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
Line 97 of the server.js is:
96: if (!module.parent) {
97:   app.listen(port, (err) => {
98:     if (err) {
99:       console.error('Server error', err);
100:     }
101:     console.log('Listening on the port', port);
102:   });
103: }
So what exactly happens in the endpoint logic: we are using the postgres npm module pg, and we pass a pg.Pool into the context so we can use it later in our models. For each record we execute an insert query encapsulated in a promise and push the promises into an array; there are roughly 2700+ records. Then we call Promise.all on the array of promises and, in the then, send the message to Slack.
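In code, the import looks roughly like this (the table and column names are made up, and sendSlackMessage stands in for our notifier):

const inserts = records.map(function (record) {
  // each query checks a connection out of the shared pg.Pool on the context
  return pgPool.query('INSERT INTO items (name) VALUES ($1)', [record.name]);
});

Promise.all(inserts)
  .then(function () { sendSlackMessage('import finished'); })
  .catch(function (err) { sendSlackMessage('import failed: ' + err.message); });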
As you can see, I do not know whether the error is related to koa or pg or something else. What is more intriguing is that everything works locally (the Node server as well as the Docker container), but on Mesos it does not. How can I find out what is wrong?
version of koa npm module: 1.2.0
version of pg npm module: 6.1.0
version of Postgres 9.5
version of Mesos: 1.0.1
According to this GitHub issue, this is an error caused by tiny-lr.
It seems that downgrading to version 0.2.1 stops it, but tiny-lr is usually a dependency of other packages you're using that you've got no control over. You might be able to filter out the error by displaying all errors except this one, as such:
if (error.code !== 'ECONNRESET') { console.log(error) }
The issue is still open and dates from Oct 27, 2016. I don't know if it will get fixed or not. But as far as feedback goes, it doesn't seem like a dangerous error, or to have any impact whatsoever. But heh, I'd rather fix mine too, if there was a way.
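If the ECONNRESET surfaces as an uncaught exception, one place such a filter could live is a process-level handler (a sketch, not something I've battle-tested):

process.on('uncaughtException', function (error) {
  if (error.code !== 'ECONNRESET') {
    console.log(error);
    process.exit(1); // let genuinely unexpected errors still crash the process
  }
});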
Thanks to another developer, we found out the cause of the error: the import was using all of the connections in the pool.
When Marathon requested the service status during the import, the service tried to connect to the database to test the connection, and at that time the connection to the database was terminated. The service became unhealthy, and Marathon restarted it. We refactored the import code so that it now limits the number of pool connections it uses.
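A hedged sketch of that mitigation: cap the pool so health checks always have headroom, and run the inserts in batches instead of firing all 2700+ at once (names are illustrative, not our real code):

const pg = require('pg');
const pool = new pg.Pool({ max: 10 }); // cap total connections

// process records in slices of batchSize, so at most that many
// queries are in flight and the pool is never fully drained
function importInBatches(records, batchSize) {
  if (records.length === 0) { return Promise.resolve(); }
  const batch = records.slice(0, batchSize);
  return Promise.all(batch.map(function (r) {
    return pool.query('INSERT INTO items (name) VALUES ($1)', [r.name]);
  })).then(function () {
    return importInBatches(records.slice(batchSize), batchSize);
  });
}

importInBatches(allRecords, 5).then(function () {
  sendSlackMessage('import finished'); // hypothetical notifier again
});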
I have been using the limestone module and Node.js to query a Sphinx index. The limestone release on npm is outdated, so I downloaded it from GitHub, and it connected to the Sphinx server successfully. But I am now facing the following issue.
When I try to execute the following code,
var limestone = require("limestone").SphinxClient(),
    sys = require("sys");

limestone.connect("192.168.2.443:9312", // 9312 is the standard Sphinx port; 'host:port' is also allowed
  function(err) {
    if (err) {
      sys.puts('Connection error: ' + err);
      return;
    }
    sys.puts('Connected, sending query');
    limestone.query(
      { query: 'raja', maxmatches: 1 },
      function(err, answer) {
        if (err) {
          console.log("Sphinx ERR: " + err);
        } else {
          console.log(JSON.stringify(answer));
          limestone.disconnect();
        }
      });
  });
I get the error below:
Sphinx ERR: Searchd command older than client's version, some options might not workServer issued ERROR: 0bad multi-query count 0 (must be in 1..32 range)
Please help me with this!
OK, so I installed sphinxsearch on Ubuntu; the version in the repository is 0.9.9. I got an error similar to yours:
Searchd command older than client's version, some options might not workServer issued ERROR: Qclient version is higher than daemon version (client is v.1.24, daemon is v.1.22) undefined
After looking through the issues on limestone's GitHub, I figured it was supposed to work with Sphinx version 2. So I installed 2.0.4 from the Sphinx download page (they have Ubuntu packages), and it works! So, if it's possible for you to upgrade, that might be a good idea anyway -- limestone will probably only ever track the latest release.