I use Node.js with the google-translate-api package.
It all worked fine for months, but suddenly, and I can't tell why, the simple code
translate("hello", {from: "en", to: "fr"}).then(res => {
console.log(res.text);
}).catch(err => console.log(err));
stopped working, and now I get this error every time:
Error
    at C:\Users\...\AutoTranslate\node_modules\google-translate-api\index.js:105:17
    at process._tickCallback (internal/process/next_tick.js:188:7)
  code: 'BAD_REQUEST'
Therefore it is not due to my code but probably to some Node setting, though I don't know which. Since then, other packages that use async calls crash with the same error.
I even tried uninstalling and reinstalling Node, but I can't get it to work again.
Thanks!
Your IP has been blocked, so try connecting from another network and it should work fine.
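If you want to confirm it is the same failure before switching networks, here is a minimal sketch of a catch handler that surfaces the error code (it assumes the error object carries a code property, as the trace above shows):
const translate = require('google-translate-api');

translate("hello", {from: "en", to: "fr"})
  .then(res => console.log(res.text))
  .catch(err => {
    // the trace above ends with code: 'BAD_REQUEST', which points at the
    // request being rejected upstream (e.g. a blocked IP) rather than a bug here
    if (err.code === 'BAD_REQUEST') {
      console.error('Request rejected (possibly a blocked IP):', err.message);
    } else {
      console.error(err);
    }
  });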
I've got a weird issue going on. I have a node service being started with yarn .... The app seems to work fine for some random amount of time before I get hit with a...
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
There is no error logged. My service already has the following two handlers, but they never log anything when the service dies like this.
process.on('unhandledRejection', (err: any) => {
  logger.fatal({ err: err }, 'Process failed. Unhandled Rejection');
});

process.on('uncaughtException', (list) => {
  logger.fatal({ err: list }, 'unhandledExceptionThrown');
});
I've read another SO question (error Command failed with exit code 1. when I try to run yarn) and tried clearing node_modules / cleaning my yarn cache, but so far that has not helped.
Is there something else I can try to track down what's causing the issue?
After lots of digging around, I found something that helped move me in the right direction. I'm not sure why the handlers above weren't catching the issue; however, when I added --unhandled-rejections=strict to the node command that starts the service, the app would blow up as expected and actually output some useful information.
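For anyone else hitting this, the change was just adding the flag to the command that starts the service (the flag needs Node 12+; the entry file name below is illustrative):
node --unhandled-rejections=strict server.js
Or, since the service is started with yarn, in package.json it would look something like:
"scripts": {
  "start": "node --unhandled-rejections=strict server.js"
}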
I'm testing a NodeJS app. I encountered this error when I ran the tests. The test script is below:
.expect((res) => {
  expect(res.headers['x-auth']).toExist();
  expect(res.body._id).toExist();
  expect(res.body.email).toBe(email);
})
The error showed:
TypeError: expect(...).toExist is not a function
How can I resolve this issue?
The expect assertion library has changed ownership. It was handed over to the Jest team, who, in their infinite wisdom, created a new API.
You must now use toBeTruthy() instead of toExist().
You can still install expect as before, npm install expect --save-dev, which is currently at version 21.2.1. Most method names remain unchanged, with a few exceptions, including toExist().
If you are using Jest, you can also use toBeDefined().
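Applied to the test block from the question, that looks like:
.expect((res) => {
  // toBeTruthy() replaces the removed toExist(); toBeDefined() also works under Jest
  expect(res.headers['x-auth']).toBeTruthy();
  expect(res.body._id).toBeTruthy();
  expect(res.body.email).toBe(email);
})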
My Node project worked fine a couple of months ago; now, after implementing sockets, I see the following error when trying to run the app:
Error in console (Git Bash)
I'm using Windows 10 and have tried changing Node versions (4.7, 6.10, 7.2, 7.9) with no result.
The port I'm using is not busy.
We are using socket.io; I tried removing the part of the code that uses the module, but unexpectedly that did not help.
Any ideas?
Update: it works fine on OS X, but fails on three different Windows 10 computers.
You should listen to the error event.
socket.on('error', function () {
  // Error logging here
});
I am running mocha/chai (3.2.0/3.5.0) based test cases for my Node.js (6.10.2) application on macOS 10.12.4, and I am running into a "Segmentation fault: 11" failure.
So far I have tried:
Erasing my node_modules folder and doing a new npm install
Checking for outdated dependencies and upgrading them
Upgrading the nodejs version (was using 6.7.0), via sudo port upgrade nodejs6
The code that is failing for me is as follows.
chai.request(url)
  .post(`/api/filestore?token=${token}`)
  .timeout(20000)
  .attach('file', fs.readFileSync(filepath), filename)
  .field('name', data.name)
  .field('description', data.description)
  .field('keywords', data.keywords)
  .end(function(err, res) {
    if (err) { done(err); }
    res.should.have.status(200);
    res.should.be.json;
    res.body.should.have.property('name');
    res.body.should.have.property('description');
    res.body.should.have.property('categories');
    res.body.keywords.should.be.a('array');
    res.body.keywords.join(',').should.be.equal(data.keywords);
    done();
  });
The segmentation fault disappears when I remove the 'attach' line:
.attach('file', fs.readFileSync(filepath), filename)
I have tried the fs.readFileSync(filepath) separately and I don't encounter the issue.
This wasn't an issue in the past and only became one recently. I wonder whether it coincides with the OS upgrade, but I can't be sure. The test file is only 34K in size.
Does anyone have any suggestions?
Edit: Failing on Ubuntu test machine as well with a segmentation fault.
It seems like the chai code was a 'red herring'. Using the segfault-handler node module, I was able to establish that the issue was actually being caused by code in the project being tested (it is running in the same process).
In my case, the stack trace suggested the issue was being caused by something in the sqlite3 code. Further investigation revealed it was due to not returning the result of the function passed to Promise.mapSeries, which was performing a sqlite SQL operation via Sequelize.
Note I am using bluebird for my promises.
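For anyone in the same spot, this is roughly what the two pieces looked like; the model and variable names here are made up for illustration:
// register segfault-handler early so a native stack trace is written on crash
const SegfaultHandler = require('segfault-handler');
SegfaultHandler.registerHandler('crash.log');

// the actual fix: return the Sequelize/sqlite promise from the mapper so
// bluebird's Promise.mapSeries waits on each operation instead of dropping it
const Promise = require('bluebird');

Promise.mapSeries(records, (record) => {
  return MyModel.create(record); // the missing return was the culprit
});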
I do not know whether this is related to koa, to some other npm module, or to something else, so I am going to start from here.
So, to the problem. I have a REST API written in koa v1. We run the Node server in a Docker image. One of our endpoints starts an import and returns status 200 with the message "import started"; when the import finishes, we send a Slack message to notify us.
First I tested the server on my local machine, and everything works (the endpoint does not throw any errors). Then I built the Docker image and ran the container locally, and again everything works (the endpoint does not throw any errors). I deployed my image to the Mesos environment and everything works so far: the container runs and every endpoint works, except the import endpoint. When I call it, after a few seconds (5 to 10) I get an ECONNRESET error, the running container gets killed, and a new instance is started, so the import is terminated.
At the beginning we assigned 128 MB of RAM to the Docker container, and that seemed to be enough. After the import error occurred, we thought maybe the OOM killer had killed the process, so we checked dmesg, but we could not find any log entries related to OOM or to the container's process. Then we checked the RAM usage of the container locally (with htop) and found it uses approximately 250+ MB, so we added more RAM in the Marathon config (512 MB). That, however, did not help; the same error occurred.
Because the error was not explicit enough, we installed the longjohn module so we could get a more detailed error message. That got us a little more information, but not as much as we had hoped.
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
---------------------------------------------
at Application.app.callback (/src/node_modules/koa/lib/application.js:130:45)
at Application.app.listen (/src/node_modules/koa/lib/application.js:73:39)
at Promise.then.result (/src/server.js:97:13)
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
Line 97 of the server.js is:
 96: if (!module.parent) {
 97:   app.listen(port, (err) => {
 98:     if (err) {
 99:       console.error('Server error', err);
100:     }
101:     console.log('Listening on the port', port);
102:   });
103: }
So what exactly happens in the endpoint logic? We are using the postgres npm module pg. We pass a pg.Pool to the context so we can use it later in our models. We execute each insert query encapsulated in a promise and push the promises into an array; there are roughly 2700+ records. Then we do Promise.all on the array of promises, and in the then we send the message to Slack.
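In simplified code, the endpoint does roughly this (the table, records and notifySlack names are made up for illustration):
// pool is the pg.Pool instance we put on the koa context
const inserts = records.map((record) =>
  pool.query('INSERT INTO items (name) VALUES ($1)', [record.name])
);

Promise.all(inserts)
  .then(() => notifySlack('import finished')) // Slack notification helper (illustrative)
  .catch((err) => console.error('import failed', err));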
As you can see, I do not know whether the error is related to koa, to pg, or to something else. What is more intriguing is that everything works locally (the Node server as well as the Docker container), but on Mesos it does not. How can I find out what is wrong?
version of koa npm module: 1.2.0
version of pg npm module: 6.1.0
version of Postgres: 9.5
version of Mesos: 1.0.1
According to this GitHub issue, this is an error caused by tiny-lr.
It seems that downgrading to version 0.2.1 stops it, but tiny-lr is usually a dependency of other packages you're using that you have no control over. You might be able to filter out the error by displaying all errors except this one, like so:
if (error.code !== 'ECONNRESET') { console.log(error) }
The issue is still open and dates from Oct 27, 2016. I don't know if it will get fixed or not, but as far as feedback goes, it doesn't seem like a dangerous error or to have any real impact. But hey, I'd rather fix mine too, if there were a way.
Thanks to another developer, we found out what the cause of the error was: we were using all of the connections in the pool while an import was running.
When Marathon requested the service status during the import, the service tried to connect to the database to test the connection, and at that moment the connection to the database was terminated. The service became unhealthy and Marathon restarted it. We refactored the import code and now limit the number of pool connections it uses.
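For reference, the limit is just an option on the pool when it is created (the number below is illustrative; the point is to leave headroom for the health-check connection):
const pg = require('pg');

// cap how many connections the pool will open, so a bulk import cannot
// take every connection and starve the health-check query
const pool = new pg.Pool({
  max: 10 // illustrative limit
});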