We have implemented a REST API using Restify and Node.js. Currently there is an issue where a certain set of parameters works properly when tested in environment1, but when the same set of parameters is tested in environment2, the API returns an error saying that parameters are missing. After restarting the API, it starts working again. Nothing appears in the logs though.
Sample body parameters:
{
    "records" : [
        {"param1" : "param1val", "param2" : "param2val"},
        {"param1" : "param1val", "param2" : "param2val"}
    ]
}
This is how the values are being accessed.
// Read the parsed request body and bail out early if records is missing
let data = req.body
let records = data.records
if (records == undefined) {
    // Respond with the "missing parameters" error
    res.send(...)
    return
} else {
    // use the values of records
}
So what happens is that, from time to time, the code path that returns the "missing parameters" error is triggered even though the parameters are actually complete. After a while, the same set of parameters goes through successfully again.
Restify version is 7.7.0. Node.js version is 10.15.3.
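For reference, the server mounts Restify's body parser plugin (otherwise req.body would never be populated); below is a simplified sketch of such a setup, with the route name illustrative rather than our actual code:
const restify = require('restify')

const server = restify.createServer()

// bodyParser parses JSON request bodies into req.body
server.use(restify.plugins.bodyParser())

// illustrative route; the real handler does the records check shown above
server.post('/records', (req, res, next) => {
    res.send(200, req.body)
    return next()
})

server.listen(8080)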
Has anyone encountered this before? What needs to be done to resolve this?
Related
I have a Node.js application that uses Cluster, WS, and a memcached client to manage two memcached servers.
Under normal load, it works like a charm.
But under high load, my application stops fetching data from the memcached servers.
That is, when the load is high, the logging inside the client.get callback never runs and nothing is written to the console, so the client never receives its cached value (although the value is present on the memcached server, and sometimes it works fine even under high load). For a while the application looks dead, doing nothing.
getValue = function (key, callback) {
    console.log(`Calculated server for choose: ${strategy(key, client.servers.length)}`) // works under high load
    console.log(`Try to get from cache by key: ${key}.`); // works under high load
    client.get(key, function (err, data) { // this callback is never executed under high load
        const isError = err || !data // doesn't run under high load
        console.log('Data from cache is: ', data)
        if (!isError) {
            console.log(`Found data in cache key-value: ${key} - ${data}`);
        } else {
            console.log(`Not found value from cache by key: ${key}`);
        }
        const parsedData = isError ? null : JSON.parse(data.toString())
        callback(isError, parsedData); // and this won't run either
    });
}
And after some time, the socket connection is simply closed (with code 1000 and no errors, as if the user just left):
INFO [ProcessID-100930] Connection close [772003], type [ws], code [1000], message []
Then, after 5-10 seconds, all processes start working again as if nothing had happened, and the memcached client callback starts executing correctly.
I've been trying for a long time to catch this moment and understand why it happens, but I still don't understand. I have already changed memcached clients several times (memjs now, previously memcached and mc) but still get the same behavior under high load.
When receiving data from the memcached server, the callback simply does not run, and the data is not returned (although, judging by the memcached logs, it was there at that moment).
Can someone please suggest something?
Given this simple scenario:
Scenario: checkout the response code after foo data request
    When I request foo data
    Then the response code is 200
In my foo step file, I write a step that makes an API call:
When(/^I request foo data$/, (callback) => {
    ...
    apiCall().then((response) => {
        ...
        this.responseStatus = response.statusCode;
        callback();
    })
});
And in my common step file, I want to put shared steps like:
Then(/^the response code is (\d+)$/, function (responseCode) {
    assert.equal(responseCode, this.responseStatus);
});
But when I try to run it, the this object is apparently not shared between the files, and I get:
AssertionError [ERR_ASSERTION]: 200 == undefined
If I move the code to the same file, it works!
So how can I solve this issue with steps in different files?
One way we were doing this is to have a global variable that is accessible in both files: something like a module-level value that is initialized once and then reassigned in your When step. It can then be read in your Then step later, as sketched below.
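A minimal sketch of that idea, using a hypothetical shared.js module required by both step files (file names are illustrative, and apiCall is assumed to be defined as in the question):
// shared.js -- hypothetical module whose single cached instance is shared by both step files
module.exports = { responseStatus: undefined };

// foo.steps.js
const { When } = require('cucumber');
const shared = require('./shared');

When(/^I request foo data$/, function (callback) {
    apiCall().then((response) => {
        shared.responseStatus = response.statusCode; // store on the shared module instead of this
        callback();
    });
});

// common.steps.js
const { Then } = require('cucumber');
const assert = require('assert');
const shared = require('./shared');

Then(/^the response code is (\d+)$/, function (responseCode) {
    assert.equal(responseCode, shared.responseStatus);
});
Since Node caches modules, both step files receive the same object; just remember to reset it in a Before hook if scenarios must not leak state into each other. As an aside, Cucumber only binds its World to this inside regular function expressions, not arrow functions, which may also matter here.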
I ran the code shown below. The first one runs but the second one does not.
Can anyone please tell me the reason behind it?
// This ran successfully
updatePost(req, res)
{
    let postId = req.params.postId
    let posts = req.store.posts
    posts[postId] = req.body
    res.status(200).send(posts[postId])
}
// This gave an error
updatePost(req, res)
{
    req.store.posts[req.params.postId] = req.body
    res.send(200).send(req.store.posts[req.params.postId])
}
Without knowing the error message: your last line is res.send(200).send(req.store.posts[req.params.postId]).
By the time it gets to the second .send(req.store.posts[req.params.postId]), the response has already been sent by res.send(200).
Try changing it to res.status(200).send(req.store.posts[req.params.postId]),
like you have in the first block of code.
If this isn't your problem (maybe that's just a typo in your question and not in your code), please share the error message and I'll update my answer.
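For reference, the corrected second block would look like this (same behavior as the first block):
// res.status(200) only sets the status code; send() then sends the body exactly once
updatePost(req, res)
{
    req.store.posts[req.params.postId] = req.body
    res.status(200).send(req.store.posts[req.params.postId])
}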
I'm using Azure DocumentDB and accessing it through my Node.js on Express server. When I query in a loop at low volume, a few hundred requests, there is no issue.
But when I query in a loop at a slightly larger volume, say around a thousand plus,
I get partial results (inconsistent: every time I run it, the result values are not the same, maybe because of the asynchronous nature of Node.js),
and after a few results it crashes with this error:
body: '{"code":"429","message":"Message: {\"Errors\":[\"Request rate is large\"]}\r\nActivityId: 1fecee65-0bb7-4991-a984-292c0d06693d, Request URI: /apps/cce94097-e5b2-42ab-9232-6abd12f53528/services/70926718-b021-45ee-ba2f-46c4669d952e/partitions/dd46d670-ab6f-4dca-bbbb-937647b03d97/replicas/130845018837894542p"}' }
Does this mean DocumentDB fails to handle 1000+ requests per second?
Altogether this is giving me a bad impression of NoSQL techniques. Is it a shortcoming of DocumentDB?
As Gaurav suggests, you may be able to avoid the problem by bumping up the pricing tier, but even if you go to the highest tier, you should still be prepared to handle 429 errors. When you get a 429 error, the response will include an 'x-ms-retry-after-ms' header. This contains the number of milliseconds that you should wait before retrying the request that caused the error.
I wrote logic to handle this in my documentdb-utils Node.js package. You can either use documentdb-utils or duplicate the logic yourself. Here is a snippet example.
createDocument = function() {
    client.createDocument(colLink, document, function(err, response, header) {
        if (err != null) {
            if (err.code === 429) {
                // Honor the server-supplied backoff before retrying the same create
                var retryAfterHeader = header['x-ms-retry-after-ms'] || 1;
                var retryAfter = Number(retryAfterHeader);
                return setTimeout(createDocument, retryAfter);
            } else {
                throw new Error(JSON.stringify(err));
            }
        } else {
            log('document saved successfully');
        }
    });
};
Note that in the above example, document is within the enclosing scope of createDocument. This makes the retry logic a bit simpler, but if you don't like using widely scoped variables, you can pass document into createDocument and then pass it into a lambda function in the setTimeout call, as sketched below.
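For illustration, a parameterized variant might look like this (a sketch, not the exact documentdb-utils code):
var createDocument = function (colLink, document) {
    client.createDocument(colLink, document, function (err, response, header) {
        if (err != null) {
            if (err.code === 429) {
                var retryAfter = Number(header['x-ms-retry-after-ms'] || 1);
                // The lambda re-captures colLink and document for the retry
                return setTimeout(function () {
                    createDocument(colLink, document);
                }, retryAfter);
            }
            throw new Error(JSON.stringify(err));
        }
        log('document saved successfully');
    });
};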
I'm parsing a large number of files using Node.js. In my process, I'm parsing audio files, video files and then the rest.
The function to parse files looks like this:
/**
 * @param arr : array of file objects (path, ext, previous directory)
 * @param cb : the callback invoked when every object has been parsed;
 *             the objects are then thrown into a database
 * @param others : the array being populated by matching objects
 **/
var parseOthers = function(arr, cb, others) {
    others = others === undefined ? [] : others;
    if (arr.length == 0)
        return cb(others); // should this be a nextTick?
    var e = arr.shift();
    // do some tests on the element and add it
    others.push(e);
    // Then recurse; tried setImmediate and nextTick here, following
    // other Stack Overflow answers, with no success
    return parseOthers(arr, cb, others);
};
Full code here (careful, it's a mess).
Now, with about 3565 files (not that many), the script throws a "RangeError: Maximum call stack size exceeded" exception, with no stack trace.
What have I tried :
I've tried to debug it with node-inspector and node debug script, but it never hangs, as if it were running without debugging (does debugging increase the stack?).
I've tried process.on('uncaughtException') to catch the exception, with no success.
I've got no memory leak.
How can I find an exception trace?
Edit 1
Increasing --stack_size seems to work pretty well. Isn't there another way of preventing this?
(a value of about 1300 works there)
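For reference, the flag is passed when launching the script like this (script.js stands in for the actual entry point; the value is in kBytes, per Edit 2 below):
$ node --stack_size=1300 script.js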
Edit 2
According to :
$ node --v8-options | grep -B0 -A1 stack_size
The default stack size (in kBytes) is 984.
Edit 3
A few more explanations:
I'm never reading the contents of the files themselves
I'm working here on an array of paths; I don't parse folders recursively
I'm looking at each path and checking whether it's already stored in the database
My guess is that the populated array becomes too big for Node.js, but memory looks fine, which is weird...
Most stack overflow situations are not easy (and sometimes not possible) to debug. Even if you debug the problem, you may not find the trigger.
But I can suggest a way to share the task load easily (including the queue management):
JXcore (a multithreaded fork of Node.js) would suit your case better. Simply create a task pool and set a task method handling one file at a time. It will manage your queue one by one, multithreaded.
var myTask = function (args /* args here */)
{
    // logic here
}

for (var i = 0; i < LIST_OF_THE_FILES.length; i++)
    jxcore.tasks.addTask(myTask, paramsHere /* e.g. the file path */, optionalCallback);
OR, in case the logic definition is out of the scope of a single method:
var myTask = function (args /* args here */)
{
    require('mytasketc.js').handleTask(args);
}

for (var i = 0; i < LIST_OF_THE_FILES.length; i++)
    jxcore.tasks.addTask(myTask, paramsHere, optionalCallback);
Remarks
Every single thread has its own V8 memory limit.
The contexts among the threads are separated.
Make sure the task method closes the file in the end.
Link
You can find more on multithreaded JavaScript tasks.
You are getting this error because of recursion. Reformat your code to not use it, especially since this piece of code really doesn't need it. Here is just an APPROXIMATE example to show you a better way to do it:
var parseElems = function(arr, cb) {
    var result = [];
    arr.forEach(function (el) {
        // do some tests on the element (el)
        result.push(el);
    });
    cb(result); // single callback once the whole array has been walked
};
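Usage stays the same as in the recursive version (a sketch; files stands in for your array of file objects):
parseElems(files, function (others) {
    // throw the matched objects into the database here
    console.log('parsed', others.length, 'objects');
});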