I need to create a looping thread or timed service for getting latitude and longitude, especially when it changes. How should I proceed with this? I'm currently using the nativescript-geolocation plugin (https://www.npmjs.com/package/nativescript-geolocation).
When you follow the link in the thread, you will see the whole example code on GitHub.
You can get the current location by doing this:
var location = geolocation.getCurrentLocation({ desiredAccuracy: 3, updateDistance: 10, maximumAge: 20000, timeout: 20000 })
    .then(function (loc) {
        if (loc) {
            // Your code here
        }
    }, function (e) {
        console.log("Error: " + e.message);
    });
When you want to keep updating it, you can use:
var watchId = geolocation.watchLocation(
    function (loc) {
        if (loc) {
            // Your code here
        }
    },
    function (e) {
        console.log("Error: " + e.message);
    },
    { desiredAccuracy: 3, updateDistance: 10, updateTime: 1000 * 20 }); // should update every 20 seconds; according to the Google documentation this is not guaranteed
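When you no longer need the updates, you should also stop the watcher so the device stops polling for location. A minimal sketch using the watchId from above (where you call this, for example in a page unload handler, is up to your app):
// Stop receiving location updates when they are no longer needed
if (watchId) {
    geolocation.clearWatch(watchId);
    watchId = null;
}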
An error occurs when trying to attach a file by calling Office.context.mailbox.item.addFileAttachmentAsync from an Office 365 Outlook add-in (Outlook on the web).
This happens frequently right after the Outlook website has loaded; it may stop occurring once some time has passed after loading.
It occurs in IE and Chrome (on Windows 10 and on Mac) and in Safari (on Mac). There is no problem with the desktop version.
This did not happen before; it has only started occurring recently.
There have been no changes to the Outlook add-in in the last few months.
I created and tested the following simple program. The same error occurs in this program.
Office.initialize = function (reason) {
    $(document).ready(function () {
        $("#send-btn").click(function () {
            try {
                var url = "https://www.cloudstoragelight.com/proaxiastorage/f/Demo/TestData.xlsx";
                var attachmentFilename = "TestData.xlsx";
                debugger;
                Office.context.mailbox.item.addFileAttachmentAsync(
                    url,
                    attachmentFilename,
                    { asyncContext: null },
                    function (asyncResult) {
                        if (asyncResult.status == Office.AsyncResultStatus.Failed) {
                            if (asyncResult.error && asyncResult.error.message) {
                                console.log("Error(" + asyncResult.error.message + ")");
                            } else {
                                console.log("Error");
                            }
                        } else {
                            console.log("SUCCESS");
                        }
                    });
            } catch (e) {
                if (e.message) {
                    console.log("Error(" + e.message + ")");
                } else {
                    console.log("Error");
                }
            }
        });
    });
};
Is there any workaround?
[additional information]
I tried debugging with outlook-web-16.01.debug.js.
Looking at the call stack when the error occurs, there is a _checkMethodTimeout function. The timeout check in this function evaluates to true, and the callback is invoked.
In this case, the Microsoft.Office.Common.ClientEndPoint.invoke function sends the following message via postMessage:
{"_messageType": 0, "_actionName": "ExecuteMethod", "_conversationId": "7cc28a93_6a3c12a5_1581127643048", "_correlationId": 5, "_origin": "https: // localhost: 44313 / MessageRead.html? et =", "_data": {"ApiParams": {"uri": "https://www.cloudstoragelight.com/proaxiastorage/f/Demo/TestData.xlsx", "name": "TestData.xlsx", "isInline": false, "__ timeout __": 600000}, "MethodData": {"ControlId": "963d4dfe-eaad-8e5b-6fa5-3eaac31b660d", "DispatchId": 16}}, "_ actionType": 0, "_ serializerVersion": 1}
I have a Node app in which there is a Gremlin client:
var Gremlin = require('gremlin');
const client = Gremlin.createClient(
    443,
    config.endpoint,
    {
        "session": false,
        "ssl": true,
        "user": `/dbs/${config.database}/colls/${config.collection}`,
        "password": config.primaryKey
    }
);
With this I then make calls to a Cosmos DB to add some records, using:
async.forEach(pData, function (data, innercallback) {
    if (data.type == 'Full') {
        client.execute("g.addV('test').property('id', \"" + data.$.id + "\")", {}, innercallback);
    } else {
        innercallback(null);
    }
}, outercallback);
However, on the Azure side there is a limit of 400 requests/second, and subsequently I get the error:
ExceptionType : RequestRateTooLargeException
ExceptionMessage : Message: {"Errors":["Request rate is large"]}
Does anyone have any ideas on how I can restrict the number of requests made per second, without having to scale up on Azure (as that costs more :) )?
Additionally:
I tried using
async.forEachLimit(pData, 400, function (data, innercallback) {
    if (data.type == 'Full') {
        client.execute("g.addV('test').property('id', \"" + data.$.id + "\")", {}, innercallback);
    } else {
        innercallback(null);
    }
}, outercallback);
However, I keep seeing RangeError: Maximum call stack size exceeded if the limit is too high, and if I reduce it I just get the same request-rate-too-large exception.
Thanks.
RangeError: Maximum call stack size exceeded
That might happen because innercallback is called synchronously in the else case. It should be:
} else {
    process.nextTick(function () {
        innercallback(null);
    });
}
The call to forEachLimit looks generally correct, but you need to make sure that when a request is really fired (the if block), innercallback is not called earlier than 1 second later, to guarantee that no more than 400 requests are fired in one second. The easiest approach is to delay the callback execution by exactly 1 second:
client.execute("g.addV('test').property('id', \"" + data.$.id + "\")", {},
function(err) {
setTimeout(function() { innercallback(err); }, 1000);
});
The more accurate solution would be to measure the actual request+response time and call setTimeout only for the time remaining until the full second has elapsed.
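A rough sketch of that idea (the Date.now() bookkeeping and the 1000 ms window are assumptions added here, not part of the original code):
var startTime = Date.now(); // taken just before the request is fired
client.execute("g.addV('test').property('id', \"" + data.$.id + "\")", {},
    function (err) {
        // Only wait for whatever remains of the 1-second window.
        var elapsed = Date.now() - startTime;
        var remaining = Math.max(0, 1000 - elapsed);
        setTimeout(function () { innercallback(err); }, remaining);
    });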
As a further improvement, it looks like you can filter your pData array before doing the async work to get rid of the if...else, so eventually:
var pDataFull = pData.filter(function (data) {
    return data.type == 'Full';
});

async.forEachLimit(pDataFull, 400, function (data, innercallback) {
    client.execute("g.addV('test').property('id', \"" + data.$.id + "\")", {},
        function (err) {
            setTimeout(function () { innercallback(err); }, 1000);
        }
    );
}, outercallback);
Let's clear up something first. You don't have a 400 requests/second collection but a 400 RU/s collection. RU stands for request unit, and RUs don't translate one-to-one into requests.
Roughly:
Reading a 1 KB document costs about 1 RU.
Modifying a 1 KB document costs about 5 RU.
Assuming your documents are 1 KB in size, you can only add about 80 documents per second (400 RU/s ÷ 5 RU per write).
Now that we have that out of the way, it sounds like async.queue() can do the trick for you.
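For illustration, a rough sketch using async.queue() (the concurrency of 5 and the 100 ms pause are assumptions you would tune against your provisioned RU/s and document size):
// Worker that writes one vertex, then pauses briefly before signalling done,
// so the queue stays under the provisioned throughput.
var writeQueue = async.queue(function (data, done) {
    client.execute(
        "g.addV('test').property('id', \"" + data.$.id + "\")",
        {},
        function (err) {
            setTimeout(function () { done(err); }, 100);
        }
    );
}, 5);

// Called once every queued task has finished.
writeQueue.drain = function () {
    outercallback();
};

// Only queue the 'Full' records, as in your original filter.
pData.filter(function (data) {
    return data.type == 'Full';
}).forEach(function (data) {
    writeQueue.push(data);
});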
I'm running a CouchDB server with Docker and I'm trying to POST data through a Node app.
But I'm frequently (though not always) getting an ESOCKETTIMEDOUT error.
Here's the way I'm opening the connection to the DB:
var remoteDB = new PouchDB('http://localhost:5984/dsndatabase', {
    ajax: {
        cache: true,
        timeout: 40000 // I tried a lot of durations
    }
});
And here's the code used to send the data:
exports.sendDatas = function (datas, db, time) {
    console.log('> Exporting to CouchDB');
    db.bulkDocs(datas).then(function () {
        return db.allDocs({ include_docs: true });
    }).then(function () {
        var elapsedTime = new Date().getTime() - time;
        console.log('> Export finished in', elapsedTime, 'ms');
    }).catch(function (err) {
        console.log(err);
    });
};
The error doesn't show up every time but I'm unable to find a pattern.
And, timeout or not, all my data is successfully loaded into my CouchDB!
I've seen a lot of posts on this issue but none of them really answers my question...
Any idea?
Okay, this seems to be a real issue:
https://github.com/pouchdb/pouchdb/issues/3550#issuecomment-75100690
I think you can fix it by setting a reasonably longer timeout value or by adding retry logic with exponential backoff.
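For example, a rough sketch of the backoff idea around bulkDocs (the retry count and base delay are made up for illustration); note that if the first attempt actually reached the server, as seems to be your case, a retry may report conflicts for docs that were already written:
// Retry bulkDocs a few times, doubling the wait between attempts.
function bulkDocsWithRetry(db, docs, retriesLeft, delayMs) {
    return db.bulkDocs(docs).catch(function (err) {
        if (retriesLeft <= 0) {
            throw err; // give up and let the caller handle it
        }
        return new Promise(function (resolve) {
            setTimeout(resolve, delayMs);
        }).then(function () {
            return bulkDocsWithRetry(db, docs, retriesLeft - 1, delayMs * 2);
        });
    });
}

// e.g. up to 3 retries, starting with a 1-second pause
bulkDocsWithRetry(remoteDB, datas, 3, 1000);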
Let me know if that works for you.
I set up Collabora Online, and the users complain about performance.
I'd like to be able to graph performance so that I can correlate it with other monitoring graphs.
Here is an open document you can access:
https://cloud.pierre-o.fr/s/qnkheXaoBQV97EH
I'd like to be able to time how long it takes for the document to appear.
I tried various ways, but it is really tricky.
Here is one attempt with CasperJS:
var casper = require('casper').create();
var errors = []; // collects page errors reported below
casper.options.waitTimeout = 30000;

casper.start('https://cloud.pierre-o.fr/s/qnkheXaoBQV97EH', function () {
    this.waitForSelector('div#StateWordCount', function () {
        this.echo('the document is loaded');
    }, function _onTimeout() {
        this.capture('screenshot.png');
    });
});

casper.on("page.error", function (msg, trace) {
    this.echo("Error: " + msg, "ERROR");
    this.echo("file: " + trace[0].file, "WARNING");
    this.echo("line: " + trace[0].line, "WARNING");
    this.echo("function: " + trace[0]["function"], "WARNING");
    errors.push(msg);
});

casper.run();
As you can guess, I just get the screenshot without the document.
phantomjs --version
2.1.1
casperjs --version
1.1.3
I am using recent versions. I guess it is related to WebSockets, but I'm not sure.
Thanks for your help!
Interesting, this also fails even with a huge timeout:
casper.options.viewportSize = { width: 1024, height: 800 };

casper.test.begin('TEST DOC', 2, function (test) {
    casper.start("https://cloud.pierre-o.fr/s/qnkheXaoBQV97EH", function () {
        test.assertTitle("Nextcloud");
    });
    casper.waitUntilVisible("div#StateWordCount", function () {
        test.assertTextExists("Test!", "Found test text");
    }, function () {
        casper.capture("fail.jpg");
    }, 150000);
    casper.run(function () {
        test.done();
    });
});
It shows the following screen:
I would try SlimerJS, as it looks like it might be a WebSocket issue!
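If I remember correctly, CasperJS lets you switch the engine from the command line, so you should be able to re-run the same script roughly like this (assuming SlimerJS is installed and on your PATH; test_doc.js is a placeholder name):
casperjs --engine=slimerjs test_doc.js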
I started using AWS Lambda to perform a very simple task: executing an SQL query to retrieve records from an RDS Postgres database and creating an SQS message based on the result.
Because Amazon only provides the aws-sdk module by default (on the Node 4.3 runtime) and we need to execute this SQL query, we have to create a custom deployment package that includes pg-promise. Here is the code I'm using:
console.info('Loading the modules...');
var aws = require('aws-sdk');
var sqs = new aws.SQS();
var config = {
    db: {
        username: '[DB_USERNAME]',
        password: '[DB_PASSWORD]',
        host: '[DB_HOST]',
        port: '[DB_PORT]',
        database: '[DB_NAME]'
    }
};

var pgp = require('pg-promise')({});
var cn = `postgres://${config.db.username}:${config.db.password}@${config.db.host}:${config.db.port}/${config.db.database}`;

if (!db) {
    console.info('Connecting to the database...');
    var db = pgp(cn);
} else {
    console.info('Re-using database connection...');
}
console.log('loading the lambda function...');
exports.handler = function (event, context, callback) {
    var now = new Date();
    console.log('Current time: ' + now.toISOString());

    // Select the users that need to be updated
    var query = [
        'SELECT *',
        'FROM "users"',
        'WHERE "users"."registrationDate"<=${now}',
        'AND "users"."status"=1',
    ].join(' ');

    console.info('Executing SQL query: ' + query);

    db.many(query, { status: 2, now: now.toISOString() }).then(function (data) {
        var ids = [];
        data.forEach(function (auction) {
            ids.push(auction.id);
        });
        if (ids.length == 0) {
            callback(null, 'No user to update');
        } else {
            var sqsMessage = {
                MessageBody: JSON.stringify({ action: 'USERS_UPDATE', data: ids }), /* required */
                QueueUrl: '[SQS_USER_QUEUE]', /* required */
            };
            console.log('Sending SQS Message...', sqsMessage);
            sqs.sendMessage(sqsMessage, function (err, sqsResponse) {
                console.info('SQS message sent!');
                if (err) {
                    callback(err);
                } else {
                    callback(null, ids.length + ' users were affected. SQS Message created: ' + sqsResponse.MessageId);
                }
            });
        }
    }).catch(function (error) {
        callback(error);
    });
};
When testing my Lambda function, if you look at CloudWatch Logs, the function itself took around 500 ms to run, but it is reported as actually taking 30502.48 ms (cf. screenshots).
So I'm guessing it's taking 30 seconds to unzip my 318 KB package and start executing it? That seems absurd to me, or am I missing something? I tried uploading the zip directly and also uploading my package to S3 to check whether that was faster, but I still get the same latency.
I noticed that the Python runtime can natively perform SQL requests without any custom packaging...
All our applications are written in Node, so I don't really want to move away from it; however, I have a hard time understanding why Amazon does not provide basic npm modules for database interactions.
Any comments or help are welcome. At this point I'm not sure Lambda would be beneficial for us if it takes 30 seconds to run a script that is triggered every minute...
Anyone facing the same problem?
UPDATE: This is how you need to close the connection as soon as you don't need it anymore (thanks again to Vitaly for his help):
exports.handler = function (event, context, callback) {
    [...]
    db.many(query, { status: 2, now: now.toISOString() }).then(function (data) {
        pgp.end(); // <-- This is important to close the connection directly after the request
        [...]
The execution time should be measured based on the length of operations being executed, as opposed to how long it takes for the application to exit.
There are many libraries out there that make use of a connection pool in one form or another. Those typically terminate after a configurable period of inactivity.
In the case of pg-promise, which in turn uses node-postgres, this period of inactivity is determined by the parameter poolIdleTimeout, which defaults to 30 seconds. With pg-promise you can access it via pgp.pg.defaults.poolIdleTimeout.
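For example, a minimal sketch of lowering it right after initializing pg-promise (the 3-second value is only an illustration; the setting is in milliseconds):
var pgp = require('pg-promise')({});

// Let idle pooled connections close after 3 seconds instead of the default 30,
// so the process is not kept waiting on an idle pool.
pgp.pg.defaults.poolIdleTimeout = 3000;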
If you want your process to exit after the last query has been executed, you need to shut down the connection pool by calling pgp.end(). See the chapter Library de-initialization for details.
It is also shown in most of the code examples, as those need to exit right after finishing.