How to specify HTTP timeout for DownloadUrl() in Akavache?

I am developing an application targeting mobile devices, so I have to consider bad network connectivity. In one use case I need to reduce the timeout for a request: if no network is available, that's okay, and I'd rather fall back to default data immediately than make the user wait for the HTTP response.
I found that HttpMixin.MakeWebRequest() has a timeout parameter (default null), but DownloadUrl() never makes use of it, so the aforementioned function always waits for up to 15 seconds:
request.Timeout(timeout ?? TimeSpan.FromSeconds(15),
                BlobCache.TaskpoolScheduler).Retry(retries);
So it seems I have no way to use a different timeout, or am I missing something?
Thanks in advance for any helpful response.

After looking at the signature for DownloadUrl in HttpMixin.cs, I saw what you are talking about. I am not sure why the parameter is there, but it looks like that timeout relates to building the request, not to the request itself.
That being said, in order to set a timeout with a download, you have a couple options that should work.
Via TPL (async/await):
var timeout = 1000; // milliseconds
var task = BlobCache.LocalMachine.DownloadUrl("http://stackoverflow.com").FirstAsync().ToTask();
if (await Task.WhenAny(task, Task.Delay(timeout)) == task)
{
    // Task completed within the timeout.
    // Do stuff with your byte data here:
    // var result = task.Result;
}
else
{
    // Timeout logic.
}
Via Rx observables:
var obs = BlobCache.LocalMachine
    .DownloadUrl("http://stackoverflow.com")
    .Timeout(TimeSpan.FromSeconds(5))
    .Retry(retryCount: 2);

var subscription = obs.Subscribe(byteData =>
{
    // Do stuff with your byte data here:
    Debug.WriteLine("Byte Data Length " + byteData.Length);
}, ex =>
{
    Debug.WriteLine("Handle your exceptions here: " + ex.Message);
});

Related

Improve HTTP request response time

I've created a Node.js script that makes an HTTP request every 50 ms, but it takes longer and longer to receive responses as the number of requests grows.
How can I improve the response time?
const superagent = require('superagent');

function makeRequest() {
    superagent
        .post('http://example.com')
        .send({ "test": "test" })
        .set('Connection', 'keep-alive')
        .then(console.log, console.log);
}

setInterval(() => makeRequest(), 50);
This is troublesome code. If an HTTP request takes longer than 50 ms to complete, the number of requests in flight will grow and grow until you consume too many system resources (sockets, memory, etc.). Things may get slower and slower, or you may actually exhaust some resource and start to get errors or crash.
In addition, you don't want to hit the target server with thousands of simultaneous requests, as it may also slow down under that kind of load. This type of issue can lead to an avalanche failure, where a slight delay in the responsiveness of the response causes a sudden build-up of requests, which slows the target server further, which leads to more build-up, which quickly gets out of control until something dies. It's important to always code these things so as to avoid any sort of avalanche failure.
What I would suggest is making a new request some fixed number of ms after the completion of the previous request, so there is only one request at a time in flight (a sketch of this simpler variant appears after the code below). A more complicated version would make a new request 50 ms after the previous one started, but not before the previous one finishes. Either way you only ever have one request in flight, they never accumulate, and resource usage stays fairly constant instead of building over time, even if the target server gets slow for some reason.
Here's a way to make the next request after the completion of the previous request and no more often than once every 50ms:
const superagent = require('superagent');

function makeRequest() {
    return superagent
        .post('http://example.com')
        .send({ "test": "test" })
        .set('Connection', 'keep-alive');
}

function delay(t) {
    return new Promise(resolve => {
        setTimeout(resolve, t);
    });
}

function run() {
    const repeatTime = 50;
    const startTime = Date.now();
    return makeRequest().catch(err => {
        console.log(err);
        // decide here if you want to keep going or not
        // if so, then just return
        // if not, then throw
    }).then(result => {
        console.log(result);
        let delta = Date.now() - startTime;
        if (delta < repeatTime) {
            // wait until at least repeatTime has passed before starting the next request
            return delay(repeatTime - delta).then(run);
        } else {
            return run();
        }
    }).catch(() => {
        // aborted because of error
    });
}

run();
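For completeness, the simpler variant mentioned above (a fixed gap after each completion, so spacing is completion-to-start rather than start-to-start) could look like this sketch, reusing makeRequest() and delay() from the code above:

async function runSimple() {
    while (true) {
        try {
            const result = await makeRequest();
            console.log(result.status); // superagent responses expose .status
        } catch (err) {
            console.log(err);
            // decide here whether to keep looping or break
        }
        await delay(50); // fixed 50 ms gap after each completion
    }
}

runSimple();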

Async base-local with MQTT

I need to synchronize a base and a local client with MQTT: if one client publishes, the other one gets the message.
If my MQTT broker is down, I need to stop sending messages, save them somewhere, wait for a connection, then continue sending.
If my local or base client is down for a second, I need to save the messages I didn't send, then send them when I turn my base/local back on.
I'm working with Node.js and can't figure out how to implement this.
This is my handler for when I connect to or disconnect from my MQTT server.
client.on('connect', () => {
    store.state = true;
    run(store).then((value) => console.log('stop run'));
});

client.on('offline', () => {
    store.state = false;
    console.log('offline');
});
This is my run function. I use store.state to decide whether to stop the interval, but this does not seem like a good way to implement my concept.
function run(store) {
    return new Promise((resolve, reject) => {
        let interval = setInterval(() => {
            if (!store.state) {
                clearInterval(interval);
                resolve(true);
            }
            else if (store.queue.length > 0) {
                let data = store.queue.pop();
                let res = client.publish('push', JSON.stringify(data), { qos: 2 });
            }
        }, 300);
    });
}
What should I do to implement a function which always sends, stops upon disconnect, then continues sending when reconnected? I don't think a 300 ms setInterval is a good approach.
If you want something that "always runs" at set intervals, in spite of any errors inside the loop, setInterval() makes sense. You are also right that queued messages can be cleared faster than once every 300 ms.
Since MQTT.js has a built-in queue, you could simplify a lot by using it. However, your messages are published to a single topic called "push", so I assume you want them delivered in queue order. This answer keeps your queue and focuses on sending the next message as soon as the previous one is confirmed.
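For reference, here is a minimal sketch of leaning on that built-in queue instead. The clean and queueQoSZero option names are from MQTT.js (verify them against the version you use), and the broker URL is a placeholder:

const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.example.com', {
    clean: false,       // persistent session, so QoS 1/2 messages survive reconnects
    queueQoSZero: true  // also queue QoS 0 packets while offline
});

function send(data) {
    // While offline, publish() buffers the packet and flushes it on reconnect.
    client.publish('push', JSON.stringify(data), { qos: 2 });
}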
What if res = client.publish(..) is false?
Good point! If you want to make sure the message arrives, it is better to remove it from the queue only once the publish has succeeded. For that, you need to peek at the value without removing it, and use the callback argument to find out what happened (publish() is asynchronous). If that were the only change, it might look like this:
let data = store.queue[store.queue.length - 1]; // peek without removing
client.publish('push', JSON.stringify(data), { qos: 2 }, (err) => {
    if (!err) {
        store.queue.pop();
    }
    // Ready for the next publish; call this function again
});
Extending that to include a callback-based run:
function publishFromQueue(data) {
    return new Promise((resolve, reject) => {
        client.publish('push', JSON.stringify(data), { qos: 2 }, (err) => {
            resolve(!err);
        });
    });
}

async function run(store) {
    while (store.queue.length > 0 && store.state) {
        let data = store.queue[store.queue.length - 1];
        let res = await publishFromQueue(data);
        if (res) {
            store.queue.pop();
        }
    }
}
This should deliver all the queued messages in order, as soon as possible, without blocking. The only drawback is that it does not run constantly. You have two options:
Recur at set intervals, as you are already doing. Slower, though you could set a shorter interval.
Only run() when needed, for example:
let isRunning = false; // global flag tracking whether run() is draining the queue

async function queueMessage(data) {
    store.queue.push(data);
    if (!isRunning) {
        isRunning = true;
        await run(store);
        isRunning = false; // reset only once the queue has drained
    }
}
As long as you call this instead of pushing to the queue directly, it should come out about the same length, and be more immediate and efficient.

Node process cannot find setTimeout handle in subsequent requests

I am trying to clear a timeout, set using setTimeout, in a subsequent request (using Express). Basically, I set the timeout when our live stream event starts (we are notified by a webhook) and aim to stop the stream for guest users after one hour. The one hour is implemented via setTimeout, which works fine so far. However, if the event stops before one hour, I need to clear the timeout. I am trying to use clearTimeout, but it just can't find the same variable.
// Event starts
var setTimeoutIds = {};
var val = req.body.eventId;
setTimeoutIds[val] = setTimeout(function() {
    req.app.io.emit('disable_for_guest', req.body);
    live_events.update({ event_id: req.body.eventId }, { guest_visibility: false }, function(err, data) {
        // All ok
    });
}, disable_after_milliseconds);
console.log(setTimeoutIds);
req.app.io.emit('session_started', req.body);
When the event ends:
try {
    var event_id = req.body.eventId;
    clearTimeout(setTimeoutIds[event_id]);
    delete setTimeoutIds[event_id];
} catch (e) {
    console.log('Event ID could not be removed: ' + e);
}
req.app.io.emit('event_ended', req.body);
You are defining setTimeoutIds in the scope of the request handler. You must define it at module level.
var setTimeoutIds = {};

router.post('/webhook', function(req, res) {
    ...
That makes the variable available until the next restart of the server.
Note: this approach only works as long as you only have a single server with a single node process serving your application. Once you go multi-process and/or multi-server, you need a completely different approach.
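Putting it together, a minimal sketch of the module-level approach; the /event-start and /event-end route names are hypothetical, and disable_after_milliseconds is the constant from the original code:

const express = require('express');
const router = express.Router();
router.use(express.json());

const setTimeoutIds = {}; // module level: shared across all requests

router.post('/event-start', (req, res) => {
    const id = req.body.eventId;
    setTimeoutIds[id] = setTimeout(() => {
        delete setTimeoutIds[id];
        // disable guest visibility here, as in the original handler
    }, disable_after_milliseconds);
    res.sendStatus(200);
});

router.post('/event-end', (req, res) => {
    clearTimeout(setTimeoutIds[req.body.eventId]);
    delete setTimeoutIds[req.body.eventId];
    res.sendStatus(200);
});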

Node.js http response.write: is out-of-memory possible?

If I have the following code, sending data to the client every 10 ms:
setInterval(function() {
    res.write(somedata);
}, 10);
What would happen if the client is very slow to receive the data?
Will the server get an out-of-memory error?
Edit:
Actually the connection is kept alive; the server sends JPEG data endlessly (HTTP multipart/x-mixed-replace: header + body + header + body ...).
Because node.js's response.write is asynchronous, some users guess it may store the data in an internal buffer and wait until the lower layer signals that it can send, so the internal buffer will grow. Am I right?
If I am right, how do I resolve this? The problem is that node.js does not notify me when the data for a single write call has actually been sent.
In other words, I can neither tell users that this approach carries no theoretical risk of running out of memory, nor tell them how to fix it.
Update:
Following the "drain" event keyword given by user568109, I studied the node.js source and came to a conclusion: it really can cause an out-of-memory error. I should check whether response.write(...) === false and then handle the "drain" event of the response.
The relevant code in http.js:
OutgoingMessage.prototype._buffer = function(data, encoding) {
    this.output.push(data); // ------- No check here; can cause out-of-memory
    this.outputEncodings.push(encoding);
    return false;
};

OutgoingMessage.prototype._writeRaw = function(data, encoding) { // called by response.write
    if (data.length === 0) {
        return true;
    }

    if (this.connection &&
        this.connection._httpMessage === this &&
        this.connection.writable &&
        !this.connection.destroyed) {
        // There might be pending data in the this.output buffer.
        while (this.output.length) {
            if (!this.connection.writable) { // when not ready to send
                this._buffer(data, encoding); // --------> save data into internal buffer
                return false;
            }
            var c = this.output.shift();
            var e = this.outputEncodings.shift();
            this.connection.write(c, e);
        }

        // Directly write to socket.
        return this.connection.write(data, encoding);
    } else if (this.connection && this.connection.destroyed) {
        // The socket was destroyed. If we're still trying to write to it,
        // then we haven't gotten the 'close' event yet.
        return false;
    } else {
        // buffer, as long as we're not destroyed.
        this._buffer(data, encoding);
        return false;
    }
};
Some gotchas:
Sending like this over HTTP is not a good idea. The browser may consider the request timed out if it is not finished within a specified amount of time, and the server too will close a connection that stays idle for too long. If the client cannot keep up, a timeout is almost certain.
setInterval at 10 ms is also subject to some restrictions. It doesn't mean the callback repeats every 10 ms; 10 ms is the minimum it will wait before repeating, so it will run more slowly than the interval you set.
If you do manage to overload the response with data, at some point the server will end the connection and respond with 413 Request Entity Too Large, depending on how the limit is set.
Node.js has a single-threaded architecture with a maximum memory limit of around 1.7 GB. If you set the above server limits too high and have many incoming connections, the process will run out of memory.
So with appropriate limits it will either give a timeout or reject the request as too large (assuming there are no other errors in your program).
Update
You need to use the drain event. The HTTP response is a writable stream with its own internal buffer; when the buffer is emptied, the drain event is triggered. You should learn more about streams as you go deeper; this will help you well beyond HTTP. You can find several resources about streams on the web.
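For example, here is a minimal backpressure sketch for the multipart case above; nextFrame() is a hypothetical producer of the next JPEG buffer:

function sendFrames(res) {
    const timer = setInterval(() => {
        if (!res.write(nextFrame())) {                // false: internal buffer is full
            clearInterval(timer);                     // stop producing
            res.once('drain', () => sendFrames(res)); // resume once it has flushed
        }
    }, 10);
}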

Socket.IO server throttling a fast client

I have a server that uses socket.io, and I need a way of throttling a client that is sending data too quickly. The server exposes both a TCP interface and a socket.io interface. With the TCP server (from the net module) I can use socket.pause() and socket.resume(), and this effectively throttles the client. But socket.io's socket class has no pause() and resume() methods.
What would be the easiest way of giving feedback to a client that it is overwhelming the server and needs to slow down? I liked socket.pause() and socket.resume() because they required no additional code on the client side: back up the TCP socket and things naturally slow down. Is there any equivalent for socket.io?
Update: I provide an API for interacting with the server (there is currently a Python version which runs over TCP and a JavaScript version which uses socket.io), so I don't have any real control over what the client does. That is why socket.pause() and socket.resume() are so great: backing up the TCP stream slows the Python client down no matter what it tries to do. I'm looking for an equivalent for a JavaScript client.
With enough digging I found this:
this.manager.transports[this.id].socket.pause();
and
this.manager.transports[this.id].socket.resume();
Granted, this probably won't work if the socket.io connection isn't a WebSocket connection, and it may break in a future update, but for now I'm going to go with it. When I get some time in the future I'll probably change it to the QUOTA_EXCEEDED solution that Pascal proposed.
Here is a dirty way to achieve throttling. Although this is an old post, some people may benefit from it:
First register a middleware:
io.on("connection", function (socket) {
    socket.use(function (packet, next) {
        if (throttler.canBeServed(socket, packet)) {
            next();
        }
    });
    // Your other code ...
});
canBeServed is a simple throttler, as seen below:
function canBeServed(socket, packet) {
    if (socket.markedForDisconnect) {
        return false;
    }

    var previous = socket.lastAccess;
    var now = Date.now();

    if (previous) {
        var diff = now - previous;
        // Check diff and disconnect if needed.
        if (diff < 50) {
            socket.markedForDisconnect = true;
            setTimeout(function () {
                socket.disconnect(true);
            }, 1000);
            return false;
        }
    }

    socket.lastAccess = now;
    return true;
}
You can use process.hrtime() instead of Date.now().
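For example, a small sketch of the difference; process.hrtime() is monotonic, so it is unaffected by system clock adjustments, unlike Date.now():

const start = process.hrtime();        // [seconds, nanoseconds]
// ... handle a packet ...
const [s, ns] = process.hrtime(start); // elapsed time since `start`
const elapsedMs = s * 1e3 + ns / 1e6;  // as milliseconds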
If you have a callback on your server somewhere which normally sends the response back to your client, you could try changing it like this:
Before:
var respond = function (res, callback) {
    res.send(data);
};
After:
var respond = function (res, callback) {
    setTimeout(function () {
        res.send(data);
    }, 500); // or whatever delay you want
};
It looks like you should slow down your clients. If one client can send too fast for your server to keep up, this is not going to go well with hundreds of clients.
One way to do this would be to have the client wait for the reply to each emit before emitting anything else. This way the server can control how fast the client sends, for example by only answering when it is ready, or by only answering after a set time.
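A client-side sketch of that idea, using Socket.IO acknowledgements; the 'data' event name and the queue are assumptions for illustration:

// The client sends the next message only after the server acknowledges
// the previous one by invoking the acknowledgement callback.
function sendWithAck(socket, queue) {
    if (queue.length === 0) return;
    socket.emit('data', queue.shift(), () => {
        sendWithAck(socket, queue); // ack received: safe to send the next one
    });
}

On the server, the handler would call the acknowledgement callback (the argument after the payload) only once it is ready for more.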
If this is not enough, then when a client exceeds x requests per second, start replying with something like a QUOTA_EXCEEDED error and ignore the data it sends. This will force external developers to make their apps behave the way you want.
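A server-side sketch of the quota idea; the 'data' and 'quota_exceeded' event names, MAX_PER_SECOND, and handle() are hypothetical:

io.on('connection', (socket) => {
    let count = 0;
    const window = setInterval(() => { count = 0; }, 1000); // reset each second
    socket.on('disconnect', () => clearInterval(window));

    socket.on('data', (payload) => {
        if (++count > MAX_PER_SECOND) {
            socket.emit('quota_exceeded', { retryAfterMs: 1000 });
            return; // ignore the data they sent
        }
        handle(payload);
    });
});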
As another suggestion, I would propose a solution like this:
It is common for MySQL to receive requests faster than it can apply them.
The server can record incoming requests in a database table, assuming that write is fast enough to keep up with the incoming rate, and then process the queue at a rate the server can sustain. This buffering lets the server run slowly but still process all the requests.
But if you want something sequential, the request callback should be verified before the client can send another request. In this case there should be a "server ready" flag; if the client sends a request while the flag is still red, the server can reply with a message telling the client to slow down.
Simply wrap your client emitter in a function like the one below:
let emit_live_users = throttle(function () {
    socket.emit("event", "some_data");
}, 2000);
using a throttle function like this:
function throttle(fn, threshold) {
    threshold = threshold || 250;
    var last, deferTimer;
    return function () {
        var context = this; // preserve the caller's `this` for the deferred call
        var now = Date.now(), args = arguments;
        if (last && now < last + threshold) {
            clearTimeout(deferTimer);
            deferTimer = setTimeout(function () {
                last = now;
                fn.apply(context, args);
            }, threshold);
        } else {
            last = now;
            fn.apply(context, args);
        }
    };
}
