I have an iPhone app using ASIHTTPRequest. The server code runs on Heroku in Node.js.
From time to time, a single request is sent from the iPhone app (only one trace), but it is received twice on Heroku (I can see the same request twice in the Heroku logs).
At first I thought the request was retried because of an error on the first attempt, but that's not the case: both requests (the one I need and the duplicate I don't) are executed on the server side.
Any idea?
Are you starting the queue with accurate progress turned on? If so, ASIHTTPRequest makes one HEAD request to get the total size of the data to be downloaded, and then it makes the real request.
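One way to confirm this from the Node side is to log the HTTP method of every incoming request: if the "duplicate" in the Heroku logs is the accurate-progress probe, it will show up as a HEAD rather than a GET/POST. A minimal sketch, assuming an Express app:

const express = require('express');
const app = express();

// Log the method of every incoming request so a HEAD probe is
// distinguishable from the real request in the logs
app.use((req, res, next) => {
  console.log(req.method, req.url);
  next();
});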
If that's not the case, try setting the persistent connection to NO, like so:
// Disable HTTP keep-alive so this request opens a fresh connection
[asiRequest setShouldAttemptPersistentConnection:NO];
From my understanding, the latest version of ASIHTTPRequest defaults the persistent connection to NO. You can read more here:
https://github.com/pokeb/asi-http-request/issues/94
So, I have an Express server that accepts a request. The request kicks off a web-scraping job that takes 3-4 minutes to finish. I'm using Bull to queue the jobs and process them as and when they are ready. The challenge is to send the results from the processed jobs back as the response. Is there any way I can achieve this? I'm running the app on Heroku, and Heroku has a request timeout of 30 seconds.
You don't have to wait until the back end has finished. When the request comes in, identify who is making it and authenticate the user, then respond immediately with res.status(202).send({ message: 'text' });
Even though the response has been sent to the client, you can keep processing.
NOTE: Do not put a return keyword in front of the res.status(202).send(...) call, or the handler will exit and the code after it will not run.
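A minimal sketch of that pattern, assuming an Express app and a hypothetical long-running scrape() function:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/scrape', (req, res) => {
  // ... authenticate the user here ...

  // Respond immediately; there is no `return` in front of it,
  // so execution continues past this line
  res.status(202).send({ message: 'accepted' });

  // The long-running work keeps going after the response has been sent
  scrape(req.body.url); // scrape() is hypothetical
});

app.listen(3000);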
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been accepted for processing, but the processing has not been completed; in fact, processing may not have started yet. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
202 is non-committal, meaning that there is no way for the HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
You always need to send a response immediately because of the timeout. Since your process takes about 3-4 minutes, it is better to respond right away saying that the request was successfully received and will be processed.
Then, when the task is completed, you can use socket.io or WebSockets to notify the client from the server side, and include the result in that notification.
The client side can also check continuously whether the job has completed on the server; this is called polling, and it is what older browsers without WebSocket support require. socket.io falls back to polling when the browser doesn't support WebSockets.
Visit socket.io for more information and documentation.
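For example, here is a minimal sketch of pushing the result to the waiting client when the Bull job from the question completes; the queue name, the 'watch'/'done' event names, and the room naming are illustrative:

const Queue = require('bull');
const server = require('http').createServer();
const io = require('socket.io')(server);

const scrapeQueue = new Queue('scrape');

io.on('connection', (socket) => {
  // The client tells the server which job id it is waiting for
  socket.on('watch', (jobId) => socket.join(`job:${jobId}`));
});

// Bull emits 'completed' when a processor finishes a job; push the result
// to whichever sockets are watching that job
scrapeQueue.on('completed', (job, result) => {
  io.to(`job:${job.id}`).emit('done', result);
});

server.listen(3000);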
The best approach to this problem is the socket.io library. It can push data to the client whenever you want, and it triggers a function on the client side that receives the data. Socket.io supports different languages and is really easy to use. See the socket.io website and documentation.
Create a jobs table in a database, or in persistent storage like Redis.
Save each job in the table upon request, with a unique id.
Update the status to "running" when starting the job.
Send HTTP 202 - Accepted.
At the client, implement a polling script; at the server, implement a job-status route/API. The API accepts a job id, queries the jobs table, and responds with the status (see the sketch below).
When the job is finished, update the jobs table with status "completed"; when the job errors, update it with status "failed", and perhaps add a description column to store the cause of the error.
This solution makes your system horizontally scalable and distributed. It also guards against the consequences of unexpected connection drops. The polling interval depends on the average job completion time; I would recommend an average interval of 5 seconds.
This can be improved further by storing job completion progress in the jobs table, so that the client can even display a progress bar.
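A minimal sketch of the status route and the polling script, assuming Express on the server and a hypothetical jobs store with { id, status, progress } records:

// Server: job-status route
const express = require('express');
const app = express();

app.get('/jobs/:id', async (req, res) => {
  const job = await jobs.findById(req.params.id); // jobs.findById is hypothetical
  if (!job) return res.sendStatus(404);
  res.send({ status: job.status, progress: job.progress });
});

// Client: poll every 5 seconds until the job settles
const timer = setInterval(async () => {
  const job = await fetch(`/jobs/${jobId}`).then((r) => r.json()); // jobId came back with the 202
  if (job.status === 'completed' || job.status === 'failed') {
    clearInterval(timer);
    // render the result, or the error description, here
  }
}, 5000);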
A request timeout occurs when your connection is idle; different servers implement this differently, so the timeout duration varies.
The solution to this timeout problem is to keep the connection open, that is, the connection between client and server should remain open.
For such scenarios, use WebSockets, which ensure that after the initial request/response handshake the connection between client and server stays open.
There are many libraries for implementing real-time connections, e.g. PubNub and socket.io; this is the same technology used for live streaming.
Node.js can handle many concurrent connections, and it's lightweight too, so it won't use many resources.
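For illustration, a minimal sketch with the ws package (an assumption; PubNub, socket.io, or any other real-time library follows the same idea):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  // After the handshake the connection stays open, so the server can push
  // the result whenever the long job finishes, with no request timeout
  ws.on('message', (msg) => {
    startLongJob(msg, (result) => ws.send(JSON.stringify(result))); // startLongJob is hypothetical
  });
});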
Encountered a very weird issue.
I have two VMs, running CentOS Linux.
The server side has a REST API (using a non-POCO socket), and one of the endpoints responds to a POST.
On the client side, I use the POCO library to call the REST API.
If the returned message is long, it gets truncated at 176 KB, 240 KB, or 288 KB.
The same code, in the same environment, run on the server VM: good.
On the client VM, using Python to make the REST call: good.
It ONLY fails when I use the same (good) code on the client VM.
When the message gets truncated, the HTTP status code still returns 200.
On the server side, I log the response message that I send every time, and everything looks normal.
I have tried a whole bunch of things, like:
setting the socket timeout and receiveResponse timeout to an hour
waiting 2 seconds after sending the request, before calling receive
setting the receive buffer big enough
trying a whole bunch of approaches to make sure the receive stream is empty, with no more data left
It just does not work.
Has anyone had a similar issue? I've started pulling my hair out... Please talk to me, anything... before I go bald.
I could be overthinking this, but I just wanted a sanity check:
I'd like my slackbot to ping my server every minute
On receiving a 404, it will stop pinging the server and message me that the server is down.
Would I just... have a setTimeout function that makes a request and handles errors/success from there?
Or am I missing something...?
Thanks!
Yes, this is called a healthcheck.
Typically what you want is to add a route to your server, say /healthcheck, which just returns a 200 status and an empty page. No need to overload your server by requesting a full set of assets every minute for no reason.
Then, as you said, something like:
const request = require('request');

// `options` holds the healthcheck URL, e.g. { url: 'https://example.com/healthcheck' }
// Ping the healthcheck route once a minute
setInterval(checkStatus, 60000);

function checkStatus() {
  request.get(options, (err, res, body) => {
    if (err || res.statusCode !== 200) {
      // handle the network error or non-200 status here
    }
  });
}
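The /healthcheck route mentioned above can be as small as this (a sketch, assuming an Express server):

app.get('/healthcheck', (req, res) => res.sendStatus(200));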
Instead of using a custom script to ping and message you, you could also use an uptime service to monitor your bot. There are many to choose from; some are even free for small-scale use, like uptimerobot.com. I use it for all of my Slack bots and apps, and it works pretty well.
You can also use Google Stackdriver (not sure if it's free). It pings your server at a given interval from various locations around the globe. You can integrate it with your Slack workspace too, and Stackdriver will post a message, just like your custom Slack bot, whenever it doesn't receive a 200 OK from your server.
Hope this helps!
I'm using a very simple Express server with PUT and GET routes on an Ubuntu machine, but if I use several clients (around 8) making requests at the same time, it very easily gets flooded and starts returning connect EADDRNOTAVAIL errors. I have found no way to avoid this other than reducing the number of requests per client, but is there a way to throttle responses on the server so that instead of returning errors it queues requests and serves them in due time?
Maybe it's better to check on the client whether there are unanswered requests, and not issue new ones until they have been served? The client is here.
Queuing seems to be the wrong approach; you should first check your current ulimit (every connection needs a file handle).
To solve your problem, just raise the ulimit.
I've implemented a chat application using Node.js. The program opens a connection with the client and responds with a new message whenever the EventEmitter emits a "recv" event.
The problem is that it takes a very long time to respond to other requests when the server is holding about 3 or 4 open streams. The Chrome developer tools show the status of the request as pending, and it takes 5-30 seconds to reach the server (localhost). I use console.log to log when a new request is received by Node.js.
I have no idea why there's such a long pause. Is there a limit in the Chrome browser, in Node.js, or in anything else I should know about? Does Node delay when it holds too many requests at the same time, and how would I measure this? Thank you.
Chrome supports six simultaneous connections per domain, so if those are already in use, it will have to wait for one to close. If you want to know what's going on, use a packet capture program to check the actual network traffic.
Browsers are limited to a certain number of parallel connections, and the limit applies to the whole browser context: for example, if you have opened, say, more than 6 tabs, the additional connections will be queued and you will see them as pending.
You can avoid this limitation, for example, by using a unique poll subdomain for each client connection; this is how Facebook works around it. The problem is Firefox, where this workaround doesn't help: your connections will still be queued once they reach the limit, even when you use unique subdomains.
Another solution might be to use HTML5 local storage, where you can take advantage of the StorageEvent, which propagates changes to other tabs within the same browser. This is how the Stack Overflow chat is done. The advantage of this approach is that you need only one polling connection to the server; the disadvantage is the lack of HTML5 local storage support in older browsers, and a different implementation in Firefox versions < 4.
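For illustration, a minimal sketch of sharing one polling connection across tabs with local storage; the key name and renderMessage() are hypothetical:

// Every tab listens for changes written by the single tab that polls the server
window.addEventListener('storage', (e) => {
  if (e.key === 'chat:lastMessage') {
    renderMessage(JSON.parse(e.newValue)); // renderMessage is hypothetical
  }
});

// In the polling tab, after a new message arrives from the server:
localStorage.setItem('chat:lastMessage', JSON.stringify(message));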