I want to send out page requests from a browser window as fast as possible to test website responsiveness, but I don't have any idea how to do it.
On the receiving end is a server that will be getting multiple packets, but to ensure that they are processed in a FIFO manner I want to send them as close to one another as possible.
The input needs to come from a web browser (it doesn't matter which type, or if it's emulated somehow) and cannot be done from the command line. My idea is to somehow freeze the network stack and then unfreeze it to release the packets all at once, but in the correct order. Is there any way to do this, or is there a better approach?
HTTP communication usually uses TCP/IP connections, so the order of packets should be guaranteed. I suggest building a small web browser in C# with the WebBrowser control and having it request the website constantly in a loop. Alternatively, I believe this shouldn't be hard to do with a scripting language like Python.
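Since Python came up, here is a minimal sketch of the "freeze, then release all at once" idea using a `threading.Barrier`: every worker blocks at the barrier, and the moment the last one arrives all requests fire together. The throwaway local server exists only to make the sketch self-contained; you would point the workers at the site under test.

```python
import threading
import urllib.request
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

# Throwaway local server so the sketch is self-contained; in practice you
# would target the site whose responsiveness you want to test.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

N = 10
barrier = threading.Barrier(N)  # the "freeze" point: every worker waits here
results = []

def worker():
    barrier.wait()              # all N requests are released at once
    with urllib.request.urlopen(url) as resp:
        results.append(resp.status)

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

server.shutdown()
print(len(results))
```

This doesn't satisfy the browser requirement by itself, but the same barrier idea works if each worker drives a real or embedded browser (such as the WebBrowser control suggested above) instead of `urllib`.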
I am currently planning a complicated networking project on Windows IoT Enterprise. Basically, I will have a C program keeping a special network interface alive. This C program should receive tasks in some way from other programs on the same host, which will generally be written in all sorts of languages (e.g. node.js). I have never done this kind of cooperation between processes. Do you have any advice on how a node.js server can pass information to an already running C program, and preferably receive a success code or an error message?
It is very important to me that this process is as fast as possible, as this solution will handle several thousand requests per second.
In one of the comments I was pointed towards ZeroMQ, and I am now using it successfully in my application. Thank you for the help!
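For readers who land here later: the request/reply shape that ZeroMQ's REQ/REP sockets provide can be sketched with nothing but a local stdlib socket. This is not ZeroMQ itself, and the message format and field names below are made up for illustration; Python stands in for both the long-running C daemon and the node.js client (which would do the same over `net.connect()`).

```python
import json
import socket
import threading

# The long-running C program's role is played by a loop that accepts tasks
# on a local TCP socket and answers with a status code. ZeroMQ's REQ/REP
# sockets give you this same request/reply shape with far less boilerplate,
# and have bindings for C, node.js, and nearly everything else.
def daemon_loop(server_sock):
    while True:
        conn, _ = server_sock.accept()
        with conn:
            task = json.loads(conn.recv(4096).decode())
            reply = {"ok": True, "task": task.get("name")}
            conn.sendall(json.dumps(reply).encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
threading.Thread(target=daemon_loop, args=(srv,), daemon=True).start()
port = srv.getsockname()[1]

# One task handed to the "C program", with a success code coming back.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(json.dumps({"name": "keepalive"}).encode())
reply = json.loads(cli.recv(4096).decode())
cli.close()
print(reply)
```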
I have an "online now" feature which requires me to set a field in my database, and which I have integrated into my notification updates. As such, this is done via long polling (short polling isn't much better, and long polling results in fewer connections to the server).
I used to do this in PHP, but as those of you who know PHP will understand, PHP runs out of available connections quite quickly, even under FPM.
So I turned to node.js, which is supposed to be able to handle thousands, if not millions, of concurrent connections, but the more I look, the more it seems node.js handles these via event-based programming. Of course, event-based programming has massive benefits.
This is fine for chat apps and the like, but what if I have an "online now" feature that I have integrated into long polling to mark that a user is still online?
Would node.js still get saturated quickly, or can it actually handle these open connections?
Long Polling with Node.js
Long polling will eat into your connection pool, so be sure to set your ulimit high if you're on a Linux or Unix variety.
Ideally you'll maintain state in something like Memcached or Redis; Redis is the preferred approach. You'll subscribe to a pub/sub channel, and every time the user's state updates you'll publish an event. This triggers a handler that causes your long poll to respond with the updated status(es). This is typically preferred to scheduled polling and much cleaner, and as long as you're not looping or otherwise blocking node's thread of execution you shouldn't see any problems.
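The shape of that pub/sub-driven long poll can be sketched in-process: a blocked handler waits on a channel, and publishing a presence change wakes it so it can respond immediately. The `PresenceChannel` class below is a made-up stand-in for a Redis channel (real code would use SUBSCRIBE/PUBLISH across processes), shown in Python only to keep the example runnable.

```python
import threading

# In-process stand-in for a Redis pub/sub channel, just to show the shape:
# a blocked long-poll handler waits on the channel, and publishing a
# presence change wakes it so it can respond at once instead of polling.
class PresenceChannel:
    def __init__(self):
        self._cond = threading.Condition()
        self._state = {}

    def publish(self, user, online):
        with self._cond:
            self._state[user] = online
            self._cond.notify_all()   # wake every blocked long poll

    def wait_for_update(self, timeout=1.0):
        with self._cond:
            self._cond.wait(timeout)  # park here instead of busy-looping
            return dict(self._state)

channel = PresenceChannel()
responses = []

def long_poll_handler():
    # One long-poll request, blocked until a status is published.
    responses.append(channel.wait_for_update())

t = threading.Thread(target=long_poll_handler)
t.start()
channel.publish("alice", True)        # user comes online; handler responds
t.join()
print(responses)
```

The key point for node is the same: the handler is parked, not looping, so thousands of these can sit open without blocking the event loop.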
Short Polling with PHP
As you're already using a PHP stack, it might be preferable not to move away from it. PHP's paradigm (more so php-fpm's) starts a process per connection, and these processes are set to time out, so long polling isn't really an option.
Short polling on an interval can update the state on the front-end. Since you specified that you are using a cronjob, it might be cleaner to just hold the state in memory on the front-end and update it periodically.
This should work, though it might force you to scale earlier, as each user will be sending n more requests. Still, it might be the easiest approach, and you're not adding unnecessary complexity to your stack.
Websockets
Adding websockets for such a simple feature is likely overkill, and websockets themselves can only hold a limited number of connections (depending on your host and configuration), so you're not really solving any of the issues that long polling presents. If you don't plan to use websockets for more than just maintaining user state, you're adding another technology to your stack to solve a simple problem.
I am running a webservice to convert ODT documents to PDF using OpenOffice on an Ubuntu server.
Sadly, OpenOffice chokes occasionally when more than one request is made simultaneously (a conversion takes around 500–1000 ms). This is a real problem, since my webservice is multithreaded and jobs are mostly issued in batches.
What I am looking for is a way to hand off the conversion task from my webservice to an intermediate process that queues all requests and feeds them to OpenOffice one by one.
However, I sometimes want to be able to issue a high-priority conversion that gets processed immediately (after the current one, if busy) and have the webservice wait (block) for it. This is a tricky addition that rules out most simple scheduling techniques.
What you're after is some kind of message/work-queue system.
One of the simplest work queueing systems I've used, that also supports prioritisation, is beanstalkd.
You would have a single process running on your server that runs your conversion when it receives a work request from beanstalkd, and your web application would push a work request onto beanstalkd with the relevant information.
The guys at DigitalOcean have written up a very nice intro to it here:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-beanstalkd-work-queue-on-a-vps
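The scheduling behaviour you described (one conversion at a time, high-priority jobs jumping ahead of waiting work but never preempting the running job) can be sketched in-process with a priority queue; beanstalkd gives you the same semantics across processes via the per-job priority on put/reserve. The file names and priority values below are purely illustrative.

```python
import itertools
import queue
import threading

# A single worker drains a priority queue: normal conversions run one at a
# time, and a high-priority job jumps ahead of anything still waiting (but
# never preempts the job already in progress). With beanstalkd you would
# put/reserve jobs with a priority argument instead of using an in-process
# queue like this.
HIGH, NORMAL = 0, 10             # lower number = served first
counter = itertools.count()      # tie-breaker keeps FIFO order per priority

jobs = queue.PriorityQueue()
processed = []

def worker():
    while True:
        prio, _, name = jobs.get()
        if name is None:         # sentinel: shut the worker down
            break
        processed.append(name)   # here you would run the OpenOffice conversion

for doc in ["a.odt", "b.odt", "c.odt"]:
    jobs.put((NORMAL, next(counter), doc))
jobs.put((HIGH, next(counter), "urgent.odt"))   # queued last, served first
jobs.put((NORMAL, next(counter), None))

t = threading.Thread(target=worker)
t.start()
t.join()
print(processed)
```

The blocking behaviour you want for the webservice falls out naturally: the caller that submitted the urgent job just waits on its result while the single worker guarantees OpenOffice only ever sees one request at a time.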
I've heard that node.js is very well suited to applications where a persistent connection from the browser to the server is needed, using the "long polling" technique, which allows updates to be sent to the user in real time without needing a lot of server resources. A more traditional server model would need a thread for every single user.
My question: what is done instead, and how are the requests served differently?
Why doesn't it take so many resources?
Node.js is event-driven. The node script is started and then loops continuously, waiting for events to be fired, until stopped. Once it's running, the overhead associated with loading has already been paid.
Compare this to a more traditional language such as C#/.NET or PHP, where a request causes the server to load and run the script and its dependencies. The script then does its task (often serving a web page) and shuts down. When another page is requested, the whole process starts again.
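The difference can be seen in miniature with an event loop: one thread parks a thousand concurrent "connections" at the same time, instead of dedicating a thread or process to each. A hedged Python asyncio sketch of the same model node uses (the sleep stands in for waiting on a long-poll socket):

```python
import asyncio
import time

# One thread, one loop: every "connection" is parked on an await at the
# same time, so a thousand concurrent waiters cost almost nothing compared
# with a thread or process per user.
async def handle_user(user_id, wakeups):
    await asyncio.sleep(0.1)   # stands in for waiting on a long-poll socket
    wakeups.append(user_id)

async def main():
    wakeups = []
    start = time.monotonic()
    await asyncio.gather(*(handle_user(i, wakeups) for i in range(1000)))
    return len(wakeups), time.monotonic() - start

count, elapsed = asyncio.run(main())
print(count, round(elapsed, 2))  # the 1000 waits overlap instead of serializing
```

If each wait were serialized, a thousand 100 ms waits would take 100 seconds; overlapped on one loop they finish in roughly the time of a single wait.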
Even with a poor network connection?
Specifically, I've written code which launches a separate thread (from the UI) that attempts to upload a file via HTTP POST. I've found, however, that if the connection is bad, the processor gets stuck on OutputStream.close() or HttpConnection.getHeaderField() or any read/write that forces data over the network. This not only hangs the thread, but steals the entire processor, so even the user interface becomes unresponsive.
I've tried lowering the priority of the thread, to no avail.
My theory is that there is no easy way to avoid this behavior, which is why all the J2ME tutorials instruct developers to create a "sending data over the network…" screen instead of just sending everything in a background thread. If someone can prove me wrong, that would be fantastic.
Thanks!
One important aspect is that you need a generic UI or screen you can display when the network call in the background fails. It is pretty much a must for any mobile app, J2ME or otherwise.
As Honza said, it depends on the design; there are many things that can be done, like pre-fetching data on app startup, pre-fetching data based on the screen that is loaded (i.e. the navigation path), or having a default data set built into the app, etc.
Another thing you can try is a built-in timer mechanism that retries the data download after a certain amount of time, aborting after, say, 5 tries or 1–2 minutes and displaying a generic screen or error message.
Certain J2ME handsets allow detection of airplane mode; if possible, you can detect that and promptly display an appropriate screen.
One design that has worked for me is synchronizing the UI and networking threads so that they don't lock each other up (take this bit of advice with a heavy dose of salt, as I have had quite a few interesting bugs on some Samsung and Sanyo handsets because of it).
All in all, there's no single good answer for you, just different strategies.
It pretty much depends on how you write the code and where you run it. On CLDC the concept of threading is pretty limited, and if one thread is doing some long-lasting operation, other threads might be (and usually are) blocked by it as well. You should take that into account when designing your application.
You can divide your file data into chunks and then upload each chunk with multiple retries on failure. This depends on your application strategy: if your priority is to upload the bulk of the data without failure, you need to reassemble the chunks on the server to rebuild your data. This has the overhead of making more connections, but the chance is high that your data will get uploaded. If you are not uploading files concurrently, this will work with ease.
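A language-agnostic sketch of that chunk-and-retry strategy (the J2ME version would open an HttpConnection per chunk; `send_chunk` here is a made-up stand-in that fails randomly, the way a flaky mobile link would, and the chunk size, retry limit, and failure rate are all illustrative):

```python
import random

# Split the file, upload each chunk with a bounded number of retries, and
# let the server reassemble the pieces. send_chunk stands in for the real
# per-chunk HTTP POST and fails randomly like a poor mobile connection.
CHUNK_SIZE = 4
MAX_RETRIES = 5
rng = random.Random(42)          # seeded so the sketch is reproducible

def send_chunk(chunk):
    if rng.random() < 0.3:       # simulated dropped connection
        raise IOError("connection dropped")
    return chunk                 # the server's copy of the chunk

def upload(data):
    received = []                # what the server reassembles
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        for _ in range(MAX_RETRIES):
            try:
                received.append(send_chunk(chunk))
                break
            except IOError:
                continue         # retry just this chunk, not the whole file
        else:
            raise IOError("chunk failed %d times, giving up" % MAX_RETRIES)
    return b"".join(received)

data = b"some file contents to upload"
result = upload(data)
print(result == data)
```

The point of the structure is that a dropped connection costs you one chunk's worth of re-upload rather than the whole file, which is what makes it attractive on an unreliable link.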