Linux command line queueing system

I am running a webservice to convert ODT documents to PDF using OpenOffice on an Ubuntu server.
Sadly, OpenOffice chokes occasionally when more than one request is made simultaneously (a conversion takes around 500-1000 ms). This is a real problem, since my webservice is multithreaded and jobs are mostly issued in batches.
What I am looking for is a way to hand off the conversion task from my webservice to an intermediate process that queues all requests and feeds them one by one to OpenOffice.
However, I sometimes want to be able to issue a high-priority conversion that gets processed immediately (after the current one, if busy) and have the webservice wait (block) for it. This is a tricky addition that rules out most simple scheduling techniques.

What you're after is some kind of message/work queue system.
One of the simplest work queueing systems I've used, that also supports prioritisation, is beanstalkd.
You would have a single process running on your server that runs your conversion when it receives a work request from beanstalkd, and your web application would push work requests (with the relevant information, including a priority) onto beanstalkd.
The guys at DigitalOcean have written up a very nice intro to it here:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-beanstalkd-work-queue-on-a-vps
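Even before reaching for beanstalkd, the core mechanic the question asks for (serialize conversions through one worker, let high-priority jobs jump the queue, and let the caller block until its job finishes) fits in a few lines of Node.js. A minimal sketch, assuming a LibreOffice-style soffice command line (adjust the flags for your OpenOffice install; all names here are illustrative):

    // One worker serializes conversions; urgent jobs are served first.
    const { execFile } = require('child_process');

    const normal = [];   // FIFO of pending jobs
    const urgent = [];   // high-priority jobs, always served first
    let busy = false;

    function enqueue(odtPath, outDir, highPriority = false) {
      return new Promise((resolve, reject) => {
        (highPriority ? urgent : normal).push({ odtPath, outDir, resolve, reject });
        pump();
      });
    }

    function pump() {
      if (busy) return;
      const job = urgent.shift() || normal.shift();
      if (!job) return;
      busy = true;
      execFile('soffice',
        ['--headless', '--convert-to', 'pdf', '--outdir', job.outDir, job.odtPath],
        (err) => {
          busy = false;
          err ? job.reject(err) : job.resolve();
          pump();   // start the next queued conversion
        });
    }

    // A high-priority caller simply awaits its promise (blocks that request):
    // await enqueue('/tmp/doc.odt', '/tmp/out', true);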

Related

API getting Slow due to Iteration using Node JS

I am using Node.js to create a REST API.
In my scenario I have two APIs:
API 1 --> has to fetch 10,000 records and iterate over them to modify some of the data
API 2: a simple GET method.
When I open Postman and hit the first API and the second API in parallel,
the second API responds slowly, because Node.js is single threaded.
My expectation:
Even though the 1st API takes time, it should not delay the 2nd API for so long.
From the Node.js docs I found the clustering concept:
https://nodejs.org/dist/latest-v6.x/docs/api/cluster.html
So I implemented clustering and it created 4 server instances.
Now when I hit API 1 in one tab and API 2 in a second tab, it works fine.
But when I open API 1 in 4 tabs and API 2 in a 5th tab, the slowness appears again.
What would be the best solution to solve the issue?
Because of the single threaded nature of node.js, the only way to make sure your server is always responsive to quick requests such as you describe for API2 is to make sure that you never have any long running operations in your server.
When you do encounter some operation in your code that takes a while to run and would affect the responsiveness of your server, your options are as follows:
Move the long-running work to a new process. Start up a new process and run the lengthy operation there. This allows your server process to stay active and responsive to other requests, even while the other long-running process is still crunching on its data.
Start up enough clusters. Using the clustering you've investigated, start up more clustered processes than you expect to have simultaneous calls to your long-running operation. This ensures there is always at least one clustered process available to be responsive. Sometimes you cannot predict how many that will be, or it will be more than you can practically create.
Redesign your long-running process to execute its work in chunks, returning control to the system between chunks so that node.js can interleave other work it is trying to do with the long-running work. A sketch of processing a large array in chunks follows below; the technique is usually demonstrated for the browser, but the concept of not blocking the event loop for too long is the same in node.js.
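A minimal sketch of that chunking pattern (function and parameter names are illustrative):

    // Handle `items` a slice at a time, yielding back to the event loop
    // between slices via setImmediate so other requests keep being served.
    function processInChunks(items, handleOne, chunkSize = 500) {
      return new Promise((resolve) => {
        let i = 0;
        function next() {
          const end = Math.min(i + chunkSize, items.length);
          for (; i < end; i++) handleOne(items[i]);   // one slice of work
          if (i < items.length) setImmediate(next);   // let other work run
          else resolve();
        }
        next();
      });
    }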
Speed up the long-running task. Find a way to speed up the long-running job so it doesn't take so long (caching, not returning so many results at once, a faster algorithm, etc.).
Create N worker processes (probably one less worker process than the number of CPUs you have) and create a work queue for the long running tasks. Then, when a long running request comes in, you insert it in the work queue. Then, each worker process is free to work on items in the queue. When more than N long tasks are being requested, the first ones will get worked on immediately, the later ones will wait in the queue until there is a worker process available to work on them. But, most importantly, your main node.js process will stay free and responsive for regular requests.
This last option is the most foolproof because it remains effective for any number of long-running requests, though all of these schemes can help; a minimal sketch of it follows.
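A rough sketch of that worker-queue pattern, assuming a companion worker.js that replies to each task via process.send() (file name and structure are illustrative):

    const { fork } = require('child_process');
    const os = require('os');

    const queue = [];     // pending long-running tasks
    const workers = [];   // { proc, busy, job }

    // One less worker than cores, so the main process stays responsive.
    for (let i = 0; i < Math.max(1, os.cpus().length - 1); i++) {
      workers.push({ proc: fork('./worker.js'), busy: false, job: null });
    }

    function submit(task) {
      return new Promise((resolve) => {
        queue.push({ task, resolve });
        dispatch();
      });
    }

    function dispatch() {
      const idle = workers.find((w) => !w.busy);
      if (!idle || queue.length === 0) return;
      idle.busy = true;
      idle.job = queue.shift();
      idle.proc.once('message', (result) => {   // worker sent its result
        idle.job.resolve(result);
        idle.busy = false;
        idle.job = null;
        dispatch();                             // pull the next task
      });
      idle.proc.send(idle.job.task);
    }

Under that assumption, worker.js would be little more than process.on('message', (task) => process.send(doWork(task))).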
Node.js actually is not multi-threaded, so all of these requests are just being handled in the event loop of a single thread.
Each Node.js process runs your JavaScript in a single thread, and by default the V8 heap it uses is limited (historically around 512 MB on 32-bit systems and roughly 1-1.5 GB on 64-bit systems; the limit can be raised with the --max-old-space-size flag).
However, you can split a single process into multiple processes or workers. This can be achieved through the cluster module. The cluster module allows you to create child processes (workers) that can share server ports with the main Node process.
You can invoke the cluster API directly in your app, or you can use one of the many abstractions over it:
https://nodejs.org/api/cluster.html
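For reference, a minimal self-contained example using only the documented cluster API (the port number is arbitrary):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU; they all share the same listening port.
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
      cluster.on('exit', (worker) => {
        console.log(`worker ${worker.process.pid} died, starting a new one`);
        cluster.fork();   // keep the pool full
      });
    } else {
      http.createServer((req, res) => {
        res.end(`handled by pid ${process.pid}\n`);
      }).listen(8000);
    }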

Nodejs scaling and prioritising functions

We have a node application running on the server that gets hit a lot and has to compile a zip file for download. That works well so far but I am nervous we will hit a point where performance becomes an issue.
(The application is currently running with forever on an Ubuntu 14.04 machine.)
I am now asked to add all kinds of new features to the app which are more secondary and should not decrease the performance of the main function (the zip download). It would be OK for those additional features to fail when the app is under heavy load, in favour of the main zipping process.
What is the best practice here? Creating a REST API for the secondary features and putting everything into a waiting list? It surely isn't enough to just create a second app and spawn a new process each time the main zip process finishes? How can I ensure the most redundancy? I'm not talking about a multi-core cluster setup or load balancing with NGINX, but about a smart way of prioritising application functions at the application level.
I hope this is not too broad. Cheers
First off, everything should be using async I/O, no synchronous I/O anywhere in your server. That's the #1 rule for building a scalable node.js server.
Second off, the highest-priority tasks that have any significant CPU usage should be allowed to use multiple cores. If, as you say, the highest-priority task is creating the zip download, then you should make sure that operation can take advantage of multiple cores.
You can accomplish that either with clustering (your whole server runs multiple instances, each of which can be on a separate core) or by creating a set of processes specifically for creating the zip files, with a work queue in the main process that feeds those other processes work and gets the results back from them. This second option is likely a bit more complex to code than clustering, but it prioritizes the zip file creation so that only one core serves other server needs and all the other cores work on zip file creation. Clustering shares all cores among all server responsibilities.
At the pure server application level, your server can maintain a work queue of all incoming work to be done no matter what kind and it can prioritize that work. For example, if an API call comes in and there are already N zip file requests in the queue, you could immediately fail the API call to keep it from building up on the server. I don't think I'd personally recommend that solution unless your API calls are really heavy operations because it's very hard for a developer to reliably use your API if it regularly just fails on them. They would generally find it better for the API to just be slow sometimes than to regularly fail.
You might not even have to use a queue, you could just use a counter to keep track of how many ZIP file requests were "in process", but you'd have to make absolutely sure the counter was accurate in all cases. If there was ever an accumulating error in the counter, then you might just end up failing all API requests until your server was restarted.
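As a rough illustration of that counter idea (route names, the limit, and createZip are all hypothetical stand-ins), sketched with Express:

    const express = require('express');
    const app = express();

    const MAX_ZIPS = 4;     // illustrative cap, e.g. one per core
    let zipsInFlight = 0;

    // Hypothetical stand-in for the real zip builder.
    function createZip(id) {
      return Promise.resolve(`/tmp/${id}.zip`);
    }

    // Secondary API: shed load while the server is busy zipping.
    app.get('/api/secondary', (req, res) => {
      if (zipsInFlight >= MAX_ZIPS) {
        return res.status(503).send('Busy creating zip files, try again shortly');
      }
      res.send('quick response');
    });

    // Primary zip endpoint: count in-flight work, decrementing in ALL
    // cases so the counter can never drift.
    app.get('/zip/:id', (req, res) => {
      zipsInFlight++;
      createZip(req.params.id)
        .then((file) => res.download(file))
        .catch((err) => res.status(500).send(err.message))
        .finally(() => { zipsInFlight--; });
    });

    app.listen(3000);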

Node.js Clusters with Additional Processes

We use clustering with our express apps on multi-CPU boxes. It works well; we get maximum use out of our AWS Linux servers.
We inherited an app we are fixing up. It's unusual in that it has two processes. It has an Express API portion to take incoming requests. But the process that acts on those requests can run for several minutes, so it was built as a separate background process, node calling python and maya.
Originally the two were tightly coupled, with the python script called by the request to upload the data. But this of course was suboptimal, as it would leave the client waiting for a response for the time it took to run, so it was rewritten as a background process that runs in a loop, checking for new uploads, and processing them sequentially.
So my question is this: if we have this separate node process running in the background, and we run clusters, which starts up a process for each CPU, how is that going to work? Are we not going to get two node processes competing for the same CPU? We were getting a bit of weird behaviour and crashing yesterday, without a lot of error messages (god I love node), so it's a bit concerning. I'm assuming Linux will just swap the processes in and out as they are being used. But I wonder if it will be problematic, and I also wonder about someone's web session getting swapped out for several minutes while the longer-running process runs.
The smart thing to do would be to rewrite this to run on two different servers, but the files that maya uses/creates are on the server's file system, and we were not given the budget to rebuild the way we should. So, we're stuck with this architecture for now.
Any thoughts on possible problems and how to avoid them would be appreciated.
From an overall architecture perspective, spawning one nodejs process per core is a great way to go. You have a lot of interdependencies though: the nodejs processes are calling maya, which may use multiple threads (keep that in mind).
The part that is concerning to me is your random crashes and your "process that runs in a loop". If that process is just checking the file system you probably have a race condition where the nodejs processes are competing to work on the same input/output files.
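One common way to remove that kind of race is to have each worker "claim" an upload with an atomic rename before touching it; rename(2) is atomic on the same filesystem, so only one process can win. A sketch, with illustrative paths:

    const fs = require('fs');
    const path = require('path');

    const INCOMING = '/data/uploads/incoming';
    const CLAIMED  = `/data/uploads/claimed-${process.pid}`;

    function claimNextUpload() {
      fs.mkdirSync(CLAIMED, { recursive: true });
      for (const name of fs.readdirSync(INCOMING)) {
        try {
          // Succeeds for exactly one process; losers get ENOENT and move on.
          fs.renameSync(path.join(INCOMING, name), path.join(CLAIMED, name));
          return path.join(CLAIMED, name);
        } catch (err) {
          if (err.code !== 'ENOENT') throw err;
        }
      }
      return null;   // nothing to claim right now
    }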
In theory, one nodejs process per core will work great and should help you utilize all your CPU capacity. The Linux scheduler moves processes on and off the CPUs all the time, so that is not an issue; you could even start multiple nodejs processes per core and still not have a problem.
One last note: keep an eye on your memory usage. Several Linux distributions on EC2 do not have a swap file enabled by default, and running out of memory can be another silent app killer, so it's best to add a swap file in case you run into memory issues.

Controlling the flow of requests without dropping them - NodeJS

I have a simple nodejs webserver running; it:
Accepts requests
Spawns a separate thread to perform background processing
Background thread returns results
App responds to client
Using Apache benchmark "ab -r -n 100 -c 10", performing 100 requests with 10 at a time.
Average response time of 5.6 seconds.
My logic for using nodejs is that it is typically quite resource-efficient, especially when the bulk of the work is being done by another process. It seems like the most lightweight webserver option for this scenario.
The Problem
With 10 concurrent requests my CPU was maxed out, which is no surprise since there is CPU-intensive work going on in the background.
Scaling horizontally is an easy thing to do, although I want to make the most out of each server for obvious reasons.
So, with nodejs (either raw or with some framework), how can one keep this under control so as not to overload the CPU?
Potential Approach?
Could I accept the request, store it in a DB or some other persistent storage, and have a separate process that uses an async library to process x jobs at a time?
In your potential approach, you're basically describing a queue. You can store incoming messages (jobs) there and have each process get one job at a time, only getting the next one when processing of the previous job has finished. You could spawn a number of processes working in parallel, such as an amount equal to the number of cores in your system. Spawning more won't help performance, because multiple processes sharing a core will just run slower. Keeping one core free might be preferable, to keep the system responsive for administrative tasks.
Many different queues exist. A node-based one using redis for persistence that seems to be well supported is Kue (I have no personal experience using it). I found a tutorial for building an implementation with Kue here. Depending on the software your environment is running in though, another choice might make more sense.
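To make the shape of that concrete, here is a rough sketch along the lines of Kue's documented API (job type, payload, and doHeavyWork are illustrative; treat this as a starting point, not a verified implementation):

    const kue = require('kue');
    const queue = kue.createQueue();   // uses a local redis by default

    // Hypothetical stand-in for the real CPU-heavy task.
    function doHeavyWork(data) {
      return Promise.resolve(data);
    }

    // Producer side: the web server enqueues instead of processing inline.
    function enqueueJob(payload) {
      queue.create('background-work', payload)
        .attempts(3)   // retry a failed job a few times
        .save();
    }

    // Consumer side: each worker process takes one job at a time; start
    // one worker process per core (or cores - 1) to get parallelism
    // without blocking any single event loop.
    queue.process('background-work', (job, done) => {
      doHeavyWork(job.data)
        .then(() => done())
        .catch(done);
    });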
Good luck and have fun!

Is it possible to upload data as a background process in j2me?

Even with a poor network connection?
Specifically, I've written code which launches a separate thread (from the UI) that attempts to upload a file via HTTP POST. I've found, however, that if the connection is bad, the processor gets stuck on outputstream.close() or httpconnection.getheaderfield() or any read/write that forces data over the network. This not only causes the thread to get stuck, but steals the entire processor, so even the user interface becomes unresponsive.
I've tried lowering the priority of the thread, to no avail.
My theory is that there is no easy way of avoiding this behavior, which is why all the J2ME tutorials instruct developers to create a 'sending data over the network…' screen, instead of just sending everything in a background thread. If someone can prove me wrong, that would be fantastic.
Thanks!
One important aspect is that you need to have a generic UI or screen that can be displayed when the network call in the background fails. It is pretty much a must for any mobile app, J2ME or otherwise.
As Honza said, it depends on the design; there are so many things that can be done, like pre-fetching data on app startup, or pre-fetching data based on the screen that is loaded (i.e. the navigation path), or having a default data set built into the app, etc.
Another thing that you can try is a built-in timer mechanism that retries the data download after a certain amount of time, aborting after, say, 5 tries or 1-2 minutes and displaying a generic screen or error message.
Certain handsets in J2ME allow detection of airplane mode, if possible you can detect that and promptly display an appropriate screen.
Also, one design that has worked for me is synchronizing the UI and networking threads so that they don't lock each other up (take this bit of advice with a heavy dose of salt, as I have had quite a few interesting bugs on some Samsung and Sanyo handsets because of this).
All in all no good answer for you, but different strategies.
It pretty much depends on how you write the code and where you run it. On CLDC the concept of threading is pretty limited, and if any thread is doing some long-lasting operation, other threads might be (and usually are) blocked by it as well. You should take that into account when designing your application.
You can divide your file data into chunks and then upload them with multiple retries on failure. This depends on your application strategy. If your priority is to upload bulk data without failure, you need to reassemble the chunks on the server to rebuild your data. This may add the overhead of making more connections, but the chances are high that your data will get uploaded. If you are not uploading files concurrently, this will work with ease.
