I've scaled up the hardware on an Azure Databricks cluster (an "all-purpose" cluster) so that it should handle a very large amount of work. The application is designed so that incoming data is processed in smallish, discrete chunks. Each job runs in ~20 to 30 seconds, but there is a high degree of concurrency: anywhere from 0 to 50 jobs may need to execute at the same time.
The only approach for delivering jobs to the cluster seems to be by way of the Azure Databricks Jobs REST API (doc: https://docs.databricks.com/dev-tools/api/latest/jobs.html ).
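For reference, here is roughly what each submission looks like over that API. This is a hedged sketch assuming the 2.1 Jobs endpoints and a pre-defined job; the workspace URL, token and job ID are placeholders.

```python
# Rough sketch: each incoming chunk becomes one REST call that triggers a job run.
# WORKSPACE_URL, TOKEN and JOB_ID are placeholders for your own values.
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"
JOB_ID = 123

def submit_chunk(chunk_id: str) -> int:
    """Trigger one ~20-30 second job run for a single chunk of data."""
    resp = requests.post(
        f"{WORKSPACE_URL}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": JOB_ID, "notebook_params": {"chunk_id": chunk_id}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]  # can later be polled via /api/2.1/jobs/runs/get
```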
Everything behaves normally until the number of concurrent jobs reaches 10 or so. At that point I see an unreasonable deterioration in throughput. But if I check Ganglia or our custom telemetry, there appears to be no reason for the deteriorated performance.
My suspicion is that the REST API itself is introducing an artificial bottleneck and they are throttling the number of jobs I can send over to my cluster. This was not self-evident to me. If I am paying for a large cluster, I should be allowed to send jobs to it. The REST API seems to be doing little more than serving as a communication channel that allows me to transmit my requests to my cluster. That API is the last place I would expect to find a resource bottleneck. A Spark developer would naturally investigate their code, then the cluster hardware. The REST API is not a reasonable place for Databricks to be introducing some additional, secretive limitations.
Does anyone know of another way to transmit distinct jobs to a cluster without going through the REST API? E.g., is there a way for the driver node in the cluster to spawn additional/distinct/first-class jobs without being counted against our REST API allowance?
This issue seems silly and artificial. The secretive nature of these limits is bothersome to me as well. If they are throttling the REST API, then there should be a warning, an error, or a Ganglia chart for it. Otherwise developers will struggle with these performance issues through trial and error and guesswork.
Any help is appreciated. I'd prefer not to go all the way back to the drawing board, because of an artificial restriction in their REST API (one that was probably put in place to protect an underpowered "control plane").
Spark is awesome, but it isn't designed to be a high-concurrency database. The folks at Databricks have done a lot to lift the concurrency limitations of Spark, but it still isn't a high-concurrency solution.
In other words, your problem isn't the REST API ... it's the Spark engine in Databricks.
I know you don't want to go back to the drawing board, but the choices here are all bad ones:
You can run multiple Databricks clusters (https://docs.databricks.com/clusters/index.html) and use NGINX or some other load balancer to distribute the API requests. This will get expensive quickly, but it avoids a redesign (a rough client-side sketch of this follows below).
If your use case supports it, try using a real-time database that supports high concurrency. I like Druid (see https://druid.apache.io, or https://imply.io if you want a managed version), but there are others in the same category.
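On the first option: a dedicated load balancer isn't strictly required for the REST traffic itself, since the submissions can be round-robined at the client. A rough, hypothetical sketch (the workspace URLs, tokens and job IDs are placeholders):

```python
# Client-side round robin across several workspaces/clusters, as an alternative
# to fronting them with NGINX. Each entry is a placeholder for a real workspace.
import itertools
import requests

WORKSPACES = [
    {"url": "https://adb-111.azuredatabricks.net", "token": "<token-1>", "job_id": 101},
    {"url": "https://adb-222.azuredatabricks.net", "token": "<token-2>", "job_id": 202},
]
_rotation = itertools.cycle(WORKSPACES)

def submit_round_robin(chunk_id: str) -> int:
    ws = next(_rotation)  # pick the next workspace in rotation
    resp = requests.post(
        f"{ws['url']}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {ws['token']}"},
        json={"job_id": ws["job_id"], "notebook_params": {"chunk_id": chunk_id}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]
```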
NOTE: This question is framed in the context of a private network where the business network operator owns and manages all the nodes on the network as a service, and only provides access via a REST API or a web GUI.
Assuming that the application is mostly batch-based and not real-time, is it possible to run nodes in bursts, so that they start once an hour, process any transactions, and then shut down again when the processing is complete?
Or maybe have a trigger that starts up the node automatically when it is needed.
Azure has per-second billing, which has the potential to drastically reduce infrastructure costs.
Generally speaking, this wouldn't be possible. You can think of nodes as being like email servers: you never know when an email (i.e. a transaction, for a node) comes in, so they have to be online all the time.
However, if you control all nodes on the network, you could build a queuing system outside of Corda: spin up all the nodes on the network once an hour, and then process your own queue by sending the transactions at that point.
This would likely become tricky once you have other entities you don't control on the network, though. You could also run the nodes on the smallest possible instances on Azure and keep the cost down that way?
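If you do control every node, here is a rough sketch of the hourly-burst idea using the current azure-mgmt-compute SDK; the subscription, resource group, VM names, and the queue-draining step are all placeholders:

```python
# Rough sketch of the hourly burst, assuming you control every node VM:
# start the VMs, let the queued transactions drain, then deallocate so you only
# pay for the processing window. drain_corda_queue() stands in for whatever
# logic sends the queued transactions and waits for the flows to finish.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"          # placeholder
RESOURCE_GROUP = "corda-network-rg"            # placeholder
NODE_VMS = ["corda-node-a", "corda-node-b"]    # placeholders

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def run_hourly_burst(drain_corda_queue) -> None:
    """Start all node VMs, process the backlog, then deallocate to stop billing."""
    for vm in NODE_VMS:
        compute.virtual_machines.begin_start(RESOURCE_GROUP, vm).result()
    try:
        drain_corda_queue()
    finally:
        for vm in NODE_VMS:
            # deallocate (not just power off) so compute billing stops
            compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm).result()
```

Triggered from a timer (an Azure Function, Logic App, or plain cron), this keeps the nodes deallocated outside the processing window.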
Currently I am solving an engineering problem, and want to open the conversation to the SO community.
I want to implement a task scheduler. I have two separate instances of a Node.js application sitting behind an elastic load balancer (ELB). The problem is that when both instances come up, they both try to execute the same task logic, causing the tasks to run more than once.
My current solution is to use node-schedule to schedule the tasks, then have each instance check the database to see whether the task has already been run within its specified run interval.
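For illustration, here is a condensed sketch of that check, shown in Python with SQLite purely for brevity; the table name, interval, and task IDs are hypothetical, and in the real app the check runs inside the node-schedule callback against the shared database:

```python
# Hypothetical sketch of the "check the database before running" approach.
# The task_runs table and the 15-minute interval are made up for illustration.
import sqlite3
import time

INTERVAL_SECONDS = 15 * 60  # hypothetical run interval

def ensure_schema(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS task_runs (task_id TEXT PRIMARY KEY, last_run REAL)"
    )

def run_if_due(conn: sqlite3.Connection, task_id: str, do_work) -> None:
    now = time.time()
    row = conn.execute(
        "SELECT last_run FROM task_runs WHERE task_id = ?", (task_id,)
    ).fetchone()
    if row is not None and now - row[0] < INTERVAL_SECONDS:
        return  # the other instance already ran this task in the current window
    do_work()
    conn.execute(
        "INSERT INTO task_runs (task_id, last_run) VALUES (?, ?) "
        "ON CONFLICT(task_id) DO UPDATE SET last_run = excluded.last_run",
        (task_id, now),
    )
    conn.commit()
    # Note: the SELECT and the work are not atomic, so two instances waking at
    # exactly the same moment can still both run the task.
```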
The logic here is a little messy, and I am wondering if there is a more elegant way I could go about doing this.
Perhaps it is possible to set a particular env variable on a specific instance - so that only that instance will run the tasks.
What do you all think?
What you are describing appears to be a perfect example of a use case for AWS Simple Queue Service.
https://aws.amazon.com/sqs/details/
Key points to look out for in your solution:
Make sure that you pick a visibility timeout that is reflective of your workload (so messages don't re-enter the queue whilst still being processed by another worker).
Don't store your workload in the message, reference it! A message can only be up to 256 KB in size, and message size has an impact on performance and cost.
Make sure you understand billing! Requests are charged in 64 KB chunks, meaning one 220 KB message is charged as 4 x 64 KB chunks/requests.
If you make your messages small, you can save more money by doing batch requests as your bang for buck will be far greater!
Use long polling to retrieve messages to get the most value out of your message requests (see the consumer sketch below).
Grant your application permissions to SQS by using an EC2 IAM role, as this is best security practice and the approach recommended by AWS.
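Here is a minimal consumer sketch tying these points together, using boto3 as an illustration (the same calls exist in the AWS SDK for JavaScript); the region, queue URL, and task handler are placeholders:

```python
# Minimal consumer sketch: long polling, a visibility timeout sized for the task,
# batch receives, and deletion only after the work succeeds.
# QUEUE_URL, the region and run_task() are placeholders.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"  # placeholder
sqs = boto3.client("sqs", region_name="us-east-1")                    # placeholder region

def run_task(payload: dict) -> None:
    ...  # the actual task; the message carries only a small reference, not the workload

def consume_forever() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,   # batch receive for better value per request
            WaitTimeSeconds=20,       # long polling
            VisibilityTimeout=60,     # longer than the worst-case task duration
        )
        for msg in resp.get("Messages", []):
            run_task(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    consume_forever()
```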
It's an excellent service, and should resolve your current need nicely.
Thanks!
Xavier.
I'm running a Windows Azure web role which, on most days, receives very low traffic, but there are some (foreseeable) events which can lead to a high amount of background work which has to be done. The background work consists of many database calls (Azure SQL) and HTTP calls to external web services, so it is not really CPU-intensive, but it requires a lot of threads which are waiting for the database or the web service to answer. The background work is triggered by a normal HTTP request to the web role.
I see two options to orchestrate this, and I'm not sure which one is better.
Option 1, Threads: When the request for the background work comes in, the web role starts as many threads as necessary (or queues the individual work items to the thread pool). In this option, I would configure a larger instance during the heavy workload, because these threads could require a lot of memory.
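To make option 1 concrete, here is a minimal sketch of the fan-out, assuming the items are I/O-bound database/web-service calls; the handler and work list are placeholders, and the actual web role would do the equivalent with the .NET thread pool:

```python
# Illustrative only: fan out I/O-bound work items across a thread pool, as in
# option 1. fetch_and_store() and work_items are placeholders for the real
# database/web-service calls.
from concurrent.futures import ThreadPoolExecutor

def fetch_and_store(item: str) -> None:
    ...  # call the external web service, then write the result to the database

def process_background_work(work_items: list[str]) -> None:
    # Threads mostly wait on I/O, so a pool larger than the CPU count is fine,
    # but each waiting thread still holds memory -- hence the larger instance.
    with ThreadPoolExecutor(max_workers=64) as pool:
        list(pool.map(fetch_and_store, work_items))
```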
Option 2, Self-Invoking: When the request for the background work comes in, the web role which receives it generates an HTTP request to itself for every item of background work. In this option, I could configure several web role instances, because the load balancer of Windows Azure balances the HTTP requests across the instances.
Option 1 is somewhat more straightforward, but it has the disadvantage that only one instance can process the background work. If I want more than one Azure instance to participate in the background work, I don't see any other option than sending HTTP requests from the role to itself, so that the load balancer can delegate some of the work to the other instances.
Maybe there are other options?
EDIT: Some more thoughts about option 2: When the request for the background work comes in, the instance that receives it would save the work to be done in some kind of queue (either Windows Azure Queues or some SQL table which works as a task queue). Then, it would generate a lot of HTTP requests to itself, so that the load balancer 'activates' all of the role instances. Each instance then dequeues a task from the queue and performs the task, then fetches the next task etc. until all tasks are done. It's like occasionally using the web role as a worker role.
I'm aware this approach has a smelly air (abusing web roles as worker roles, HTTP requests to the same web role), but I don't see the real disadvantages.
EDIT 2: I see that I should have elaborated a little bit more about the exact circumstances of the app:
The app needs to do some small tasks all the time. These tasks usually don't take more than 1-10 seconds, and they don't require a lot of CPU work. On normal days, we have only 50-100 tasks to be done, but on 'special days' (New Year is one of them), they can grow to several tens of thousands of tasks which have to be done inside a 1-2 hour window. The tasks are done in a web role, and we have a cron job which initiates the tasks every minute. So, every minute the web role receives a request to process new tasks; it checks which tasks have to be processed and adds them to some sort of queue (currently it's an SQL table with an UPDATE with OUTPUT INSERTED, but we intend to switch to Azure Queues sometime). Currently, the same instance processes the tasks immediately after queueing them, but this won't scale, since serially processing several tens of thousands of tasks takes too long. That's the reason why we're looking for a mechanism to broadcast the event "tasks are available" from the initial instance to the others.
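For when that switch happens, here is a rough sketch of the Azure Queues variant, using the current azure-storage-queue SDK purely as an illustration; the connection string, queue name, and task handler are placeholders:

```python
# Rough sketch of the intended Azure Queue variant of the task table.
# The connection string, queue name and process_task() are placeholders.
import json
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "tasks")

def enqueue_due_tasks(task_ids) -> None:
    # called once per cron tick with whatever tasks are due
    for task_id in task_ids:
        queue.send_message(json.dumps({"task_id": task_id}))

def drain_queue(process_task) -> None:
    # any instance can run this; each received message is hidden from the other
    # instances for visibility_timeout seconds while it is being processed
    for msg in queue.receive_messages(messages_per_page=32, visibility_timeout=60):
        process_task(json.loads(msg.content))
        queue.delete_message(msg)  # only after success; otherwise it reappears
```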
Have you considered using Queues for distributing the work? You can put the "tasks" which need to be processed in a queue and then distribute the work to many worker processes.
The problem I see with approach 1 is that it is a "Scale Up" pattern rather than a "Scale Out" pattern. Deploying many small VM instances instead of one large instance will give you more scalability and availability, IMHO. Furthermore, you mentioned that your jobs are not CPU-intensive. If you consider the X-Small instance, then for the cost of 1 Small instance ($0.12/hour) you can deploy 6 X-Small instances ($0.02/hour), and likewise for the cost of 1 Large instance ($0.48) you could deploy 24 X-Small instances.
Furthermore, it's easy to scale in the case of a "Scale Out" pattern, as you just add or remove instances. In the case of a "Scale Up" (or "Scale Down") pattern, since you're changing the VM size, you would end up redeploying the package.
Sorry, if I went a bit tangential :) Hope this helps.
I agree with Gaurav and others to consider one of the Azure Queue options. This is really a convenient pattern for cleanly separating concerns while also smoothing out the load.
This basic Queue-Centric Workflow (QCW) pattern has the work request placed on a queue in the handling of the Web Role's HTTP request (the mechanism that triggers the work, apparently done via a cron job that invokes wget). Then the IIS web server in the Web Role goes on doing what it does best: handling HTTP requests. It does not require any support from a load balancer.
The Web Role needs to accept requests as fast as they come (then enqueues a message for each), but the dequeue part is a pull so the load can easily be tuned for available capacity (or capacity tuned for the load! this is the cloud!). You can choose to handle these one at a time, two at a time, or N at a time: whatever your testing (sizing exercise) tells you is the right fit for the size VM you deploy.
As you probably also are aware, the RoleEntryPoint::Run method on the Web Role can also be implemented to do work continually. The default implementation on the Web Role essentially just sleeps forever, but you could implement an infinite loop to query the queue to remove work and process it (and don't forget to Sleep whenever no messages are available from the queue! failure to do so will cause a money leak and may get you throttled). As Gaurav mentions, there are some other considerations in robustly implementing this QCW pattern (what happens if my node fails, or if there's a bad ("poison") message, bug in my code, etc.), but your use case does not seem overly concerned with this since the next kick from the cron job apparently would account for any (rare, but possible) failures in the infrastructure and perhaps assumes no fatal bugs (so you can't get stuck with poison messages), etc.
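Schematically, that Run-method loop is just the following (shown as Python pseudologic only; the real override is .NET, and dequeue_batch/process stand in for your queue client and task code):

```python
# Schematic of the infinite dequeue loop described above.
# dequeue_batch() and process() are placeholders for your queue client and task code.
import time

POLL_DELAY_SECONDS = 5   # back off when the queue is empty to avoid a money leak
BATCH_SIZE = 4           # "N at a time" -- tune to the VM size you deploy

def run_forever(dequeue_batch, process) -> None:
    while True:
        items = dequeue_batch(BATCH_SIZE)
        if not items:
            time.sleep(POLL_DELAY_SECONDS)  # no work: sleep instead of hot-polling
            continue
        for item in items:
            process(item)  # delete the message only after this succeeds
```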
Decoupling placing items on the queue from processing items from the queue is really a logical design point. By this I mean you could change this at any time and move the processing side (the code pulling from the queue) to another application tier (a service tier) rather easily without breaking any part of the essential design. This gives a lot of flexibility. You could even run everything on a single Web Role node (or two if you need the SLA - not sure you do based on some of your comments) most of the time (two-tier), then go three-tier as needed by adding a bunch of processing VMs, such as for the New Year.
The number of processing nodes could also be adjusted dynamically based on signals from the environment - for example, if the queue length is growing or above some threshold, add more processing nodes. This is the cloud and this machinery can be fully automated.
Now getting more speculative since I don't really know much about your app...
By using the Run method mentioned earlier, you might be able to eliminate the cron job as well and do that work in that infinite loop; this depends on complexity of cron scheduling of course. Or you could also possibly even eliminate the entire Web tier (the Web Role) by having your cron job place work request items directly on the queue (perhaps using one of the SDKs). You still need code to process the requests, which could of course still be your Web Role, but at that point could just as easily use a Worker Role.
[Adding as a separate answer to avoid SO telling me to switch to chat mode + bypass comments length limitation] & thinking out loud :)
I see your point. Basically through HTTP request, you're kind of broadcasting the availability of a new task to be processed to other instances.
So if I understand correctly, when an instance receives a request for a task to be processed, it pushes that request into some kind of queue (as you mentioned, it could be either Windows Azure Queues [personally I would prefer that] or a SQL Azure database [I would not prefer that, because you would have to implement your own message-locking algorithm]) and then broadcasts a message to all instances that some work needs to be done. The remaining instances (or maybe the instance which is broadcasting it) can then see if they're free to process that task. One instance, depending on its availability, can then fetch the task from the queue and start processing it.
Assuming you use Windows Azure Queues: when an instance fetches a message, that message immediately becomes unavailable to other instances for some amount of time (the visibility timeout period of Azure Queues), thus avoiding duplicate processing of the task. If the task is processed successfully, the instance working on it can delete the message.
If for some reason the task is not processed, it will automatically reappear in the queue after the visibility timeout has expired. This, however, leads to another problem. Since your instances look for tasks based on a trigger (the generated HTTP request) rather than polling, how will you ensure that all tasks get done? Suppose an instance gets to process just one task and that task fails: because no request ever arrives to process a 2nd task, the 1st task will never get processed again. Obviously this won't happen in a practical situation, but it is something you might want to think about.
Does this make sense?
I would definitely go for a scale-out solution: less complex, more manageable, and better priced. Plus you have a lower risk of downtime in case of a deployment failure (of course the mechanism of fault and upgrade domains should cover that, but nevertheless). So for that matter I completely back Gaurav on this one!
I have an app that I'm thinking about moving to Azure as a Worker Role with an external facing endpoint. It's a small little process that runs in about 200-400ms, but our users would like to start running the little job 50K-100K times a day, per user. Before I go building the Azure prototype, I need to figure out what kind of latency I can expect communicating with an Azure external endpoint. Obviously, the latency depends on the size of information that I'm sending and receiving, and it depends on the speed of my internet connection, but I can't find any metrics anywhere. Are there any kind of base line numbers out there?
For the sake of argument, lets say I'm on a T1 and I'm sending 10K up and 10K down with each job run.
I don't think latency is exactly the term you're looking for; that's the delay incurred in sending each packet over the network, which is affected more by your distance from the server and the nature of your network.
Having said that, everyone's results with regard to latency will be different, and the only way to be sure is to set up a prototype and run some performance tests on it. Also remember that with Azure you can specify your data center, so select one near you.
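For that prototype, even a tiny round-trip timer gives a usable baseline; the endpoint URL is a placeholder, and the payload size matches the 10K example in the question:

```python
# Tiny round-trip timer: POST a ~10 KB payload N times and report the median.
# ENDPOINT is a placeholder for the Worker Role's external endpoint.
import statistics
import time
import requests

ENDPOINT = "https://<your-service>.cloudapp.net/job"  # placeholder
PAYLOAD = b"x" * 10 * 1024                            # ~10 KB up per request

def measure(n: int = 50) -> float:
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.post(ENDPOINT, data=PAYLOAD, timeout=30)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    print(f"median round trip: {measure() * 1000:.0f} ms")
```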