How to execute UFT Scripts in parallel? - multithreading

We have around 5000 UFT Web scripts, and we want to execute them in parallel. How can we do it?
We have already tried purchasing parallel threads, but it was not cost-effective.

Related

Queuing deep learning training jobs on a shared server

We have multiple GPU servers for deep learning training, and we run our training code inside Docker containers.
Today every user can log into these machines interactively and run a job. There is no restriction on job duration or on how many GPUs one can allocate, so we end up haggling over GPUs in a chat channel.
What's the simplest framework for queuing jobs and allocating them to GPUs? I'm looking for GPU-level granularity, and I'm fine with multiple users running on the same machine.
The desired workflow is:
Allocate GPU
Create docker container
Run the job (installing extra requirements on top of the vanilla container if necessary). Could this be a bash file?
Finish job / kill job if running too long and close container
Additionally, I'd like an interactive workflow similar to the above, where the time limits are reduced (say, 2 hours max of interactive time?)
I'd like to prevent someone from hogging a machine for a week, so a job should have a finite runtime and be killed afterward. Also, multiple people shouldn't be able to use the same GPU simultaneously.
I realize I can do this with a cron job that monitors and kills jobs, but I'm looking for something more elegant and ready-made, preferably with a nice UI. I've tried ClearML, but I can't figure out how to use it for this purpose. I know SLURM is used for allocating entire machines; it's unclear to me whether it can allocate specific GPUs.
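To make the desired workflow concrete, here is a minimal, hypothetical sketch of the per-job step (pin one GPU, run the job in a throwaway container, kill the container if it exceeds a time limit), written in Java around the Docker CLI. It assumes Docker with the NVIDIA container toolkit and its --gpus flag; the class name, image and command are placeholders, and a real queuing system would sit on top of something like this.

```java
import java.util.UUID;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the workflow above: allocate one GPU, run the job in a
// disposable container, and kill the container if it runs past its time limit.
// Assumes Docker with the NVIDIA container toolkit ("--gpus") is installed.
public class GpuJobRunner {

    public static void runJob(int gpuIndex, String image, String command, long maxHours)
            throws Exception {
        String name = "gpu" + gpuIndex + "-job-" + UUID.randomUUID();

        ProcessBuilder pb = new ProcessBuilder(
                "docker", "run", "--rm", "--name", name,
                "--gpus", "device=" + gpuIndex,      // allocate exactly one GPU
                image, "bash", "-c", command);       // the job itself, e.g. a bash file
        pb.inheritIO();
        Process job = pb.start();

        // Finish the job, or force-remove the container if it runs too long.
        if (!job.waitFor(maxHours, TimeUnit.HOURS)) {
            new ProcessBuilder("docker", "rm", "-f", name).inheritIO().start().waitFor();
            job.destroyForcibly();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder image and command: a batch job limited to 24 hours on GPU 0.
        runJob(0, "my-training-image:latest",
                "pip install -r requirements.txt && python train.py", 24);
    }
}
```

The interactive variant would be the same call with a shorter (say, two-hour) limit and an interactive shell as the command. As for scheduling, SLURM can allocate individual GPUs rather than whole machines via its generic-resources (GRES) mechanism, which may cover the queuing part.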

Use multiple supervisord program threads for separate APScheduler tasks, or one program with one scheduler for many tasks?

I'm developing a Flask web application that needs a few tasks scheduled to run at night, and I was curious to pick your brains.
From the perspective of the application's efficiency and performance, would it be better to set up multiple supervisord programs, each running a specific blocking APScheduler (at different times but within the same time frame), or to call all tasks back to back in one supervisord program with one blocking scheduler?
Thank you!

Do web workers work properly if the client only has a one core CPU?

Pure curiosity: I'm just wondering whether there is any case where a web worker would manage to execute on a separate thread if only one hardware thread is available in the CPU, maybe with some virtualization, or by using the GPU?
Thanks!
There seem to be two premises behind your question: firstly, that web workers use threads; and secondly, that multiple threads require multiple cores. Neither is really true.
On the first: there's no actual requirement that web workers be implemented with threads. User agents are free to use processes, threads or any "equivalent construct" [see the web worker specification]. They could use multitasking within a single thread if they wanted to. Web worker scripts run concurrently with, but not necessarily in parallel with, the browser's JavaScript.
On the second: it's quite possible for multiple threads to run on a single CPU, with the operating system switching between them. It works much like concurrent async functions do in single-threaded JavaScript.
So yes, in answer to your question: web workers do run properly on a single-core client. You will lose some of the performance benefits, but the code will still behave as it would on a multi-core system.

(Tomcat) Web service: OutOfMemoryError: unable to create new native thread

I am creating a web service that creates a huge number of small Java timer threads (over 10k). I can only seem to create 2k timer threads before I get the OutOfMemoryError: unable to create new native thread. How do I solve this? I am using a MacBook Pro to run my Tomcat server on. I've configured the ulimit (-u) max user processes to double what it used to be, but I still get the same problem. What are my options, if any, to make this doable?
It's often a bad idea for web applications to start even a few threads of their own, let alone 10K threads - and as "timers"? Seriously? Don't go there.
What can you do?
Don't rely on the ability to create those threads.
Change your architecture! Use a scheduler library that has solved this problem already (e.g. Quartz or others).
If you don't want to use an external library (why wouldn't you?): implement a single timer thread that executes the scheduled operations when they're due, rather than a new thread for each scheduled operation (see the sketch below).
If you wanted to boil 100 eggs, would you buy 100 timers?
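As a minimal illustration of the single-timer-thread suggestion, here is a sketch using the JDK's own ScheduledExecutorService rather than Quartz; the class name, task bodies and delays are made up, but the point is that one scheduler thread can serve thousands of scheduled operations without ever approaching the native-thread limit.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One scheduler thread serving many scheduled operations, instead of one thread per timer.
public class SingleSchedulerDemo {

    // A single thread handles every scheduled task; 10k tasks still mean one native thread.
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            final int taskId = i;
            // Each "timer" is just a lightweight task object queued on the same thread.
            SCHEDULER.schedule(
                    () -> System.out.println("Task " + taskId + " fired"),
                    taskId % 60, TimeUnit.SECONDS);
        }
    }
}
```

In a web application you would typically create and shut down such an executor from a ServletContextListener (or let Quartz manage scheduling for you) rather than from main.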

How do you run multiple requests or thread groups in parallel "for each of the same user" in JMeter?

Adding multiple requests to the same thread group also seems to run them sequentially.
I know you can start thread groups in parallel, but I want all thread groups to run or start in parallel for the SAME user.
Then there is the Synchronizing Timer, which starts multiple users at exactly the same time:
http://jmeter.apache.org/usermanual/component_reference.html#Synchronizing_Timer
But this does not scale all users concurrently based on throughput and is very hacky, i.e. you have to parameterize your users and the grouping in such a way as to match the expected throughput and the number of requests per user.
Right now, the workaround is to create an HTML page that triggers downloading embedded resources in parallel for the same user in one thread group, but that is ugly and only works for GET requests. It is also very buggy and runs very slowly while occupying the full CPU, and the throughput is a tenth of that of a separate parallel test, which shows that this does not work correctly.
You could use the same approach as for AJAX testing with JMeter, which involves running multiple requests of different types at the same moment in time. It requires some scripting, so you will have to write a bit of Groovy or Java code in a JSR223 Sampler, or even create your own sampler; fortunately, JMeter's plugin-oriented architecture is very extension-friendly, as evidenced by the JMeter Plugins project (take a look, by the way; perhaps your use case is already implemented).
See the How to Load Test AJAX/XHR Enabled Sites With JMeter guide for an explanation of the approach and a few code snippets demonstrating how to run parallel asynchronous requests.
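The Groovy or Java you would put in a JSR223 Sampler ultimately comes down to ordinary JVM concurrency: start several requests at the same moment and wait for them all. Below is a minimal, self-contained Java sketch of that core idea; the URLs and class name are placeholders, and inside a real JSR223 script you would report the results through the sampler's SampleResult instead of printing them.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Fire several requests for the "same user" at the same moment and wait for all of them,
// the way an AJAX-heavy page would.
public class ParallelRequestsSketch {

    public static void main(String[] args) throws Exception {
        List<String> urls = List.of(                 // placeholder URLs
                "http://example.com/api/a",
                "http://example.com/api/b",
                "http://example.com/api/c");

        ExecutorService pool = Executors.newFixedThreadPool(urls.size());
        List<Future<Integer>> results = pool.invokeAll(urls.stream()
                .map(u -> (Callable<Integer>) () -> {
                    HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
                    conn.setRequestMethod("GET");
                    return conn.getResponseCode();   // read at least the status code
                })
                .toList());

        for (Future<Integer> r : results) {
            System.out.println("Response code: " + r.get());  // blocks until that request finishes
        }
        pool.shutdown();
    }
}
```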
