Is there any way to set up a Gatling scenario to run with a specific number of threads? For instance, I want to execute 1M requests within one hour using 2500 threads.
Also, will each scenario (in setUp(scn.inject())) run in a different thread? What does "thread" mean in Gatling's definition? Is it the same as a Java thread?
I found a related topic, but it's not exactly what I need (the topic starter needed only 3 threads, while in my case the counts are much bigger).
I have
val scn = scenario("Test")
  .exec(mine)

setUp(
  scn.inject(
    rampUsers(1000000) over (3600)
  )
).assertions(global.successfulRequests.percent.greaterThan(95))
As stated in the topic you cited, the number of threads Gatling uses to fire requests against your system under test is not the number of concurrent users; it is an implementation detail.
Gatling uses Akka under the hood and issues the requests asynchronously. This asynchronous nature means that Gatling needs only a few threads to fire all the requests. If you want to know more, see gatling-akka-defaults.conf: it uses the Akka default dispatcher, which uses a fork-join pool with approximately (number of CPU cores * 2) threads (not 100% certain, see the docs).
As was already mentioned in the cited topic, the real question is: what do you mean by "user"?
As I understand it, your goal is to put a load of 2500 concurrent users on your system. It does not matter whether Gatling uses 2 or 1000 threads to achieve this.
So if you want 2500 concurrent users (per second), you can simply write:
setUp(
  scn.inject(constantUsersPerSec(2500) during (3600))
)...
If, on the other hand, you want 2500 distinct populations (which is IMO not what you want), you can achieve that too:
// `scn` has to be a function, since every scenario needs a distinct name
def scn(name: String) = scenario(name)
  .exec(
    http("root").get("/")
  )

setUp(
  (for {
    i <- 0 until 2500 // desired 2500
  } yield {
    scn(s"Test $i").inject(
      rampUsers(1) over (3600)
    )
  }).toList // setUp can accept a List[PopulationBuilder]
)
Populations should be used to inject different scenarios, or different types of users, at the same time, each with its own rate and duration. For an example, see the Advanced Tutorial, Step 2. They are not intended to simulate concurrent users. You can see directly from the code above that the solution is syntactically possible but cumbersome.
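For contrast, here is a minimal sketch of what populations are actually meant for: two different kinds of users injected at the same time, each with its own profile (the scenario names and requests are illustrative placeholders, not from the question):
val userScn  = scenario("Regular users").exec(http("home").get("/"))
val adminScn = scenario("Admins").exec(http("dashboard").get("/admin"))

setUp(
  userScn.inject(rampUsers(1000) over (3600)),           // ramped over the hour
  adminScn.inject(constantUsersPerSec(1) during (3600))  // steady trickle
)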
I am doing more or less the following setup in my code:
// loop over the inTopicName(s) {
KStream<String, String> stringInput = kBuilder.stream( STRING_SERDE, STRING_SERDE, inTopicName );
stringInput.filter( streamFilter::passOrFilterMessages ).map( processor_i ).to( outTopicName );
// } end of loop
streams = new KafkaStreams( kBuilder, streamsConfig );
streams.cleanUp();
streams.start();
If there is e.g. num.stream.threads > 1, how are tasks assigned to the prepared threads?
I suppose (though I am not sure) that there is a thread pool and tasks are assigned to threads with some kind of round-robin policy, but this could happen either fully dynamically at runtime or once at startup, when the filtering/mapping structure is created.
I am especially interested in the situation where one topic receives compute-intensive tasks and the other does not. Is it possible that the application starves because all threads are assigned to the time-consuming processor?
Let's play a bit with a scenario: num.stream.threads=2, 4 partitions per topic, 2 topics (huge_topic and slim_topic).
The loop in my question is executed once at startup of the app. Say that in the loop I define 2 topics, and I know that one topic delivers heavyweight messages (huge_topic) while the other delivers lightweight messages (slim_topic).
Is it possible that both threads from num.stream.threads end up busy only with tasks coming from huge_topic, so that messages from slim_topic have to wait for processing?
Internally, Kafka Streams creates tasks based on partitions. Going with your loop example, assume you have 3 input topics A, B, and C with 2, 4, and 3 partitions respectively. For this, you will get 4 tasks (i.e., the max number of partitions over all topics) with the following partition-to-task assignment:
t0: A-0, B-0, C-0
t1: A-1, B-1, C-1
t2: B-2, C-2
t3: B-3
Partitions are grouped "by number" and assigned to the corresponding task. This is determined at runtime (i.e., after you call KafkaStreams#start()), because before that the number of partitions per topic is unknown.
It is not recommended to mess with the partition grouper unless you understand all the internal details of Kafka Streams -- you can very easily break stuff! (This interface has already been deprecated and will be removed in the upcoming 3.0 release.)
With regard to threads: tasks limit the number of useful threads. For our example, this implies that you can have at most 4 threads (if you have more, those extra threads will be idle, as there are no tasks left to assign to them). How you "distribute" those threads is up to you: you can have either 4 single-threaded application instances or one single application instance with 4 threads (or anything in between).
If you have fewer threads than tasks, tasks will be assigned to threads in a load-balanced way, based on the number of tasks (all tasks are assumed to have the same load).
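For completeness, a minimal sketch (in Scala, with a placeholder application id and broker address) of where that thread count is set; StreamsConfig.NUM_STREAM_THREADS_CONFIG corresponds to the num.stream.threads property discussed above:
import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val streamsConfig = new Properties()
streamsConfig.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")    // placeholder id
streamsConfig.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
// With the 4-task example above, at most 4 of these threads will get work
streamsConfig.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "4")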
If there is e.g. num.stream.threads > 1, how are tasks assigned to the prepared threads?
Tasks are assigned to threads by the partition grouper; you can read about it here. AFAIK it is invoked after a rebalance, so it's not a very dynamic process. That said, I'd argue there is no risk of starvation.
What is the maximum number of tasks supported in AUTOSAR compliant systems?
In Linux, I can check the maximum process IDs supported to get the maximum number of tasks supported.
However, I couldn't find any source that states the maximum number of tasks supported by AUTOSAR.
Thank you very much for your help!
Well, we are still in an embedded automotive world and not on a PC.
There is usually a tradeoff between the number of tasks you have, what it takes to schedule them, and what RAM/ROM and runtime resources your configuration uses.
As already said, if you just need a simple timed loop with some interrupts in between, one task may be OK.
It might also be enough to have, e.g., 3 tasks running at 5ms, 10ms and 20ms cycle times. But in simple cases like this you could also schedule everything with a single 5ms task:
TASK(TASK_5ms)
{
static uint8 cnt = 0;
cnt++;
// The XXX and YYY main functions shall only be called every 10ms,
// but load-balance them: instead of running 3 functions every 10ms
// and 1 every 5ms, run exactly two functions every 5ms
if (cnt & 1)
{
XXX_Mainfunction_10ms();
}
else
{
YYY_Mainfunction_10ms();
}
ZZZ_Mainfunction_5ms();
}
So, if you need something to be run every 5, 10 or 20ms, you put these runnables into the corresponding tasks.
The old OSEK also had the notion of BASIC vs. EXTENDED tasks, where only extended tasks were able to react to OsEvents. These tasks might not run cyclically, but only on configured OsEvents. You would have an OS wait point there, where the task is more or less stopped and only woken up by the OS on the arrival of an event. There are also OsAlarms, which can either directly trigger the activation of an OsTask, or do so indirectly via an event; so you could, e.g., wait at the same wait point for both a cyclic event from an OsAlarm and an OsEvent set by something else, e.g. by another task or from an ISR.
TASK(TASK_EXT)
{
EventMaskType evt;
for(;;)
{
WaitEvent(EVT_XXX_START | EVT_YYY_START | EVT_YYY_FINISHED);
GetEvent(TASK_EXT, &evt);
// Start XXX if triggered, but YYY has reported to be finished
if ((evt & (EVT_XXX_START | EVT_YYY_FINISHED)) == (EVT_XXX_START | EVT_YYY_FINISHED))
{
ClearEvent(EVT_XXX_START);
XXX_Start();
}
// Start YYY if triggered, will report later to start XXX
if (evt & EVT_YYY_START)
{
ClearEvent(EVT_YYY_START);
YYY_Start();
}
}
}
This direct handling of scheduling is nowadays mostly done/generated within the RTE, based on the events you have configured for your SWCs, the event-to-task mapping, etc.
Tasks are scheduled mainly by their priority, which is why they can be interrupted at any time by a higher-priority task. The exception is if you configure your OS and tasks to be cooperative rather than preemptive; then it might be necessary to also place Schedule() points in your code to give up the CPU.
On bigger systems, and also on multi-core systems with a multi-core OS, there will be higher numbers of tasks, because tasks are bound to a core. The tasks on different cores run independently, except maybe for inter-core synchronization, which can also have a negative performance impact (spinlocks can stall the whole system).
E.g., there could be some cyclic tasks for normal BaseSW components and one specifically for communication components (CAN stack and Comm services).
We usually separate the communication part, since it needs a certain cycle time like 5..10ms; this cycle is used by the Comm stack for message transmission scheduling and also for reception timeout monitoring.
Then there might be a task to handle the memory stack (Ea/Fls, Eep/Fee, NvM).
There might also be some event-based tasks that trigger certain HW control and processing chains of measured data, since these might be placed on different cores and can be scheduled by each other's start or finished events.
On the other side, for all your cyclic tasks you should make sure that the functions running within such a task do not take longer than the task's cycle; otherwise you get an OS shutdown due to multiple activation of the same task, since the task is activated again before it has actually finished. You might also have constraints that require some tasks to finish within your application's expected measurement cycle.
In safety-relevant systems (ASIL-A .. ASIL-D) you'll also have at least one task per safety level to achieve freedom from interference. In AUTOSAR, you specify that on the OsApplication to which the tasks are assigned, which also allows you to configure memory protection (e.g., write access to memory partitions by QM, ASIL-A, ASIL-B applications and tasks). That is then another thing the OS has to do at runtime: reconfigure the MPU according to the OsApplication's MemoryAccess settings.
But again, the more tasks you create, the higher the usage of RAM, ROM and runtime.
RAM - runtime scheduling structures and different task stacks
ROM - the actual task and event configurations
Runtime - the context switches of the tasks and also the scheduling itself
It seems to vary. I found that ETAS RTA offers 1024 tasks*, whereas Vector's MICROSAR OS has 65535.
For task handling, OSEK/ASR provides the following functions:
StatusType ActivateTask (TaskType TaskID)
StatusType TerminateTask (void)
StatusType Schedule (void)
StatusType GetTaskID (TaskRefType TaskID)
StatusType GetTaskState (TaskType TaskID, TaskStateRefType State)
*Link might change in future, but it is easy to search ETAS page directly for manuals etc.: https://www.etas.com/en/products/download_center.php
Formally, you can have an unbounded number of OsTasks: according to the spec, the configuration of the Os can contain 0..* OsTask elements.
Apart from that, the (OS) software uses the data type TaskType for task-index variables. Therefore, if TaskType is a uint16, you cannot have more than 65535 tasks.
Besides that, if you have a lot of tasks, you might re-think your design.
I have the following entry in my conf file, but I'm not sure whether this dispatcher setting is being picked up and what the resulting parallelism value is:
akka {
  actor {
    default-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      throughput = 3
      fork-join-executor {
        parallelism-min = 40
        parallelism-factor = 10
        parallelism-max = 100
      }
    }
  }
}
I have an 8-core machine, so I expect 80 parallel threads to be in a ready state:
40 (min) < 80 (8 cores * factor 10) < 100 (max). I'd like to see what value Akka is actually using for the maximum number of parallel threads.
I created 45 child actors, and in my logs I'm printing the thread id [application-akka.actor.default-dispatcher-xx]; I don't see more than 20 threads running in parallel.
In order to max out the parallelism, all the actors need to be processing messages at the same time. Are you sure this is the case in your application?
Take, for example, the following code:
import akka.actor.{Actor, ActorSystem, Props}

object Test extends App {
  val system = ActorSystem()
  (1 to 80).foreach { _ =>
    val ref = system.actorOf(Props[Sleeper])
    ref ! "hello"
  }
}

class Sleeper extends Actor {
  override def receive: Receive = {
    case msg =>
      //Thread.sleep(60000)
      println(msg)
  }
}
If you consider your config and 8 cores, you will see only a small number of threads being spawned (4, 5?), as the processing of the messages is too quick for any real parallelism to build up.
On the contrary, if you keep your actors busy by uncommenting the nasty Thread.sleep, you will see the number of threads bump up to 80. However, this will only last a minute, after which the threads will gradually be retired from the pool.
I guess the main trick is: don't think of each actor as running on a separate thread. It's whenever one or more messages appear in an actor's mailbox that the dispatcher awakes and, indeed, dispatches the message-processing task to a designated pool.
Assuming you have an ActorSystem instance, you can check the values set in its configuration. This is how you could get your hands on the values you've set in the config file:
val system = ActorSystem()
val config = system.settings.config.getConfig("akka.actor.default-dispatcher")
config.getString("type")
config.getString("executor")
config.getString("throughput")
config.getInt("fork-join-executor.parallelism-min")
config.getInt("fork-join-executor.parallelism-max")
config.getDouble("fork-join-executor.parallelism-factor")
I hope this helps. You can also consult this page for more details on specific configuration settings.
Update
I've dug a bit more into Akka to find out exactly what it does with your settings. As you might already expect, it uses a ForkJoinPool. The parallelism used to build it is given by:
object ThreadPoolConfig {
...
def scaledPoolSize(floor: Int, multiplier: Double, ceiling: Int): Int =
math.min(math.max((Runtime.getRuntime.availableProcessors * multiplier).ceil.toInt, floor), ceiling)
...
}
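Plugging the question's settings into this formula on an 8-core machine gives (a worked example, assuming akka.dispatch.ThreadPoolConfig is accessible from your code):
// floor = 40 (parallelism-min), multiplier = 10.0 (parallelism-factor),
// ceiling = 100 (parallelism-max), 8 available processors:
// min(max(ceil(8 * 10.0), 40), 100) = min(max(80, 40), 100) = 80
val parallelism = ThreadPoolConfig.scaledPoolSize(40, 10.0, 100) // == 80 on 8 cores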
This function is used at some point to build a ForkJoinExecutorServiceFactory:
new ForkJoinExecutorServiceFactory(
validate(tf),
ThreadPoolConfig.scaledPoolSize(
config.getInt("parallelism-min"),
config.getDouble("parallelism-factor"),
config.getInt("parallelism-max")),
asyncMode)
Anyway, this is the parallelism that will be used to create the ForkJoinPool, which is actually an instance of java.util.concurrent.ForkJoinPool. Now we have to ask: how many threads does this pool use? The short answer is that it will use its whole capacity (80 threads in our case) only if it needs it.
To illustrate, I ran a couple of tests with various uses of Thread.sleep inside the actor. What I found is that the pool uses from somewhere around 10 threads (if no sleep call is made) up to around the maximum of 80 threads (if I sleep for 1 second). The tests were made on a machine with 8 cores.
Summing up, you will need to check the implementation used by Akka to see exactly how that parallelism is used; this is why I looked into ForkJoinPool. Other than looking at the config file and then inspecting that particular implementation, I don't think there is much more you can do, unfortunately :(
I hope this clarifies the answer - initially I thought you wanted to see how the actor system's dispatcher is configured.
I know a little about multithreading with futures, such as:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

for (i <- 1 to 5) yield Future {
  println(i)
}
but here all the threads do the same work.
So, I want to know how to make two threads do different work concurrently.
Also, is there a way to know when all the threads have completed?
Please, give me something simple.
First of all, chances are you might be happy with parallel collections, especially if all you need is to crunch some data in parallel using multiple threads:
val lines = Seq("foo", "bar", "baz")
lines.par.map(line => line.length)
While parallel collections are suitable for finite datasets, Futures are more oriented towards event-like processing. In fact, a future defines a task, abstracting away from the execution details (one thread, multiple threads, how a particular task is pinned to a thread); all of this is controlled with an execution context. What you can do with futures is add callbacks (on success, on failure, or both), compose them with other futures, or await their results. All these concepts are nicely explained in the official docs, which are worth reading.
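To directly address "two threads doing different work" and "knowing when everything is complete", here is a minimal sketch using futures (the work inside each future is a placeholder):
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val download = Future { "fetched data" }   // one kind of work (placeholder)
val compute  = Future { (1 to 100).sum }   // different work, running concurrently

// Compose the two futures; `both` completes only when both are done
val both = for {
  d <- download
  c <- compute
} yield (d, c)

val (data, sum) = Await.result(both, 10.seconds)
println(s"$data, sum = $sum")
Instead of blocking with Await at the end, you could also register a callback with both.onComplete, which fits the event-oriented style described above.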
In Python, I am using a library called futures, which allows me to do my processing work with a pool of N worker processes, in a succinct and crystal-clear way:
schedulerQ = []
for ... in ...:
workParam = ... # arguments for call to processingFunction(workParam)
schedulerQ.append(workParam)
with futures.ProcessPoolExecutor(max_workers=5) as executor: # 5 CPUs
for retValue in executor.map(processingFunction, schedulerQ):
print "Received result", retValue
(The processingFunction is CPU-bound, so there is no point in async machinery here; this is about plain old arithmetic calculations.)
I am now looking for the closest possible way to do the same thing in Scala. Notice that in Python, to avoid the GIL issues, I was using processes (hence the use of ProcessPoolExecutor instead of ThreadPoolExecutor) - and the library automagically marshals the workParam argument to each process instance executing processingFunction(workParam) - and it marshals the result back to the main process, for the executor's map loop to consume.
Does this apply to Scala and the JVM? My processingFunction can, in principle, be executed from threads too (there's no global state at all) - but I'd be interested to see solutions for both multiprocessing and multithreading.
The key part of the question is whether there is anything in the world of the JVM with as clear an API as the Python futures you see above... I think this is one of the best SMP APIs I've ever seen - prepare a list with the function arguments of all invocations, and then just two lines: create the poolExecutor, and map the processing function, getting back your results as soon as they are produced by the workers. Results start coming in as soon as the first invocation of processingFunction returns and keep coming until they are all done - at which point the for loop ends.
You have way less boilerplate than that using parallel collections in Scala.
myParameters.par.map(x => f(x))
will do the trick if you want the default number of threads (same as number of cores).
If you insist on setting the number of workers, you can do so like this:
import scala.collection.parallel._
import scala.concurrent.forkjoin._
val temp = myParameters.par
temp.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(5))
temp.map(x => f(x))
The exact details of return timing are different, but you can put as much machinery as you want into f(x) (i.e. both compute and do something with the result), so this may satisfy your needs.
In general, simply having the results appear as completed is not enough; you then need to process them, maybe fork them, collect them, etc. If you want to do this in general, Akka Streams (follow links from here) are nearing 1.0 and will facilitate the production of complex graphs of parallel processing.
There is both a Futures API that allows you to run work units on a thread pool (docs: http://docs.scala-lang.org/overviews/core/futures.html) and a parallel collections API that you can use to perform parallel operations on collections: http://docs.scala-lang.org/overviews/parallel-collections/overview.html
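To get close to the shape of the Python snippet using the Futures API, here is a minimal sketch (the pool size mirrors max_workers=5; processingFunction and the input data are illustrative placeholders):
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Fixed pool of 5 workers, mirroring max_workers=5 in the Python example
implicit val ec = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(5))

def processingFunction(workParam: Int): Int = workParam * workParam // placeholder work

val schedulerQ = (1 to 20).toList
// Schedule one future per work item; Future.traverse collects the results in order
val results = Future.traverse(schedulerQ)(p => Future(processingFunction(p)))
Await.result(results, 1.minute).foreach(r => println(s"Received result $r"))
ec.shutdown()
Note that unlike Python's executor.map, this collects all the results before printing; registering a per-future callback with onComplete would get closer to the incremental behavior.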