How to use RxJava to make multiple threads run sequentially - multithreading

Assume that there are 3 threads, T1, T2, T3.
How can I make them run sequentially, say, the execution order is T1, T2, T3, T1, T2, T3 ...
Could we use RxJava to implement it?
Is it possible to have 3 threads that print T1, T2, and T3 respectively, and still have the combined output appear as T1 T2 T3, in order?

You can use Observable.concat(request1, request2, request3), which will execute requests sequentially:
Observable<String> r1 = getObs().subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread());
Observable<String> r2 = getObs().subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread());
Observable<String> r3 = getObs().subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread());
Observable<String> result = Observable.concat(r1, r2, r3);
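For example, here is a minimal sketch (assuming RxJava 2.x, with Observable.just standing in for the real getObs() requests) showing that the concatenated stream emits in source order:
Observable<String> t1 = Observable.just("T1");
Observable<String> t2 = Observable.just("T2");
Observable<String> t3 = Observable.just("T3");
Observable.concat(t1, t2, t3)
        .subscribe(System.out::println); // prints T1, T2, T3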
Or, if a request needs a result from the previous request, use flatMap:
request1()
    .flatMap(d -> request2(d))
    .flatMap(d -> request3(d))
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe();

Related

Where does a thread return after being woken up by a semaphore?

My understanding of the semaphore principle
I am currently trying to understand how Semaphores work.
I have understood that when calling P(sem), if sem=0 the thread will get blocked; otherwise the semaphore's value is decremented and the thread is let into the critical section.
When calling V(sem), the semaphore's value is increased; if sem=0 and a thread is waiting, that thread is woken up.
Now consider this problem:
Two threads are running, thread1 runs function1, thread2 runs function2. There are two semaphores s=0 and m=1 which are shared between both threads.
function1 {
    while (true) {
        P(s)
        P(s)
        P(m)
        print("a")
        V(m)
    }
}

function2 {
    while (true) {
        P(m)
        print("b")
        V(m)
        V(s)
    }
}
What I expected to happen
I expected the output to consist of b's and a's in some random order.
The threads start. Let's say thread1 enters function1 first:
Step1: P(s)-> s=0 so block the thread
thread2 enters function2
P(m) -> m=1 -> set m=m-1=0
print b
V(m) -> m=m+1=1
V(s) -> s=0 and thread is waiting -> set s=s+1=1 and wake up thread
Step2: Thread1 returns to the second P-statement
thread1 continues in function1
P(s) -> s=1 -> set s=s-1=0
P(m) -> m=1 -> set m=m-1=0
print a
V(m) -> m=0 but no one waiting -> set m=m+1=1
Step3: Thread1 runs function1 again
P(s) -> s=0 -> block
Thread2 runs function 2
P(m) -> m=1 -> set m=m-1=0
print b
V(m) -> m=m+1=1
V(s) -> s=0 and thread waiting -> wake up thread 1
Step4: Thread1 returns to function1's second P-statement
P(s) -> s=1 -> set s=s-1=0
P(m) -> m=1 -> set m=m-1=0
print a
V(m) -> set m=1
Step5: Thread2 runs function2
P(m) -> m=1 -> set m=0
print b
V(m) -> set m=1
V(s) -> s=0 -> set s=1, no thread waiting
Step6: Thread2 runs function2 again
print b
... and so on
The Problem/The Questions
I am very unsure whether it is correct that thread1 returns to the second P-statement after it is woken up by thread2's V(s).
It seems wrong to me, because if I consider how a semaphore would usually be implemented:
P(s)
* do something *
V(s)
If it were to return to the point after the P-statement, the value of the semaphore would not get decreased, and the V-statement would increase the semaphore to a wrong value of 2.
But if it repeats the first P-statement in the example, that would mean the output string is only b's.
Can someone tell me if my understanding is correct or if not, correct my mistake?
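For reference, here is a minimal sketch of one common way P and V are implemented on top of a mutex and a condition variable (Mesa-style semantics; all names are illustrative). The key point is that a woken thread resumes inside its pending P call, re-checks the count, and performs the decrement itself before returning; it neither skips the decrement nor re-executes the P statement from the caller's side:

#include <pthread.h>

typedef struct {
    int count;
    pthread_mutex_t lock;
    pthread_cond_t cond;
} semaphore;

void P(semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                      /* re-checked after every wakeup */
        pthread_cond_wait(&s->cond, &s->lock);
    s->count--;                                /* the woken thread decrements */
    pthread_mutex_unlock(&s->lock);
}

void V(semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->cond);             /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}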

When will the Go scheduler create a new M and P?

I just learned the golang GMP model, and now I understand how goroutines, OS threads, and golang contexts/processors cooperate with each other. But I still don't understand when an M and a P are created.
For example, I have a test code to run some operations on DB and there are two test cases (two batches of goroutines):
func Test_GMP(t *testing.T) {
    for _ = range []struct {
        name string
    }{
        {"first batch"},
        {"second batch"},
    } {
        goroutineSize := 50
        done := make(chan error, goroutineSize)
        for i := 0; i < goroutineSize; i++ {
            go func() {
                // do some database operations...
                // each goroutine should be blocked here for some time...
                // propagate the result
                done <- nil
            }()
        }
        for i := 0; i < goroutineSize; i++ {
            select {
            case err := <-done:
                assert.NoError(t, err)
            case <-time.After(10 * time.Second):
                t.Fatal("timeout waiting for txFunc goroutine")
            }
        }
        close(done)
    }
}
In my understanding, M is created on demand. For the first batch of goroutines, 8 (the number of virtual cores on my computer) OS threads will be created, and the second batch will just reuse those 8 OS threads without creating new ones. Is that correct?
I'd appreciate any materials or blog posts on this topic.
An M is reusable only if your tasks do not block and make no syscalls. In your case, the work inside your go func() blocks, so the number of Ms will not be limited to 8 (the number of virtual cores on your machine). The Ms running the first batch will block and be detached from their Ps, waiting for the blocking operations to finish, while new Ms are created and associated with those Ps.
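A rough way to observe this is the following Linux-only sketch, where syscall.Nanosleep stands in for the blocking database calls (it is a raw blocking syscall, unlike time.Sleep, which parks the goroutine without blocking its M), and the threadcreate profile reports how many OS threads the runtime has created:

package main

import (
    "fmt"
    "runtime"
    "runtime/pprof"
    "sync"
    "syscall"
)

func main() {
    fmt.Println("Ps (GOMAXPROCS):", runtime.GOMAXPROCS(0))
    var wg sync.WaitGroup
    for i := 0; i < 50; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // A raw blocking syscall pins this M; the runtime hands
            // the P to another (possibly new) M in the meantime.
            ts := syscall.Timespec{Sec: 1}
            syscall.Nanosleep(&ts, nil)
        }()
    }
    wg.Wait()
    // Expect a count well above GOMAXPROCS.
    fmt.Println("OS threads created:", pprof.Lookup("threadcreate").Count())
}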
We create a goroutine with the go func() statement.
There are two kinds of queues that store Gs: the local queue of each scheduler P, and the global G queue. A newly created G is placed in the local queue of its P; if that P's local queue is full, it goes into the global queue.
A G can only run on an M, and an M must hold a P (M and P have a 1:1 relationship while running). An M pops an executable G from its P's local queue; if that queue is empty, it tries to steal an executable G from another M/P combination.
The process of an M scheduling and executing Gs is a loop.
When an M executes a syscall or any other blocking operation, it blocks; if there are Gs waiting to execute, the runtime detaches that M from its P and creates a new OS thread (or reuses an idle thread, if one is available) to serve that P.
When the M's syscall ends, its G tries to acquire an idle P to continue executing and is put into that P's local queue. If no P can be acquired, the M goes to sleep and joins the pool of idle threads, and the G is placed in the global queue.
1. Number of Ps:
Determined by the environment variable $GOMAXPROCS or by the runtime method runtime.GOMAXPROCS(). Since Go 1.5, GOMAXPROCS defaults to the number of available cores; before that, the default was 1. This means that at any moment only $GOMAXPROCS goroutines are running simultaneously.
2. Number of Ms:
The Go runtime sets its own limit: when a Go program starts, a maximum number of Ms is set (10000 by default). However, the kernel can rarely support that many threads, so this limit can usually be ignored. The SetMaxThreads function in runtime/debug adjusts the maximum number of Ms. When an M blocks, a new M may be created.
The number of Ms has no fixed relationship to the number of Ps: when an M blocks, its P will create or switch to another M. So even if the number of Ps is 1, there may be many Ms.
Please refer to the following for more details:
https://www.programmersought.com/article/79557885527/
go-goroutine-os-thread-and-cpu-management

Why does this Scala code execute two Futures in one thread?

I've been using multiple threads for a long time, yet I cannot explain this simple case.
import java.util.concurrent.Executors
import scala.concurrent._
implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))
def addOne(x: Int) = Future(x + 1)
def addTwo(x: Int) = Future {addOne(x + 1)}
addTwo(1)
// res5: Future[Future[Int]] = Future(Success(Future(Success(3))))
To my surprise, it works. And I don't know why.
Question:
Given only one thread, why can it execute two Futures at the same time?
My expectation:
The first Future (addTwo) is occupying the one and only thread (newFixedThreadPool(1)), then it calls another Future (addOne) which again needs another thread.
So the program should end up starved for threads and get stuck.
The reason that your code is working is that both futures will be executed by the same thread. The ExecutionContext that you are creating will not use a Thread directly for each Future, but will instead schedule tasks (Runnable instances) to be executed. In case no more threads are available in the pool, these tasks will be put into a BlockingQueue, waiting to be executed. (See the ThreadPoolExecutor API for details.)
If you look at the implementation of Executors.newFixedThreadPool(1), you'll see that it creates an Executor with an unbounded queue:
new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue[Runnable])
To get the effect of thread-starvation that you were looking for, you could create an executor with a limited queue yourself:
implicit val ec = ExecutionContext.fromExecutor(new ThreadPoolExecutor(1, 1, 0L,
TimeUnit.MILLISECONDS, new ArrayBlockingQueue[Runnable](1)))
Since the minimal capacity of an ArrayBlockingQueue is 1, you would need three futures to reach the limit, and you would also need to add some code to be executed on the result of each future, to keep them from completing (in the example below I do this by adding .map(identity)).
The following example
import java.util.concurrent._
import scala.concurrent._
implicit val ec = ExecutionContext.fromExecutor(new ThreadPoolExecutor(1, 1, 0L,
  TimeUnit.MILLISECONDS, new ArrayBlockingQueue[Runnable](1)))
def addOne(x: Int) = Future {
  x + 1
}
def addTwo(x: Int) = Future {
  addOne(x + 1).map(identity)
}
def addThree(x: Int) = Future {
  addTwo(x + 1).map(identity)
}
println(addThree(1))
fails with
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@65a264b6 rejected from java.util.concurrent.ThreadPoolExecutor@10d078f4[Running, pool size = 1, active threads = 1, queued tasks = 1, completed tasks = 1]
Expanding it into Promises makes it easier to understand:
import java.util.concurrent.Executors
import scala.concurrent._
import scala.util.Success

implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

val p1 = Promise[Future[Int]]
ec.execute(() => {
  // the first task starts running
  val p2 = Promise[Int]
  // the second task is submitted, but not yet running
  ec.execute(() => {
    p2.complete(Success(1))
    println(s"task 2 -> p1:${p1},p2:${p2}")
  })
  // p1 is completed here; it does not wait for p2.future to finish
  p1.complete(Success(p2.future))
  println(s"task 1 -> p1:${p1},p2:${p2}") // p1 is completed but p2 is not yet
  // when the first task finishes, the second task runs
})
val result: Future[Future[Int]] = p1.future
Thread.sleep(1000)
println(result)

F# Async Task Cancellation without Token

I am trying to parse hundreds of C source files to map dozens of software signal variables to the names of physical hardware pins. I am trying to do this asynchronously in F#:
IndirectMappedHWIO
|> Seq.map IndirectMapFromFile //this is the function with the regex in it
|> Async.Parallel
|> Async.RunSynchronously
The issue is that I cannot figure out how to pass in a CancellationToken to end my task. Each task is reading around 300 C files so I want to be able to stop the task's execution as soon as the regex matches. I tried using Thread.CurrentThread.Abort() but that does not seem to work. Any ideas on how to pass in a CancellationToken for each task? Or any other way to cancel a task based on a condition?
let IndirectMapFromFile pin =
    async {
        let innerFunc filename =
            use streamReader = new StreamReader (filePath + filename)
            while not streamReader.EndOfStream do
                try
                    let line1 = streamReader.ReadLine()
                    streamReader.ReadLine() |> ignore
                    let line2 = streamReader.ReadLine()
                    if (obj.ReferenceEquals(line2, null)) then
                        Thread.CurrentThread.Abort() //DOES NOT WORK!!
                    else
                        let m1 = Regex.Match(line1, @"^.*((Get|Put)\w+).*$")
                        let m2 = Regex.Match(line2, @"\s*return\s*\((\s*" + pin.Name + @"\s*)\);")
                        if (m1.Success && m2.Success) then
                            pin.VariableName <- m1.Groups.[1].Value
                            Thread.CurrentThread.Abort() //DOES NOT WORK!!
                        else
                            ()
                with
                | ex -> ()
            ()
        Directory.GetFiles(filePath, "Rte*") //all C source and header files that start with Rte
        |> Array.iter innerFunc
    }
Asyncs cancel on designated operations, such as on return!, let!, or do!; they don't just kill the thread in any unknown state, which is not generally safe. If you want your asyncs to stop, they could, for example:
- be recursive and iterate via return!. The caller would provide a CancellationToken to Async.RunSynchronously and catch the resulting OperationCanceledException when the job is done.
- check some thread-safe state and decide to stop depending on it.
Note that those are effectively the same thing: the workers who iterate over the data check what is going on and cancel in an orderly fashion. In other words, it is clear when exactly they check for cancellation.
Using async cancellation might result in something like this:
let canceler = new System.Threading.CancellationTokenSource()
let rec worker myParameters =
    async {
        // do stuff
        if amIDone() then canceler.Cancel()
        else return! worker (...) }
let workers = (...) |> Async.Parallel
try Async.RunSynchronously(workers, -1, canceler.Token) |> ignore
with :? System.OperationCanceledException -> ()
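A self-contained variant of the first approach might look like this (the loop bounds and the found-a-match condition are invented for illustration; cancellation is observed at the do! binding):

open System.Threading

let canceler = new CancellationTokenSource()

let worker (id: int) =
    async {
        for i in 1 .. 1000 do
            if id = 0 && i = 500 then canceler.Cancel() // pretend we found a match
            do! Async.Sleep 1 // a designated operation where cancellation is checked
    }

let workers = [0 .. 3] |> List.map worker |> Async.Parallel

try Async.RunSynchronously(workers, -1, canceler.Token) |> ignore
with :? System.OperationCanceledException -> printfn "stopped after the match was found"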
Stopping based on shared state could look like this:
let keepGoing = ref true
let rec worker myParameters =
    if !keepGoing then
        // do stuff
        if amIDone() then keepGoing := false
        worker (...)
let makeWorker initParams = async { worker initParams }
// make workers
workers |> Async.Parallel |> Async.RunSynchronously |> ignore
If the exact timing of cancellation is relevant, I believe the second method may not be safe, as there may be a delay until other threads see the variable change. This doesn't seem to matter here, though.

Number of times the waiting thread will be executed

Suppose I have two threads, T1 and T2.
Thread T1 calls t1_callback() and thread T2 calls t2_callback().
T some_global_data;
pthread_mutex_t mutex;

void t1_callback()
{
    pthread_mutex_lock(&mutex);
    update_global_data(some_global_data);
    pthread_mutex_unlock(&mutex);
}

void t2_callback()
{
    pthread_mutex_lock(&mutex);
    update_global_data(some_global_data);
    pthread_mutex_unlock(&mutex);
}
Case
t1_callback() is holding the lock between times t1 and t2.
In this interval (t1 - t2), suppose t2_callback() has been called, say, 10 times.
Question
How many times will t2_callback() be executed when t1_callback() releases the mutex?
If a thread calls t2_callback() while another thread is executing t1_callback() and holding the lock, it (the thread running t2_callback()) will be suspended in pthread_mutex_lock() until the lock is released. So it doesn't make sense to talk about one thread calling t2_callback() 10 times while the lock is held.
If 10 different threads all call t2_callback() in that time, they'll all be suspended in pthread_mutex_lock(), and they will each proceed, one at a time, when the lock is released.
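As a minimal sketch of the second scenario (the thread bodies are invented for illustration): the main thread holds the mutex the way t1_callback() does, 10 threads block in pthread_mutex_lock(), and each proceeds exactly once after the release:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *t2_thread(void *arg)
{
    pthread_mutex_lock(&mutex);    /* suspended here until the lock is free */
    printf("thread %ld got the lock\n", (long)arg);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t tids[10];

    pthread_mutex_lock(&mutex);             /* play the role of t1_callback() */
    for (long i = 0; i < 10; i++)
        pthread_create(&tids[i], NULL, t2_thread, (void *)i);
    sleep(1);                               /* all 10 threads are now blocked */
    pthread_mutex_unlock(&mutex);           /* they now proceed one at a time */

    for (int i = 0; i < 10; i++)
        pthread_join(tids[i], NULL);
    return 0;
}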
