How to execute a Rust timer callback synchronously on the same lcore without a lock

I am trying to use a timer in one of the execution threads of a DPDK application; the thread is pinned to a specific lcore to process packets. I need to periodically flush a few stats, for which I create a timer from that same thread. However, since the timer callback may execute on an additional thread (and could get picked up by any lcore?), can we force the caller thread to pause while the timer callback is executing, so that concurrency problems are avoided in a lock-free manner? The original design pins each thread to one lcore precisely so that it can run lock-free.
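For reference, in DPDK's underlying C timer API a callback executes inside whichever thread calls rte_timer_manage(), so invoking rte_timer_manage() from the lcore's own packet loop runs the callback synchronously on that same lcore, with no extra thread and no lock. A minimal C sketch of that pattern (it assumes rte_timer_subsystem_init() was called during startup; Rust bindings, where used, would wrap these same calls):
    #include <rte_cycles.h>
    #include <rte_lcore.h>
    #include <rte_timer.h>

    static struct rte_timer stats_timer;

    /* Runs inside whichever thread calls rte_timer_manage(); here that is
       always this lcore's own loop, so no locking is needed. */
    static void flush_stats(struct rte_timer *tim, void *arg)
    {
        /* ... flush the per-lcore stats ... */
    }

    static int lcore_main(void *arg)
    {
        rte_timer_init(&stats_timer);
        /* fire roughly once per second, pinned to this lcore */
        rte_timer_reset(&stats_timer, rte_get_timer_hz(), PERIODICAL,
                        rte_lcore_id(), flush_stats, NULL);
        for (;;) {
            /* ... receive and process packets ... */
            rte_timer_manage();  /* expired callbacks run here, synchronously */
        }
        return 0;
    }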

Related

When a goroutine blocks on I/O how does the scheduler identify that it has stopped blocking?

From what I've read here, the golang scheduler will automatically determine if a goroutine is blocking on I/O, and will automatically switch to processing other goroutines on a thread that isn't blocked.
What I'm wondering is how the scheduler then figures out that that goroutine has stopped blocking on I/O.
Does it just do some kind of polling every so often to check if it's still blocking? Is there some kind of background thread running that checks the status of all goroutines?
For example, if you were to do an HTTP GET request inside a goroutine that took 5s to get a response, it would block while waiting for the response, and the scheduler would switch to processing another goroutine. Now given that, when the server returns a response, how does the scheduler understand that the response has arrived, and it's time to go back to the goroutine that made the GET so that it can process the result of the GET?
All I/O must be done through syscalls, and the way syscalls are implemented in Go, they are always called through code that is controlled by the runtime. This means that when you call a syscall, instead of just calling it directly (thus giving up control of the thread to the kernel), the runtime is notified of the syscall you want to make, and it does it on the goroutine's behalf. This allows it to, for example, do a non-blocking syscall instead of a blocking one (essentially telling the kernel, "please do this thing, but instead of blocking until it's done, return immediately, and let me know later once the result is ready"). This allows it to continue doing other work in the meantime.
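The Go netpoller itself is internal to the runtime, but the OS-level pattern it relies on can be sketched in C: put the descriptor into non-blocking mode, register it with a readiness mechanism such as epoll, and let the kernel report when the I/O can proceed (the function here is illustrative, not the runtime's actual code):
    #include <fcntl.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    /* Wait until fd is readable the way a poller does: ask the kernel to
       report readiness instead of performing a blocking read. */
    static int wait_until_readable(int fd)
    {
        int ep = epoll_create1(0);
        if (ep < 0)
            return -1;
        /* non-blocking mode: reads now return immediately instead of blocking */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

        /* the runtime parks the goroutine at this point and runs others;
           epoll_wait returns once the kernel says the fd is ready */
        struct epoll_event out;
        int n = epoll_wait(ep, &out, 1, -1);
        close(ep);
        return n == 1 ? 0 : -1;
    }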

Query Regarding Non Preemptive thread

I was reading about non-preemptive threads and found a slide from Princeton University showing the following diagram (source: http://www.cs.princeton.edu/courses/archive/fall11/cos318/lectures/L5_ThreadsImplementation.pdf):
From what I understood, a thread to be executed is first put into a ready queue. When it pops out of the queue, it is in the running state. If it wants to let another thread run, it calls the yield function, which stores the current state of the thread and inserts it at the tail of the queue; the thread at the front of the queue is then executed.
What happens if the thread is blocked (i.e., it is waiting for some resource)? I thought that with non-preemptive threads it would wait for the resource and then carry on executing.
But from the diagram below it looks as though it goes into the blocked state and is then put into the ready queue? Why is that?
As said in the comments, non-preemptive means that another thread cannot interrupt (preempt) a running thread, not that the running thread won't yield when it has to wait for something.
When a thread is waiting for data from memory (for example), it is said to be in the blocked state: its context is saved and another thread takes its place on the computing resource (CPU core). When the data is available in the CPU's cache memory, the first thread is ready to resume its execution (and it will, as soon as it is next in line and the currently executing thread yields the computing resource).
This enables overlapping data movement with thread execution, saving time by keeping the computing resource busy.
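To make the yield/ready-queue model concrete, here is a toy non-preemptive scheduler in C built on the (old but illustrative) POSIX ucontext API: two "threads" run only until they voluntarily call yield(), which saves their state and resumes the next one in round-robin order:
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define NTHREADS 2
    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, ctx[NTHREADS];
    static int current = 0;

    static void yield(void)                      /* voluntary context switch */
    {
        int prev = current;
        current = (current + 1) % NTHREADS;
        swapcontext(&ctx[prev], &ctx[current]);  /* save state, run next */
    }

    static void worker(int id)
    {
        for (int step = 0; step < 3; step++) {
            printf("thread %d, step %d\n", id, step);
            yield();                             /* give up the CPU */
        }
    }

    int main(void)
    {
        for (int i = 0; i < NTHREADS; i++) {
            getcontext(&ctx[i]);
            ctx[i].uc_stack.ss_sp = malloc(STACK_SIZE);
            ctx[i].uc_stack.ss_size = STACK_SIZE;
            ctx[i].uc_link = &main_ctx;          /* return here when a worker ends */
            makecontext(&ctx[i], (void (*)(void))worker, 1, i);
        }
        swapcontext(&main_ctx, &ctx[0]);         /* start thread 0 */
        return 0;
    }
The output interleaves the two workers strictly at the yield points; nothing ever preempts a running worker.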

set a deadline for each callback in an event-driven/event-loop based program

In a typical ASIO or event-based programming library like libevent, is there a way to set a deadline for each callback?
I am worried about possible infinite loops within the callbacks. Is there a way to gracefully detect them, remove the misbehaving callback from task queue and continue processing other tasks in the queue?
I can think of detecting it from an external thread, killing the event-loop thread, and creating a replacement thread, but I would like to know whether there are other commonly used methods. I believe this is a problem someone has faced at some point and thought through a solution for.
There is no general way to unstick a thread without its cooperation, whether it's running a callback or not. The thread may hold critical locks or may have acquired resources that would never get released if the thread was somehow coerced to stop from the outside.
If you really do need this functionality, then all code that could potentially be interrupted must be designed to support some specific method of interruption. You can start a deadline timer when you enter the callback and cancel it when you're finished. The deadline timer would have to trigger the thread's interruption mechanism. You'd need at least one other thread running the I/O service in order for some thread to run the timer handler while the callback was running in another thread.
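A plain-C sketch of that cooperative-interruption idea, using a one-shot POSIX interval timer in place of an ASIO deadline timer (note the callback must poll the flag, which is exactly the cooperation required):
    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>

    static volatile sig_atomic_t deadline_hit = 0;

    static void on_alarm(int sig) { (void)sig; deadline_hit = 1; }

    /* A cooperative callback: it checks the flag on every iteration,
       so the deadline can actually stop it. */
    static void well_behaved_callback(void)
    {
        while (!deadline_hit) {
            /* ... one bounded unit of work ... */
        }
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);
        struct itimerval t = { { 0, 0 }, { 2, 0 } };  /* one-shot, fires in 2 s */
        setitimer(ITIMER_REAL, &t, NULL);
        well_behaved_callback();
        puts("callback observed its deadline and stopped");
        return 0;
    }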
You can also isolate the code in its own process with some kind of wrapper. Then if the code fails to terminate, you can kill the process from the outside.
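A minimal sketch of that process-isolation approach: run the callback in a forked child and kill it if it overruns its deadline (untrusted_callback is a stand-in for real work):
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void untrusted_callback(void)
    {
        for (;;) ;                      /* misbehaving: never returns */
    }

    /* Run cb in a child process; kill it if it exceeds timeout_sec. */
    static int run_with_deadline(void (*cb)(void), unsigned timeout_sec)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: run the callback, then exit */
            cb();
            _exit(0);
        }
        for (unsigned i = 0; i < timeout_sec * 10; i++) {
            if (waitpid(pid, NULL, WNOHANG) == pid)
                return 0;               /* finished in time */
            usleep(100 * 1000);         /* poll every 100 ms */
        }
        kill(pid, SIGKILL);             /* deadline exceeded: terminate it */
        waitpid(pid, NULL, 0);          /* reap the child */
        return -1;
    }

    int main(void)
    {
        if (run_with_deadline(untrusted_callback, 2) != 0)
            fprintf(stderr, "callback killed after deadline\n");
        return 0;
    }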

Interrupt while placing process on the waiting queue

Suppose there is a process that is trying to enter a critical region, but since the region is occupied by some other process, the current process has to wait. Suppose that while the process is being added to the waiting queue of the semaphore, an interrupt arrives (e.g., the battery is exhausted). What will happen to that process and to the waiting queue?
I think that since the battery has run out, this interrupt will have the highest priority, so the context of the code that was placing the process on the waiting queue will be saved, and the interrupt service routine will be executed.
And then it will return to the process that was placing the process on the queue.
Please give some hints/suggestions for this question.
This is very hardware/OS dependent; however, a few thoughts:
As has been mentioned in the comments, a ‘battery finished’ interrupt may be considered as a special case, simply because the machine may turn off without taking any action, in which case the processes + queue will disappear. In general however, assuming a non-fatal interrupt and an OS that suspends / resumes correctly, I think it’s unlikely there will be any noticeable impact to the execution of either process.
In a multi-core setup, the process may not be immediately suspended. The interrupt could be handled by a different core and neither of the processes you’ve mentioned would be any the wiser.
In a pre-emptive multitasking OS, there is also no guarantee that the process adding itself to the queue would be resumed immediately after the interrupt; the scheduler could decide to activate the process currently in the critical section, or another process entirely. What happens when the process adding itself to the semaphore's wait queue resumes would depend on how far through the addition it was, how the queue is implemented, and what state the semaphore was in. It may never get onto the wait queue because it detects that the other process has already woken up and left the critical section, or it may complete adding itself to the queue and suspend as if nothing had happened.
In a single core/processor machine with a cooperative multitasking OS, I think the scenario you’ve described in your question is quite likely, with the executing process being suspended to handle the interrupt and then resumed afterwards until it finished adding itself to the queue and yielded.
It depends on the implementation, but conceptually the same operating-system code performs both the addition of the process to the wait queue and the management of interrupts, so your process, on its way to the wait queue, would instead be treated as having been interrupted.
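In kernel code the classic protection is to disable local interrupts around the wait-queue manipulation. A user-space analogue in C blocks asynchronous signals for the duration of the insertion (enqueue stands in for the real queue operation):
    #include <pthread.h>
    #include <signal.h>

    /* Block all async signals ("disable interrupts") while touching the
       shared wait queue, then restore the previous mask. */
    static void enqueue_waiter_atomically(void (*enqueue)(void))
    {
        sigset_t all, old;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, &old);
        enqueue();                                 /* add self to the wait queue */
        pthread_sigmask(SIG_SETMASK, &old, NULL);  /* pending signals deliver now */
    }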
For Java, see the API for Thread.interrupt()
Interrupts this thread.
Unless the current thread is interrupting itself, which is always permitted, the checkAccess method of this thread is invoked, which may cause a SecurityException to be thrown.
If this thread is blocked in an invocation of the wait(), wait(long), or wait(long, int) methods of the Object class, or of the join(), join(long), join(long, int), sleep(long), or sleep(long, int) methods of this class, then its interrupt status will be cleared and it will receive an InterruptedException.
If this thread is blocked in an I/O operation upon an interruptible channel then the channel will be closed, the thread's interrupt status will be set, and the thread will receive a ClosedByInterruptException.
If this thread is blocked in a Selector then the thread's interrupt status will be set and it will return immediately from the selection operation, possibly with a non-zero value, just as if the selector's wakeup method were invoked.
If none of the previous conditions hold then this thread's interrupt status will be set.
Interrupting a thread that is not alive need not have any effect.

Mechanics of Condition.Signal()

If I had threads as below
    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    int condition;              /* placeholder predicate */

    void *thread(void *arg)
    {
        while (1) {
            pthread_mutex_lock(&lock);
            while (!condition)  /* 'while', not 'if': wakeups can be spurious */
                pthread_cond_wait(&cond, &lock);
            /* blah blah */
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }
Well, I guess my main question is whether the signalling thread continues running for a while after cond.signal() or immediately gives up the CPU. In some cases I would like it not to release the lock until the woken thread finishes executing, and in other cases it may be better to release the lock immediately after signalling, without waiting for the woken thread to finish.
I understand that any threads waiting on the condition are woken up by Cond.signal(). But what does "woken up" mean: put on the ready queue, or does the scheduler make sure it runs immediately?
And what about the signalling thread: does it go to sleep on the same condition upon signalling, so that some other thread has to wake it up to make it release the lock?
This is in large part dependent on your environment (OS, library, language...) and how the synchronisation primitives are implemented. Since you haven't specified any I'll just give a general answer.
When putting a thread to sleep, most environments will remove it from the scheduler's ready queue, and the thread gives up its remaining CPU time. When woken up, the thread is simply placed back into the ready queue and resumes execution the next time the scheduler selects it from the queue.
It is also possible for the thread to do some active waiting (spinning) instead of being removed from the scheduler's ready queue. In this case the thread resumes execution right away. Note that since a thread can still run out of CPU time while spinning, it might have to wait to be rescheduled before waking up. This is a useful strategy if your critical sections are very small and you don't want to pay the scheduling overhead.
A hybrid approach would be to do a small amount of active waiting before removing the thread from the scheduler's ready queue.
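A sketch of that hybrid strategy in C: spin on an atomic flag for a bounded number of iterations, then fall back to genuinely sleeping on a condition variable (the spin count is arbitrary):
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static atomic_int flag;

    static void hybrid_wait(void)
    {
        for (int i = 0; i < 1000; i++) {     /* bounded active wait */
            if (atomic_load(&flag))
                return;                      /* cheap fast path, no syscall */
            sched_yield();
        }
        pthread_mutex_lock(&lock);           /* slow path: really sleep */
        while (!atomic_load(&flag))
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    static void hybrid_set(void)
    {
        pthread_mutex_lock(&lock);           /* taken so the wakeup isn't lost */
        atomic_store(&flag, 1);
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }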
As for the signalling thread: unless your environment explicitly specifies otherwise (I can't think of any reason it would, but you never know), I wouldn't expect a call to signal() to block in a way that requires waking it up. signal() might have to synchronize itself with other threads calling signal(), but those are implementation details and you shouldn't have to do anything about them.
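A small pthread demonstration of that last point: pthread_cond_signal() does not block the caller, and the woken waiter actually resumes only once it can reacquire the mutex:
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *waiter(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (!ready)
            pthread_cond_wait(&cond, &lock);  /* mutex released while waiting */
        printf("waiter: woke up and reacquired the lock\n");
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);
        sleep(1);                             /* let the waiter block first */

        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);           /* wakes the waiter... */
        printf("signaller: still running after signal, lock still held\n");
        pthread_mutex_unlock(&lock);          /* ...but it runs only after this */

        pthread_join(t, NULL);
        return 0;
    }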
