Please help explain this code for a semaphore channel in SystemC

Recently I have been studying SystemC and I have a question about the semaphore channel.
I found an example on ASIC World (http://www.asic-world.com/systemc/channels3.html) but am a little confused.
In the first 1 ns of this example, the first process, bus_semaphore(), runs and prints two "#1 ns ..." lines. At the same time, the semaphore value (bus) goes back to 2 via bus.post(), and the process then waits for the next clock posedge.
For the second process, do_read(), also at 1 ns, the first "#" line prints normally, but what about the trywait() in the if-statement that follows? The first and second processes are supposed to execute simultaneously, that is to say, we cannot determine whether trywait() executes before or after the first process's bus.post() statement, so we don't know whether the second "#" line of the second process will be printed.
But the answer shown at the bottom of the page implies that trywait() executes after bus.post(), so that the second "#..." statement does get printed. How can I be sure that trywait() will execute after bus.post()?
Thanks in advance!

I think I've figured it out. The order in which processes are defined in SC_CTOR really matters: they appear to execute in the reverse of the order in which they are registered in SC_CTOR.

You're correct that the example exhibits a number of race conditions. The fact that changing the order of process creation in the constructor affects the module's behavior is evidence of that.
It's not a great example. No design should rely on race conditions like that.
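To make this concrete, here is a minimal sketch of a race-free alternative (my own illustration, not the ASIC World code; the module and process names are made up). Both processes block on sc_semaphore::wait() instead of sampling with trywait(), so the protocol stays correct no matter which process the kernel happens to resume first:

#include <systemc.h>

SC_MODULE(bus_demo) {
    sc_in<bool> clk;
    sc_semaphore bus;    // models ownership of the bus

    void writer() {
        for (;;) {
            wait();      // next clk posedge (static sensitivity)
            bus.wait();  // block until the semaphore has a free slot
            cout << sc_time_stamp() << " writer owns the bus" << endl;
            wait();
            bus.post();  // release; ordering vs. the reader no longer matters
        }
    }

    void reader() {
        for (;;) {
            wait();
            bus.wait();  // also blocks, instead of polling with trywait()
            cout << sc_time_stamp() << " reader owns the bus" << endl;
            wait();
            bus.post();
        }
    }

    SC_CTOR(bus_demo) : bus(1) {
        SC_THREAD(writer);
        sensitive << clk.pos();
        SC_THREAD(reader);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 2, SC_NS);
    bus_demo demo("demo");
    demo.clk(clk);
    sc_start(40, SC_NS);
    return 0;
}

Swapping the SC_THREAD registration order may still reorder log lines that share a timestamp, but it can no longer change which operations succeed, which is exactly the property the trywait()-based example lacks.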

Related

Reproducing race conditions: continue all threads except one in gdb

I have found a race condition in my application code, and now I am wondering how I could create a test case for it: one that can run as a test script, deterministically triggers a specific effect of the race condition, and doesn't require a reproduction code patch and/or a manual gdb session.
The situation is a textbook example of a race condition: I have an address A, thread 1 wants to write to its location, and thread 2 wants to read from it.
So I was thinking of writing a gdb script that breaks when thread 1 is about to write to address A, writes some garbage into A, and then has all threads continue except thread 1. Then it fires the query that makes thread 2 read the garbage at A, which is guaranteed to cause a segmentation fault or something similarly observable.
I guess this is the reverse of set scheduler-locking on. I was hoping there exists a way to do something like this in a gdb script, but I am afraid there isn't. Hopefully somebody can prove me wrong.
I am also open to a non-gdb-based solution. The main point is that this race condition test can run automatically, without requiring source code modifications. Think of it as an integration test.
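For what it's worth, the effect described above can be reproduced deterministically without gdb, at the cost of compiling a small test-only seam into the code (which admittedly works against the no-modification goal). A minimal sketch, with all names made up: a barrier pins the interleaving so the reader is guaranteed to observe the writer's garbage value.

#include <cassert>
#include <barrier>   // C++20
#include <thread>

int shared_value = 0;                 // stands in for the data at address A

int main() {
    std::barrier sync_point(2);       // both threads rendezvous here

    std::thread writer([&] {
        shared_value = 0xDEAD;        // thread 1 writes the garbage...
        sync_point.arrive_and_wait(); // ...before the reader is released
    });

    int seen = 0;
    std::thread reader([&] {
        sync_point.arrive_and_wait(); // guaranteed to run after the write
        seen = shared_value;          // thread 2 observes the garbage
    });

    writer.join();
    reader.join();
    assert(seen == 0xDEAD);           // the racy effect, now reproducible
    return 0;
}

The same rendezvous trick works with a condition variable on pre-C++20 toolchains; the point is that the test forces the one interleaving you care about instead of hoping the scheduler produces it.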

Infinite loop inside 'do_select' function of Linux kernel

I am surprised that the Linux kernel has an infinite loop in its 'do_select' function implementation. Is that normal practice?
I am also interested in how file-change monitoring is implemented in the Linux kernel. Is it an infinite loop again?
select.c source code
This is not an infinite loop; that term is reserved for loops with no exit condition at all. This loop has its exit condition in the middle: http://lxr.linux.no/#linux+v3.9/fs/select.c#L482 This is a very common idiom in C, called "loop and a half", and there's a simple pseudocode example here: https://stackoverflow.com/a/10767975/388520 which clearly illustrates why you would want to do this. (That question talks about Java, but that's not important; this is a general structured-programming idiom.)
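To illustrate the shape (my own minimal example, nothing to do with the kernel): in a loop-and-a-half, part of the body has to run before the exit condition can even be evaluated, so the test lands in the middle rather than at the top.

#include <iostream>
#include <string>

int main() {
    std::string line;
    for (;;) {
        std::getline(std::cin, line);           // work before the test
        if (!std::cin || line == "quit")        // exit condition in the middle
            break;
        std::cout << "echo: " << line << '\n';  // work after the test
    }
    return 0;
}

Rewriting this as a plain while loop forces you either to duplicate the getline before the loop or to introduce a flag variable, which is the trade-off do_select is avoiding.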
I'm not a kernel expert, but this particular loop appears to have been written this way because the logic of the inner loop needs to run both before and after the call to poll_schedule_timeout at the very bottom of the outer loop. That code is checking whether there are any events to return; if there are already events to return when select is invoked, it's supposed to return immediately; if there aren't any initially, there will be when poll_schedule_timeout returns. So in normal operation the outer loop should cycle either 0.5 or 1.5 times. (There may be edge-case circumstances where the outer loop cycles more times than that.) I might have chosen to pull the inner loop out to its own function, but that might involve passing pointers to too many local variables around.
This is also not a spin loop, by which I mean, the CPU is not wasting electricity checking for events over and over again until one happens. If there are no events to report when control reaches the call to poll_schedule_timeout, that function (by, ultimately, calling __schedule) will cause the calling thread to block -- the CPU is taken away from that thread and assigned to another process that can do something useful with it. (If there are no processes that need the CPU, it'll be put into a low-power "halt" until the next interrupt fires.) When one of the events happens, or the timeout, the thread that called select will get "woken up" and poll_schedule_timeout will return.
On a larger note, operating system kernels often do things that would be considered strange, poor style, or even flat-out wrong in the service of other engineering goals (efficiency, code reuse, avoidance of race conditions that can only occur on some CPUs, ...). They are written by people who know exactly what they are doing and exactly how far they can get away with bending the rules. You can learn a lot from reading through OS code, but you probably shouldn't try to imitate it until you have a bit more experience. You wouldn't try to pastiche the style of James Joyce as your first exercise in creative writing, no? Same deal.

3 requirements for synchronization: why does this approach not work?

I'm trying to learn about synchronization, and I understand there are 3 conditions that need to be met for things to work properly:
1) mutual exclusion - no data is being corrupted
2) bounded waiting - a thread won't be stuck waiting forever
3) progress being made - the system as a whole is doing work, e.g. not just passing around whose turn it is
I don't fully understand why the code below doesn't work. According to my notes it has mutual exclusion but doesn't satisfy making progress or bounded waiting. Why? Each thread can do something, and as long as no thread crashes, every thread will get a turn.
The following are shared variables
int turn; // initially turn = 0
turn == i: Pi can enter its critical section
The code is
do {
    while (turn != i) { }  // wait
    // critical section
    turn = j;              // j signifies process Pj in contrast to Pi
    // remainder section
} while (true);
It's basically slide 10 of these notes.
I think the important bit is that, according to slide 6 of your notes, the 3 rules apply to the critical section of the algorithm and are exactly as follows:
Progress: If no one is in the critical section and someone wants in,
then those processes not in their remainder section must
be able to decide in a finite time who should go in.
Bounded Wait: All requesters must eventually be let into the critical
section.
How to break it:
Pi executes, and its remainder section runs indefinitely (nothing forbids this).
Pj runs in its entirety, setting turn := i, so it is now Pi's turn to run the critical section.
Pi is still running its remainder section, which runs indefinitely.
Pj is back at its critical section but never gets to run it, since Pi never reaches the point where it can give the turn back to Pj.
That breaks the progress rule: no one is in the critical section, Pj wants in, but it cannot be decided in finite time whether it may go in.
That breaks the bounded wait rule: Pj will never be let back into the critical section.
As Malvavisco correctly points out, if a process never releases a resource, no other process will have access to it. This is an uninteresting case, and typically it's considered trivial. (In practice, it turns out not to be -- which is why there's a lot of emphasis on being able to manage processes from outside, e.g. forcibly terminating a process with minimal ill effects.)
The slides are actually a little imprecise in their definitions. I find that this Wikipedia page on Peterson's algorithm (Algorithm #3 on slide 12) is more exact. Specifically:
Bounded waiting means that "there exists a bound or limit on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted"
Some thought experimentation makes it pretty clear that Algorithm #1 (slide 10) fails this: there is no bound on the number of times the critical section could be entered by either process if the process-switching timing is unfortunate. Suppose process 1 executes, enters the critical section, and from there on only process 2 is switched to while process 1 is in its critical section. Process 1 will never account for this. Peterson's algorithm will, as process 1 will forfeit its ability to enter the critical section if process 2 is waiting (and vice versa).
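For reference, here is a minimal sketch of Peterson's algorithm (the Algorithm #3 / slide 12 mentioned above); this is my own C++ illustration, not code from the slides, with seq_cst atomics standing in for the sequentially consistent memory the textbook version assumes.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> flag[2] = {false, false};  // flag[i]: Pi wants to enter
std::atomic<int>  turn{0};
int counter = 0;                             // shared data being protected

void worker(int i) {
    int j = 1 - i;
    for (int k = 0; k < 100000; ++k) {
        flag[i].store(true);   // announce intent
        turn.store(j);         // defer to the other process
        while (flag[j].load() && turn.load() == j) { }
                               // Pj can overtake Pi at most once: bounded waiting
        ++counter;             // critical section
        flag[i].store(false);  // exit protocol
        // remainder section: even if a process loops forever here,
        // the other can still get in, unlike Algorithm #1
    }
}

int main() {
    std::thread t0(worker, 0), t1(worker, 1);
    t0.join();
    t1.join();
    std::cout << counter << '\n';  // prints 200000: no lost updates
    return 0;
}

The key difference from Algorithm #1 is that turn is only consulted when both processes want in; a process that is off in its remainder section leaves its flag false and blocks no one.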

Having intercommunicating asynchronous processes in wxPython

I am working on a big project that places a high priority on performance. I have a little experience using wxPython to create windows and dialog boxes for software, but I have no experience getting processes to work in parallel over the course of a single program.
So basically, what I want to accomplish is the following:
I want one main class that controls the high level program. It sets up a configuration either from a config file or from user input. This much I have accomplished on my own.
I need PROCESS #1 to read in a file and a list of commands, execute the commands, and then pass the modified file to PROCESS #2 (this requires that PROCESS #2 is ready to accept new input.) Once the file is passed, PROCESS #1 would begin work on the next set of inputs and wait for PROCESS #2 to finish before the cycle repeats.
PROCESS #2 takes input from PROCESS #1 and writes output to a log file. Once the output is complete, it waits for the next set of output from PROCESS #1.
I know how to use wxTimers and the events associated with them, but what I have found is that a timer event will not fire if the program is otherwise occupied (like in the middle of a method).
I have seen threads about "threading" and "Pool", but the terminology tends to go over my head, and I haven't gotten any of that sort of stuff to work.
If anybody can point me in the right direction, I would be greatly appreciative.
If you use threads, then I think this would be fairly easy to do. Here's what I would suggest:
Create a button (or some other widget) to execute process #1 in a thread. The thread itself will run BOTH processes. Here's some pseudo-code that might help:
# this is in your thread code:
result = self.call_process_1(args)   # read the file, execute the command list
self.call_process_2(result)          # write the result to the log file
This will allow you to start another process #1/#2 pair with a new set of commands every time you press the button. Since each pair is encapsulated in its own thread, a new pair doesn't have to wait for a previous process #2 to finish. You will probably need to write to separate logs for the output to make sense, but you can label each log with a timestamp and a thread number or a uuid.
Depending on how many of these processes you need to do, you might need to look into setting up a cluster that's driven with celery or some such. But I think this is a good starting place.

JCL Return code FLUSH

//STE1 IF RC EQ 1 THEN
....
//ENDIF
The return code is giving me FLUSH, and all the other steps in the job are not executing because of this.
Can anyone help me with this?
Is it because I haven't coded an ELSE?
If you have conditions for running steps, either COND or IF, and the condition determines that a step is not run, then there is no "Return Code" from the step. The step is not run, it is FLUSHed, so there is no RC.
If the rest of the steps in your JOB are expecting to run on a RC=0, then you will have to change something.
Consult the JCL Reference; you have other options, like EVEN and ONLY, but these may not suit (I haven't a clue, as I don't know exactly what you are trying to do).
//STEPA
...
//STEPB
...
//STEPC
If STEPB depends on STEPA, and so will not run when STEPA gives a zero RC, you need to decide what is needed for STEPC. You have three situations: STEPB not run; STEPB runs with a zero RC; STEPB runs with a non-zero RC. What should STEPC do in each case?
If STEPC has no conditional processing, then it will just run whatever happens with STEPB (except after an abend, if EVEN is not coded).
If STEPC needs to run conditionally, you have to decide what it is about STEPA and STEPB which tells you how to run it.
If your JOB is big, and the conditions are complex, consider splitting it into separate JOBs and letting the Scheduler take care of it.
If your JCL is destined for Production, there should be JCL Standards to follow, and if you are unclear how to do something, you should consult those responsible for the Production JCL, they will tell you how they want it, and whether you even need be concerned about it (as they may well just re-write from scratch anyway).
When a particular step in a JOB is skipped due to the COND parameter or any other reason, what will be the return code displayed in the spool?
