How to use the uvm_test_done objection in a test sequence?

I am doing the following in my UVM testbench to create sequences and start the test.
I have several sequences; I'm copying a code snippet from one of them below.
Inside body():
`uvm_create_on(my_seq, p_sequencer.my_sequencer)
if (!my_seq.randomize())
  `uvm_error("SEQ", "randomize() failed")
`uvm_send(my_seq)
In my test, I do the following to start a sequence:
task run_phase(uvm_phase phase);
  .....
  phase.raise_objection(this);
  seq.start(env.virtual_sequencer);
  phase.drop_objection(this);
endtask
Now, if I do this, the test starts and ends at time zero. What I mean is, the DUT is never driven by my sequence. If I make the following change, then it seems to work fine:
Option 1: changing run_phase in the test:
task run_phase(uvm_phase phase);
  .....
  phase.raise_objection(this);
  seq.start(env.virtual_sequencer);
  #100000; // Adding delay here.
  phase.drop_objection(this);
endtask
If I do this, then the test starts, I can see the DUT being driven, and everything works as expected. However, the test always ends at time 100000, even if the sequence is not done sending all its transactions to the DUT. That's no good, because I don't know in advance how long my DUT will take to complete a test. So I instead tried something like this:
Option 2: keeping the default code in the test (no delay in run_phase) and making the following change inside the body of my_seq:
Inside body():
uvm_test_done.raise_objection(this);
`uvm_create_on(my_seq, p_sequencer.my_sequencer)
if (!my_seq.randomize())
  `uvm_error("SEQ", "randomize() failed")
`uvm_send(my_seq)
uvm_test_done.drop_objection(this);
If I do this, then it works fine. Is this the proper way of handling objections? Going back to my original implementation: I assumed that my sequence is blocking, i.e. that whenever I start a sequence in the test's run_phase using start(...), execution waits at that line until the sequence is done sending all its transactions. That's why I didn't add any delay in my original code.
I think I'm missing something here. Any help will be greatly appreciated.

If you're doing a fork...join_none in your main sequence, then its body() task (which is called by start()) won't block until the forked processes finish. If you need a fork...join_none for some kind of synchronization, you should also implement a mechanism for knowing when the forked-off processes terminate, so that you can stall body() until then. Example:
// inside your main sequence
event forked_finished;

task body();
  fork
    begin
      `uvm_do(some_sub_seq)
      -> forked_finished;  // fire only after the sub-sequence completes
    end
  join_none

  // do some other stuff here in parallel

  // make sure that the forked process also finished
  @forked_finished;
endtask
This code assumes that the forked process finishes after your other code does; if the event fires first, the @ will hang. In production code you probably wouldn't rely on that assumption and would use a uvm_event, so that you can first test whether the event has already triggered before waiting.
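A minimal sketch of that safer variant (same structure as above; uvm_event::wait_on() falls straight through if the event has already been triggered):

// inside your main sequence
uvm_event forked_finished = new("forked_finished");

task body();
  fork
    begin
      `uvm_do(some_sub_seq)
      forked_finished.trigger();
    end
  join_none

  // do some other stuff here in parallel

  // wait_on() returns immediately if the trigger already happened,
  // so there is no race with the forked process
  forked_finished.wait_on();
endtask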
Because body() then waits until everything is finished w.r.t. stimulus, you shouldn't have any problem raising an objection before starting this sequence and lowering it once it's done.

You really have to consider the semantics of your sequence. Usually I expect a sequence's body() not to return until the sequence is finished. So doing a fork...join_none would be undesirable, because the caller of the sequence would have no way of knowing when the sequence has completed, which is exactly what you see in your test.
The solution is to not have my_seq::body return until it is complete.
If the caller of my_seq needs to do something in parallel with my_seq, then it is the caller's responsibility to do the appropriate fork, as in the sketch below.
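A minimal sketch of that caller-side fork, assuming a second, hypothetical sequence other_seq running on another sequencer:

// in the test: each body() blocks until its stimulus is done,
// so the caller forks when it wants two sequences in parallel
task run_phase(uvm_phase phase);
  phase.raise_objection(this);
  fork
    seq.start(env.virtual_sequencer);
    other_seq.start(env.other_sequencer);
  join
  phase.drop_objection(this);
endtask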

Related

F# / MailBoxProcessor is unresponsive to PostAndReply under nearly 100% load

I have a MailBoxProcessor, which does the following things:
Main loop (type AsyncRunner: https://github.com/kkkmail/ClmFSharp/blob/master/Clm/ContGen/AsyncRun.fs#L257 – the line number may change as I keep updating the code). It generates some "models", compiles each of them into a model-specific folder, spawns them as external processes, and then each model uses WCF to "inform" the AsyncRunner about its progress by calling updateProgress. A model may take several days to run. Once any of the models completes, the runner generates/spawns more. It is designed to run at 100% processor load (but with priority ProcessPriorityClass.BelowNormal), though I can specify a smaller number of logical cores to use (some number between 1 and Environment.ProcessorCount). Currently I "async"-ed almost everything inside the MailBoxProcessor by using … |> Async.Start, to ensure that I "never ever" block the main loop.
I can "ask" the runner (using WCF) about its state by calling member this.getState () = messageLoop.PostAndReply GetState.
OR I can send some commands to it (again using WCF), e.g. member this.start(), member this.stop(), …
Here is where it gets interesting. Everything works! However, if I run a "monitor", which asks for the state by effectively calling PostAndReply (exposed as this.getState ()) in an infinite loop, then after a while it sort of hangs: it does eventually return, but with some unpredictably large delays (like a few minutes). At the same time, I can issue commands and they return fast, while getState still has not returned.
Is it possible to make it responsive at nearly 100% load? Thanks a lot!
I would suggest not async-ing anything (other than your spawning of processes) in your main program, since your code already creates additional processes. Your main loop has to finish its current iteration before it can process the GetState() message, so anything heavy inside the loop delays the reply.
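A minimal sketch of a loop shaped that way, with illustrative message and state types rather than the original ClmFSharp code:

type Message =
    | GetState of AsyncReplyChannel<int>
    | RunModel

let runner = MailboxProcessor.Start(fun inbox ->
    let rec loop runningCount = async {
        let! msg = inbox.Receive()
        match msg with
        | GetState channel ->
            channel.Reply runningCount   // reply immediately; never block here
            return! loop runningCount
        | RunModel ->
            // only the external work is detached; the loop body stays cheap
            async { (* compile / spawn the model here *) return () } |> Async.Start
            return! loop (runningCount + 1)
    }
    loop 0)

let state = runner.PostAndReply GetState   // stays responsive under load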

Thread with a forever loop with one inherently async operation

I'm trying to understand the semantics of async/await in an infinitely looping worker thread started inside a Windows service. I'm a newbie at this, so give me some leeway here; I'm trying to understand the concept.
The worker thread will loop forever (until the service is stopped) and it processes an external queue resource (in this case a SQL Server Service Broker queue).
The worker thread uses config data which could be changed while the service is running, by receiving commands on the main service thread via some kind of IPC. Ideally the worker thread should process those config changes while waiting for external queue messages to be received. Reading from Service Broker is inherently asynchronous: you literally issue a "waitfor receive" T-SQL statement with a receive timeout.
But I don't quite understand the flow of control I'd need to use to do that.
Let's say I used a ConcurrentQueue to pass config-change messages from the main thread to the worker thread. Then, if I did something like...
void ProcessBrokerMessages() {
    foreach (BrokerMessage m in ReadBrokerQueue()) {
        ProcessMessage(m);
    }
}

// ... inside the worker thread:
while (!serviceStopped) {
    // drain any pending config changes first
    while (configChangeConcurrentQueue.TryDequeue(out var configChange)) {
        ProcessConfigChange(configChange);
    }
    ProcessBrokerMessages();
}
...then the loop that processes config changes and the broker-processing function need to "take turns" to run. Specifically, the config-change-processing loop won't run while the potentially long-running broker receive command is running.
My understanding is that simply turning the ProcessBrokerMessages() into an async method doesn't help me in this case (or I don't understand what will happen). To me, with my lack of understanding, the most intuitive interpretation seems to be that when I hit the async call it would go off and do its thing, and execution would continue with a restart of the outer while loop... but that would mean the loop would also execute the ProcessBrokerMessages() function over and over even though it's already running from the invocation in the previous loop, which I don't want.
As far as I know this is not what would happen, though I only "know" that because I've read something along those lines. I don't really understand it.
Arguably the existing flow of control (i.e., without the async call) is OK: if config changes affect the ProcessBrokerMessages() function (which they can), then the config can't be changed while the function is running anyway. But that seems like a point specific to this particular example. I can imagine a case where config changes affect something else the thread does, unrelated to the ProcessBrokerMessages() call.
Can someone improve my understanding here? What's the right way to have
a block of code which loops over multiple statements
where one (or some) but not all of those statements are asynchronous
and the async operation should only ever be executing once at a time
but execution should keep looping through the rest of the statements while the single instance of the async operation runs
and the async method should be called again in the loop if the previous invocation has completed
It seems like I could use a BackgroundWorker to run the receive statement, which flips a flag when its job is done, but it also seems weird to me to create a thread specifically for processing the external resource and then, within that thread, create a BackgroundWorker to actually do that job.
You could use a CancellationToken. Most async functions accept one as a parameter, and they cancel the call (the returned Task, actually) if the token is signaled. SqlCommand.ExecuteReaderAsync (which you're likely using to issue the WAITFOR RECEIVE) is no different. So:
Have a cancellation token passed to the 'execution' thread.
The settings monitor (the one responding to IPC) also keeps a reference to the token source.
When a config change occurs, the monitor applies the change and then signals (cancels) the token.
The execution thread aborts any pending WAITFOR (or any pending processing in the message loop, actually; you should use the cancellation token everywhere), and any transaction is aborted and rolled back.
Restart the execution thread with a new cancellation token; it will pick up the new config. A sketch follows this list.
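A rough sketch of that pattern; the class and the ProcessBrokerMessagesAsync stub are illustrative, not from the original post:

using System;
using System.Threading;
using System.Threading.Tasks;

class BrokerWorker
{
    private CancellationTokenSource _cts = new CancellationTokenSource();
    private volatile bool _serviceStopped;

    // main service thread: called when IPC delivers a config change
    public void OnConfigChanged()
    {
        var old = Interlocked.Exchange(ref _cts, new CancellationTokenSource());
        old.Cancel();    // aborts the pending WAITFOR through the token
        old.Dispose();
    }

    // the worker's loop: each iteration runs under a fresh token
    public async Task RunAsync()
    {
        while (!_serviceStopped)
        {
            var token = _cts.Token;
            try
            {
                await ProcessBrokerMessagesAsync(token);
            }
            catch (OperationCanceledException)
            {
                // config changed: fall through and loop with the new settings
            }
        }
    }

    private Task ProcessBrokerMessagesAsync(CancellationToken token)
    {
        // in real code, pass `token` to SqlCommand.ExecuteReaderAsync(token)
        return Task.CompletedTask;
    }
}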
So in this particular case I decided to go with a simpler shared-state solution. This is of course a less sound solution in principle, but since there's not a lot of shared state involved, and since the overall application isn't very complicated, it seemed forgivable.
My implementation is to use locking, but to have writes to the config from the service's main thread wrapped in a Task.Run(). The reader doesn't bother with a Task, since the reader is already on its own thread. Roughly:
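A bare-bones sketch of that arrangement; Config here is a stand-in for the real settings type:

using System.Threading.Tasks;

class Config { public int PollIntervalMs = 1000; }

class ConfigHolder
{
    private readonly object _lock = new object();
    private Config _config = new Config();

    // main service thread: fire-and-forget write so IPC handling never blocks
    public void Update(Config newConfig) =>
        Task.Run(() => { lock (_lock) { _config = newConfig; } });

    // the worker is already on its own thread, so it just locks and reads
    public Config Read() { lock (_lock) { return _config; } }
}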

SystemVerilog : fork - join and writing parallel testbenches

I am following the testbench example at this link:
http://www.verificationguide.com/p/systemverilog-testbench-example-00.html
I have two questions regarding fork-join statements. The test environment has the following tasks for initiating the test:
task test();
  fork
    gen.main();
    driv.main();
  join_any
endtask

task post_test();
  wait(gen.ended.triggered);
  wait(gen.repeat_count == driv.no_transactions);
endtask

task run;
  pre_test();
  test();
  post_test();
  $finish;
endtask
My first question is: why do we wait for the generator event to be triggered in the post_test() task? Why not instead do a regular fork...join which, as far as I understand, will wait for both threads to finish before continuing?
I read another Stack Overflow question (System Verilog fork join - Not actually parallel?) that said these threads are not actually executed in parallel in the CPU sense, but only in the simulation sense.
My second question is: what is the point of fork...join if the threads are not actually executed in parallel? There would be no performance benefit, so why not follow a sequential algorithm like:
while true:
Create new input
Feed input to module
Check output
To me this seems much simpler than the testbench example.
Thanks for your help!
Without having the code for gen and driv it is difficult to say. However, most likely driv and gen are communicating with each other in some manner, i.e. gen produces data which driv consumes and drives onto something else.
If gen and driv were written in a generate-input/consume-input fashion, then your loop would make sense. Most likely, however, they generate and consume data based on events and cannot easily be split into such functions. Something like the following is usually much cleaner:
gen:
forever begin
  wait(some_event);
  generateData();
  prepareForTheNextEvent();
end

driv:
forever begin
  wait(gen_ready);
  driveData();
end
So, for the above reason, you cannot run them sequentially; they must run in parallel. For all programming purposes they are running in parallel. In more detail, they run in the same single simulator thread, but the simulator schedules their execution based on events generated during the simulation. So you need fork.
As for the join_any: I think the test in your case is supposed to finish when either of the threads is done. However, the driver also has to finish all outstanding jobs before it can exit, which is why those wait statements are in the post_test() task.
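To make the handshake concrete, here is a hedged sketch in the same spirit as the tutorial (the transaction class and task names are assumptions, not the tutorial's exact code):

mailbox #(transaction) gen2driv = new(1);  // bound of 1: put() blocks until get()
event ended;
int repeat_count = 10;
int no_transactions;

task gen_main();
  repeat (repeat_count) begin
    transaction tr = new();
    void'(tr.randomize());
    gen2driv.put(tr);   // waits here until the driver consumes
  end
  -> ended;             // the event post_test() waits on
endtask

task driv_main();
  forever begin
    transaction tr;
    gen2driv.get(tr);   // waits here until the generator produces
    // drive tr onto the DUT interface here
    no_transactions++;
  end
endtask

// driv_main() never returns, so fork...join would hang forever;
// join_any lets test() move on as soon as gen_main() is done, and
// post_test()'s wait statements let the driver drain what's left.
task test();
  fork
    gen_main();
    driv_main();
  join_any
endtask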

How do I Yield() to another thread in a Win8 C++/Xaml app?

Note: I'm using C++, not C#.
I have a bit of code that does some computation, and several bits of code that use the result. The bits that use the result are already in tasks, but the original computation is not -- it's actually in the callstack of the main thread's App::App() initialization.
Back in the olden days, I'd use:
while (!computationIsFinished())
std::this_thread::yield(); // or the like, depending on API
Yet this doesn't seem to exist for Windows Store apps (aka WinRT, pka Metro-style). I can't use a continuation because the bits that use the results are unconnected to where the original computation takes place -- in addition to that computation not being a task anyway.
Searching found Concurrency::Context::Yield(), but Context appears not to exist for Windows Store apps.
So... say I'm in a task on the background thread. How do I yield? Especially, how do I yield in a while loop?
First of all, doing expensive computations in a constructor is not usually a good idea, even less so when it's the "App" class. Also, doing heavy work on the main (ASTA) thread is pretty much forbidden in the WinRT model.
You can use concurrency::task_completion_event<T> to interface code that isn't task-oriented with other pieces of dependent work.
E.g. in the long serial piece of code:
...
task_completion_event<ComputationResult> tce;
task<ComputationResult> computationTask(tce);
// This task is now tied to the completion event.
// Pass it along to interested parties.
try
{
auto result = DoExpensiveComputations();
// Successfully complete the task.
tce.set(result);
}
catch(...)
{
// On failure, propagate the exception to continuations.
tce.set_exception(std::current_exception());
}
...
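On the consuming side, the dependent bits can then attach continuations to that task; UseResult is illustrative:

// dependent code, wherever the task handle was passed along
computationTask.then([](ComputationResult result)
{
    // runs once tce.set(result) is called
    UseResult(result);
});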
Should work well, but again, I recommend breaking out the computation into a task of its own, and would probably start by not doing it during construction... surely an anti-pattern for a responsive UI. :)
Qt simply uses Sleep(0) in their WinRT yield implementation.
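In the same spirit, a busy-wait like the original one could be approximated on a background thread (never the ASTA thread); computationIsFinished() is the function from the question:

#include <windows.h>

void WaitForComputation()
{
    while (!computationIsFinished())
        Sleep(0);  // give up the remainder of the current time slice
}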

Lua Script coroutine

Hi, I need some help with my Lua script. I have a script here that runs a server-like application (an infinite loop). The problem is that it doesn't execute the second coroutine.
Could you tell me what's wrong? Thank you.
function startServer()
  print( "...Running server" )
  --run a server like application infinite loop
  os.execute( "server.exe" )
end

function continue()
  print("continue")
end

co = coroutine.create( startServer() )
co1 = coroutine.create( continue() )
Lua has cooperative multithreading. Threads are not switched automatically but must yield to each other. When one thread is running, every other thread waits for it to finish or yield. Your first thread in this example runs server.exe which, I assume, never finishes until interrupted. Thus the second thread never gets its turn to run.
You are also starting the threads wrong. In your example you're not running any threads at all: you execute the function and then try to create a coroutine from its output, which naturally fails. But since you never get back from server.exe, you haven't noticed this problem yet. Remove the brackets after startServer and continue to fix it.
As already noted, there are several issues with the script that prevent you from getting what you want:
os.execute("...") blocks until the command completes, and in your case it never completes (it runs an infinite loop). Solution: detach that process from yours by using something like io.popen() instead of os.execute().
co = coroutine.create( startServer() ) doesn't create a coroutine in your case. coroutine.create accepts a function reference, and you pass it the result of calling startServer, which is nil. Solution: use co = coroutine.create( startServer ) (note that the parentheses are dropped, so it's no longer a function call).
You are not yielding from your coroutines; if you want several coroutines to work together, they need to cooperate by giving control to each other when appropriate. That's what the yield command is for, and that's why it's called non-preemptive multithreading. Solution: use a combination of resume and yield calls after you create your coroutine, as in the sketch after this list.
startServer doesn't need to be a coroutine as you are not giving control back to it; its only purpose is to start the server.
In your case, the solution may not even need coroutines as all you need to do is: (1) start the server and let it detach from your process (for example, using popen) and (2) work with your process using whatever communication protocol it requires (pipes, sockets, etc.).
There are more complex and complete solutions (like LuaLanes) and also several good descriptions on creating simple coroutine dispatchers.
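Putting the first three fixes together, a minimal corrected version of the script might look like this (the yield placement is only illustrative):

function startServer()
  print( "...Running server" )
  -- io.popen starts server.exe without blocking, unlike os.execute
  local handle = io.popen( "server.exe" )
  coroutine.yield( handle )  -- hand control back to the main chunk
end

function continue()
  print("continue")
end

-- pass the function values themselves; no parentheses
co = coroutine.create( startServer )
co1 = coroutine.create( continue )

coroutine.resume( co )   -- runs startServer up to its yield
coroutine.resume( co1 )  -- now the second coroutine gets its turn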
Your coroutine is not yielding.
