Why is the installed exception handler not called immediately in Jolie?

I have a scenario where I want to refresh a resource, but I also want to be able to terminate the refresh.
I have the following interfaces:
interface terminate {
    OneWay: terminate(void)
}
interface refreshAll {
    RequestResponse: refreshAll(void)(void)
}
And the resource:
include "interface.iol"
include "console.iol"
inputPort dummyInput {
Location: "socket://localhost:8002"
Protocol: sodep
Interfaces: refreshAll
}
init{
registerForInput#Console()()
}
main
{
refreshAll( number )( result ) {
println#Console("refresh")();
in(req);
result = void
}
}
And the service I run if I want to terminate:
include "interface.iol"
outputPort term {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: terminate
}
main
{
terminate#term()
}
And the program coordinating everything:
include "interface.iol"
include "console.iol"
inputPort terminate {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: terminate
}
outputPort resource {
Location: "socket://localhost:8002"
Protocol: sodep
Interfaces: refreshAll
}
main
{
scope(hej){
install(
hello => {
println#Console("terminate")()
}
);
{
refreshAll#resource()()
}|
{
terminate();
throw(hello)
}
}
}
Why is the exception not thrown immediately when terminate is received?
That is, in the coordination program the exception handler is not called when terminate is received; it is only invoked after refreshAll@resource()() has finished.
How can I write this so that refreshAll is terminated upon receiving a terminate?

In Jolie, a fault (what you're triggering with the throw primitive) does not interrupt a pending solicit-response call (your refreshAll@resource()()): if the call hasn't started yet, it's not started at all, but if the request has already been sent to the intended receiver (resource here), then Jolie waits for the response (or a timeout) before propagating the fault to the enclosing scope (hej here). That's because the result of the solicit-response might be important for the fault management logic.
If you don't care about the result of the solicit-response in your fault handler (and here you don't), then you can just make a little adapter to handle the solicit-response call in two steps, effectively making it asynchronous at the application level (at the implementation level, most Jolie stuff is asynchronous anyway, but here you want to explicitly see that the communication happens in two steps in your program logic).
I modified your coordinator program as follows and then everything worked as you wanted:
include "interface.iol"
include "console.iol"
inputPort terminate {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: terminate
}
outputPort resource {
Location: "socket://localhost:8002"
Protocol: sodep
Interfaces: refreshAll
}
// New stuff from here
interface RefreshAsyncAdapterIface {
OneWay:
refreshAllAsync(void)
}
interface RefreshAsyncClientIface {
OneWay:
refreshCompleted(void)
}
inputPort CoordinatorInput {
Location: "local://Coordinator"
Interfaces: RefreshAsyncClientIface
}
outputPort Coordinator {
Location: "local://Coordinator"
Interfaces: RefreshAsyncClientIface
}
// Adapter service to split the solicit-response in two steps
service RefreshAsyncAdapter {
Interfaces: RefreshAsyncAdapterIface
main {
refreshAllAsync();
refreshAll#resource()();
refreshCompleted#Coordinator()
}
}
main
{
scope(hej){
install(
hello => {
println#Console("terminate")()
}
);
{
// Split the call in send and receive steps
refreshAllAsync#RefreshAsyncAdapter();
refreshCompleted()
}|
{
terminate();
throw(hello)
}
}
}
This pattern appears quite often, so we'll probably make it even easier in the future.
References:
Jolie documentation on fault handling: https://jolielang.gitbook.io/docs/fault-handling/termination_and_compensation
The paper describing what I've reported: https://doi.org/10.1109/ECOWS.2008.20
The formal semantics of faults in Jolie: https://doi.org/10.3233/FI-2009-143

Related

Node.js: How to implement a simple and functional mutex mechanism to avoid race conditions that bypass the guard statement in simultaneous actions

In the following class, the _busy field acts as a semaphore, but in "simultaneous" situations it fails to guard!
class Task {
    _busy = false;

    async run(s) {
        try {
            if (this._busy)
                return;
            this._busy = true;
            await payload();
        } finally {
            this._busy = false;
        }
    }
}
The sole purpose of run() is to execute payload() exclusively, denying all other invocations while it's still being carried out. In other words, when "any" of the invocations reaches the run() method, I want it to only allow the first one to go through and lock it down (denying all the others) until it's done with its payload; "finally", it opens up once it's done.
In the implementation above, the race condition does occur when the run() method is invoked simultaneously from various parts of the app. Some of the invocations (more than 1) make it past the "guarding" if statement, since none of them has yet reached the this._busy = true line to lock it down (they get past simultaneously). So, the current implementation doesn't cut it!
I just want to deny the simultaneous invocations while one of them is already being carried out. I'm looking for a simple solution that resolves only this issue. I've designated the async-mutex library as a last resort!
So, how to implement a simple "locking" mechanism to avoid race conditions that bypass the guard statement in simultaneous actions?
For more clarification, as per the comments below, the following is almost the actual Task class (without the irrelevant parts).
class Task {
    _cb;
    _busy = false;
    _count = 0;

    constructor(cb) {
        this._cb = cb;
    }

    async run(params = []) {
        try {
            if (this._busy)
                return;
            this._busy = true;
            this._count++;
            if (this._count > 1) {
                console.log('Race condition!', 'count:', this._count);
                this._count--;
                return;
            }
            await this._cb(...params);
        } catch (err) {
            await someLoggingRoutine();
        } finally {
            this._busy = false;
            this._count--;
        }
    }
}
I do encounter the Race condition! log. Also, all the task instances are local to a simple driver file (the instances are not passed down to any other function; they only live as local instances in a single function). They are created in the following form:
const t1 = new Task(async () => { await doSth1(); /*...*/ });
const t2 = new Task(async () => { await doSth2(); /*...*/ });
const t3 = new Task(async () => { await doSth3(); /*...*/ });
// ...
I call them in various library events, some of which fire concurrently, causing the "race condition" issue; e.g.:
someLib.on('some-event', async function() { /*...*/ t1.run().then(); /*...*/ });
anotherLib.on('event-2', async function() { /*...*/ t1.run().then(); /*...*/ });
Oh god, now I see it. How could I have missed this for so long! Here is your implementation:
async run() {
    try {
        if (this._busy)
            return;
        ...
    } finally {
        this._busy = false;
    }
}
As per the documentation:
The statements in the finally block are executed before control flow exits the try...catch...finally construct. These statements execute regardless of whether an exception was thrown or caught.
Thus, when it's busy and the flow reaches the guarding if, it logically encounters the return statement. The return statement causes the flow to exit the try...catch...finally construct; so, as per the documentation, the statements in the finally block are executed regardless: this._busy = false; runs, opening the thing up!
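You can see this behaviour in isolation with a tiny standalone snippet (my own demonstration, not from the original code; paste it into Node or a browser console):
function demo() {
    try {
        return 'returned from try';
    } finally {
        console.log('finally still runs'); // executes even though the try block just returned
    }
}
console.log(demo()); // logs 'finally still runs', then 'returned from try'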
So, the first call of run() sets this._busy to true, then enters the critical section with its long-running callback. While this callback is running, another event causes run() to be invoked. This second call is rightly blocked from entering the critical section by the guarding if statement:
if (this._busy) return;
Encountering the return statement to exit the function (and thus the try...catch...finally construct) causes the statements in the finally block to be executed; thus, this._busy = false resets the flag, even though the first callback is still running! Now suppose a third call to run() from yet another event is invoked. Since this._busy has just been set to false, the flow happily enters the critical section again, even though the first callback is still running! In turn, it sets this._busy back to true. In the meantime, the first callback finishes and reaches the finally block, where it sets this._busy = false again, even though the second callback is still running. So the next call to run() can enter the critical section again with no problems... And so on and so forth...
So, to resolve the issue, the guard check (and the locking of the flag) should be moved outside of the try block:
async run() {
    if (this._busy) return;
    this._busy = true;
    try { ... }
    finally {
        this._busy = false;
    }
}
This works because JavaScript runs these synchronous statements to completion on a single thread: the guard check and the this._busy = true assignment execute back to back, with no await in between where another invocation could interleave. And an early return now happens before the try is ever entered, so the finally block no longer resets the flag for denied calls.
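A quick way to convince yourself, assuming the fix above is applied to the full Task class from the question (delay is a hypothetical helper, not from the original code):
const delay = ms => new Promise(res => setTimeout(res, ms));
const t = new Task(async () => { await delay(1000); console.log('payload done'); });
t.run(); // first call: passes the guard, locks _busy, runs the payload
t.run(); // overlapping call: sees _busy === true and returns before ever entering try
// ~1s later, 'payload done' is logged exactly once; only the first call's finally resets _busy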

Detecting a ConnectionReset in Rust, instead of having the thread panic

So I have a multi-threaded program in Rust which sends GET requests to my website, and I'm wondering how I can detect a ConnectionReset.
What I'm trying to do is, after the request, check if there was a ConnectionReset, and if there was, wait for a minute so the thread doesn't panic.
The code I'm using right now:
let mut req = reqwest::get(&url).unwrap();
And after that is executed, I want to check if there was a ConnectionReset and then println!("Connection Error"), instead of having the thread panic.
The error that I want to detect:
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value:
Error { kind: Io(Custom { kind: Other, error: Os { code: 10054, kind: ConnectionReset,
message: "An existing connection was forcibly closed by the remote host." } }),
url: Some("https://tayhay.vip") }', src\main.rs:22:43
I also read something about std::panic::catch_unwind, but I am not sure if that's the right way to go.
.unwrap() means literally: "panic in case of error". If you don't want to panic, you will have to handle the error yourself. You have three solutions here, depending on code you haven't shown us:
Propagate the error up with the ? operator and let the calling function handle it (see the sketch after this list).
Have some default value ready to use (or create on the fly) in case of error:
let mut req = reqwest::get (&url).unwrap_or (default);
or
let mut req = reqwest::get (&url).unwrap_or_else (|_| { default });
(this probably doesn't apply in this specific case since I don't know what would make a sensible default here, but it applies in other error handling situations).
Have some specific error handling code:
match reqwest::get (&url) {
    Ok (mut req) => { /* Process the request */ },
    Err (e) => { /* Handle the error */ },
}
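For option 1, here is a minimal sketch, assuming the old blocking reqwest::get from your code (in newer reqwest versions the blocking client lives under reqwest::blocking); the caller receives the Err (e.g. the ConnectionReset) and can wait a minute instead of panicking:
fn fetch(url: &str) -> Result<reqwest::Response, reqwest::Error> {
    let resp = reqwest::get(url)?; // on error, return Err to the caller instead of panicking
    Ok(resp)
}

fn main() {
    let url = String::from("https://tayhay.vip"); // URL taken from the question's error message
    match fetch(&url) {
        Ok(mut resp) => { /* process the response */ }
        Err(e) => {
            println!("Connection Error: {}", e);
            std::thread::sleep(std::time::Duration::from_secs(60)); // wait a minute, as you wanted
        }
    }
}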
For more details, the Rust book has a full chapter on error handling.

Reactive Extensions .NET Core: Observing on the main thread

I'm trying to do a lot of network operations in parallel, and I want to set a timeout for each operation.
Since Parallel.ForEach doesn't have an easy timeout option, I'm using System.Reactive.
This is my code:
public void networkOps(List<MacCpe> source, Action<List<Router>, List<Exception>> onDone) {
    var routers = new List<Router>();
    var exceptions = new List<Exception>();

    Observable.Defer(() => source.ToObservable())
        .ObserveOn(Scheduler.CurrentThread)
        .SubscribeOn(Scheduler.Default)
        .SelectMany(it =>
            Observable.Amb(
                Observable.Start(() => {
                    switch (it.type) {
                        case AntennaType.type1: {
                            // network stuff
                        }
                        break;
                        case AntennaType.type2: {
                            // network stuff
                        }
                        break;
                        case AntennaType.type3: {
                            // network stuff
                        }
                        break;
                        case AntennaType.type4: {
                            // network stuff
                        }
                        break;
                        default: throw new NullReferenceException("Nothing");
                    }
                }).Select(_ => true),
                Observable.Timer(TimeSpan.FromSeconds(60)).Select(_ => false)
            ),
            (it, result) => new { it, result }
        )
        .Subscribe(
            x => {
                Console.WriteLine("checked item number " + x.it.Id);
            },
            ex => {
                Console.WriteLine("error string");
            }, () => {
                onDone(routers, exceptions);
            }
        );
}
I'm using the Observable.Amb operator to run a 60-second timer in parallel, which works as a timeout.
However, when I run this method, the program exits immediately without ever getting to the onDone callback.
I read online that I can use ObserveOnDispatcher to observe on the UI thread while running the blocking code on a pool of threads, but I'm using this on .NET Core on Linux, in a terminal application server side.
How would one go about observing on the "main thread" in a console application?
Thanks in advance for the responses.
As you are replacing Parallel.ForEach, it sounds like you are happy to have a blocking operation. The way you have set up Rx, it is not a blocking operation, hence the method ends immediately.
It's very simple to fix. Just change your .Subscribe to this:
.Do(
    x =>
    {
        Console.WriteLine("checked item number " + x.it.Id);
    },
    ex =>
    {
        Console.WriteLine("error string");
    },
    () =>
    {
        onDone(routers, exceptions);
    }
)
.Wait();
I'd also get rid of your .ObserveOn(Scheduler.CurrentThread) and .SubscribeOn(Scheduler.Default) until you are certain that you need those.

Vert.x and Redis: I cannot make them work together

I have a simple Vert.x script in Groovy that should send a request to Redis to get a value back:
def eb = vertx.eventBus
def config = [:]
def address = 'vertx.mod-redis-io'
config.address = address
config.host = 'localhost'
config.port = 6379

container.deployModule("io.vertx~mod-redis~1.1.4", config)

eb.send(address, [command: 'get', args: ['mykey']]) { reply ->
    if (reply.body.status.equals('ok')) {
        println 'ok'
        // do something with reply.body.value
    } else {
        println("Error ${reply.body.message}")
    }
}
The value for 'mykey' is properly stored on my Redis (localhost:6379):
127.0.0.1:6379> get mykey
"Hello"
The script starts correctly, but no value is returned in the reply.
Am I missing something?
The issue is that you call deployModule and eb.send sequentially, even though the deployment is asynchronous.
So, when you call deployModule the module deployment gets triggered, but it is not guaranteed to have completed before eb.send is called. You are therefore sending the right command, but it does not get processed because the module is not there yet.
Try the following, which moves your test command into the async result handler of deployModule:
container.deployModule("io.vertx~mod-redis~1.1.4", config) { asyncResult ->
    if (asyncResult.succeeded) {
        eb.send(address, [command: 'get', args: ['mykey']]) { reply ->
            if (reply.body.status.equals('ok')) {
                println 'ok'
                // do something with reply.body.value
            } else {
                println("Error ${reply.body.message}")
            }
        }
    } else {
        println 'Deployment broken!'
    }
}
The example from https://github.com/vert-x/mod-redis is maybe not the best, because it is just a snippet to point you in the right direction.
This works because it only sends the request to the bus once the module is deployed, and by that, someone is listening to it. I tested it locally on a Vagrant installation with Redis.
Overall, development in Vert.x is almost always asynchronous, as that is its key concept. It takes some time to get acquainted with it, but it has its benefits :)
Hope this helps.
Best

Parallel.Invoke - Exception handling

My code runs 4 functions (using Parallel.Invoke) to fill in information in a class such as:
class Person
{
    int Age;
    string name;
    long ID;
    bool isVegetarian;

    public static Person GetPerson(int LocalID)
    {
        Person person;
        Parallel.Invoke(() => { GetAgeFromWebServiceX(person); },
                        () => { GetNameFromWebServiceY(person); },
                        () => { GetIDFromWebServiceZ(person); },
                        () =>
                        {
                            // connect to my database and get information if vegetarian (using LocalID)
                            ....
                            if (!person.isVegetarian)
                                return null
                            ....
                        });
    }
}
My question is: I cannot return null if he's not a vegetarian, but I want to be able to stop all threads, stop processing, and just return null. How can this be achieved?
To exit the Parallel.Invoke as early as possible you'd have to do three things:
Schedule the action that detects whether you want to exit early as the first action. It's then scheduled sooner (maybe as first, but that's not guaranteed) so you'll know sooner whether you want to exit.
Throw an exception when you detect the error and catch an AggregateException as Jon's answer indicates.
Use cancellation tokens. However, this only makes sense if you have an opportunity to check their IsCancellationRequested property.
Your code would then look as follows:
var cts = new CancellationTokenSource();
try
{
    Parallel.Invoke(
        new ParallelOptions { CancellationToken = cts.Token },
        () =>
        {
            if (!person.IsVegetarian)
            {
                cts.Cancel();
                throw new PersonIsNotVegetarianException();
            }
        },
        () => { GetAgeFromWebServiceX(person, cts.Token); },
        () => { GetNameFromWebServiceY(person, cts.Token); },
        () => { GetIDFromWebServiceZ(person, cts.Token); }
    );
}
catch (AggregateException e)
{
    var cause = e.InnerExceptions[0];
    // Check if cause is a PersonIsNotVegetarianException.
}
However, as I said, cancellation tokens only make sense if you can check them. So there should be an opportunity inside GetAgeFromWebServiceX to check the cancellation token and exit early; otherwise, passing tokens to these methods doesn't make sense.
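For illustration, here is a sketch of what checking the token inside such a method might look like. The method body and CallWebServiceX are hypothetical; ThrowIfCancellationRequested is the real API and throws an OperationCanceledException, which Parallel.Invoke surfaces inside the AggregateException:
static void GetAgeFromWebServiceX(Person person, CancellationToken token)
{
    token.ThrowIfCancellationRequested(); // bail out before starting the expensive call
    var age = CallWebServiceX(person);    // hypothetical long-running call
    token.ThrowIfCancellationRequested(); // check again before committing the result
    person.Age = age;
}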
Well, you can throw an exception from your action, catch AggregateException in GetPerson (i.e. put a try/catch block around Parallel.Invoke), check for it being the right kind of exception, and return null.
That fulfils everything except stopping all the threads. I think it's unlikely that you'll easily be able to stop already running tasks unless you start getting into cancellation tokens. You could stop further tasks from executing by keeping a boolean value to indicate whether any of the tasks so far has failed, and making each task check that before starting... it's somewhat ugly, but it will work (see the sketch below).
I suspect that using "full" tasks instead of Parallel.Invoke would make all of this more elegant though.
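A rough sketch of that boolean-flag idea (names are illustrative, not from the original code; Interlocked makes the flag write visible across the worker threads, and already-running work is still not interrupted):
int failed = 0; // shared flag, captured by the lambdas below
Parallel.Invoke(
    () =>
    {
        if (!person.IsVegetarian)
        {
            Interlocked.Exchange(ref failed, 1); // tell the other tasks to skip their work
            throw new InvalidOperationException("Not a vegetarian"); // surfaces via AggregateException
        }
    },
    () =>
    {
        if (Interlocked.CompareExchange(ref failed, 0, 0) == 0) // nothing has failed yet?
        {
            GetAgeFromWebServiceX(person);
        }
    });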
Surely you need to load your Person from the database first anyway? As it is, your code calls the web services with a null.
If your logic really is sequential, do it sequentially and only do in parallel what makes sense.
