Parallel forking using the TSILO module in Kamailio - voip

I tried to use the Kamailio TSILO module to implement push-notification VoIP calls with iOS.
My problem is that I need to do parallel forking of the call and send INVITEs to every registered device for the same user. When I use parallel forking (without TSILO) and one of the devices answers the call, parallel forking automatically cancels the requests that went to the other devices.
That is not the case when I use TSILO. Is TSILO able to cancel branches when doing parallel forking?
Any help will be appreciated.

Outgoing branch management is done by the tm module. The tsilo module only keeps track of active transactions per user and can add new outgoing branches as the user registers new contacts. When a branch is answered with a 200 OK, the transaction is completed, the remaining active branches are canceled (if they got a 1xx response) and no new branches should be created.
In other words, yes: Kamailio should cancel all other active branches when one is answered with a 200 OK, no matter how the branches were created, with or without tsilo.
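For reference, the usual tsilo pattern looks roughly like the condensed sketch below. This is a hypothetical routing fragment, not a drop-in config: module loading and the push-notification trigger are omitted, and the AOR pseudo-variable passed to ts_append() ($tu here) may differ in your setup.

    request_route {
        if (is_method("INVITE")) {
            if (!lookup("location")) {
                # No contacts yet (device sleeping): keep the transaction
                # alive so branches can be added when the device registers.
                t_newtran();
                ts_store();
                # ...trigger the push notification towards the device here...
                send_reply("180", "Ringing");
                exit;
            }
            # tm forks in parallel to all current contacts; the first
            # 200 OK completes the transaction and cancels the rest.
            t_relay();
            exit;
        }
        if (is_method("REGISTER")) {
            save("location");
            # Add the just-registered contact as a new outgoing branch
            # of any transaction stored for this AOR.
            ts_append("location", "$tu");
            exit;
        }
    }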

Related

Why doesn't Netflix Conductor provide a way to run tasks/sub-workflows asynchronously?

Only a few task types can run asynchronously: HTTP, EVENT, KAFKA. Why is there no way to run SIMPLE tasks asynchronously? Running sub-workflows asynchronously in particular would be a very useful feature. The only workaround (and only for sub-workflows) is to send an event, which is handled by a registered event handler, which in turn starts the workflow.
I'm late to the party here, but check out the FORK operator.
https://orkes.io/content/docs/reference-docs/fork-task
In this workflow the fork splits your workflow into 3 paths: sending an email, an SMS and an HTTP notification. Each path runs asynchronously. You can also set your JOIN (the other half of the fork) to join on all or just some of the fork "tines".
If you need to define the number of parallel flows at runtime, the Dynamic Fork is the way to go (but it is a bit more complicated to set up). A sketch of a static fork follows below.
https://orkes.io/content/docs/reference-docs/dynamic-fork-task
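For illustration, a static FORK_JOIN in a workflow definition looks roughly like the sketch below (abbreviated: task names and reference names are placeholders, and I've left out task inputs). Each inner array of forkTasks is one parallel path; listing only some reference names in joinOn lets the workflow proceed while the remaining paths keep running.

    {
      "name": "send_notifications",
      "taskReferenceName": "notify_fork",
      "type": "FORK_JOIN",
      "forkTasks": [
        [ { "name": "send_email", "taskReferenceName": "email_ref", "type": "SIMPLE" } ],
        [ { "name": "send_sms",   "taskReferenceName": "sms_ref",   "type": "SIMPLE" } ]
      ]
    },
    {
      "name": "join_notifications",
      "taskReferenceName": "notify_join",
      "type": "JOIN",
      "joinOn": [ "email_ref" ]
    }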

JMeter: What logic is best when each thread group depends on the previous one's response

We have 2 thread groups, where the second depends on the first one's response.
SIGNUP generates a PHONE NUMBER and PASSWORD in its response, which are then used by the LOGIN thread group.
I don't want to use a CSV file; I would like to capture the response from SIGNUP and use the same credentials (PHONE NUMBER and PASSWORD) to execute LOGIN.
Also, which timer would be best to use?
Any idea how to proceed?
If you have 2 Thread Groups and would like to start the 2nd one only when some information from the 1st is available, the best way to proceed is the Inter-Thread Communication Plugin.
It provides a simple FIFO queue which is accessible by different threads (even if they reside in different thread groups), so you can simply put the PHONE NUMBER and PASSWORD into the queue and configure the 2nd Thread Group to proceed only when the credentials are available (see the sketch below).
There is a SynchronizationPluginsExample.jmx test plan which demonstrates sharing cookies between Thread Groups; you can use it as a basis for your implementation.
The Inter-Thread Communication plugin can be installed using the JMeter Plugins Manager.
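A minimal sketch of the queue handoff, assuming the SIGNUP response values have already been extracted into JMeter variables named phone and password (e.g. with a Regex or JSON Extractor), and using an arbitrary queue name CREDS_QUEUE:

    In the SIGNUP Thread Group (e.g. in a JSR223 element or Dummy Sampler),
    push the extracted credentials onto the queue:

        ${__fifoPut(CREDS_QUEUE,${phone}|${password})}

    In the LOGIN Thread Group, block until the credentials arrive and store
    them in the "creds" variable (split on "|" before the login request):

        ${__fifoPop(CREDS_QUEUE,creds)}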

Asana API Sync Error

I currently have an application running that passes data between Asana and Zendesk.
I have webhooks created for all my Projects in Asana, and all project events are sent to my webhook endpoint, which verifies the request, tries to identify the event, and updates Zendesk with the relevant data depending on the event type (some events aren't required).
However, I have recently started receiving the following payload from the webhooks:
"events": [
{
"action": "sync_error",
"message": "There was an error with the event queue, which may have resulted in missed events. If you are keeping resources in sync, you may need to manually re-fetch them.",
"created_at": "2017-05-23T16:29:13.994Z"
}
]
Because I don't poll the API for event updates (I react when the events arrive), I haven't considered using a sync key; the docs suggest it is only required when polling for events. Do I need to use one with webhooks as well?
What am I missing?
Thanks in advance for any suggestions.
You're correct, you don't need to track a sync key for webhooks: we proactively try to reach out with them when something changes in Asana, and we track the events that haven't yet been delivered across webhooks (essentially akin to us updating the sync key server-side whenever webhooks have been successfully delivered).
Basically, what's happening here is that, for some reason, our event queues detect a problem with their internal state. This means that events didn't get recorded, or webhooks weren't delivered for a long time. Our events and webhooks track changes in a best-effort sense, and some things that can happen to our production machines, like a machine dying at an inopportune time, can cause these sorts of issues.
Unfortunately, the only way to get back to a good state is then to do a full scan of the projects you're tracking, which is what is meant by "you may need to manually re-fetch them". Basically, a robust implementation of syncing Asana to external resources looks like this (sketched below):
A diff function that, given a particular task and external resource, detects what is out of date or different between the two and chooses a merge/patch resolution (i.e. "make Zendesk look like Asana").
Receiving a webhook runs that diff/patch process for that one task in a "live" fashion.
Periodically (on script startup, say, or when webhooks/events are missed and you get an error message like this), update all resources that might have been missed by scanning the entire project and running the diff/patch for every task. This is more expensive, but should be needed far more rarely.
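A minimal Python sketch of that structure. The diff_and_patch() helper is hypothetical (it would hold your Asana-to-Zendesk mapping), and the exact shape of the event's "resource" field varies by API version; the task-listing endpoint is the standard GET /projects/{id}/tasks.

    import requests

    API = "https://app.asana.com/api/1.0"
    HEADERS = {"Authorization": "Bearer <personal-access-token>"}

    def diff_and_patch(task):
        """Hypothetical merge step: make the Zendesk ticket for this
        Asana task (or task reference) match the task's current state."""
        ...

    def full_rescan(project_id):
        # Walk every task in the project and reconcile each one.
        resp = requests.get(f"{API}/projects/{project_id}/tasks", headers=HEADERS)
        for task in resp.json()["data"]:
            diff_and_patch(task)

    def handle_webhook(payload, project_id):
        for event in payload.get("events", []):
            if event.get("action") == "sync_error":
                # Events may have been dropped: fall back to a full scan.
                full_rescan(project_id)
            else:
                # Live path: reconcile just the changed task; the event's
                # "resource" field identifies it.
                diff_and_patch(event["resource"])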

CQRS and DDD boundaries

I have a couple of questions to which I am not finding any exact answers. I've used CQRS before, but probably not properly.
Say there are 5 services in the domain: Gateway, Sales, Payments, Credit and Warehouse. During the process of a user registering with the application, the front-end submits a few commands; once the user is registered, the same front-end then sends a few other commands to create an order and apply for credit.
Now, what I usually do is create a gateway which receives all public commands; these are validated and, if valid, transformed into domain commands. I only use events to store data, and if one service needs some action performed in another service, a domain command is sent directly from one service to the other. But I've seen event handlers used for more than storing data in other systems. So my questions are: what are the limits to what event handlers can do? And when one service requires another to perform an action, is it correct to send a command between the services, or is it more correct to have the first service raise an event and let the handler in the other service perform that action? I am asking because I've seen events like INeedCreditApproved where I was hoping to see a domain command like ApproveCredit.
Any input is welcome.
You're missing an important concept here: Sagas (Process Managers). You have a long-running workflow, and it's better expressed centrally.
Sagas listen to events and emit commands. So an OrderAccepted event will start a saga, which then emits ApproveCredit and ReserveStock commands to be sent to the Credit and Warehouse services respectively. The saga can then listen to command success/failure events and compensate appropriately, for example by emitting a SendEmail command or whatever else.
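A minimal Python sketch of that shape. The event and command names come from this answer; the command_bus and the dict-shaped messages are illustrative assumptions, not any particular framework's API.

    class OrderSaga:
        """Process manager: subscribes to events, reacts with commands."""

        def __init__(self, command_bus):
            self.command_bus = command_bus  # assumed to expose send(command)

        def handle(self, event):
            if event["type"] == "OrderAccepted":
                # Start of the workflow: fan out to Credit and Warehouse.
                self.command_bus.send({"type": "ApproveCredit", "order_id": event["order_id"]})
                self.command_bus.send({"type": "ReserveStock", "order_id": event["order_id"]})
            elif event["type"] == "CreditRejected":
                # Compensate: undo what has already been done.
                self.command_bus.send({"type": "ReleaseStock", "order_id": event["order_id"]})
                self.command_bus.send({"type": "CancelOrder", "order_id": event["order_id"]})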
One year ago I was doing exactly that: sending commands between services from event handlers when one service required another to perform an action. But then I made the stupid decision to switch to the other approach you describe, having the initial service raise an event and letting the handler in the other service perform the action, and it worked at first. It was the most stupid decision I could have made. Now I am switching back to sending commands from event handlers.
You can see that other people, like Rinat, do similar things with event ports/receptors, and it seems to be working for them:
http://abdullin.com/journal/2012/7/22/bounded-context-is-a-team-working-together.html
http://abdullin.com/journal/2012/3/31/anatomy-of-distributed-system-a-la-lokad.html
Good luck

Implementing multi-threading in workflows

I'm aware that a single workflow instance runs on a single thread at a time. I have a workflow with two Receive activities inside a Pick activity. Message correlation is implemented to make sure that requests to both activities are routed to the same instance.
In the first Receive branch I have a Parallel activity with a Delay activity in one branch. The Parallel activity completes either when the delay is over or when a flag is set to true.
While the Parallel activity is waiting for its condition to be met, how can I receive calls through the second Receive activity? The flag will be set to true only through that branch. I'm waiting for your suggestions or ideas.
Check out my blog post The Workflow Parallel Activity and Task Parallelism; it will help you understand how WF works.
Not quite sure what you are trying to achieve here.
If you have a Pick with 2 branches and both branches contain a Receive, the workflow will continue after either of the 2 messages the Receive activities are waiting for arrives. The other branch will be canceled and will not receive anything. The fact that one Receive is inside a Parallel makes no difference here. So unless this is in a loop, you will not receive more than one WCF message in your workflow.
