I currently have an application running that passes data between Asana and Zendesk.
I have webhooks created for all my Projects in Asana, and all project events are sent to my webhook endpoint, which verifies the request, tries to identify the event, and updates Zendesk with relevant data depending on the event type (some events aren't required).
However, I have recently been receiving the following request from the webhooks:
"events": [
{
"action": "sync_error",
"message": "There was an error with the event queue, which may have resulted in missed events. If you are keeping resources in sync, you may need to manually re-fetch them.",
"created_at": "2017-05-23T16:29:13.994Z"
}
]
Because I don't poll the API for event updates and instead react when events arrive, I haven't considered using a sync key; the docs suggest this is only required when polling for events. Do I need to use one when using webhooks as well?
What am I missing?
Thanks in advance for any suggestions.
You're correct: you don't need to track a sync key for webhooks. We proactively try to reach out with them when something changes in Asana, and we track the events that haven't yet been delivered across webhooks (essentially akin to us updating the sync key server-side whenever webhooks have been successfully delivered).
Basically, what's happening here is that for some reason our event queues detect a problem with their internal state. This means that events didn't get recorded, or webhooks weren't delivered for a long time. Our events and webhooks track changes in a best-effort sense, and there are some things that can happen with our production machines that can cause these sorts of issues, like a machine dying at an inopportune time.
Unfortunately, then, the only way to get back to a good state is to do a full scan of the projects you're tracking, which is what is meant by "you may need to manually re-fetch them". Basically, a robust implementation of syncing Asana to external resources looks like the following (a rough sketch in code follows the list):
A diff function that, given a particular task and external resource, detects what state is out of date or different between the two and chooses a merge/patch resolution (i.e. "make Zendesk look like Asana").
Receiving a webhook runs that diff/patch process for that one task in a "live" fashion.
Periodically (on script startup, say, or when webhooks/events are missed and you get an error message like this) update all resources that might have been missed by scanning the entire project and doing the diff/patch for every task. This is more expensive, but should be significantly rarer.
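As a rough sketch of that shape (in Kotlin; the AsanaClient/ZendeskClient interfaces and fields below are invented purely for illustration, not real SDK calls):

// Hypothetical sketch only: clients and field names are illustrative.
data class TaskState(val taskId: String, val name: String, val completed: Boolean)

interface AsanaClient {
    fun getTask(taskId: String): TaskState
    fun listTaskIds(projectId: String): List<String>
}

interface ZendeskClient {
    fun findTicketState(taskId: String): TaskState?
    fun upsertTicket(state: TaskState)
}

class Syncer(private val asana: AsanaClient, private val zendesk: ZendeskClient) {

    // Diff/patch a single task: "make Zendesk look like Asana".
    fun syncTask(taskId: String) {
        val source = asana.getTask(taskId)
        val target = zendesk.findTicketState(taskId)
        if (target != source) {           // diff: anything missing or different?
            zendesk.upsertTicket(source)  // patch: overwrite Zendesk with the Asana state
        }
    }

    // Webhook handler: "live" diff/patch for the one task named in the event.
    fun onWebhookEvent(taskId: String) = syncTask(taskId)

    // Full rescan: run on startup or after a sync_error; more expensive, much rarer.
    fun resyncProject(projectId: String) {
        asana.listTaskIds(projectId).forEach { syncTask(it) }
    }
}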
Related
When we perform maintenance on our server, or redeploy our external-facing REST services for DocuSign, is there a way we can lock all envelopes that are currently sitting with signers? We use Connect to process signer/document updates from DocuSign, and we don't want these requests coming through while we're under maintenance.
I've seen in the documentation that we can lock individual envelopes. Is the best route to run through each envelope that's still pending signature and temporarily lock it? This method seems very resource-intensive considering the number of consecutive API calls needed.
Connect supports exponential retries when events fail to be sent to your endpoint. How long does your system downtime last, exactly?
When your system is back up, new events should arrive at your endpoint and you can react to them accordingly. Please let us know if you see otherwise.
https://developers.docusign.com/platform/webhooks/connect/architecture
We have a microservices architecture supporting a large application. All the services communicate using Azure Service Bus as a medium. Currently, we send notifications (immediate/scheduled) from different services on a per-need basis. This creates the need for a separate notification service that could take on the load and responsibility of formatting and sending notifications (email, text, etc.).
What I have thought:
The notification service will have its own database holding data related to notifications (setup, templates, schedules, etc.) and also some master data (copied from other sources). I don't want to copy all the transactional data to this DB (for obvious reasons), but we might need transactional and historic data to form a notification. I am planning to subscribe to service bus events (published by other services), and the onus of sending the data needed for formatting the notification will be on the service raising the service bus event. The notification service will rely on that data to fill up the template (stored in its own DB) and then send the notification.
The job of the notification service will be to listen to service bus events, fill up the template from the data in the event, and then send the notification (rough sketch below).
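Roughly what I have in mind (a minimal sketch; the types, names, and template format are illustrative only, not our actual contracts):

// Illustrative only: event/template shapes are made up for this sketch.
data class NotificationEvent(
    val templateKey: String,             // which stored template to use
    val recipient: String,
    val data: Map<String, String>        // payload supplied by the publishing service
)

class NotificationService(
    private val templates: Map<String, String>,           // loaded from the notification DB
    private val send: (to: String, body: String) -> Unit  // email/SMS gateway
) {
    // Service bus subscription callback: fill the template from the event data and send.
    fun onEvent(event: NotificationEvent) {
        val template = templates[event.templateKey] ?: return
        val body = event.data.entries.fold(template) { acc, (key, value) ->
            acc.replace("{$key}", value)
        }
        send(event.recipient, body)
    }
}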
Questions:
What if the data received by the notification service from a service bus event does not have all the data needed by the notification template? How do I query/get the missing data from the other service?
Suppose a service publishes 100 events for a single operation and we need to send a single notification for that whole operation. How does the notification service manage that, since it will get 100 different messages separately?
Since the notification trigger depends on data sent from other sources (service bus events), what happens when we have a notification that is scheduled (let's say 6 am every day)? How do we get the data needed for the notification (since the data is not in the notification DB)?
I am looking for advice from experience and some material to refer to. Thanks in advance.
You might have to implement notifications as a service, which means imagining you are exposing your application as a plugin in Azure itself. A few points here:
Your notification service should only accept a notification when the information is valid.
Have a caching system on both the front end (state management) and the back end/microservices (Redis or any other caching system).
Capture an EventId on each operation; it's good practice to track the complex operations of your application this way, and it lets you solve duplicate notifications. Take care, if possible, to avoid sending such duplicates to the user, or try to send one notification covering a group of notifications in one message (see the sketch after these points).
Put circuit-breaker logic here to handle your invalid notifications: put this type of notification in a retry queue (of 30 minutes, maybe?) and republish the event again.
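A minimal sketch of the EventId de-duplication idea (the in-memory set is purely for illustration; a real service would back this with Redis or its own DB):

// Sketch only: replace the in-memory set with Redis/DB in a real deployment.
class DeduplicatingNotifier(private val sendNotification: (String) -> Unit) {
    private val processedEventIds = java.util.concurrent.ConcurrentHashMap.newKeySet<String>()

    fun handle(eventId: String, message: String) {
        // add() returns false when the EventId has been seen before -> skip the duplicate.
        if (processedEventIds.add(eventId)) {
            sendNotification(message)
        }
    }
}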
References
https://www.rabbitmq.com/dlx.html
https://microservices.io/patterns/reliability/circuit-breaker.html
https://redis.io/topics/introduction
Happy coding :)
In microservices and domain-driven design it's sometimes hard to work out when to start splitting services. Having each service be responsible for constructing and sending its own notifications is perfectly valid.
It is when additional decisions need to be made that are not related to the 'origin' service that things become more tricky.
E.g. 1
You have an order microservice that sends an email to the sales team and the user when an order is placed.
Then the payment service updates sales and the user with an SMS message when the payment is processed.
You could then decide to let the user manage their notification preferences. They can now decide if they want SMS / email / push messages, and which messages they would like to receive.
We now have a problem. These notification preferences would need to be understood by every service sending messages. Any new team or service that starts sending messages also needs to remember to implement these preferences.
You may also want the user to view all historic messages they have been sent. Again you get into a problem where there is no single source for that information.
E.g. 2
We now have a notification service; it is listening for order created, order updated, order completed and payment processed events.
It is listening for:
Order Created
Order Updated
only to make sure it has the information it needs to construct the messages. It is common, and in a lot of cases a requirement, to have system-wide redundancy of data when using microservices. You need to imagine that each service is an island, so while it feels wasteful to store that information again, if it is required for that service to perform its work then it is valid.
Note: don't store the data wholesale, store only what is relevant for that service.
We can then use the:
Order Complete
Payment Processed
events as triggers to actually start constructing and sending the messages.
Problems:
Understanding if the service has all the required data
This is up to the service to determine. If the Order Complete event comes through but the service has not yet received an Order Created event, then it should store the Order Complete event and try to process it again in the future when all the information is available.
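For example (a hypothetical sketch; the types are invented just to show the shape of "park the trigger event until the data it needs has arrived"):

// Hypothetical sketch: hold trigger events until the data they need has been replicated locally.
data class OrderCompleted(val orderId: String)

class OrderNotificationHandler(
    private val knownOrders: MutableSet<String>,   // filled by the Order Created/Updated handlers
    private val sendMessage: (String) -> Unit
) {
    private val parked = mutableListOf<OrderCompleted>()  // would be a durable store in practice

    fun onOrderCompleted(event: OrderCompleted) {
        if (event.orderId in knownOrders) {
            sendMessage("Order ${event.orderId} is complete")
        } else {
            parked.add(event)  // not enough data yet; hold on to the event and retry later
        }
    }

    // Called periodically to retry parked events once the missing data has arrived.
    fun retryParked() {
        val ready = parked.filter { it.orderId in knownOrders }
        parked.removeAll(ready)
        ready.forEach { sendMessage("Order ${it.orderId} is complete") }
    }
}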
100 events resulting in a notification
Data aggregation is also an important microservice concept, and there are many ways to ensure completeness that will come down to your specific use case.
I'm thinking about how I can handle sending events when the message broker suddenly goes down. Please take a look at this code:
using (var uow = uowProvider.Create())
{
...
...
var policy = offer.Buy(customer);
uow.Policies.Add(policy);
// DB changes are saved here! but what would happen if...
await uow.CommitChanges();
// ...eventPublisher throws an exception?
await eventPublisher.PublishMessage(PolicyCreated(policy));
return true;
}
IMHO, if eventPublisher throws an exception, the PolicyCreated event won't be published. I don't know how to deal with this situation. The event must be published in the system. I suppose the only good solution would be creating some kind of retry mechanism, but I'm not sure...
I would like to elaborate a bit on the answers provided by both @Imran Arshad and @VoiceOfUnreason, which are, of course, correct.
There are basically 3 patterns when it comes to publishing messages:
exactly once delivery (requires distributed transactions)
at most once delivery (no distributed transaction but may miss messages - like the actor model)
at least once delivery (no distributed transaction but may have duplicate messages)
The following is all in terms of your example.
For exactly-once delivery, both the database and the queue would need to provide the ability to enlist in distributed transactions. Some queues do not provide this functionality out of the box (like RabbitMQ), and even though it may be possible to roll your own, it may not be the best option. Distributed transactions are typically quite slow.
For at-most-once delivery, we have to accept that we may miss messages, and I'm guessing that in most use cases this is quite troublesome. You would get around this by tracking progress, picking up the missed messages, and resending them if required.
For at-least-once delivery, we would need to ensure that the messages are idempotent. When we get a duplicate message (usually quite an edge case) it should be ignored, or its outcome should be the same as that of the initial message processed.
Now, there are a couple of ways around your issue. You could start a database transaction and make your database changes. Before you commit, you perform the message sending. Should that fail, your transaction would be rolled back. That works fine for sending a single message, but in your case some subscribers may have received a message. This complicates matters, as either all your subscribers need to receive the message or none of them should receive it.
You could have your subscriber check whether the state is indeed true and whether it should continue processing. This places a burden on the subscriber and introduces some coupling. It could either postpone the action should the state not allow processing, or ignore it.
Another option is that instead of publishing the event you send yourself a command that indicates completion of the step. The command handler would perform the publishing and retry until all subscriber queues receive the message. This would require the relevant subscribers to ignore those messages that they had already processed (idempotence).
The outbox is a store-and-forward approach and will eventually send the message to all subscribers. You could perhaps have your outbox included in the database transaction. In my Shuttle.Esb service bus, one of the folks using it came across a weird side effect that I had not planned: he used a SQL-based queue as an outbox, and the queue connection was to the same database. It was therefore included in the database transaction and would roll back with all the other changes if not committed. Apologies for promoting my own product, but I'm sure other service bus offerings may have the same functionality.
There are therefore quite a few things to consider and various techniques to mitigate the risk of a queue outage. I would, however, move the queue interaction to before the database commit.
For a reliable system you need to save events locally. If your broker is down, you have to retry and publish the event.
There are many ways to achieve this, but the most common is the outbox pattern. Just like your mailbox, your event/message stays locally and you keep retrying until it's sent, then you mark the message as published in your local DB.
You can read more about it here: Publish Events
You'll want to review Udi Dahan's discussion of Reliable Messaging without Distributed Transactions.
But very roughly, the PolicyCreated event becomes part of the unit of work; either because it is saved in the Policy representation itself, or because it is saved in an EventRepository that participates in the same transaction as the Policies repository.
Once you've captured the information in your database, retrying the publish is relatively straightforward: read the events from the database, publish, and optionally mark the events in the database as successfully published so that they can be cleaned up.
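Loosely, that looks like the following (a sketch in Kotlin with invented repository interfaces, not any particular library's API):

// Sketch: persist the event in the same transaction as the state change, publish later, mark as sent.
data class PolicyCreated(val policyId: String)

interface OutboxRepository {
    fun save(event: PolicyCreated)            // called inside the same transaction as the policy insert
    fun unpublished(): List<PolicyCreated>
    fun markPublished(event: PolicyCreated)
}

class OutboxPublisher(
    private val outbox: OutboxRepository,
    private val publish: (PolicyCreated) -> Unit  // the actual broker call; may throw if the broker is down
) {
    // Run periodically (or right after commit): anything not yet published gets retried.
    fun flush() {
        for (event in outbox.unpublished()) {
            try {
                publish(event)
                outbox.markPublished(event)
            } catch (ex: Exception) {
                // Broker still down: leave the event in the outbox and try again on the next run.
                return
            }
        }
    }
}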
I'm not sure how to properly ask this question but here it is:
I'm starting the saga on a specific event, then I'm dispatching a command which is supposed to create an aggregate and then send another event which will be handled by the saga to proceed with the logic.
However, each time I restart the application I get an error saying that the event for the aggregate at sequence x was already inserted, which I suppose is because the saga has not yet finished, and when I restart it starts again by trying to create a new aggregate.
My question is: is there any way in Axon to track the progress of the saga? Should I set some flags when I receive an event and wrap the aggregate creation in ifs?
Maybe there is another way which I'm not seeing; I just don't want the saga to be replayed from the start.
Thanks
The solution you've posted definitely would work.
Let me explain the scenario you've hit here though, for other people's reference too.
In an Axon Framework 4.x application, any Event Handling Component, and thus also your Saga instances, is backed by a TrackingEventProcessor.
The Tracking Event Processor keeps track of the point in the Event Stream up to which it has handled events. It stores this information in a TrackingToken, for which the TokenStore is the delegated piece of work.
If you haven't specified a TokenStore however, you will have in-memory TrackingTokens for every Tracking Event Processor.
This means that on a restart, your Tracking Event Processor thinks "ow, I haven't done any event handling yet, let me start from the beginning of time".
Due to this, your Saga instances will start anew every time, trying to recreate the given Aggregate instance.
Hence, specifying the TokenStore as you did resolves the problem you had.
Note that in a Spring Boot environment, with for example the Spring Data starter present, Axon will automatically create the JpaTokenStore for you.
I've solved my issue by simply adding a token store configuration; it does exactly what I require - tracking processed events.
Basic spring config:
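// Stores tracking tokens in Mongo so the saga's tracking event processor resumes where it left off after a restart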
@Bean
fun tokenStore(client: MongoClient): TokenStore = MongoTokenStore.builder()
.mongoTemplate(DefaultMongoTemplate.builder().mongoDatabase(client).build())
.serializer(JacksonSerializer.builder().build())
.build()
I have a couple of questions to which I am not finding any exact answer. I've used CQRS before, but probably I was not using it properly.
Say that there are 5 services in the domain: Gateway, Sales, Payments, Credit and Warehouse, and that during the process of a user registering with the application, the front end submits a few commands; once the user is registered, the same front end will then send a few other commands to create an order and apply for credit.
Now, what I usually do is create a gateway which receives all public commands, which are then validated and, if valid, transformed into domain commands. I only use events to store data, and if one service needs some action to be performed in another service, a domain command is sent directly from one service to the other. But I've seen in other systems that event handlers are used for more than storing data. So my questions are: what are the limits to what event handlers can do? And is it correct to send commands between services when a specific service requires that some other service performs an action, or is it more correct to have the initial service raise an event and let the handler in the other service perform that action in the event handler? I am asking this because I've seen events like INeedCreditApproved when I was expecting to see a domain command like ApproveCredit.
Any input is welcome.
You're missing an important concept here - Sagas (Process Managers). You have a long-running workflow and it's better expressed centrally.
Sagas listen to events and emit commands. So an OrderAccepted event will start a Saga, which then emits ApproveCredit and ReserveStock commands, to be sent to the Credit and Warehouse services respectively. The Saga can then listen to command success/failure events and compensate appropriately, like emitting a SendEmail command or whatever else.
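A rough sketch of that flow (plain Kotlin with invented event/command types; not tied to any specific framework's saga API):

// Illustrative saga/process manager: listens to events, emits commands, tracks its own state.
data class OrderAccepted(val orderId: String)
data class CreditApproved(val orderId: String)
data class StockReserved(val orderId: String)

data class ApproveCredit(val orderId: String)
data class ReserveStock(val orderId: String)
data class SendEmail(val orderId: String, val text: String)

class OrderSaga(private val sendCommand: (Any) -> Unit) {
    private var creditApproved = false
    private var stockReserved = false

    // Event from the Sales service starts the workflow.
    fun on(event: OrderAccepted) {
        sendCommand(ApproveCredit(event.orderId))  // handled by the Credit service
        sendCommand(ReserveStock(event.orderId))   // handled by the Warehouse service
    }

    // Success events from the other services advance the saga's own state.
    fun on(event: CreditApproved) { creditApproved = true; maybeComplete(event.orderId) }
    fun on(event: StockReserved)  { stockReserved = true; maybeComplete(event.orderId) }

    private fun maybeComplete(orderId: String) {
        if (creditApproved && stockReserved) {
            sendCommand(SendEmail(orderId, "Your order $orderId is confirmed"))
        }
    }
}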
One year ago I was sending commands the way you describe - "send commands between services by event handlers when a specific service requires that some other service performs an action" - but then I made a bad decision and switched to using events the way you said, "to have the initial service raise an event and let the handler in the other service perform that action in the event handler", and it worked at first. It was the worst decision I could have made. Now I am switching back to sending commands from event handlers.
You can see that other people like Rinat do similar things with event ports/receptors and it is working for them, I think:
http://abdullin.com/journal/2012/7/22/bounded-context-is-a-team-working-together.html
http://abdullin.com/journal/2012/3/31/anatomy-of-distributed-system-a-la-lokad.html
Good luck