Testing Stripe transfer API webhook with real data - stripe-payments

I am using Stripe with the separate charges and transfers flow. The way this works is that my platform receives the full payment minus the Stripe fees, and then I create a Transfer to the seller's connected account, which is then paid out to their bank. I set up a webhook to run on the "transfer.paid" event so I can update some book-keeping records in my platform's database when the money is transferred to the connected account.

I want to test this endpoint so that I can see whether my event handler behaves as expected. However, it seems that the webhook testing available through the Stripe Dashboard sends only dummy data, or only populates a few fields of the request body with data from the last transaction made in the account. It seems the only way to receive real data is to let the event trigger by itself. In my case, though, the transfers are taking up to seven days to complete, which means I have to send one and wait a whole week to see the result, which is really slowing down my development time. This seems really inefficient, unless there is something fundamental that I am not understanding about webhooks.

Does anyone have any idea how I can test my webhook endpoints with real data without having to wait so long? Any info will be greatly appreciated.
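For reference, a minimal sketch of the kind of endpoint being described, assuming a Flask app and the official stripe Python library; the route, the endpoint secret location, and the update_bookkeeping helper are placeholders, only the "transfer.paid" event name comes from the question:

```python
import os

import stripe
from flask import Flask, abort, request

app = Flask(__name__)

# Assumption: the webhook signing secret is supplied via an environment variable.
ENDPOINT_SECRET = os.environ["STRIPE_WEBHOOK_SECRET"]


def update_bookkeeping(transfer: dict) -> None:
    # Placeholder for the platform's own book-keeping update.
    print("Transfer paid:", transfer["id"], transfer["amount"])


@app.route("/stripe/webhook", methods=["POST"])
def stripe_webhook():
    payload = request.get_data()
    sig_header = request.headers.get("Stripe-Signature", "")
    try:
        # Verifies the signature and parses the event payload.
        event = stripe.Webhook.construct_event(payload, sig_header, ENDPOINT_SECRET)
    except (ValueError, stripe.error.SignatureVerificationError):
        abort(400)

    if event["type"] == "transfer.paid":
        update_bookkeeping(event["data"]["object"])

    return "", 200
```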

Unfortunately, the only way to test events with 'real' payloads for some things, like Subscription-based events and Payouts, is to wait for the event to occur in test mode.
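Once such an event has occurred at least once in test mode, though, its real payload can be fetched back from the Events API and replayed through your own handler logic without waiting a week each time. A hedged sketch, assuming the stripe Python library and the placeholder update_bookkeeping helper from the handler sketch above:

```python
import stripe

stripe.api_key = "sk_test_..."  # test-mode secret key (placeholder)

# Fetch the most recent real transfer.paid event recorded in test mode.
events = stripe.Event.list(type="transfer.paid", limit=1)

if events.data:
    # Feed the real payload straight into the same code path the webhook
    # handler would use, skipping the HTTP/signature layer.
    update_bookkeeping(events.data[0]["data"]["object"])
else:
    print("No transfer.paid events have been recorded in test mode yet.")
```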

Related

Using Azure Service Bus to handle lots of votes and process results with Azure Functions

I am creating a poll app. Users can define one or more poll questions and configure answer options. Guests can join a session and when a poll (question) is activated, start voting. Basically what a standard poll looks like.
For processing the incoming votes, I use the Azure Service Bus. I have an endpoint that accepts votes and sends a message to a Service Bus Queue. Then, an Azure Function with a Service Bus Queue trigger will consume that message and persist the vote somewhere in a repository.
My problem is that I want another 'background process', I imagine another Azure Function, that will be triggered when votes come in, to go and calculate the cumulative vote totals to be able to draw a pie chart.
Now I want this Function to be triggered as efficiently as possible. The key requirement is that it must be accurate. What I'm looking for is a method that triggers the calculation once when a single vote comes in, but when a bunch of votes comes in, triggers the calculation only once, after the last vote has been persisted. I was thinking of introducing a new queue to send 'calculation commands' to. I use a real-time framework to update the pie chart. I would like to send pie-chart updates frequently, but not necessarily thousands of times a second when huge amounts of votes come in within a short amount of time.
I looked for a solution where I could use the de-duplication feature of a Service Bus queue, but I think this de-dup also checks against previously sent messages. And using this solution does not guarantee that the calculation takes place after the last vote has been processed, because the message may be recognized as a duplicate and therefore ignored.
Another solution may be to introduce a SessionId for the votes queue, allowing me to overcome the problem that vote messages are handled simultaneously, but this feels like an anti-pattern for using the Service Bus. In the end, you want the thing to scale like a maniac when large amounts of votes come in, so for that reason sessions are a no-go for me.
And now I'm running out of ideas. Is there a mechanism I have overlooked that I can take advantage of to (for example) only put a message on a queue when there is no similar message waiting to be processed (i.e. without a lock), or something like that?
You can trigger the Function using one of the available Event Grid events for Service Bus, if the concern is that you don't want a listener to run at all times.
The Azure Functions approach suggested by Clemens is viable. You probably don't need Event Grid, because your function can be triggered directly by the Service Bus queue.
I want to trigger the calculation only once after the last vote was persisted.
If there is a way to indicate that the voting period is over, you could have a 2nd function that runs the calculations over the data stored by the function processing the voting messages. One thing to watch out for is how the 1st function that accepts the voting messages stores the data. If the data is stored in append-only mode, you're good. If you're trying to keep only a counter, you'll have contention, and I don't recommend that approach. Append-only is the more efficient approach.
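A minimal sketch of that first, append-only function, assuming the Azure Functions Python v2 programming model; the queue name, connection setting, message format, and append_vote store are placeholders:

```python
import json

import azure.functions as func

app = func.FunctionApp()


def append_vote(vote: dict) -> None:
    # Placeholder: append the vote to your store (table, blob, event log, ...).
    # Appending rather than incrementing a shared counter avoids contention.
    ...


@app.service_bus_queue_trigger(
    arg_name="msg",
    queue_name="votes",                 # illustrative queue name
    connection="ServiceBusConnection",  # app setting holding the connection string
)
def persist_vote(msg: func.ServiceBusMessage) -> None:
    # Each vote message is persisted independently; aggregation happens later
    # in the 2nd function that runs the calculations.
    vote = json.loads(msg.get_body().decode("utf-8"))
    append_vote(vote)
```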

DocuSign - How to handle system downtime

When we perform maintenance on our server, or redeploy our external-facing REST services for DocuSign, is there a way we can lock all envelopes that are currently sitting with signers? We use Connect to process signer/document updates from DocuSign, and we don't want these requests coming through while we're under maintenance.
I've seen in the documentation that we can lock individual envelopes. Is the best route to run through each envelope that's still pending signature and temporarily lock it? This method seems very resource-intensive considering the number of consecutive API calls needed.
Connect supports exponential retries when the events fail to be sent to your endpoint. How long does your system downtime last, exactly?
When your system is back up, new events should arrive in your endpoint and you can react to them accordingly. Please let us know if you see otherwise.
https://developers.docusign.com/platform/webhooks/connect/architecture
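If the goal is simply to avoid processing Connect messages during a maintenance window, one hedged option that relies on the retry behaviour described above is to have the listener return a failure status while maintenance is in progress, so DocuSign redelivers the events once the system is back up. A sketch, assuming a Flask listener and an environment-variable maintenance flag; the route and flag name are placeholders:

```python
import os

from flask import Flask, request

app = Flask(__name__)


def in_maintenance() -> bool:
    # Placeholder: however your deployment signals maintenance mode
    # (environment variable, feature flag, config service, ...).
    return os.environ.get("MAINTENANCE_MODE") == "1"


@app.route("/docusign/connect", methods=["POST"])
def connect_listener():
    if in_maintenance():
        # A non-2xx response makes Connect treat the delivery as failed,
        # so the event will be retried after the maintenance window.
        return "Service under maintenance", 503

    payload = request.get_json(force=True, silent=True) or {}
    # ... process envelope/recipient updates as usual ...
    return "", 200
```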

Notification Service in microservices architecture [closed]

We have a microservices architecture to support a big application. All the services communicate using Azure Service Bus as a medium. Currently, we are sending notifications (immediate/scheduled) from the different services on a per-need basis. Hence the need for a separate notification service that could take on that load and the responsibility of formatting and sending notifications (email, text, etc.).
What I have thought:
The notification service will have its own database, which will hold data related to notifications (setup, templates, schedules, etc.) and also some master data (copied from other sources). I don't want to copy all the transactional data to this DB (for obvious reasons), but we might need transactional and historic data to form a notification. I am planning to subscribe to Service Bus events (published by other services), and the onus of sending the data needed for formatting the notification will be on the service raising the Service Bus event. The notification service will rely on that data to fill up the template (stored in its own DB) and then send the notification.
The job of the notification service will be to listen to Service Bus events, fill up the template from the data in the event, and then send the notification.
Questions:
What if the data received by the notification service from the Service Bus event does not have all the data needed for the notification template? How do I query/get the missing data from the other service?
Suppose a service publishes 100 events for a single operation and we need to send a single notification for that whole operation. How does the notification service manage that, since it will get 100 different messages separately?
Since the notification trigger depends on data sent from other sources (the Service Bus event), what happens when we have a notification that is scheduled (let's say 6 AM every day)? How do we get the data needed for the notification (since the data is not there in the notification DB)?
I am looking for advice from experience and some material to refer to. Thanks in advance.
You might have to implement notifications as a service, which means treating it almost as if you were exporting that part of your application as a plugin in Azure itself, and having the notification service only accept notifications that carry valid information. A few points here:
1. Have a caching system on both the front end (state management) and the backend/microservices side (Redis or any other caching system).
2. Capture an EventId on each operation; it's good practice to track complex operations this way, and it lets you deal with duplicate notifications. If possible, avoid sending that kind of duplicate to the user, or try to send one notification covering a group of notifications in a single message.
3. Put circuit-breaker logic here to handle invalid notifications: put this type of notification in a retry queue (of 30 minutes, maybe?) and republish the event again.
References
https://www.rabbitmq.com/dlx.html
https://microservices.io/patterns/reliability/circuit-breaker.html
https://redis.io/topics/introduction
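For the retry-queue idea in point 3, a minimal sketch, assuming Azure Service Bus (which the question already uses) and the azure-servicebus Python SDK; the queue name, connection-string setting, and the 30-minute delay are placeholders:

```python
import os
from datetime import datetime, timedelta, timezone

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = os.environ["SERVICEBUS_CONNECTION_STRING"]  # placeholder setting name
RETRY_QUEUE = "notification-retries"                   # placeholder queue name


def schedule_retry(raw_event: str, delay_minutes: int = 30) -> None:
    """Republish a notification event onto a retry queue after a delay."""
    retry_at = datetime.now(timezone.utc) + timedelta(minutes=delay_minutes)
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(RETRY_QUEUE) as sender:
            # schedule_messages keeps the message invisible until retry_at,
            # so the consumer sees it again only after the delay has passed.
            sender.schedule_messages(ServiceBusMessage(raw_event), retry_at)
```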
Happy coding :)
In microservice and domain-driven design it's sometimes hard to work out when to start splitting services. Having each service be responsible for constructing and sending its own notifications is perfectly valid.
It is when additional decisions need to be made that are not related to the 'origin' service that things become more tricky.
EG. 1
You have an order microservice that sends an email to the sales team and the user when an order is placed.
Then the payment service updates sales and the user with an sms message when the payment is processed.
You could then decide to let the user manage their notification preferences. They can now decide whether they want SMS / email / push messages, and which messages they would like to receive.
We now have a problem. These notification preferences would need to be understood by every service sending messages. Any new team or service that starts sending messages also needs to remember to implement these preferences.
You may also want the user to view all historic messages they have been sent. Again you get into a problem where there is no single source for that information.
EG 2
We now have a notification service; it is listening for order created, order updated, order completed, and payment processed events.
It is listening for:
Order Created
Order Updated
only to make sure it has the information it needs to construct the messages. It is common, and in a lot of cases required, to have system-wide redundancy of data when using microservices. You need to imagine that each service is an island, so while it feels wasteful to store that information again, if it is required for that service to perform its work then it is valid.
Note: don't store the data wholesale, store only what is relevant for that service.
We can then use the:
Order Complete
Payment Processed
events as triggers to actually start constructing and sending the messages.
Problems:
Understanding if the service has all the required data
This is up to the service to determine. If the Order Complete event comes through but the service has not yet received an Order Created event, then it should store the Order Complete event and try to process it again in the future, when all the information is available.
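A minimal sketch of that deferral idea, using an in-memory store keyed by order id purely for illustration (in practice this state lives in the notification service's own database); the event names follow the example above and everything else is assumed:

```python
from collections import defaultdict

REQUIRED_EVENTS = {"order.created", "order.completed", "payment.processed"}

# Events seen so far, keyed by order id. In a real service this would be
# persisted in the notification service's database, not held in memory.
seen_events: dict[str, dict[str, dict]] = defaultdict(dict)


def send_notification(order_id: str, events: dict[str, dict]) -> None:
    # Placeholder: fill the template from the stored event data and send it.
    print(f"Sending notification for order {order_id}")


def handle_event(event_type: str, payload: dict) -> None:
    """Store each incoming event; notify only once all required events exist."""
    order_id = payload["order_id"]
    seen_events[order_id][event_type] = payload

    if REQUIRED_EVENTS.issubset(seen_events[order_id]):
        send_notification(order_id, seen_events[order_id])
        del seen_events[order_id]
    # Otherwise do nothing: a later event will arrive and re-run this check
    # (the "store it and try to process again later" idea above).
```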
100 events resulting in a notification
Data aggregation is also an important microservice concept, and there are many ways to ensure completeness that will come down to your specific use case.

Asana API Sync Error

I currently have an application running that passes data between Asana and Zendesk.
I have webhooks created for all my Projects in Asana, and all project events are sent to my webhook endpoint, which verifies the request, tries to identify the event, and updates Zendesk with relevant data depending on the event type (some events aren't required).
However, I have recently been receiving the following request from the webhooks:
"events": [
{
"action": "sync_error",
"message": "There was an error with the event queue, which may have resulted in missed events. If you are keeping resources in sync, you may need to manually re-fetch them.",
"created_at": "2017-05-23T16:29:13.994Z"
}
]
Now, because I don't poll the API for event updates (I react when the events arrive), I haven't considered using a sync key; the docs suggest this is only required when polling for events. Do I need to use one when using webhooks as well?
What am I missing?
Thanks in advance for any suggestions.
You're correct, you don't need to track a sync key for webhooks - we proactively try to reach out with them when something changes in Asana, and we track the events that haven't yet been delivered across webhooks (essentially, akin to us updating the sync key server-side whenever webhooks have been successfully delivered).
Basically what's happening here is that for some reason, our event queues detect that there's a problem with their internal state. This means that events didn't get recorded, or webhooks didn't get delivered after a long time. Our events and webhooks try to track changes in a best-effort sense, and there are some things that can happen with our production machines that can cause these sorts of issues, like a machine dying at an inopportune time.
Unfortunately, then, the only way to get back to a good state is to do a full scan of the projects you're tracking, which is what is meant by you may need to manually re-fetch them. Basically, a robust implementation of syncing Asana to external resources looks like:
A diff function that, given a particular task and external resource, detects what state is out of date or different between the two resources and chooses a merge/patch resolution (i.e. "Make Zendesk look like Asana")
Receiving a webhook runs that diff/patch process for that one task in a "live" fashion.
Periodically (on script startup, say, or when webhooks/events are missed and you get an error message like this) update all resources that might have been missed by scanning the entire project and do the diff/patch for every task. This is more expensive, but should be significantly more rare.
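A very rough sketch of that shape, with fetch_asana_task, fetch_zendesk_ticket, and patch_zendesk_ticket standing in for your actual API calls; the field mapping and the way the task id is pulled out of the webhook event are illustrative only:

```python
def fetch_asana_task(task_id: str) -> dict:
    ...  # placeholder: call the Asana API for this task


def fetch_zendesk_ticket(task_id: str) -> dict:
    ...  # placeholder: look up the Zendesk ticket linked to this task


def patch_zendesk_ticket(task_id: str, changes: dict) -> None:
    ...  # placeholder: apply the changes to Zendesk


def diff_and_patch(task_id: str) -> None:
    """Make Zendesk look like Asana for a single task."""
    task = fetch_asana_task(task_id)
    ticket = fetch_zendesk_ticket(task_id)

    # Illustrative mapping; compare only the fields you actually sync.
    desired = {
        "subject": task["name"],
        "status": "solved" if task["completed"] else "open",
    }
    changes = {k: v for k, v in desired.items() if ticket.get(k) != v}
    if changes:
        patch_zendesk_ticket(task_id, changes)


def handle_webhook_event(event: dict) -> None:
    # "Live" path: run the diff/patch for the one task named in the event.
    if event.get("action") != "sync_error":
        task_id = str(event.get("resource"))  # id shape depends on API version
        diff_and_patch(task_id)


def full_rescan(all_task_ids: list[str]) -> None:
    # Recovery path: after a sync_error (or on startup), re-check every task.
    for task_id in all_task_ids:
        diff_and_patch(task_id)
```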

API usage Limit in Flurry?

Flurry says that the rate limit for the API is 1 request per second; in other words, you may call the API once every second. I could not understand this. Does it mean that whenever an event occurs in the mobile application a request is sent to the server, rather than everything being sent as a whole? Am I right? Any help please?
When registering events you don't need to worry about API limits. All events you fire are stored locally, and when your session finishes the whole event package is sent to the Flurry server.
You could create a queue in your application with the events you want to register with the API, and continuously try to send all items in that queue, with a one second interval.
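A minimal sketch of that queue-and-interval idea, with send_to_api as a placeholder for the actual request; the one-second spacing simply matches the stated rate limit:

```python
import queue
import threading
import time

event_queue: "queue.Queue[dict]" = queue.Queue()


def send_to_api(event: dict) -> None:
    ...  # placeholder: make the actual HTTP request to the API


def sender_loop() -> None:
    """Drain the queue at no more than one request per second."""
    while True:
        event = event_queue.get()  # blocks until an event is queued
        send_to_api(event)
        event_queue.task_done()
        time.sleep(1)              # respect the 1 request/second limit


threading.Thread(target=sender_loop, daemon=True).start()

# Elsewhere in the application, just enqueue events as they occur:
event_queue.put({"name": "button_clicked"})
```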
