I have a scenario where the price calculation happens in real time by calling an external service.
In Spartacus, what would be the best approach?
Writing a custom service that makes the real-time call to fetch pricing data, and using (calling) that service in Spartacus?
Or calling the external service with the required parameters directly from the Spartacus storefront?
Well, you should create a service that uses your newly created NgRx store (actions, effects, reducers, etc.) and your own connector/adapter to call your external API. Then you can subscribe to the state changes to render the response when the data is received.
I guess by real time you mean something that is calculated on an event, or is it something you need to stream/call every x seconds?
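For concreteness, here is a minimal sketch of that wiring in Angular/NgRx. All names (PriceAdapter, PriceService, the loadPrice action, the /external-pricing endpoint) are hypothetical and not part of Spartacus itself; a real implementation would follow Spartacus' own connector/adapter conventions.

```typescript
import { inject, Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { Actions, createEffect, ofType } from "@ngrx/effects";
import {
  createAction, createFeatureSelector, createReducer, createSelector,
  on, props, Store,
} from "@ngrx/store";
import { map, switchMap } from "rxjs/operators";

// Actions
export const loadPrice = createAction("[Price] Load", props<{ productCode: string }>());
export const loadPriceSuccess = createAction("[Price] Load Success", props<{ price: number }>());

// Adapter: the one place that knows how to call the external pricing API.
@Injectable({ providedIn: "root" })
export class PriceAdapter {
  private http = inject(HttpClient);
  load(productCode: string) {
    return this.http.get<number>(`/external-pricing/${productCode}`); // hypothetical endpoint
  }
}

// Effect: reacts to the load action, calls the adapter, stores the result.
@Injectable()
export class PriceEffects {
  private actions$ = inject(Actions);
  private adapter = inject(PriceAdapter);

  loadPrice$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadPrice),
      switchMap(({ productCode }) =>
        this.adapter.load(productCode).pipe(map((price) => loadPriceSuccess({ price })))
      )
    )
  );
}

// Reducer + selector (registered via StoreModule.forFeature("price", priceReducer)).
interface PriceState { price: number | null; }
export const priceReducer = createReducer<PriceState>(
  { price: null },
  on(loadPriceSuccess, (state, { price }) => ({ ...state, price }))
);
export const selectPrice = createSelector(
  createFeatureSelector<PriceState>("price"),
  (state) => state.price
);

// Facade service that components subscribe to.
@Injectable({ providedIn: "root" })
export class PriceService {
  private store = inject(Store);
  price$ = this.store.select(selectPrice);
  load(productCode: string): void {
    this.store.dispatch(loadPrice({ productCode }));
  }
}
```

A component would then call priceService.load(code) and bind to priceService.price$ with the async pipe, so the template re-renders whenever a fresh price arrives.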
Scenario
When business transactions are performed, we're supposed to make that data available to end clients.
Current Design
Our web app publishes transaction messages to a topic on Azure Service Bus.
We expose APIs to clients through which they can consume the data from those transactions.
Upon calling these APIs, we read the messages from the subscription and return them to the client.
Problem
We want guaranteed delivery - we want to make sure the client acknowledges receipt of the data. So we don't want to remove the message from the subscription immediately; we want to keep it until the client acknowledges it.
So we only want to do a "Peek" instead of a "Receive".
The client calls the first API to get the data, where we do a Peek.
Once the client has received the packets, it calls a second API to acknowledge.
At this point, we want to remove the message from the subscription, marking it Complete.
The current design of the Service Bus message receiver is that a Complete can be performed only using the same Receiver instance that performed the Peek, as per the documentation, and we also observed this when we tried it out.
The two APIs are separate, so we cannot do the Peek and the Complete using the same instance of the Receiver.
We're thinking about options to somehow make the Receiver a singleton across the APIs within that App Service.
However, this will be a problem when the App Service scales out.
Is there a different way to achieve what we're trying to do here?
There is an option available in Azure Service Bus to defer messages. Once a message is deferred, it can be received with the help of its sequence number.
The first API should receive the message and, instead of completing it, defer it and return it along with its sequence number.
The second API (which has the sequence number) can then receive the deferred message from the subscription. Refer here for more details.
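A minimal sketch of that flow using the @azure/service-bus SDK; the connection string, topic, and subscription names are placeholders:

```typescript
import Long from "long";
import { ServiceBusClient } from "@azure/service-bus";

// Placeholder connection and entity names.
const sbClient = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION!);
const receiver = sbClient.createReceiver("transactions", "client-sub");

// First API: receive the next message, defer it (so it stays in the
// subscription), and hand the payload plus sequence number to the client.
export async function getTransaction() {
  const [message] = await receiver.receiveMessages(1, { maxWaitTimeInMs: 5000 });
  if (!message) return null;
  await receiver.deferMessage(message);
  return { body: message.body, sequenceNumber: message.sequenceNumber!.toString() };
}

// Second API: the client acknowledges with the sequence number; fetch the
// deferred message by that number and complete it to remove it for good.
// Unlike Complete-after-Peek-Lock, this works from any receiver instance,
// so it survives separate API calls and scale-out.
export async function acknowledgeTransaction(sequenceNumber: string) {
  const [deferred] = await receiver.receiveDeferredMessages(Long.fromString(sequenceNumber));
  if (deferred) await receiver.completeMessage(deferred);
}
```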
Another option would be to not use a Service Bus client on your backend at all; instead, your clients could work directly with Service Bus using its REST API (assuming they can't use the AMQP client, if I'm understanding your scenario correctly).
There are APIs to
Peek-Lock
Renew Lock
Unlock
Delete (Complete)
You could also proxy these requests through your backend itself, or through a service like APIM if you are already using it.
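To illustrate, a minimal sketch of the Peek-Lock and Delete (Complete) calls against the Service Bus REST API; the namespace, entity names, and SAS token handling are placeholders:

```typescript
// Placeholder subscription URL and a pre-generated SAS token.
const base =
  "https://mynamespace.servicebus.windows.net/transactions/subscriptions/client-sub";
const headers = { Authorization: process.env.SERVICE_BUS_SAS_TOKEN! };

// Peek-Lock: atomically receives and locks the message at the head of the
// subscription; the message id and lock token come back in BrokerProperties.
async function peekLock() {
  const res = await fetch(`${base}/messages/head?timeout=60`, { method: "POST", headers });
  if (res.status === 204) return null; // no messages available
  const props = JSON.parse(res.headers.get("BrokerProperties")!);
  return { body: await res.text(), messageId: props.MessageId, lockToken: props.LockToken };
}

// Delete (Complete): removes the locked message once the client acknowledges.
async function complete(messageId: string, lockToken: string) {
  await fetch(`${base}/messages/${messageId}/${lockToken}`, { method: "DELETE", headers });
}
```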
PS: Cross-posting the answer for the same query on the MSDN forum.
I am converting a monolithic application to microservices. I have set up an API Management layer and a Service Bus, all within a Service Fabric cluster. The idea is to use messages to communicate with the microservices so they don't know about each other.
When one microservice needs information, it posts a message to the service bus; the request gets fulfilled, and a reply is sent and correlated.
The only problem is that API Management posts the message to the service bus and returns without waiting for a reply, so the client never gets a response.
Is there a way to have the API Management wait for a reply?
Would this need a sort of broker service in-between?
Is it better to just have a REST layer on each microservice that the API Management could call but then the services would use the service bus?
Thanks for any help.
UPDATE:
I think the only way to have API Management wait is to use a Logic App. Not sure about this.
Any Azure experts out there?
The way APIM is behaving is actually expected.
Service Bus is meant to decouple different (micro)services and inherently doesn't have a request-response style of operation, though one can be implemented on top of it.
Here is one way you can design/implement your system.
First, for a request-response style operation with Service Bus, one way you can achieve it is by using two queues.
One for sending the request (along with some Unique ID - GUID will do) and the other for receiving the response (which again contains the Unique ID sent in the request).
Instead of having APIM work with Service Bus directly, have it call a Logic App or Function which does this for you.
Finally, waiting for the response is something that will depend on your use case.
If you have a very long-running task, it's best to follow the async pattern implemented by both Logic Apps and Functions (using Durable Functions), which returns a 202 Accepted response immediately with a status URI that your client can poll for updates.
But if it's a quick response (before the HTTP request times out), you could probably wait for the response Service Bus message and return the response then. For this, your Logic App or Function would have to poll/wait for the Service Bus message with the same unique ID and then return the response.
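For the quick-response case, here is a minimal sketch of the two-queue pattern with @azure/service-bus, assuming a session-enabled "responses" queue; the queue names and the 20-second wait are placeholders:

```typescript
import { randomUUID } from "crypto";
import { ServiceBusClient } from "@azure/service-bus";

// Sends a request and waits for the correlated reply. The responding
// microservice is expected to copy replyToSessionId into the reply's sessionId.
export async function requestResponse(payload: unknown): Promise<unknown> {
  const sbClient = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION!);
  const requestId = randomUUID(); // the unique ID correlating request and reply

  // Queue 1: send the request, tagged with the unique ID.
  const sender = sbClient.createSender("requests");
  await sender.sendMessages({ body: payload, replyToSessionId: requestId });

  // Queue 2: wait for the reply, scoped to our session so we only see our reply.
  const receiver = await sbClient.acceptSession("responses", requestId);
  const [reply] = await receiver.receiveMessages(1, { maxWaitTimeInMs: 20000 });
  if (reply) await receiver.completeMessage(reply);

  await receiver.close();
  await sbClient.close();
  return reply?.body;
}
```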
I have created a web service by implementing the Web Connector callback methods, and I have two apps running in the Web Connector for two use cases: invoices and customers.
I want to determine which app is triggering my web service so that I can decide whether to push customers or invoices to QB. How do I do it? I was hoping to do it through the 'AppID' field, but it's not returned in every SendRequestXML call - in fact, not even in the first SendRequestXML call. Has anyone implemented these scenarios?
If you have two apps in the Web Connector, then each app should be using a different username, and pointing to a different URL.
If they're using the same URL or the same username... then you did something wrong and you should fix your implementation.
With that said... it sounds like you have sort of a wonky implementation anyway. Why do you have two apps for two different use cases? Why not just send BOTH customers and invoices through the same app...?
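If you do keep two apps, here is a minimal sketch of telling them apart by the username passed to the authenticate() callback; the usernames and qbXML builders are hypothetical:

```typescript
import { randomUUID } from "crypto";

// Hypothetical qbXML builders -- stand-ins for your real request builders.
const buildInvoiceAddRequest = () => `<?qbxml version="13.0"?>...`;
const buildCustomerAddRequest = () => `<?qbxml version="13.0"?>...`;

// Map the session ticket issued at authenticate() time to the app that opened it.
const ticketToApp = new Map<string, "invoices" | "customers">();

// authenticate(username, password) is the first Web Connector callback; the
// username identifies which .qwc app is calling, so remember it per ticket.
export function authenticate(username: string, _password: string): string[] {
  const ticket = randomUUID();
  ticketToApp.set(ticket, username === "invoice-app" ? "invoices" : "customers");
  return [ticket, ""]; // "" tells the Web Connector there is work to do
}

// Later callbacks such as sendRequestXML only carry the ticket,
// so look the app back up from it.
export function sendRequestXML(ticket: string): string {
  return ticketToApp.get(ticket) === "invoices"
    ? buildInvoiceAddRequest()
    : buildCustomerAddRequest();
}
```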
I am creating a bot using Microsoft Bot Framework (BotBuilder) and want it to message the user when an appointment is about to begin.
I currently use the Microsoft Graph API to access the user's Office 365 calendar and store the appointments. A background thread then keeps track of time and messages the user when an appointment is about to start.
The current idea is to use Graph webhooks to notify my bot about new appointments.
My question is, would it be smarter to use an Azure service (such as Scheduler) to keep track of the appointments, and send rest messages to my bot, which will then send a message to the user?
My worry is that as the number of users rises, the number of appointments and time checks will become too large, and maybe Azure services would be able to handle it better.
This is a perfect fit for Azure Functions with an HTTP trigger.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook
This article explains how to configure and work with HTTP triggers and bindings in Azure Functions. With these, you can use Azure Functions to build serverless APIs and respond to webhooks.
Azure Functions provides the following bindings:
An HTTP trigger lets you invoke a function with an HTTP request. This can be customized to respond to webhooks.
An HTTP output binding allows you to respond to the request.
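As a concrete example, here is a minimal sketch of an HTTP-triggered function (Node.js v4 programming model) that could receive Graph webhook notifications; the function name and the follow-on bot logic are placeholders:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

export async function calendarWebhook(
  request: HttpRequest,
  context: InvocationContext
): Promise<HttpResponseInit> {
  // Microsoft Graph validates a new subscription by sending a validationToken
  // that must be echoed back as plain text.
  const validationToken = request.query.get("validationToken");
  if (validationToken) {
    return { status: 200, body: validationToken };
  }

  // Otherwise the body carries change notifications for the subscribed calendar.
  const notifications = await request.json();
  context.log("Received Graph notification", notifications);
  // ...queue a proactive bot message to the user here...
  return { status: 202 };
}

app.http("calendarWebhook", { methods: ["POST"], handler: calendarWebhook });
```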
We are creating a multi-tenant application. To allow users to create business logic, we want to use Logic Apps.
Therefore I want to create a web app which will expose the DocumentDB change feed.
When creating a logic app, you can choose between different out of the box connectors. How can we get ours included in the list? Is there any documentation on that?
The idea is to get the logic app running with every document insert.
To achieve this, I have two options: Polling triggers and Webhook triggers.
I prefer the polling trigger because it will be less work than implementing the logic to handle all the subscribed URLs per tenant. Does anyone have concerns/suggestions about this approach?
The location header should carry my continuation token from the DocumentDB change feed - is that correct? The flow I have in mind:
1. Logic App calls my API the first time without a location header.
2. My API calls DocDb without a continuation token, which returns the documents one by one, because the max doc count is set to 1.
3. My API returns the first document retrieved, and sets Retry-After to 0 and Location to the new continuation token it received. If no documents are found, the API returns the result as in step 5.
4. Logic App starts a new instance to handle the document and calls the API again with the continuation token in the header.
Steps 3 and 4 repeat until all documents are processed. Because I am only processing one document per Logic App instance, Azure should be able to scale for me automatically?
5. When all documents are processed, the API returns a 202 status code with the Location header set to the latest continuation token and Retry-After set to 15.
6. After 15 seconds, Logic App calls our API with the latest continuation token. This triggers the process again.
Is my solution feasible? What if I need to stop or clone the Logic App configuration for some reason - how can I know what the latest continuation token was, or do I need to save my continuation tokens in some data store?
Yes, what you've described here should be supported. You can use your own connector in a Logic App by clicking the dropdown above the search and selecting to use an API from API Management or App Services, as detailed here and here.
The continuation token can be preserved in the "trigger state" of the location header, assuming you are using the 202 polling pattern above. For example, the header may be https://mydocdbconnector.azurewebsites.net/api/trigger?triggerstate={thisCouldBeTheContinuationToken} - that way, on subsequent polls, the last continuation token is sent back to the trigger and can be used in the operation. Trigger state is preserved as long as the trigger remains unchanged in the definition (enabling/disabling/etc. all preserve trigger state).
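A minimal Express-style sketch of that 202 polling contract; the route, the readOneFromChangeFeed helper, and the host name are hypothetical:

```typescript
import express from "express";

// Hypothetical helper -- stand-in for your DocumentDB change feed reader,
// which would query with maxItemCount = 1 and return the next continuation.
async function readOneFromChangeFeed(
  continuation?: string
): Promise<{ doc: unknown | null; nextContinuation: string }> {
  return { doc: null, nextContinuation: continuation ?? "" };
}

const app = express();

app.get("/api/trigger", async (req, res) => {
  // triggerState carries the change feed continuation token between polls.
  const continuation = req.query.triggerState as string | undefined;
  const { doc, nextContinuation } = await readOneFromChangeFeed(continuation);

  const location =
    "https://mydocdbconnector.azurewebsites.net/api/trigger?triggerState=" +
    encodeURIComponent(nextContinuation);

  if (doc) {
    // A document is available: fire the logic app now (200) and have it poll
    // again immediately (Retry-After: 0) with the new continuation token.
    res.status(200).set({ Location: location, "Retry-After": "0" }).json(doc);
  } else {
    // Nothing new: 202 tells Logic Apps to poll Location after Retry-After seconds.
    res.status(202).set({ Location: location, "Retry-After": "15" }).end();
  }
});

app.listen(3000);
```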
The only part I'm not clear on is the multi-tenant requirement you have. I assume you mean you want each of the users to be able to trigger on their own DocumentDB instance - the best supported pattern for this today is to have a Logic App per customer, each with its own trigger state and trigger. This could leverage a custom connector as well. This is the pattern used by services like Microsoft Flow, which are built on Logic Apps.
Let me know if that helps.