I am trying to work out the best implementation/approach to the following problem.
I have customers using our WinForms application, which has a plugin that connects to an Azure Queue at pre-configured intervals to check whether there are invoices waiting for the connecting customer. If there are, the plugin downloads the invoices into the customer's local database. There are lots of customers using this application, so all of them will connect to the queue, and each needs to download only their own invoices.
The way I thought of implementing this was to have a named queue for each customer (the customer GUID identifies the queue), with all customers using the same account key/name to connect. The problem with this is that each customer has the account key/name in the DLL, which a determined customer can retrieve through reflection. So is there a way I can encrypt the key/name, or is there a better solution somebody can suggest?
I think the only secure option is to stand up a web service somewhere that acts as a front-end to the queues. Otherwise, as you said, you're leaking the account key to the client, which would allow any customer to read/change/delete any data in the account.
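For illustration, here is a minimal sketch of that front-end idea, assuming Azure Storage Queues and a small Python/Flask service: the account key stays on the server, and each authenticated customer receives a short-lived SAS token scoped to its own queue. The endpoint path, queue naming scheme, and the authenticate_customer helper are hypothetical, not a prescribed implementation.

```python
# Minimal sketch of a web service front-end that hands out per-customer,
# queue-scoped SAS tokens. Assumes azure-storage-queue and Flask.
from datetime import datetime, timedelta, timezone

from azure.storage.queue import QueueSasPermissions, generate_queue_sas
from flask import Flask, abort, request

app = Flask(__name__)

ACCOUNT_NAME = "invoicesaccount"           # hypothetical storage account name
ACCOUNT_KEY = "<kept-only-on-the-server>"  # never shipped inside the client DLL


def authenticate_customer(req):
    """Validate the caller (e.g. via an API key or OAuth token) and return
    the customer GUID, or None if authentication fails. Illustrative stub."""
    ...


@app.route("/api/queue-token")
def issue_queue_token():
    customer_id = authenticate_customer(request)
    if customer_id is None:
        abort(401)

    queue_name = f"invoices-{customer_id}"  # per-customer queue
    # Short-lived SAS scoped to this customer's queue only: the client can
    # read and delete its own messages but cannot touch other queues.
    sas = generate_queue_sas(
        account_name=ACCOUNT_NAME,
        queue_name=queue_name,
        account_key=ACCOUNT_KEY,
        permission=QueueSasPermissions(read=True, process=True),
        expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    return {"queueName": queue_name, "sasToken": sas}
```

The WinForms plugin would then call this endpoint at each polling interval and use the returned token with a QueueClient, so nothing worth reflecting out ever ships in the DLL.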
We want to migrate Stripe customers, products, and subscriptions from one account to another. The thing is that one of the accounts has recently become suspended and we can't take charges from the customers via this account. The second one is fine and fully active. We want to transfer the data and continue taking charges from clients with active subscriptions. Can we do it or is it against Stripe policy?
We haven’t found any information about migrating subscribers from a suspended account.
You can't copy individual charges, invoices, plans and subscriptions, coupons, events, and logs from one Stripe account to another. You can only copy the raw customer objects. This process is a copy, not a migration, so all of your old data will remain in your old account. We recommend keeping the original "old" account around so you can access the legacy data there if you ever need to reference it.
You can follow this Stripe support article on how to recreate subscriptions in your second account.
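As a rough illustration only, recreating a subscription with the stripe Python library looks roughly like this; it assumes the customer objects have already been copied by Stripe and the prices/plans have been recreated in the new account, and all IDs below are placeholders.

```python
# Sketch of recreating one subscription in the new (active) account.
import stripe

stripe.api_key = "sk_live_..."  # secret key of the NEW account

subscription = stripe.Subscription.create(
    customer="cus_CopiedCustomerId",             # customer copied into the new account
    items=[{"price": "price_RecreatedPlanId"}],  # price/plan recreated in the new account
    # Optionally avoid double-billing customers who already paid this period,
    # e.g. by setting trial_end to the current period end from the old account.
)
print(subscription.id, subscription.status)
```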
I have an online service hosted on Azure that asynchronously sends data to on-premises clients.
Each client is identified by a unique code.
Currently there is a single topic with a subscription for each client; each subscription has a filter on the unique code, which is sent as a property of the message. No message will ever be broadcast to all the clients.
I feel that using a topic this way is wrong.
The alternative that comes to mind is to use a dedicated queue for each client, created on first contact.
Could this be a better approach?
Thanks
In my opinion using Topics and Subscriptions is the right way to go. Here's the reason why:
Currently the routing logic (which message needs to go to which subscription) is handled by Azure Service Bus based on the rules you have configured. If you go with queues, the routing logic will have to move into your hosted service: you'll need to ensure that the queue exists before sending each message. I think it will increase the complexity at your service level.
Furthermore, topics and subscriptions would enable you to build audit-trail functionality (not sure if you're looking for that). You can create a separate subscription with a rule that delivers all messages to it (a true, match-everything SQL rule), alongside the client-specific subscriptions.
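For illustration, a minimal sketch of that setup with the azure-servicebus Python SDK might look like the following; the topic, subscription, and property names are made up.

```python
# Create one filtered subscription per client plus an audit subscription.
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

CONN_STR = "<service-bus-connection-string>"
TOPIC = "client-events"

admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)

# One subscription per client, filtered on the unique code the sender stamps
# on each message as an application property.
admin.create_subscription(TOPIC, "client-abc123")
admin.delete_rule(TOPIC, "client-abc123", "$Default")  # drop the catch-all default rule
admin.create_rule(
    TOPIC, "client-abc123", "client-filter",
    filter=SqlRuleFilter("clientCode = 'abc123'"),
)

# Optional audit-trail subscription: its default "$Default" rule is already a
# match-everything (true) rule, so every message published to the topic is
# also delivered here.
admin.create_subscription(TOPIC, "audit")
```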
Creating a separate Queue for each client is not advisable. This is the problem solved by Topics.
If you have a separate queue for each client, then you need to send messages to multiple queues from the server, which becomes tedious as the number of clients increases.
Having a single topic and multiple subscriptions is easier to manage, as messages are only ever sent to a single topic from the server.
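Sender-side, the hosted service then only ever publishes to the one topic and stamps each message with the client's code; a rough sketch with the azure-servicebus Python SDK (names are illustrative):

```python
# Publish to a single topic; Service Bus does the per-client routing based on
# the subscription filters matching the application property below.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_topic_sender(topic_name="client-events") as sender:
        msg = ServiceBusMessage(
            body='{"invoiceId": 42}',                        # illustrative payload
            application_properties={"clientCode": "abc123"}, # used by subscription filters
        )
        sender.send_messages(msg)
```

Each on-premises client would then read only from its own subscription (via get_subscription_receiver), so the routing itself stays inside Service Bus.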
Following is the proposed transition in our application:
The web application is deployed in on-premises IIS (Web Server 1).
The web application has one function (for example, Generate Invoice for a selected customer).
For each new Generate Invoice request, the web application writes a message to an Azure Service Bus queue.
An Azure Function is triggered for each new message in the Service Bus queue.
The Azure Function calls a Web API (deployed on-premises).
The Web API generates the invoice for the customer and stores it in local file storage.
As of now, we have everything set up on-premises, and instead of Service Bus and an Azure Function, we consume the Web API directly. With this infrastructure in place, we currently log all events in a MongoDB collection and provide a single consolidated view to the user, so they can see what happened to a Generate Invoice request and, in case of failure, at which level and with which error it failed.
With the newly proposed architecture, we are in the process of identifying ways to do logging and tracing, and to display a consolidated view to the users.
The only option I can think of is to log all events in Azure Cosmos DB from everywhere (i.e., website, Service Bus, Function, Web API) and then provide the consolidated view.
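For illustration, one way to keep the view consolidated regardless of which store is used is to stamp each Generate Invoice request with a correlation id in the web application, carry it on the Service Bus message, and have every component include it in its log entries. The sketch below assumes the azure-servicebus Python SDK; the queue and field names are illustrative.

```python
# Stamp a correlation id on the outgoing message and log it alongside the
# request, so web app, Function, and Web API entries can be joined later.
import json
import logging
import uuid

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"
QUEUE = "generate-invoice"

logger = logging.getLogger("invoice")


def enqueue_generate_invoice(customer_id: str) -> str:
    correlation_id = str(uuid.uuid4())
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(QUEUE) as sender:
            sender.send_messages(
                ServiceBusMessage(
                    body=json.dumps({"customerId": customer_id}),
                    correlation_id=correlation_id,  # flows with the message
                )
            )
    # Every component logs with the same id, so events from all levels can be
    # stitched into a single timeline in whichever store is chosen.
    logger.info("Invoice request queued", extra={"correlationId": correlation_id})
    return correlation_id
```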
Can anyone say whether this approach looks OK, or does anyone have a better solution?
Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations and diagnose errors without waiting for a user to report them.
Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
For more details, you could refer to this article.
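If Application Insights is the chosen store, a minimal sketch of wiring a Python component into it could look like the following; it assumes the azure-monitor-opentelemetry package (the .NET components would use the equivalent SDK), and the connection string is a placeholder.

```python
# Point this component at a shared Application Insights resource so its log
# records land in the same place as the other components' telemetry and can
# be queried from Workbooks.
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
)

logger = logging.getLogger("invoice")
logger.setLevel(logging.INFO)
# The correlation id is attached to the record so the entry can be matched
# with the other components' entries for the same request.
logger.info("Invoice generated", extra={"correlationId": "..."})
```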
How would you go about using Service Bus in a scenario where only client applications with unexpired subscriptions can receive messages from the service bus? Let's say you have a paid service where users can buy subscriptions to your messages for a period of time, so you want your service bus to send new messages only to a selected group of clients (clients with active subscriptions). It is much preferred if authorization for this is done on the server side and not on the client app. Looking at the service bus models (queues, topics, relays), none of them seem to fit this use case.
One way I was thinking of implementing this was to change the SAS key every day and have client applications query the SAS key from a Web API, so only clients with valid subscriptions can refresh their SAS and receive from the service bus. I don't know whether the SAS key can be changed through the API, though.
Is there any better support for this kind of scenario in Azure Service Bus, or can you think of a better way to implement it?
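As an illustration of a variant of the second idea: instead of rotating the key itself, the Web API can mint short-lived SAS tokens on demand and hand one out only after checking that the caller's paid subscription is still active. The sketch below builds the standard Service Bus SAS token in Python; the resource URI and key names are placeholders.

```python
# Build a time-limited Service Bus SAS token (HMAC-SHA256 over the
# URL-encoded resource URI and the expiry timestamp).
import base64
import hashlib
import hmac
import time
import urllib.parse


def make_service_bus_sas(resource_uri: str, key_name: str, key: str,
                         ttl_seconds: int = 3600) -> str:
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={key_name}"
    )


# The Web API would call this only after verifying the caller's subscription:
token = make_service_bus_sas(
    "https://mynamespace.servicebus.windows.net/mytopic/subscriptions/client-abc",
    key_name="listen-only-key",
    key="<shared-access-key>",
    ttl_seconds=24 * 3600,  # expires on its own, no key rotation needed
)
```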
We are working on a SaaS-based application (built on Azure). In this application the web server and app server are shared among all tenants, but their databases are separate (SQL Azure).
Now there is a need to implement a notification service which can generate notifications based on event subscriptions. The system can generate different kinds of events (like account locked, and many others) and users can configure notification rules on these events. Notifications can be in the form of email or SMS.
We are planning to implement a queue for events. The event notifier will push an event onto this queue, and the notification engine will subscribe to it. Whenever it receives a new event, it will check whether a notification rule is configured for this type of event. If so, it will create a notification, which results in emails/SMS. These emails/SMS can be stored in a database or pushed to another queue, and a separate background process (worker role) can process them.
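As an illustration of that flow (not a prescribed design), here is a compact sketch assuming RabbitMQ via the pika library for both the event queue and the email queue; the Azure equivalents would follow the same shape, and the rule lookup and message fields are made up.

```python
# Notification engine: consume events, look up the tenant's rule, and push a
# rendered notification onto a separate queue for the email/SMS worker.
import json

import pika


def lookup_notification_rule(tenant_id, event_type):
    """Return the notification rule the tenant configured, or None. Stub."""
    ...


def on_event(channel, method, properties, body):
    event = json.loads(body)
    rule = lookup_notification_rule(event["tenantId"], event["eventType"])
    if rule is not None:
        channel.basic_publish(
            exchange="",
            routing_key="notifications-email",
            body=json.dumps({
                "tenantId": event["tenantId"],
                "to": rule["recipients"],
                "template": rule["template"],
                "payload": event,
            }),
            properties=pika.BasicProperties(delivery_mode=2),  # persistent
        )
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)
channel.queue_declare(queue="notifications-email", durable=True)
channel.basic_consume(queue="events", on_message_callback=on_event)
channel.start_consuming()
```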
Here are my queries.
Should we keep a single queue (for events) for all tenants, or create separate queues for different tenants? If we keep a single queue, we can have a shared subscriber service that subscribes to it, and we can easily scale this machine in/out.
Since we have a different database for each tenant, we can store their emails in their respective databases and, using some service, poll the databases and send the emails after a defined interval. But I am not sure how we would share the subscriber code in this case.
We can store the emails in a NoSQL database (like Table Storage in Azure). A subscriber (Windows service/worker role) can poll this table and send the emails after a defined interval. Again, scaling can be a challenge here too.
We can store the emails in a queue (RabbitMQ, for instance) and a worker role can subscribe to it. Scaling the worker role should not be an issue as long as we keep a single queue for all tenants.
Please provide your inputs on these points.
Thanks In Advance
I would separate queues not by tenant but by function, so that queue handlers are specific to the type of message they process.
E.g.: an order-processing queue, an account-setup queue, etc.
Creating queues by tenant is a headache to manage when you want to scale based on them and presumably need to sync/add/remove them as customers come and go, so I would avoid that scenario.
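To make the by-function layout concrete, a small sketch (again assuming RabbitMQ via pika; queue names are just examples) where the tenant id travels inside the message rather than in the queue name:

```python
# A fixed set of queues named after the type of work; the tenant id is part
# of the message, so handlers stay tenant-agnostic and no queues need to be
# created or removed as tenants come and go.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
for queue in ("order-processing", "account-setup", "notifications-email"):
    channel.queue_declare(queue=queue, durable=True)

channel.basic_publish(
    exchange="",
    routing_key="account-setup",
    body=json.dumps({"tenantId": "tenant-42", "eventType": "AccountLocked"}),
    properties=pika.BasicProperties(delivery_mode=2),
)
```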
Ultimately, scaling based on multiple queues will be harder without auto-scaling services such as CloudMonix (a commercial product I helped build).
HTH