I'm developing an API that retrieves subscription and tenant information when a user logs in to Azure through azure-cli.
When a user logs in to CSP (internally, CSP login is performed using az login --use-device-code), the user receives the URL https://microsoft.com/devicelogin and a secret code.
After completing this process, the user's login information is left on our main server, and the user's subscription and tenant information can be retrieved with it.
For a single user this process works without any problem. But when multiple users make requests at the same time, only the information of the user who made the last request is returned.
Another problem is that the main server is blocked until the azure-cli process finishes.
When multiple users request CSP login, how can I handle them without blocking the main server and without overwriting user information?
For reference, the main server is FastAPI.
Based on what you have explained, it looks like you have a FastAPI server which exposes an API that users call to log in. The response from the API will be the device-login URL and the device code for the client to use. All subsequent calls will be done using this information.
The first problem is the main-thread issue: the API should not be blocked until the process is done.
For this you need asynchronous APIs, which run background jobs and either inform the user when the task is complete or let the user poll for the information periodically until the job status changes.
This means we will need to extend our FastAPI server with something that can run work in parallel, queue tasks, and provide a lookup cache.
The best approach I suggest is a FastAPI stack with the libraries below:
Celery - an asynchronous task manager that lets you run and manage jobs in a queue.
RabbitMQ - a message broker that is used to communicate between the task workers and Celery
Redis - an in-memory cache (key-value store) for storing and retrieving values.
You can use the same Redis cache to solve your second problem as well, where you need to store and use multiple sessions' information without overwriting. This will let you keep each user's session and account info separate.
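Below is a minimal sketch of how those pieces could fit together. It is an illustration only: the broker/backend URLs, the run_csp_login task name, the Redis key scheme and the per-user AZURE_CONFIG_DIR are assumptions, and relaying the device-code URL back to the user while az login is waiting is omitted for brevity.

    import json
    import os
    import subprocess

    import redis
    from celery import Celery
    from celery.result import AsyncResult
    from fastapi import FastAPI

    celery_app = Celery(
        "csp_login",
        broker="amqp://guest@localhost//",   # RabbitMQ as the message broker
        backend="redis://localhost:6379/0",  # Redis as the result backend
    )
    cache = redis.Redis(host="localhost", port=6379, db=1)  # Redis as the lookup cache
    app = FastAPI()

    @celery_app.task
    def run_csp_login(user_id: str) -> dict:
        # Assumption: giving every user their own AZURE_CONFIG_DIR keeps one
        # login from overwriting another user's credentials.
        env = {**os.environ, "AZURE_CONFIG_DIR": f"/tmp/azure-profiles/{user_id}"}
        subprocess.run(["az", "login", "--use-device-code"], env=env, check=True)
        accounts = subprocess.run(
            ["az", "account", "list", "--output", "json"],
            env=env, check=True, capture_output=True, text=True,
        ).stdout
        cache.set(f"csp:{user_id}:accounts", accounts)  # subscription + tenant info
        return json.loads(accounts)

    @app.post("/csp/login/{user_id}")
    def start_login(user_id: str):
        task = run_csp_login.delay(user_id)  # queued, so the request returns immediately
        return {"task_id": task.id}

    @app.get("/csp/login/status/{task_id}")
    def login_status(task_id: str):
        return {"state": AsyncResult(task_id, app=celery_app).state}

The client can poll the status endpoint until the task finishes, then read the cached subscription and tenant information for that user.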
Please have a look at some sample projects from GitHub below:
jjpizarro/fastapi-celery-1
CloudNua/fastapi-celery
Related
We have an API to fetch the latest transaction data of a user based on a scheduled Next_Refresh_Time. Each user has a different scheduled refresh time, and since we have thousands of users we have to run a scheduler to fetch the data. Please suggest the best way to do it.
You could add a queue message and specify initialVisibilityDelay with the Next_Refresh_Time value when a user logs in, and then create and run a queue-triggered WebJob to process the queue message and fetch the latest data (and, if the current user is still online, add the message back to the queue with the same content and initialVisibilityDelay as the original message).
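Roughly, the scheduling part could look like the sketch below (Python with the azure-storage-queue SDK; the connection string, queue name and message shape are placeholders, and the WebJob side is omitted):

    import json
    from datetime import datetime, timezone

    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<storage-connection-string>", queue_name="refresh-jobs"
    )

    def schedule_refresh(user_id: str, next_refresh_time: datetime) -> None:
        # The message stays invisible until the user's refresh is due, so the
        # queue-triggered WebJob only picks it up at the right time.
        delay = max(0, int((next_refresh_time - datetime.now(timezone.utc)).total_seconds()))
        queue.send_message(json.dumps({"user_id": user_id}), visibility_timeout=delay)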
Besides, if you'd like to push the latest data to a specific connected user in real time, SignalR would help you implement that, and SignalR can be used on a variety of client platforms. You could save the connection ID of a logged-in user in the queue message, and then call a hub method in the WebJob function to push data to the connected user based on that connection ID.
The following thread and article explain how to establish a connection and call a hub method.
SignalR - Broadcasting over a Hub in another Project from outside of a Hub
Hubs API for SignalR
I am creating a bot using Microsoft Bot Framework (BotBuilder) and want it to message the user when an appointment is about to begin.
I currently use the Microsoft Graph API to access the user's Office 365 calendar and store the appointments. A background thread then keeps track of time and messages the user when an appointment is about to start.
The current idea is to use Graph webhooks to notify my bot about new appointments.
My question is, would it be smarter to use an Azure service (such as Scheduler) to keep track of the appointments and send REST messages to my bot, which will then send a message to the user?
My worry is that as the number of users rises, the number of appointments and time checks will become too large, and that maybe Azure services would be able to handle it better.
This is a perfect fit for Azure Functions with an HTTP Trigger.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook
This article explains how to configure and work with HTTP triggers and bindings in Azure Functions. With these, you can use Azure Functions to build serverless APIs and respond to webhooks.
Azure Functions provides the following bindings:
An HTTP trigger lets you invoke a function with an HTTP request. This can be customized to respond to webhooks.
An HTTP output binding allows you to respond to the request.
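As an illustration only, a minimal Python HTTP-triggered function (v1 programming model, paired with the usual function.json binding) might look like the sketch below; what you do with the webhook payload is up to your bot:

    import json

    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # The webhook (e.g. a Graph change notification) posts JSON here.
        try:
            payload = req.get_json()
        except ValueError:
            return func.HttpResponse("Expected a JSON body", status_code=400)

        # ... hand the notification off to the bot, queue a reminder, etc. ...
        return func.HttpResponse(
            json.dumps({"received": True}),
            mimetype="application/json",
            status_code=200,
        )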
I have a Firebase mobile application that serves many users.
The application needs to send email notifications to users, and Firebase cannot send email itself.
Zapier is not an option, as its webhook service is very limited and can't consume complex JSON such as the email body.
To solve this, I store an "email job" in the Firebase database (including To, Subject and Body), and I set up a "mail server" using a Node.js server (at home) that listens to the Firebase database, so whenever there is a new email job it sends the mail and marks the job status as "DONE".
In order to maintain high availability and scalability, I must be able to run more than one mail server, but this will cause duplicate mail, as all servers will listen for jobs.
I cannot address a job to a specific server, as that server may be down and I would lose jobs. Also, Firebase does not have anything like the SELECT FOR UPDATE that SQL databases use to maintain concurrency.
Is there a way to solve this issue using Firebase? If not, any workaround?
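For illustration of the duplicate-claim problem described above, here is a rough sketch (Python with firebase_admin rather than the Node.js used above) of how a worker could claim a job atomically with a Realtime Database transaction so that two mail servers never send the same job twice; the database path, field names and worker id are assumptions, not from the question.

    import firebase_admin
    from firebase_admin import credentials, db

    cred = credentials.Certificate("service-account.json")
    firebase_admin.initialize_app(cred, {"databaseURL": "https://<project>.firebaseio.com"})

    WORKER_ID = "mail-server-1"

    def try_claim(job_key: str) -> bool:
        job_ref = db.reference(f"email_jobs/{job_key}")

        def claim(current):
            # Only take the job if it is still NEW; otherwise leave it untouched.
            if current is None or current.get("status") != "NEW":
                return current
            current["status"] = "CLAIMED"
            current["claimed_by"] = WORKER_ID
            return current

        job_ref.transaction(claim)
        # This worker won the job only if its id ended up in claimed_by.
        return job_ref.child("claimed_by").get() == WORKER_ID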
I am building out a registration system using PayPal Hosted Pages. From what I understand I can use the Silent POST feature to let my application know when a successful transaction has occurred on the hosted checkout page. I worry that it will be possible to spoof this POST request and manipulate my application into thinking a transaction was successful.
Example:
When a user checks out they are redirected to a URL like
https://payflowlink.paypal.com/?MODE=TEST&SECURETOKENID=XXX&SECURETOKEN=YYY
They can copy XXX and YYY and use an application like cURL to send a POST request to my application endpoint, thus tricking it into thinking there was a successful transaction.
Is there a preferred method of securely handling silent POST requests to prevent this scenario? Is there a better method altogether of notifying my application of a successful transaction?
You can use a user ID with a matching secure key, as well as a date stamp. That way, only a randomly generated secure key and user ID can be used, and only for a given time frame (usually a couple of minutes)...
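A sketch of that idea, assuming the silent POST carries a user id, a timestamp and a signature field (the field names and secret are hypothetical): the server recomputes an HMAC with a shared secret and rejects stale or mismatched requests.

    import hashlib
    import hmac
    import time

    SHARED_SECRET = b"server-side-secret"  # never exposed to the browser
    MAX_AGE_SECONDS = 120                  # the "given time frame"

    def is_valid_silent_post(user_id: str, timestamp: str, signature: str) -> bool:
        # Reject requests outside the allowed window.
        if abs(time.time() - int(timestamp)) > MAX_AGE_SECONDS:
            return False
        expected = hmac.new(
            SHARED_SECRET, f"{user_id}:{timestamp}".encode(), hashlib.sha256
        ).hexdigest()
        # Constant-time comparison to avoid timing attacks.
        return hmac.compare_digest(expected, signature)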
I'm trying to piece together the general workflow of giving a user push notifications via the service worker.
I have followed this Google Developers service worker push notifications tutorial and am currently thinking about how I can implement this sort of thing in a small user based web app for experimentation.
In my mind, the general workflow of a web app supporting push notifications is as follows:
Client visits app
Service worker yields a push notification endpoint
Client sends the endpoint to the server
Server associates the endpoint with the current user that the endpoint was generated for
Every time something happens that your app considers notification-worthy, the server grabs the push notification endpoint(s) associated with the user and hits them to send a push notification to any of the user's devices (possibly with a data payload in Chrome 50+, etc.)
Basically I just want to confirm that my general implementation thoughts with this technology are accurate, else get feedback if I am missing something.
You are pretty much bang on; there are some specifics that aren't quite right (but this is largely phrasing and may come down to personal taste).
Client visits app
Register a Service Worker that you want to use for push messaging
Use the service worker registration to subscribe the user to push messaging, at which point the user agent will configure an endpoint plus additional values for encrypting payloads (if the user agent supports it).
Client sends the endpoint to the server
Server stores the endpoint and data for later use (the server can associate the endpoint with the current user if the web app has user accounts).
Whenever the server wishes to send a notification to a user (or users), it grabs the appropriate endpoints and calls them; that will wake up the service worker, which can then display a notification.
Payload support is coming in Chrome 50+, and at the time of writing payloads are supported in Firefox, but there are 3 different versions of encryption used for the payloads in 3 different versions of Firefox, so I'd wait for the payload support story to be ironed out a little before using it / relying on it.
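For the server-side half of this flow, a minimal sketch in Python using the pywebpush library might look like the following. The in-memory store, user ids and key file are placeholders, and this assumes VAPID keys and library-handled payload encryption, which came after the caveat above was written.

    from pywebpush import webpush, WebPushException

    # Subscriptions stored exactly as sent by the client's PushSubscription.toJSON():
    # {"endpoint": ..., "keys": {"p256dh": ..., "auth": ...}}
    subscriptions_by_user = {}

    def save_subscription(user_id, subscription_info):
        subscriptions_by_user.setdefault(user_id, []).append(subscription_info)

    def notify_user(user_id, message):
        for sub in subscriptions_by_user.get(user_id, []):
            try:
                webpush(
                    subscription_info=sub,
                    data=message,
                    vapid_private_key="private_key.pem",
                    vapid_claims={"sub": "mailto:admin@example.com"},
                )
            except WebPushException as exc:
                # e.g. the subscription has expired; remove it in a real app.
                print(f"Push failed for {user_id}: {exc}")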