How to use async/await in FastAPI?

I have a route that accepts a request from the user and returns data to the user. These are the steps:
1. The user sends a request to my app.
2. Read data from the database.
3. Update the database.
4. Return data to the user.
I want steps 3 and 4 above to happen at the same time, so the user doesn't wait for the update operation. How can I do this?

The thing you might be looking for is called a background task, and luckily FastAPI has awesome documentation on how to implement it. Here you go!
But be careful: your title may imply something different from your real question. async/await will keep you from blocking your server, but it's background tasks that let your user wait less for their response. That said, if you're going to send a response without knowing the outcome of the update, remember to send a 202 (Accepted) status code.
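FastAPI's BackgroundTasks (background_tasks.add_task(...)) is the documented way to do this. As a rough stdlib-only sketch of the underlying fire-and-forget idea, with hypothetical read_data/update_db stand-ins for the real database calls:

```python
import asyncio

# Hypothetical stand-ins for the real database calls in the question.
async def read_data():
    return {"some": "data"}

async def update_db(events):
    await asyncio.sleep(0.05)              # simulate a slow write
    events.append("db updated")

async def handle_request(events):
    data = await read_data()               # step 2: read from the database
    # Step 3: schedule the update without awaiting it.
    # (In real code, keep a reference to the task so it isn't garbage-collected.)
    asyncio.create_task(update_db(events))
    events.append("response sent")
    return data                            # step 4: return immediately

async def demo():
    events = []
    await handle_request(events)
    at_response = list(events)
    await asyncio.sleep(0.1)               # give the background task time to finish
    return at_response, events

print(asyncio.run(demo()))
```

In a real FastAPI route you would instead declare a BackgroundTasks parameter and call add_task on it; FastAPI then runs the task after the response has been sent.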

Related

How to handle the case where the system crashed immediately after a payment is processed?

In the backend of my app, there is a route that first processes a payment and then writes to a collection in my MongoDB. So:
await call_payment_vendor_api();
// system fails here
await write_to_collection();
Now there is an inconsistency in the system. How do I recover from that?
Transactions across systems are often a bit tricky and there are multiple approaches to handle this. There is no silver bullet, though, which is why many so-called microservice systems end up as distributed monoliths.
The most straightforward way, if you're in a sync context (i.e. you need to answer a web request immediately): try/catch the exception from MongoDB, then refund the payment if the write fails.
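That compensation flow could be sketched like this (all functions are hypothetical stand-ins for the real vendor API and database calls):

```python
# Hypothetical stand-ins for the real payment vendor and database calls.
def call_payment_vendor_api(order):
    return {"payment_id": "pay_123", "order": order}

def write_to_collection(payment, fail=False):
    if fail:
        raise RuntimeError("database write failed")

refunds = []

def refund_payment(payment_id):
    refunds.append(payment_id)   # compensating action: undo the payment

def process_order(order, fail_write=False):
    payment = call_payment_vendor_api(order)
    try:
        write_to_collection(payment, fail=fail_write)
        return "ok"
    except Exception:
        # The write failed, so refund to keep the two systems consistent.
        refund_payment(payment["payment_id"])
        return "refunded"

print(process_order({"item": 1}))                   # "ok"
print(process_order({"item": 2}, fail_write=True))  # "refunded"
```

Note this only covers the case where the write throws; a hard process crash between the two calls still needs one of the other approaches below.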
For a better user experience, I'd try putting the write into a background job queue, which processes the pending update and maybe retries a couple of times before giving up and refunding. Or maybe it escalates to technical support, who can take a look and fix things through a back-office admin UI.
But again, in the context of a web request, you might have to poll the job status to update the website, or re-design the flow altogether.
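A toy sketch of that retry-then-refund job (not a real queue library; write and refund are hypothetical callables):

```python
# Toy sketch of a background job that retries the write and, if it keeps
# failing, falls back to refunding the payment. Not a real queue library.
def process_with_retries(write, refund, max_attempts=3):
    for _ in range(max_attempts):
        try:
            write()
            return "written"
        except Exception:
            continue   # a real queue would back off / persist between attempts
    refund()           # gave up: compensate by refunding the payment
    return "refunded"

refunded = []
print(process_with_retries(lambda: None, lambda: refunded.append(True)))   # "written"
print(process_with_retries(lambda: 1 / 0, lambda: refunded.append(True)))  # "refunded"
```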
Another possibility:
In MongoDB, create a transaction, first write your data, then call out to the payment provider. If they confirm, you'll only have to commit the transaction, which (usually) is much less likely to fail.
I have two ways to solve this.
The first method is to add exception handling in your code to save the error history. Then you can write an automated function that executes write_to_collection() for the affected records in that database.
try {
    await call_payment_vendor_api();
    // system fails here
    await write_to_collection();
} catch (err) {
    await save_error_history_to_database();
}
The second method is to make sure your vendor API has a payment reversal process. You can return the money to the specific client by calling the reversal API.
My solution: when your backend starts, check the payment vendor API for any payments that have been paid but not yet processed by your application.
A different approach would be to run multiple instances across multiple hosts (e.g. with load balancing) to ensure that an instance stays up.
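The startup reconciliation could be sketched roughly like this (the payment list and processed-id set are hypothetical stand-ins for the vendor API and your collection):

```python
def find_unprocessed(vendor_payments, processed_ids):
    # On startup, reconcile: anything the vendor charged that we never recorded
    # still needs write_to_collection() (or a refund).
    return [p for p in vendor_payments if p["id"] not in processed_ids]

vendor_payments = [{"id": "p1"}, {"id": "p2"}, {"id": "p3"}]
processed_ids = {"p1", "p3"}   # what our own collection knows about
print(find_unprocessed(vendor_payments, processed_ids))  # [{'id': 'p2'}]
```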

How to make an asynchronous http call from bot-composer?

I'm creating a bot in Bot Composer (v1.4.0) that involves an HTTP request. What I'm looking for is an asynchronous call, so that I don't have to wait for the response from the HTTP call while the user continues to chat with the bot.
Also, I want to prompt the user when the response comes.
Thank you in advance.
Proactive messages would be the best option. Unfortunately, they are not implemented in Composer at this time. You might be able to create a custom action to accomplish your goal (my assumption, without knowing more about your requirements).

How to order database calls in node.js?

I have a realtime app where users click a button at virtually the same time. It is a rideshare app: a ride shows up on all users' screens, and then two users will essentially simultaneously push the "accept ride" button. This creates problems because the first user is saved onto the ride via a database save call, but then the second user overwrites the first. Once a user accepts the ride, another user should not be able to accept it. The first user should simply "accept the ride", while the second driver should just be told "another driver is accepting the ride". The problem is that I can't even run a query to check whether the ride already has a driver, because this is all happening so quickly. The first user hits "accept ride" and is saved to the ride. The second user hits "accept ride", and the check for an existing driver runs before the first save has finished, so the second user is saved over the first. It is happening so nearly simultaneously that queries alone don't solve the problem.
Sorry if this is a confusing explanation. I have never had to deal with a realtime problem like this, so I am not sure where to start. I think I need to build some kind of queue that only lets this happen one at a time. Any direction on what to even google would be helpful. Thank you! My backend is written in Node.js, and I use MongoDB on Heroku.
You need an atomic check-and-set operation in your database, so that in one atomic database operation you can verify that the ride isn't already accepted and, if not, accept it. That will only allow one person to accept it; any others will fail because it's already accepted, and the API can feed that back to the user interface. The key word here is "atomic", and how you achieve it depends on the specific database. For MongoDB, see Mongo any way to do atomic check and set.
Here's another reference: Help writing an atomic update in mongodb.
These solutions use MongoDB's findAndModify, so presumably you would attempt to find a document that has this id and is also not accepted, and if found, modify it to be accepted. Then, since findAndModify is atomic, nobody else can get in between your find and your modify; when their findAndModify gets a turn, they won't find a document that both has the right id and is not accepted, because someone else accepted it before they got in.
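In pymongo, for instance, that single atomic step would be collection.find_one_and_update({"_id": ride_id, "driver": None}, {"$set": {"driver": driver_id}}). As a plain-Python illustration of the check-and-set semantics (RideStore is a toy in-memory stand-in, not MongoDB; the lock stands in for the server-side atomicity):

```python
import threading

class RideStore:
    """Toy in-memory stand-in. In MongoDB, one findOneAndModify /
    find_one_and_update call with filter {"_id": ride_id, "driver": None}
    plays the role of try_accept."""
    def __init__(self):
        self._lock = threading.Lock()
        self._rides = {}

    def add_ride(self, ride_id):
        self._rides[ride_id] = {"driver": None}

    def try_accept(self, ride_id, driver_id):
        # Atomic check-and-set: verify and claim in one critical section,
        # so no other request can sneak in between the check and the write.
        with self._lock:
            ride = self._rides[ride_id]
            if ride["driver"] is not None:
                return False           # someone already accepted this ride
            ride["driver"] = driver_id
            return True

store = RideStore()
store.add_ride("ride1")
print(store.try_accept("ride1", "driver_a"))  # True: first claim wins
print(store.try_accept("ride1", "driver_b"))  # False: already accepted
```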

Continuing a conversation after manually calling the signin helper

I'm a bit confused about how to handle the following scenario:
The user triggers the FooBarIntent whose fulfillment requires a linked account from a third party.
I manually call the signin helper from my fulfillment code.
The user authorizes my agent, Actions on Google sends the helper's response with the sign-in status to Dialogflow, where a SignIn intent picks it up and passes it to my fulfillment service.
Now how do I proceed with fulfilling the original FooBarIntent? I thought this would somehow be handled seamlessly, but the sign-in helper's response is an entirely new webhook request with no information about the original request. It seems that I could store that information in a context, but that seems rather clumsy. Am I missing something here, or do I really have to tell the user something like "thanks for logging in, now please ask your original question again"?
Saying "Now please ask your original question again" is certainly the wrong approach to take - you have that part correct.
You're also correct that there is no automatic re-triggering of the original Intent. While this seems odd, it is simply because Intents represent what the user has said - not what you're going to be replying with. Both the user's initial statement and their sign-in acknowledgement are separate things that the user has said, and you may wish to handle each differently.
As you suggest - one thing that makes sense to do is to respond to the initial thing they said before you got the results from the helper. In these cases, saving the Intent or Action name and parameters in a context when you request the helper can let you pick back up afterwards. (There are other possible behaviors, however, that could make sense. Consider, for example, if you request sign-in as part of the welcoming intent. Since the user never gets past this first step, you don't need to keep track of the state.)
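A toy sketch of that save-and-resume pattern (all names are hypothetical, not the Actions on Google SDK):

```python
# Toy sketch: stash the original intent and parameters in a context before
# the sign-in detour, then resume it when the SignIn intent fires.
def request_signin(session, intent, params):
    session["pending"] = {"intent": intent, "params": params}
    return "signin_helper"   # stand-in for actually invoking the helper

def handle_signin_result(session, status, handlers):
    pending = session.pop("pending", None)
    if status == "OK" and pending:
        # Pick the conversation back up where it left off.
        return handlers[pending["intent"]](pending["params"])
    return "Thanks anyway!"

handlers = {"FooBarIntent": lambda p: f"Here is your {p['thing']}"}
session = {}
request_signin(session, "FooBarIntent", {"thing": "report"})
print(handle_signin_result(session, "OK", handlers))  # "Here is your report"
```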
This pattern of saving the state when you take a detour to get the sign-in is one that is directly supported by the multivocal library, for example. With multivocal, you specify the requirements necessary before an Intent or Action handler is triggered (such as requiring the user to be authenticated). It takes care of meeting those requirements and then making sure the conversation continues where you left off to take the detour.

Caching response for API.ai Node.js webhook

I have a webhook written in Node.js for API.ai that interacts with multiple APIs to gather information and give a response to the user.
Since I am interacting with multiple APIs, the response time is more than 5 seconds, which causes the API.ai request to time out.
To overcome this, I am trying to implement caching in the Node.js webhook, which saves the responses from the APIs for a certain amount of time. This will avoid the timeout until the max-age header time is reached.
Edit: What is the best Node module I can use to cache the API responses for subsequent requests?
Note: I am using the request Node module for HTTP requests, but it doesn't seem to provide a way to cache the response.
All of the answers given are reasonable for tackling the cache problem on the request side. But since you specified API.AI and Actions, you might also be able to, or need to, store information while the conversation is in progress. You can do this using an API.AI context.
It may even be that if you limit it to just one remote call for each response from the user, you can fit within the timeframe.
For example, if you were having a conversation about movie times and ticket ordering, the conversation may go something like:
User: "I want to see a movie."
[You use an API to lookup the nearest theater, store the theater's location in a context and reply] "Your nearest theater is the Mall Megaplex. Are you interested in one there?"
User: "Sure"
[You now already have the theater, so you query for what it is playing with another API call and store it in a context] "There are seven different movies playing, including Star Wars and Jaws. Do those sound interesting?"
User: "No"
[You already have the data in the context, so you don't need another call.] "How about Rocky or..."
In this way you're making the same number of calls (generally), but storing the user's results in the session as you go, as opposed to collecting all the information for the user, or all the possible results, up front and then narrowing them.
I finally decided to use the module below:
https://www.npmjs.com/package/memory-cache
This served my scenario well. I might try Redis when I get some time.
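memory-cache stores values with an optional expiry; the same idea can be sketched in a few lines (a pure-Python stand-in, not the npm module):

```python
import time

class TTLCache:
    # Minimal stand-in for a TTL cache like the memory-cache npm module.
    def __init__(self):
        self._store = {}

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]       # expired: drop the entry and miss
            return None
        return value

cache = TTLCache()
cache.put("weather", {"temp": 21}, ttl_seconds=0.1)
print(cache.get("weather"))   # {'temp': 21}
time.sleep(0.15)
print(cache.get("weather"))   # None: entry expired
```

The webhook would consult the cache before making each remote API call and store fresh responses with a TTL matching the upstream max-age.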
