Asynchronous response to Task module fetch - Azure

I'm wondering if there is a way to respond asynchronously to MS Teams Task module fetch events. By asynchronous I mean that we would lose the original context of the request, because we send the original request to another service: one service receives the requests and another actually processes the events.
I tried to build a new context using TurnContext.getConversationReference along with TurnContext.sendActivity. While this successfully sent the "continue" task module body using the original turnContext, it didn't work with the new context that I created from the conversation reference.
// Service A - simply acks the request, then formats and enqueues it to a queue
const conversationReference = TurnContext.getConversationReference(context.activity);
// send this conversationReference as part of the payload to another service

// Service B - dequeues from the queue and processes the request
await botFrameworkAdapter.continueConversation(conversationReference, async (newContext) => {
  const response = await newContext.sendActivity({
    type: "invokeResponse",
    value: { status: 200, body: taskCardResponse },
  });
});
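For completeness, Service A can serialize the reference straight into the queue message; this is only a sketch, assuming an Azure Storage queue, and the queue name and message shape are placeholders, not my real code:

const { QueueServiceClient } = require('@azure/storage-queue');

// hypothetical transport between the two services
const queueClient = QueueServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getQueueClient('teams-requests');

// the conversation reference is a plain object, so it survives JSON serialization
const message = {
  conversationReference,
  taskRequest: context.activity.value, // the original task/fetch payload
};
await queueClient.sendMessage(JSON.stringify(message));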
The task module is launched when a user clicks on a messaging extension; when this happens, a messaging extension task fetch is triggered. The backend then returns a form in a task module for the user to fill out and submit.
This is the original implementation; in the new approach, we can't simply return the form to the modal because we don't have access to the original request in Service B.
[Diagram of current vs. future interaction between services]

Per the discussion in the comments above, the code in the bot that launches the message extension can simply pass, via querystring, anything that is needed to the actual web content page being launched, and that page can do whatever is necessary with what it receives.
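A minimal sketch of that idea: the handler below follows the Bot Framework TeamsActivityHandler signature, while enqueueForServiceB, the URL, and the requestId parameter are hypothetical:

// inside a TeamsActivityHandler subclass (Bot Framework SDK v4)
async handleTeamsTaskModuleFetch(context, taskModuleRequest) {
  const requestId = await enqueueForServiceB(context.activity); // hypothetical helper
  return {
    task: {
      type: 'continue',
      value: {
        title: 'Details',
        // the page reads requestId from the querystring and renders whatever Service B produces
        url: `https://example.com/task?requestId=${requestId}`,
        height: 'medium',
        width: 'medium',
      },
    },
  };
}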


Fetch event never gets triggered in Chrome extension service worker. Is this expected?

What I want to do
I'm trying to intercept a third-party website's fetch events and modify the request bodies in a Chrome extension. Modifying the request body is not allowed in the chrome.webRequest.onBeforeRequest event handler. But regular service workers do have the ability to listen for fetch events and respond to the request with my own response, which means I should be able to intercept the request, modify the body, send the modified request to the API, and then respond to the original request with the modified request's response.
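For reference, this is the kind of interception that works in a regular page service worker and that I expected to be able to do here (the body transformation is just a placeholder):

self.addEventListener("fetch", (event) => {
  event.respondWith((async () => {
    const original = event.request;
    const body = await original.clone().text();
    // rewrite the body, then replay the request and return its response
    const modified = new Request(original.url, {
      method: original.method,
      headers: original.headers,
      body: body.replace("foo", "bar"), // placeholder transformation
    });
    return fetch(modified);
  })());
});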
The problem
It looks like neither of these event handlers ever gets triggered, despite plenty of fetch events being triggered by the website, as I can see in the Network panel.
// background.js
self.onfetch = (event) => console.log(event.request); // never shows
// or
self.addEventListener("fetch", (event) => {
  console.log(event.request); // never shows
});
I can verify that the service worker is running because other console.logs appear in the service worker console, both top-level logs and logs triggered by the "install" event:
// background.js
console.log(self); // works
self.addEventListener("install", (event) => {
  console.log(event); // works
});
Hypothesis
Do the fetch event handlers not get triggered because extension service workers are not allowed access to them for security reasons? That would make sense; I just haven't seen this documented anywhere explicitly, so it would be good to know whether this is indeed a platform limitation or I'm doing something wrong.
Alternate solutions?
If this is indeed a limitation of the extensions platform, is there any other way I can use a Chrome extension to modify request bodies on a third-party website?

How does users.watch (in the Gmail Google API) listen for notifications?

I am confused about how the watch feature in the Gmail API should be implemented to receive push notifications inside a Node.js script. Should I call the method inside an infinite loop or something, so that it doesn't stop listening for notifications once the call is made?
Here's the sample code that I've written in Node.js:
const getEmailNotification = async () => {
  try {
    const auth = await authenticate();
    const gmail = google.gmail({version: 'v1', auth});
    // stop any previous watch before registering a new one
    await gmail.users.stop({
      userId: '<email id>'
    });
    const watchResponse = await gmail.users.watch({
      userId: '<email id>',
      labelIds: ['INBOX'],
      topicName: 'projects/<projectName>/topics/<topicName>'
    });
    return watchResponse;
  } catch (err) {
    throw new Error(`Some error occurred`);
  }
};
Thank you!
Summary
To receive push notifications through Pub/Sub you need to create a webhook to receive them. What does this mean? You need a web application, or any kind of service, that exposes a URL where notifications can be received.
As stated in the Push subscription documentation:
The Pub/Sub server sends each message as an HTTPS request to the subscriber application at a pre-configured endpoint.
The endpoint acknowledges the message by returning an HTTP success status code. A non-success response indicates that the message should be resent.
Setting up a channel to watch notifications can be summarized in the following steps (the documentation you refer to lists them):
Select/create a project within the Google Cloud Console.
Create a new Pub/Sub topic.
Create a subscription (push) for that topic.
Add the necessary permissions; in this case, add gmail-api-push@system.gserviceaccount.com as a Pub/Sub Publisher.
Indicate which types of mail you want it to listen for via the Users.watch() method (which is what you are doing in your script).
Example
I'll give you an example using Apps Script (it is an easy way to visualize it, but this could be achieved from any kind of web application; since you are using Node.js, I suppose you are familiar with Express.js or related frameworks).
First I created a new Google Apps Script project; this will be my webhook. Basically I want it to log every HTTP POST request it receives in a Google Doc that I previously created. For this I use doPost(), the Apps Script equivalent of app.post() in Express. (If you want to know more about how Apps Script works you can check its documentation, but this is not the main topic.)
Code.gs
const doPost = (e) => {
  const doc = DocumentApp.openById('<DOC_ID>')
  doc.getBody().appendParagraph(JSON.stringify(e, null, 2))
}
Later I made a new deployment as a Web App, set it to be accessible by anyone, and wrote down the URL for later. This is similar to deploying your Node.js application to the internet.
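For the Node.js side, a minimal Express webhook would play the same role; this is only a sketch, and the route path and port are assumptions:

const express = require('express');
const app = express();
app.use(express.json());

// Pub/Sub push endpoint: each notification arrives as an HTTPS POST with a JSON envelope
app.post('/gmail-webhook', (req, res) => {
  // message.data is base64-encoded JSON: { emailAddress, historyId }
  const data = JSON.parse(
    Buffer.from(req.body.message.data, 'base64').toString('utf8')
  );
  console.log(data.emailAddress, data.historyId);
  res.sendStatus(204); // a success status acknowledges the message so Pub/Sub does not resend it
});

app.listen(3000);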
I select a project in the Cloud Console, as indicated in the Prerequisites of Cloud Pub/Sub.
Inside this project, I create a new topic that I call GmailAPIPush. Then I click Add Principal (in the right bar of the Topics section) and add gmail-api-push@system.gserviceaccount.com with the Pub/Sub Publisher role. This is a requirement that grants Gmail the privilege to publish notifications.
In the same project, I create a Subscription. I tell it to be of the Push type and add the URL of the Web App that I have previously created.
This is the most critical part, and it determines how your application will work. If you want to know which type of subscription best suits your needs (push or pull), there is detailed documentation that will help you choose between these two types.
Finally we are left with the simplest part, configuring the Gmail account to send updates on the mailbox. I am going to do this from Apps Script, but it is exactly the same as with Node.
const watchUserGmail = () => {
  const request = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/my_project_name/topics/GmailAPIPush'
  }
  Gmail.Users.watch(request, 'me')
}
Once the function is executed, I send a test message, and voila, the notification appears in my document.
Returning to the case you describe, I am going to try to explain it with a metaphor. Imagine you have a mailbox and you are waiting for a very important letter. Because you are nervous, you check every 5 minutes whether the letter has arrived (similar to the polling you propose); most of the times you check your mailbox, there is nothing new. However, if you train your dog to bark (a push notification) every time the mailman comes, you only go to check your mailbox when you know you have new letters.

Dialogflow fulfillment with Please wait functionality

I am working on a custom event integration that calls three events one by one. Because I am using a third-party API that takes more than 11 seconds, I need functionality where the user receives a message from the first intent saying "Please wait", while the second and third intents wait for the API response; as soon as we get the response from our API, we return it to the Dialogflow agent.
In the code below, I want to send the "Please wait" message from the Demo intent; then the other two custom events will call the intents (followOne, followUpTwo), which will process for a specific time, after which one more message with the actual API response will be sent.
async function Demo(agent){
  // here we call our API, wait for the response, and store that response in the Redis server
  await customsleep(1500);
  agent.add('Please wait');
  agent.setFollowupEvent('followUpOne');
}
async function followOne(agent){
  await customsleep(4500);
  agent.setFollowupEvent('followUpTwo');
}
async function followUpTwo(agent){
  // this intent reads the response that the Demo intent stored in the Redis server and returns it to the user
  await customsleep(4700);
  agent.add('here we go');
}
Is there any way to implement this kind of functionality, where we can send a "Please wait" message and then, once we get the response from the API, return it to the Dialogflow agent?
To my knowledge you can't send any message while following up an event (it will be ignored).
If you want to provide a message you should:
Step 1:
- call the third-party API
- send a message with a quick reply: "Please wait" + qr: "Click here to display the answer"
Step 2:
- check whether the third-party API has answered
- yes: return the answer
- no: follow up a waiting event
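A rough sketch of this two-step pattern, assuming a dialogflow-fulfillment WebhookClient agent; startThirdPartyCall and the redis client are hypothetical helpers, not part of the library:

const { Suggestion } = require('dialogflow-fulfillment');

// Step 1: kick off the slow API call, then answer immediately with a quick reply
async function demo(agent) {
  startThirdPartyCall(agent.session); // hypothetical: writes its result to Redis when done
  agent.add('Please wait');
  agent.add(new Suggestion('Click here to display the answer'));
}

// Step 2: runs when the user taps the quick reply
async function showAnswer(agent) {
  const answer = await redis.get(agent.session); // hypothetical Redis lookup, keyed by session
  if (answer) {
    agent.add(answer);
  } else {
    agent.add('Still working on it, please wait a little longer');
    agent.add(new Suggestion('Check again'));
  }
}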

How does NodeJS process multiple GET requests from different users/browsers?

I'd like to know how NodeJS processes multiple GET requests from different users/browsers when the handler relies on emitted events to return the results. I'd like to think of it as if, each time a user executes the GET request, a new session is started for that user.
For example, if I have this GET request:
var tester = require('./tester-class');

app.get('/triggerEv', async function(req, res, next) {
  // Start the data processing
  tester.startProcessing('some-data');
  // tester has event emitters that are triggered when processing is complete (success or fail)
  tester.on('success', function(data) {
    return res.send('success');
  });
  tester.on('fail', function(data) {
    return res.send('fail');
  });
});
What I'm thinking is that if I open a browser and run this GET request, passing some-data, processing starts. If I then open another browser and execute this GET request with different data (to simulate multiple users accessing it at the same time), it will overwrite the previous startProcessing call and rerun it with the new data.
So if multiple users execute this GET request at the same time, will it handle each user separately, as if each had a different, independent session, and return the response for each user's session? Or will it behave as I described above (in which case I will have to somehow manage different sessions for each user that triggers this GET request)?
I want to make it so that each user that executes this GET request doesn't interfere with other users that also execute it at the same time, and the correct response is returned to each user based on their own data sent to the startProcessing function.
Thanks, I hope I'm making sense. Will clarify if not.
If you're sharing the global tester object among different requests, then the second request will interfere with the first. Since all incoming requests use the same global environment in node.js, the usual model is that any request that may be "in flight" for a while creates its own resources and keeps them for itself. Then, if some other request arrives while the first one is still waiting for something to complete, it will also create its own resources and the two will not conflict.
The server environment does not have a concept of "sessions" in the way you're using the term. There is no separate server-session or server state that each request lives in other than the request and response objects that are created for each incoming request. This is not like PHP - there is not a whole new interpreter state for each request.
I want to make it so that each user that executes this GET request doesn't interfere with other users that also execute it at the same time, and the correct response is returned to each user based on their own data sent to the startProcessing function.
Then, don't share any resources between requests and don't use any objects that have global state. I don't know what your tester is, but one way to keep multiple requests separate from each other is to make a new tester object for each request so that each request can use its own to its heart's content without any conflict.
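A minimal sketch of that per-request approach, assuming tester-class exports an EventEmitter-based constructor (the export shape is an assumption about your module):

var Tester = require('./tester-class'); // assuming it exports a constructor

app.get('/triggerEv', function(req, res, next) {
  var tester = new Tester(); // fresh instance per request: no shared state
  tester.startProcessing('some-data');
  // once() guards against sending a second response if an event fires twice
  tester.once('success', function(data) {
    res.send('success');
  });
  tester.once('fail', function(data) {
    res.send('fail');
  });
});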

Using an Azure Web Function to receive a Twilio SMS Message

I'm trying to get an Azure Web Function to receive a Twilio SMS message - and failing!
I've created a Web Function to successfully send SMS messages - now I want to listen and react to responses.
I've set up a web function as per the below. It's pretty simple at the moment and is supposed to parrot back the original message:
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    var data = await req.Content.ReadAsStringAsync();
    var formValues = data.Split('&')
        .Select(value => value.Split('='))
        .ToDictionary(pair => Uri.UnescapeDataString(pair[0]).Replace("+", " "),
                      pair => Uri.UnescapeDataString(pair[1]).Replace("+", " "));

    // Perform calculations, API lookups, etc. here

    var response = new MessagingResponse()
        .Message($"You said: {formValues["Body"]}");

    var twiml = response.ToString();
    twiml = twiml.Replace("utf-16", "utf-8");

    return new HttpResponseMessage
    {
        Content = new StringContent(twiml, Encoding.UTF8, "application/xml")
    };
}
In Twilio, I've configured the phone number to use webhooks.
I've deployed the Web Function; however, when I test it by sending a message, I get the following error in the Twilio logs:
11200 There was a failure attempting to retrieve the contents of this URL
Msg Unsupported Media Type
Message: The WebHook request must contain an entity body formatted as JSON.
Does anyone have any experience in how to fix this error?
I just got this working with Twilio's SMS services. In the Azure Portal, go to the function, then go to Integrate and change the mode to Standard. This forces the Azure function to return a normal HTTP response, and you can control the content type. If you use application/xml it will work fine.
Okay, the current solution to this is.... it can't be done in Azure Web Functions. An Azure Web Function expects a JSON payload; Twilio webhooks post form-encoded key/value pairs. So, the web function will reject the webhook call.
The best/easiest approach is to use a WebAPI or MVC controller as per the Twilio example. I tried a sample version and had my Twilio webhooks replying to an SMS in about 15 minutes from start to finish.
To debug, I'd use a tool such as Postman or Fiddler to manually replay (or create from scratch) a request identical to what you're expecting from Twilio. You can then see what kind of response you get without having to rely solely on Twilio's error message.
From the error code, I'd imagine that the problem is one of the following:
Your URL set up in the Twilio number configuration isn't actually reaching your function.
Your function is taking too long to respond. Twilio will throw 11200 if it doesn't get a response in a certain time.
Your response is formatted incorrectly. This is where the aforementioned strategy can help you diagnose the issue.
