Microsoft Graph batch request's nextLink - node.js

I'm currently implementing a sync queue service to sync a webapp's customers to Outlook's contacts.
I'm using the Graph API for the job. The creation and updating of contacts is done using Graph's batch request.
There's a part in the docs about the response that I don't fully understand and have pretty much ignored. I just want to make sure my implementation is correct.
In addition to the responses property, there might be a nextLink
property in the batch response. This allows Microsoft Graph to return
a batch response as soon as any of the individual requests has
completed. To ensure that all individual responses have been received,
continue to follow the nextLink as long as it exists.
I was wondering about the following:
When does nextLink show up? I've tried sending different requests but never received it. It's not really clear from the docs, but my guess is that it appears when, for some reason, some of the requests in the batch did not complete in time?
Would the pending requests show up as errors in the response or would they just be missing from it?
Will the nextLink be in the form of @odata.nextLink, like in pagination requests? The docs don't specify.
How should I handle it when/if it does appear? Can I safely ignore it and just count on the next invocation of the service (every 15 minutes) to retry and sync the pending requests?

The paging mechanism mostly applies when you are querying Graph for data.
The nextLink shows up if whatever query you had as part of one of your batch requests requires pagination (just as if you ran the request directly). For example, this request as part of your batch job would cause one to appear, provided the targeted user has more than 10 folders:
{
  "id": "1",
  "method": "GET",
  "url": "users/user@domain.tld/mailFolders"
}
The response shows up as normal (with the first page of data included in the response body, along with the nextLink to get to the next page).
Correct. In the above example, the nextLink shows up like this:
"#odata.nextLink":"https://graph.microsoft.com/beta/users/user#domain.tld/mailFolders?$skip=10
You will need to follow the nextLink to get the rest of the data.
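If it does appear and you want to drain it from your Node.js service rather than waiting for the next sync run, a minimal sketch could look like this (node-fetch and the accessToken variable are assumptions here, not part of the batch API itself):
const fetch = require('node-fetch');

// Follow @odata.nextLink on an individual batch response body until all
// pages have been collected.
async function collectAllPages(firstBody, accessToken) {
  let page = firstBody;
  const items = [...(page.value || [])];
  while (page['@odata.nextLink']) {
    const res = await fetch(page['@odata.nextLink'], {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    page = await res.json();
    items.push(...(page.value || []));
  }
  return items;
}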

Related

Acumatica REST API - Response body is empty on successful 200 calls. This only occurs intermittently

Once in a while, my API calls to Acumatica will not return a response body, but still return a 200. The same call will be successful after a few minutes.
My guess is that I might be hitting limits on the number of API requests per minute, or on concurrent requests. It's hard to tell, since the requests still come back as 200 and there are no response headers that indicate an error.
Has anyone else experienced this?
I think you're on the right track regarding reaching a limit. I have experienced delayed requests, but not declined requests, from the API. There is definitely a limit based on the license of the Acumatica site. You should be able to take a look at the License Monitoring Console screen in Acumatica to see if there are any API violations: the number of transactions per minute, delayed transactions, and declined transactions are all logged and shown in the statistics on this page.

How to send a different message to multiple users by hitting a single FCM API call?

I have a use case where I want to send a notification using FCM to multiple users (say 1000) every minute.
Below are some conditions that I also need to take care of:
The users will be different every minute based on some conditions, so I can't create a group or topic using FCM.
Every user will receive a different message.
I don't want to hit FCM's endpoint 1000 times every single minute.
Please help here.
You can send a batch of (up to 500) messages, where each message has its own payload and audience (such as a topic or device tokens). So for 1000 unique tokens, you'd in that case only hit the API endpoint twice.
The Admin SDK (which is available for Node.js) implements a simple method call for this, but I also recommend having a look at the REST API example on the page I linked, as I found it interesting to see how this is implemented behind the scenes.
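As a rough sketch of what that looks like with the firebase-admin Node.js SDK (the entries shape and default credentials are assumptions; newer SDK versions expose sendEach, older ones call it sendAll):
const admin = require('firebase-admin');
admin.initializeApp(); // assumes default credentials are configured

// entries: [{ token: '...', title: '...', body: '...' }, ...] with at most
// 500 items per call, so 1000 unique tokens means two calls.
async function sendPersonalizedBatch(entries) {
  const messages = entries.map(e => ({
    token: e.token,
    notification: { title: e.title, body: e.body },
  }));
  const result = await admin.messaging().sendEach(messages);
  console.log(`${result.successCount} sent, ${result.failureCount} failed`);
}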

Does the DocuSign Intermediate API plan let me use the API to get PDF and form fields?

I tried calling DocuSign sales and support (transferred around 3 times) and no one could give me a straight answer on this. Their "support" actually told me to try Stack Overflow, so here I am...
I'm looking at their API pricing levels here: https://www.docusign.com/products-and-pricing/api-plans-b
If I have the Intermediate API, can I make the following API requests?
GET /restapi/v2.1/accounts/{accountId}/envelopes/{envelopeId}/documents/{documentId}
GET /restapi/v2.1/accounts/{accountId}/envelopes/{envelopeId}/form_data
The part that's throwing me for a loop is the DocuSign Connect feature in the Advanced API plan. The description of it is:
The DocuSign Connect module lets you configure webhooks for envelope events and recipient actions for some or all users in your account. It can also be used to transport the completed documents back to your app or website and to retrieve any form field data entered by your recipients.
I don't need the webhooks, but I need to be able to get the completed documents as PDFs and get the form field data. Do I really need the DocuSign Connect feature for that?
You will be fine with the intermediate plan. Here is the basic distinction between polling and Connect: with Connect, we will proactively notify YOU when key envelope events occur.
Otherwise, it's up to you to call GET /envelopes and/or GET /form_data to retrieve that information. Be wary of the resource limits when you poll.
As a quick aside, instead of making two requests to retrieve that information, just make one: GET /envelopes?include=recipients,tabs. This will provide you all the information you seek in one request.
The important excerpt from that guide:
You may not exceed one GET request per unique envelope endpoint per 15 minutes. If you exceed this limit the request will not fail, but it will be flagged as a violation of rate limits, which can cause your app to fail the go-live review.
For example, the following transactions violate API rules due to the repeated GET requests to the first document and second recipient:
[12:00:00] POST /accounts/12345/envelopes
[12:01:00] GET /accounts/12345/envelopes/AAA/documents/1
[12:02:00] GET /accounts/12345/envelopes/AAA/recipients/2
[12:03:00] POST /accounts/12345/envelopes
[12:04:00] GET /accounts/12345/envelopes/AAA/documents/1 *
[12:05:00] GET /accounts/12345/envelopes/AAA/recipients/2 *
However, the following set of requests comply with API rules and limits and would not be flagged by the platform:
[12:00:00] POST /accounts/12345/envelopes
[12:01:00] GET /accounts/12345/envelopes/AAA
[12:16:00] GET /accounts/12345/envelopes/AAA
[12:17:00] GET /accounts/12345/envelopes/AAA/documents/1
[12:32:00] GET /accounts/12345/envelopes/AAA/documents/1
[12:40:00] PUT /accounts/12345/envelopes/AAA/recipients/1
[12:41:00] PUT /accounts/12345/envelopes/AAA/recipients/1
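For completeness, a minimal sketch of the combined GET mentioned above from Node.js (node-fetch, and the baseUri/accessToken coming from your OAuth flow, are assumptions; the DocuSign Node SDK would work just as well):
const fetch = require('node-fetch');

// Fetch envelope details, recipients and tab (form field) data in one call.
async function getEnvelopeDetails(baseUri, accountId, envelopeId, accessToken) {
  const url = `${baseUri}/restapi/v2.1/accounts/${accountId}/envelopes/${envelopeId}` +
    '?include=recipients,tabs';
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return res.json();
}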

Caching response for API.ai Node.js webhook

I have a webhook designed in Node.js for API.ai that interacts with multiple APIs to gather information and respond to the user.
Since I am interacting with multiple APIs, the total response time is more than 5 seconds, which causes the API.ai request to time out.
To overcome this, I am trying to implement caching in the Node.js webhook, which saves the responses from the APIs for a certain amount of time. This avoids the timeout until the max-age header time is reached.
Edit: What is the best node module that I can use to cache the API responses for subsequent requests?
Note: I am using the request node module for HTTP requests, but it doesn't seem to provide a way to cache the response.
All of the answers given are reasonable for tackling the cache problem on the request side. But since you specified API.AI and Actions, you might also be able to, or need to, store information while the conversation is in progress. You can do this using an API.AI context.
It may even be that if you limit it to just one remote call for each response from the user, you might be able to fit it in the timeframe.
For example, if you were having a conversation about movie times and ticket ordering, the conversation may go something like:
User: "I want to see a movie."
[You use an API to look up the nearest theater, store the theater's location in a context, and reply] "Your nearest theater is the Mall Megaplex. Are you interested in one there?"
User: "Sure"
[You now already have the theater, so you query for what it is playing with another API call and store it in a context] "There are seven different movies playing, including Star Wars and Jaws. Do those sound interesting?"
User: "No"
[You already have the data in the context, so you don't need another call.] "How about Rocky or..."
In this way you're making the same number of calls (generally), but storing the user's results in the session as you go as opposed to collecting all the information for the user, or all the possible results, and then narrowing them.
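To make the idea concrete, here is a sketch of an API.AI (v1) webhook reply that stores the looked-up theater in a context (the context name, lifespan, and parameter names are illustrative, and res is assumed to be an Express response):
// Reply for the first turn: answer the user and stash the theater lookup in
// a context so later turns can reuse it without another API call.
res.json({
  speech: 'Your nearest theater is the Mall Megaplex. Are you interested in one there?',
  displayText: 'Your nearest theater is the Mall Megaplex. Are you interested in one there?',
  contextOut: [
    { name: 'theater', lifespan: 5, parameters: { theaterName: 'Mall Megaplex' } },
  ],
});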
Finally decided to use the below module:
https://www.npmjs.com/package/memory-cache
This served my scenario better. Might try using Redis soon when I get some time.
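For reference, a minimal sketch of wrapping the outgoing calls with memory-cache (request-promise-native and the 5-minute TTL are assumptions, not requirements of the module):
const cache = require('memory-cache');
const rp = require('request-promise-native');

// Return a cached body when available; otherwise call the API and cache it.
async function cachedGet(url) {
  const hit = cache.get(url);
  if (hit) return hit;

  const body = await rp({ uri: url, json: true });
  cache.put(url, body, 5 * 60 * 1000); // keep for 5 minutes
  return body;
}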

Determining frame URL for outgoing requests, using WebRequest API

I'm using the WebRequest API to modify requests that get sent by Chrome. In order to know how to rewrite a request, I would like to know the URL of the frame that caused the outgoing request. I see I can get frame IDs and tab IDs, with which I could send a message to the content script to find out the URI. But since messaging is always asynchronous, there seems to be no way to ensure that I get that information before the request gets sent.
This is for a testing tool, not something for regular users, so I wouldn't mind incurring some added latency. Does anyone know if there is another way to do this? I tried using setTimeout but it's blocked by the content security policy. Using the referrer doesn't quite cut it because it's not set on HTTP requests coming from an HTTPS frame.
I am not sure if I fully understand what you are trying to accomplish, but here is what I think.
Scenario
The main frame is google, the sub-frames are facebook and twitter, and you want to modify any requests from the facebook frame.
If that's the case, then here is what I would try:
1. Register an onCompleted event listener, which will be used to retrieve info about completed requests (i.e. URL and frame ID) and store it in an array.
2. Register an onBeforeSendHeaders event listener, which will be used to retrieve the request info and compare it against what you stored in the previous step; if it matches, you can modify the headers.
So the code will go like this:
onCompleted ({store the info -i.e. url and frame id- in an array}, ...)
onBeforeSendHeaders ({compare the frame id that made the request with the one stored before; if they match, modify the headers}, ...)
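A slightly fuller sketch of that idea as a Manifest V2 background script (the webRequest, webRequestBlocking and host permissions in the manifest, and the facebook.com check, are assumptions for illustration):
// Map tabId:frameId -> frame URL, filled in as frame documents finish loading.
const frameUrls = {};

chrome.webRequest.onCompleted.addListener(details => {
  // A completed (sub)frame load tells us which URL now lives in that frame.
  frameUrls[details.tabId + ':' + details.frameId] = details.url;
}, { urls: ['<all_urls>'], types: ['main_frame', 'sub_frame'] });

chrome.webRequest.onBeforeSendHeaders.addListener(details => {
  const frameUrl = frameUrls[details.tabId + ':' + details.frameId];
  if (frameUrl && frameUrl.indexOf('facebook.com') !== -1) {
    // Example modification: tag requests coming from that frame.
    details.requestHeaders.push({ name: 'X-From-Frame', value: frameUrl });
  }
  return { requestHeaders: details.requestHeaders };
}, { urls: ['<all_urls>'] }, ['blocking', 'requestHeaders']);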
Difference between your approach and the one I listed
In your approach you used asynchronous messages to retrieve info about the frame after the requests get sent; in my approach you will have that info ready for you, with no need for any further messaging, so whenever a request happens you can use it immediately.
Hope you will find this helpful, good luck.
