Pulling more than 10 records from the proof of delivery API in NetSuite

Is there any way to pull more than 10 records from the WorkWave POD API?
Whenever I call the WorkWave API through a Map/Reduce script, it gives me an error message telling me to slow down. Has anyone run into this, and how did they manage to work around it?
Thanks

If you're using the List Orders or Get Orders API, there is a throttling limit - "Leaky bucket (size: 10, refill: 1 per minute)". However, both of those APIs allow retrieving multiple orders in a single call. My suggestion would be to restructure your script so that instead of making the call to WorkWave in the Reduce stage for a single order, you make it in the Get Input Data stage for all the orders you want to operate on, map the relevant data to the corresponding NetSuite data in the Map stage, and pass it through to the Reduce stage.
In other words, you make one call listing multiple order ids rather than multiple calls listing one order id.
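As a rough sketch of that structure in a SuiteScript 2.1 Map/Reduce - the WorkWave endpoint, `ids` query parameter, and bearer token below are placeholders I've made up (check the WorkWave POD docs for the real ones); the point is only the stage layout, which turns N throttled calls into one:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/https'], (https) => {
    // Hypothetical endpoint and auth -- take these from the WorkWave POD docs.
    const BASE_URL = 'https://api.example-workwave.com/v1/orders';

    const getInputData = () => {
        // One batched call for every order id, instead of one call per order.
        const orderIds = ['1001', '1002', '1003']; // e.g. collected via a saved search
        const response = https.get({
            url: BASE_URL + '?ids=' + orderIds.join(','), // query param name is an assumption
            headers: { Authorization: 'Bearer <token>' }
        });
        // Returning an array yields one map invocation per element.
        return JSON.parse(response.body).orders;
    };

    const map = (context) => {
        const order = JSON.parse(context.value);
        // Pair the WorkWave order with the NetSuite data it should update.
        context.write({ key: order.id, value: order });
    };

    const reduce = (context) => {
        // Update the corresponding NetSuite record here --
        // no WorkWave call is needed at this stage.
    };

    return { getInputData, map, reduce };
});
```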

Related

Events in Azure Search

Is there a way to attach webhooks or get events from Azure Search?
Specifically we are looking for way to get notified (programmatically) when an indexer completes indexing an index.
Currently, there are no such events. However, you can implement functionality like this yourself. There are several scenarios to consider. Basically, you have two main approaches to adding content. Either define a content source and use pull or use the API to push content to the index.
The simplest scenario would be when you are using push via the API to add a single item. You could create a wrapper method that both submits your item and then queries the index until that item is found. Your wrapper method would need to either call a callback or fire an event. To support updates on an item you would need a marker on the item, like a timestamp property that indicates the time when the item was submitted to the index. Or a version number or something that allows you to distinguish the new item from the old.
A more complex scenario is when you handle batches or large volumes of content. Assuming you start from scratch and your corpus is 100,000 items, you could query until the count matches 100,000 items before you fire your event. To handle updates, the best approach is to use some marker. E.g. you submit a batch of 100 updates at 2020-08-18 09:58. You could then query the index, filtering by items that are updated after the timestamp at which you submitted your content. Once the count from your query matches 100, you can fire your event.
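A minimal polling sketch of that batch scenario against the search REST API - the service name, index name, key, and the `submittedAt` marker field are assumptions for illustration; `$count=true` plus a `$filter` is the documented way to get a match count back:

```javascript
// Poll until all items of a batch (marked with a submission timestamp)
// are searchable, then resolve so the caller can fire its event.
const SERVICE = 'https://my-service.search.windows.net'; // assumption
const INDEX = 'my-index';                                // assumption
const API_KEY = '<query-key>';

async function waitForBatch(submittedAt, expectedCount, intervalMs = 5000) {
  const filter = encodeURIComponent(`submittedAt ge ${submittedAt}`);
  const url = `${SERVICE}/indexes/${INDEX}/docs?api-version=2020-06-30` +
              `&$count=true&$filter=${filter}`;
  for (;;) {
    const res = await fetch(url, { headers: { 'api-key': API_KEY } });
    const body = await res.json();
    if (body['@odata.count'] >= expectedCount) return; // batch is now searchable
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait, retry
  }
}

// e.g. waitForBatch('2020-08-18T09:58:00Z', 100).then(fireEvent);
```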
You would also need to handle indexing errors or exceptions when submitting content in these scenarios.
For pull-scenarios your best option is to define a skill that adds a timestamp to items. You could then poll the index with a query, filtering by content with a timestamp after the point indexing started and then fire your event.

Stripe API - Retrieve Info On Multiple Charges Using One Call

I am currently using the retrieve() method to retrieve multiple charges one by one in a loop. We have a page on our app that allows a user to see the status of all payments that he is entitled to. This page takes quite a bit of time to load since we are sometimes calling \Stripe\Charge::retrieve() dozens of times in a row.
Is there any way for me to make one call where I pass in an array of charge IDs and get info back on multiple charges from the same call? I see there is a list charges method at https://stripe.com/docs/api/charges/list, but this method doesn't allow me to pass in a list of charge IDs.
Unfortunately no, there's no way to make batch retrieval requests. However, considering that you are trying to retrieve charges for a single user, you can still use the list API method and pass in the customer ID: https://stripe.com/docs/api/charges/list#list_charges-customer
Then once you have the list (probably time-constrained via other properties in the list call), you can filter through and return the status of each.
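A sketch of that with the official stripe-node library - the key and IDs are placeholders, and auto-pagination walks the list pages for you, so it's one logical call per customer instead of one retrieve() per charge:

```javascript
const stripe = require('stripe')('sk_test_...'); // placeholder key

async function chargeStatuses(customerId, chargeIds) {
  const wanted = new Set(chargeIds);
  const statuses = {};
  // for-await auto-pagination fetches further pages transparently.
  for await (const charge of stripe.charges.list({ customer: customerId, limit: 100 })) {
    if (wanted.has(charge.id)) statuses[charge.id] = charge.status;
    if (Object.keys(statuses).length === wanted.size) break; // found them all
  }
  return statuses;
}
```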

Azure CosmosDB SQL Record counts

I have a CosmosDB Collection which I'm querying using the REST API.
I'd like to access the total number of documents which match my query. I know I can do a count, but that means two calls, one for the count and a subsequent one to retrieve the actual records.
I would assume this is not possible in a single call, but the Data Explorer in the Azure Portal seems to manage it, so I'm just wondering if anyone has been able to figure out what calls it makes to get this:
Showing Results 1 - 10
Retrieved document count 342
Retrieved document size 2868425 bytes
Output document count 10
It's the Retrieved Document Count I need - if the portal can do it, there ought to be a way :)
I've tried the JAVA SDK as well as REST but can't see any useful options in there either
As is so often the case in this game, asking a question triggers the answer... so apologies in advance.
The answer is to send the x-ms-documentdb-populatequerymetrics header in the request.
The response then gives a whole bunch of useful stuff in x-ms-documentdb-query-metrics.
What I would still like to understand is whether this has any performance impact?
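For anyone who would rather not hand-sign REST requests, here is a sketch with the @azure/cosmos JS SDK (v3), whose populateQueryMetrics option sets that header for you - the account, database, container, and query below are placeholders:

```javascript
const { CosmosClient } = require('@azure/cosmos');

async function queryWithMetrics() {
  const client = new CosmosClient({
    endpoint: 'https://<account>.documents.azure.com', // placeholder
    key: '<key>'
  });
  const container = client.database('<db>').container('<coll>');

  const iterator = container.items.query('SELECT * FROM c', {
    populateQueryMetrics: true, // -> x-ms-documentdb-populatequerymetrics
    maxItemCount: 10            // page size, like the portal's "Showing Results 1 - 10"
  });
  const page = await iterator.fetchNext();
  console.log('output document count:', page.resources.length);
  // queryMetrics echoes back x-ms-documentdb-query-metrics, which includes
  // the retrievedDocumentCount the question is after.
  console.log('query metrics:', page.queryMetrics);
}
```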

Truncate feeds in getStream

I would like to limit the number of feed updates (records) in my GetStream app. I want to keep each feed at a constant length of 500 items.
I make heavy use of the 'to:' field, which results in a lot of feeds of different lengths. I want them all to grow to 500 items, so I would rather not remove items by date.
For what it's worth, I store all the updates in my own database which results in a replica of the network activity.
What would be a good way of keeping my feeds short?
There's no straightforward way to limit your feeds to 500 items. There are two ways to remove activities from Stream:
the removeActivity method, which removes one activity at a time via the foreign_id or activity id (https://getstream.io/docs/js/#removing-activities)
the "Truncate Data" button on the dashboard for your app, which will remove all activities in Stream.
It might be possible to get the behavior you're looking for by keeping track of all activities that you're adding to Stream, then periodically culling the ones that put you over 500.
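A rough sketch of that culling with the getstream JS client - the key/secret are placeholders, and the oldest-first list of tracked activity ids is assumed to come from your own mirror database, as described in the question; removeActivity is the only Stream call involved:

```javascript
const stream = require('getstream');

// Server-side client (placeholder credentials).
const client = stream.connect('<api_key>', '<api_secret>');

async function truncateFeed(feedGroup, feedId, trackedIds /* oldest first */) {
  const feed = client.feed(feedGroup, feedId);
  const excess = trackedIds.length - 500; // keep the feed at 500 items
  for (let i = 0; i < excess; i++) {
    // removeActivity deletes one activity per call, by activity id.
    await feed.removeActivity(trackedIds[i]);
  }
}
```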
Hopefully this helps!

CloudTable.ExecuteQuery : how to get number of transactions

AFAIK, ExecuteQuery handles segmented queries internally, and every request (i.e. call to storage) for a segment counts as a transaction against the storage account (I mean a billable transaction - http://www.windowsazure.com/en-us/pricing/details/storage/). Right?
Is there a way to know into how many segments a query is split? If I run a query and get 5,000 items, I can suppose that my query was split into 5 segments (due to the limit of 1,000 items per segment). But in the case of a complex query there is also the timeout of 5 seconds per call.
I don't believe there's a way to get at that in the API. You could set up an HTTPS proxy to log the requests, if you just want to check it in development.
If it's really important that you know, use the BeginExecuteSegmented and EndExecuteSegmented calls instead. Your code will get a callback for each segment, so you can easily track how many calls there are.
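BeginExecuteSegmented/EndExecuteSegmented are .NET calls, but the same counting works in any SDK that exposes continuation tokens. For illustration, here is the Node analog with the 'azure-storage' package - each queryEntities call fetches one segment, i.e. one billable transaction:

```javascript
const azure = require('azure-storage');

function countSegments(tableService, tableName, query, callback) {
  let segments = 0;
  const nextSegment = (token) => {
    // One request per segment; a non-null continuationToken means more segments.
    tableService.queryEntities(tableName, query, token, (err, result) => {
      if (err) return callback(err);
      segments++;
      if (result.continuationToken) return nextSegment(result.continuationToken);
      callback(null, segments); // total number of storage transactions
    });
  };
  nextSegment(null);
}

// e.g.:
// const svc = azure.createTableService('<connection-string>');
// countSegments(svc, 'MyTable', new azure.TableQuery(), (err, n) => console.log(n));
```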
