I am currently using the retrieve() method to retrieve multiple charges one by one in a loop. We have a page in our app that lets a user see the status of all the payments they are entitled to. This page takes quite a bit of time to load since we sometimes call \Stripe\Charge::retrieve() dozens of times in a row.
Is there any way for me to make one call where I pass in an array of charge IDs and get info back on multiple charges from the same call? I see there is a list charges method at https://stripe.com/docs/api/charges/list, but it doesn't allow me to pass in a list of charge IDs.
Unfortunately no, there's no way to make batch retrieval requests. However, since you are retrieving charges for a single user, you can still use the list API method and pass in the customer ID: https://stripe.com/docs/api/charges/list#list_charges-customer
Then, once you have the list (probably time-constrained via other parameters in the list call), you can filter through it and return the status of each charge.
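For illustration, here's a minimal sketch using Stripe's Node library (the PHP library's \Stripe\Charge::all() takes the same customer and limit parameters); the customer ID and key are placeholders:

```javascript
// One API call lists up to 100 charges for the customer,
// then we report each charge's status.
const stripe = require('stripe')('sk_test_...');

async function chargeStatuses(customerId) {
  const charges = await stripe.charges.list({ customer: customerId, limit: 100 });
  return charges.data.map((charge) => ({ id: charge.id, status: charge.status }));
}

chargeStatuses('cus_123').then(console.log);
```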
I am trying to implement pagination with the below scenario:
A third-party system is hitting the Salesforce API and returning the result from an object in Salesforce.
They want the results paginated and want to get the response from Salesforce by passing the parameters below from their end:
PageIndex/No
PageSize
I don't necessarily have to display the records using Visualforce/LWC, etc. Just the paginated records need to be passed back to the third-party system.
All the resources I found on the web involve using some VF page, component, etc. If using one of those is necessary for implementing this pagination, please do let me know that as well.
I tried looking for resources on implementing pagination, but they all involve using a VF page, Lightning component, etc.
I expected: simple pagination of the records being returned from the Salesforce web service.
For smaller data sets you can use SOQL's LIMIT and OFFSET. You can offset (skip) up to 2000 records that way.
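For example, if the third party hits the standard REST query endpoint, a minimal sketch of offset paging could look like this (the instance URL, token, object, and fields are placeholders, and remember the 2000-row OFFSET cap):

```javascript
// Translate the caller's pageIndex/pageSize into a SOQL LIMIT/OFFSET query
// and run it against the REST query endpoint.
async function getPage(pageIndex, pageSize) {
  const soql = `SELECT Id, Name FROM Account ORDER BY Id LIMIT ${pageSize} OFFSET ${pageIndex * pageSize}`;
  const res = await fetch(
    `https://yourInstance.my.salesforce.com/services/data/v58.0/query?q=${encodeURIComponent(soql)}`,
    { headers: { Authorization: 'Bearer <access token>' } }
  );
  return (await res.json()).records;
}
```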
A more general solution would be to use Salesforce's "query locators" (a bit like a CURSOR statement in a normal database). You need to read up on queryMore() (if you use the SOAP API) to get the next chunk of data (there's no jumping around by page number, just next, next...).
The REST API has a similar nextRecordsUrl: https://stackoverflow.com/a/56448825/313628
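A rough sketch of following that cursor from Node (same placeholder instance and token as above):

```javascript
// Keep requesting nextRecordsUrl until Salesforce reports the result set is complete.
async function getAllRecords(soql) {
  const base = 'https://yourInstance.my.salesforce.com';
  const headers = { Authorization: 'Bearer <access token>' };
  let url = `${base}/services/data/v58.0/query?q=${encodeURIComponent(soql)}`;
  const records = [];
  while (url) {
    const body = await (await fetch(url, { headers })).json();
    records.push(...body.records);
    url = body.done ? null : `${base}${body.nextRecordsUrl}`; // done=false means another chunk exists
  }
  return records;
}
```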
If they can't implement that / you don't want to give access to the normal API, just some Apex... you can query at most 50K records in one transaction, so you could make do that way. You'll likely need to put some rules around a fixed ordering of the records.
Is there a way to attach webhooks or get events from Azure Search?
Specifically, we are looking for a way to get notified (programmatically) when an indexer completes indexing an index.
Currently, there are no such events. However, you can implement functionality like this yourself. There are several scenarios to consider. Basically, you have two main approaches to adding content: either define a data source and let an indexer pull content, or use the API to push content to the index.
The simplest scenario is when you use push via the API to add a single item. You could create a wrapper method that both submits your item and then queries the index until that item is found. Your wrapper method would need to either invoke a callback or fire an event. To support updates to an item, you would need a marker on the item, like a timestamp property indicating when the item was submitted to the index, or a version number, something that allows you to distinguish the new item from the old one.
A more complex scenario is when you handle batches or large volumes of content. Assuming you start from scratch and your corpus is 100,000 items, you could query until the count matches 100,000 items before you fire your event. To handle updates, the best approach is to use some marker. E.g. you submit a batch of 100 updates at 2020-08-18 09:58. You could then query the index, filtering by items that were updated after the timestamp at which you submitted your content. Once the count from your query reaches 100, you can fire your event.
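As a rough sketch of that polling approach (the service and index names, API key, and the lastUpdated marker field are all assumptions; the marker field must be filterable in your index schema):

```javascript
// Poll the index until the number of documents marked on or after `since`
// (an ISO 8601 timestamp) reaches expectedCount, then return so the caller can fire its event.
async function waitForBatch(since, expectedCount) {
  const url = 'https://<service>.search.windows.net/indexes/<index>/docs/search?api-version=2020-06-30';
  for (;;) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'api-key': '<query key>' },
      body: JSON.stringify({
        search: '*',
        filter: `lastUpdated ge ${since}`, // hypothetical timestamp marker field
        count: true,
        top: 0 // we only need the count, not the documents themselves
      })
    });
    const body = await res.json();
    if (body['@odata.count'] >= expectedCount) return;
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait before polling again
  }
}
```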
You would also need to handle indexing errors or exceptions when submitting content in these scenarios.
For pull scenarios, your best option is to define a skill that adds a timestamp to items. You could then poll the index with a query, filtering by content with a timestamp after the point at which indexing started, and then fire your event.
Is there any way we can pull more than 10 records at a time from the WorkWave POD API?
Whenever I call the WorkWave API through a Map/Reduce script, it gives me an error message telling me to slow down. Has anyone run into this, and how did they manage to work around it?
Thanks
If you're using the List Orders or Get Orders API, there is a throttling limit - "Leaky bucket (size: 10, refill: 1 per minute)". However, both of those APIs allow retrieving multiple orders in a single call. My suggestion would be to restructure your script so that instead of making the call to WorkWave in the Reduce stage for a single order, you make it in the Get Input Data stage for all the orders you want to operate on, and map the relevant data to the corresponding NetSuite data in the Map stage before passing it through to the Reduce stage.
In other words, you make one call listing multiple order IDs rather than multiple calls listing one order ID each.
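A rough SuiteScript 2.1 skeleton of that restructuring (the WorkWave URL, headers, and response shape are placeholders, and getOrderIds() stands in for however you collect the IDs you want to process):

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/https'], (https) => {
  const getInputData = () => {
    // One WorkWave call for the whole set of orders, instead of one call per order.
    const response = https.get({
      url: 'https://<workwave-host>/api/orders?ids=' + getOrderIds().join(','), // placeholder endpoint and helper
      headers: { 'X-WorkWave-Key': '<api key>' }
    });
    return JSON.parse(response.body).orders; // each order becomes one map() invocation
  };

  const map = (context) => {
    const order = JSON.parse(context.value);
    // Map WorkWave fields to the corresponding NetSuite data here,
    // then hand the result on to the reduce stage.
    context.write({ key: order.id, value: order });
  };

  const reduce = (context) => {
    // Update the NetSuite records here; no further WorkWave calls are needed at this stage.
  };

  return { getInputData, map, reduce };
});
```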
I've got a node app that works with Salesforce for a few different things. One of the features is letting users fill in a form and pushing it to Salesforce.
The form has a dropdown list, so I query Salesforce to get the list of available dropdown items and make them available to my form via res.locals. Currently I'm getting these values via some middleware and storing them in the user's session; if the session value is set I use it, and if not I query Salesforce and pull the values in.
This works, but it means every user's session data in Mongo holds a whole bunch of picklist vals (they are the same for all users). I very rarely change the values on the Salesforce side, so I'm wondering if there is a "proper" way of storing these vals in my app.
I could pull them into a Mongo collection and trigger a manual refresh whenever they change. I could expire them in Mongo (but realistically, if they do need to change, it's because someone needs to access the new values immediately), so I'm not sure that makes the most sense...
Is storing them in everyone's session the best way to tackle this, or is there something else I should be doing?
To answer your question quickly, you could add them to a singleton object (instead of session data, which is per user), though I'm not sure how you will manage their lifetime (i.e. pull them again when they change). A singleton can be implemented with a simple module that you require and that returns a plain object...
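A minimal sketch of that singleton module, assuming a hypothetical fetchPicklistValues() helper that queries Salesforce; because Node caches required modules, every caller shares the same object:

```javascript
// picklist-cache.js - required once, then shared by every caller in the process.
let cached = null;

async function getPicklistValues() {
  if (!cached) {
    cached = await fetchPicklistValues(); // hypothetical Salesforce query helper
  }
  return cached;
}

// Call this from an admin route or on a timer when the values change in Salesforce.
function invalidate() {
  cached = null;
}

module.exports = { getPicklistValues, invalidate };
```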
But if I was to do something like this, I would go about doing it differently:
I would create an API endpoint that returns your list data (possibly giving it query parameters to return different lists).
If you can afford the data being outdated for a short period of time, you can write your API so that it returns a cached response (HTTP caching, for a short period of time).
If your data has to be fresh in real time, then your API should return an ETag in the response. The ETag header basically acts like a checksum for your data; a good checksum would be the "last updated date" across all the records in the collection. Upon receiving a request, you check for the "If-None-Match" header, which contains that checksum. At this point you do a "lite" call to your database to pull just the checksum: if it matches, you return a 304 HTTP code (Not Modified); otherwise you pull the full data you need and return it (alongside the new checksum in the response ETag). Basically you are letting the browser do the caching...
Note that you can also combine the HTTP caching and ETag approaches and use them together.
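A rough Express sketch combining the cache header and the ETag check (getPicklistChecksum and getPicklistValues are hypothetical helpers backed by your data store):

```javascript
const express = require('express');
const app = express();

app.get('/api/picklists/:name', async (req, res) => {
  // Hypothetical "lite" query: just the checksum (e.g. the latest updated date across the records).
  const checksum = await getPicklistChecksum(req.params.name);

  res.set('ETag', `"${checksum}"`);
  res.set('Cache-Control', 'max-age=60'); // allow short-lived HTTP caching as well

  if (req.get('If-None-Match') === `"${checksum}"`) {
    return res.status(304).end(); // client's copy is still current
  }

  // Only now do the full query and return the data.
  const values = await getPicklistValues(req.params.name); // hypothetical full query
  res.json(values);
});

app.listen(3000);
```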
More resources on this here:
https://devcenter.heroku.com/articles/increasing-application-performance-with-http-cache-headers
https://developers.facebook.com/docs/marketing-api/etags
I am trying to write a Node program that takes a stream of data (using xml-stream), consolidates it, and writes it to a database (using mongoose). I am having problems figuring out how to do the consolidation, since the data may not have hit the database by the time I am processing the next record. I am trying to do something like:
on order data being read from stream
    look to see if customer exists on mongodb collection
    if customer exists
        add the order to the document
    else
        create the customer record with just this order
    save the customer
My problem is that two 'nearby' orders for a customer cause duplicate customer records to be written, since the first one hasn't been written before the second one checks to see if it's there.
In theory I think I could get around the problem by pausing the xml-stream, but there is a bug preventing me from doing this.
Not sure this is the best option, but using an async queue is what I ended up doing.
Around the same time, a pull request that allowed pausing was added to xml-stream (which is what I was using to process the stream).
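A minimal sketch of that queue, assuming the async library and an email field to match customers on; concurrency 1 means each order is fully saved before the next lookup runs:

```javascript
const async = require('async');

// `Customer` is your existing mongoose model; `email` is an assumed matching field.
// Process one order at a time so the customer lookup always sees prior saves.
const orderQueue = async.queue(async (order) => {
  let customer = await Customer.findOne({ email: order.email });
  if (customer) {
    customer.orders.push(order);
  } else {
    customer = new Customer({ email: order.email, orders: [order] });
  }
  await customer.save();
}, 1);

// In the xml-stream handler, push onto the queue instead of saving directly, e.g.:
// stream.on('endElement: order', (order) => orderQueue.push(order));
```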
Is there a unique field on the customer object in the data coming from the stream? You could add a unique constraint to your mongoose schema to prevent duplicates at the database level.
When creating new customers, add some fallback logic to handle the case where you try to create a customer but that same customer has just been created by another save at the same time. When this happens, fetch the existing customer and add the order to the fetched customer document.
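A rough mongoose sketch of that unique constraint plus fallback (email and orders are assumed field names):

```javascript
const mongoose = require('mongoose');

const customerSchema = new mongoose.Schema({
  email: { type: String, unique: true }, // unique index blocks duplicate customer documents
  orders: [mongoose.Schema.Types.Mixed]
});
const Customer = mongoose.model('Customer', customerSchema);

async function saveOrder(order) {
  try {
    await Customer.create({ email: order.email, orders: [order] });
  } catch (err) {
    // 11000 = duplicate key: another save created this customer first,
    // so add the order to the existing document instead.
    if (err.code !== 11000) throw err;
    await Customer.updateOne({ email: order.email }, { $push: { orders: order } });
  }
}
```

Using an atomic $push for the fallback (rather than fetch-then-save) also avoids a second race between the fetch and the save.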