How to get more customer sources using Stripe NodeJS SDK? - node.js

I am only able to get the first 10 customer sources via the retrieve customer API:
# stripe.customers.retrieve
{
"id": "cus_DE8HSMZ75l2Dgo",
...
"sources": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_DE8HSMZ75l2Dgo/sources"
},
...
}
But how do I get more? Is the only way via an AJAX call? I was thinking there should be a function somewhere in the SDK?

When you retrieve a Customer object via the API, Stripe will return the sources property which is a List object. The data property will be an array with up to 10 sources in it.
If you want the ability to get more sources than the 10 most recent ones, you will need to use Pagination. The idea is that you will first get a list of N objects (10 by default). Then you will request the next "page" from Stripe by asking for N objects again but using the parameter starting_after set to the id of the last object in the previous page. You will continue doing that until the has_more property in the page returned is false indicating you retrieved all the objects.
For example if your Customer has 35 sources, you would get the first page (10), then call list to get 10 more (20), then 10 more again (30) and then the last call would return only 5 sources (35) and has_more would be false.
To decrease the number of calls, you can also set limit to a higher value. The maximum value is 100 in that case.
Here's what the code would look like:
// List the sources 3 at a time
// (note: this must run inside an async function, with `stripe` and `customer` already set up)
var listOptions = { limit: 3 };
while (true) {
  var sources = await stripe.customers.listSources(
    customer.id,
    listOptions
  );
  var nbSourcesRetrieved = sources.data.length;
  var lastSourceId = sources.data[nbSourcesRetrieved - 1].id;
  console.log("Received " + nbSourcesRetrieved + " - last source: " + lastSourceId + " - has_more: " + sources.has_more);

  // Leave if we are done with pagination
  if (sources.has_more === false) {
    break;
  }

  // Store the last source id in the options for the next page
  listOptions['starting_after'] = lastSourceId;
}
You can see a full running example on Runkit here: https://runkit.com/5a6b26c0e3908200129fbb5d/5b49eabda462940012c33880

Taking a quick look into the source of the stripe-node package, it seems there is a stripe.customers.listSources method, which takes a customerId as a parameter and sends the request to the correct URL. I suppose it works similarly to the listCards method. I couldn't find it in the docs, though, so you have to treat it as an undocumented feature for now... but maybe it's just an omission in the docs. You could contact support about it. We used Stripe in an old project and they appreciated any input on their documentation.

As of stripe-node 6.11.0, you may auto-paginate list methods, including customer sources. Stripe provides a few different APIs for this to aid with a variety of node versions and styles.
See the docs here
The important part to notice is .autoPagingEach:
await stripe.customers.listSources(customer.id, { limit: 100 }).autoPagingEach(async (source) => {
  doSomethingWithYourSource(source)
})
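If you would rather collect everything into an array instead of iterating, autoPagingToArray is another of those auto-pagination helpers (a short sketch: customer.id is assumed to be the customer retrieved earlier, and autoPagingToArray requires an upper bound):
// Gather all sources into one array; the limit passed to autoPagingToArray is a required safety cap
const allSources = await stripe.customers
  .listSources(customer.id, { limit: 100 })
  .autoPagingToArray({ limit: 1000 });
console.log('Retrieved ' + allSources.length + ' sources');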

Related

How to set payment details with store-api in shopware 6

I want to use Shopware as a headless shop with Stripe as the payment provider. The payment works in Shopware without problems.
Now I am testing the order steps via the API only. The last step is to handle the payment through the provider (Stripe in this case).
In the Shopware documentation this is handled with the API call /store-api/handle-payment.
The payload looks like this:
{
"orderId": "string",
"finishUrl": "string",
"errorUrl": "string"
}
Now when I request the API I get a 500 error with the message:
No credit card selected
My question is: how do I send credit card data through this API so that Stripe can handle the payment? Has anyone solved this problem?
With the advice from Alex I was able to find the following solution:
Finding the "credit card not selected" error: this happens only when you try to pay via an API request. The reason I found was that the Stripe plugin saves the payment details (the credit card id) in the session object. Via the API you have no access to this by default, and that's why you get the message "No credit card selected".
Take a look at the Stripe plugin, especially at PaymentMethods/Card/CardPaymentConfigurator.
I put the following in the configure method, around lines 46 - 62:
$requestDataBag = $stripePaymentContext;
$paymentDetails = $requestDataBag->requestDataBag->get('paymentDetails');
if ($paymentDetails !== null) {
    $card = $paymentDetails->get('creditCardId');
} else {
    $card = null;
}

$selectedCard = $this->stripePaymentMethodSettings->getSelectedCard();
if ($selectedCard && isset($selectedCard['id'])) {
    // Card selected via the storefront (stored in the session)
    $selectedCard = $selectedCard['id'];
} elseif ($card) {
    // Card id passed in the handle-payment request payload (headless flow)
    $selectedCard = $card;
} else {
    throw PaymentIntentPaymentConfiguratorException::noCreditCardSelected();
}
Send the payment data with the handle-payment request:
let payload = {
    "orderId": event,
    "finishUrl": "https://www.myfinishurl.de",
    "errorUrl": "https://www.myurl.de/order/error",
    "paymentDetails": {
        "creditCardId": "creditcardid"
    }
};
Now do this for all methods you need. It works. Maybe Stripe can implement this in the future.
You have the following options:
Check the local API documentation - it might have more information than the public one, because it honors installed modules (see https://stackoverflow.com/a/67649883/288568)
Contact their support for more information as this is not covered in the API Docs
Make a test-payment via the normal storefront and look at the requests which are made in the network panel of your browser's development tools

jquery jtable deleteConfirmation function not working

I am trying to use the deleteConfirmation function option, but I find that the default confirmation box pops up before I even get into the deleteConfirmation function - what am I missing?
In the code below I can set breakpoints and watch the data object being set up correctly with its new deleteConfirmMessage, but the basic jTable default delete confirmation box has already appeared and I never see an altered one.
$(container).jtable({
    title: tablename,
    paging: true,
    pageSize: 100,
    sorting: true,
    defaultSorting: sortvar + ' ASC',
    selecting: false,
    deleteConfirmation: function(data) {
        var defaultMessage = 'This record will be deleted - along with all its assignments!<br>Are you sure?';
        if(data.record.Item) { // deleting an item
            // Check whether item is in any preset lists
            var url = 'CampingTablesData.php?action=CheckPresets&Table=items';
            $.when(
                ReturnAjax(url, {'ID': data.record.ID}, MyError)
            ).done(
                function(retdata, status) {
                    if(status == 'success') {
                        if(retdata.PresetList) {
                            data.deleteConfirmMessage = 'Item is in the following lists: ' + retdata.PresetList + ' Do you still want to delete it?';
                        }
                    } else {
                        data.cancel = true;
                        data.cancelMessage = retdata.Message;
                    }
                }
            );
        } else {
            data.deleteConfirmMessage = defaultMessage;
        }
    },
    messages: {
        addNewRecord: 'Add new',
        deleteText: deleteTxt
    },
    actions: {
        listAction: function(postData, jtParams) {
            // <list action code>
        },
        createAction: function(postData) {
            // <create action code>
        },
        updateAction: 'CampingTablesData.php?action=update&Table=' + tablename,
        deleteAction: 'CampingTablesData.php?action=delete&Table=' + tablename
    },
    fields: tableFields // preset variable
});
==========
After further testing, the problem occurs only when deleting an item, where the code goes through the $.when().done() section. The Ajax call to the deletion URL does not wait for this to complete - how do I overcome this?
I don't think you can get your design to work. What does the A in Ajax stand for? Asynchronous! Synchronous Ajax has been deprecated for all sorts of good design and performance reasons.
You need to design your application to function asynchronously. Looking at your code, it feels like you are misusing the deleteConfirmation event.
Consider changing the default deleteConfirmation message to inform the user that the delete might not succeed if certain conditions are met. Say:
messages: {
deleteConfirmation: "This record will be deleted - along with all its assignments, unless in a preset list. Do you wish to try to delete this record?"
},
Then on the server, check the preset lists, and if not deletable, return an error message for jTable to display.
Depending on how dynamic your preset lists are, another approach might be to let the list function return an additional flag or code indicating which, if any, preset lists the item is already in, then your confirmation function can check this flag / indicator without further access to the server.
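A rough sketch of that flag-based idea (illustrative only; it assumes the list action already returns a PresetLists field with each record):
deleteConfirmation: function(data) {
    if(data.record.PresetLists) { // flag/indicator supplied by the list action
        data.deleteConfirmMessage = 'Item is in the following lists: ' + data.record.PresetLists + ' Do you still want to delete it?';
    } else {
        data.deleteConfirmMessage = 'This record will be deleted - along with all its assignments!<br>Are you sure?';
    }
},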
Thanks to MisterP for his observation and suggestions. I also considered his last approach, but ended up setting deleteConfirmation to false (so as not to generate a system prompt) and then writing a delete function that did not actually delete, but returned the information I needed to construct my own deleteConfirmation message. Then a simple if confirm(myMessage) lets the delete go ahead with another Ajax call.
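A minimal sketch of that approach (hedged: the CheckPresets URL, the field names, and the exact deleteAction contract are assumptions based on the question's code and may need adjusting to your jTable version):
$(container).jtable({
    // ...
    deleteConfirmation: false, // suppress jTable's built-in prompt
    actions: {
        // ...
        deleteAction: function(postData) {
            return $.Deferred(function(dfd) {
                // First request only gathers the info needed for the prompt; nothing is deleted yet
                $.post('CampingTablesData.php?action=CheckPresets&Table=items', postData, function(retdata) {
                    var myMessage = retdata.PresetList
                        ? 'Item is in the following lists: ' + retdata.PresetList + ' Do you still want to delete it?'
                        : 'This record will be deleted - along with all its assignments! Are you sure?';
                    if(confirm(myMessage)) {
                        // Second request performs the actual delete
                        $.post('CampingTablesData.php?action=delete&Table=items', postData, function(deldata) {
                            dfd.resolve(deldata);
                        }, 'json');
                    } else {
                        // User backed out; note jTable may treat a rejected promise as an error,
                        // so adjust the cancellation handling to taste
                        dfd.reject();
                    }
                }, 'json');
            }).promise();
        }
    }
});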

How to measure RU in DocumentDB?

Given that Azure DocumentDB uses Request Units as a measurement for throughput, I would like to make sure my queries use as few RUs as possible to increase my throughput. Is there a tool that will tell me how many RUs a query will take, and whether the query is actually using an index or not?
As you discovered, certain tools will provide RUs upon completion of a query. This is also available programmatically, as the x-ms-request-charge header is returned in the response, and easily retrievable via the DocumentDB SDKs.
For example, here's a snippet showing RU retrieval using JS/node:
var queryIterator = client.queryDocuments(collLink, querySpec);
queryIterator.executeNext(function (err, results, headers) {
    if (err) {
        // deal with error...
    } else {
        // deal with payload...
        var ruConsumed = headers['x-ms-request-charge'];
    }
});
As far as your question regarding indexing, and determining if a property is indexed (which should then answer your question about a query using or not using an index): You may query the collection, which returns the indexing details in the response header.
For example: given some path dbs/<databaseId>/colls/<collectionId>:
var collLink = 'dbs/' + databaseId + '/colls/' + collectionId;
client.readCollection(collLink, function (err, coll) {
    if (err) {
        // deal with error
    } else {
        // compare indexingPolicy with your property, to see if it's included or excluded
        // this just shows you what these properties look like
        console.log("Included: " + JSON.stringify(coll.indexingPolicy.includedPaths));
        console.log("Excluded: " + JSON.stringify(coll.indexingPolicy.excludedPaths));
    }
});
You'll see includedPaths and excludedPaths looking something like this, and you can then search for your given property in any way you see fit:
Included: [{"path":"/*","indexes":[{"kind":"Range","dataType":"Number","precision":-1},{"kind":"Hash","dataType":"String","precision":3}]}]
Excluded: []
I found DocumentDb Studio, which shows the response header that provides the RUs on every query.
Another option is to use the emulator with the trace collection option turned on.
https://learn.microsoft.com/en-us/azure/cosmos-db/local-emulator
I was trying to profile LINQ aggregate queries, which currently seems to be impossible with the C# SDK.
Using the trace output from the emulator I was able to identify the request charges and a host of other metrics. There is a lot of data to wade through.
I found the request charge stored under this event key
DocDBServer/Transport_Channel_Processortask/Genericoperation
Example output:
ThreadID="141,928" FormattedMessage="EndRequest DocumentServiceId localhost, ResourceType 2, OperationType 15, ResourceId 91M7AL+QPQA=, StatusCode 200, HRESULTHex 0, ResponseLength 61, Duration 70,546, HasQuery 1, PartitionId a4cb495b-38c8-11e6-8106-8cdcd42c33be, ReplicaId 1, ConsistencyLevel 3, RequestSessionToken 0:594, ResponseSessionToken 594, HasContinuation 0, HasPreTrigger 0, HasPostTrigger 0, IsFeedUnfiltered 0, IndexingDirective 5, XDate Fri, 09 Jun 2017 08:49:03 GMT, RetryAfterMilliseconds 0, MaxItemCount -1, ActualItemCount 1, ClientVersion 2017-02-22, UserAgent Microsoft.Azure.Documents.Common/1.13.58.2, RequestLength 131, NetworkBucket 2, SubscriptionId 00000000-0000-0000-0000-000000000000, Region South Central US, IpAddress 192.168.56.0, ChannelProtocol RNTBD, RequestCharge 51.424, etc...
This can then be correlated with data from another event which contains the query info:
DocDBServer/ServiceModuletask/Genericoperation
Note you need perfview to view the ETL log files. See here for more info:
https://github.com/Azure/azure-documentdb-dotnet/blob/master/docs/documentdb-sdk_capture_etl.md

The right pattern for returning pagination data with the ember-data RESTAdapter?

I'm displaying a list of articles in a page that are fetched using the Ember Data RESTAdapter. I need to implement a Bootstrap-esque paginator (see: http://twitter.github.com/bootstrap/components.html#pagination) and can't seem to find a sane pattern for returning pagination data such as page count, article count, and current page within a single request.
For example, I'd like the API to return something like:
{
articles: [{...}, {...}],
page: 3,
article_count: 4525,
per_page: 20
}
One idea was to add an App.Paginator DS.Model so the response could look like:
{
articles: [{...}, {...}],
paginator: {
page: 3,
article_count: 4525,
per_page: 20
}
}
But this seems like overkill to hack together for something so trivial. Has anyone solved this problem or found a particular pattern they like? Is there a simple way to manage the RESTAdapter mappings to account for scenarios such as this?
Try to use the Ember Pagination Support Mixin and provide your own implementation of the following method. Instead of loading all the content, you can fetch the required content while the user is navigating the pages. All you need initially is the total count of your records.
didRequestRange: function(rangeStart, rangeStop) {
    var content = this.get('fullContent').slice(rangeStart, rangeStop);
    this.replace(0, this.get('length'), content);
}
With ember-data-beta3 you can pass a meta-property in your result. The default RESTSerializer looks for that property and stores it.
You can access the meta-data like this:
var meta = this.get("store").metadataFor("post");
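For example, a server response carrying such meta-data could look like this (an illustrative shape for a posts payload):
{
  "posts": [{ "id": 1, "title": "..." }, { "id": 2, "title": "..." }],
  "meta": {
    "total": 4525,
    "page": 3,
    "per_page": 20
  }
}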
If you are not able to change the JSON returned from the server, you could override the extractMeta hook on the ApplicationSerializer (or any other model-specific serializer).
App.ApplicationSerializer = DS.RESTSerializer.extend({
    extractMeta: function(store, type, payload) {
        if (payload && payload.total) {
            store.metaForType(type, { total: payload.total }); // sets the metadata for "post"
            delete payload.total; // keeps ember data from trying to parse "total" as a record
        }
    }
});
Read more about meta-data here

Subclass QueryReadStore or ItemFileWriteStore to include write api and server side paging and sorting.

I am using Struts 2 and want to include an editable server-side paging and sorting grid.
I need to subclass the QueryReadStore to implement the write and notification APIs. I do not want to include server-side REST services, so I do not want to use the JsonRest store. Any idea how this can be done? What methods do I have to override, and exactly how? I have gone through many examples but I am still not clear how this can be done exactly.
Also, is it possible to just extend the ItemFileWriteStore and override its methods to include server-side pagination? If so, which methods do I need to override? Can I get an example of how this can be done?
The answer is of course yes :)
But do you really need to subclass ItemFileWriteStore, does it not fit your needs? A short explanation of .save() follows.
The client side does modify / new / delete in the store, and in turn those items are marked as dirty. While it has dirty items, the store keeps references to them in a hash, like so:
store._pending = { _deletedItems: [], _modifiedItems: [], _newItems: [] };
On calling save(), each of these should be looped over, sending requests to the server - BUT this does not happen if neither _saveEverything nor _saveCustom is defined. The WriteStore simply resets its client-side revert feature and saves in client memory.
See the source and search for "save: function".
Here is my implementation of a simple write API; it must be modified to be used without its built-in validation:
OoCmS._storeAPI
In short, follow this boilerplate, given that you have a CRUD pattern on the server:
new ItemFileWriteStore({
    url: 'path/to/c**R**ud',
    _saveCustom: function() {
        // dxhr is assumed to be dojo's xhr module (dojo/_base/xhr)
        var item;
        for(var i in this._pending._newItems) if(this._pending._newItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/**C**rud', contents: { id: i }});
        }
        for(i in this._pending._modifiedItems) if(this._pending._modifiedItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/cr**U**d', contents: { id: i }});
        }
        for(i in this._pending._deletedItems) if(this._pending._deletedItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/cru**D**', contents: { id: i }});
        }
    }
});
Now, as for paging: ItemFileWriteStore has pagination built in from its superclass mixins. You just need to call it with one of two setups: either directly on the store, meaning the server should only return a subset, or on a model with query capabilities where the server returns a full set.
var pageSize = 5,    // let's say 5 items per request
    currentPage = 2; // note, starting on the second page (with *one* being the offset)

store.fetch({
    onComplete: function(itemsReceived) { },
    query: { foo: 'bar*' },                  // optional filtering, server gets it JSON url-encoded
    count: pageSize,                         // server gets &count=pageSize
    start: currentPage * pageSize - pageSize // server gets &start=offsetCalculation
});
quod erat demonstrandum
