how to properly paginate through an Amazon FBA orders list using SPAPI? [closed] - amazon

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 days ago.
I've been scratching my head for a while trying to figure out how to paginate through the SP-API Orders API for FBA orders, and it's gotten frustrating trying to sort it out:
For example, this URI gives me the first page of orders for the US based on a "CreatedAfter" cutoff date:
https://sellingpartnerapi-na.amazon.com/orders/v0/orders?MarketplaceIds=ATVPDKIKX0DER&CreatedAfter=2023-01-26
My assumption is that I don't need to pass a "CreatedBefore" date on my first "hit" of the API; it will just give me the first page of orders, plus a "NextToken" and a "CreatedBefore" at the bottom of the payload. We have tried the first call both with and without a "CreatedBefore".
My assumption is that these two handy pieces of info in the payload go hand in hand to ensure that I'm truly getting "the next page," and not just a potentially random list of order IDs that also happen to fall within the date range. That way I can be confident my API feed is pulling all orders and I won't have to come back later and "backfill" missing orders through some inelegant process; that is exactly the situation I'm in, because we can't seem to get this pagination to work as expected.
The other assumption is that I'll eventually hit an "end" and stop getting a new "NextToken," which doesn't happen.
What actually happens when I try to use it as expected is that I never reach the end: I just keep getting the same second page over and over, with a new NextToken each time.
So instead of consistently hitting the same base URI, with the additional "CreatedBefore" parameter pulled from the first payload and the "NextToken" in the request, our code currently queries our database for the most recent Amazon purchase date and hits the URL again using that as the CreatedAfter date. That definitely leaves us with holes, so I tried backing that date up by 10 seconds, and so on; it's a major thorn in my side right now.
Initially, we tried to sort this out in Postman, just feeding "NextToken" into the same API endpoint, before noticing we were just getting the same payload over and over.
I sort of explained the compromise in our current code above: we read a fresh "created after" date from the deserialized MAX("PurchaseDate") of the orders we've unpacked and loaded into our DB. This sort of works, but we are left with major holes in the data: we can run a report on Amazon FBA and find that 40% of the Amazon Order IDs aren't in our DB. We can force-feed those back in one at a time, or have our code run from our "go live" date to the present looking for new unprocessed orders in the payloads. We just don't have a precise method for safely paginating through a list of orders from "startdate" to "enddate."

According to the documentation
If NextToken is present, that will be used to retrieve the orders instead of other criteria.
Don't modify any other fields when you submit the second request. Just insert the nexttoken value that was returned by the first call and leave all the other fields the same.
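Following that guidance, the loop can be sketched as follows. This is a minimal sketch, not real SP-API client code: getOrders is a hypothetical wrapper around a signed GET to /orders/v0/orders that resolves with the parsed payload object, and the { Orders, NextToken } shape is assumed from the payload described above.

```javascript
// Sketch of SP-API order pagination. `getOrders` is a hypothetical function
// that performs a signed GET against /orders/v0/orders with the given query
// parameters and resolves with the parsed `payload` ({ Orders, NextToken }).
async function fetchAllOrders(getOrders, baseParams) {
  const orders = [];
  // First request: date-range criteria only, no NextToken.
  let page = await getOrders(baseParams);
  orders.push(...page.Orders);
  // Subsequent requests: per the docs, keep every other field exactly the
  // same and just add the NextToken returned by the previous call.
  while (page.NextToken) {
    page = await getOrders({ ...baseParams, NextToken: page.NextToken });
    orders.push(...page.Orders);
  }
  return orders; // the loop ends when a page comes back without a NextToken
}
```

The key point from the documentation quote is that you never swap in a fresh CreatedAfter between pages; the loop terminates on its own when a payload arrives without a NextToken.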

Related

How to create a slackbot that notifies me if a post someone wrote is not answered after a period of time?

Hello, this is my first time attempting to create a slackbot, using this resource: https://botkit.ai/. The slackbot I am trying to create should notify me if someone's post in a Slack channel is not answered after a period of time, say after 30 minutes.
So far I have been able to make my slackbot respond to specific keywords
// Make the bot listen for specific keywords in ambient channel messages
// and reply without being directly mentioned.
controller.hears(['help', 'I need help', 'stuck', 'question'], ['ambient'], function (bot, message) {
  // Respond to the message; <@...> is the mention syntax for users
  // (<#...> is for channels).
  bot.reply(message, 'Hello <@' + message.user + '>, someone needs help!');
});
At first I was hoping that botkit already had some time-tracking features, but it doesn't seem to. How can I make my slackbot notify me of posts that have not been answered after a specific period of time?
I would look into storing state someplace. You can query for the messages in a channel and then store off when they were posted. Then, every minute (or more, depending on your needs), you can run through all those and see if they were answered. Now, it is going to be hard to know what "answered" means, unless you can control that answers are either:
in a thread keyed off the question
reference the original question via a link
tag the original question asker (and then you'd have an issue if someone asked two questions in a row)
marked with a token (like 'ANSWERED') (and then you'd have the same issue as the tag solution)
I can't think of any other way to associate an answer with a question.
Anyway, you can store off the time in a database, google spreadsheet or other solution (depending on where you are running your node code). I'm not familiar with botkit, but Transposit (disclosure, I work for them) has integration with Slack and with Google Sheets, and is free to use.
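As a concrete illustration of the state-keeping idea, here is an in-memory sketch. The message shape ({ ts, user, thread_ts }) follows Slack's event payloads, but the "answered = someone replied in the thread" rule is just one of the options listed above, and real code would persist this state and poll on a timer.

```javascript
// Track questions and flag ones unanswered after 30 minutes.
const THIRTY_MINUTES = 30 * 60 * 1000;
const pending = new Map(); // message ts -> { postedAt, user }

// Call this when a keyword-matching question arrives.
function recordQuestion(msg, postedAt) {
  pending.set(msg.ts, { postedAt, user: msg.user });
}

// Call this for every message; a reply in a question's thread marks it answered.
function recordThreadReply(msg) {
  if (msg.thread_ts && pending.has(msg.thread_ts)) pending.delete(msg.thread_ts);
}

// Run this on a timer (e.g. every minute): returns overdue question IDs.
function overdueQuestions(now) {
  const overdue = [];
  for (const [ts, q] of pending) {
    if (now - q.postedAt >= THIRTY_MINUTES) {
      overdue.push(ts);
      pending.delete(ts); // notify only once per question
    }
  }
  return overdue;
}
```

The timer callback would then call bot.say (or whatever your framework provides) for each overdue ID.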

Filtering SharePoint List by Another SharePoint List

I posted this question on Stack Exchange here: (https://sharepoint.stackexchange.com/questions/249418/filtering-sharepoint-list-by-another-sharepoint-list), but just realized I should have posted it to Stack Overflow instead. Hope it's not bad form to cross-post (I'll add a link to this post in the other post).
I've been searching the forums and doing research online with no luck- apologies if this has been answered before.
I have a list with several thousand items in it. I often receive bulk update requests where I need to update several hundred of these items at a time (let's say for this example that we're using a field called "Case ID").
Here's what I've tried:
Searching cases individually, or up to three at a time in datasheet view; this is not time effective
Exporting the list and manually manipulating the data in Excel, then pasting in (and writing over) the data in the column that needs to be updated; this approach is not user friendly, is not necessarily time effective, and has potential side effects (causing errors for users currently modifying items that I am changing in bulk)
Lastly- I know I can create custom views that isolate this data; the problem is that the lists of cases I need to modify generally do not have enough commonalities to isolate them using the view filter logic
So- my guess is that I need two lists, likely connected with a web part. The first list would exist solely for the purpose of querying the second list. I would enter the Case IDs I wanted to filter by in the first list, and the second list would filter to show only the Case IDs in the first list. All items would be deleted from the first list between queries.
I'm not married to this approach; it's just my best guess. I'm open to creative and alternative approaches, but the final process needs to be user friendly (business partners will be using it).
Does anyone know how I can accomplish this? I've tried to get something implemented several times over the past few years and have never been successful; posting here is my last resort before I throw in the towel.
I have SP 2013, and have SharePoint Designer; please let me know if I need to add any other information.
Thanks in advance for the support,
Chad
I'd suggest creating a JSOM application to do all the updates. It can query only the items that need updating and then update them item by item.

Instagram API media/popular

What queries can we use with media/popular? Can we localize it by country or geolocation?
Also, is there a way to get the Discover feature's results with the API?
This API is no longer supported.
Ref : https://www.instagram.com/developer/endpoints/media/
I was recently struggling with the same problem and came to the conclusion that there is no other way except the hard one.
If you want location based popular images you must go with location endpoint.
https://api.instagram.com/v1/locations/214413140/media/recent
This link brings up recent media from a given location, the key being the location-id. Your job is now to follow the simple pagination API and merge the response arrays into one big bunch of JSON. The $response['pagination']['next_max_id'] parameter drives pagination: you simply send each next request with the max_id of the previous response.
https://api.instagram.com/v1/locations/214413140/media/recent?max_id=1093665959941411696
The end result will depend on the amount of information you gathered. In the end you just need to sort the array by like count and you're free to do whatever you were going to do.
Of course, an important part is to save the images locally rather than regenerating the list every time a user opens the webpage; the reason is not only generation time but also the limited number of requests per hour.
Hope someone will come up better solution or Instagram API will finally support media/popular by location.
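The max_id loop described above can be sketched like this. It's only a sketch: fetchPage stands in for the HTTP call to the locations endpoint, and the { data, pagination: { next_max_id } } response shape and likes.count field are taken from the old (now retired) API.

```javascript
// Follow Instagram-style max_id pagination, merge every `data` array into one
// collection, then sort by like count. `fetchPage(maxId)` is a placeholder for
// the HTTP request; it receives undefined for the first page.
async function collectAllMedia(fetchPage) {
  const media = [];
  let maxId; // undefined on the first request
  do {
    const response = await fetchPage(maxId);
    media.push(...response.data);
    // Absent next_max_id means we've reached the last page.
    maxId = response.pagination && response.pagination.next_max_id;
  } while (maxId);
  // Most-liked first.
  return media.sort((a, b) => b.likes.count - a.likes.count);
}
```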

Best ways to communicate Backbone with Express.js [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm developing a Node.js & Express & Backbone & MongoDB app. I have a list of items fetched via a URL.
In that list I want to put a select to sort items by price. I don't know which is the best way to do this sorting. Maybe I should put links in the select with a particular URL like [URL]&sort=1 and fetch it the way I'm fetching all items, or maybe I could skip the link and fetch this collection from Backbone in another, better way?
Thanks, regards
My view on this is as follows. Unless you must, avoid doing sorting on the server, but do it on the client. It is far more responsive that way. If you have 20 items, and want to sort by name (ascending or descending), quantity in inventory (ascending or descending), or price (same thing), far better to do it on the client.
When you must is when you have, say, 1,000 items. You simply will not load 1,000 items in one call and then sort them. You probably retrieve around 30-50 at a time, depending on the size of each item. So you need a way to tell the server, "give me the following 30 items" or, more accurately, "give me 30 items beginning at point X."
There are lots of ways to do it, even within REST. In general, you are best off passing the starting point and count via query parameters, as you did. So if you are getting widgets, and your API is
GET /widget
then you would have several query fields: field (which field you are sorting on), order (one of asc or des), count (an integer) and start (which point to start, either a record ID or an index).
That gives the following:
GET /widget?field=name&count=30&start=100&order=asc // get widgets sorted by field in ascending order, starting with the 100th widget and going for 30 widgets
GET /widget?field=price&count=20&start=225&order=desc // get widgets sorted by price from highest to lowest (descending order), starting with the 225th widget and going for 20 widgets
There are variants on this, e.g. instead of start and count you might do start and end. I have seen order called sort. But the idea is the same: those four fields give you all your server needs to know to retrieve a defined set of items.
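A sketch of how a handler might apply those four fields server-side (illustrative only: the widget field names are made up, and in a real Express route the options object would come from req.query inside app.get('/widget', ...)):

```javascript
// Apply field/order/start/count query parameters to a collection.
// `start` here is treated as an index; a record-ID cursor would locate the
// starting position with findIndex instead.
function pageAndSort(items, { field = 'name', order = 'asc', start = 0, count = 30 }) {
  const dir = order === 'desc' ? -1 : 1;
  // Sort a copy so the original collection is left untouched.
  const sorted = [...items].sort((a, b) =>
    a[field] < b[field] ? -dir : a[field] > b[field] ? dir : 0
  );
  // Query-string values arrive as strings, so coerce before slicing.
  return sorted.slice(Number(start), Number(start) + Number(count));
}
```

With a database-backed store you would push the same four fields down into the query (sort and skip/limit in MongoDB terms) rather than sorting in the app.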

How does the "mark as read" system on webforums work?

I've wondered about this for some time now: how do webforums implement the option to highlight something you haven't read? How does the forum know?
Since most webforums have a function to show you all posts since your last visit, they must save the last time you visited one of their pages in your userdata in the database.
But that doesn't explain how individual topics are still highlighted after you've read just one.
A many-to-many table connecting a user to a topic/post, with flags for read/favorite, etc.
Many web forums store a huge list of the last time you looked at each topic you've looked at.
This gets out of hand quickly, but there are mitigations. See Determining unread items in a forum
Keeping track of what posts a visitor has read is of course not that big a deal, since the number of posts a visitor has read will likely be far smaller than the number not read; if you know what posts a visitor has read, you also know what posts they didn't read. To make this less computationally intensive you'd normally do this only over a certain period of time, say the last two weeks. Everything before that time is considered read.
Usually, this list of "unread" items only shows changes that have been made since the last time you logged out.
Use the user's last activity date/time to mark items as "unread" (any activity in a topic after that time is marked "unread"). Then store in a Session variable, a list of topic IDs that the user viewed since last login. Combining these two would give you a relatively accurate list of unread topics.
Of course this data would then be lost on log-out or session expire and the cycle would start again without sacrificing an unnecessary amount of SQL queries.
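Combining the two signals from that answer can be sketched like this (a sketch only: timestamps are plain numbers and the topic shape is made up; in practice lastActivity would come from the user record and viewedThisSession from session storage):

```javascript
// A topic is "unread" if it changed after the user's last activity and has
// not been viewed during this session.
function unreadTopics(topics, lastActivity, viewedThisSession) {
  return topics
    .filter(t => t.lastPostAt > lastActivity && !viewedThisSession.has(t.id))
    .map(t => t.id);
}
```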
On the custom forum I used to work with, we used a combination of your last visit time (updated every time you viewed another page - usually cookied), and a "mark read" button on each topic that added a date/time value to a SQL table containing your UserID, the TopicID and the Date/Time.
Thus to view new topics we would look at your last visit date and anything created after that point in time was a new topic.
Once you entered a topic, any topic you had clicked "mark read" on would show only the initial post plus any replies added after you clicked the mark-read button. If you have fewer viewers and performance to spare, you could instead add an entry to the table for every topic the user clicks on, at the moment they click it.
Another option you have, and I have actually seen this done before in a vBulletin installation, is to store a comma separated list of viewed topic ids client-side in a cookie.
Server-side, the only thing stored was the time of the user's previous visit. The forum system used this in conjunction with the information in the user's cookie to show 'as read' for any topic where either
Last modified date (ie last post) older than the user's previous visit
Topic ID found in the user's cookie as a topic the user has visited this session.
I'm not saying it's a good idea, but I thought I'd mention it as an alternative - the obvious way to do it has already been stated in other answers, ie store it server-side as a relation table (many to many table).
I guess it does have the advantage of putting less burden on the server of keeping that information.
The downsides are that it ties it to the session, so once a new session is started everything that occurred before the last session is considered 'already read'. Another downside is that a cookie can only hold so much information, and a user may view hundreds of topics in a session, so it approaches the storage limit of the cookie.
One more approach:
Make sure your stylesheet shows a clear difference between visited and non-visited links, taking advantage of the fact that browsers remember visited pages persistently.
For this to work, however, you'd need to have consistent URLs for topics, and most forum systems don't tend to do this. Another downside to this is that users may clear their history, or use more than one browser. This therefore puts this measure into the 'not highly reliable category'; you would probably just do this to augment whatever other measure you are using to track viewed topics.
