Starting from the Scopus ID of a scientist, how can I retrieve the time series of his h-index?
That is, how do I get the h-index as a function of time?
I need to do that in an automatic way, in Python, using the Scopus API (or a wrapper like pybliometrics), or any other API.
I can also use Orcid for this, since I can get the Orcid ID from the Scopus ID.
Unfortunately, Scopus does not provide a time series of the h-index; everything returned through Author Retrieval reflects only the latest state.
You would have to compute the h-index for each year on your own:
Retrieve the full publication list for the author via the Scopus Search API
For each publication, retrieve its yearly citation counts via the Citation Overview API
For each year, add up the citations each publication has received up to and including that year
Compute the h-index for each year from those cumulative per-paper counts (see the sketch below)
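A minimal sketch of the last two steps, assuming the publication years and per-year citation counts have already been collected (e.g. via pybliometrics' ScopusSearch and CitationOverview classes); the input format below is made up for illustration:

def h_index(citation_counts):
    # h = the largest h such that h papers have at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h_index_series(pubs, years):
    # pubs: list of dicts like {"year": 2018, "cites": {2018: 1, 2019: 3}}
    series = {}
    for y in years:
        cumulative = [
            sum(c for cite_year, c in p["cites"].items() if cite_year <= y)
            for p in pubs
            if p["year"] <= y  # only papers already published by year y
        ]
        series[y] = h_index(cumulative)
    return series

# Toy example: two papers accumulating citations over three years.
pubs = [
    {"year": 2018, "cites": {2018: 1, 2019: 3, 2020: 2}},
    {"year": 2019, "cites": {2019: 1, 2020: 1}},
]
print(h_index_series(pubs, range(2018, 2021)))  # {2018: 1, 2019: 1, 2020: 2}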
I've been scratching my head for a while trying to figure out how to paginate through the SP-API Orders API with FBA, and it's gotten a little frustrating trying to sort it out:
For example, this URI gives me the first page of orders for the US based on a "CreatedAfter" cutoff date:
https://sellingpartnerapi-na.amazon.com/orders/v0/orders?MarketplaceIds=ATVPDKIKX0DER&CreatedAfter=2023-01-26
My assumption is that I don't need to pass a "CreatedBefore" date on my first hit of the API; it will just give me the first page of orders, plus a "NextToken" and a "CreatedBefore" at the bottom of the payload. We have tried it both with and without a "CreatedBefore" on the first pass.
My assumption is that these two handy pieces of info in the payload go hand in hand to ensure that I'm truly getting "the next page," and not just a potentially random list of order IDs that happen to also fall within the date range. That way I could be sure my API feed is pulling all orders and wouldn't have to come back later and "backfill" missing orders through some inelegant process, which is the situation I'm currently in, because we can't get this pagination to work as expected.
The other assumption is that I'll eventually hit an "end" and stop getting a new "NextToken," which doesn't happen.
What actually happens when I use it as expected is that I never reach the end: I just keep getting the same second page over and over, with a new NextToken each time.
So instead of consistently hitting the same base URI with the "CreatedBefore" parameter pulled from the first payload and the "NextToken" in the header, our code currently queries our database for the most recent Amazon purchase date and hits the URL again with that as the "CreatedAfter" date. That definitely leaves us with holes, so I tried backing that date up by 10 seconds, and so on; it's a major thorn in my side trying to sort this out.
Initially, we tried to work this out in Postman, feeding "NextToken" back into the same API endpoint, before noticing we were just getting the same payload over and over.
As described above, our current code compromises by reading a fresh "CreatedAfter" date from the de-serialized MAX("PurchaseDate") of the orders we've already unpacked and loaded into our DB. This sort of works, but it leaves major holes in the data: a report on Amazon FBA can show that 40% of the Amazon order IDs aren't in our DB. We can then force those back in one at a time, or have our code re-run from our go-live date to the present looking for unprocessed orders in the payloads. We just don't have a precise method for safely paginating through a list of orders from "startdate" to "enddate."
According to the documentation:
If NextToken is present, that will be used to retrieve the orders instead of other criteria.
Don't modify any other fields when you submit the second request. Just insert the NextToken value that was returned by the first call and leave all the other fields the same.
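A minimal sketch of the loop the documentation describes, assuming a hypothetical helper call_orders_api(params) that handles signing/auth, calls /orders/v0/orders, and returns the decoded JSON response:

def fetch_all_orders(created_after):
    params = {
        "MarketplaceIds": "ATVPDKIKX0DER",
        "CreatedAfter": created_after,
    }
    orders = []
    while True:
        payload = call_orders_api(params)["payload"]
        orders.extend(payload.get("Orders", []))
        next_token = payload.get("NextToken")
        if not next_token:
            break  # no NextToken means this was the last page
        # Per the docs: leave every original parameter untouched and
        # only add/replace NextToken on follow-up requests.
        params["NextToken"] = next_token
    return orders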
I am using SharePoint's in-built version control as a means of identifying the current approved versions of procedures, so I get x.y as minor (draft) versions and x.0 as the major (approved) versions. This bit works fine.
Our ISO 9001 auditor has asked how we would demonstrate that quality procedures have been reviewed. There will no doubt be many instances for our company in the future where procedures are around ten to twenty years old but have not needed changing. Unfortunately, this would give the impression that there hasn't been a review of the document.
A clumsy solution would be to keep a record of all the documents with a review date in a spreadsheet and record when each review has been done. It would be far easier for me if SharePoint could look after this for us. However, what I don't want to do is have to check out a document to review it and then have to publish it as the next major version as proof that it has been reviewed.
My question is: is there a way in SharePoint's version control to record that a review of a document has been conducted but no changes made?
I hope my question is clear, but if you have specific questions please ask. Many thanks
Add another field to your document library. Call it Reviewed and make it a Yes/No type.
Edit the document properties when you perform a review and update the field accordingly.
Add a second field of type DateTime; when editing the document properties, insert the date of the review.
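If you want to update those properties programmatically rather than through the UI, something like the following sketch with the Office365-REST-Python-Client library might work; the site URL, library title, file name and field names are all placeholders, and the exact calls may differ between library versions:

from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext

# All URLs, titles, credentials and field names below are placeholders.
ctx = ClientContext("https://contoso.sharepoint.com/sites/quality").with_credentials(
    UserCredential("user@contoso.com", "password")
)
lib = ctx.web.lists.get_by_title("Procedures")
items = lib.items.filter("FileLeafRef eq 'QP-001.docx'").get().execute_query()

item = items[0]
item.set_property("Reviewed", True)            # the Yes/No review flag
item.set_property("ReviewDate", "2024-01-15")  # the DateTime review field
item.update().execute_query()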
This question is complementary to Retrieving more than 150 Instagram comments and a repetition of this older post in the group.
Currently it appears to be impossible to retrieve a full list of likes or comments for a specific post. There are no documented pagination parameters, and it is unclear how one could paginate over likes as they have no publicly exposed timestamps or time-related identifiers.
At the very least the developer documentation on http://instagram.com/developer/endpoints/comments/ and http://instagram.com/developer/endpoints/likes/ should be amended to mention that it is not possible to get a full list of either comments or likes.
Are there any workarounds for this, or plans to support pagination for the comments and likes endpoints?
If no such plans exist, how about allowing for control over the ordering of results? This would at least allow for new entries to be retrieved with reasonable confidence.
At the moment, it looks like the /likes endpoint returns results in newest to oldest order, but unfortunately the comments endpoint uses oldest to newest.
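For what it's worth, a hedged workaround sketch in Python: poll the endpoint periodically and de-duplicate by comment ID, leaning on the oldest-to-newest ordering noted above. The endpoint path follows the legacy v1 documentation linked earlier, and token handling is simplified:

import requests

seen_ids = set()

def fetch_new_comments(media_id, access_token):
    # Legacy v1 endpoint from the docs linked above.
    url = "https://api.instagram.com/v1/media/%s/comments" % media_id
    resp = requests.get(url, params={"access_token": access_token})
    resp.raise_for_status()
    new = []
    for comment in resp.json().get("data", []):
        if comment["id"] not in seen_ids:  # skip anything seen on a previous poll
            seen_ids.add(comment["id"])
            new.append(comment)
    return new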
I'm developing a Node.js & Express & Backbone & MongoDB app. I have a list of items fetched via URL.
In that list I want to put a select to sort the items by price. I don't know which is the best way to do this sorting: maybe I should put links in the select with a specific URL like [URL]&sort=1 and fetch it the way I fetch all the items, or maybe I could skip the link and fetch the collection from Backbone in another, more optimal way?
Thanks, regards
My view on this is as follows. Unless you must, avoid doing sorting on the server, but do it on the client. It is far more responsive that way. If you have 20 items, and want to sort by name (ascending or descending), quantity in inventory (ascending or descending), or price (same thing), far better to do it on the client.
When you must, is when you have 1,000 items. You simply will not load 1,000 items in a call and then sort them. You probably retrieve around 30-50 at a time, depending on the size of each item. So you need to have a way to tell the server, "give me the following 30 items" or, more accurately, "give me 30 items beginning at point X."
There are lots of ways to do it, even within REST. In general, you are best off passing the starting point and count via query parameters, as you did. So if you are getting widgets, and your API is
GET /widget
then you would have several query fields: field (which field you are sorting on), order (one of asc or des), count (an integer) and start (which point to start, either a record ID or an index).
That gives the following:
GET /widget?field=name&count=30&start=100&order=asc // get widgets sorted by field in ascending order, starting with the 100th widget and going for 30 widgets
GET /widget?field=price&count=20&start=225&order=desc // get widgets sorted by price from highest to lowest (descending order), starting with the 225th widget and going for 20 widgets
There are variants on this, e.g. instead of start and count you might do start and end. I have seen order called sort. But the idea is the same: those four fields give you all your server needs to know to retrieve a defined set of items.
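To make the scheme concrete, here is a minimal server-side sketch in Python/Flask (an arbitrary choice for illustration, not the question's stack) that honors all four parameters:

from flask import Flask, jsonify, request

app = Flask(__name__)

# A stand-in dataset; in practice this would come from your database.
WIDGETS = [{"name": "widget-%d" % i, "price": i * 1.5} for i in range(1000)]

@app.route("/widget")
def list_widgets():
    field = request.args.get("field", "name")   # which field to sort on
    order = request.args.get("order", "asc")    # asc or desc
    count = int(request.args.get("count", 30))  # page size
    start = int(request.args.get("start", 0))   # index to start from
    items = sorted(WIDGETS, key=lambda w: w[field], reverse=(order == "desc"))
    return jsonify(items[start:start + count])

# e.g. GET /widget?field=price&count=20&start=225&order=desc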
So Amazon has lots of different APIs for different things, and it's hard to find the one I'm looking for.
I have a client that sells things and checks Amazon's lowest price to decide how to price their own items (slightly under the lowest offer there). They want functionality integrated into their inventory system that would automatically find a product's lowest price on Amazon and display it. I was wondering which AWS service is best suited to this task.
I see the Product Advertising API, and that looks like the closest thing right now. Is that so?
I don't really want to rely on a scraper when Amazon provides a programmatic interface to this information somewhere, which I know they do because many other products have this. Some say that they can just download a dump of Amazon's products and use that locally -- I'm open to that option too if anyone can point me in its direction.
Yes, the technically appropriate API is the Product Advertising API, using the ItemLookup/ItemSearch operations or the Seller* operations.
https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html
I would also advise you to check the licensing agreement for this API, notably clause 4 (i).
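As a hedged sketch, an ItemLookup call through the bottlenose wrapper (a thin Python client for this API) might look like the following; the credentials and ASIN are placeholders, and the response is raw XML you still need to parse:

import bottlenose

# Placeholder credentials; a valid Associates account is required.
amazon = bottlenose.Amazon(
    "AWS_ACCESS_KEY", "AWS_SECRET_KEY", "ASSOCIATE_TAG", Region="US"
)

# The OfferSummary response group contains the LowestNewPrice element.
xml_response = amazon.ItemLookup(ItemId="B00EXAMPLE", ResponseGroup="OfferSummary")
print(xml_response)  # raw XML; parse with e.g. xml.etree.ElementTree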
You can use the Amazon Marketplace Web Service (api, description)
This service can group all of the available offers into ‘buckets’ and show the lowest price from each bucket.
Each bucket has a unique combination of:
Sub-Condition (New, Like New, Very Good, Good, Acceptable)
FulfillmentChannel (FBA or Merchant-Fulfilled)
ShipsDomestically (True, False, Unknown)
ShippingTime (0-2 days, 3-7 days, 8-13 days, 14 or more days)
SellerPositiveFeedbackRating (98-100%, 95-97%, 90-94%, 80-89%, 70-79%, Less than 70%, Just launched)
Someone made a really cool demo of the API here
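With the python-amazon-mws wrapper, a request for those lowest-offer buckets might look like the sketch below; the exact method and parameter names vary between wrapper versions, so treat them as assumptions:

from mws import mws

# Placeholder credentials and ASIN.
products_api = mws.Products(
    access_key="AWS_ACCESS_KEY",
    secret_key="AWS_SECRET_KEY",
    account_id="SELLER_ID",
    region="US",
)

# GetLowestOfferListingsForASIN returns the lowest offer in each bucket.
response = products_api.get_lowest_offer_listings_for_asin(
    marketplaceid="ATVPDKIKX0DER",  # US marketplace
    asins=["B00EXAMPLE"],
    condition="New",
)
print(response.parsed)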
You cannot retrieve Amazon's entire product catalog through the API. Amazon has placed restrictions on its usage so that it is relevant mostly to the advertising use case.
I wrote a small Python module to achieve such a task: https://github.com/iMilnb/awstools/blob/master/mods/awsprice.py
Basically, it fetches the prices from Amazon's website and converts them into a nice, parsable Python dict.
I wrote two example functions that show how to use the resulting dict to dump an instance's prices for various terms, along with a CSV converter.
There is a reply to a similar question which lists all the .js files containing the prices; these are basically JSON files (with only a callback(...); statement to remove).
Here is an example for Linux On-Demand prices: http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js
(Get the full list directly on that reply)
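A small sketch of that approach: fetch one of the .js files, strip the callback(...) wrapper, and parse what remains, assuming (as stated above) that the wrapped payload is valid JSON:

import json
import re
import requests

URL = "http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js"

def fetch_prices(url=URL):
    body = requests.get(url).text
    # The file body looks like: callback({...});  -> keep only the {...} part.
    match = re.search(r"callback\((.*)\);?\s*$", body, re.DOTALL)
    return json.loads(match.group(1))

prices = fetch_prices()
print(list(prices))  # top-level keys of the pricing dict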