Sort an artist's albums by release date with the Spotify Web API - spotify

The title kinda describes everything.
I would love to know if it's possible to get all of the data ordered by release date from https://api.spotify.com/v1/artists/{id}/albums
I see that it responds with albums ordered by release date, then with singles ordered by release date, and so on for the album, single, compilation, and appears_on groups.
But I was wondering if it was possible to get all of them ordered only by release date, and not in individual groups.
I would prefer to not do any processing after receiving the data.
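For reference, this is roughly the post-processing I'd hope to avoid (a sketch assuming the include_groups query parameter and the release_date field on each returned item; pagination is omitted):

```javascript
// Sketch: fetch all groups in one call, then sort the combined list ourselves.
// Assumes Node 18+ (global fetch), the include_groups query parameter, and a
// release_date field on each item; adjust if the response shape differs.
async function getAlbumsSortedByReleaseDate(artistId, accessToken) {
  const url = `https://api.spotify.com/v1/artists/${artistId}/albums` +
              `?include_groups=album,single,compilation,appears_on&limit=50`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { items } = await res.json();

  // release_date can be "YYYY", "YYYY-MM" or "YYYY-MM-DD"; plain string
  // comparison still orders those chronologically.
  return items.sort((a, b) => a.release_date.localeCompare(b.release_date));
}
```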

Related

Getting a field from a large number of Firebase documents without it being too costly

I'm working on a project that will have a large number (thousands, possibly millions) of documents in a Firebase collection. I need to access the average value per day of documents of the same type; each one has a "registered_value", a "date", and a "code" field to identify its value, type, and registered date.
I need to show users the average value by day of the documents that have the same code.
Users can add new documents, edit existing ones, or delete the ones they created.
Since I need to get this data frequently, it would be too expensive to read the entire collection every time a user loads the pages that display this info. Is there a better way to store or get the average?
I'm working with ReactJS and Node.js
There's nothing built into Firestore to get aggregated values like that. The common approach is to store the aggregated value in a separate document somewhere, and update that document upon every relevant write operation. You can do this either from client-side code, or from Cloud Functions.
For more on this, see the Firebase documentation on aggregation queries and on distributed counters.
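A rough sketch of the Cloud Functions approach, keeping a running sum and count per code and day in a separate document (collection and field names here are only for illustration, and edits/deletes would need similar handlers):

```javascript
// Sketch: maintain a per-code, per-day sum and count in a separate "averages"
// collection, so reads never have to scan the raw documents.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.updateDailyAverage = functions.firestore
  .document('readings/{readingId}')
  .onCreate(async (snap) => {
    const { code, registered_value, date } = snap.data();
    // Assumes "date" is stored as a Firestore Timestamp.
    const day = date.toDate().toISOString().slice(0, 10); // e.g. "2024-01-31"

    const aggRef = admin.firestore().doc(`averages/${code}_${day}`);
    await aggRef.set({
      sum: admin.firestore.FieldValue.increment(registered_value),
      count: admin.firestore.FieldValue.increment(1),
    }, { merge: true });
    // Clients then read averages/{code}_{day} and compute sum / count.
  });
```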

How can I aggregate data in Time Series Insights preview using the hierarchy?

I am storing 15 minute electricity consumption measurements in a TSI preview environment. Is it possible to aggregate the total energy consumption per day of multiple meters using the TSI query API?
I have configured a hierarchy as Area-Building and the Time Series ID is the 'MeterId' of the Meter.
The query API (https://learn.microsoft.com/en-us/rest/api/time-series-insights/preview-query#aggregate-series-api) enabled me to aggregate consumption per day for a single meter. I then expected to find an API to aggregate the electricity consumption to Building and Area level, but could only find the aggregate operation with a single "timeSeriesId" or "timeSeriesName" as a required parameter. Is aggregation to a level in the hierarchy not possible? If not, what would be a good alternative (within or outside TSI) to obtain these aggregated values?
What you can do is get all the instances you need with the search API (docs). (Mind that the documentation is wrong for the URL: it should contain "search" instead of "suggest".)
Then loop through the instances you get in the response and call the aggregates API by ID, one by one. Finally, sum the results yourself to get a daily result for all the telemetry sensors matching your search.
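Roughly what that loop could look like is sketched below. The endpoint paths and payload shapes are my approximation of the preview API from the linked docs, so double-check them against the reference before relying on this:

```javascript
// Sketch: search for matching instances, then call aggregateSeries per
// instance and sum the daily totals yourself. Request/response shapes are
// approximations of the preview API and should be verified.
async function dailyTotal(environmentFqdn, token, searchString, from, to) {
  const headers = { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' };
  const apiVersion = 'api-version=2018-11-01-preview';

  // 1. Find all meter instances matching the search (e.g. a building name).
  const search = await fetch(
    `https://${environmentFqdn}/timeseries/instances/search?${apiVersion}`,
    { method: 'POST', headers, body: JSON.stringify({ searchString }) }
  ).then(r => r.json());

  // 2. Aggregate per instance, sequentially (mind the concurrent-call limit).
  const totals = new Map(); // day -> summed consumption
  for (const hit of search.instances.hits) {
    const agg = await fetch(
      `https://${environmentFqdn}/timeseries/query?${apiVersion}`,
      {
        method: 'POST',
        headers,
        body: JSON.stringify({
          aggregateSeries: {
            timeSeriesId: hit.timeSeriesId,
            searchSpan: { from, to },
            interval: 'P1D',
            inlineVariables: {
              consumption: {
                kind: 'numeric',
                value: { tsx: '$event.consumption.Double' },
                aggregation: { tsx: 'sum($value)' },
              },
            },
            projectedVariables: ['consumption'],
          },
        }),
      }
    ).then(r => r.json());

    // 3. Sum the per-meter daily values into the overall result.
    agg.timestamps.forEach((t, i) => {
      const day = t.slice(0, 10);
      totals.set(day, (totals.get(day) || 0) + agg.properties[0].values[i]);
    });
  }
  return totals;
}
```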
Note: you can only make 9 aggregate calls at the same time (limitations).
I hope they fix aggregates soon; in the meantime I hope this helps you out.
Good luck,

Truncate feeds in GetStream

I would like to limit the number of feed updates (records) in my GetStream app. I want to keep each feed at a constant length of 500 items.
I make heavy use of the 'to:' field, which results in a lot of feeds of different lengths. I want them all to grow to 500 items, so I would rather not remove items by date.
For what it's worth, I store all the updates in my own database which results in a replica of the network activity.
What would be a good way of keeping my feeds short?
There's no straightforward way to limit your feeds to 500 items. There are 2 ways to remove activities from Stream:
the removeActivity method, which will remove 1 activity at a time via the foreign_id or activity id (https://getstream.io/docs/js/#removing-activities)
the "Truncate Data" button on the dashboard for your app, which will remove all activities in Stream.
It might be possible to get the behavior you're looking for by keeping track of all activities that you're adding to Stream, then periodically culling the ones that put you over 500.
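Since you already keep a replica of the activity in your own database, the culling could look roughly like this (the db.* helpers are placeholders for your own storage; removing by foreign_id uses the removeActivity call from the docs linked above):

```javascript
// Sketch: periodically trim each feed back to 500 items, using our own copy
// of the activities to decide which ones are over the limit.
const stream = require('getstream');
const client = stream.connect(process.env.STREAM_KEY, process.env.STREAM_SECRET);

async function trimFeed(feedGroup, feedId, maxItems = 500) {
  // Our replica of the feed's activities, newest first (placeholder helper).
  const activities = await db.getActivitiesForFeed(feedGroup, feedId);
  const excess = activities.slice(maxItems);

  const feed = client.feed(feedGroup, feedId);
  for (const activity of excess) {
    // removeActivity accepts an activity id, or { foreignId } to remove by foreign_id.
    await feed.removeActivity({ foreignId: activity.foreign_id });
    await db.deleteActivity(activity.id); // keep the replica in sync (placeholder)
  }
}
```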
Hopefully this helps!

CQRS and Event Sourcing Query Historical Data

When using CQRS and Event Sourcing, how does one query historical data? As an example, if I am building a timesheet system that has a report for revenue, I need to query against hours, pay rate, and bill rate for each employee. There is an EMPLOYEE_PAY_RATE table that has EmployeeID, PayRate, and EffectiveDate, as well as a BILL_RATE table which has ClientID, EmployeeID, Rate, and EffectiveDate. The effective date in those tables basically keeps the running history so we can report accurately.
If we were to take the DDD, CQRS, and Event Sourcing route, how would we generate such a report? It's not like we can query the event store in the same way. I've looked at frameworks like Axon but am not sure whether they would allow us to do what we need from a reporting perspective.
When using CQRS and Event Sourcing, how does one query historical data?
Pretty much the same way you query live data: you build the views that you want from the event history, and then query the views for the data that you want.
To borrow your example - your view might be supported by an EMPLOYEE_PAY_RATE table and a BILL_RATE table. Replay your events and, as something interesting happens, update the appropriate table. Ta-da.
An important idea that may not be obvious - for something low-latency like a history report, you'll probably want the historical aggregator to pull the events from the event store, rather than having a bus push events to the aggregator. The pull approach makes it a lot easier to keep track of where you are, so that you don't need to repeat a lot of work, or worry about whether you've received all of the events you should, ordering, and so on.
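Sketched as code, the pull-based aggregator is just a loop that remembers how far it has read (all names here are hypothetical):

```javascript
// Sketch of a pull-based projector: read events from the store in order,
// update the report tables, and persist a checkpoint so a restart can resume
// without reprocessing everything. eventStore/reportDb are hypothetical.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function runReportProjector(eventStore, reportDb) {
  let position = await reportDb.loadCheckpoint('revenue-report'); // last processed position

  while (true) {
    const events = await eventStore.readAllForward(position, 100); // pull, don't wait for a push
    if (events.length === 0) { await sleep(1000); continue; }

    for (const event of events) {
      await applyToReportTables(event, reportDb); // e.g. update EMPLOYEE_PAY_RATE rows
      position = event.position;
    }
    await reportDb.saveCheckpoint('revenue-report', position);
  }
}
```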
Your report is just another read model/projection of the events - for example a SQL table that is populated by listening to the relevant events.
If the table is big (i.e. a lot of employees), then in order to be fast you should avoid joins by keeping the data denormalized; so, for every employee and day (or whatever granularity you want) you would have a row in a table containing the employee ID and name, the start date and end date of the day, and other columns with relevant data, e.g. the pay rate. You put the employee name here as well in order to avoid joins, and you keep it up to date by listening to the relevant employee events (like EmployeeChangedName).
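For example, the projector behind such a denormalized table might look roughly like this (event names and the SQL layout are made up to match the example above):

```javascript
// Sketch of event handlers that keep a denormalized daily_report table
// up to date. Event names, columns, and db.query are illustrative only.
async function handle(event, db) {
  switch (event.type) {
    case 'HoursLogged':
      await db.query(
        `INSERT INTO daily_report (employee_id, employee_name, day, hours, pay_rate, bill_rate)
         VALUES ($1, $2, $3, $4, $5, $6)
         ON CONFLICT (employee_id, day)
         DO UPDATE SET hours = daily_report.hours + EXCLUDED.hours`,
        [event.employeeId, event.employeeName, event.day,
         event.hours, event.payRate, event.billRate]);
      break;
    case 'EmployeePayRateChanged':
      // The new rate applies from the effective date onward.
      await db.query(
        `UPDATE daily_report SET pay_rate = $1
         WHERE employee_id = $2 AND day >= $3`,
        [event.payRate, event.employeeId, event.effectiveDate]);
      break;
    case 'EmployeeChangedName':
      // The name is denormalized into every row to avoid joins at read time.
      await db.query(
        `UPDATE daily_report SET employee_name = $1 WHERE employee_id = $2`,
        [event.newName, event.employeeId]);
      break;
  }
}
```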

What's the proper way to keep track of changes to documents so that clients can poll for deltas?

I'm storing key-value documents in a MongoDB collection, while multiple clients push updates to this collection (by posting to an API endpoint) at a very fast pace (updates will come in faster than once per second).
I need to expose another endpoint so that a watcher can poll all changes, in delta format, since last poll. Each diff must have a sequence number and/or timestamp.
What I'm thinking is:
For each update I calculate a diff and store it.
I store each diff in a Mongo collection, with the current timestamp (using a Node Date object).
On each poll for changes: get all diffs from the collection, delete them, and return them.
The questions are:
Is it safe to use timestamps for sequencing changes?
Should I be using Mongo to store all diffs as changes are coming or some kind of message queue would be a better solution?
thanks!
On each poll for changes: get all diffs from the collection, delete them and return.
This sounds terribly fragile. What if the client didn't receive the data (it crashed or the network dropped in the middle of receiving the response)? It retries the request, but oops, it doesn't see anything. What I would do is have the client remember the last version it saw and ask for updates like this:
GET /thing/:id/deltas?after_version=XYZ
When it receives a new batch of deltas, it gets the last version of that batch and updates its cached value.
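In code, that endpoint could look something like this - a sketch using Express and the official mongodb driver, with made-up route, collection, and field names, and the diff's _id doubling as the version:

```javascript
// Sketch: return all deltas recorded after the version the client last saw.
// Using the _id ObjectId as the version works because ObjectIds sort roughly
// by creation time. Route and collection names are illustrative.
const express = require('express');
const { MongoClient, ObjectId } = require('mongodb');

const app = express();
const mongo = new MongoClient(process.env.MONGO_URL);

app.get('/thing/:id/deltas', async (req, res) => {
  const deltas = mongo.db('app').collection('deltas');
  const { after_version } = req.query;

  const filter = { thingId: req.params.id };
  if (after_version) filter._id = { $gt: new ObjectId(after_version) };

  const batch = await deltas.find(filter).sort({ _id: 1 }).toArray();
  // Nothing is deleted on read, so a failed or retried request is harmless;
  // the client just asks again with the same after_version.
  res.json({
    deltas: batch,
    last_version: batch.length ? batch[batch.length - 1]._id : after_version,
  });
});

mongo.connect().then(() => app.listen(3000));
```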
Is it safe to use timestamps for sequencing changes?
I think so. The ObjectId already contains a timestamp, so you might just use that - no need for a separate time field.
Should I be using Mongo to store all diffs as changes are coming or some kind of message queue would be a better solution?
Depends on your requirements. Mongo should work well here. Especially if you'll be cleaning old data.
at a very fast pace (updates will come in faster than once per second)
By modern standards, 1 per second is nothing. 10 per second - same. 10k per second - now we're talking.
