Truncate feeds in GetStream

I would like to limit the number of feed updates (records) in my GetStream app. I want to keep each feed at a constant length of 500 items.
I make heavy use of the 'to:' field, which results in a lot of feeds of different lengths. I want them all to be able to grow to 500 items, so I would rather not remove items by date.
For what it's worth, I store all the updates in my own database, which gives me a replica of the network activity.
What would be a good way of keeping my feeds short?

There's no straightforward way to limit your feeds to 500 items. There are two ways to remove activities from Stream:
the removeActivity method, which removes one activity at a time via its foreign_id or activity id (https://getstream.io/docs/js/#removing-activities)
the "Truncate Data" button on your app's dashboard, which removes all activities in Stream.
It might be possible to get the behavior you're looking for by keeping track of all activities that you're adding to Stream, then periodically culling the ones that put you over 500.
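If you go down that road, the culling itself is easy to script with the JS client. Here is a minimal sketch, assuming the Node server-side client; trimFeed, the feed group/id and the 500-item cutoff are illustrative, and you should check Stream's pagination limits before relying on deep offset reads:

```javascript
// Illustrative only: trimFeed, the feed group/id and KEEP are not Stream APIs.
const stream = require('getstream');

const client = stream.connect('api_key', 'api_secret');
const KEEP = 500;

async function trimFeed(feedGroup, feedId) {
  const feed = client.feed(feedGroup, feedId);

  // Read the activities sitting past the cutoff (repeat with a larger offset
  // if more than 100 activities overflow).
  const { results } = await feed.get({ limit: 100, offset: KEEP });

  // removeActivity deletes one activity per call, by id or { foreignId: ... }.
  for (const activity of results) {
    await feed.removeActivity(activity.id);
  }
}

// e.g. call this from a scheduled job after you write new activities
trimFeed('timeline', 'user42').catch(console.error);
```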
Hopefully this helps!

Related

Is it better to prewrite the dashboard data or fetch and do the calculation on demand?

So the thing is that I have some data in my MongoDB that I want to present in a dashboard,
and it's taking some time to fetch the selected documents from different collections and do the calculations needed to send the results back to the client.
So I had this idea to pre-write the required data, in the required format, in a dedicated collection, and whenever the client asks for the dashboard I just fetch its data directly, so that I don't have to wait to fetch data across different collections and do the calculations when they ask for it.
By the way, this data is not updated frequently… let's say about 100 updates max per day.
Does this idea sound right, or does it have some drawbacks that I didn't think about?
Thank you in advance.
That's caching; your idea sounds just right.
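For what it's worth, the pre-write can often be a single aggregation that writes into the dedicated collection. A rough sketch with the Node.js driver; the orders and dashboard_stats collections and their fields are made-up examples, and $merge requires MongoDB 4.2+:

```javascript
// Sketch of the pre-computation idea; collection and field names are illustrative.
const { MongoClient } = require('mongodb');

async function refreshDashboard(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db('app');

  // Recompute the dashboard figures and write them into a dedicated collection.
  await db.collection('orders').aggregate([
    { $group: { _id: '$status', total: { $sum: '$amount' }, count: { $sum: 1 } } },
    { $merge: { into: 'dashboard_stats', whenMatched: 'replace', whenNotMatched: 'insert' } },
  ]).toArray();

  await client.close();
}

// Run this from a cron job (or after each of the ~100 daily updates);
// the dashboard endpoint then just reads dashboard_stats instead of recomputing.
```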

Events in Azure Search

Is there a way to attach webhooks or get events from Azure Search?
Specifically, we are looking for a way to get notified (programmatically) when an indexer completes indexing an index.
Currently, there are no such events. However, you can implement functionality like this yourself. There are several scenarios to consider, but basically you have two main approaches to adding content: either define a content source and use pull, or use the API to push content to the index.
The simplest scenario is when you use push via the API to add a single item. You could create a wrapper method that both submits your item and then queries the index until that item is found. Your wrapper method would need to either call a callback or fire an event. To support updates on an item, you would need a marker on the item, such as a timestamp property indicating when the item was submitted to the index, or a version number, so that you can distinguish the new item from the old.
A more complex scenario is when you handle batches or larger volumes of content. Assuming you start from scratch and your corpus is 100,000 items, you could query until the count matches 100,000 items before you fire your event. To handle updates, the best approach is to use some marker. E.g. you submit a batch of 100 updates at 2020-08-18 09:58. You could then query the index, filtering for items updated after the timestamp at which you submitted your content. Once the count from your query matches 100, you can fire your event.
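For illustration, the timestamp-filtered count check could look roughly like this with the @azure/search-documents client; the index name, the lastSubmitted field and the polling interval are assumptions, not something Azure Search prescribes:

```javascript
// Sketch of polling for a submitted batch; field names and thresholds are assumptions.
const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');

const client = new SearchClient(
  'https://<service>.search.windows.net',
  '<index-name>',
  new AzureKeyCredential('<query-key>')
);

// Poll until `expected` documents with lastSubmitted >= batchTime are searchable.
async function waitForBatch(batchTime, expected, intervalMs = 5000) {
  for (;;) {
    const results = await client.search('*', {
      filter: `lastSubmitted ge ${batchTime.toISOString()}`,
      includeTotalCount: true,
      top: 1, // we only need the count, not the documents
    });
    if (results.count >= expected) return; // fire your event / callback here
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```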
You would also need to handle indexing errors or exceptions when submitting content in these scenarios.
For pull scenarios, your best option is to define a skill that adds a timestamp to items. You could then poll the index with a query, filtering for content with a timestamp later than the point at which indexing started, and then fire your event.

Pulling more than 10 records for proof of delivery API from NetSuite

Is there any way we can pull more than 10 records from the WorkWave POD API?
Whenever I call the WorkWave API through a Map/Reduce script, it gives me an error message telling me to slow down. Has anyone run into this, and how did they manage to work around it?
Thanks
If you're using the List Orders or Get Orders API, there is a throttling limit - "Leaky bucket (size: 10, refill: 1 per minute)". However, both of those APIs allow retrieving multiple orders in a single call. My suggestion would be to restructure your script so that, instead of making the call to WorkWave in the Reduce stage for a single order, you make it in the Get Input Data stage for all the orders you want to operate on, and map the relevant data to the corresponding NetSuite data in the Map stage before passing it through to the Reduce stage.
In other words, you make one call listing multiple order ids rather than multiple calls listing one order id.
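A rough SuiteScript 2.1 sketch of that restructuring is below; the WorkWave URL, headers and response shape are placeholders for whatever their actual API expects, not the real endpoints:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
// Sketch only: the WorkWave URL, header and response shape below are placeholders.
define(['N/https'], (https) => {
  const getInputData = () => {
    // One call listing every order id, instead of one call per order.
    const response = https.get({
      url: 'https://api.workwave.example/pod/v1/orders?ids=1001,1002,1003',
      headers: { 'X-WorkWave-Key': 'YOUR_KEY' },
    });
    return JSON.parse(response.body).orders; // one element per order
  };

  const map = (context) => {
    // Pair each WorkWave order with the NetSuite record it belongs to.
    const order = JSON.parse(context.value);
    context.write({ key: order.id, value: order });
  };

  const reduce = (context) => {
    // Update the corresponding NetSuite records here; no WorkWave calls needed.
  };

  return { getInputData, map, reduce };
});
```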

Create flat feeds in batch process

I want to know if we can create flat feeds in a batch process.
Let me give you some context: We want to create feeds for university students based on the courses they are taking. We want them to see feeds for every course that is available at their campus. We have the list of those courses in our MongoDB. Is it possible to create a flat feed for each course on that list in a batch process? There are more than 5000 courses in total.
Definitely. Also, until you actually put activities into feeds, nothing is created.
The feed group should be created in your dashboard, for example course in this case. The actual feed, for example math-101, then comes into existence when you push data to it.
Reading a nonexistent feed simply returns an empty result set (assuming access policies permit it):
nonexisting:math-101: error, since the feed group is missing
nonexisting:nonexisting: error, as above
course:nonexisting: empty response
course:math-101: your data
Finally, if you're ingesting a lot of data, there is an import mechanism to process your data efficiently.
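As a sketch of what that batch could look like with the Node client (the MongoDB collection, the slug field and the activity payload are assumptions; for 5000+ courses you may prefer the import mechanism mentioned above):

```javascript
// Sketch: "creating" a flat feed per course is just writing the first activity
// to it. The MongoDB collection/fields and the activity shape are assumptions.
const stream = require('getstream');
const { MongoClient } = require('mongodb');

async function seedCourseFeeds() {
  const client = stream.connect('api_key', 'api_secret');
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const courses = await mongo.db('university').collection('courses').find().toArray();

  for (const course of courses) {
    // The 'course' feed group must already exist in the dashboard;
    // the feed id (e.g. 'math-101') is created implicitly on first write.
    await client.feed('course', course.slug).addActivity({
      actor: 'system',
      verb: 'open',
      object: `course:${course.slug}`,
    });
  }
  await mongo.close();
}
```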

Pagination after reindex in Azure Search

I am new to Azure Search and I am not sure I understand one important thing about it.
Let's imagine the situation where I, as a client, am scrolling down through the results of my search query "New Y". I have 1000 elements, and every page contains 10 of them. But while I am scrolling, a reindex operation starts and some elements change their position because of new updates in the data source (Azure Table).
Will I see the next pages during my scrolling after the reindex, possibly with some duplicated data, or will I still see the old "snapshot" of the data I was scrolling through before?
You'll see the changes as you execute subsequent requests. To Azure Search each request is independent and it represents a new search (caching aside), which for paging scenarios just happens to have a different "skip" number.
This means that if your data is changing you might see an item more than once (if it moves across pages due to changes) or even skip one (if it moves from a page you didn't see yet to a page you already saw).
There's no way to get a strictly consistent view of search matches outside of a single result. If you need to approximate this behavior, you can request a larger page (using "top"), cache the results and present them in chunks. We find that in practice this is rarely needed for most search scenarios, but if search is backing a part of an app that needs consistency, you might need to do something along those lines.
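A sketch of that approximation with @azure/search-documents: request a bigger page once, cache it, and slice it into the chunks the UI shows. The index name, page sizes and the query are illustrative:

```javascript
// Sketch of "fetch a larger page, then serve it in chunks"; names are illustrative.
const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');

const client = new SearchClient(
  'https://<service>.search.windows.net',
  'places',
  new AzureKeyCredential('<query-key>')
);

// Grab 100 matches up front, then page through the cached array locally,
// so a reindex mid-scroll cannot duplicate or drop items within that window.
async function searchWindow(query) {
  const cached = [];
  const results = await client.search(query, { top: 100, skip: 0 });
  for await (const result of results.results) {
    cached.push(result.document);
  }
  return (pageNumber, pageSize = 10) =>
    cached.slice(pageNumber * pageSize, (pageNumber + 1) * pageSize);
}

// const getPage = await searchWindow('New Y');
// getPage(0); // first 10 cached results
```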
