GetStream - Pin an activity in a timeline feed to list pinned posts at the top of the feed

In GetStream, how do I pin an activity so that pinned activities are listed first in the timeline feed?

You can come up with many different score functions; here is one of them:
{"score": "decay_linear(time) + is_pinned", "defaults": {"is_pinned": 0}}
By default, activities are ranked by time, since is_pinned defaults to zero. When you want to pin activities, update them so that the custom is_pinned field is set to 1, which lifts them higher in the ranking.
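To see why this works, here is a minimal local sketch of how that score function behaves. The `decayLinear` function below is an illustrative stand-in for Stream's built-in `decay_linear(time)` (which runs server-side); the assumption is only that it maps activity age onto a 0–1 range, newest ≈ 1:

```javascript
// Simulate the ranked-feed score: decay_linear(time) + is_pinned.
// decayLinear is a hypothetical local stand-in for Stream's built-in.
function decayLinear(ageHours, maxAgeHours = 48) {
  return Math.max(0, 1 - ageHours / maxAgeHours); // 1 = brand new, 0 = too old
}

function score(activity) {
  return decayLinear(activity.ageHours) + (activity.is_pinned || 0);
}

const feed = [
  { id: 'a', ageHours: 1, is_pinned: 0 },
  { id: 'b', ageHours: 30, is_pinned: 1 }, // old, but pinned
  { id: 'c', ageHours: 5, is_pinned: 0 },
];

const ranked = feed.slice().sort((x, y) => score(y) - score(x));
console.log(ranked.map((a) => a.id)); // pinned activity 'b' ranks first
```

Because the decay term never exceeds 1, adding 1 for `is_pinned` is always enough to lift a pinned activity above every unpinned one.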

Related

How to ensure data consistency between two different aggregates in an event-driven architecture?

I will try to keep this as generic as possible using the “order” and “product” example, to try and help others that come across this question.
The Structure:
In the application we have 3 different services: 2 services that follow the event-sourcing pattern, and one read-only service, giving us separation between our read and write views:
- Order service (write)
- Product service (write)
- Order details service (Read)
The Background:
We are currently storing the relationship between the order and product in only one of the write services, for example within order we have a property called ‘productItems’ which contains a list of the aggregate Ids from Product for the products that have been added to the order. Each product added to an order is emitted onto Kafka where the read service will update the view and form the relationships between the data.
 
The Problem:
Because we load the order and the product by aggregate Id in order to update them, if a product is deleted there is no way to disassociate the product from the order on the write side.
 
This in turn means we have an inconsistency: the order holds a reference to a product that no longer exists within the product service.
The Ideas:
- Master the relationship on both sides, which means when the product is deleted, we can look at the associated orders and trigger an update to remove the product from each order (this would cause duplication of the reference).
- Create another view of the data that shows the relationships and use a saga to do a clean-up. When a delete is triggered, it will look up the view database, see the relationships within the data, and then trigger an update for each of the orders that have the product associated.
- Does it really matter having the inconsistency if the read service shows the correct information? Because the view database will consume the product-deleted event, it will be able to safely remove the relationship, which means clients will get the correct view of the data even if the write models appear inconsistent. Based on the order of the events, the state will always appear correct in the read view.
- Another thought: as a deleted aggregate Id should never be reused, checks on the aggregate such as "is this product in the order already?" will never trigger, meaning the inconsistency should not cause an issue when running commands in the future.
Sorry for the long read, but these are all the ideas we have thought of so far, and I am keen to gain some insight from the community, to make sure we are on the right track or if there is another approach to consider.
 
Thank you in advance for your help.
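The second idea (a read-side relationship view plus a saga that cleans up orders when a product is deleted) can be sketched in-memory. Everything here is illustrative, not tied to any specific framework: the event names (`ProductAddedToOrder`, `ProductDeleted`), the command name (`RemoveProductFromOrder`), and the `Map`-based view are all assumptions for the sketch:

```javascript
// In-memory sketch: a relationship view is built from events, and a
// saga reacts to a product deletion by issuing one clean-up command
// per affected order.
const relationshipView = new Map(); // productId -> Set of orderIds
const issuedCommands = [];

function onProductAddedToOrder({ orderId, productId }) {
  if (!relationshipView.has(productId)) relationshipView.set(productId, new Set());
  relationshipView.get(productId).add(orderId);
}

function onProductDeleted({ productId }) {
  // Saga: look up affected orders in the view, trigger one command each.
  const orders = relationshipView.get(productId) || new Set();
  for (const orderId of orders) {
    issuedCommands.push({ type: 'RemoveProductFromOrder', orderId, productId });
  }
  relationshipView.delete(productId);
}

onProductAddedToOrder({ orderId: 'order-1', productId: 'prod-9' });
onProductAddedToOrder({ orderId: 'order-2', productId: 'prod-9' });
onProductDeleted({ productId: 'prod-9' });
console.log(issuedCommands.length); // one clean-up command per affected order
```

Note that the orders only become consistent after the commands are processed, so this is still eventual consistency; the saga just bounds how long the dangling reference survives.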
Event sourcing suits human, and specifically human-paced, processes very well. It helps a lot to imagine that every event in an event-sourced system is delivered by some clerk, printed on a sheet of paper. Then it becomes much easier to figure out a suitable solution.
What's the purpose of an order? So that your back-office personnel can secure the necessary units at a warehouse; then the customer makes a payment and you start the shipping process.
So, I guess, after an order is placed, some back-office system can process it and confirm that it can be taken into work and invoicing. Or it can return the order with a remark that certain lines are no longer available, so that the customer can agree to the reduced order or pick other options.
Another option: since the probability of a customer ordering a discontinued item is low, simply skip this check. If it still surfaces at shipping time, issue a refund and a coupon for the inconvenience. Why is the probability low? Because the goods are added from an online catalogue, which reflects the current state, and the availability check can be done on the 'Submit' button click. So an inconsistency can only occur if an item is discontinued in the same minute (or second) the order is submitted. And usually the actual decision to discontinue is made well before the information is updated in the Product service, due to some external reasons.
Hence, I suggest using eventual consistency: an event-sourced entity should only be responsible for its own consistency and not try to fulfil someone else's responsibility.

Tracking tire mileage in MAXIMO as measurements on meter readings

I am helping on a project and we are managing a tire warehouse in MAXIMO. That was OK, but now our business guys want us to track mileage for these tires. As these are stock parts, I do not understand how we can manage these and capture mileage for each tire.
A rotating item is a serialized asset, such as a pump or a tire, that you define with a common item number. You designate an item as rotating because it shares properties of both items and assets. A rotating item can have an inventory value, metered mileage and an issue cost.
A rotating item is an inventory item with a generic item number, a current balance, and multiple instances that can be used in various locations around a plant with individual asset numbers.
A rotating item cannot be consumed and is maintained as an asset. After creating an item and adding it to a storeroom, you can either use the Assets application to create the asset record for the item you want to track, or create a purchase order for the rotating item and serialize it when you receive it.
When you associate an asset with a rotating item, balances can be displayed and tracked for the item. A rotating item is tracked both by its item number in Inventory records and by its asset number in Assets records. An item cannot be both a spare part and a rotating item.
Avoid using rotating assets/items. They are too complicated to use and very difficult to train on. Many people recommend them as a solution, but in practice none of the customers I've worked with like them. Eventually they learn it, but the workflow is completely different from issues and returns. Wait until you have to move an asset from a storeroom to a location, or vice versa.
You can use item condition codes if you want to track tires and record what percentage of tread is left: https://www.ibm.com/support/pages/understanding-condition-codes

How do I paginate a Stream ranked feed?

I was pretty deep into integrating Stream into my existing pagination implementation (which is also used for paginating non-activity data stored in MySQL) when I came across this line in the Stream documentation under "Custom Ranking":
Please note: offset and id_lt cannot be used to read ranked feeds. Use score_lt for pagination instead.
This seems to be the only mention of score_lt in the docs. I can't find it discussed anywhere else, nor can I find an example of what its value should be. Should it be the same UUID I would use for id_lt if I were paginating a non-ranked feed? Or is it meant to be a score value of some kind that would be returned only by a ranked feed?
Normally I'd just try it and see, but ranked feeds are only available to paid plans and I'm still evaluating Stream.
This could have significant implications for how I implement pagination though, since I do want to be able to use ranked feeds in the future if I move forward with Stream.
When retrieving activities from a ranked feed using a specific ranking config, each activity will include a score attribute. You can use the score_lt to paginate through the items in the ranked feed (along with the limit parameter).
(When paginating through items on non-ranked feeds, we usually recommend using the id_lt parameter, which will just return activities by creation date, in chronological order from most-recent to least-recent. However, since older content in a ranked feed might be ranked higher than newer content, we have to paginate and order via the score attribute.)
--
Whenever you create a ranked feed, you'll create at least one ranked feed config. I'm going to name my ranked feed config ranked-feed-config-one (you can have as many as you'd like) which will look something like this:
{
  "score": "decay_linear(time) * popularity ^ 0.5",
  "defaults": {
    "popularity": 1
  }
}
Whenever you send a new activity into Stream, you'll also provide an optional popularity parameter. (If you don't provide one, popularity will default to 1.)
Then, whenever you retrieve activities from the ranked feed, you can specify what ranking config you'd like to use (ranked-feed-config-one), like this:
someFeed.get({ ranking: 'ranked-feed-config-one' })
Each activity will be returned with (and ordered by) a score attribute. You'll save the last score attribute, and use that when supplying the score_lt parameter for future pagination calls.
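Here is a local sketch of those score_lt semantics, assuming the server returns activities ordered by score descending and that score_lt means "strictly less than" (the `getPage` helper is a hypothetical stand-in for `someFeed.get({ ranking: ..., limit, score_lt })`):

```javascript
// Simulate score_lt pagination: each page asks for activities whose
// score is strictly below the last score seen on the previous page.
function getPage(activities, { limit, score_lt = Infinity }) {
  return activities
    .filter((a) => a.score < score_lt)
    .sort((x, y) => y.score - x.score)
    .slice(0, limit);
}

const activities = [
  { id: 'a', score: 9.1 },
  { id: 'b', score: 7.4 },
  { id: 'c', score: 5.0 },
  { id: 'd', score: 2.2 },
];

const page1 = getPage(activities, { limit: 2 });
const lastScore = page1[page1.length - 1].score; // save for the next call
const page2 = getPage(activities, { limit: 2, score_lt: lastScore });
console.log(page2.map((a) => a.id)); // ['c', 'd']
```

So, to answer the original question: score_lt takes the numeric score value returned by the ranked feed, not the UUID you would pass to id_lt.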
--
Hopefully that helps clear things up! Let me know if there's anything else I can help answer for you.
You can use Limit & Offset Pagination.
someFeed.get({ limit: 20, offset: 20 })

Paging in Azure search when results have equal scores

I'm using Azure Search on my e-commerce site, and now I've hit a problem with paging on my search page. When I reload the search page I can get a different order of products, so when I page through results I can see the same products on different pages, and this is critical.
I started researching what's going wrong, and I found this info in the Microsoft docs: https://learn.microsoft.com/en-us/rest/api/searchservice/add-scoring-profiles-to-a-search-index#what-is-default-scoring
Search score values can be repeated throughout a result set. For example, you might have 10 items with a score of 1.2, 20 items with a score of 1.0, and 20 items with a score of 0.5. When multiple hits have the same search score, the ordering of same-scored items is not defined, and is not stable. Run the query again, and you might see items shift position. Given two items with an identical score, there is no guarantee which one appears first.
So, if I've got it correctly, I face this issue because the products have the same score.
How do I fix this?
You got it correctly! Because the products you are getting have the same score, there is no guarantee which one appears first.
To avoid this, you can add a field with unique values to your $orderby parameter; that way you guarantee a stable order. However, this approach doesn't take scoring into account. We are currently working on a solution to this problem and will update this answer once it is available (the ETA at this point is weeks, not months).
Please note that you can now use search.score() function to order by score:
From the link below:
https://learn.microsoft.com/en-us/rest/api/searchservice/odata-expression-syntax-for-azure-search.
"You can specify multiple sort criteria. The order of expressions determines the final sort order. For example, to sort descending by score, followed by rating, the syntax would be $orderby=search.score() desc,rating desc."
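Putting the two answers together: sorting by score first and then by a field with unique values gives a deterministic order even when scores tie. For example (the `productId` field name here is hypothetical; use whichever unique, sortable field your index has):

```
$orderby=search.score() desc,productId asc
```

Items with equal scores are then consistently ordered by productId, so the same product can no longer appear on two different pages.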

Solr Merge results

I use Solr for product filtering on our website,
for example, you can have a product filter where you can filter a database of televisions by size, price, company, etc. I found Solr + FilterQuery to be very efficient for such functionality. I have a separate core that has the product info for all TVs in our DB.
I have another Core for product reviews. The review can be on a specific product type or company. So someone can write a review on a Samsung TV or Samsung customer service. So when someone searches for a text (for example "Samsung TV review" or "Samsung customer service"), I search this core.
Now I want to merge the results from the above cores. So when someone searches for 'samsung 46 lcd contrast ratio review', I essentially want to filter the TVs by company (Samsung), then by size (46"), and then find reviews that contain the text "contrast ratio review". I have no clue how to do this. Basically, I want to merge the results by document ID and add additional columns from result 2 into result 1.
I have seen suggestions to flatten the data, but I want to use the reviews index with a lot of other filters, so I am not sure if that's a good idea. Moreover, as new reviews come in, I don't want to reindex all the product cores (even delta reindexing would touch a lot of products).
Any ideas on how to achieve this?
If I understood your question correctly, what you are looking for is JOIN functionality:
http://www.slideshare.net/lucenerevolution/grouping-and-joining-in-lucenesolr
http://wiki.apache.org/solr/Join
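A sketch of what that join might look like, assuming the products core is the one being queried, the reviews core is named reviews, and the linking fields (a review's productId pointing at a product's id) are hypothetical:

```
q={!join from=productId to=id fromIndex=reviews}text:"contrast ratio"
fq=company:Samsung
fq=size:46
```

One caveat: Solr's join query parser acts as a filter, so this returns the matching products but does not merge the review fields into the product documents; to display review text alongside products you would still need a second query (or grouping), as the linked resources discuss.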
