When a notification feed unfollows a flat feed, are activities removed? - getstream-io

The title pretty much says it all, but I'll repeat in the body with more detail.
When notification feed notification:user1 follows flat feed posts:user2, activities are copied from posts:user2 to notification:user1. The number of activities to copy can optionally be specified by passing an activityCopyLimit integer.
However, when a feed unfollows another feed, there is no similar option to control this behavior. The documentation simply states:
Existing activities in the feed coming from the target feed will be purged (asynchronously)
So my question is: is this also the case when it comes to notification feeds?
Whether it is or not, the option to not purge activities would be very useful. Just because a user no longer needs to receive activities from a given feed doesn't necessarily mean that the history of what has been received should disappear.
Thanks much.

It is possible to keep the history when unfollowing feeds by using the keep_history parameter. This feature is not yet available on all official clients, but it is described in the REST API documentation. The parameter needs to be provided as part of the query parameters and must have the value true or 1. If your client does not support it yet, you should open a ticket on its GitHub repository.
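If your client doesn't expose it yet, the parameter can be passed straight to the REST API on the unfollow request. Here's a minimal sketch in PHP with cURL; the host, endpoint path and auth headers are assumptions based on the REST documentation layout, so verify the exact values for your app:

```php
<?php
// Rough sketch only: host, path and auth scheme below are assumptions --
// check the REST docs for the exact endpoint and token type for your app.
$apiKey   = 'YOUR_API_KEY';
$jwtToken = 'SERVER_SIDE_JWT';   // a server-side feed token

$endpoint = 'https://api.stream-io-api.com/api/v1.0'
          . '/feed/notification/user1/following/posts:user2/'
          . '?api_key=' . $apiKey
          . '&keep_history=true';   // keep_history=1 also works per the docs

$ch = curl_init($endpoint);
curl_setopt_array($ch, [
    CURLOPT_CUSTOMREQUEST  => 'DELETE',
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: ' . $jwtToken,
        'Stream-Auth-Type: jwt',
    ],
]);
$response = curl_exec($ch);
curl_close($ch);
```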

This is currently not possible; the feed will always be purged. I do understand your use case and we will consider adding this feature to our roadmap.

Related

getstream.io How do I handle activity permissions?

If a user creates a new activity and wants all their followers to see it except 1, how can this be implemented? Do we simply push the activity, and then immediately delete it from the specific follower's timeline feed? This seems like a hack.
https://github.com/GetStream/stream-js/issues/210
This use case hasn't come up before. Why would someone want everyone except one person to see a post? Do they want that person to unfollow them? Are there "rings" or levels of people to choose from when posting? If that's the case, you can create separate feeds with follows to them for those levels (and will likely need to use the TO field as well, since fanout only goes one level deep).
There's no built-in mechanism to specify which feeds to fan out to or which not to. The fanout is intended to happen as fast as possible (milliseconds), so doing those kinds of checks wouldn't be optimal. Your solution to quickly delete from that feed will work.
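If you go the "rings"/levels route, the activity's to field is how you target extra feeds directly, since fanout only goes one level deep. A hedged sketch using the stream-php client; the class and method names are assumed from that client's README, so adapt them to whichever client you actually use:

```php
<?php
require 'vendor/autoload.php';

use GetStream\Stream\Client;

// Assumed client bootstrap; see the stream-php README for the exact API.
$client = new Client('YOUR_API_KEY', 'YOUR_API_SECRET');

// Post to a "close friends" ring feed instead of the main user feed, and
// use the "to" field to deliver copies to specific additional feeds.
$ring = $client->feed('user_close_friends', 'jack');
$ring->addActivity([
    'actor'      => 'user:jack',
    'verb'       => 'post',
    'object'     => 'post:42',
    'foreign_id' => 'post:42',
    'to'         => ['notification:jill'],   // extra target feeds
]);
```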

Inferring the user's intention from the event stream in an event store. Is this even a correct thing to do?

We are using an event store that stores a single aggregate - a user's order (imagine an Amazon order that can be updated at any moment by both a client or someone in the e-commerce company before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that form the aggregate root don't really show the intent or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user that initially had one copy of book X and then removed it and added 2 again. Should we consider this as an event "User added 1 book" or events "User removed 1 book" + "User added 2 books" (we seem to have followed this approach)?
In some cases we have one initial event that is then followed by other events. I, the developer, know for sure that all these events were triggered by a single command, but it seems incredibly brittle to make that kind of assumption when generating this "order history" view on the fly for the user. Yet if I don't treat them as a single action, at least in the order history feature, it will look like there were lots of order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro" events inside? Should I just attach the command's id to each event so I can easily infer which events happened at the same time and which ones didn't (an alternative would be relying on timestamps... but that's disgusting)?
What's the standard approach to dealing with this kind of situation? I would like to be able to look at the aggregate's history at any time and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should mean it's possible to create event names that make the original intent clear. As a rule of thumb, the events in the event stream should be understandable to the relevant members of the business. It should contain events like 'cartUpdated', etc.
Given the above, I would have expected that showing the event stream should be fine. But I totally get why it may not be ideal in some circumstances, i.e. it may be too detailed. In which case, maybe create a 'summariser' read model fed by the events.
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This then makes it easier to build the order history projection. Alternatively, you could just use the event timestamps to correlate batches in whatever way you want (perhaps you might only want one entry even for multiple commands, if they happened in a short window).
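For illustration, an event envelope along those lines might carry the command and correlation IDs as metadata. This is only a sketch; the field names are just one possible convention, not a prescribed schema:

```php
<?php
// Sketch of an event envelope that records which command produced it.
// Field names are illustrative, not a prescribed schema.
$event = [
    'type'         => 'BookCopiesAdded',
    'aggregate_id' => 'order-1234',
    'payload'      => ['book_id' => 'X', 'quantity' => 2],
    'metadata'     => [
        'command_id'     => 'cmd-9f2a',    // groups events raised by one command
        'correlation_id' => 'corr-77b1',   // ties together a longer process
        'recorded_at'    => '2016-05-04T10:21:07Z',
    ],
];

// The order-history projection can then group events by command_id, so one
// big amendment shows up as a single entry rather than many micro-events.
```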
Events (past tense) do not always capture human - or system - user intent. Commands (imperative mood) do. Since command data cannot always be easily retraced from the events it generated, keeping a structured log of commands looks like a good option here.

Instagram API media/popular

What queries can we use with media/popular? Can we localize it by country or geolocation?
Also is there a way to get the discovery feature's results with the api?
This API is no longer supported.
Ref : https://www.instagram.com/developer/endpoints/media/
I was recently struggling with the same problem and came to the conclusion that there is no way other than the hard one.
If you want location-based popular images, you must go with the locations endpoint.
https://api.instagram.com/v1/locations/214413140/media/recent
This endpoint returns recent media for a given location, the key being the location ID. Your job is then to follow the simple pagination API and merge the response arrays into one big JSON collection. The $response['pagination']['next_max_id'] parameter drives pagination, so you simply send each subsequent request with the max_id from the previous response.
https://api.instagram.com/v1/locations/214413140/media/recent?max_id=1093665959941411696
The end result will depend on how much information you gather. At the end you just need to sort the merged array by like count and you're ready to do whatever you were going to do.
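A rough sketch of that loop in PHP (it assumes the old v1 response shape with data, pagination.next_max_id and likes.count; the endpoint itself is deprecated, so treat this purely as an illustration):

```php
<?php
// Illustration only: the v1 locations endpoint shown above is deprecated.
$accessToken = 'YOUR_ACCESS_TOKEN';
$locationId  = '214413140';
$base = "https://api.instagram.com/v1/locations/{$locationId}/media/recent"
      . "?access_token={$accessToken}";

$media = [];
$maxId = null;

do {
    $url      = $maxId ? $base . '&max_id=' . $maxId : $base;
    $response = json_decode(file_get_contents($url), true);

    // Merge each page's items into one big array.
    $media = array_merge($media, $response['data'] ?? []);

    // pagination.next_max_id drives the next request, when present.
    $maxId = $response['pagination']['next_max_id'] ?? null;
} while ($maxId !== null);

// Sort the merged result by like count, most liked first.
usort($media, function ($a, $b) {
    return $b['likes']['count'] <=> $a['likes']['count'];
});
```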
Of course, an important part is to save the results locally rather than regenerating them every time a user opens the page, both because of the generation time and because of the limited number of requests per hour.
Hopefully someone will come up with a better solution, or the Instagram API will finally support media/popular by location.

CQRS aggregates

I'm new to the CQRS/ES world and I have a question. I'm working on an invoicing web application which uses event sourcing and CQRS.
My question is this - to my understanding, a new command coming into the system (let's say ChangeLineItemPrice) should pass through the domain model so it can be validated as a legal command (for example, to check that this line item actually exists, that the price doesn't violate any business rules, etc.). If all goes well (the command is not rejected), then the appropriate event is created and stored (for example LineItemPriceChanged).
The thing I don't quite get is how I keep this aggregate in memory to begin with, before trying to apply the command. If I have a million invoices in the system, should I play back the whole history every time I want to apply a command? Or do I always save the event without any validation and do the validation when constructing the view models / projections?
If I misunderstood any part of the process I would appreciate your feedback.
Thanks for your help!
You are not alone, this is a common misunderstanding. Let me answer the validation part first:
There are two types of validation which take place in this kind of system. The first is the kind where you look for valid email addresses, numeric-only or required fields. This type is done before the command is even issued. Commands containing these sorts of problems should not be raised at all (for belt and braces you can also check on the domain side, but this is not a domain concern and you are better off just preventing this scenario).
The next type of validation is a domain concern. It could be the kind of thing you mention, where you check that prices are within a set of specified parameters. This is a domain concept the business people would understand and be able to articulate.
The next phase is for the domain to apply the state change and raise the associated events. These are then persisted and on success, published for the rest of the app.
All of this can be done with the aggregate in memory. The actions are coordinated by a domain service which handles the command. It loads the aggregate, applies all its past events (or loads a snapshot), then issues the command. On success of the command it requests all the new uncommitted events and tries to persist them. On success it publishes the new events.
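Sketched in code, that coordination might look roughly like this (all class and method names are invented for illustration, not any particular framework's API):

```php
<?php
// Illustrative only: load, rehydrate, decide, persist, publish.
final class ChangeLineItemPriceHandler
{
    public function __construct(
        private EventStore $eventStore,     // assumed persistence interface
        private EventPublisher $publisher   // assumed publishing interface
    ) {}

    public function handle(ChangeLineItemPrice $command): void
    {
        // 1. Load only this invoice's past events (or start from a snapshot).
        $history = $this->eventStore->loadStream($command->invoiceId);
        $invoice = Invoice::replay($history);

        // 2. Ask the aggregate to perform the change; domain rules are
        //    enforced here, and a rejected command raises an exception.
        $invoice->changeLineItemPrice($command->lineItemId, $command->newPrice);

        // 3. Persist the new uncommitted events, then publish them.
        $newEvents = $invoice->pullUncommittedEvents();
        $this->eventStore->append($command->invoiceId, $newEvents);
        $this->publisher->publish($newEvents);
    }
}
```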
As you can see, it only loads the events for that specific aggregate. Even with a lot of events, this process is lightning fast. If performance becomes a problem, there are strategies you can apply, such as keeping aggregates in memory or snapshotting.
To your last point about validating events: as they can only be generated by your aggregate, they are trustworthy.
If you want more detail check out my overview of CQRS and ES here. And take a look at my post about how to build aggregate roots here.
Good luck - I hope they help!
It is right that you have to replay the events to 'rehydrate' the domain aggregate. But you don't have to replay all events for all invoices. If you store the entity id of the aggregate root in the events, you can just select and replay the events with the relevant id.
Then, how do you find the relevant aggregate root id? One of the read repositories should contain the relevant information to get the id, based on a set of search criteria.
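With a relational event store, that selection can be as simple as filtering the events table by aggregate id. The table and column names below are assumptions for illustration, as are the Order aggregate and the $orderId obtained from a read model:

```php
<?php
// Assumed schema: events(aggregate_id, version, type, payload).
$pdo = new PDO('mysql:host=localhost;dbname=eventstore', 'user', 'secret');

// $orderId would come from a read model lookup, e.g. by order number.
$stmt = $pdo->prepare(
    'SELECT type, payload FROM events WHERE aggregate_id = :id ORDER BY version'
);
$stmt->execute([':id' => $orderId]);

$order = new Order();   // empty aggregate to rehydrate
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $event) {
    $order->apply($event['type'], json_decode($event['payload'], true));
}
```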

Wordpress Hooks For Responding to Post Content

This is a followup to my question about programmatically adding wordpress categories based on post content.
Which hook (or hooks) is most appropriate for use in a function or plugin that requires access to the content of a post and seeks to run code at the time that post is committed to the database?
The former question led to the suggestion of using the edit_post hook. However my reading makes me wonder if I should use publish_post or save_post instead, or, indeed, if there is an even better option out there that I am not considering.
What, exactly, is the difference between these three hooks? If I want something to run at the time a post is made AND at the time any edits are made, is there one of these that encompasses both events, or do I need to tie in to multiple hooks?
save_post is the most reliable single hook for what you're looking for.
publish_post does not fire if you save a post as a draft or schedule it to be published later.
edit_post is not fired when a new post is created. However, edit_post is fired at many other times, such as when a comment on the post is created or edited.
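A minimal sketch of hooking save_post so the code runs both when a post is first made and whenever it is edited; the autosave/revision guards are the usual boilerplate, and the category logic is purely illustrative:

```php
<?php
// save_post fires for both new posts and edits, with ($post_id, $post, $update).
add_action('save_post', function ($post_id, $post, $update) {
    // Skip autosaves and revisions so the code only runs on real saves.
    if (wp_is_post_autosave($post_id) || wp_is_post_revision($post_id)) {
        return;
    }

    // $post->post_content is the content being committed to the database.
    $content = $post->post_content;

    // Illustrative example: assign a category based on the content.
    if (stripos($content, 'recipe') !== false) {
        wp_set_post_categories($post_id, [get_cat_ID('Recipes')], true);
    }
}, 10, 3);
```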
