I'm using the server-side Segment Ruby SDK to track events from the backend; those events, along with their custom properties, end up in Amplitude, where they are analyzed to produce product metrics. I noticed that events tracked from the frontend using the Segment JavaScript SDK all have properties like session_id, os_name, device_brand, etc. automatically collected and sent to Amplitude as part of the event, but the backend tracking calls don't have any of those properties collected automatically. So I am wondering: what do I need to do so that Amplitude can auto-collect session_id, os_name, device_brand, etc. for Segment backend (Ruby) tracking calls?
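For what it's worth, Segment's server-side libraries don't collect any device context on their own; those fields only show up if you pass them yourself on each call, and Amplitude's session_id is passed as a destination-specific option. A minimal sketch of what that looks like, using Segment's Node library purely for illustration (the Ruby SDK accepts analogous context:/integrations: options); every value below is a placeholder you would capture from the original client request:

import Analytics from 'analytics-node';

const analytics = new Analytics('YOUR_WRITE_KEY'); // placeholder write key

analytics.track({
  userId: 'user-123',
  event: 'Order Completed',
  properties: { revenue: 42 },
  // Server-side calls carry no device context unless you supply it yourself,
  // e.g. from headers or fields your frontend forwarded with the request.
  context: {
    os: { name: 'iOS', version: '14.4' },
    device: { manufacturer: 'Apple', model: 'iPhone12,1' },
  },
  // Amplitude-specific: session_id is passed as a destination option.
  integrations: {
    Amplitude: { session_id: 1617870588 },
  },
});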
I am trying to design a real-time in-app chat for a social application using PubNub. I found that the best architecture for one-to-one chat with PubNub is detailed in this article: http://pubnub.github.io/pubnub-design-patterns/2015/03/05/Inbound-Channel-Pattern.html
Now my next problem: I have to display the list of users in the chat window. How can I sort this list so that the users who sent messages most recently are at the top and the ones who have not interacted for a long time are at the bottom? If I start fetching messages from the inbound channel, I will have to traverse the inbound channel back to the beginning every time a user logs in; this is a resource-expensive call and is also not feasible with a large user base and heavy message volumes.
I will also be using PAM to control authorization of users to read / write on channels.
That is indeed a great blog entry!
If you are in hybrid mode, i.e. you already use a replicated channel to feed History anyway, then I would use that same channel: intercept its content with a Function and simply store, in the channel's Object, the list of the latest visitors, ordered with the most recent first; you can even add any extra info you would like to it. Then, at any time, a user can read those Object values (stored earlier) for the "hybrid channel" with a REST call to PubNub and receive a list that is always up to date. This has an advantage: if you do not want to retrieve messages until the user taps one of the contacts in the contact list, to avoid pre-loading, then you load no messages for any channel (except perhaps the first one); and it is always less to load this list than all messages from all of the channels from History, and it is available before anything is fetched, so it is the fastest option.
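A minimal sketch of the interception part. The answer mentions channel Objects; here I sketch the same idea with the PubNub Functions kvstore module instead, which is the simpler mechanism, in a "Before Publish" Function on the replicated channel. The key scheme, the sender field on the message, and the list cap are my own assumptions:

export default (request) => {
  const kvstore = require('kvstore'); // PubNub Functions built-in key/value store
  const key = 'latest-visitors.' + request.channels[0]; // assumed key naming scheme
  const sender = request.message.sender; // assumes your messages carry a sender field

  return kvstore.get(key).then((visitors) => {
    const list = (visitors || []).filter((v) => v.uuid !== sender);
    list.unshift({ uuid: sender, lastMessageAt: Date.now() }); // most recent first
    return kvstore.set(key, list.slice(0, 100)) // cap the list size
      .then(() => request.ok()); // let the message through unchanged
  });
};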
I am trying to implement transaction notifications to a GA property with Enhanced E-Commerce enabled, from a Stripe events handler written in nodejs.
My hits look like so:
v=1&tid=UA-12345678-1&uid=193p6r3o6g1i203d325g181r645bfu1m6ph6&dl=https%3A%2F%2Fdev.example.com%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dh=dev.example.com&dp=%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dt=Stripe%20-%20Payment%20Succeed&ua=events-server%2F1.0.0%20(Linux%3B%20Backend%3B%20Service%2Fstorage-api%3B%20Service-version%2Ftests)&t=pageview&qt=610
v=1&tid=UA-12345678-1&uid=193p6r3o6g1i203d325g181r645bfu1m6ph6&dl=https%3A%2F%2Fdev.example.com%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dh=dev.example.com&dp=%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dt=Stripe%20-%20Payment%20Succeed&ua=events-server%2F1.0.0%20(Linux%3B%20Backend%3B%20Service%2Fstorage-api%3B%20Service-version%2Ftests)&t=transaction&ti=transaction-1IdHQpKlqMBPMjkvcHFWTU9t&tr=359.4&qt=610
v=1&tid=UA-12345678-1&uid=193p6r3o6g1i203d325g181r645bfu1m6ph6&dl=https%3A%2F%2Fdev.example.com%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dh=dev.example.com&dp=%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dt=Stripe%20-%20Payment%20Succeed&ua=events-server%2F1.0.0%20(Linux%3B%20Backend%3B%20Service%2Fstorage-api%3B%20Service-version%2Ftests)&t=item&ti=transaction-1IdHQpKlqMBPMjkvcHFWTU9t&ic=lifetime-access&in=example%20Perpertual%20License&ip=599&iq=1&iv=permanent-license&qt=610
v=1&tid=UA-12345678-1&uid=193p6r3o6g1i203d325g181r645bfu1m6ph6&dl=https%3A%2F%2Fdev.example.com%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dh=dev.example.com&dp=%2Fvirtual-pages%2Fstripe%2Fpayment-succeed&dt=Stripe%20-%20Payment%20Succeed&ua=events-server%2F1.0.0%20(Linux%3B%20Backend%3B%20Service%2Fstorage-api%3B%20Service-version%2Ftests)&ec=E-Commerce&ea=Purchase&el=lifetime-access%2Fpermanent-license&ev=360&t=event&qt=611&ti=transaction-1IdHQpKlqMBPMjkvcHFWTU9t&tr=359.4&tcc=40DISCO&pa=purchase&pr1id=lifetime-access&pr1nm=example%20Perpertual%20License&pr1br=example&pr1va=permanent-license&pr1pr=599&pr1qt=1
I replaced some values with lexically similar ones, for security reasons of course.
I tried all these variants:
no pageview, transaction and item hits
no pageview, event with products list, transaction details, and pa=purchase
pageview, transaction and item hits
pageview, transaction and item hits, and transaction details and pa=purchase
I am somewhat in despair about this, because I feel something simple is being overlooked: when I use Enhanced E-Commerce events, the events themselves arrive in the Realtime tab, but they do not show the e-commerce properties there, of course.
Basic and enhanced e-commerce features are enabled in the view settings.
No transactions are shown in the reports at all, even from the tests I ran 2-3 days ago with various combinations of hits.
I checked my hits with the debug endpoint (/debug/collect); it claims they are valid, and no warnings are shown.
If you are looking at the data in a normal view (not a User-ID view), you will not see them, because the request contains only the uid as an identifier, while you also need the cid (clientId).
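A sketch of the fix in the Node handler, assuming Node 18+ for the built-in fetch. The UUID here is a placeholder; ideally you would persist and reuse the visitor's real clientId (from the _ga cookie) instead of generating a fresh one per hit:

import { randomUUID } from 'crypto';

const params = new URLSearchParams({
  v: '1',
  tid: 'UA-12345678-1',
  cid: randomUUID(), // required for standard (non User-ID) views; placeholder value
  uid: '193p6r3o6g1i203d325g181r645bfu1m6ph6', // keeps User-ID views working
  t: 'transaction',
  ti: 'transaction-1IdHQpKlqMBPMjkvcHFWTU9t',
  tr: '359.4',
});

await fetch('https://www.google-analytics.com/collect', {
  method: 'POST',
  body: params.toString(),
});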
I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of documents (with joins) from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc., after which the read model is updated by projection). At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
The operation succeeds, data has changed and the UI should be updated to reflect these changes.
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Until the projection has completed, I can't fetch the inserted/updated data and push it back to the UI over real-time sockets. One solution is to send back the state of the aggregate along with its aggregateId, but when the view involves joins, how can I send the full update (data with joins) back to the UI?
You have a few options for updating the UI.
If possible, it's easier if your API synchronously returns success/failure. If you get a failure, you can immediately report to the user, and you don't need to update the UI. If you get a success, you have some options.
Include some entity version information in the success response and some version information in your read model, then poll the query API until the version is new enough (see the polling sketch after this list).
Include in the success response some entity version information. Have your query API allow you to specify that you want the data as-of at least that version (in a header, query parameter, or whatever). Either you can have the query API immediately return failure if it isn't yet up to date, or you can have it block until it is up to date (with a timeout) then return the up-to-date data.
Use some kind of client notification system such as web sockets to notify the client whenever the read model it is 'subscribed' to changes. Either the notification can include the information the client needs to update, or it can make another query.
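A sketch of the polling variant from the first option, with invented endpoint names and response shapes:

// Hypothetical shapes: the command API returns the entity's new version,
// and the read model exposes the version it has projected so far.
interface ItemView { id: string; version: number; name: string; }

async function refreshWhenCurrent(itemId: string, expectedVersion: number): Promise<ItemView> {
  for (let attempt = 0; attempt < 20; attempt++) {
    const res = await fetch(`/api/items/${itemId}`); // assumed query endpoint
    const view: ItemView = await res.json();
    if (view.version >= expectedVersion) {
      return view; // projection has caught up; safe to render
    }
    await new Promise((resolve) => setTimeout(resolve, 250)); // brief backoff
  }
  throw new Error('read model did not catch up in time');
}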
On top of those options, you can also employ the technique of optimistic updates (sketched after the two cases below). Here, after making the appropriate client-side checks before issuing a command, you immediately assume the command succeeded (if it's the kind of command that usually does succeed) and update the UI right away using client-side code. Then:
If the command comes back with failure, undo the client-side update (you could do this by re-fetching the read model, or using client-side code).
If the command succeeds, consider re-fetching the read model, if there's any likelihood that the client-side optimistic update isn't quite right. This should use one of the strategies listed above to ensure the command's effect is present in the read model.
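A sketch of that optimistic flow, with a declared (hypothetical) command transport and notification helper, and a plain Map standing in for the client-side read model:

// Hypothetical command transport and notification helper.
declare function sendCommand(command: { type: string; id: string; name: string }): Promise<void>;
declare function notifyUser(message: string): void;

async function renameItem(cache: Map<string, { name: string }>, id: string, newName: string) {
  const previous = cache.get(id);
  cache.set(id, { name: newName }); // optimistic: update the local read model immediately
  try {
    await sendCommand({ type: 'RenameItem', id, name: newName });
    // on success, optionally re-fetch the read model to correct any drift
  } catch {
    // on failure, undo the client-side update and inform the user
    if (previous) cache.set(id, previous); else cache.delete(id);
    notifyUser('Rename failed; your change was rolled back');
  }
}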
This also works quite well with offline support - you queue commands up persistently in the client, and optimistically assume they succeed (perhaps with a UI indication that you are offline and data is stale) using client-side code to update the read model, until you can actually send the commands through to the server and re-fetch the server-generated read model.
I have an app that
- initiates checkins via the API
- receives checkin data via the push mechanism
The JSON object returned by an API call contains a source parameter denoting the app. [This actually seems unnecessary, since my app is initiating the API call...]
The corresponding real-time push response sent to my app does NOT have the source parameter included. Why is that?
I'm attempting to filter out the push data related to my app's checkins, and the easiest way would be to inspect source parameter. [I could also inspect the checkin ids, and watch for duplicates in the two paths; but that seems unnecessary if the source parameter was always included.]
Unfortunately, that particular field isn't currently passed along in our Push API. What exactly are you trying to do?
If you're looking to get real-time notifications about your own app's check-ins, it seems like you don't really need foursquare's push API. You could just have your app send the info you want up to your own servers at the same time as (or immediately after) you check in the user on foursquare, so that you still get real-time info.
If that doesn't work for you, if the user has authorized your application (which will be the case if you're using the User Push API), you can query our check-in detail endpoint (https://developer.foursquare.com/docs/checkins/checkins) to get that info, and filter away the check-ins you're not interested in.
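A sketch of that second approach; the endpoint is the one linked above, but the response field path and the app name are assumptions you should verify against the docs:

// Assumed: the name under which your app shows up in checkin.source.
const MY_APP_NAME = 'MyCheckinApp'; // placeholder

async function isOwnCheckin(checkinId: string, oauthToken: string): Promise<boolean> {
  const url = `https://api.foursquare.com/v2/checkins/${checkinId}`
    + `?oauth_token=${oauthToken}&v=20130101`; // v= is the API version date
  const response = await fetch(url);
  const body = await response.json();
  // Field path assumed from the check-in detail response shape.
  return body?.response?.checkin?.source?.name === MY_APP_NAME;
}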
I've studied some CQRS sample implementations (Java / .Net) which use event sourcing for the event store and simple (No)SQL stores as the 'report store'.
Looks all good, but I seem to be missing something in all sample implementations.
How do you handle the addition of new report stores/screens after an application has gone into production? And how do you import the existing (latest) data from the event store into the new report store?
For example, imagine a basic DDD/CQRS-driven CRM application.
Every screen (a view, really) has its own structured report store (a SQL table).
All these views are updated by handlers listening to the domain events (CustomerCreated, CustomerHasMoved, etc.).
One feature of the CRM is that it can log phone calls (a PhoneCallLogged event). Due to time constraints, we only implemented the logging of phone calls in V1 of the CRM (viewing and reporting on who handled which phone call will be implemented in V2).
After some time running in production, we want to implement the 'reporting' of logged phone calls per customer and per sales representative.
So we need to add some screens (views) and the supporting report tables (in the report store), and fill them with the data already collected in the Event Store...
That is where I get stuck with the samples I studied: they don't handle the import of existing (historical) data from the event store into a (new) report store.
All the sample EventRepository (DomainRepository) implementations only have 'GetById' and 'Add' methods; they don't support fetching ALL aggregate roots at once to fill a new report table.
Without this initial data import, the new screens are only updated for events that occur from now on, not for the phone calls already logged (because there was no report listener for the PhoneCallLogged event).
Any suggestions or recommendations?
Thanks in advance,
Remco
You re-run the handler on the existing event log (i.e. you play the old events through the new event handler).
Consider your example: you have a ton of PhoneCallLoggedEvents in your event log. Take your new handler and play all the old events through it. It is then as if it had always been running, and it will simply continue to process any new events that arrive.
Cheers,
Greg
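Stripped of any framework, the replay Greg describes is just a loop over stored history; a sketch with invented store and handler interfaces (any real framework has its own equivalents):

// Hypothetical minimal interfaces, just to show the shape of the idea.
interface DomainEvent { type: string; payload: unknown; }
interface EventStore { readAll(): AsyncIterable<DomainEvent>; } // full history, oldest first

async function rebuildProjection(
  store: EventStore,
  handler: (event: DomainEvent) => Promise<void>,
): Promise<void> {
  // Play every historical event through the new handler, in order.
  for await (const event of store.readAll()) {
    await handler(event);
  }
  // After this, the handler subscribes to live events like any other projection.
}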
For example in Axon Framework, this can be done via:
JdbcEventStore eventStore = ...;

// Wrap the cluster that contains the new event handler(s) in a ReplayingCluster.
ReplayingCluster replayingCluster = new ReplayingCluster(
        new SimpleCluster("replaying"),           // the cluster to replay into
        eventStore,                               // source of the historical events
        new NoTransactionManager(),               // no transaction handling in this sketch
        0,                                        // commit threshold
        new BackloggingIncomingMessageHandler()); // backlogs live events during the replay

// Feed the stored events through the handlers, then switch to live processing.
replayingCluster.startReplay();
Event replay is an area that is not completely documented and lacks mature tooling, but here are some starting points:
http://www.axonframework.org/docs/2.4/event-processing.html#d5e1852
https://groups.google.com/forum/#!searchin/axonframework/ReplayingCluster/axonframework/brCxc7Uha7I/Hr4LJpBJIWMJ
The 'EventRepository' only contains those methods because they are all you need in production.
When adding a new denormalization for reporting, you can send all the events, from the very start, to your new handler. You can do this on your development site this way:
Load your event log onto the dev site
Send all events to your denormalization handler
Move your new view + handler to your production site
Run the events that happened in between (see the checkpoint sketch below)
Now you're ready.
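The "run the events that happened in between" step is the only subtle part; one common approach (sketched here with assumed interfaces) is to record a checkpoint where the dev-site replay stopped and resume from it in production:

// Hypothetical: every stored event carries a global, monotonically increasing sequence number.
interface StoredEvent { sequence: number; type: string; payload: unknown; }
interface EventStore { readFrom(sequence: number): AsyncIterable<StoredEvent>; }

async function catchUp(
  store: EventStore,
  checkpoint: number, // last sequence number the dev-site replay processed
  handler: (event: StoredEvent) => Promise<void>,
): Promise<number> {
  // Replay only the events the dev-site replay never saw.
  for await (const event of store.readFrom(checkpoint + 1)) {
    await handler(event);
    checkpoint = event.sequence; // persist this after each event in real code
  }
  return checkpoint; // the handler can now join the live subscription
}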