How to disable metadata tracking in MIKROS Analytics?

How can I disable data tracking for specific analytics without removing the Mikros package? I currently have the Mikros SDK 1.1.0 integrated into my Android game, built with Unity 2021.3.6f1. In my Mikros settings I have Auto Initialize Mikros checked.

The scope of each PRIVACY_LEVEL is as follows:
1. PRIVACY_LEVEL.DEFAULT
a) Track Session : TRUE
b) Track Metadata : TRUE
c) Track Events : TRUE
d) Track Memory : TRUE
e) Track Crash : TRUE
2. PRIVACY_LEVEL.HIGH
a) Track Session : TRUE
b) Track Metadata : FALSE
c) Track Events : TRUE
d) Track Memory : TRUE
e) Track Crash : TRUE
3. PRIVACY_LEVEL.EXTREME
a) Track Session : FALSE
b) Track Metadata : FALSE
c) Track Events : FALSE
d) Track Memory : FALSE
e) Track Crash : FALSE
You can also toggle each type of tracking (Session, Metadata, Events, Memory, Crash) individually, as follows:
// to change only session tracking settings (Optional)
MikrosManager.Instance.ConfigurationController.SetAutoTrackUserSession(true);
// to change only metadata tracking settings (Optional).
MikrosManager.Instance.ConfigurationController.SetAutoTrackUserMetadata(true);
// to change only event logging settings (Optional).
MikrosManager.Instance.ConfigurationController.SetEventLogging(true);
// to change only device memory tracking settings (Optional).
MikrosManager.Instance.ConfigurationController.SetAutoTrackDeviceMemory(true);
// to change only crash reporting settings (Optional).
MikrosManager.Instance.ConfigurationController.SetAutoCrashReporting(true);

You can disable Mikros metadata analytics by adding this to your script:
MikrosManager.Instance.ClientConfigurationController.SetAutoTrackUserMetadata(false);
If you do add this to your script, make sure you have these using directives at the top (Unity scripts are C#, so use using rather than import):
using MikrosClient;
using MikrosClient.Analytics;
I found this information in the documentation:
ref- https://developer.tatumgames.com/documentation/disable-mikros-analytics
You can also always join the Mikros Slack to communicate with the community and developers here:
ref- https://join.slack.com/t/mikros-community/shared_invite/zt-owl845v6-UMLsx9m8W_8VwSrfvciX8Q

You can disable auto tracking of user metadata, as well as auto tracking of sessions, crash reporting, or even logging of events entirely. This can be done in a couple of ways. You can do so explicitly:
// MIKROS won't track user session information
MikrosManager.Instance.ConfigurationController.SetAutoTrackUserSession(false);
// MIKROS won't track user metadata information, e.g. device info, network info, etc.
MikrosManager.Instance.ConfigurationController.SetAutoTrackUserMetadata(false);
// MIKROS won't record app crashes
MikrosManager.Instance.ConfigurationController.SetAutoCrashReporting(false);
// Even if you call logEvents(), MIKROS will ignore it.
// This is a shutoff valve for all logged events.
MikrosManager.Instance.ConfigurationController.SetEventLogging(false);
// To have MIKROS not track anything you can use
MikrosManager.Instance.ConfigurationController.SetAllTrackingEnabled(false);
Alternatively, you can update the Configuration privacy settings.
Configuration configuration = Configuration.Builder().SetPrivacyLevel(privacyLevel).Create();
MikrosManager.Instance.InitializeMikrosSDK(configuration);
PRIVACY_LEVEL.DEFAULT (Recommended): MIKROS tracks user metadata and session info in the background.
PRIVACY_LEVEL.HIGH: MIKROS no longer tracks any metadata information in the background; only session info is tracked.
PRIVACY_LEVEL.EXTREME: MIKROS no longer tracks any metadata or session info in the background. Integrators will have to track this manually.
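For the original question (keep session, event, memory and crash tracking but stop metadata tracking), a minimal sketch combining the two snippets above might look like this. PRIVACY_LEVEL.HIGH is the level that turns off only metadata tracking per the table above; whether a manual InitializeMikrosSDK call is needed when Auto Initialize Mikros is checked is an assumption you should confirm against the Mikros docs:
// Sketch: build a configuration whose privacy level disables only metadata tracking
Configuration configuration = Configuration.Builder().SetPrivacyLevel(PRIVACY_LEVEL.HIGH).Create();
// Initialize the SDK with that configuration (relevant when auto-initialization is off)
MikrosManager.Instance.InitializeMikrosSDK(configuration);
If Auto Initialize Mikros stays checked, the single SetAutoTrackUserMetadata(false) call shown earlier is the simpler route.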

Related

How to confirm a CleverTap event push

I am using the code below inside my app, but I am not sure whether the events are being pushed to my dashboard, and I am not seeing any errors. How do I debug CleverTap events?
var clevertap = {event:[], profile:[], account:[], onUserLogin:[], notifications:[], privacy:[]};
// replace with the CLEVERTAP_ACCOUNT_ID with the actual ACCOUNT ID value from your Dashboard -> Settings page
clevertap.account.push({"id": "CLEVERTAP_ACCOUNT_ID"});
clevertap.privacy.push({optOut: false}); //set the flag to true, if the user of the device opts out of sharing their data
clevertap.privacy.push({useIP: false}); //set the flag to true, if the user agrees to share their IP data
(function () {
var wzrk = document.createElement('script');
wzrk.type = 'text/javascript';
wzrk.async = true;
wzrk.src = ('https:' == document.location.protocol ? 'https://d2r1yp2w7bby2u.cloudfront.net' : 'http://static.clevertap.com') + '/js/a.js';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(wzrk, s);
})();
I know it is a bit late, but here is my answer:
When I integrated the CleverTap SDK, I used to point the app to a test CleverTap account and validate the events there. The only challenge is that you have to find your profile on the dashboard. That should be easy once you know the CleverTap ID or the profile identity (if you are setting one) by looking at the debug console using ADB (for Android).
Sending events is one thing, but the actual use of that data will be by the people who analyse it on the CleverTap dashboard. Therefore, this method will help you understand:
How people will see that event/data on the dashboard.
Whether profiles have been merged incorrectly (in case of multiple users).
Whether there are any issues with receiving events on the CleverTap side.
Whether the event names and parameters are being received on the dashboard and are correct.
More importantly, another team member (even one who is not a developer) can help you validate events (Step 4) without having to set up anything on their laptop or install a debug build. They can just look it up on the CleverTap dashboard by firing events from an app that points to the CleverTap TEST account!
Apart from this, I recommend using a test account for testing events. It helps you keep test data separate from production data, and you can completely clear the TEST account from the dashboard if the data becomes too much of a mess.
In CleverTap, if the events are pushed successfully, you can see them in the dashboard: Segments -> Find People -> By Identity (enter the identity) -> open the profile, and under Activity you can see all the events tracked in the CleverTap dashboard. That is how you can confirm the push.
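If you want a quick end-to-end check against the TEST account described above, you can queue a test event from the page once the snippet has loaded and then look for it under the profile's Activity view. This is a minimal sketch; the event name and properties are made up for illustration:
// Queue a test event; the Web SDK sends it once a.js has loaded
clevertap.event.push("Debug Test Event", {
  "Source": "manual-debug-check",          // illustrative property
  "Timestamp": new Date().toISOString()    // illustrative property
});
If this event shows up in the dashboard under the profile's Activity, the pipeline from your page to CleverTap is working.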

How do I provide real-time updates in Node.js?

I am working on an e-commerce site. There are times when a product is no longer available, but the user has already added it to their cart or to their saved items. How do I implement a feature such that, if the product has been updated, the user is notified as soon as possible?
I thought about running a cron job that would check whether the product is still available or has recently been updated, but I do not know if that is feasible. I am open to better ideas.
Thanks
What you are trying to achieve falls into the real-time updates category, and technically there is more than one way to achieve it.
The right solution will depend on your application architecture and requirements. In the meantime, I can suggest looking into the Ably SDK for Node.js, which is a good starting point.
Below is a sample implementation where, on the back end, you publish a message when an item's stock runs out:
// create client
var client = new Ably.Realtime('your-api-key');
// get appropriate channel
var channel = client.channels.get('product');
// publish a named message (the name could be the product type in your case), with the quantity as the message payload
channel.publish('some-product-type', 0);
On the subscriber side, which would be your web client, you can subscribe to messages and update your UI accordingly:
// create client using same API key
var client = new Ably.Realtime('your-api-key');
// get product channel
var channel = client.channels.get('product');
// subscribe to messages and update your UI
channel.subscribe(function (message) {
const productName = message.name;
const updatedQuantity = message.data;
// update your UI or perform whatever action
});
I did a live betting app once, and of course live updates were the most important part.
I suggest taking a look at websockets. The idea is pretty straightforward: on the backend you emit an event, say itemGotDisabled, and on the frontend you connect to your websocket and listen for events.
You can create a custom component that handles the logic around websocket events, to keep the code cleaner and more organized, and you can run whatever update logic you want as easily as yourFEWebsocketInstance.onmessage = (event) => {}.
Of course it's not the only way, and I am sure there are packages that implement this in an even easier and more straightforward manner.
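To make that concrete, here is a minimal sketch of the websocket approach, using the ws package on the Node side and the browser's built-in WebSocket client. The event name itemGotDisabled comes from the suggestion above; the port, payload shape and notifyItemDisabled helper are illustrative assumptions:
// server.js (Node) - assumes `npm install ws`
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

// Call this from wherever you detect that a product is no longer available
function notifyItemDisabled(productId) {
  const payload = JSON.stringify({ type: 'itemGotDisabled', productId });
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  });
}

// client.js (browser)
const socket = new WebSocket('ws://localhost:8080');
socket.onmessage = (event) => {
  const message = JSON.parse(event.data);
  if (message.type === 'itemGotDisabled') {
    // Update the cart / saved-items UI for message.productId
  }
};
A library such as Socket.IO wraps the same idea with reconnection and fallbacks, which is likely what the remark about ready-made packages refers to.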

Is it possible to add a new tag to all current registrations efficiently?

I currently have 160k active registered devices on my notification hub. Each one has a set of tags. I have added a new feature in my application and the user can turn notification on/off for this feature. We currently manage the on/off state by registering a tag. We would like to deploy this feature with everyone on by default, which means we would need to add this tag to every registration. Is it possible to do this efficiently? My current solution is taking way too long:
var result = await HubClient.GetAllRegistrationsAsync(currentToken, 100);
foreach(var r in result)
{
//get installationId from tags
var id = ...;
var installation = await HubClient.GetInstallationAsync(id);
installation.Tags.Add("newtag");
await HubClient.CreateOrUpdateInstallationAsync(installation);
}
This is taking way too long and even resulting in QuotaExceededExceptions. Is there a more efficient way of doing this? Is it possible to avoid the GetInstallationAsync call? Possibly getting all Installations directly, instead of going through the Registrations? How about updating the tags through the Registration without going through the Installation?
The short answer to your question is "no".
Does your app register its push token each time the user starts a session? If so, maybe the best approach is to add the tag when each user launches the new version of the app for the first time.
Do you also have a server that keeps track of all the push registrations and user preferences? If so, you could also update your table of user preferences with the new default setting, provided the new feature does not depend on the user having the latest build of the mobile app.
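If you go the route of tagging each device on its next launch, the per-device update can reuse the same Installation APIs from your snippet, just triggered by the client the first time it runs the new version. This is a rough sketch; the method name and how the installationId reaches your server are assumptions:
// Server-side handler the app calls on first launch of the new version
public async Task EnableFeatureTagAsync(string installationId)
{
    // Fetch only this user's installation instead of enumerating all registrations
    var installation = await HubClient.GetInstallationAsync(installationId);
    installation.Tags = installation.Tags ?? new List<string>();
    if (!installation.Tags.Contains("newtag"))
    {
        installation.Tags.Add("newtag");
        await HubClient.CreateOrUpdateInstallationAsync(installation);
    }
}
This spreads the writes out over time as users upgrade, avoiding the burst of calls that triggered the QuotaExceededExceptions.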

redux security - is it possible to access previous state after state reset?

I've got an app that keeps user-specific data in Redux, and that data must be cleared when the user logs out, i.e. the next user who logs into the app in the same browser tab should not be able to access it.
The app fetches user-specific data from an API using Redux middleware, and that data is then stored as part of the app state in Redux. When the user logs out, I dispatch a log-out action to clear the state:
const rootReducer = (state: {}, action: AnyAction) => {
if( action.type === 'LOG_OUT')
return {};
return state;
}
All seems good: the state is reset to an empty object. But my question is, when the user logs out and doesn't close the browser tab, can someone use dev tools or anything else to somehow see the state from before the logout? Does Redux (without additional persist/history middleware/enhancers) store state somewhere out of the box, or is it all purely in browser memory, so that once the state is reset on logout the previous state is gone and no longer accessible?
If there are no remaining references to the previous state object, then the browser's JS engine will garbage-collect the old state. That also implies there would be no variables left for the browser's DevTools to inspect.
The Redux core itself does not retain references to prior state objects.

Is there any way to use our own server for storage of data generated using PUBNUB api? [duplicate]

I'm looking to develop a chat application with PubNub, and I want to make sure all the chat messages that are sent are stored in a database, as well as being delivered in the chat.
I found out that I can use Parse with PubNub to provide storage, but I'm not sure how to set the two up so that the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options to use with PubNub instead of Parse?
Sutha,
What you are seeking is not a trivial solution unless you are talking about a limited number of end users, so I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every chat channel that is active and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how tricky this gets to scale, with several server instances listening to channels in a non-overlapping manner, or with overlap but using a server queue implementation and de-duping messages).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
Implement a solution that allows many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up to that, this is not likely where you want to go.
Monitor which channels become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels from which your server pulls history (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server - your Parse server in this case):
channel active: record the "start chat" timetoken in your Parse db
channel inactive: record the "end chat" timetoken in your Parse db
The inactive event kicks off a process that uses the start/end timetokens you recorded for that channel to fetch its history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
You will need to iterate on this history call until you receive fewer than 100 messages (100 is the max number of messages you can retrieve at a time); a rough sketch of this pagination is shown below.
As you retrieve these messages, you save them to your Parse db.
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
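A rough sketch of that history pagination, in the same v3-style JavaScript as the publish example below. The [messages, startTimetoken, endTimetoken] response shape and the repeat-from-the-returned-start paging are assumptions to verify against the PubNub history docs; channelName, startTT and endTT are the values recorded from the webhooks:
function saveChannelHistory(pubnub, channelName, startTT, endTT, saveMessages) {
  pubnub.history({
    channel: channelName,
    start: startTT,
    end: endTT,
    count: 100, // max messages per call
    callback: function (payload) {
      // assumed v3 response shape: [messages, startTimetoken, endTimetoken]
      var msgs = payload[0];
      saveMessages(msgs); // e.g. write each message to your Parse db
      if (msgs.length === 100) {
        // a full page means there may be more; continue from this page's start timetoken
        saveChannelHistory(pubnub, channelName, payload[1], endTT, saveMessages);
      }
    }
  });
}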
Finally, you could just save each message to the Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); even if not, you can save the message to the Parse cloud db directly.
The code would look something like this (not complete, likely errors, figure it out or ask PubNub support for details) in your JavaScript client (on the browser).
var pubnub = PUBNUB({
publish_key : your_pub_key,
subscribe_key : your_sub_key
});
var msg = ... // get the message from your UI text box or whatever
pubnub.publish({
// this is some variable you set up when you enter a chat room
channel: chat_channel,
message: msg,
callback: function(event){
// DISCLAIMER: code pulled from [Parse example][4]
// but there are some object creation details
// left out here and msg object is not
// fully fleshed out in this sample code
var ChatMessage = Parse.Object.extend("ChatMessage");
var chatMsg = new ChatMessage();
chatMsg.set("message", msg);
chatMsg.set("user", uuid);
chatMsg.set("channel", chat_channel);
chatMsg.set("timetoken", event[2]);
// this ChatMessage object can be
// whatever you want it to be
chatMsg.save();
},
error: function (error) {
// Handle error here, like retry until success, for example
console.log(JSON.stringify(error));
}
});
You might even just store the entire set of publishes (on both ends of the conversation) based on a time interval, number of publishes, or total data size, but be careful: either user could exit the chat and the browser without notice, and you would fail to save those messages. So the per-publish save is probably best practice, if a bit noisy.
I hope one of these techniques gets you started in the right direction. There are details left out, so I expect you will have follow-up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)
