The Batch Activity Add section of the documentation says:
The batch method doesn't trigger a fanout - therefore the followers of these feeds won't receive an update.
So it doesn't push the activity to the followers' feeds. But does it still trigger the firehose or webhooks?
For example, will this activity trigger the webhook or firehose for all three notification feeds?
var feeds = ['notification:1', 'notification:2', 'notification:3'];
var activity = { 'actor': 'user:2', 'verb': 'pin', 'object': 'place:42', 'target': 'board:1' };
client.addToMany(activity, feeds);
Our firehose reads only what's in the feeds themselves, so if a feed does not receive an activity, that activity won't show up on the firehose either.
For example, user A is followed by user B and user C. If user A adds an activity to their feed using the add-to-many batch endpoint, then user B and user C will not get the activity in their feeds, which means the firehose will also not get that activity.
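To make the contrast concrete, here is a rough sketch using the server-side getstream JS client (the key, secret, and feed IDs are placeholders):
var stream = require('getstream');
// server-side client; use your own key and secret
var client = stream.connect('api-key', 'api-secret');

var activity = { actor: 'user:2', verb: 'pin', object: 'place:42', target: 'board:1' };

// batch add: writes the activity directly into these three feeds only,
// with no fanout to their followers
client.addToMany(activity, ['notification:1', 'notification:2', 'notification:3']);

// single add: writes to user:2's feed and fans out to every feed that follows it
client.feed('user', '2').addActivity(activity);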
Related
I am working on an e-commerce site. There are times when a product is no longer available, but a user has already added it to their cart or to their saved items. How do I implement a feature so that when the product is updated, the user is notified as soon as possible?
I thought about running a cron job that checks whether the product is still available or has recently been updated, but I don't know whether that is feasible. I am open to better ideas.
Thanks
What you are trying to achieve falls into the real-time updates category, and there is technically more than one way to do it. The right solution depends on your application architecture and requirements. In the meantime, I can suggest looking into the Ably SDK for Node.js, which offers a good starting point.
Here is a sample back-end implementation that publishes a message when an item's stock runs out:
// import the Ably client library (Node.js)
var Ably = require('ably');
// create client
var client = new Ably.Realtime('your-api-key');
// get the appropriate channel
var channel = client.channels.get('product');
// publish a named message (the name could be the product type in your case)
// with the remaining quantity as the message payload
channel.publish('some-product-type', 0);
On the subscriber side, which would be your web client, you can subscribe to messages and update your UI accordingly:
// create client using same API key
var client = new Ably.Realtime('your-api-key');
// get product channel
var channel = client.channels.get('product');
// subscribe to messages and update your UI
channel.subscribe(function (message) {
const productName = message.name;
const updatedQuantity = message.data;
// update your UI or perform whatever action
});
I built a live betting app once, and of course live updates are the most important part.
I suggest taking a look at WebSockets. The idea is pretty straightforward: on the backend you emit an event, let's say itemGotDisabled, and on the frontend you connect to your WebSocket and listen for events.
You can create a custom component that handles the WebSocket event logic to keep the code cleaner and more organized, and you can run whatever update logic you want in the component as easily as yourFEWebsocketInstance.onmessage = (event) => {}.
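For illustration, here is a minimal sketch assuming a Node.js backend with the ws package; the event name itemGotDisabled, the port, and the handler names are just placeholders:
// server side (Node.js + ws): broadcast an event when a product is disabled
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

function notifyItemDisabled(productId) {
  const payload = JSON.stringify({ event: 'itemGotDisabled', productId: productId });
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  });
}

// browser side: connect once and react to incoming events
const socket = new WebSocket('ws://localhost:8080');
socket.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.event === 'itemGotDisabled') {
    // e.g. mark the product as unavailable in the cart / saved items UI
  }
};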
Of course this isn't the only way, and I'm sure there are packages that implement it in an even easier, more straightforward fashion.
I have a web chat where a user connects to the bot through Direct Line.
I want a second user to join the same conversation, but I want the second user to be able to read the full conversation.
Right now, when the second user connects to the conversation, they don't see anything of the first user's conversation because they don't join with a watermark value.
I have this code in Bot Builder v4 right now:
// rp is the request-promise module: const rp = require('request-promise');
const options = {
    method: 'GET',
    uri: 'https://myuri/addRow?conversationId=' + stepContext.context.activity.conversation.id,
};
await rp(options);
I would like to send something like this:
const options = {
    method: 'GET',
    uri: 'https://myuri/addRow?conversationId=' + stepContext.context.activity.conversation.id + '&watermark=' + watermark,
};
await rp(options);
Is there any way to get that watermark value?
Thanks
Per this GitHub issue:
The cache of messages in the Direct Line connector service is intended to be used as a connection reliability mechanism, not as an actual message history store.
If you require more granular control over conversation history, you will need to implement a transcript store server-side. You can then use the SendConversationHistoryAsync API to send chunks of history messages to the conversation.
We do not currently have a complete example demonstrating this, but it is in the works.
I would recommend using a transcript logger to store and manage your own conversation history instead of trying to pull the messages from the cache. Also, if you try to use the watermark, you'll run into permission issues since one conversation doesn't have the ability to see another conversation's data.
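As a rough sketch of that transcript-logger approach in a Node.js Bot Builder v4 bot (the botbuilder package ships TranscriptLoggerMiddleware and an in-memory store; a real deployment would swap in a persistent store, and the getHistory helper here is purely illustrative):
const { BotFrameworkAdapter, MemoryTranscriptStore, TranscriptLoggerMiddleware } = require('botbuilder');

// log every incoming and outgoing activity; MemoryTranscriptStore is fine for testing,
// but a persistent store should be used in production
const transcriptStore = new MemoryTranscriptStore();

const adapter = new BotFrameworkAdapter({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword
});
adapter.use(new TranscriptLoggerMiddleware(transcriptStore));

// illustrative helper: fetch the logged activities for a conversation so they
// can be replayed to the second user when they join
async function getHistory(conversationId) {
    const pagedResult = await transcriptStore.getTranscriptActivities('directline', conversationId);
    return pagedResult.items;
}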
Hope this helps!
Hello, I'm a newbie and I'm struggling to understand notifications in a service worker. Because my knowledge isn't good yet, I probably won't be able to explain my problem clearly.
So here's the code:
// triggered every time a push notification is received
self.addEventListener('push', function(event) {
console.info('Event: Push');
var title = 'New commit on Github Repo: RIL';
var body = {
'body': 'Click to see the latest commit',
'tag': 'pwa',
'icon': './images/48x48.png'
};
event.waitUntil(
self.registration.showNotification(title, body)
);
});
This is the code that triggers the notification to pop up. What I don't understand is: where is the argument that accepts/receives the data?
I've searched a lot: https://auth0.com/blog/introduction-to-progressive-web-apps-push-notifications-part-3/ ,
https://developers.google.com/web/updates/2015/03/push-notifications-on-the-open-web
There's something about new JSON data from the git server or the Push API, but I still hardly understand where the data is accepted.
Sorry if you still don't understand what my problem is.
To keep it simple, here is what I want: let's say I make a button, and every time I click the button it produces the value 'True'. I want that 'True' value to be passed as an argument and to trigger the push notification in the service worker.
Second question: am I able to trigger a notification from a header or text in the HTML, since we can manipulate the text with the DOM?
Am I able to trigger a notification without GCM or a push API? I just want a simple notification in the service worker like the one above, without passing much data.
If you have more advice, or maybe a way to do real-time notifications without a service worker, I'm happy to read it; I just hope I'll be able to understand it.
There are basically two concepts involved that work well together but can be used independently. The first is the visible UI shown to a user that tells them information or prompts them for an action. The second is sending an event from a server to the browser without requiring the user to currently be active on the site. For full details I recommend reading Google's Web Push docs.
Before either of those scenarios you have to request permission from the user. Once permission is granted you can just create a notification. No server or service worker required.
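For example, here is a minimal sketch of that first scenario, run from regular page code (no server or service worker), which also maps to your button idea; the button id and the text are placeholders:
// e.g. wired to a button's click handler
document.getElementById('notify-btn').addEventListener('click', function () {
  Notification.requestPermission().then(function (permission) {
    if (permission === 'granted') {
      // show a notification directly from the page
      new Notification('Button clicked', { body: 'The value is now True' });
    }
  });
});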
If you want to send events from a server you will need a service worker and you will need to get a subscription for the user. Once you have a subscription you would send it to a server for when you want to send an event to that specific browser instance.
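A rough sketch of getting that subscription from the page; the VAPID key, the conversion helper, and the /save-subscription endpoint are placeholders you would supply yourself:
navigator.serviceWorker.register('/service-worker.js')
  .then(function (registration) {
    return registration.pushManager.subscribe({
      userVisibleOnly: true,
      // applicationServerKey must be a Uint8Array; urlBase64ToUint8Array is a
      // small helper (not shown) that converts your base64 VAPID public key
      applicationServerKey: urlBase64ToUint8Array('<your-public-vapid-key>')
    });
  })
  .then(function (subscription) {
    // send the subscription to your server so it can push to this browser later
    return fetch('/save-subscription', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(subscription)
    });
  });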
Once you receive a push event from a server you display the UI the same as in the first scenario except you have to do it from the service worker.
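That push event is where the data actually arrives; it comes in on event.data. Here is a variant of the handler from the question, assuming the server sends a JSON payload with title and body fields:
self.addEventListener('push', function (event) {
  // event.data holds whatever payload the server sent with the push message
  var payload = event.data ? event.data.json() : { title: 'Update', body: 'No payload' };
  event.waitUntil(
    self.registration.showNotification(payload.title, {
      body: payload.body,
      icon: './images/48x48.png'
    })
  );
});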
We're setting up push notifications for our app. We are creating a console app that will determine which users to send to and then send the notifications to those users. What's not obvious to us at this point is: how do we know when each one has completed, or failed? There's not a lot of documentation provided by Microsoft (big surprise there), and what documentation there is doesn't really explain how to read the responses.
For instance, here's an example snippet of what we think we need to implement. Because we could have thousands of people receiving a notification at one time, we want the sends to run in parallel and not block the UI.
public async Task GenerateNotifications()
{
NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(AppHelper.AzureNotificationHubConnectionString, "myhub");
List<Task<NotificationOutcome>> notificationTasks = new List<Task<NotificationOutcome>>();
for (int i = 0; i < 10; i++)
{
var notification = new
{
aps = new
{
alert = string.Format("Awesome Notification {0}", i),
sound = "default"
}
};
string notificationJSON = JsonConvert.SerializeObject(notification);
notificationTasks.Add(hub.SendAppleNativeNotificationAsync(notificationJSON, "mytag"));
}
await Task.WhenAll(notificationTasks);
}
This makes sense to us: we use the WhenAll method to execute all the tasks in parallel. But is there a way to know what happens with EACH task that gets run? For instance, the ContinueWith method seems to do what we want, except we think it will only run after ALL the tasks are completed, and not after each one (please correct me if I'm wrong).
So, is there a way to read each response of a WhenAll call? If not, is there a better way to do what we are trying to do? I will supply any other information needed; please just ask.
You can use the per-message telemetry feature to get the result of each notification sent. However, you will need to upgrade to the Standard tier to do that. See the links below:
Per Message Telemetry
Per Message Telemetry Blog
On a side note, you can use the NotificationOutcome.Result property by setting the EnableTestSend property, as shown in the EnableTestSend link (search for "Debug failed notifications"). This will only send the notification to 10 devices that match your condition, and it is primarily used for debugging purposes.
I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in the database, and I also want to send messages in the chat.
I found out that I can use Parse with PubNub to provide storage options, but I'm not sure how to set the two up so that the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options available to use with PubNub instead of Parse?
Sutha,
What you are seeking is not trivial unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen to (subscribe to) every active chat channel and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you see how tricky this gets to scale: several server instances listening to channels in a non-overlapping manner, or with overlap but using a server-side queue and de-duplicating messages).
That said, yes, there are PubNub customers who have implemented such a solution; Parse is not the key to making it happen, by the way.
You have three basic options for implementing this:
Implement a solution that allows many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up for that, this is probably not the way to go.
There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels from which your server pulls history (enable Storage & Playback on your keys) in an on-demand (not completely real-time) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (to an endpoint that you implement on your server, your Parse server in this case):
channel active: record "start chat" timetoken in your Parse db
channel inactive: record "end chat" timetoken in your Parse db
the inactive event kicks off a process that uses the start/end timetokens you recorded for that channel to get its history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
you will need to iterate on this history call until you receive fewer than 100 messages (100 is the max number of messages you can retrieve at a time); see the sketch after this list
as you retrieve these messages you will save them to your Parse db
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
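Here is a rough sketch of that history loop (PubNub v3-style callbacks, matching the client code further down); exact timetoken semantics vary by SDK version, so expect to adjust for duplicates at page boundaries:
// pull a chat session's history in pages of up to 100 messages and hand each
// page to a save function (e.g. one that writes ChatMessage rows to Parse).
// startTT/endTT are the timetokens recorded from the presence webhooks.
function archiveChannel(channelName, startTT, endTT, savePage) {
  function fetchPage(pageEnd) {
    pubnub.history({
      channel: channelName,
      start: startTT,   // older boundary of the chat session
      end: pageEnd,     // newer boundary; moves backwards as we page
      count: 100,
      callback: function (response) {
        var messages = response[0]; // up to 100 messages, oldest first
        var oldestTT = response[1]; // timetoken of the oldest message in this page
        savePage(channelName, messages);
        if (messages.length === 100) {
          // more pages may remain; continue from the oldest message we just saw
          fetchPage(oldestTT);
        }
      },
      error: function (err) {
        console.log(JSON.stringify(err));
      }
    });
  }
  fetchPage(endTT);
}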
Finally, you could just save each message to the Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe it has some sort of store-locally-then-sync-to-cloud option (like StackMob when that was a product); even if not, you can save the message to the Parse cloud db directly.
In your JavaScript client (in the browser), the code would look something like this (not complete, and likely with errors; figure it out or ask PubNub support for details):
var pubnub = PUBNUB({
publish_key : your_pub_key,
subscribe_key : your_sub_key
});
var msg = ... // get the message from your UI text box or whatever
pubnub.publish({
// this is some variable you set up when you enter a chat room
channel: chat_channel,
message: msg,
callback: function(event){
// DISCLAIMER: code pulled from a Parse example,
// but there are some object creation details
// left out here and msg object is not
// fully fleshed out in this sample code
var ChatMessage = Parse.Object.extend("ChatMessage");
var chatMsg = new ChatMessage();
chatMsg.set("message", msg);
chatMsg.set("user", uuid);
chatMsg.set("channel", chat_channel);
chatMsg.set("timetoken", event[2]);
// this ChatMessage object can be
// whatever you want it to be
chatMsg.save();
},
error: function (error) {
// Handle error here, like retry until success, for example
console.log(JSON.stringify(error));
}
});
You might even just store the entire set of publishes (on both ends of the conversation) based on a time interval, the number of publishes, or the total data size, but be careful: either user could exit the chat and close the browser without notice, and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
I hope one of these techniques gets you started in the right direction. There are details left out, so I expect you will have follow-up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)