I am building a messaging app and am curious how Stream and Sendbird handle notifications.
Stream and Sendbird cap concurrent connections at a percentage of MAU. A client needs a connection to a server to receive a message, but is that also the case for notifications? If it is, every client needs a connection all the time, concurrent connections end up around 100% of MAU, and that is very expensive.
Thanks,
DK
From Sendbird's perspective (I am an employee of Sendbird), notifications are typically sent only to offline users, meaning that an active connection is not necessary. Depending on your implementation and the devices in use, notifications are sent via APNs for Apple, FCM for Android, and HMS for Huawei.
https://sendbird.com/docs/chat/v3/ios/guides/push-notifications#1-push-notifications
Push notifications support both single and multi-device users and they are delivered only when a user is fully offline from all devices even when they use only one device. In other words, if a user is online on one or more devices, notifications aren't delivered and thus not displayed on any devices.
Additional multi-device support for push notifications is also provided. If selected from your dashboard, for multi-device users, notifications are delivered to all online and offline devices. However, through iOS, notifications are displayed only on offline devices.
https://sendbird.com/docs/chat/v3/android/guides/push-notifications#1-push-notifications
Push notifications support both single and multi-device users and they are delivered only when a user is fully offline from all devices even when they use only one device. In other words, if a user is online on one or more devices, notifications aren't delivered and thus not displayed on any devices.
Sendbird provides two options for push notifications. Choose an appropriate option upon consideration of how much support for multi-device push notifications your client app requires. Compared to this general push notification option, with Multi-Device support, push notifications are delivered to all offline devices even when a user is online on one or more devices. Refer to Understanding the differences in the Multi-Device Support page to understand the differences between two options in detail.
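As a rough illustration of that flow, here is a minimal sketch assuming the Sendbird Android SDK v3 and FCM (the application ID, user ID, and error handling are placeholders, not from your setup): while the realtime connection is open, messages arrive over the socket; once the client disconnects, they arrive as push notifications instead, so you do not need all MAU connected concurrently.

```java
import com.google.firebase.messaging.FirebaseMessaging;
import com.sendbird.android.SendBird;

public class PushSetup {
    // Hypothetical IDs for illustration only.
    private static final String APP_ID = "YOUR_SENDBIRD_APP_ID";
    private static final String USER_ID = "demo-user";

    public static void connectAndRegisterToken(android.content.Context context) {
        SendBird.init(APP_ID, context);

        // connect() opens the realtime socket; while it is open, messages are
        // delivered over the socket and push is skipped by default.
        SendBird.connect(USER_ID, (user, e) -> {
            if (e != null) {
                return; // handle connection error
            }
            // Hand the FCM token to Sendbird so pushes can be delivered
            // once this user goes offline.
            FirebaseMessaging.getInstance().getToken().addOnSuccessListener(token ->
                SendBird.registerPushTokenForCurrentUser(token, (status, e2) -> {
                    if (e2 != null) {
                        // handle registration error
                    }
                }));
        });
    }

    // When the app goes to the background or the user logs out, disconnect;
    // subsequent messages then arrive as FCM pushes instead.
    public static void goOffline() {
        SendBird.disconnect(() -> { /* disconnected */ });
    }
}
```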
Feel free to head on over to the Sendbird Community if you have additional questions!
I have been up and down the documentation and all over the web looking for an answer to this question, but have not had any luck. I have a project where I am looking to retrieve the live audio stream from an Avaya telephone call and then transcribe the call as it's happening.
Does Avaya support this functionality?
You may use DMCC (which has bindings for different languages as well as a language-agnostic XML interface), which implements the ECMA-269 CSTA industry standard. It has methods to start an API session (StartApplicationSession), subscribe to events (MonitorStart) and assume first-party control over a device (RegisterTerminal). If a device is registered by an application in client-media mode, you can directly access the RTP media stream coming into and going out of the phone. The RTP address, port and codec are contained in MediaStartEvent responses, which you can receive via DMCC once you've set up the event subscription properly. There's also a dedicated RecordMessage method that writes the audio stream from a device to a file (which you may process later).
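The exact class names in the DMCC SDK vary by language binding and version, so the sketch below uses a hypothetical DmccClient wrapper purely to show the order of operations just described (start a session, subscribe to events, register in client-media mode, then read the RTP endpoint from MediaStartEvent); it is not the real SDK API.

```java
// Hypothetical wrapper interface: the real DMCC SDK exposes its own classes;
// only the operation names and their order mirror the DMCC API.
interface DmccClient {
    void startApplicationSession(String aesHost, String user, String password); // StartApplicationSession
    void monitorStart(String deviceId);                                          // MonitorStart
    void registerTerminal(String deviceId, String mediaMode);                    // RegisterTerminal
    void onMediaStart(MediaStartListener listener);                              // MediaStartEvent callback
}

interface MediaStartListener {
    // RTP address, port and codec come from the MediaStartEvent response.
    void mediaStarted(String rtpAddress, int rtpPort, String codec);
}

public class CallTapSketch {
    public static void tap(DmccClient dmcc) {
        // 1. Open an API session against the AES server (placeholder credentials).
        dmcc.startApplicationSession("aes.example.local", "dmcc-user", "secret");

        // 2. Subscribe to events for the station to observe (placeholder extension).
        dmcc.monitorStart("4001");

        // 3. Register the station in client-media mode so the RTP stream
        //    is delivered to the application instead of a physical phone.
        dmcc.registerTerminal("4001", "CLIENT_MEDIA");

        // 4. When MediaStartEvent arrives, open an RTP receiver on the advertised
        //    address/port and feed the decoded audio to the transcription engine.
        dmcc.onMediaStart((rtpAddress, rtpPort, codec) ->
            System.out.printf("RTP at %s:%d, codec %s%n", rtpAddress, rtpPort, codec));
    }
}
```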
Registering a device will likely consume a "DMCC license" (one for each registered device). If you use third-party call control methods (such as MakeCall or AnswerCall), a "Basic TSAPI license" will also be consumed for every controlled device. I've found that a "Basic TSAPI" license is consumed as soon as you issue a MonitorStart request to subscribe to events. You may want to consult your vendor about how to obtain the appropriate number of licenses for your AES. I personally found Avaya licensing rather complicated in terms of understanding exactly which set of licenses your application needs. There's a thread of mine on Avaya's DevConnect resource which may shed some light here.
Playing with your Avaya AES installation using the DMCC Dashboard is a good way to familiarize yourself with the DMCC API.
For example, say I have a friend over and he wants to show me a video using an app that runs on both my device and his. Could that app display a QR code on the screen, or something similar, that he could scan to instantly be granted access to my Chromecast device?
As Ali mentioned, Chromecast devices are discovered, and apps launched, over the local network. Once an app is started, it could easily connect with a cloud service that allows other (non-local) devices to talk with your Chromecast via that cloud service. A Chromecast Receiver application is just an HTML5 application (HTML, CSS, and JavaScript). You can really do whatever you want once your application gets launched.
If displaying a QR code that allows some kind of rendezvous with your cloud application is what you want to do, you can certainly do that.
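As a rough illustration of such a rendezvous, the sender app could ask its own cloud service for a short-lived session code and render it as a QR code for the friend to scan. The sketch below uses the open-source ZXing library; the URL and session code are hypothetical placeholders and are not part of any Cast API.

```java
import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

public class RendezvousQr {
    /**
     * Encodes a rendezvous URL (pointing at your own cloud service) into a QR
     * bit matrix. The friend's phone scans it, contacts the cloud service, and
     * the cloud service relays messages to the receiver app on the Chromecast.
     */
    public static BitMatrix encodeSession(String sessionCode) throws WriterException {
        String url = "https://example.com/cast-session/" + sessionCode; // hypothetical endpoint
        return new QRCodeWriter().encode(url, BarcodeFormat.QR_CODE, 400, 400);
    }
}
```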
I presume your friend's mobile device is on the same Wi-Fi network, right? Currently, a Chromecast device has no identity outside of its local Wi-Fi network, so if the sender is not on the same network as the receiver, there is no way they can exchange messages. Back to your question: if your friend is on your network, then he can see your device, except from those applications for which your device is not whitelisted. Is that the case you want to handle through, say, a QR code? If so, that is currently not doable either, since whitelisting is not just a local setup. Maybe I misunderstood your question?
Based on your questions, you are saying that both you and your friend have the same app. If so, and if your friend connects to your Wi-Fi network, then he will see your Chromecast: you do not whitelist a device for a phone, you whitelist a device for an app ID, and as long as your friend has the same app (hence the same app ID, I presume), your whitelisted device will be discoverable from his phone. On the other hand, if you do not want to give him credentials to get on your network, then you need a cloud backend and a lot of work. Although your Chromecast device can send a message to the cloud and your cloud service can notify the other user's phone (using, say, a notification or some other mechanism that you employ in your app), the reverse, i.e. sending a message from your friend's phone to your Chromecast through your cloud service, is much harder. Your friend's phone can send your phone a message (again via a backend service, Bluetooth, NFC, etc.), and then your phone, using your app, can pass that on to the Chromecast receiver, but I am sure you are getting the idea that it is a lot of work. Signing up on your Wi-Fi network can be made easier with a QR code or something, so at this point that would be the easiest solution.
I want my app to be able to receive notifications from the server while it is running in the background. I don't like polling, as that will drain the user's battery, and I need an almost real-time response. Does Series 40 support that?
To answer the question of whether you can intercept SMS on an S40 phone: I'd say yes. You can use the push registry. See this link for the J2ME sample. This is only half of the problem, though, as it is just the receiving end. You also need a server that "pushes" a message to a certain mobile number on its push registry port. You could use an SMS gateway to push the messages.
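A minimal MIDP 2.0 / JSR 120 sketch of that receiving end might look like the following; the port number 5000 is an arbitrary placeholder, and a real MIDlet would do the blocking receive on a separate thread rather than in startApp().

```java
import javax.microedition.io.Connector;
import javax.microedition.io.PushRegistry;
import javax.microedition.midlet.MIDlet;
import javax.wireless.messaging.Message;
import javax.wireless.messaging.MessageConnection;
import javax.wireless.messaging.TextMessage;

public class SmsPushMidlet extends MIDlet {

    protected void startApp() {
        try {
            // Dynamic registration: wake this MIDlet when an SMS arrives on port 5000.
            PushRegistry.registerConnection("sms://:5000", SmsPushMidlet.class.getName(), "*");

            // Open the same port and block until a message is received.
            // (In production, run this loop on its own thread.)
            MessageConnection conn = (MessageConnection) Connector.open("sms://:5000");
            Message msg = conn.receive();
            if (msg instanceof TextMessage) {
                String payload = ((TextMessage) msg).getPayloadText();
                // ...react to the "pushed" payload here...
            }
            conn.close();
        } catch (Exception e) {
            // handle registration / IO errors
        }
    }

    protected void pauseApp() { }

    protected void destroyApp(boolean unconditional) { }
}
```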
I have heard there are USSD commands on mobile phones, but I don't know what they are. I googled a couple of sites but did not understand them. If anybody has knowledge of USSD commands, please share it with me.
How are USSD commands useful in our J2ME MIDP 2.0 application development?
Please suggest some useful URLs so I can learn this properly.
I would also like to hear about AT commands.
Thanks & Regards,
P.SARAVANAN
USSD is Unstructured Supplementary Service Data.
GSM standardizes the syntax (i.e. message transport) of USSD but not the semantics (i.e. what one can do with USSD is network-operator-specific).
USSD follows a request/response pattern. A user sends a USSD request which is processed by the network and eventually answered with a USSD response. In a nutshell, USSD allows an end user to send numerical commands. These commands are transported by protocol functionality within the SS7 signalling stack from the mobile device to the mobile network's MSC (mobile switching center, the network node controlling the mobile network). The network operator configures the MSC to handle specific USSD requests, typically by forwarding them to various other network elements. Among them are:
HLR (home location register, the user database) to switch on/off telephony services.
IN (intelligent network; among other things, the real-time billing platform) and its voucher management system for prepaid top-ups.
USSD gateway to branch out USSD messages to external systems.
These network elements then generate USSD responses which are transported back to the user.
Using USSD from J2ME is possible via:
Devices supporting JSR 120 (Wireless Messaging API); consult the manufacturer's development documentation or device databases to check which devices are covered.
AT commands (AT+CUSD) via serial interface emulation.
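For the AT-command route, here is a minimal sketch of sending an AT+CUSD request to a GSM modem over a serial port, e.g. from a desktop application. It assumes the jSerialComm library; the port name "COM3" and the USSD code *100# are placeholders, and the modem must support the 3GPP TS 27.007 +CUSD command.

```java
import com.fazecast.jSerialComm.SerialPort;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class UssdViaAt {
    public static void main(String[] args) throws Exception {
        // Placeholder port name; on Linux this might be /dev/ttyUSB0.
        SerialPort port = SerialPort.getCommPort("COM3");
        port.setBaudRate(115200);
        port.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, 5000, 0);
        if (!port.openPort()) {
            throw new IllegalStateException("Could not open serial port");
        }
        try (OutputStream out = port.getOutputStream();
             InputStream in = port.getInputStream()) {
            // AT+CUSD=<n>,<str>,<dcs>: n=1 enables the unsolicited result code,
            // <str> is the USSD string, dcs=15 selects the default GSM alphabet.
            out.write("AT+CUSD=1,\"*100#\",15\r".getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Read the raw modem reply; the +CUSD: result carries the network's answer.
            byte[] buf = new byte[1024];
            int n = in.read(buf);
            if (n > 0) {
                System.out.println(new String(buf, 0, n, StandardCharsets.US_ASCII));
            }
        } finally {
            port.closePort();
        }
    }
}
```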
The user composes some message, usually rather cryptic, on the phone keypad.
The phone sends it to the phone company network, where it is received by a computer dedicated to USSD.
The answer from this computer is sent back to the phone.
The answer can be seen on the phone screen, but usually with a very basic presentation.
The messages sent over USSD are not defined by any standardization body, so each network operator can implement whatever it finds suitable for its customers.
Can anyone tell me how to send and receive data between two applications over an ActiveSync connection?
In my scenario there will be one application running on a desktop and another on a Windows Mobile device, and these applications need to communicate with each other. The connection between the desktop and the mobile device can be ActiveSync over USB or Bluetooth. I need the applications to exchange a continuous stream of data, much like a chat application. Ideally, the mobile device application will be sending out data 10-15 times a second (maybe more) and the desktop application will receive the data and display it.
For example, consider the 'Notes' application for the mobile device. Basically, it allows the user to save small textual notes. My application would be something similar, with the exception that it will send all the input it receives to the desktop application. The desktop app will receive the 'inputs' and process them.
Finally, I'm open to using any option other than ActiveSync, provided it supports Bluetooth.
You should check out the ActiveSync API documentation for more information.
There is also an alternative solution, which I use.
Windows Mobile activates a temporary LAN when the device is connected over USB.
You can use Windows Sockets for the communication and avoid ActiveSync,
if it's not too much trouble for you.
Usually, the device gets the IP 169.254.2.1 and the PC gets 169.254.2.2.
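For instance, the desktop side could simply listen on a TCP port and read the stream the device sends. Here is a minimal sketch of such a listener in Java; port 9000 is an arbitrary placeholder, and the device side would use whatever socket API its own framework provides (e.g. .NET Compact Framework sockets) to connect to 169.254.2.2.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class DesktopListener {
    public static void main(String[] args) throws Exception {
        // Listen on the PC side of the temporary USB link (usually 169.254.2.2).
        try (ServerSocket server = new ServerSocket(9000)) {
            System.out.println("Waiting for the device to connect...");
            try (Socket device = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(device.getInputStream(), "UTF-8"))) {
                // The device sends one small text "note" per line, 10-15 times a second.
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("Received: " + line);
                }
            }
        }
    }
}
```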