In my project, I have to send the user's location periodically (e.g. every 30 seconds) from an Android application to a server; the PHP server returns the nearest users and the app displays them on the map. I am looking for a publish/subscribe protocol for the real-time data transfer.
Here is the architecture I planned. Each mobile app user's ID is treated as a topic (channel) so that we can send a response to each user. The PHP server is also a topic, e.g. "APP_SERVER_TRACKER". All mobile apps publish their location to the server topic; the server prepares the list and publishes the response to each individual user, using their ID as the topic.
I would like to know how my PHP backend server can subscribe to a topic.
In reality, all my users connect to my PHP backend server over a single channel via PubNub. After getting each user's location, the server processes it and publishes a response to the end user. In this scenario, do I need to scale my PHP server? Is there a way to handle all my customer requests directly within the PHP server itself? Please advise.
The application will target at least 10 million users and should support half of them as concurrent users. The PHP server will be hosted on an AWS EC2 m3.2xlarge (8-core) instance. Please let me know if this AWS server configuration is adequate.
PubNub Geohash Use Case
You don't need your PHP server to do this. Just have your Android app subscribe to what is called a geohashed channel (a channel name based on your location). Your app will also publish its location to that channel.
So now you are receiving location updates from others near you while at the same time you are sending your location to those that are near you.
See PubNub Geohash use cases for more details.
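For illustration, here is a minimal sketch of that pattern using the PubNub JavaScript SDK, with the ngeohash npm package assumed for building channel names; the keys, precision, coordinates, and message fields below are all placeholders, not anything prescribed by PubNub.

    // Sketch only: keys, precision and field names are illustrative.
    const PubNub = require('pubnub');
    const ngeohash = require('ngeohash'); // any geohash library will do

    const pubnub = new PubNub({
      publishKey: 'pub-key',
      subscribeKey: 'sub-key',
      uuid: 'user-123',
    });

    let myLat = 40.71, myLng = -74.01; // updated from the device's GPS

    // A 4-character geohash covers a cell of roughly 20 x 40 km; tune to taste.
    const channelFor = (lat, lng) => 'geo_' + ngeohash.encode(lat, lng, 4);

    // Receive locations from others in the same cell and plot them.
    pubnub.addListener({
      message: ({ message }) => {
        console.log('nearby user', message.uuid, message.lat, message.lng);
      },
    });
    pubnub.subscribe({ channels: [channelFor(myLat, myLng)] });

    // Publish my own location to the same cell every 30 seconds.
    setInterval(() => {
      pubnub.publish({
        channel: channelFor(myLat, myLng),
        message: { uuid: 'user-123', lat: myLat, lng: myLng },
      });
    }, 30000);

When a user crosses a cell boundary, the app would unsubscribe from the old channel and subscribe to the new one.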
I'm looking for general guidance on my systems design.
I currently have two services:
A Vendor service connected to a Mongo database that contains vendor-specific data, including their Twitter IDs.
A Twitter service that has no database connectivity but listens to the live Twitter feeds of specific vendors. Once there is a new tweet, the Twitter service processes it and sends it back to the Vendor service for storage via a message queue.
Problem: on startup, the Twitter service doesn't know which vendors to listen to, because it doesn't have access to the vendor database.
My solution: on startup, the Twitter service sends a message to the Vendor service requesting the list of all vendor Twitter IDs to listen to. On receiving the message, the Vendor service gets all the vendor Twitter IDs from the database and sends the list back to the Twitter service via the message queue. The Twitter service then begins to listen to those vendors' feeds.
My question is: is my solution correct, or would it just be easier for the Twitter service to connect to the Mongo database directly? One plus of my solution, I think, is that if I want to scale up the Twitter service, the Vendor service can manage which vendor IDs get sent to each Twitter service instance to listen on. I'm just not entirely certain; a second opinion would be appreciated.
My opinion, worth everything you are paying for it, is that you are on the right track.
With the Twitter service asking the Vendor service, you get these benefits:
You can enforce any access control you may need at the service request boundary.
You can change the database (or anything else) on the Vendor service side without disturbing the Twitter service.
You can still send the request if the Vendor service isn't running yet (the queue holds it, removing problems caused by startup order).
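For concreteness, here is a minimal sketch of that startup handshake, assuming RabbitMQ via the amqplib package; the queue names and message shapes are illustrative, not part of your original design.

    // Sketch only: assumes RabbitMQ (amqplib); names are illustrative.
    const amqp = require('amqplib');

    // Twitter service: request the vendor Twitter IDs at startup.
    async function requestVendorIds() {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();
      // Durable request queue: the message waits even if the Vendor
      // service hasn't started yet (the startup-order benefit above).
      await ch.assertQueue('vendor.twitter-ids', { durable: true });
      const { queue: replyQueue } = await ch.assertQueue('', { exclusive: true });

      const reply = new Promise((resolve) => {
        ch.consume(
          replyQueue,
          (msg) => resolve(JSON.parse(msg.content.toString())),
          { noAck: true }
        );
      });
      ch.sendToQueue('vendor.twitter-ids', Buffer.from('list'), { replyTo: replyQueue });
      return reply; // e.g. ['123456', '789012']
    }

    // Vendor service: answer from its Mongo-backed vendor collection.
    async function serveVendorIds(loadIdsFromMongo) {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();
      await ch.assertQueue('vendor.twitter-ids', { durable: true });
      ch.consume('vendor.twitter-ids', async (msg) => {
        const ids = await loadIdsFromMongo(); // query the vendor DB
        ch.sendToQueue(msg.properties.replyTo, Buffer.from(JSON.stringify(ids)));
        ch.ack(msg);
      });
    }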
Sounds like you're doing well... keep going.
I have created a Watson conversation using the assistant-simple GitHub repo in Node.js, and it is working fine both locally and in IBM Cloud. Now I want to log the conversation messages in a database. How can I log these conversation messages to a database in Node.js?
Assistant will keep the messages in a log for a short period of time; see List logs for a workspace and Log limits.
Alternatively, you will have to write code that puts messages into a database inside the Node.js (or other language) server orchestrator layer (the layer the UI communicates with). This layer sees all the user messages and Assistant responses, and so can store them wherever you want.
I am not aware of a sample which directly communicates with Assistant and stores the user messages in a database. You would need to take various pieces of code and put them together to achieve this.
For example, this sample shows how to upload information to a Cloudant database running on IBM Cloud, using Node.js.
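To give a rough idea of what piecing those together might look like, here is an illustrative orchestrator-layer sketch using the ibm-watson and @cloudant/cloudant Node.js packages; the database name, environment variables, and document shape are placeholders of my own, not from any official sample.

    // Sketch only: credentials, IDs and the document shape are placeholders.
    const AssistantV1 = require('ibm-watson/assistant/v1');
    const { IamAuthenticator } = require('ibm-watson/auth');
    const Cloudant = require('@cloudant/cloudant');

    const assistant = new AssistantV1({
      version: '2020-04-01',
      authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY }),
      serviceUrl: process.env.ASSISTANT_URL,
    });
    const logs = Cloudant({ url: process.env.CLOUDANT_URL }).db.use('conversation_logs');

    // Called by the UI for every turn of the conversation.
    async function handleUserMessage(text, context) {
      const { result } = await assistant.message({
        workspaceId: process.env.WORKSPACE_ID,
        input: { text },
        context, // carries conversation state between turns
      });
      // Store both sides of the exchange before replying to the UI.
      await logs.insert({
        timestamp: new Date().toISOString(),
        user_input: text,
        assistant_output: result.output.text,
        conversation_id: result.context.conversation_id,
      });
      return result;
    }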
Alternatively, if you don't want to write the code locally, you can invoke App Connect to store the data in a database. This Assistant and App Connect sample shows how to use Assistant actions to invoke App Connect at some point in the dialog flow, either from the Assistant service (using a Cloud Function) or from the orchestrator layer (as a client action).
The sample passes a user ID found in the utterance, but the general approach is to take some data from Assistant, pass it to App Connect, and have App Connect call some other external system with that data. In your case, the data could be the user utterance and the Assistant response, and App Connect could store them in a database.
One option would be to leverage cloud functions to make a call to another service.
It depends on what you want to do with the conversation data. If you want access to chat logs and metrics, you could send the data to a logging service such as www.chatseer.com so you can access the logs.
I have a Firebase mobile application that serves many users.
The application needs to send email notifications to users, and Firebase cannot send functional email.
Zapier is not an option, as its webhook service is very limited and can't consume complex JSON such as the email body.
To solve this, I store an "email job" in the Firebase database (including To, Subject, and Body), and I set up a "mail server" using a Node.js server (at home) that listens to the Firebase database, so that whenever there is a new email job it sends the mail and marks the job status as "DONE".
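As an illustration, a single mail server of this kind might look roughly like the sketch below, assuming the Realtime Database via the firebase-admin package and nodemailer for sending; the /emailJobs path and field names are my own placeholders.

    // Sketch only: job path, field names and SMTP settings are placeholders.
    const admin = require('firebase-admin');
    const nodemailer = require('nodemailer');

    admin.initializeApp(); // credentials taken from the environment
    const transporter = nodemailer.createTransport({ /* SMTP settings */ });
    const jobs = admin.database().ref('emailJobs');

    // Fires for existing NEW jobs at startup and for each new one after.
    jobs.orderByChild('status').equalTo('NEW').on('child_added', async (snap) => {
      const job = snap.val();
      await transporter.sendMail({ to: job.to, subject: job.subject, text: job.body });
      await snap.ref.update({ status: 'DONE' });
    });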
To maintain high availability and scalability, I must be able to run more than one mail server, but this will cause duplicate mail, as all the servers will listen for jobs.
I cannot address a job to a specific server, as that server may be down and I would lose jobs. Also, Firebase does not have anything like the SELECT FOR UPDATE that SQL databases use to maintain concurrency.
Is there a way to solve this issue using Firebase? If not, is there a workaround?
How do you store permanent data in a Slack Application?
For example, the Opsidian slack app has a command to add your AWS keys. Where does it store those keys and how does it know to use specific keys for specific teams?
Is this on the Opsidian side? If that's the case, does it just call the team.info endpoint on every command to match things up?
I have searched their documentation and Google with no luck.
A Slack app usually consists of program code (e.g. PHP) and a database (e.g. MySQL) that run on a server and interface with Slack through one of the APIs. All Slack team-specific information is stored in a custom database using the unique team ID as the key. The server needs to be accessible from the Internet so that Slack can communicate with it. Neither the server that runs the app's program code nor the app's custom database is provided by Slack; both need to be set up and maintained by the Slack app developer.
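As a minimal illustration of that lookup, a slash-command handler can use the team_id field that Slack includes in every command payload as the key into the app's own store; the Express route and in-memory map below are placeholder stand-ins for whatever server and database the app actually uses.

    // Sketch only: the route and storage are illustrative stand-ins.
    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: true })); // Slack posts form-encoded data

    // Stand-in for the app's own database, keyed by the unique team ID.
    const teamSettings = new Map(); // e.g. teamSettings.set('T0123456', { awsKey: '...' })

    app.post('/slack/command', (req, res) => {
      const teamId = req.body.team_id; // present in every slash-command payload
      const settings = teamSettings.get(teamId); // the per-team lookup
      if (!settings) {
        return res.send('No AWS keys configured for this workspace yet.');
      }
      // ...use settings.awsKey to carry out the command for this team...
      res.send("Running the command with this workspace's stored keys.");
    });

    app.listen(3000);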
Slack itself only stores the basic configuration for an app (everything you see under "Your apps", e.g. the validation token) and some basic configuration per team after installation (e.g. that the app is installed and who installed it). Any other application-specific information has to be stored by the app itself in its database.
The Slack app developer also needs to provide a custom website to allow installation of a Slack app for a team. See this answer for more info about the installation process and how to obtain a team specific access token.
I'm trying to piece together the general workflow of giving a user push notifications via the service worker.
I have followed this Google Developers service worker push notifications tutorial and am currently thinking about how I can implement this sort of thing in a small user based web app for experimentation.
In my mind, the general workflow of a web app supporting push notifications is as follows:
Client visits app
Service worker yields a push notification endpoint
Client sends the endpoint to the server
Server associates the endpoint with the current user that the endpoint was generated for
Every time something happens that your app deems notification-worthy, the server grabs the push notification endpoint(s) associated with the user and hits them to send a push notification to any of the user's devices (possibly with a data payload in Chrome 50+, etc.)
Basically, I just want to confirm that my general implementation thoughts on this technology are accurate, or else get feedback if I am missing something.
You are pretty much bang on; there are some specifics that aren't quite right (but this is largely phrasing and may be down to personal taste).
Client visits app
Register a Service Worker that you want to use for push messaging
Use the service worker registration to subscribe the user to push messaging, at which point the user agent will configure an endpoint plus additional values for encrypting payloads (if the user agent supports it).
Client sends the endpoint to the server
The server stores the endpoint and data for later use (the server can associate the endpoint with the current user if the web app has user accounts).
Whenever the server wishes to send a notification to one or more users, it grabs the appropriate endpoints and calls them; this wakes up the service worker, which can then display a notification.
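As a rough client-side sketch of those steps (the /api/subscribe URL and notification text are my own placeholders, and newer browsers additionally require an applicationServerKey when subscribing):

    // Page script (sketch): register, subscribe, send the endpoint to the server.
    navigator.serviceWorker.register('/sw.js')
      .then((registration) =>
        // Newer browsers also want { applicationServerKey: <VAPID key> } here.
        registration.pushManager.subscribe({ userVisibleOnly: true })
      )
      .then((subscription) =>
        fetch('/api/subscribe', { // server associates this with the current user
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(subscription), // includes the endpoint + keys
        })
      );

    // sw.js (sketch): woken by a push, show a notification.
    self.addEventListener('push', (event) => {
      event.waitUntil(
        self.registration.showNotification('Something happened', {
          body: 'Open the app for details.',
        })
      );
    });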
Payload support is coming in Chrome 50+, and at the time of writing payloads are supported in Firefox, but three different versions of payload encryption are used across three different versions of Firefox, so I'd wait for the payload support story to be ironed out a little before using or relying on it.