I have a setup like this:
Client <----> Realtime Database <----> AppEngine Server
The AppEngine server has some code inside the servlet init() method.
@Override
public void init(ServletConfig sc) throws ServletException {
    super.init(sc);
    // Set up Firebase...
    firebase.addChildEventListener(/* ...nested SingleValueEventListener... */);
}
Whenever the client updates a node in Firebase, the AppEngine server listens for this change, does some processing, and updates some other nodes.
This setup works for testing, as I am a single user. But what if 100 people are using this app? Am I guaranteed that this ChildEventListener will run code for every user? Will those nested SingleValueEventListeners also trigger?
Or will I have to deal with creating threads for every different Firebase request? Or is this all taken care of by the Firebase Java server SDK?
Also, is the init() method the right place to put the ChildEventListeners, and can I add, say, 10 listeners in there?
On App Engine, Firebase uses background threads to listen for changes. By adding a ChildEventListener you create a long-lived background thread that handles everything for you; there is no need to manage it yourself or create new ones. It will be triggered by changes in Firebase no matter who made them (any of your users can). However, to use long-lived background threads on App Engine, manual scaling must be enabled, which means only one instance of your back end will run. That single instance can only process as many requests as it has capacity for, so your back end has a fixed limit on the number of changes per second it can handle.
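For reference, enabling manual scaling for a Java servlet app is done in appengine-web.xml; a minimal sketch (one instance, matching the single-instance constraint described above):

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <!-- Manual scaling keeps the instance alive, which long-lived
       background threads such as Firebase listeners require. -->
  <manual-scaling>
    <instances>1</instances>
  </manual-scaling>
</appengine-web-app>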
I have a similar application going on and we've been working with it for some months. I do not recommend using the App Engine standard environment for this purpose, as it is not designed to keep persistent connections (we started that way).
Because of this, we sometimes lost the connection to Firebase, and after doing some research we found out that this was a common issue. The only way to solve it was to migrate the server to the flexible environment:
https://cloud.google.com/appengine/docs/flexible/java/migrating-an-existing-app
It is a beta release and I'm not sure about future pricing, but so far it works fine with our application.
Hope this helps you!
Forgive me if I'm heading down the wrong path here, if so, would be grateful if someone could point me in the right direction.
I'm curious about building a snapshot listener in Node/Express that returns database updates similar to how the snapshot listener on cloud firestore works.
For example, a front-end client would be able to listen through a single call, then receive updates in real-time without having to make additional calls.
For simplicity's sake, imagine for some reason we wanted to wrap Firestore's snapshot listener in a node/express function, then pass it onto the client and have identical functionality. How would you go about doing this, or am I totally wide of the mark?
Answering this as Community wiki. As mentioned in the comments,
Building your own persistent listener is definitely possible. If Firebase can do it, so can others.
Web sockets are an option indeed, but not required. Firestore's realtime listeners don't use WebSockets, for example, but the listeners on Firebase's other database (the Realtime Database) do.
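To make this concrete, one way to approximate a snapshot listener over plain HTTP is server-sent events: the client makes a single request, and the server keeps the response open and writes each Firestore snapshot into it. A minimal sketch, assuming firebase-admin is initialized and an illustrative items collection:

const express = require('express');
const admin = require('firebase-admin');

admin.initializeApp();
const app = express();

app.get('/listen/items', (req, res) => {
  // Keep the response open as an SSE stream.
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });

  // Forward every Firestore snapshot to the client as one SSE message.
  const unsubscribe = admin.firestore().collection('items')
    .onSnapshot((snapshot) => {
      const docs = snapshot.docs.map((d) => ({ id: d.id, ...d.data() }));
      res.write(`data: ${JSON.stringify(docs)}\n\n`);
    });

  // Stop listening when the client disconnects.
  req.on('close', unsubscribe);
});

app.listen(3000);

On the client, a single new EventSource('/listen/items') call then receives every update without any further requests.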
We are using Firebase for a mobile app. We have thousands of users.
Expected to hit 100 thousand.
We have a portal to configure the data shown to the user.
Based on user input we need to manipulate a lot of data.
Currently we use a flag for each user, to which we attach an on() listener. So we are going to have thousands of listeners.
These listeners are handled by a Node.js server hosted on Heroku.
Earlier we used Parse, and we had Parse Cloud Code to manipulate the Parse Core DB on a Cloud Code call.
But with Firebase we will eventually need to create a REST API to do the job for us, instead of having 100 thousand listeners for 100 thousand users.
But then we would need to rewrite the networking code on the app side for the REST API calls, which is currently handled by the Firebase library, the reason we went with Firebase in the first place.
Every Firebase listener is a separate websocket connection, and I'm not sure how easy it will be for you to create hundreds of thousands of connections on a single node.js Heroku dyno (although see here how something similar was apparently achieved on an appropriately configured 15GB rackspace cloud server).
If you plan to run your Firebase listeners on multiple Heroku dynos, you will need a way to distribute your listeners across the different dyno instances.
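As a hedged sketch of that distribution (none of this is SDK functionality: DYNO_COUNT, usersRef, and handleFlagChange are illustrative assumptions), each dyno can hash user ids and only attach listeners for its own bucket:

const crypto = require('crypto');

// Heroku names dynos like "worker.3"; derive a zero-based index from it.
const dynoIndex = parseInt(process.env.DYNO.split('.')[1], 10) - 1;
const dynoCount = parseInt(process.env.DYNO_COUNT, 10); // set this yourself

function ownsUser(userId) {
  // Stable hash so every dyno agrees on who owns which user.
  const hash = crypto.createHash('md5').update(userId).digest();
  return hash.readUInt32BE(0) % dynoCount === dynoIndex;
}

usersRef.on('child_added', (snapshot) => {
  if (!ownsUser(snapshot.key)) return; // another dyno handles this user
  snapshot.ref.child('flag').on('value', handleFlagChange);
});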
I am a parse.com user, and now I am looking for another service.
How can I write back-end logic for Firebase?
Let's say I want to validate all the values on the server side, or trigger things. I thought about one solution, but I want to know the recommended way.
I am thinking of the following:
Create a Node.js server that uses Express.
Create middleware to handle the logic.
Send REST requests from the app that trigger the middleware.
Use the Node.js SDK of Firebase to update the values according to the parameters of the HTTP request.
Implement a Firebase handler on the app that listens for changes.
Is there something simpler? In Parse I used Cloud Code; I want the logic to live on the server side, not the client side.
Update (March 10, 2017): While the architecture I outline below is still valid and can be used to combine Firebase with any existing infrastructure, Firebase just released Cloud Functions for Firebase, which allows you to run JavaScript functions on Google's servers in response to Firebase events (such as database changes, users signing in and much more).
The common architectures of Firebase applications are pretty well defined in the blog post "Where does Firebase fit in your app?".
The architecture you propose is closest to architecture 3, where your client-side code talks both to Firebase directly and to your node.js server.
I also highly recommend that you consider option 2, where all interaction between clients and server runs through Firebase. A great example of this type of architecture is the Flashlight search integration. Clients write their search queries into the Firebase database. The server listens for such requests, executes the query and writes the response back to the database. The client waits for that response.
A simple outline for this server could be:
var ref = new Firebase('https://yours.firebaseio.com/searches');
ref.child('requests').on('child_added', function(requestSnapshot) {
  // TODO: execute your operation for the request and produce `result`
  var responseRef = ref.child('responses').child(requestSnapshot.key());
  responseRef.set(result, function(error) {
    if (!error) {
      // remove the request, since we've handled it
      requestSnapshot.ref().remove();
    }
  });
});
With this last approach the client never talks directly to your server, which removes all kinds of potential problems that you would otherwise have to worry about. For this reason I sometimes refer to these processes as "bots", instead of servers.
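For completeness, the client side of that request/response pattern could look like this (legacy Firebase SDK, matching the server snippet above; the query payload is illustrative):

var searches = new Firebase('https://yours.firebaseio.com/searches');

// Write the request; the bot above will pick it up.
var requestRef = searches.child('requests').push({ query: 'badger' });

// Wait for the matching response to appear.
searches.child('responses').child(requestRef.key())
  .on('value', function(snapshot) {
    if (snapshot.exists()) {
      console.log('search result:', snapshot.val());
    }
  });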
2017
Today Google announced Cloud Functions for Firebase
https://firebase.google.com/features/functions/
This is a great solution for the architectures and back end logic in Firebase.
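As an illustration, the request/response bot above could be expressed as a Cloud Function (a minimal sketch against the firebase-functions v1 API; the paths and the runSearch() helper are hypothetical):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Runs on Google's servers whenever a new request node is created.
exports.handleRequest = functions.database
  .ref('/searches/requests/{requestId}')
  .onCreate((snapshot, context) => {
    const result = runSearch(snapshot.val()); // your back-end logic
    return admin.database()
      .ref('/searches/responses/' + context.params.requestId)
      .set(result);
  });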
Here's what I would do:
Validate all the inputs with ".validate" rules. No server is needed for that.
If you have tasks to run, use Firebase Queue and a bot to run the tasks, and you are done.
If you skip the last one, you may have two problems:
If you use the diagram you posted, it will be a little tricky to get the auth object at the server (but not impossible). Go ahead if you don't need to validate the user to allow the request.
If you just use the regular Firebase app to listen for changes and respond (editing the object, for instance, like Frank van Puffelen's example code), you might have scalability problems. Once your back end scales to two (or more) instances, a Firebase edit will trigger the task on all of them. Each instance will notice the change, run the same task once, add/replace the response object once, and try to remove the request object once.
Using Firebase Queue avoids both of these problems.
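A minimal firebase-queue worker sketch (legacy SDK era; the queue path and the doTask() helper are illustrative assumptions). Because each task is claimed by exactly one worker, running several instances does not duplicate the work:

var Queue = require('firebase-queue');
var Firebase = require('firebase');

var queueRef = new Firebase('https://yours.firebaseio.com/queue');
var queue = new Queue(queueRef, function(data, progress, resolve, reject) {
  // Only one worker claims each task, even with many instances running.
  doTask(data)
    .then(resolve)
    .catch(reject);
});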
You can combine these two behaviors simultaneously:
The client side communicates directly with the database
One excellent thing about the Firebase Realtime Database and Firestore is that you can listen to database changes in real time. But it is important to configure the Security Rules so that the client can't modify or read data it is not supposed to.
The client communicates with a Node.js server (or another server)
The Node.js server has administrative privileges via the Firebase Admin SDK; it can perform any change in the database regardless of how the Firebase Security Rules are configured.
The client side should use the Firebase Authentication library to obtain an ID token and send it to the server with each request (e.g. in the headers). For each request received, the Node.js server verifies that the ID token is valid using the Firebase Admin SDK.
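That verification step could look roughly like this as Express middleware (a minimal sketch, assuming firebase-admin is already initialized and tokens arrive in the common Bearer header format):

const admin = require('firebase-admin');

async function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const idToken = header.replace('Bearer ', '');
  try {
    // Throws if the token is missing, expired, or forged.
    req.user = await admin.auth().verifyIdToken(idToken);
    next();
  } catch (err) {
    res.status(401).send('Unauthorized');
  }
}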
I created a documented GitHub project of a Node.js server that uses Firestore Database and Firebase Authentication, check the example here.
I am still pretty new to NodeJS and want to know if I am looking at this in the wrong way.
Background:
I am making an app that runs once a week, generates a report, and then emails it out to a list of recipients. My initial reason for using Node was that I have an existing front end already built with Angular, and I wanted to be able to reuse code to simplify maintenance. My main idea was to have 4+ individual Node apps running in parallel on our server.
The first app would use node-cron in order to run every Sunday. This would check the database for all scheduled tasks and retrieve the stored parameters for the reports it is running.
The next app is a simple queue that would store the scheduled tasks and pass them to the worker tasks.
The actual pdf generation would be somewhat CPU intensive, so this would be a cluster of n apps that would retrieve and run individual reports from the queue.
When done making the pdf, they would pass to a final email app that would send the file out.
My main concern is communication between the apps. At the moment I am setting up the three lower levels (i.e. all but the scheduler) on separate ports with Express, and opening HTTP requests to them when needed. Is there a better way to handle this? Would the basic 'net' module work better than the 'http' package? Is Express even necessary for something like this, or would I be better off running everything as a basic http/net server? So far the only real use I've made of Express is to listen on a specific path for PUT requests and to parse the incoming JSON.
What led me to ask here is that in the logs I occasionally see the HTTP request being reset. This doesn't appear to affect the data received by the child process, but I would still like to avoid errors in my code.
I think that this kind of decoupling could leverage some sort of stateful priority queue with features like retry on failure, clustering, etc.
I've used Kue.js in the past with great success; it's Redis-backed and has nice documentation and a nice interface: http://automattic.github.io/kue/
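For instance, the scheduler could enqueue report jobs and the worker cluster could process them (a minimal Kue sketch; the job type, payload, and generatePdf() helper are illustrative):

var kue = require('kue');
var queue = kue.createQueue(); // connects to a local Redis by default

// Producer (the scheduler app):
queue.create('report', { reportId: 42 })
  .attempts(3) // retry on failure
  .save();

// Consumer (each worker app in the cluster):
queue.process('report', function(job, done) {
  generatePdf(job.data.reportId, done); // call done(err) when finished
});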
I have a good old-style LAMP webapp. A week ago I needed to add a push notification mechanism to it.
Therefore, what I did was add node.js + socket.io on the server and poll the MySQL database every 10 seconds using node.js to check whether there were new items: if so, I would send them to the client(s) with socket.io.
I was pretty happy with the result, even if that is not a proper realtime notification (as there is a lag of up to 10 secs).
Now, I am about to build a new webapp which will need push notifications, too. I am wondering whether to go with the same approach as the first one (that I believe is more stable and mature) or to go totally Node.js, without PHP and Apache. As for the database, I have already decided to go for MongoDB.
Finally, my question is: if I go for Node.js+Socket.io+MongoDB will I get a truly near-real-time webapp? I mean, as soon as a new record is inserted into MongoDB, will there be some sort of event triggered that I can catch via node.js, do some checking on it and, if relevant, send the notification to the client? Or will there be anyway some sort of polling on the db server-side and lag, as with my first LAMP webapp?
A related question: can you build a realtime webapp on MySQL without doing any polling, as I did with my first app? Or do you need MongoDB (or Redis)?
I hope this question is not too silly - sorry, I am just starting with Node.js and co.
Thanks.
I understand your problem because I switched to node.js from php/apache/mysql too.
Generally Node.js is stable; modules and your own scripts are the main sources of errors.
Real-time has nothing to do with the database; it's all about the client and the server. You can query as much data as you want in your requests and push it to the other clients.
Choosing Node.js is very wise, but it's harder to implement.
When you insert a new record into your db, the event is the request itself; you can push an event along with the database query, something like:
// Please note this is not real code, just an example of the idea
app.get('/query', function(request, response) {
  // Query your database (the mysql module's callback is (err, rows))
  db.query('SELECT * FROM users', function(err, rows) {
    // Push a notification to dan
    socket.emit('database_query_executed', 'to_dan', rows);
    // End the request
    response.end('success');
  });
});
Of course you can use MySQL! And any database you want; as I said, real-time has nothing to do with the database, because the database is in the middle of the process and it's totally optional.
If you want to use node.js for push notifications and php/apache for the MySQL queries, then you will need to make two requests, one to each server, something like:
// this is javascript
ajax('http://node.yoursite.com/push', node_options)
ajax('http://php.yoursite.com/mysql_query', php_options)
Or, if you want just one request, or you want to use a form, you can call your PHP and, inside PHP, create an HTTP or net request to node.js, something like:
// this is php
new HttpRequest('http://node.yoursite.com/push', HttpRequest::METH_GET);
Using:
A regular MongoDB Collection as the Store,
A MongoDB Capped Collection with Tailable Cursors as the Queue,
A Node worker with Socket.IO watching the Queue as the Worker,
A Node server to serve the page with the Socket.IO client, and to receive POSTed data (or however else the data gets added) as the Server
It goes like:
The new data gets sent to the Server,
The Server puts the data in the Store,
The Server adds the data's ObjectID to the Queue,
The Queue will send the newly arrived ObjectID to the open Tailable Cursor on the Worker,
The Worker goes and gets the actual data in the ObjectID from the Store,
The Worker emits the data through the socket,
The client receives the data from the socket.
This is 'push' from the initial addition of the data all the way to receipt at the client - no polling, so as real-time as you can get given the processing time at each step.
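A sketch of the Worker's side of this, with the modern Node MongoDB driver (the collection names and itemId field are illustrative; the queue collection must be capped for a tailable cursor to work):

const { MongoClient } = require('mongodb');

async function runWorker(io) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('app');

  // The tailable cursor stays open and yields each new Queue document.
  const cursor = db.collection('queue')
    .find({}, { tailable: true, awaitData: true });

  for await (const queued of cursor) {
    // Fetch the actual data from the Store and push it out the socket.
    const item = await db.collection('store').findOne({ _id: queued.itemId });
    io.emit('new-data', item);
  }
}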
Re: triggers in MongoDB - please see this answer: https://stackoverflow.com/a/12405093/1651408
There are much more convenient triggers in MySQL, but to call Node.js from them would require a bit of work with MySQL UDFs (user-defined functions), for instance pushing data through a Unix socket. Please note that this is necessary only when other applications (besides your Node.js process) are updating the database, and be sure to choose InnoDB as storage in this case (row- vs. table-level locking).
I can see no big problem with your technology choice of socket.io; even if client-side WebSockets aren't supported, you'll fall back (gracefully, I hope) to polling.
Finally, your question is not silly at all, since push technology is definitely superior to a flood of polling requests - it scales better. EDIT: However, I would not describe either technology as real-time.
Another EDIT: for a quite well-known and successful setup of this kind please read this: http://blog.fogcreek.com/the-trello-tech-stack/
Have you discovered Chole? It works separately from your web server and interfaces with it using HTTP POSTs. That way you can code your web app any way you want.
Actually, using push technology like Socket.IO helps you use the server's resources efficiently, and it lets you serve both old and modern browsers by making a WebSocket or WebSocket-like connection.
Polling every 10 seconds means a full HTTP request each time, which is expensive, especially when many users are present.
Unlike polling, push technology is relatively cheap: the user's client opens a dedicated socket (i.e. a WebSocket) to listen for the server's push notifications.
And usually your client-side JavaScript performs some action when a push notification is received.
Using your LAMP stack and Socket.IO on a different port (other than 80) will be good enough to implement what you need.
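That hybrid could be as small as this (a sketch with the Socket.IO v2-style API; the port and event name are illustrative), with the PHP side triggering pushes via a local HTTP call or similar:

var server = require('http').createServer();
var io = require('socket.io')(server);

io.on('connection', function(socket) {
  console.log('client connected:', socket.id);
});

// Call this from whatever server-side code learns about new items.
function notifyAll(payload) {
  io.emit('notification', payload);
}

server.listen(8080); // any port other than Apache's 80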
But using Node.js + MongoDB + Socket.IO actually helps you manage your server's resources much more efficiently, because all three are non-blocking by nature.
If you understand the non-blocking concept correctly and implement your app appropriately, an identical app (same features, but a different language and database) will be able to handle many more requests than a typical LAMP stack.
(The answer included a well-known chart comparing the thread-based and non-blocking approaches to concurrency: Apache (threads) vs. Nginx (non-blocking).)
MySQL is a great database, but I believe you won't need joins and transactions for realtime notifications.
MongoDB does not have those two features unless you implement them yourself.
Because it lacks those two, and because of some characteristics of its own, MongoDB can store and fetch data much faster than traditional SQL databases.
Switching from MySQL to MongoDB will decrease the time it takes to insert and fetch data.
With JS you can open a socket to your server (not in old browsers). The server runs an ad-hoc program (on an ad-hoc port, so you need permission to open ports and run programs on your server) that sends data to and from the client in (almost) real time, without the HTTP protocol's overhead. Old browsers will just fall back to a polling mechanism.
I can't see another way to do this (although there are probably already "cooked" frameworks that do it).