How to make the Fluid Framework work offline? - fluid-framework

We tested the Fluid Framework with minimal requirements.
Everything works fine in the demo application. https://github.com/microsoft/FluidExamples
Questions:
How does the Fluid Framework work offline?
Our backend is written in ASP.NET Core. The server side of the Fluid Framework is written in Node.js.
How do we integrate the ASP.NET Core backend with the Fluid service to get the latest document state?
Do we need to write a driver for a Fluid service?

Offline.
Offline is an interesting topic for CRDTs generally. The Fluid Framework handles intermittent (short) offline scenarios well as long as all the connected clients have the metadata necessary to merge the change in. When the user makes changes, she does so with respect to a minimum sequence number (MinSeq). If her offline changes get added to the total order broadcast such that they are above her change's MinSeq, they will be merged with no additional handling.
MinSeq deserves a whole post, but it's the sequence number beneath which all connected clients can garbage collect the metadata necessary to merge changes. Therefore, as long as every client has that metadata, even if you were offline, the merge is easy.
For longer offline scenarios, the reconnecting client could request to lower the MinSeq (probably to the offline client's last MinSeq). All clients would then fetch the ops since the requested MinSeq and recreate the metadata. At this point, new changes can go in easily because we have mimicked the scenario where the new changes are above the minimum sequence number.
This could cause temporary perf & memory issues, but hasn't been a problem in reasonable experiments. I don't believe this feature is currently implemented in the Fluid Framework code base, but has been designed as above. (A PR on this topic would be very interesting!)
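To make the mechanism concrete, here is a toy model in TypeScript. This is my own illustration of the idea, not Fluid's actual implementation; the class and field names are invented:

```typescript
// Toy model (NOT Fluid internals): ops that reference a sequence number at or
// above MinSeq merge cleanly; older refs need the metadata window re-expanded.
interface Op { clientId: string; refSeq: number; payload: string; }

class ToyOrderingService {
  private seq = 0;
  private minSeq = 0;        // below this, clients have GC'd merge metadata
  private log: Op[] = [];

  sequence(op: Op): number | "needs-catchup" {
    if (op.refSeq < this.minSeq) {
      // Client was offline too long: its reference point was collected.
      // The designed (not yet implemented) handling: lower MinSeq, have all
      // clients refetch ops and rebuild metadata, then resubmit the op.
      return "needs-catchup";
    }
    this.log.push(op);
    return ++this.seq;       // op joins the total order; merging is trivial
  }

  // Advance only once every connected client has processed `to`.
  advanceMinSeq(to: number) { this.minSeq = to; }
}
```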
For even longer offline scenarios, you would probably need a three-way merge. For example: two users open a document, user A gets on an airplane (loses internet) and writes Macbeth, while user B writes Pride & Prejudice. What is the expected behavior when user A rejoins the session? These are entirely incompatible document states, so we'd need to present the users with a dialog of some sort.
This is not implemented, but we have discussed some of the framework ergonomics of handling the three-way merge (i.e. what APIs would the app developer need.)
Running Fluid in a non-NodeJS backend
Document state is managed by the clients and is opaque to the server. There's a similar discussion in this issue. The easiest way to get Fluid Framework document state into a .NET backend would be to run a JS engine and open the Fluid document. You could also run a Node.js microservice that hosts the Fluid document; this service would have a simple API surface that lets you fetch the data you need, as in the sketch below.
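For example, a minimal sketch of such a microservice, assuming the Tinylicious dev service and the fluid-framework 1.x client API (the appState map and the route are illustrative):

```typescript
// Hypothetical sketch: a small Express service that opens a Fluid container
// and exposes a JSON snapshot of its state for a non-JS backend to consume.
import express from "express";
import { TinyliciousClient } from "@fluidframework/tinylicious-client";
import { SharedMap } from "fluid-framework";

const schema = { initialObjects: { appState: SharedMap } };
const client = new TinyliciousClient();
const app = express();

app.get("/documents/:id/state", async (req, res) => {
  // Join the session as a regular client; merging happens client-side.
  const { container } = await client.getContainer(req.params.id, schema);
  const map = container.initialObjects.appState as SharedMap;
  // Snapshot the DDS into plain JSON that the .NET backend can consume.
  res.json(Object.fromEntries(map.entries()));
  container.dispose();
});

app.listen(3000);
```

The .NET backend would then call GET /documents/{id}/state over HTTP whenever it needs the latest document state.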

Related

How does an API compare to directly querying your database?

I am kind of confused about when an API is needed. I recently created a mobile app with Flutter and Cloud Firestore as the database, where I simply queried and wrote to the database when needed. Now I am learning full-stack web development, and I recently watched a tutorial where the author built an Express API with GET, POST, and DELETE functionality for a simple item in the database.
Coming from a background where I just directly accessed the database, I am not sure why an API is necessary in this case. Is it so I wouldn't have to rewrite the queries every time? This is a very simple project, so he's definitely not making a third-party API for other developers to use. Am I misunderstanding what an API does exactly?
It was really simple: there was one collection in a MongoDB database, and he was using Postman to read and write to and from the database to check that it works.
An API is a standard way for your front-end (web/mobile) to store and fetch information for your application. Your front-end should never access the database directly. The purpose of the front-end is just to display the interface and do minimal processing; all the application logic should live in your backend (API server), which is exposed to your front-end via API calls (GET, POST, etc.). So to store an item in your database, you write the data-storing logic in your backend and expose an API endpoint which, when triggered, performs the storing operation. Your front-end calls that endpoint to trigger the storing process. This way, your storage/database logic is not exposed; only the API URL is. The front-end is meant to be exposed, whereas the backend/database should never be exposed or used directly from the front-end. A minimal sketch of this pattern follows.
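For illustration, a minimal endpoint with Express and the official MongoDB driver (the items collection and the validation rule are my own assumptions, not from the tutorial):

```typescript
// The front-end only ever sees POST /items; the database, validation, and
// connection details all stay behind the API boundary.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());
const client = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");

app.post("/items", async (req, res) => {
  // Application logic lives here, not in the client: validate before storing.
  const name = String(req.body.name ?? "").trim();
  if (!name) return res.status(400).json({ error: "name is required" });

  const result = await client.db("shop").collection("items").insertOne({ name });
  res.status(201).json({ id: result.insertedId });
});

client.connect().then(() => app.listen(3000));
```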
Maybe for you, an API is not necessary. But there are many use cases for an API.
For example:
You don't have to rewrite business logic for every platform (iOS, Android, web, whatever).
Your app will be lightweight, since some computation is offloaded to the server.
Your app can be reverse engineered to extract secret information (or maybe your secret algorithm?), so such things are safer behind an API.
What if you need to store something in a filesystem that you want to share with others?
Also a good read: Why we should use REST?
In your case, you are using a pre-written SDK which knows how to connect to Firestore, does caching and updates application data when needed, and provides a standard method of reading, writing and deleting data in Firestore (with associated documentation and example data from Google).
Therefore, using an API (as described for the MongoDB example) is not required, and is undesirable.
There are some cases where you might want to deny direct read or write access to a Firestore collection or document. In that case, you can write a Cloud Function which your app calls with parameters; it receives the data that you want to write and does some checking or manipulation beyond the capabilities of Cloud Firestore rules (although those can get pretty sophisticated); a sketch follows. See https://firebase.google.com/docs/firestore/security/get-started
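A hedged sketch of such a callable Cloud Function, using the firebase-functions v1 onCall API (the reviews collection and the validation rules are invented for illustration):

```typescript
// Validates a write before it reaches Firestore, enforcing checks beyond
// what security rules alone can express.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const submitReview = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }
  const rating = Number(data.rating);
  if (!Number.isInteger(rating) || rating < 1 || rating > 5) {
    throw new functions.https.HttpsError("invalid-argument", "Bad rating.");
  }
  await admin.firestore().collection("reviews").add({
    uid: context.auth.uid,
    rating,
    text: String(data.text ?? "").slice(0, 500), // sanitize/trim input
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
  });
  return { ok: true };
});
```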
Todd (in the video contained in this link) does a few good videos on this subject.
However, this is not really working in the same way as the API you mentioned in your question.
So in the case of using Firestore, you should use the SDK and not re-invent the wheel by creating your own API.
If you want to share photos, for example, you can also store them in Firebase Storage and then provide a URL for other devices to access them without your app being installed.
If you want to write something to Firestore which is then sent to all other users, you can use listeners in each app, and the data will be sent to the apps after it arrives at Firestore.
https://firebase.google.com/docs/firestore/query-data/listen gives an overview of this, and a minimal listener is sketched below.
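A minimal sketch of such a listener with the Firebase web v9 modular SDK (the messages collection name is illustrative):

```typescript
import { initializeApp } from "firebase/app";
import { getFirestore, collection, onSnapshot } from "firebase/firestore";

const app = initializeApp({ projectId: "demo-project" }); // your config here
const db = getFirestore(app);

// Fires immediately with the current contents, then again on every change
// that reaches Firestore from any client.
const unsubscribe = onSnapshot(collection(db, "messages"), (snapshot) => {
  snapshot.docChanges().forEach((change) => {
    console.log(change.type, change.doc.id, change.doc.data());
  });
});

// Call unsubscribe() when the view goes away to stop listening.
```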
One thing to always look at with Firebase is the cost of doing anything. Cloud Functions cost more than a read of a Firestore document.
The following page gives an overview of pricing for the different capabilities within the Firebase suite.
https://firebase.google.com/pricing
Another important factor is coupling. To add to #Dijkstra's answer, an API provides a way to decouple the logic from each other, thus allowing for more application reliability, maintainability, fault tolerance and, if required, scalability.
So there is no right or wrong here; the comparison of API vs. DB call is in itself not quite justified, because fetching the data from the database is the ultimate aim whether you use a REST API or query the database directly.
The means to achieve the same end can differ based on specific requirements. For example, consider fetching water from a well.
You can always climb down the well and fetch a bucket of water if you need one bucket per day and you are the only user.
But if there are many users, you would want to install a pulley and wheel that people use to pour fetched water into their buckets. Yet again, this depends on whether there are around 100 users per day or more, as this will not work for much beyond 100 users.
If an entire community of, say, 1000 users is going to need the water, you would go with a more complex solution of installing a motorized water pump to pump out the water and supply it to the users' homes via a pipeline. This solution has many benefits, like fast supply, ease of use, filtered water, scheduling, etc., but the cost and effort to achieve it are higher as well.
All in all, it comes down to the cost-vs-benefit ratio, which you and only you can chart out for different solutions vs. your particular problem, as you are the best judge of scale and future user flow.
While doing that, you can ask the following questions about a solution to help decide:
Does the solution satisfy the primary requirement of the problem?
How much time is it going to take to build?
For the time we spend building a solution, is it going to be working at 75% or more of its capacity?
If not, is there a simpler solution I can use to satisfy the problem and scale as the requirement increases?
HTH.

Integrating ngrx/Store into an existing Angular project

I have an Angular 7 project that has lots of components that talk to an API and update data from it.
The data is constantly refreshed using setTimeout, so it's getting very busy with all the components refreshing data from the API.
I am therefore thinking of adding ngrx/Store to the project.
Is ngrx/Store a solution for this kind of issue or should I look for other solutions?
To make it easy for you: I would say yes, because NgRx provides an API to break the application into smaller pieces.
If you have multiple features, you will have one main store, and for every feature a feature store connected to the main store, so you can easily manage everything with feature-level separation of concerns.
Another thing is that NgRx provides the Effects middleware, which lets you work with side effects like API calls, so you will have completely separate code for working with the API.
I would recommend you read their small demo here; for every feature they create effects, actions, reducers, selectors and services. A sketch of the pattern follows.
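To make that concrete, here is a hedged sketch of a feature slice that centralizes the polling your components currently do with setTimeout (the action, reducer and endpoint names are invented for illustration; assumes the NgRx 8+ creator APIs):

```typescript
// One polling loop feeds the store; components just select from it instead
// of each running their own setTimeout refresh.
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { createAction, createReducer, on, props } from "@ngrx/store";
import { createEffect } from "@ngrx/effects";
import { timer } from "rxjs";
import { switchMap, map } from "rxjs/operators";

export const loadItemsSuccess = createAction(
  "[Items] Load Success",
  props<{ items: unknown[] }>()
);

export const itemsReducer = createReducer(
  [] as unknown[],
  on(loadItemsSuccess, (_state, { items }) => items)
);

@Injectable()
export class ItemsEffects {
  // Poll the API every 30s and dispatch the result into the store.
  poll$ = createEffect(() =>
    timer(0, 30_000).pipe(
      switchMap(() => this.http.get<unknown[]>("/api/items")),
      map((items) => loadItemsSuccess({ items }))
    )
  );

  constructor(private http: HttpClient) {}
}
```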

Using Node.js along with old Java Web application (Spring MVC)

I have an existing web application. It uses Spring MVC on Tomcat and MySQL at the back end, with transactions (multi-table updates). I am also using Spring Security for role-based authorization. This is a very typical website without any real-time notifications, chat, etc.
Now my client wants to add real-time notifications like Facebook's, a chat module, etc. Typically, some action will be taken on the front end, and all or specific logged-in users need to get notified. After receiving a notification, I need to update some <div> content. The frequency would be high. (Currently the user needs to refresh the browser.)
I completed POCs using Node.js/Express, and it looks like it's easy to accomplish these two things with Node.js.
I have 3 options:
Move the front end to Node.js and maybe AngularJS. Manage form validations / security through Node.js/AngularJS, but all database calls are still handled by my old website (in a RESTful manner), as I can reuse my existing stuff. Most of the method calls return a Tiles view, but we can easily convert them to return JSON objects.
Keep the existing website as it is and plug Node.js into it just for real-time notifications / chat, etc.
Scrap the existing website and redo everything, including security and transactions, using Node.js. There are many posts saying that since Node.js is kind of new, it is not preferable for enterprise applications; this might also be too much.
Approach 2 would be my preferred approach, as we have expertise in Spring and Node would be completely new for us, but I don't know whether it's a practical and doable approach, or what kind of issues I might face. I am also not sure how to integrate Spring MVC running on Tomcat with Node.js.
Could anybody please help me decide which of the three is the best way to go? Is there any other approach that might be easier?
You already have an existing Spring MVC codebase and you can reuse it. However, you can go ahead with AngularJS as your front-end technology. Your front-end AngularJS code can talk to your existing Spring MVC via a REST API, and to Node.js for the real-time features which you plan to develop.
You need to use Socket.IO within Node.js, which will provide the features you are looking for (see the sketch below).
One major problem you might face is related to sessions when talking to two different backend stacks.
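To illustrate approach 2, here is a hedged sketch of a small Node.js sidecar that the Spring application POSTs events to and that pushes them to browsers over Socket.IO (the /notify endpoint, the room naming and the shared-secret check are my own assumptions):

```typescript
import express from "express";
import { createServer } from "http";
import { Server } from "socket.io";

const app = express();
app.use(express.json());
const httpServer = createServer(app);
const io = new Server(httpServer, { cors: { origin: "*" } });

// Spring MVC calls this after committing a transaction:
//   POST /notify { "userIds": ["42"], "payload": { ... } }
app.post("/notify", (req, res) => {
  if (req.get("x-api-key") !== process.env.NOTIFY_KEY) return res.sendStatus(403);
  const { userIds, payload } = req.body;
  for (const id of userIds) io.to(`user:${id}`).emit("notification", payload);
  res.sendStatus(204);
});

// Browsers join a per-user room after connecting (token check omitted here;
// in practice, validate the session shared with the Spring app).
io.on("connection", (socket) => {
  const userId = String(socket.handshake.auth.userId ?? "");
  if (userId) socket.join(`user:${userId}`);
});

httpServer.listen(4000);
```

On the browser side, the page subscribes to the "notification" event and updates the relevant <div> content when it fires.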

Realtime multiplayer game along side REST api

I'm a Rails developer who has just migrated to Node, and I've decided to write an Angular application backed by a Postgres/Express.js REST API. I use the API primarily for CRUD operations thus far, but I want to start a real-time game instance when two players visit a certain page (challenge each other). I'm thinking of using Socket.IO to accomplish the real-time functionality.
The game is similar to Pokémon on the Game Boy, in which two players take turns performing certain actions until one of them wins.
I have the following questions:
Should I have a separate server to handle the game using Socket.IO, or can I use the same one my API operates on?
Should I use a service like Pusher or can I create the architecture myself?
How would I go about making sure no data is lost, if say, a player disconnects during a game?
At what point (number of concurrent connections / requests per second) would I run into performance issues? 100, 1,000, 10,000?
Thanks
If the realtime logic is closely related to the CRUD stuff (i.e. realtime events are a direct result of writes to the API), and you expect somewhat equal usage of both aspects of the system, then I'd put both on the same server.
I highly recommend using a realtime push service if possible (disclaimer: I work for Fanout.io). It'll be simpler and probably less expensive too.
The key to making sure data is not lost is to persist it on the server before sending. Don't depend on the realtime layer for persistence (that's the biggest mistake you can make). When the client reconnects, it can request any data it may have missed via the normal API. So just get your CRUD stuff correct and then layer realtime eventing on top; a sketch of this pattern follows. You can create a very network-resilient service this way.
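A hedged sketch of "persist first, then push" with Express, node-postgres and Socket.IO (the moves table with a serial seq column and the routes are assumptions for illustration):

```typescript
import express from "express";
import { createServer } from "http";
import { Pool } from "pg";
import { Server } from "socket.io";

const pool = new Pool(); // reads PG* env vars
const app = express();
app.use(express.json());
const httpServer = createServer(app);
const io = new Server(httpServer);

app.post("/games/:id/moves", async (req, res) => {
  // 1. Persist: the database is the source of truth, not the socket layer.
  const { rows } = await pool.query(
    "INSERT INTO moves (game_id, player, action) VALUES ($1, $2, $3) RETURNING seq",
    [req.params.id, req.body.player, req.body.action]
  );
  // 2. Then push: realtime is just an optimization over polling the API.
  io.to(`game:${req.params.id}`).emit("move", { seq: rows[0].seq, ...req.body });
  res.status(201).json({ seq: rows[0].seq });
});

// Reconnecting clients backfill anything they missed by sequence number.
app.get("/games/:id/moves", async (req, res) => {
  const { rows } = await pool.query(
    "SELECT seq, player, action FROM moves WHERE game_id = $1 AND seq > $2 ORDER BY seq",
    [req.params.id, Number(req.query.after ?? 0)]
  );
  res.json(rows);
});

httpServer.listen(3000);
```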
You should be able to get to a few hundred concurrent connections without much thought. Going beyond will take architecture planning. Of course, if you delegate to a push service then you don't have to worry about this, at least for the realtime part.

Single page app + node.js backend (REST) + CMS - best concepts/practices

We are going to build a big social web app. We have to implement two big modules:
FrontEnd - a single-page app (Backbone.js)
CMS - a system to manage the contents of the FrontEnd (daily content, sponsors, banners, links, special offers, media uploads, etc.)
The FrontEnd will use a Node.js-powered REST API which will use a DB in the cloud (PG or Mongo - we haven't decided yet).
My question is: should the CMS also use the same REST API as the FrontEnd? Or should we make a separate app (not necessarily Node.js) for the CMS that would talk to the DB in the cloud directly? My question arises because on a previous project we had this issue:
A single REST API for the FrontEnd and CMS.
When we wanted new functionality in the CMS, we had to implement it in the REST API - and then we had to restart the whole app (REST API), which was problematic in production...
So:
Implement two REST APIs - one for the FrontEnd and one for the BackEnd?
Implement one REST API for the FrontEnd and implement the CMS as a separate app talking directly to the database?
How do you do it?
Our goal is to implement a super-fast FrontEnd and a big/heavy CMS (it is going to be bigger than the FrontEnd). So we are thinking of completely separating the CMS module from the FrontEnd module. Any eventual need for communication between modules would be implemented through Redis pub/sub, for example - what do you think?
Software architecture decisions are always very contextual - the people most qualified to make the call are you and your team, since you know way more than we do. That being said, based on the info you've shared, here are some things to consider:
Content management as a problem space is pretty mature. Unless part of your revenue model involves innovations in how you handle content management, you would be unwise to build your own CMS. There are fantastic CMSes, both open source and commercial, ranging in price from hundreds of dollars to hundreds of thousands. I cannot caution you strongly enough against the common developer fallacy of discounting the value of our own time. Even if you spend an entire engineer's salary's worth on a CMS, you'll almost certainly come out ahead.
An architecture that uses a CMS should reflect the reality of #1: CMSes are mature and stable. You want a strong, well-defined interface boundary between the parts of your system that are unique to you and specific to your revenue model, and the parts that are interchangeable with COTS (commercial off-the-shelf) components - even if you do end up building the CMS yourself (which, again, I strongly caution against). If you design something as bespoke when it's not (or vice versa), you'll run into impedance-mismatch problems that are very hard to get out of and that create friction for new feature delivery across the entire system.
