Does MongoDB Atlas provide offline support? - node.js

I am creating an app in Expo, using Node.js/Express as the backend and MongoDB Atlas as the database.
I was thinking that if a user is offline, the actions they perform while offline should automatically sync with the online data when they come back online.
Does MongoDB Atlas provide this feature? Or does any other MongoDB option provide it?

Check the paragraph on offline-first on the MongoDB website:
Realm Sync is built on the assumption that connectivity will drop. We call this mentality offline-first. After you make changes to the local realm on the client device, the Realm SDK automatically sends the changes to the server as soon as possible.
Also check the paragraph on conflict resolution:
MongoDB Realm's sync conflict resolution engine is deterministic. Changes received out-of-order eventually converge on the same state across the server and all clients. As such, Realm Sync is strongly eventually consistent.
In simple terms, Realm Sync's conflict resolution comes down to last write wins. Realm Sync also uses more sophisticated techniques like operational transform to handle, for example, insertions into lists.
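Realm Sync's actual engine is far more sophisticated, but the last-write-wins idea itself is easy to illustrate. A minimal sketch, assuming each document carries an `updatedAt` timestamp (all names here are illustrative, not Realm's API):

```javascript
// Minimal illustration of last-write-wins conflict resolution.
// Each write carries a timestamp; when two versions of the same
// document conflict, the one with the later timestamp wins.
// This is a simplified sketch, not Realm Sync's actual engine.
function mergeLastWriteWins(local, remote) {
  return remote.updatedAt > local.updatedAt ? remote : local;
}

const local = { _id: 'task1', title: 'Buy milk', updatedAt: 1000 };
const remote = { _id: 'task1', title: 'Buy oat milk', updatedAt: 2000 };

const merged = mergeLastWriteWins(local, remote);
console.log(merged.title); // 'Buy oat milk'
```

Because the comparison is deterministic, every client that sees both versions converges on the same result regardless of the order in which the changes arrive, which is what "strongly eventually consistent" means in practice.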

Related

design a sync database mechanism for clients in Node.js

how to serve a large database to the clients?
how to notify clients to update only the changes?
even better, how to provide this functionality with a sync mechanism?
scenario:
the scenario has some requirements I'll try to explain:
there is an offline database that devices need to obtain in order to work offline independently.
clients only have to sync themselves, via some replication mechanism like master-slave.
the master can write data to the DB, and slaves only have to sync and read data.
I have two bottlenecks here:
the database is about 60 MB, but it can grow much larger.
because of the multi-platform use case, clients' devices may run macOS, Windows, Android, or iOS.
at first I was using Google Firestore for this purpose, but our data is somewhat sensitive and we cannot use a migration strategy in the future. so I created a large SQLite DB for clients, which they can download manually. this is not right: even with small updates, our clients have to download the whole DB again.
is it possible to create a self-syncing mechanism where the backend notifies clients to fetch updates?
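One common pattern for avoiding full re-downloads is an append-only change log with sequence numbers: the master records every write with a monotonically increasing sequence number, and each client remembers the last sequence it applied and asks only for changes after that point (the notification itself can be a websocket ping or a poll). A minimal sketch of the idea, with all names illustrative:

```javascript
// Sketch of a pull-based incremental sync mechanism: the server keeps
// an append-only change log with monotonically increasing sequence
// numbers; each client remembers the last sequence it applied and
// fetches only the changes it is missing.
class ChangeLog {
  constructor() {
    this.seq = 0;
    this.changes = [];
  }
  // Master writes go through here.
  record(docId, data) {
    this.changes.push({ seq: ++this.seq, docId, data });
  }
  // Clients poll (or are notified via a push channel) and ask
  // only for what happened after their last applied sequence.
  since(lastSeq) {
    return this.changes.filter((c) => c.seq > lastSeq);
  }
}

const log = new ChangeLog();
log.record('user:1', { name: 'Ada' });
log.record('user:2', { name: 'Grace' });

let clientSeq = 0; // persisted on the client between syncs
const batch = log.since(clientSeq);
if (batch.length) clientSeq = batch[batch.length - 1].seq;
console.log(batch.length, clientSeq); // 2 2
```

This is essentially what CouchDB's `_changes` feed gives you out of the box, which is why Couch/PouchDB is often suggested for this class of problem.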

Do DynamoDB and Cloudant store data at edge locations?

Trying to decide between DynamoDB and CouchDB for my website. It's a static site (built with a static site generator) and I'm planning on using a JavaScript module to build a comment system.
I'm toying with using PouchDB and CouchDB so that synchronizing is easy. I'm also considering DynamoDB.
I have a performance question. From these databases, do any of them push data out to edge locations so that latency is reduced? Or is my Database essentially sitting on one virtual server somewhere?
From what I know, neither of these solutions utilises edge locations out of the box.
Since you're mentioning PouchDB, I assume you want to use a client-side database in your app?
If that's the case, you should keep in mind that, in order to sync, a client-side DB needs write access to your cloud DB. So it's not really suitable for a comment system, since any client could just delete other users' comments, edit them, etc.

Request for feedback: Couchdb setup with client replication (pouchdb) for multiple users/accounts

Before jumping into development, I'd like to get feedback on a change I'm thinking of making, moving from mongo to couch.
Basically I've got a webapp which is used to help organize users' activities (todo list, calendar, notes, journal). It currently uses MongoDB, but I'm thinking of moving it to Couch, mainly due to Couch's replication ability and client-side DB interaction (PouchDB). I have a similar homegrown setup in the browser using localStorage, backed by Mongo, but am looking for a more mature solution.
Due to how CouchDB differs from MongoDB, I'm thinking that each user should have their own Couch database, with their documents being each of my app's components. Basically I have to move everything up a level with Couch, due to local DB replication and due to security.
I have 3 questions.
1) I assume that Couch does not have document-level security/authentication, correct? (Hence me moving each user's assets to their own database; good idea?)
2) My plan is to have users log in to the website; my backend Node.js code then authenticates them and sends down an auth/session token. The JavaScript on the client uses its local PouchDB data to set itself up, and also sends the replication request directly to the CouchDB server (using the auth token it got from my server-side process). They should only have access to their own database, since I can do per-database auth access (correct?)
What do you think of that setup? It should work?
3) Regarding CouchDB service providers, why do they vary so much in their Couch version? E.g. HappyCouch 1.6.1, Iris 1.5, Cloudant 1.0.2? And I also hear about CouchDB 2.0 coming out soon... I'd like to use Cloudant, but 1.0.2 is so many versions behind a 1.6 or 1.5; if I'm not doing anything exotic, does it matter?
Bonus question :p Continuing from the last question, do you know of any services that host Node.js and have local instances of CouchDB available? I'd like to use my backend server code as a proxy, but not at the expense of another network hop.
Thank you very much for your feedback,
Paul
Due to how couchdb differs from mongodb, I'm thinking that each user should have their own couch db
This is a CouchDB best practice. Good choice.
I assume that couch does not have document level security/authentication, correct? (Hence me moving each user's assets to their own database, good idea?)
You are correct: https://github.com/nolanlawson/pouchdb-authentication
My plan is to have users log in to the website...
Yep. You can just pass the cookie headers straight through from Node.js to CouchDB, and it'll work fine. nano has some docs on how to do that: https://github.com/dscape/nano#using-cookie-authentication
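The cookie pass-through itself is just header copying: the proxy forwards the browser's `Cookie` header to CouchDB and relays CouchDB's `Set-Cookie` back. A minimal sketch of that logic, with plain objects standing in for real request/response headers (the helper name is hypothetical, not part of nano):

```javascript
// Sketch of the cookie pass-through idea: copy the browser's Cookie
// header onto the outgoing CouchDB request, and copy CouchDB's
// Set-Cookie header back onto the response to the browser.
// Plain objects stand in for real HTTP header maps here.
function forwardAuthCookies(clientHeaders, couchResponseHeaders) {
  const toCouch = {};
  if (clientHeaders.cookie) toCouch.cookie = clientHeaders.cookie;

  const toClient = {};
  if (couchResponseHeaders['set-cookie']) {
    toClient['set-cookie'] = couchResponseHeaders['set-cookie'];
  }
  return { toCouch, toClient };
}

const { toCouch, toClient } = forwardAuthCookies(
  { cookie: 'AuthSession=abc123' },
  { 'set-cookie': ['AuthSession=def456; Path=/; HttpOnly'] }
);
console.log(toCouch.cookie); // 'AuthSession=abc123'
```

In a real Express/Koa proxy you would do this per request; the key point is that CouchDB's `AuthSession` cookie never needs to be parsed, only relayed.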
Regarding couchdb service providers, why do they vary so much on their couch version
The Couch community is one big happy fragmented family. :)
I'd like to use cloudant, but 1.0.2 is so many versions back from a 1.6 or 1.5, if I'm not doing anything exotic, does it matter?
1.0.2 refers to when Cloudant forked CouchDB. They've added so many of their own features since then, that they're pretty much feature-equivalent by now.
The biggest difference between the various Couch implementations is in authentication. Everybody (Cloudant, CouchDB, Couchbase) does it differently.

MongoDB - how to make Replica set Step down truly seamless

The problem is: when your replica set is forced to step down while your application is running, all mainstream Mongo clients will throw at least one exception per connection. This happens because their database connections are hardwired to the physical server which used to be the primary and no longer accepts queries. So, while MongoDB architects might think that the step-down process does not create any downtime, in reality, if you handle connections according to their documentation, each step down will cause a full-blown crash for at least one user, and might even create a data integrity issue. I hope this can be avoided with a simple wrapper that captures certain specific Mongo exceptions and handles them by automatically re-connecting to the replica set and re-running the failed query. If you already have a solution for this, please share! I am particularly interested in a solution that works with any major Mongo driver for Node.js.
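The wrapper described above can be sketched as a generic retry helper: catch errors that look like transient step-down failures, back off briefly, and re-run the query. The error substrings and limits below are illustrative; match them to whatever your driver actually throws:

```javascript
// Sketch of a retry wrapper for transient replica set step-down
// errors. The error substrings and retry limits are illustrative,
// not an exhaustive list for any particular driver.
const TRANSIENT = ['ECONNRESET', 'sockets closed', 'not master', 'seed list'];

function isTransient(err) {
  return TRANSIENT.some((s) => String(err.message).includes(s));
}

async function withRetry(queryFn, { retries = 5, delayMs = 200 } = {}) {
  let lastErr;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await queryFn(); // succeeds on a healthy (re-elected) primary
    } catch (err) {
      if (!isTransient(err)) throw err; // real errors propagate immediately
      lastErr = err;
      // Linear backoff gives the replica set time to elect a new primary.
      await new Promise((r) => setTimeout(r, delayMs * (attempt + 1)));
    }
  }
  throw lastErr;
}
```

Usage would be wrapping each driver call, e.g. `withRetry(() => collection.findOne(query))`. Note that blindly retrying writes that are not idempotent can cause duplicates, so a production version needs care around inserts.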
You are correct -- this is the exact behavior I experienced with both mainstream ODMs as well as the official native MongoDB driver for Node.js.
Replica set step-downs would cause my outstanding queries to fail with "Could not locate any valid servers in initial seed list", "sockets closed", and "ECONNRESET" before additional queries would get buffered up even though bufferMaxEntries is correctly configured.
Therefore, I developed Monkster to provide seamless replica set step-down and overall high-availability for MongoDB clusters for Node.js developers using the popular Monk ODM.
Monkster is a Node.js package that provides high availability for Monk, the wise MongoDB API. It implements smart error handling and retry logic to handle temporary network connectivity issues and replica set step-downs seamlessly.
https://www.npmjs.com/package/monkster

Node Module for Neo4j

My app uses Node.js. I'm trying to connect Node.js and Neo4j together. Can someone tell me how to connect the two? My queries need to work with labels in Neo4j. Please let me know which module I should use in Node.js to achieve this. I have already spent a lot of time on this without luck.
Last I checked there are at least 4 popular and actively developed node.js modules (ordered by number of stars):
https://github.com/thingdom/node-neo4j (npm install neo4j)
https://github.com/bretcope/neo4j-js (npm install neo4j-js)
https://github.com/philippkueng/node-neo4j (npm install node-neo4j)
https://github.com/brikteknologier/seraph (npm install seraph)
They all support the Cypher endpoint, which was a requirement for my inclusion. One key feature that stands out from the list is that philippkueng/node-neo4j is the only one that has transactional API support. Another is the ability to ask for labels of nodes, and that is supported only by seraph and philippkueng/node-neo4j. (usually you can avoid needing to ask for labels of a node if you make your Cypher query ask for labels explicitly, which avoids a request back and forth)
On the other hand, it's really not hard to just implement a few HTTP requests, directly accessing the Cypher or Transactional Cypher endpoints, massaging the results as you see fit for your application.
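For the do-it-yourself route, the transactional Cypher endpoint accepts a JSON body of statements POSTed to `/db/data/transaction/commit`. A sketch of building that payload (the query itself is just an example; `$name` is the 3.x parameter syntax, older versions used `{name}`):

```javascript
// Sketch of talking to Neo4j's transactional Cypher HTTP endpoint
// without a driver: build the documented JSON body and POST it.
function cypherPayload(statement, parameters = {}) {
  return { statements: [{ statement, parameters }] };
}

const payload = cypherPayload(
  'MATCH (n:Person {name: $name}) RETURN n, labels(n)',
  { name: 'Alice' }
);

// POST this, JSON-encoded, to
//   http://localhost:7474/db/data/transaction/commit
// with Content-Type: application/json (plus auth if enabled).
console.log(JSON.stringify(payload));
```

Asking for `labels(n)` explicitly in the query, as above, is the trick mentioned earlier for avoiding an extra round trip to fetch a node's labels.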
Another cool new development I've seen recently was https://github.com/brian-gates/cypher-stream, which emits a stream of results from Cypher, enabling streaming JSON parsing, which is another performance-oriented feature lacking from the four listed above.
Edit: 03/2016 There is a new official JS driver for use with the new bolt protocol (binary). For new development this should definitely be considered. Bolt is planned for release in Neo4j 3.0. https://github.com/neo4j/neo4j-javascript-driver
Check out the koa-neo4j framework, it uses the official neo4j-driver under the hood.
One can write native Cypher (as .cyp files) in it on top of the latest stable neo4j (3.0.3 at the time of this writing) which, among other things, allows querying labels.
https://github.com/assister-ai/koa-neo4j
https://github.com/assister-ai/koa-neo4j-starter-kit
In a Neo4j-enabled application, conducting queries directly from the client side might not be the best choice:
The database is exposed to the client; unless some explicit security mechanism is in place, one can see the innards of the database via View page source
There is no one server to rule them all; queries are strings, scattered around different clients (web, mobile, etc.)
Third-party developers might not be familiar with Cypher
koa-neo4j addresses all of the above issues:
Stands as a middle layer between clients and database
Gives structure to your server's logic in form of a file-based project; finally a home for Cypher! All of the clients can then talk to an instance of this server
Converts Cypher files to REST routes, a cross-platform web standard that developers are familiar with; it does so on top of the widely-adopted koa server, ripe for further customization
Disclosure I was the original author of koa-neo4j
neode, a Neo4j OGM for Node.js
