ODM/ORM for PouchDB in JavaScript - node.js

I'm interested in using PouchDB for an offline-first mobile application (Cordova), and I'm wondering if there is any lightweight ORM/ODM for PouchDB written in JavaScript. I couldn't find one.
Is PouchDB the most common way to implement "offlinability" in JavaScript? Or are there better ways to do that (without going native)?

Another one is https://github.com/iyobo/pouchorm, claiming to be 'The definitive ORM for working with PouchDB. Native support for Typescript.'
At the time of writing it appears to be still actively developed.
I did some research and consider PouchDB and RxDB the two most feasible options if you want to stay open source and avoid larger server costs. There is also Couchbase, which provides libraries for many languages.

PouchORM (https://github.com/iyobo/pouchorm) is a solid option.
It's lightweight and yet super-functional.
It has all the features I would care about when using PouchDB, namely its simplicity, data validation, and its neat approach of separating data within a single DB using collections.
The collections are also indexed by their type, so this does not affect speed.
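As a rough sketch of what that looks like in practice (this follows the collection-per-type pattern from the PouchORM README; the exact names PouchCollection, upsert and find are assumptions to verify against the library's docs):

    // Hypothetical PouchORM-style sketch; verify class/method names
    // (PouchCollection, upsert, find) against the PouchORM README.
    const { PouchCollection } = require('pouchorm');

    // One collection per document "type"; all collections can share one database.
    class PersonCollection extends PouchCollection {}

    const persons = new PersonCollection('my_app_db');

    async function demo() {
      // Upsert a plain object; the collection tags it with its type.
      await persons.upsert({ name: 'Ada', age: 36 });

      // Queries are scoped to this collection's type, so documents of other
      // types in the same database are not scanned.
      const adults = await persons.find({ age: { $gte: 18 } });
      console.log(adults);
    }

    demo().catch(console.error);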


If I'm integrating two separate APIs, is a MERN stack appropriate?

I have two separate cloud-based APIs that I am working on integrating. Neither system talks directly to the other, so I am creating something in the middle to get them to communicate. I have had trouble finding examples or documentation on how exactly to do this; does anyone know of any resources that could help me out?
My plan going in was to use a MERN Stack, running on a local server to do GET and POST requests to both APIs, use some mapping and logic to transpose the data into the correct format and send it to the other software. I do not have a client per se (other than myself) on my end, so I really will be skipping the React part of MERN, at least that is what I'm thinking. I'll be using Mongo to keep track of both sets of data for redundancy. I also considered using a LAMP Stack but felt that MERN would be faster in handling the data, and Mongo is more flexible in handling different data formats. If there is another process or technology that could help me that I'm not thinking of, I would be grateful to hear about it.
Has anyone encountered something like this before? Thank you.
As with most architecture questions, there's no completely right or wrong answer here. You could certainly design a well-built system for this purpose with either stack, even more so since you mention that your front-end framework is not an important consideration. Instead, ask yourself questions like these:
Which stack do you have more experience with, and is this an appropriate time to learn a new set of technologies, or is it important to do the best work you're capable of right now (how important are time, cost, and quality in this case)?
Another generalization I'll stick my neck out for is a data-first approach: what sort of data are you dealing with from each cloud integration, and what kind of data do you need to support and/or create in order to make your system work? Mongo, being a NoSQL persistence layer, will let you change your data model and handle more varied data more quickly and easily than a SQL solution will. This is a double-edged sword, however, as the lack of validation and of a strongly constrained (typed) data model will make your application harder to work with and debug as it grows. In short: how big might this application grow?
If you have a handy and familiar way to manage the three different data models you're dealing with (cloud service 1, cloud service 2, and your app) via MySQL, then that's a compelling reason to use it. However, if your style is to start dumping data into your database and you're comfortable with a more iterative approach (which may require more, albeit shorter rounds of refactoring), then Mongo with MERN may be the preferable choice.
Finally, will others ever be working on this application? If so, which language would you rather be collaborating with them in: PHP or JavaScript?

Node-Neo4j-Embedded limits (call for new benchmark)

Is there anybody out there using Node-Neo4j-embedded in production mode?
What kind of limits are to be expected?
Because I think this module pushes the Cypher queries directly to the node-java module, which uses them directly with the Neo4j Java libs, I believe there shouldn't be any limits.
I feel it is dangerous to decide to use a lib that hasn't been maintained for about 2 years (see: GitHub) - and it shouldn't be in the Neo4j docs if it isn't maintained (see: the dead README.md link to the API docs).
It looks like there could be a new trend, driven by other vendor(s) of (in-memory) graph databases, to support Node.js like a first-class-citizen language. Maybe Neo4j should also review this and the unmaintained node module (like OrientDB did). The trend was initiated by a benchmark battle between ArangoDB and OrientDB.
I would love to see a Node-Neo4j-embedded benchmark in answer to ArangoDB's open-source benchmark - done by professional Neo4j people, like the OrientDB people did. But note: they weren't entirely fair (read the last lines about enabling query caches...).
Or it would have to be a new benchmark focused on the most first-class-citizen-like access possible from Node.js. There are three possible solutions to test. I am not experienced enough to do such a test in a way that would be really acceptable, but I would like to help by verifying it.
Please support this call for action with comments and (several types of) answers. Better (native-like) access and a wider range of supported in-memory and graph solutions would help the Node community very much. A new benchmark would force innovation.
A short note about ArangoDB's benchmark: they tested the REST APIs. But if you care about performance, you don't want to use a REST API - you want direct library access.
#editors: you are welcome
We (ArangoDB) think that the scalability of embedded databases is too limited. It also limits the number of databases you might want to compare. Users prefer to implement their solutions in their application stack of choice, so you would limit the number of people potentially interested in your comparison.
The better way of doing this is to compare the officially supported interface of each database vendor into a client stack that is commonly supported amongst all players in the field. This is why we chose Node.js.
There is enough chatter on Stack Overflow about benchmarks and how to compare them, so if in doubt, start by creating a use case and implementing code for it, present your results in a reproducible way and ask for comments, instead of demanding that others do this for you.

Are relational databases a poor fit for Node.js?

Recently I've been playing around with Node.js a little bit. In my particular case I wound up using MongoDB, partly because it made sense for that project, which was very simple, and partly because Mongoose seemed to be an extremely simple way to get started with it.
I've noticed that there seems to be a degree of antipathy towards relational databases when using Node.js. They seem to be poorly supported compared to non-relational databases within the Node.js ecosystem, but I can't seem to find a concise reason for this.
So, my question is, is there a solid technical reason why relational databases are a poorer fit for working with Node.js than alternatives such as MongoDB?
EDIT: Just want to clarify a few things:
I'm specifically not looking for details relating to a specific application I'm building
Nor am I looking for non-technical reasons (for example, I'm not after answers like "Node and MongoDB are both new so developers use them together")
What I am looking for is entirely technical reasons, ONLY. For instance, if there were a technical reason why relational databases performed unusually poorly when used with Node.js, then that would be the kind of thing I'm looking for (note that from the answers so far it doesn't appear that is the case)
No, there isn't a technical reason. It's mostly just opinion and using NoSQL with Node.js is currently a popular choice.
Granted, Node's ecosystem is largely community-driven. Everything beyond Node's core API requires community involvement. And, certainly, people will be more likely to support what aligns with their personal preferences.
But, many still use and support relational databases with Node.js. Some notable projects include (a short example follows the list):
mysql
pg
sequelize
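For instance, a query against PostgreSQL with the pg driver is just another async call (a minimal sketch; the connection string is a placeholder):

    // Minimal sketch using the 'pg' driver; the connection string is a placeholder.
    const { Pool } = require('pg');

    const pool = new Pool({ connectionString: 'postgres://user:pass@localhost/mydb' });

    async function findUser(id) {
      // Parameterized query; the driver returns a promise, so nothing blocks the event loop.
      const { rows } = await pool.query('SELECT id, name FROM users WHERE id = $1', [id]);
      return rows[0];
    }

    findUser(1).then(console.log).catch(console.error);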
I love Node.js, but with Node it actually makes more sense to use an RDBMS rather than a non-relational DB. With a NoSQL/non-relational solution you often need to do manual joins in your Node.js code, and sometimes you have to work without transactions, the commit/rollback feature that RDBMSs provide. Here are some potential problems with using non-relational DBs with Node.js servers:
(a) The joins are slower and responses are slower, because Node is not C/C++.
(b) The expensive joins block your event loop, because the join is happening in your Node.js code, not on some database server.
(c) Manually writing joins is often difficult and error-prone; your NoSQL queries could easily be incorrect, or your join code might be incorrect or suboptimal. Optimized joins have already been done by the masters of RDBMSs, and joins in RDBMSs are proven correct, mathematically in most cases.
(d) Some non-relational databases, like MongoDB, do not support transactions - in my team's case, that means we have to use an external distributed lock so that multiple queries can be grouped together into an atomic transaction. It would be somewhat easier if we could just use transactions and avoid application-level locks.
With a more powerful relational database system that can do optimized joins in C/C++ on the database server rather than in your Node.js code, you let your Node.js server do what it's best at.
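To make points (b) and (c) concrete, here is a rough sketch (collection and field names are made up for illustration) of the application-level join you end up writing when the store cannot join for you, versus the single server-side query a relational database would run:

    // Hypothetical example: joining orders to customers by hand in Node.js,
    // which is what you end up doing when the database cannot join for you.
    async function ordersWithCustomers(db) {
      const orders = await db.collection('orders').find({}).toArray();
      const customerIds = [...new Set(orders.map(o => o.customerId))];
      const customers = await db.collection('customers')
        .find({ _id: { $in: customerIds } })
        .toArray();

      // The "join" itself runs on the Node.js event loop, not on the DB server.
      const byId = new Map(customers.map(c => [c._id, c]));
      return orders.map(o => ({ ...o, customer: byId.get(o.customerId) }));
    }

    // The relational equivalent is a single query executed on the server:
    // SELECT o.*, c.* FROM orders o JOIN customers c ON c.id = o.customer_id;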
With that being said, I think it's pretty f*ing stupid that many major NoSQL vendors don't support joins (?). Complete de-normalization is only a dream as far as I can see. And the lack of transactions can be a bit weird: without transactions, only a single query is atomic; you cannot make multiple queries atomic without an application-level locking mechanism :/
Take-aways:
If you want non-relational persistence - why not simply de-normalize a relational database? There is nobody forcing you to use a traditional database in a relational manner.
If you use a relational DB with Node.js I recommend this ORM:
https://github.com/typeorm/typeorm
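A minimal sketch of what using it from plain JavaScript can look like (this uses TypeORM's EntitySchema/DataSource API; the entity, columns and connection settings here are illustrative and worth checking against the current TypeORM docs):

    // Illustrative TypeORM sketch; verify the EntitySchema/DataSource options against the docs.
    const { DataSource, EntitySchema } = require('typeorm');

    const User = new EntitySchema({
      name: 'User',
      columns: {
        id: { type: Number, primary: true, generated: true },
        name: { type: String },
      },
    });

    const dataSource = new DataSource({
      type: 'postgres',
      url: 'postgres://user:pass@localhost/mydb', // placeholder connection string
      entities: [User],
    });

    async function demo() {
      await dataSource.initialize();
      const repo = dataSource.getRepository('User');
      await repo.save({ name: 'Ada' });
      console.log(await repo.find());
    }

    demo().catch(console.error);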
As an aside, I prefer the term "non-relational" as opposed to "noSQL".
In my experience, Node tends to be popular with databases that have a stateless API; this fits very nicely with Node's async nature. Most relational databases use stateful connections for transactions, which reduces the primary advantages of async, non-blocking I/O.
Can you explain exactly what specific problems you are facing with your chosen database and node.js?
A few reasons why MongoDB could be more popular than relational databases:
MongoDB is essentially a JSON object store, so it translates very well for a JavaScript application; MongoDB's own functions are JavaScript functions. (A short driver sketch follows after these points.)
I am just guessing here, but since NoSQL databases are newer and have more enthusiastic programmers experimenting with them, you probably have more involvement in those NPM modules.
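To give a feel for that JSON-in/JSON-out fit (a small sketch with the official mongodb driver; the URL, database and collection names are placeholders):

    // Sketch with the official 'mongodb' driver; URL and names are placeholders.
    const { MongoClient } = require('mongodb');

    async function demo() {
      const client = new MongoClient('mongodb://localhost:27017');
      await client.connect();

      const people = client.db('myapp').collection('people');

      // The document you insert is just a plain JavaScript object...
      await people.insertOne({ name: 'Ada', languages: ['js', 'python'] });

      // ...and query results come back as plain JavaScript objects.
      console.log(await people.findOne({ name: 'Ada' }));

      await client.close();
    }

    demo().catch(console.error);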
Apart from this, Node.js technically is a perfect choice for any sort of database application. I have personally worked on a small Node.js/MySQL application and I didn't face any hurdles.
But back to my main point, we could talk about this all day, and that is not what this forum is for. If you have any specific issues in any code with Node.js and your database of choice, please ask those questions instead.
Edit: Strictly technical reasons, apart from the JSON compatibility on both sides: There are none.
For anyone wondering about the same question in 2021:
Node has nothing to do with the type of database you choose.
You can choose whichever database suits your requirements.
If you need to maintain a strict data structure, choose a relational DB; otherwise you can go for NoSQL.
There are non-blocking NPM packages for PostgreSQL, MySQL and other databases. These DB clients will not block the Node process while performing queries.
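For instance (a small sketch using mysql2's promise API; the connection settings are placeholders), a query is simply an awaited promise, and the event loop stays free while the database does its work:

    // Sketch using mysql2's promise API; connection settings are placeholders.
    const mysql = require('mysql2/promise');

    async function demo() {
      const conn = await mysql.createConnection({
        host: 'localhost', user: 'app', password: 'secret', database: 'mydb',
      });

      // The query is sent and awaited asynchronously; Node is free to handle
      // other events while MySQL executes it.
      const [rows] = await conn.query('SELECT id, name FROM users WHERE id = ?', [1]);
      console.log(rows);

      await conn.end();
    }

    demo().catch(console.error);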

CouchDB - share functions across views, across design documents, across databases

Ok, here's the thing.
I have a good JS background, had my share of JS in the past, and have lots of cool bare-bones tools I take with me from project to project that act like a library.
I'm trying to formulate work with CouchDB.
Now, after getting used to the luxury of cool tools that you wrote yourself and that simplify the language for you, I find it a little frustrating to write many things in a bare-bones manner.
I'm looking for a way to load into the database context a limited, highly efficient and generic set of tools that focus on the pure language and make working with it much more groovy (and gosh, no, I'm not talking about jQuery or any of the even more busty libraries out there).
If, on top of that, there were a way to add some of my own logic tools (BL model functions) to the execution context of the CouchDB JS engine, that would be a great and admirable power and would make CouchDB the new home for a JavaScripter like me.
Maybe I'm aiming too low.
I'd be satisfied with a way to register a set of extensions even for a specific database, and I don't mind doing it for every database separately. Or, worse, adding it to every design document, so that I can teach, for example, several views in the same design doc what a Person is and what a Worker is, and use their methods to retrieve data from them according to logic coded in a reusable manner.
Can anybody point me the way?
Whatever way you can point me - I'll be very verrry grateful.
If there are ways for all of these - then great.
Trust me to know the difference of what logic belongs to what layer...
You open my possibilities - I promise to use them :D
CouchDB now supports code sharing as CommonJS modules.
http://docs.couchbase.org/couchdb-release-1.1/index.html#couchdb-release-1.1-commonjs
http://caolanmcmahon.com/posts/commonjs_modules_in_couchdb
In this way, you can share your JavaScript modules between views, lists, and shows in the same design doc (server-side).
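As a small sketch (only the views.lib location is a CouchDB convention; the document and field names here are made up), a CommonJS module placed under a design document's views.lib can be require()'d from the map functions in the same design doc:

    {
      "_id": "_design/people",
      "views": {
        "lib": {
          "person": "exports.isAdult = function (doc) { return doc.type === 'person' && doc.age >= 18; };"
        },
        "adults": {
          "map": "function (doc) { var person = require('views/lib/person'); if (person.isAdult(doc)) emit(doc.name, null); }"
        },
        "adults_by_age": {
          "map": "function (doc) { var person = require('views/lib/person'); if (person.isAdult(doc)) emit(doc.age, doc.name); }"
        }
      }
    }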
Also, you can load these modules on the browser side with this library:
https://github.com/couchapp/couchapp/blob/master/couchapp/templates/vendor/couchapp/_attachments/jquery.couch.app.js
You also might want to look at Kanso:
http://kansojs.org/
It does a really good job of making your JavaScript work seamlessly between the server and the client.
You can find some helpful tools here: https://github.com/vivekpathak/casters
The running examples and test cases may particularly help you.

Which database out of CouchDB, MongoDB and Redis is good for starting out with Node.js?

I'm getting more into Node.js and am enjoying it. I'm moving more into web application development.
I have wrapped my head around Node.js and am currently using Backbone for the front end. I'm making a few applications that use Backbone to communicate with the server using a RESTful API. In Node.js, I will be using the Express framework.
I'm reaching a point where I need a simple database on the server. I'm used to PostgreSQL and MySQL with Django, but what I need here is just some simple data storage. I know about CouchDB, MongoDB and Redis, but I'm just not sure which one to use.
Is any one of them better suited for Node.js? Is any one of them better for beginners moving from relational databases? I just need some guidance on which to choose; I've come this far, but when it comes to these sorts of databases, I'm just not sure...
Is any one of them better suited for Node.js?
Better suited specifically for Node.js? Probably not, but each of them is better suited to certain scenarios, based on your application needs or use cases.
Redis is an advanced key-value store and probably the fastest of the three NoSQL solutions. Besides basic key manipulation it supports rich data structures such as lists, sets and hashes, as well as pub/sub functionality, which can be really handy, for example for statistics or other real-time madness. It does, however, lack any sort of query language.
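A tiny sketch of that data-structure flavour (using the node-redis client's v4+ API; the key names are made up):

    // Sketch with the node-redis client (v4+ API); key names are made up.
    const { createClient } = require('redis');

    async function demo() {
      const client = createClient(); // defaults to localhost:6379
      await client.connect();

      // Plain keys, hashes and sets map directly onto Redis data structures.
      await client.set('page:views', '0');
      await client.incr('page:views');
      await client.hSet('user:1', { name: 'Ada', role: 'admin' });
      await client.sAdd('online-users', 'user:1');

      console.log(await client.get('page:views'));  // "1"
      console.log(await client.hGetAll('user:1'));  // { name: 'Ada', role: 'admin' }

      await client.quit();
    }

    demo().catch(console.error);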
CouchDB is a document-oriented store which is very durable, offers MVCC, a REST interface, a great replication system and map-reduce querying. It can be used for a wide range of scenarios and can substitute for your RDBMS; however, if you are used to ad hoc SQL queries then you may have certain problems with its map-reduce views.
MongoDB is also a document-oriented store, like CouchDB, and it supports ad hoc querying in addition to map-reduce, which is probably one of the crucial reasons why people looking for an RDBMS substitute choose MongoDB over the other NoSQL solutions.
Is any one of them better for beginners, moving from relational databases?
Since you are coming from the RDBMS world and are probably used to SQL, I think you should go with MongoDB because, unlike Redis or CouchDB, it supports ad hoc queries and its querying mechanism is similar in spirit to SQL. However, there may be areas, depending on your application scenarios, where Redis or CouchDB is better suited to the job.
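To illustrate the "similar in spirit to SQL" point, here is roughly the same ad hoc query in both worlds (the table/collection and field names are made up for illustration):

    // Roughly equivalent ad hoc queries; collection and field names are made up.
    //
    // SQL:  SELECT name, age FROM users WHERE age >= 18 ORDER BY age DESC LIMIT 10;
    //
    // MongoDB, using the Node.js driver (db is an already-connected Db instance):
    async function topAdults(db) {
      return db.collection('users')
        .find({ age: { $gte: 18 } })            // WHERE age >= 18
        .project({ name: 1, age: 1, _id: 0 })   // SELECT name, age
        .sort({ age: -1 })                      // ORDER BY age DESC
        .limit(10)                              // LIMIT 10
        .toArray();
    }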

Resources