Local NoSQL database for desktop application - node.js

Is there a NoSQL database solution for desktop applications, similar to SQLite, where the database is a file on the user's machine? This database would be called by a Node.js application that runs on the desktop.

I see this is an old question, but a newer option for you would be AceBase, which is a fast, low-memory, transactional, index- and query-enabled NoSQL database engine and server for Node.js and the browser. It's definitely a good NoSQL alternative to SQLite, and very easy to use:
const { AceBase } = require('acebase');
const db = new AceBase('mydb');

// Add question to database:
const questionRef = await db.ref('stackoverflow/questions').push({
    title: 'Local NoSQL database for desktop application',
    askedBy: 'tt9',
    date: new Date(),
    question: 'Is there a NoSQL database solution for desktop applications similar to Sqlite where the database is a file on the user\'s machine? ..'
});
// questionRef is now a reference to the saved database path,
// eg: "stackoverflow/questions/ky9v13mr00001s7b829tmwk1"

// Add my answer to it:
const answerRef = await questionRef.child('answers').push({
    text: 'Use AceBase!'
});
// answerRef is now a reference to the saved answer in the database,
// eg: "stackoverflow/questions/ky9v13mr00001s7b829tmwk1/answers/ky9v5atd0000eo7btxid7uic"

// Load the question (and all answers) from the database:
const questionSnapshot = await questionRef.get();

// A snapshot contains the value and relevant metadata, such as the used reference:
console.log(`Got question from path "${questionSnapshot.ref.path}":`, questionSnapshot.val());

// We can also monitor data changes in realtime.
// To monitor new answers being added to the question:
questionRef.child('answers').on('child_added').subscribe(newAnswerSnapshot => {
    console.log(`A new answer was added:`, newAnswerSnapshot.val());
});

// Monitor any answer's number of upvotes:
answerRef.child('upvotes').on('value').subscribe(snapshot => {
    const prevValue = snapshot.previous();
    const newValue = snapshot.val();
    console.log(`The number of upvotes changed from ${prevValue} to ${newValue}`);
});

// Updating my answer text:
await answerRef.update({ text: 'I recommend AceBase!' });

// Or, using .set on the text itself:
await answerRef.child('text').set('I can really recommend AceBase');

// Adding an upvote to my answer using a transaction:
await answerRef.child('upvotes').transaction(snapshot => {
    let upvotes = snapshot.val();
    return upvotes + 1; // Return the new value to store
});

// Query all given answers, sorted by upvotes:
let querySnapshots = await questionRef.child('answers')
    .query()
    .sort('upvotes', false) // descending order, most upvotes first
    .get();

// Limit the query results to the top 10 with "AceBase" in their answers:
querySnapshots = await questionRef.child('answers')
    .query()
    .filter('text', 'like', '*AceBase*')
    .take(10)
    .sort('upvotes', false) // descending order, most upvotes first
    .get();

// We can also load the question in memory and make it "live":
// the in-memory value will automatically be updated if the database value changes, and
// all changes to the in-memory object will automatically update the database:
const questionProxy = await questionRef.proxy();
const liveQuestion = questionProxy.value;

// Changing a property updates the database automatically:
liveQuestion.tags = ['node.js', 'database', 'nosql'];
// ... db value of tags is updated in the background ...

// And changes to the database will update the liveQuestion object:
let now = new Date();
await questionRef.update({ edited: now });

// In the next tick, the live proxy value will have been updated:
process.nextTick(() => {
    liveQuestion.edited === now; // true
});
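Since the queries above sort on upvotes, it may also be worth creating an index. A minimal sketch, assuming the indexes.create(path, key) API and wildcard index paths described in the AceBase docs (check there for the exact signature and options):

// Hypothetical one-time setup: index the 'upvotes' key of all answers,
// so the sorted queries above can be executed quickly:
await db.indexes.create('stackoverflow/questions/*/answers', 'upvotes');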
I hope this is of help to anyone reading this; AceBase is a fairly new kid on the block that is starting to make waves!
Note that AceBase can also run in the browser, and as a remote database server with full authentication and authorization options. It can synchronize with the server and other clients in realtime, and upon reconnect after having been offline.
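Connecting to such a remote server would look roughly like this; this is a sketch based on the acebase-client package, with the host, port and credentials as placeholders:

const { AceBaseClient } = require('acebase-client');

// Placeholder connection details for a locally running AceBase server:
const db = new AceBaseClient({ host: 'localhost', port: 5757, dbname: 'mydb', https: false });
await db.ready(); // wait until connected
await db.auth.signIn('user', 'password'); // authenticate before reading/writing
// From here on, the same ref/query API shown above applies.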
For more info and documentation, check out AceBase on GitHub.
If you want to quickly try the above examples, you can copy/paste the code into the editor at RunKit: https://npm.runkit.com/acebase

I use a local MongoDB instance. It is super easy to set up. Here is an easy how-to guide on setting up MongoDB.
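Once the local instance is running, connecting from Node.js with the official mongodb driver takes just a few lines (the database and collection names below are only examples):

const { MongoClient } = require('mongodb');

async function main() {
    // Connect to the local MongoDB instance on its default port:
    const client = new MongoClient('mongodb://localhost:27017');
    await client.connect();
    const db = client.db('mydb');

    // Insert a document and read it back:
    await db.collection('questions').insertOne({ title: 'Local NoSQL database', date: new Date() });
    const docs = await db.collection('questions').find({}).toArray();
    console.log(docs);

    await client.close();
}

main().catch(console.error);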

You can also try Couchbase. There is an example of building a desktop app with GitHub Electron and Couchbase:
http://blog.couchbase.com/build-a-desktop-app-with-github-electron-and-couchbase

Related

Google Cloud Functions Firestore Limitations

I have written a function which gets a QuerySnapshot of all documents in Firestore that changed in the past 24 hours. I loop through this QuerySnapshot to get the relevant information, which I want to save into maps that are unique for every user. Every user generates on average 10 documents a day, so every map gets written about 10 times. Now I'm wondering whether the whole thing is scalable, or whether it will hit the 500-writes-per-transaction limit in Firebase as more users use the app.
The limitation I'm speaking about is documented in the Google documentation.
Furthermore, I'm pretty sure that my code is really slow, so I'm thankful for every optimization.
exports.setAnalyseData = functions.pubsub
    .schedule('every 24 hours')
    .onRun(async (context) => {
        const date = new Date().toISOString();
        const convertedDate = date.split('T');

        // Get documents (that could be way more than 500)
        const querySnapshot = await admin.firestore()
            .collectionGroup('exercises')
            .where('lastModified', '>=', `${convertedDate}`)
            .get();

        // Iterate through documents
        querySnapshot.forEach(async (doc) => {
            // some calculations
            // get document to store the calculated data
            const oldRefPath = doc.ref.path.split('/trainings/');
            const newRefPath = `${oldRefPath[0]}/exercises/`;
            const document = await getDocumentSnapshotToSave(newRefPath, doc.data().exercise);
            document.forEach(async (doc) => {
                // check if value exists
                const getDocument = await admin.firestore().doc(`${doc.ref.path}`).collection('AnalyseData').doc(`${year}`).get();
                if (getDocument && getDocument.exists) {
                    await document.update({
                        // map filled with data which gets added to the existing map
                    })
                } else {
                    await document.set({
                        // set document if it is not existing
                    }, {
                        merge: true
                    });
                    await document.update({
                        // update document after set
                    })
                }
            })
        })
    })
The code you have in your question does not use a transaction on Firestore, so it is not tied to the limit you quote/link.
I'd still recommend putting a limit on your query though, and processing the documents in reasonable batches (a couple of hundred, say) so that you don't put an unpredictable memory load on your code.
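A rough sketch of that batching, using a query cursor; the cutoff value and batch size are placeholders, and the per-document work is whatever your loop already does:

const BATCH_SIZE = 200; // tune to your memory budget
// 'cutoff' is a placeholder for the computed date/timestamp boundary:
const baseQuery = admin.firestore()
    .collectionGroup('exercises')
    .where('lastModified', '>=', cutoff)
    .orderBy('lastModified')
    .limit(BATCH_SIZE);

let snapshot = await baseQuery.get();
while (!snapshot.empty) {
    for (const doc of snapshot.docs) {
        // ...process doc as in the question...
    }
    // Continue after the last document of this batch:
    const last = snapshot.docs[snapshot.docs.length - 1];
    snapshot = await baseQuery.startAfter(last).get();
}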

NodeJS - Which layer should I roll back the transaction with multiple inserts?

I'm using a Controller Layer / Service Layer / Repository Layer architecture in my API, with a Postgres database (I'm using node-postgres).
In a given service, before inserting certain information A, I need to insert other information in other tables in the database. However, if there is a problem with one of the inserts, I would like to roll back the transaction. In node-postgres, rollbacks are done as follows:
const { Pool } = require('pg')
const pool = new Pool()

;(async () => {
    // note: we don't try/catch this because if connecting throws an exception
    // we don't need to dispose of the client (it will be undefined)
    const client = await pool.connect()
    try {
        await client.query('BEGIN')
        const queryText = 'INSERT INTO users(name) VALUES($1) RETURNING id'
        const res = await client.query(queryText, ['brianc'])
        const insertPhotoText = 'INSERT INTO photos(user_id, photo_url) VALUES ($1, $2)'
        const insertPhotoValues = [res.rows[0].id, 's3.bucket.foo']
        await client.query(insertPhotoText, insertPhotoValues)
        await client.query('COMMIT')
    } catch (e) {
        await client.query('ROLLBACK')
        throw e
    } finally {
        client.release()
    }
})().catch(e => console.error(e.stack))
The database connection is obtained in the Repository Layer. However, the rollback situation will only arise in the Service Layer. How do I handle this, since for architectural reasons I can't access the database connection directly in the Service Layer? Is there a problem with my architecture?
The easiest way to accomplish this is to put all the related inserts into a single method in your repository layer. This is generally "OK", because together they fundamentally form a single transaction.
If you need to support distributed transactions, you are better off using a unit of work pattern to implement holding onto the various transactions, and rolling back on the whole unit of work.
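A minimal sketch of the single-method approach with node-postgres: the repository exposes one method that runs all the related inserts inside one transaction, and the service layer just calls it and handles any error (table and column names follow the example above):

// Repository layer:
async function createUserWithPhoto(pool, name, photoUrl) {
    const client = await pool.connect();
    try {
        await client.query('BEGIN');
        const res = await client.query('INSERT INTO users(name) VALUES($1) RETURNING id', [name]);
        await client.query('INSERT INTO photos(user_id, photo_url) VALUES($1, $2)', [res.rows[0].id, photoUrl]);
        await client.query('COMMIT');
        return res.rows[0].id;
    } catch (e) {
        await client.query('ROLLBACK'); // any failed insert undoes them all
        throw e; // let the service layer decide how to handle the failure
    } finally {
        client.release();
    }
}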

Firebase Firestore not returning documents

I was attempting to fetch all documents from a collection in a Node.js environment. The documentation advises the following:
import * as admin from "firebase-admin";

const db = admin.firestore();
const citiesRef = db.collection('cities');
const snapshot = await citiesRef.get();
console.log(snapshot.size);
snapshot.forEach(doc => {
    console.log(doc.id, '=>', doc.data());
});
I have 20 documents in the 'cities' collection. However, the logging statement for the snapshot size comes back as 0.
Why is that?
Edit: I can write to the Firestore without issue. I can also get details of a single document, for example:
const city = await citiesRef.doc("city-name").get();
console.log(city.id);
will log city-name to the console.
Ensure that Firebase has been initialized, and verify that the collection name matches your database exactly; hidden spaces and letter case can break the link to Firestore. One way to test this is to create a new document within the collection to validate the path:
db.collection('cities').doc("TEST").set({test:"value"}).catch(err => console.log(err));
This should result in a document at the correct path, and you can also catch errors to see if there are any issues with Security Rules.
Update
To list all documents in a collection, you can do this with the Admin SDK through a server environment such as Cloud Functions, using the listDocuments() method, but this does not reduce the number of reads.
const documentReferences = await admin.firestore()
    .collection('someCollection')
    .listDocuments();

const documentIds = documentReferences.map(it => it.id);
To reduce reads, you will want to aggregate the data in the parent document or in a dedicated collection; this doubles the writes for any update, but cuts the read count down to a minimal amount.
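As a sketch of that aggregation idea (the document paths and field names here are only illustrative, and functions/admin are assumed to be imported and initialized as usual): a Cloud Function maintains a counter in one summary document every time a city is created, so clients read a single document instead of the whole collection:

exports.onCityCreated = functions.firestore
    .document('cities/{cityId}')
    .onCreate(() => {
        // One extra write per created city, but reading the total costs one read:
        return admin.firestore().doc('aggregates/cities').set({
            count: admin.firestore.FieldValue.increment(1)
        }, { merge: true });
    });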

How to update a quantity in another document when creating a new document in the firebase firestore collection?

When I create a new document in the note collection, I want to update the quantity in the info document. What am I doing wrong?
exports.addNote = functions.region('europe-west1').firestore
    .collection('users/{userId}/notes').onCreate((snap, context) => {
        const uid = admin.user.uid.toString();
        var t;
        db.collection('users').doc('{userId}').collection('info').doc('info').get((querySnapshot) => {
            querySnapshot.forEach((doc) => {
                t = doc.get("countMutable").toString();
            });
        });
        let data = {
            countMutable: t+1;
        };
        db.collection("users").doc(uid).collection("info").doc("info").update({countMutable: data.get("countMutable")});
    });
You have... a lot going on here. A few problems:
You can't trigger firestore functions on collections, you have to supply a document.
It isn't clear you're being consistent about how to treat the user id.
You aren't using promises properly (you need to chain them, and return them out of the function if you want them to execute properly).
I'm not clear about the relationship between the userId context parameter and the uid you are getting from the auth object. As far as I can tell, admin.user isn't actually part of the Admin SDK.
You risk multiple function calls doing an increment at the same time giving inconsistent results, since you aren't using a transaction or the increment operation. (Learn More Here)
The document won't be created if it doesn't already exist. Maybe this is ok?
In short, this all means you can do this a lot more simply.
This should do you though. I'm assuming that the uid you actually want is actually the one on the document that is triggering the update. If not, adjust as necessary.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

exports.addNote = functions.firestore.document('users/{userId}/notes/{noteId}').onCreate((snap, context) => {
    const uid = context.params.userId;
    return db.collection("users").doc(uid).collection("info").doc("info").set({
        countMutable: admin.firestore.FieldValue.increment(1)
    }, { merge: true });
});
If you don't want to create the info document if it doesn't exist, and instead you want to get an error, you can use update instead of set:
return db.collection("users").doc(uid).collection("info").doc("info").update({
    countMutable: admin.firestore.FieldValue.increment(1)
});
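And if the new value ever needs to be derived from the old one (rather than a plain increment), a transaction is the safe route. Roughly, reusing db and uid from the snippet above:

return db.runTransaction(async (tx) => {
    const ref = db.collection('users').doc(uid).collection('info').doc('info');
    const snap = await tx.get(ref);
    const current = snap.exists ? (snap.get('countMutable') || 0) : 0;
    // Reads and writes inside the transaction are retried atomically on contention:
    tx.set(ref, { countMutable: current + 1 }, { merge: true });
});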

One connection per user

I know that this question was asked already, but it seems that some more things have to be clarified. :)
The database is designed in such a way that each user has their own privileges to read documents, so the connection pool needs to hold connections for different users, which falls outside the usual connection-pool concept. For optimization and performance reasons I need to run a so-called "user preparation", which includes setting session variables, calculating and caching values, etc., and only then execute queries.
For now, I have two solutions. In the first solution, I first check that everything is prepared for the user, and then execute one or more queries. If it is not prepared, I run the "user preparation" first, and then execute the query or queries. With this solution I lose a lot of performance, because I have to do the check every time, so I decided on another solution.
The second solution uses a "database pool" where each pool is for one user. Only on the first connection, useCount === 0 (I do not use {direct: true}), do I call the "user preparation" (a stored procedure that sets some session variables and prepares the cache), and only then execute SQL queries.
I do the user preparation in the connect event within the initOptions parameter when initializing pg-promise. I used the pg-promise-demo, so I do not need to explain the rest of the code.
The code for the pgp initialization, with the database-pooling wrapper, looks like this:
import * as promise from "bluebird";
import pgPromise from "pg-promise";
import { IDatabase, IMain, IOptions } from "pg-promise";
import { IExtensions, ProductsRepository, UsersRepository, Session, getUserFromJWT } from "../db/repos";
import { dbConfig } from "../server/config";

// pg-promise initialization options:
export const initOptions: IOptions<IExtensions> = {
    promiseLib: promise,
    async connect(client: any, dc: any, useCount: number) {
        if (useCount === 0) {
            try {
                await client.query(pgp.as.format("select prepareUser($1)", [getUserFromJWT(session.JWT)]));
            } catch (error) {
                console.error(error);
            }
        }
    },
    extend(obj: IExtensions, dc: any) {
        obj.users = new UsersRepository(obj);
        obj.products = new ProductsRepository(obj);
    }
};

type DB = IDatabase<IExtensions> & IExtensions;
const pgp: IMain = pgPromise(initOptions);

class DBPool {
    private pool = new Map();
    public get = (ct: any): DB => {
        const checkConfig = { ...dbConfig, ...ct };
        const { host, port, database, user } = checkConfig;
        const dbKey = JSON.stringify({ host, port, database, user });
        let db: DB = this.pool.get(dbKey) as DB;
        if (!db) {
            // const pgp: IMain = pgPromise(initOptions);
            db = pgp(checkConfig) as DB;
            this.pool.set(dbKey, db);
        }
        return db;
    }
}

export const dbPool = new DBPool();

import diagnostics = require("./diagnostics");
diagnostics.init(initOptions);
And the web API looks like this:
GET("/api/getuser/:id", (req: Request) => {
const user = getUserFromJWT(session.JWT);
const db = dbPool.get({ user });
return db.users.findById(req.params.id);
});
I'm interested in whether this code instantiates pgp correctly, or whether it should be instantiated within the if block inside the get method (the commented-out line)?
I've seen that pg-promise uses a DatabasePool singleton, exported from dbPool.js, which is similar to my DBPool class, but with the purpose of issuing the "WARNING: Creating a duplicate database object for the same connection". Is it possible to use the DatabasePool singleton instead of my dbPool singleton?
It seems to me that dbContext (the second parameter in the pgp initialization) could solve my problem, but only if it could be forwarded as a function, not as a value or object. Am I wrong, or can dbContext be dynamic when accessing a database object?
I wonder if there is a third (better) solution, or any other suggestion?
If you are troubled by this warning:
WARNING: Creating a duplicate database object for the same connection
but your intent is to maintain a separate pool per user, you can indicate so by providing any unique parameter for the connection. For example, you can include a custom property with the user name:
const cn = {
    database: 'my-db',
    port: 12345,
    user: 'my-login-user',
    password: 'my-login-password',
    ...
    my_dynamic_user: 'john-doe'
};
This will be enough for the library to see that there is something unique in your connection, which doesn't match the other connections, and so it won't produce that warning.
This will work for connection strings as well.
Please note that what you are trying to achieve can only work well when the total number of connections well exceeds the number of users. For example, if you can use up to 100 connections with up to 10 users, then you can allocate 10 pools, each with up to 10 connections. Otherwise the scalability of your system will suffer, as the total number of connections is a very limited resource: you would typically never go beyond 100 connections, since running so many physical connections concurrently creates excessive load on the CPU. That's why sharing a single connection pool scales much better.
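Following that sizing advice, you can cap each per-user pool via the max property of the connection object, which pg-promise passes through to the underlying pg pool. A sketch, with the figures from the example above:

const cn = {
    database: 'my-db',
    user: 'my-login-user',
    password: 'my-login-password',
    max: 10, // at most 10 physical connections in this user's pool
    my_dynamic_user: 'john-doe' // unique per-user marker, as shown above
};
const db = pgp(cn);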
