I'm using a Controller/Service/Repository layered architecture in my API, with a Postgres database (via node-postgres).
In a given service, before inserting certain information A, I need to insert other information into other tables first. However, if any of the inserts fails, I would like to roll back the whole transaction. In node-postgres, rollbacks are done as follows:
const { Pool } = require('pg')
const pool = new Pool()

;(async () => {
  // note: we don't try/catch this because if connecting throws an exception
  // we don't need to dispose of the client (it will be undefined)
  const client = await pool.connect()
  try {
    await client.query('BEGIN')
    const queryText = 'INSERT INTO users(name) VALUES($1) RETURNING id'
    const res = await client.query(queryText, ['brianc'])
    const insertPhotoText = 'INSERT INTO photos(user_id, photo_url) VALUES ($1, $2)'
    const insertPhotoValues = [res.rows[0].id, 's3.bucket.foo']
    await client.query(insertPhotoText, insertPhotoValues)
    await client.query('COMMIT')
  } catch (e) {
    await client.query('ROLLBACK')
    throw e
  } finally {
    client.release()
  }
})().catch(e => console.error(e.stack))
The database connection is obtained in the Repository Layer, but the rollback situation only arises in the Service Layer. How do I solve this, given that for architectural reasons I can't access the database connection directly from the Service Layer? Is there a problem with my architecture?
The easiest way to accomplish this is to put all the related inserts into a single method in your repository layer. This is generally fine, because together they fundamentally form a single transaction.
If you need to support distributed transactions, you are better off using a unit-of-work pattern: the unit of work holds onto the various operations and rolls the whole unit back on failure.
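For illustration, here is a minimal sketch of such a single repository method using node-postgres, reusing the tables from the example above (the name createUserWithPhoto is just illustrative):

// userRepository.js — a sketch; the pool is created once and shared.
const { Pool } = require('pg')
const pool = new Pool()

async function createUserWithPhoto(name, photoUrl) {
  const client = await pool.connect()
  try {
    await client.query('BEGIN')
    const res = await client.query('INSERT INTO users(name) VALUES($1) RETURNING id', [name])
    await client.query('INSERT INTO photos(user_id, photo_url) VALUES($1, $2)', [res.rows[0].id, photoUrl])
    await client.query('COMMIT')
    return res.rows[0].id
  } catch (e) {
    await client.query('ROLLBACK')
    throw e
  } finally {
    client.release()
  }
}

module.exports = { createUserWithPhoto }

The service layer then calls this one method and treats the whole operation as atomic, without ever touching the connection itself.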
I want to make an atomic transaction. I'm using TypeORM in my repository, and my transaction involves two different entities. What is the best way to do it?
In my opinion, the better way is a QueryRunner.
I'm using this template to do all operations in one transaction. I'm on PostgreSQL, and the QueryRunner takes one connection from the pool, which gives the developer more control when the pool is small.
const queryRunner = this.connection.createQueryRunner();
await queryRunner.connect();
await queryRunner.startTransaction();
try {
  // ... run all queries through queryRunner.manager here and build `response` ...
  await queryRunner.commitTransaction();
  return response;
} catch (err) {
  await queryRunner.rollbackTransaction();
  throw new InternalServerErrorException(err);
} finally {
  await queryRunner.release();
}
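For instance, the work inside the try block for two entities could look like this (a sketch only; the User and Photo entities are hypothetical placeholders for your own two entity classes):

// Hypothetical entities; replace with your own.
const user = await queryRunner.manager.save(User, { name: 'john' });
await queryRunner.manager.save(Photo, { userId: user.id, url: 'photos/john.png' });

Both saves then commit or roll back together with the surrounding transaction.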
When I create a new document in the notes collection, I want to update the quantity in the info document. What am I doing wrong?
exports.addNote = functions.region('europe-west1').firestore
  .collection('users/{userId}/notes').onCreate((snap, context) => {
    const uid = admin.user.uid.toString();
    var t;
    db.collection('users').doc('{userId}').collection('info').doc('info').get((querySnapshot) => {
      querySnapshot.forEach((doc) => {
        t = doc.get("countMutable").toString();
      });
    });
    let data = {
      countMutable: t+1;
    };
    db.collection("users").doc(uid).collection("info").doc("info").update({countMutable: data.get("countMutable")});
  });
You have... a lot going on here. A few problems:
You can't trigger Firestore functions on collections; you have to supply a document path.
It isn't clear you're being consistent about how you treat the user ID.
You aren't using promises properly (you need to chain them, and return them out of the function if you want them to execute properly).
I'm not clear about the relationship between the userId context parameter and the uid you are getting from the auth object. As far as I can tell, admin.user isn't actually part of the Admin SDK.
You risk multiple function invocations doing an increment at the same time and giving inconsistent results, since you aren't using a transaction or the atomic increment operation. (Learn more here.)
The info document won't be created if it doesn't already exist. Maybe this is OK?
In short, this all means you can do this a lot more simply.
This should do it, though. I'm assuming that the uid you actually want is the one on the document that triggered the update. If not, adjust as necessary.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

exports.addNote = functions.firestore.document('users/{userId}/notes/{noteId}').onCreate((snap, context) => {
  const uid = context.params.userId;
  return db.collection("users").doc(uid).collection("info").doc("info").set({
    countMutable: admin.firestore.FieldValue.increment(1)
  }, { merge: true });
});
If you don't want to create the info document if it doesn't exist, and instead you want to get an error, you can use update instead of set:
return db.collection("users").doc(uid).collection("info").doc("info").update({
  countMutable: admin.firestore.FieldValue.increment(1)
});
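And if you ever need read-modify-write logic that the atomic increment can't express, a Firestore transaction is the other race-safe option. A minimal sketch, using the same document path as above:

const ref = db.collection("users").doc(uid).collection("info").doc("info");
return db.runTransaction(async (t) => {
  const doc = await t.get(ref);                                // read inside the transaction
  const current = doc.exists ? doc.data().countMutable : 0;
  t.set(ref, { countMutable: current + 1 }, { merge: true });  // write based on the read
});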
I'm setting up my Node server to load the proper information from my DB (Postgres) to render a certain client view. I'm currently refactoring my server code to follow an object-oriented approach with class constructors.
I currently have it so that readers are classes responsible for, well, running read queries on my database. I have inherited classes like MainViewReader and MatchViewReader, and they all inherit from a Reader class, which instantiates a connection with Postgres using the pg-promise library.
The issue is that I can't use two view readers without opening duplicate connections, so I find myself writing redundant code. I believe I have two design choices, and I was wondering which is more efficient:
Instead of organizing the classes by servlet view, organize them by the table being read, e.g. NewsTableReader, MatchTableReader. The pro of this is that none of the code is redundant and the classes can be used in different servlets; the con is that I would have to end the connection to Postgres on every instance of the Reader class before instantiating a new one, as such:
const newsTableReader = new NewsTableReader()
await newsTableReader.close()

const matchTableReader = new MatchTableReader()
await matchTableReader.close()
Just having view readers. The pro is that there is only one persisting connection; the con is that there is lots of redundant code if I'm loading data from the same tables in different views. Example:
const matchViewReader = new MatchViewReader()
await matchViewReader.load_news()
await matchViewReader.load_matches()
Which approach is going to affect my performance negatively the most?
You've correctly ascertained that you should not create multiple connection pools with the same connection options. But this doesn't have to influence the structure of your code.
You could create a global database object (the pool), and pass that to your Reader constructors as a kind of dependency injection:
class Reader {
  constructor(db) {
    this._db = db
  }
}

// Illustrative subclasses; the queries are placeholders for your own:
class NewsTableReader extends Reader {
  load() { return this._db.any('SELECT * FROM news') }
}
class MatchTableReader extends Reader {
  load() { return this._db.any('SELECT * FROM matches') }
}

const pgp = require('pg-promise')(/* library options */)
const db = pgp(/* connection options */)

const newsTableReader = new NewsTableReader(db)
const matchTableReader = new MatchTableReader(db)

await newsTableReader.load()
await matchTableReader.load()
// await Promise.all([newsTableReader.load(), matchTableReader.load()])
Another way to go is to use the same classes with the extend event of the pg-promise library:
const pgp = require('pg-promise')({
  extend(obj, dc) {
    obj.newsTableReader = new NewsTableReader(obj);
    obj.matchTableReader = new MatchTableReader(obj);
  }
})
const db = pgp(/* connection options */)

await db.newsTableReader.load()

await db.tx(async t => {
  const news = await t.newsTableReader.load();
  const match = await t.matchTableReader.load();
  return {news, match};
});
The upside of the extend event is that you can use all of the functionality (e.g. transactions and tasks) provided by the pg-promise library across different models. The thing to keep in mind is that it creates new objects on every db.task(), db.tx() and db.connect() call.
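For example, a task gives you the same cross-model reuse on a single shared connection, without transaction semantics; a small sketch reusing the extended repositories from above:

const result = await db.task(async t => {
  // Both loads run on the same pooled connection:
  const news = await t.newsTableReader.load();
  const match = await t.matchTableReader.load();
  return { news, match };
});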
I know this question has been asked already, but it seems a few more things need to be clarified. :)
The database is designed so that each user has specific privileges for reading documents, so the connection pool needs to hold connections for different users, which falls outside the usual connection-pool concept. For performance reasons I need to run a so-called "user preparation" step, which includes setting session variables, calculating and caching values, etc., before executing queries.
For now, I have two solutions. In the first, I check that everything is prepared for the user before executing one or more queries; if the user is not prepared, I run the "user preparation" first and then execute the queries. With this solution I lose a lot of performance, because the check has to run every time, so I've decided on another solution.
The second solution uses a "database pool" where each pool belongs to one user. Only on the first connection, when useCount === 0 (I do not use {direct: true}), do I call the "user preparation" (a stored procedure that sets some session variables and prepares the cache), and then I execute the SQL queries.
I've placed the user preparation in the connect event within the initOptions object used to initialize pg-promise. I based the code on the pg-promise-demo, so I don't need to explain the rest of the code.
The code for pgp initialization with the wrapper of database pooling looks like this:
import * as promise from "bluebird";
import pgPromise from "pg-promise";
import { IDatabase, IMain, IOptions } from "pg-promise";
import { IExtensions, ProductsRepository, UsersRepository, Session, getUserFromJWT } from "../db/repos";
import { dbConfig } from "../server/config";

// pg-promise initialization options:
export const initOptions: IOptions<IExtensions> = {
    promiseLib: promise,
    async connect(client: any, dc: any, useCount: number) {
        if (useCount === 0) {
            try {
                await client.query(pgp.as.format("select prepareUser($1)", [getUserFromJWT(session.JWT)]));
            } catch (error) {
                console.error(error);
            }
        }
    },
    extend(obj: IExtensions, dc: any) {
        obj.users = new UsersRepository(obj);
        obj.products = new ProductsRepository(obj);
    }
};

type DB = IDatabase<IExtensions> & IExtensions;

const pgp: IMain = pgPromise(initOptions);

class DBPool {
    private pool = new Map();
    public get = (ct: any): DB => {
        const checkConfig = {...dbConfig, ...ct};
        const {host, port, database, user} = checkConfig;
        const dbKey = JSON.stringify({host, port, database, user});
        let db: DB = this.pool.get(dbKey) as DB;
        if (!db) {
            // const pgp: IMain = pgPromise(initOptions);
            db = pgp(checkConfig) as DB;
            this.pool.set(dbKey, db);
        }
        return db;
    }
}
export const dbPool = new DBPool();

import diagnostics = require("./diagnostics");
diagnostics.init(initOptions);
And the web API looks like this:
GET("/api/getuser/:id", (req: Request) => {
const user = getUserFromJWT(session.JWT);
const db = dbPool.get({ user });
return db.users.findById(req.params.id);
});
I'm interested in whether this code instantiates pgp correctly, or whether it should be instantiated within the if block inside the get method (the commented-out line)?
I've seen that pg-promise uses a DatabasePool singleton, exported from dbPool.js, which is similar to my DBPool class but whose purpose is to emit "WARNING: Creating a duplicate database object for the same connection". Is it possible to use that DatabasePool singleton instead of my dbPool singleton?
It seems to me that the Database Context (the second parameter in pgp initialization) could solve my problem, but only if it could be passed as a function rather than as a value or object. Am I wrong, or can the context be dynamic when accessing a database object?
I wonder if there is a third (better) solution? Or any other suggestion.
If you are troubled by this warning:
WARNING: Creating a duplicate database object for the same connection
but your intent is to maintain a separate pool per user, you can indicate so by providing any unique parameter for the connection. For example, you can include a custom property with the user name:
const cn = {
    database: 'my-db',
    port: 12345,
    user: 'my-login-user',
    password: 'my-login-password',
    // ...
    my_dynamic_user: 'john-doe'
};
This will be enough for the library to see that there is something unique in your connection that doesn't match the other connections, so it won't produce that warning.
This will work for connection strings as well.
Please note that what you are trying to achieve can only work well when the total number of connections well exceeds the number of users. For example, if you can use up to 100 connections with up to 10 users, you can allocate 10 pools, each with up to 10 connections. Otherwise, the scalability of your system will suffer: the total number of connections is a very limited resource, and you would typically never go beyond 100, as running so many physical connections concurrently creates excessive load on the CPU. That's why sharing a single connection pool scales much better.
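If you go this route, you can also cap each per-user pool explicitly, since the standard max option (the pool size, passed through to the underlying pg pool) is just another connection property. A sketch based on the connection object above:

const pgp = require('pg-promise')();

// 10 users sharing a ~100-connection budget => at most 10 connections per pool:
const db = pgp({
    database: 'my-db',
    port: 12345,
    user: 'my-login-user',
    password: 'my-login-password',
    max: 10, // pool-size cap for this user's pool
    my_dynamic_user: 'john-doe'
});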
Is there a NoSQL database solution for desktop applications, similar to SQLite, where the database is a file on the user's machine? The database would be called by a Node.js application running on the desktop.
I see this is an old question, but a newer option would be AceBase, which is a fast, low-memory, transactional, index- and query-enabled NoSQL database engine and server for Node.js and the browser. It is definitely a good NoSQL alternative to SQLite and very easy to use:
const { AceBase } = require('acebase');
const db = new AceBase('mydb');

// Add question to database:
const questionRef = await db.ref('stackoverflow/questions').push({
    title: 'Local NoSQL database for desktop application',
    askedBy: 'tt9',
    date: new Date(),
    question: 'Is there a NoSQL database solution for desktop applications similar to Sqlite where the database is a file on the user\'s machine? ..'
});
// questionRef is now a reference to the saved database path,
// eg: "stackoverflow/questions/ky9v13mr00001s7b829tmwk1"

// Add my answer to it:
const answerRef = await questionRef.child('answers').push({
    text: 'Use AceBase!'
});
// answerRef is now a reference to the saved answer in the database,
// eg: "stackoverflow/questions/ky9v13mr00001s7b829tmwk1/answers/ky9v5atd0000eo7btxid7uic"

// Load the question (and all answers) from the database:
const questionSnapshot = await questionRef.get();
// A snapshot contains the value and relevant metadata, such as the used reference:
console.log(`Got question from path "${questionSnapshot.ref.path}":`, questionSnapshot.val());

// We can also monitor data changes in realtime.
// To monitor new answers being added to the question:
questionRef.child('answers').on('child_added').subscribe(newAnswerSnapshot => {
    console.log(`A new answer was added:`, newAnswerSnapshot.val());
});

// Monitor any answer's number of upvotes:
answerRef.child('upvotes').on('value').subscribe(snapshot => {
    const prevValue = snapshot.previous();
    const newValue = snapshot.val();
    console.log(`The number of upvotes changed from ${prevValue} to ${newValue}`);
});

// Updating my answer text:
await answerRef.update({ text: 'I recommend AceBase!' });

// Or, using .set on the text itself:
await answerRef.child('text').set('I can really recommend AceBase');

// Adding an upvote to my answer using a transaction:
await answerRef.child('upvotes').transaction(snapshot => {
    let upvotes = snapshot.val();
    return upvotes + 1; // Return new value to store
});

// Query all given answers sorted by upvotes:
let querySnapshots = await questionRef.child('answers')
    .query()
    .sort('upvotes', false) // descending order, most upvotes first
    .get();

// Limit the query results to the top 10 with "AceBase" in their answers:
querySnapshots = await questionRef.child('answers')
    .query()
    .filter('text', 'like', '*AceBase*')
    .take(10)
    .sort('upvotes', false) // descending order, most upvotes first
    .get();

// We can also load the question in memory and make it "live":
// The in-memory value will automatically be updated if the database value changes, and
// all changes to the in-memory object will automatically update the database:
const questionProxy = await questionRef.proxy();
const liveQuestion = questionProxy.value;

// Changing a property updates the database automatically:
liveQuestion.tags = ['node.js', 'database', 'nosql'];
// ... db value of tags is updated in the background ...

// And changes to the database will update the liveQuestion object:
let now = new Date();
await questionRef.update({ edited: now });
// In the next tick, the live proxy value will have updated:
process.nextTick(() => {
    liveQuestion.edited === now; // true
});
I hope this is of help to anyone reading this; AceBase is a fairly new kid on the block that is starting to make waves!
Note that AceBase can also run in the browser, and as a remote database server with full authentication and authorization options. It can synchronize with the server and other clients in realtime, and upon reconnecting after having been offline.
For more info and documentation, check out AceBase on GitHub.
If you want to quickly try the above examples, you can copy/paste the code into the editor at RunKit: https://npm.runkit.com/acebase
I use a local MongoDB instance. It is super easy to set up. Here is an easy how-to guide on setting up MongoDB.
You can also try Couchbase. There is an example of using it along with GitHub Electron:
http://blog.couchbase.com/build-a-desktop-app-with-github-electron-and-couchbase