What are the conventions around when and where to create datastore client objects?
const datastore = new Datastore({});
In the docs a new Datastore instance seems to be created in every single file. Would there be any benefit in creating a singleton that initialises the Datastore connection and returns the same instance to each part of the application that requires it?
Whether new Datastore({}) actually creates a new instance or returns a singleton depends on the underlying code; you'd have to check that.
What you could do is move the creation of the datastore instance to a separate file and require that instance in every file that needs access to the datastore. Since required modules are cached, you will always get the same instance.
Pseudo code:
datastore.js
const Datastore = require('@google-cloud/datastore');
const datastore = new Datastore({});
module.exports = datastore;
foo.js
const datastore = require('./datastore');
// do something with datastore
In reply to your follow-up question.
If you look at the source code of the Node.js Datastore module, you will see the same pattern:
src/index.js
* @example <caption>Import the client library</caption>
* const Datastore = require('@google-cloud/datastore');
// ...
module.exports = Datastore;
No matter where you require the client library:
const Datastore = require('@google-cloud/datastore');
It will always return the same instance. Datastore will handle scaling and connections (pooling) for you.
In conclusion: there's no functional difference between requiring the client library in each file, or wrapping it in a separate file and requiring that wherever you need a connection.
Personally, I prefer wrapping the connection in a separate file and requiring that in my data-access files (see the sketch after this list). The benefits of this are:
* You abstract away the actual implementation. If you ever need to change the datastore, or the way you connect to it, the change only has to happen in one place.
* In case you need to supply connection parameters (like a password), you only have to do that once. It saves you from writing the same code over and over again.
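For example, a minimal sketch of such a wrapper (the environment variable names are made up for illustration):
// datastore.js - the only place that knows how to connect
const Datastore = require('@google-cloud/datastore');

const datastore = new Datastore({
  projectId: process.env.GCLOUD_PROJECT,   // assumed env var
  keyFilename: process.env.GCLOUD_KEY_FILE, // assumed env var
});

module.exports = datastore;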
Related
I have a Next.js app with a very simple database using lowdb. The idea was to have one module with simple get and set functions, and inside the module the db is set up as a top-level constant:
// Assumed lowdb v3-style imports; adjust to your lowdb version
import { Low, JSONFile } from 'lowdb'

const dbFile = 'tracks.json' // path to the JSON database file
const dbAdapter = new JSONFile(dbFile)
const db = new Low(dbAdapter)

let randomId = generateRandomId() // helper defined elsewhere; tags this module instance
console.log(`DB: Using database in ${dbFile}, instance ${randomId}`)

// A promise we await at the beginning of the get and set functions
// to ensure the db is initialized
let initPromise = new Promise<undefined>(async function (resolve, reject) {
  console.log(`DB: init, reading database ...`)
  // Read data from the JSON file; this sets db.data
  await db.read()
  // If tracks.json doesn't exist, db.data will be null,
  // so set default data (||= requires Node >= 15.x)
  db.data ||= {
    tracks: {},
  }
  resolve(undefined)
})

export async function getTracks() {
  await initPromise
  return db.data['tracks']
}
...
getTracks() is then called inside the getServerSideProps function of multiple routes, to provide the actual data to both server-side rendering and the client-rendered page.
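For context, a hypothetical route uses it roughly like this (the file path and page component are made up for illustration):
// pages/index.js (hypothetical route)
import { getTracks } from '../lib/db' // assumed location of the db module

export async function getServerSideProps() {
  const tracks = await getTracks()
  return { props: { tracks } }
}

export default function Home({ tracks }) {
  return <pre>{JSON.stringify(tracks, null, 2)}</pre>
}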
The problem seems to be that, as I use the database in multiple pages, Next.js instantiates this module multiple times, leading to each page having its own separate database: whatever one page saves, the other doesn't see, as each instance keeps its own in-memory cache and doesn't re-read the db file on every read.
Here is what the log shows:
wait - compiling / (client and server)...
event - compiled client and server successfully in 100 ms (387 modules)
DB: Using database in tracks.json, instance tpzoj
DB: init, reading database ...
Then I navigate to another page, and this happens: the module is built again for the other page's bundle, and another instance of the db is created:
wait - compiling /edit (client and server)...
event - compiled client and server successfully in 115 ms (394 modules)
DB: Using database in tracks.json, instance gjndv
DB: init, reading database ...
Is there a way to force Next.js to only build a certain module once (instead of bundling it into each page separately)?
This seems to be related: https://github.com/vercel/next.js/issues/10933 - the takeaway there is basically "avoid an in-process memory cache", and use the filesystem or Redis instead :(.
Or, in my case, await db.read() inside every getTracks(), to force a re-read of the db on every request :(. Is there a better way? The re-read workaround would look like the sketch below.
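A minimal sketch of that workaround, reusing the module shown above:
// Workaround sketch: re-read the file on every call, so each module
// instance sees what the other instances have written to disk.
export async function getTracks() {
  await initPromise
  await db.read() // force a re-read of tracks.json on every request
  return db.data['tracks']
}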
I couldn't find an answer to something I've been wondering about.
With MySQL in Express.js, when I declared the MySQL connection inside a POST-handling function, a new connection was created every time my Express.js server got a request. The server would then throw an error once the maximum number of connections between the app server and the database server was reached.
I was wondering whether the same problem exists with DynamoDB.DocumentClient(). What is the best way of doing operations with DynamoDB?
Should I have the DocumentClient global, as below, or is it okay to leave it inside the post/get functions?
...
// DocumentClient is created outside of the post function below
const docClient = new AWS.DynamoDB.DocumentClient();
router.post('/loglogbaby', function(req, res){
  var params = { ... };
  docClient.get(params, function(err, data){ ... });
  res.json({response: "nonobaby"});
});
...
Well, it doesn't matter, because DynamoDB works with HTTP requests behind the scenes, not with connections and pooling. DocumentClient ultimately sends an HTTP request; it's a library that makes the low-level API easier to use (see here).
So you are basically creating a programming-level object each time, not a new connection, and objects are cheap to create.
Since AWS DynamoDB is a hosted service, there is no problem with where you create the DocumentClient object.
It is good practice to create a global object for it, though.
You can find here a comparison between MySQL and DynamoDB.
I'm trying to take advantage of db connection reuse in Lambda, by keeping the code outside of the handler.
For example - something like:
import dbconnection from './connection'
const handler = (event, context, callback) => {
  // use dbconnection
}
The issue is that I don't know which database to connect to until I do a lookup to see where the caller should be connecting. In my specific case I have 'customer=foo' in a query param, and I can then look up that foo should connect to database1.
So what I need to do is something like this :
const dbconnection = require('./connection')('database1')
As it stands, I need to do this in every handler method, which is expensive.
Is there some way I can pull the query parameter, look up my database and set it / switch it globally within the Lambda execution context?
I've tried this:
import dbconnection from './connection'
const handler = (event, context, callback) => {
  const client = dbconnection.setDatabase('database1')
}
....
./connection.js
setDatabase(database) {
  if (this.currentDatabase !== database) {
    // connect to a different database
    this.currentDatabase = database;
  }
}
Everything works locally with sls offline, but it doesn't work in the real AWS Lambda execution environment. Thoughts?
Either you can hardcode the database name (or provide it via an environment variable) or you can't. If you can, pull the connection out of the handler and it will not be re-created on every invocation; see the sketch below. If you can't, then, as you have mentioned, what you are trying to do is make the Lambda stateful. Lambda was designed to be stateless, and AWS intentionally doesn't expose specifics about the underlying containers precisely so that you don't start doing what you are trying to do now: introducing state.
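A sketch of the hoisted variant, assuming the database name can come from an environment variable and that ./connection exports a factory taking the database name, as in the question:
// handler.js - sketch only
const dbconnection = require('./connection')(process.env.DB_NAME) // runs once per container, not per invocation

exports.handler = async (event) => {
  // dbconnection is reused across warm invocations of this container
  // ... use dbconnection here
}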
I am using the Firebase Realtime Database and I was wondering which is the better pattern regarding
firebase.database()
Is it considered bad practice to have multiple instances of this? Is it better to have a single instance of the database that is exported within the Node app, or is it basically the same to create a new instance in every single action-creator file?
import * as firebase from 'firebase';
firebase.initializeApp(config);
export const provider = new firebase.auth.GoogleAuthProvider();
export const auth = firebase.auth();
export default firebase;
I use this approach for the Firebase app instance, and I am unsure whether a similar pattern is required for the database instance as well. The Firebase docs don't say anything about this.
Every time you call one of the product methods on the firebase object that you get from the import, it will give you exactly the same object in return. So, every time you call firebase.auth(), you'll get the same thing back, and every time you call firebase.database(), you'll get the same thing. How you want to manage those instances is completely your preference.
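A quick way to convince yourself, using the same namespaced API as in the question:
import * as firebase from 'firebase';

firebase.initializeApp(config); // config as in the question

// Repeated calls hand back the same cached instance:
const db1 = firebase.database();
const db2 = firebase.database();
console.log(db1 === db2); // true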
I am looking for a MongoDB-API-compatible DB engine that does not require a full-blown mongod process to run (a kind of SQLite for Node).
From multiple candidates that persistently store data on a local disk with a similar API, I ended up with two:
NeDB https://github.com/louischatriot/nedb
tingodb http://www.tingodb.com/
Problem
I have worked with neither of them.
I am also very new to the MongoDB API, so it is difficult for me to judge their compatibility.
Requirements
I need your help/advice on picking just one library that satisfies the following:
It is stable enough.
It is fast enough to handle JSON documents of ~1 MB on disk, or bigger.
I want to be able to switch to MongoDB as the data backend in the future, on demand, by changing a config file. I don't want to duplicate code.
The DB initialization API is different
Right now, only tingodb claims API compatibility, and even the initialization looks fairly similar.
tingodb
var Db = require('tingodb')().Db,
    assert = require('assert');
vs
mongodb
var Db = require('mongodb').Db,
Server = require('mongodb').Server,
assert = require('assert');
In the case of NeDB it looks a bit different, because it uses the datastore abstraction:
// Type 1: In-memory only datastore (no need to load the database)
var Datastore = require('nedb')
, db = new Datastore();
QUESTION
Obviously the initialization is not compatible. But what about CRUD? How difficult would it be to adapt?
Since most of the code I do not want to duplicate consists of CRUD operations, I need to know how similar they are, i.e. how agnostic my code can be about which backend it is using.
// If doc is a JSON object to be stored, then
db.insert(doc); // this is a NeDB method which is compatible
// What about WriteResult? It does not look like it is supported..
db.insert(doc, function (err, newDoc) { // callback is optional
  // newDoc is the newly inserted document, including its _id
  // newDoc has no key called notToBeSaved since its value was undefined
});
I would appreciate your insight into this choice!
Also see:
Lightweight Javascript DB for use in Node.js
Has anyone used Tungus? Is it mature?
NeDB's CRUD operations are upward-compatible with MongoDB's, but initialization is indeed not. NeDB implements part of MongoDB's API, not all of it, and the part that is implemented is upward-compatible.
It's definitely fast enough for your requirements, and we've made it very stable over the past few months (no more bug reports).
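To illustrate the upward compatibility, a minimal sketch: the same callback-style calls run unchanged against a NeDB datastore or a collection from the legacy MongoDB driver (error handling omitted):
// coll is either a NeDB datastore or a MongoDB collection
function addAndCount(coll) {
  coll.insert({ type: 'track' }, function (err, newDoc) {
    coll.count({ type: 'track' }, function (err, n) {
      console.log('stored documents:', n);
    });
  });
}

// NeDB: an in-memory datastore plays the role of a collection
var Datastore = require('nedb');
addAndCount(new Datastore());

// MongoDB (legacy driver): pass a real collection instead, e.g.
// addAndCount(db.collection('tracks'));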