I'm building a very rudimentary ORM while I learn Node. The constructor currently accepts either a name or an id parameter, but not both. If the name is provided, the class creates the record in the database. If the id is provided, it looks up the record. Here's the complete file.
const mysql = require('mysql2/promise');
const { v4: uuid } = require('uuid');
const pool = require('../helpers/pool.js');
const xor = require('../helpers/xor.js');

class List {
  constructor({ name = null, id = null } = {}) {
    if (!xor(name, id)) {
      throw new TypeError('Lists must have either a name or an id');
    }
    this.table_name = 'lists';
    if (name) {
      this.name = name;
      this.uuid = uuid();
      this.#insert_record();
      return;
    }
    this.id = id;
    this.#retrieve_record();
  }

  async #insert_record() {
    await pool.query(
      `INSERT INTO ${this.table_name} SET ?`,
      {
        name: this.name,
        uuid: this.uuid
      }
    ).then(async (results) => {
      this.id = results[0].insertId;
      return this.#retrieve_record();
    });
  }

  async #retrieve_record() {
    return await pool.execute(
      `SELECT * FROM ${this.table_name} WHERE id = ?`,
      [this.id]
    ).then(([records, fields]) => {
      this.#assign_props(records[0], fields);
      pool.end();
    });
  }

  #assign_props(record, fields) {
    fields.forEach((field) => {
      this[field.name] = record[field.name];
    });
    console.log(this);
  }
}

const list = new List({ name: 'my list' });
const db_list = new List({ id: 50 });
You probably see the problem if you run this as-is: I get intermittent errors. Sometimes everything works fine. Generally I see a console log of the retrieved list first, and then I see a log of the new list. But sometimes the pool gets closed by the retrieval before the insert can happen.
I tried placing the pool within the class, but that just caused other errors.
So, what's the proper way to have an ORM class use a connection pool? Note that I'm building features as I learn, and eventually there'll be a Table class from which all of the entity classes will inherit. But I'm first just trying to get this one class to work properly on its own.
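One common way around this, shown here only as a rough sketch that reuses the same pool and xor helpers (the create/find method names are just illustrative), is to keep the constructor synchronous and move the async work into static factory methods, so the caller can await them and the pool is only closed when the whole application shuts down:

class List {
  constructor({ name = null, id = null } = {}) {
    if (!xor(name, id)) {
      throw new TypeError('Lists must have either a name or an id');
    }
    this.table_name = 'lists';
    this.name = name;
    this.id = id;
  }

  // Create a new row, then load the generated columns back onto a fresh instance.
  static async create(name) {
    const list = new List({ name });
    list.uuid = uuid();
    const [result] = await pool.query(
      `INSERT INTO ${list.table_name} SET ?`,
      { name: list.name, uuid: list.uuid }
    );
    list.id = result.insertId;
    return List.find(list.id);
  }

  // Look up an existing row by id and copy its columns onto the instance.
  static async find(id) {
    const list = new List({ id });
    const [records] = await pool.execute(
      `SELECT * FROM ${list.table_name} WHERE id = ?`,
      [id]
    );
    Object.assign(list, records[0]);
    return list;
  }
}

// Usage (top-level await needs an ES module; otherwise wrap in an async function).
// Call pool.end() only once, when the application shuts down.
const list = await List.create('my list');
const db_list = await List.find(50);

Because the caller awaits each factory method, the insert and the lookup can no longer race each other for the pool.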
The final solution is at the bottom of this post.
I have a nodeJS server application that listens to a rather big collection:
//here was old code
This works perfectly fine: there are lots of documents, and the server can serve them from cache instead of the database, which saves me tons of document reads (and is a lot faster).
I want to make sure this collection listener stays alive forever, which means reconnecting if changes stop coming through.
Is there any way to create this certainty? This server might be online for years.
Final solution:
database listener that saves the timestamp on a change
export const lastRolesChange = functions.firestore
  .document(`${COLLECTIONS.ROLES}/{id}`)
  .onWrite(async (_change, context) => {
    return firebase()
      .admin.firestore()
      .collection('syncstatus')
      .doc(COLLECTIONS.ROLES)
      .set({
        lastModified: context.timestamp,
        docId: context.params.id
      });
  });
Logic that checks whether the server has the same updated timestamp as the database. If it is still listening, it should; otherwise, refresh the listener because it might have stalled.
import { firebase } from '../google/auth';
import { COLLECTIONS } from '../../../configs/collections.enum';

class DataObjectTemplate {
  constructor() {
    for (const key in COLLECTIONS) {
      if (key) {
        this[COLLECTIONS[key]] = [] as { id: string; data: any }[];
      }
    }
  }
}

const dataObject = new DataObjectTemplate();
const timestamps: {
  [key in COLLECTIONS]?: Date;
} = {};
let unsubscribe: Function;

export const getCachedData = async (type: COLLECTIONS) => {
  return firebase()
    .admin.firestore()
    .collection(COLLECTIONS.SYNCSTATUS)
    .doc(type)
    .get()
    .then(async snap => {
      const lastUpdate = snap.data();
      /* We compare the last update of the roles collection with the last update we
       * got from the listener. If the listener failed to sync, we
       * will find out here and reset the listener.
       */
      // First check if we already have a timestamp; otherwise, set it in the past.
      let timestamp = timestamps[type];
      if (!timestamp) {
        timestamp = new Date(2020, 0, 1);
      }
      // If we don't have a last update for some reason, there is something wrong.
      if (!lastUpdate) {
        throw new Error('Missing sync data for ' + type);
      }
      const lastModified = new Date(lastUpdate.lastModified);
      if (lastModified.getTime() > timestamp.getTime()) {
        console.warn('Out of sync: refresh!');
        console.warn('Resetting listener');
        if (unsubscribe) {
          unsubscribe();
        }
        await startCache(type);
        return dataObject[type] as { id: string; data: any }[];
      }
      return dataObject[type] as { id: string; data: any }[];
    });
};

export const startCache = async (type: COLLECTIONS) => {
  // tslint:disable-next-line:no-console
  console.warn('Building ' + type + ' cache.');
  const timeStamps: number[] = [];
  // Start with a clean array.
  dataObject[type] = [];
  return new Promise(resolve => {
    unsubscribe = firebase()
      .admin.firestore()
      .collection(type)
      .onSnapshot(querySnapshot => {
        querySnapshot.docChanges().map(change => {
          timeStamps.push(change.doc.updateTime.toMillis());
          if (change.oldIndex !== -1) {
            dataObject[type].splice(change.oldIndex, 1);
          }
          if (change.newIndex !== -1) {
            dataObject[type].splice(change.newIndex, 0, {
              id: change.doc.id,
              data: change.doc.data()
            });
          }
        });
        // tslint:disable-next-line:no-console
        console.log(dataObject[type].length + ' ' + type + ' in cache.');
        timestamps[type] = new Date(Math.max(...timeStamps));
        resolve(true);
      });
  });
};
If you want to be sure you have all changes, you'll have to do the following (a rough sketch follows this list):
keep a lastModified type field in each document,
use a query to get documents that were modified since you last looked,
store the last time you queried on your server.
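A rough sketch of that catch-up query, assuming the same firebase() helper as above; the lastModified field name and the stored lastSync variable are just illustrative:

// Rough sketch only: catch-up query based on a lastModified field.
// Assumes every document in the collection carries a lastModified timestamp.
let lastSync = new Date(2020, 0, 1); // stored on the server between runs

export const catchUp = async (type) => {
  const snapshot = await firebase()
    .admin.firestore()
    .collection(type)
    .where('lastModified', '>', lastSync)
    .get();

  snapshot.forEach(doc => {
    // Merge each changed document into the local cache here.
    console.log('changed since last sync:', doc.id);
  });

  // Remember when we last looked, so the next query only fetches newer changes.
  lastSync = new Date();
};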
Unrelated to that, you might also be interested in the recently launched ability to serve bundled Firestore content as it's another way to reduce the number of charged reads you have to do against the Firestore server.
I am trying to build an EJS form with three fields, and I need to pass two sets of data to it at the same time, users and books, using promises. Unfortunately, books are not getting passed and stay 'undefined'. I cannot figure out why.
Form
Textfield (irrelevant for this example)
Dropdown box with a list of users
Dropdown box with a list of books
For (2) and (3) I query my MySQL database to get the data so that I can fill the form drop-down boxes.
/controller.js

const User = require('../models/user.js');
const Book = require('../models/book.js');

exports.getAddNeueAusleihe = (req, res, next) => {
  // storing users and books in this object
  let view_data = {};
  // fetch users for dropdown-box nr. 1
  User.fetchAll()
    .then(([users_rows]) => {
      view_data.users = users_rows;
      // fetch books for dropdown-box nr. 2
      return Book.fetchAll(([books_rows]) => {
        view_data.books = books_rows;
      });
    })
    .then(() => {
      // send data to view
      res.render('neue-ausleihe.ejs', {
        users: view_data.users,
        books: view_data.books,
        pageTitle: 'Neue Ausleihe'
      });
    });
};
The User fetch works fine, but the Book fetch returns "undefined", although the SQL code in the books model works fine. It actually jumps into the books model, but the values never reach the view. Here is my SQL code for the models.
/models/user.js

const db = require('../util/database.js');

module.exports = class User {
  constructor(id, name) {
    this.id = id;
    this.name = name;
  }

  static fetchAll() {
    return db.execute('SELECT * from dbuser WHERE UserFlag = "active"');
  }
};
/models/books.js

const db = require('../util/database.js');

module.exports = class Book {
  constructor(id, name) {
    this.id = id;
    this.name = name;
  }

  static fetchAll() {
    return db.execute('SELECT * from books WHERE status = "free"');
  }
};
Assuming db.execute returns a promise that resolves to an array, for which the first entry is the actual result, your code should look more like this:
exports.getAddNeueAusleihe = (req, res, next) => {
  // storing users and books in this object
  const view_data = {};
  // fetch users for dropdown-box nr. 1
  User.fetchAll()
    .then(([users_rows]) => {
      view_data.users = users_rows;
      // fetch books for dropdown-box nr. 2
      return Book.fetchAll();
    })
    .then(([books_rows]) => {
      view_data.books = books_rows;
    })
    .then(() => {
      // send data to view
      res.render('neue-ausleihe.ejs', {
        users: view_data.users,
        books: view_data.books,
        pageTitle: 'Neue Ausleihe'
      });
    });
};
Bonus version:
exports.getAddNeueAusleihe = async (req, res, next) => {
  // send data to view
  res.render('neue-ausleihe.ejs', {
    pageTitle: 'Neue Ausleihe',
    users: (await User.fetchAll())[0],
    books: (await Book.fetchAll())[0]
  });
};
Even as the person writing this, it still blows my mind how much more readable async/await-based code is vs promise chains.
The official documentation of the Node.js Driver version 3.6 contains the following example for the .find() method:
const { MongoClient } = require("mongodb");

// Replace the uri string with your MongoDB deployment's connection string.
const uri = "mongodb+srv://<user>:<password>@<cluster-url>?w=majority";
const client = new MongoClient(uri);

async function run() {
  try {
    await client.connect();
    const database = client.db("sample_mflix");
    const collection = database.collection("movies");
    // query for movies that have a runtime less than 15 minutes
    const query = { runtime: { $lt: 15 } };
    const options = {
      // sort returned documents in ascending order by title (A->Z)
      sort: { title: 1 },
      // Include only the `title` and `imdb` fields in each returned document
      projection: { _id: 0, title: 1, imdb: 1 },
    };
    const cursor = collection.find(query, options);
    // print a message if no documents were found
    if ((await cursor.count()) === 0) {
      console.log("No documents found!");
    }
    await cursor.forEach(console.dir);
  } finally {
    await client.close();
  }
}
To me this somewhat implies that I would have to create a new connection for each DB request I make.
Is this correct? If not, what is the best practice to keep the connection alive across various routes?
You can use mongoose to set up a connection to your database.
mongoose.connect('mongodb://localhost:27017/myapp', {useNewUrlParser: true});
Then you need to define the models you will use to communicate with your DB in your routes.
const MyModel = mongoose.model('Test', new Schema({ name: String }));
MyModel.findOne(function(error, result) { /* ... */ });
https://mongoosejs.com/docs/connections.html
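Mongoose keeps an internal connection pool once connect() has been called, so routes can reuse the same models without reconnecting. A rough sketch of that wiring in Express (the route path, model and schema below are just examples, not part of the answer above):

// Connect once at startup, then reuse the same mongoose connection from every route.
const express = require('express');
const mongoose = require('mongoose');

const app = express();

mongoose.connect('mongodb://localhost:27017/myapp', { useNewUrlParser: true });

const MyModel = mongoose.model('Test', new mongoose.Schema({ name: String }));

app.get('/tests', async (req, res) => {
  // No new connection here: mongoose reuses its internal connection pool.
  const docs = await MyModel.find({});
  res.json(docs);
});

app.listen(3000);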
It's 2022 and I stumbled upon your post because I've been running into the same issue. All the tutorials and guides I've found so far have setups that require reconnecting in order to do anything with the Database.
I found one solution from someone on GitHub that creates a class to create, save and check whether a client connection exists, so it only recreates a client connection if it doesn't already exist.
const MongoClient = require('mongodb').MongoClient;

class MDB {
  static async getClient() {
    if (this.client) {
      return this.client;
    }
    this.client = await MongoClient.connect(this.url);
    return this.client;
  }
}

MDB.url = '<your_connection_url>';

app.get('/yourroute', async (req, res) => {
  try {
    const client = await MDB.getClient();
    const db = client.db('your_db');
    const collection = db.collection('your_collection');
    const results = await collection.find({}).toArray();
    res.json(results);
  } catch (error) {
    console.log('error:', error);
  }
});
I am maintaining the bot state in a local MongoDB storage. When I try to hand off the conversation to an agent using directline-js, it shows an error: BotFrameworkAdapter.sendActivity(): Missing Conversation ID. The conversation ID is being saved in MongoDB.
The issue arises when I change the middle layer from an Array to MongoDB. I have already successfully implemented the same bot-human hand-off using directline-js with an Array and the default Memory Storage.
MemoryStorage in BotFramework
const { BotFrameworkAdapter, MemoryStorage, ConversationState, UserState } = require('botbuilder')
const memoryStorage = new MemoryStorage();
conversationState = new ConversationState(memoryStorage);
userState = new UserState(memoryStorage);
Middle Layer for Hand-Off to Agent
case '#connect':
  const user = await this.provider.connectToAgent(conversationReference);
  if (user) {
    await turnContext.sendActivity(`You are connected to
      ${ user.userReference.user.name }\n ${ JSON.stringify(user.messages) }`);
    await this.adapter.continueConversation(user.userReference, async (userContext) => {
      await userContext.sendActivity('You are now connected to an agent!');
    });
  }
  else {
    await turnContext.sendActivity('There are no users in the Queue right now.');
  }
The this.adapter.continueConversation throws the error when using MongoDB.
While using the Array it works fine. The MongoDB and Array objects are similar in structure.
Since this works with MemoryStorage and not your MongoDB implementation, I'm guessing that there's something wrong with your MongoDB implementation. This answer will focus on that. If this isn't the case, please provide your MongoDb implementation and/or a link to your repo and I can work off that.
Mongoose is only necessary if you want to use custom models/types/interfaces. For storage that implements BotState, you just need to write a custom Storage adapter.
The basics of this are documented here. Although written for C#, you can still apply the concepts to Node.
1. Install mongodb
npm i -S mongodb
2. Create a MongoDbStorage class file
MongoDbStorage.js
var MongoClient = require('mongodb').MongoClient;

module.exports = class MongoDbStorage {
  constructor(connectionUrl, db, collection) {
    this.url = connectionUrl;
    this.db = db;
    this.collection = collection;
    this.mongoOptions = {
      useNewUrlParser: true,
      useUnifiedTopology: true
    };
  }

  async read(keys) {
    const client = await this.getClient();
    try {
      var col = await this.getCollection(client);
      const data = {};
      await Promise.all(keys.map(async (key) => {
        const doc = await col.findOne({ _id: key });
        data[key] = doc ? doc.document : null;
      }));
      return data;
    } finally {
      client.close();
    }
  }

  async write(changes) {
    const client = await this.getClient();
    try {
      var col = await this.getCollection(client);
      await Promise.all(Object.keys(changes).map((key) => {
        const changesCopy = { ...changes[key] };
        const documentChange = {
          _id: key,
          document: changesCopy
        };
        const eTag = changes[key].eTag;
        if (!eTag || eTag === '*') {
          return col.updateOne({ _id: key }, { $set: { ...documentChange } }, { upsert: true });
        } else if (eTag.length > 0) {
          return col.replaceOne({ _id: eTag }, documentChange);
        } else {
          throw new Error('eTag empty');
        }
      }));
    } finally {
      client.close();
    }
  }

  async delete(keys) {
    const client = await this.getClient();
    try {
      var col = await this.getCollection(client);
      await Promise.all(keys.map((key) => col.deleteOne({ _id: key })));
    } finally {
      client.close();
    }
  }

  async getClient() {
    const client = await MongoClient.connect(this.url, this.mongoOptions)
      .catch(err => { throw err; });
    if (!client) throw new Error('Unable to create MongoDB client');
    return client;
  }

  async getCollection(client) {
    return client.db(this.db).collection(this.collection);
  }
};
Note: I've only done a little testing on this, just enough to get it to work with the Multi-Turn-Prompt Sample. Use at your own risk and modify as necessary.
I based this off of a combination of these three storage implementations:
memoryStorage
blobStorage
cosmosDbStorage
3. Use it in your bot
index.js
const MongoDbStorage = require('./MongoDbStorage');
const mongoDbStorage = new MongoDbStorage('mongodb://localhost:27017/', 'testDatabase', 'testCollection');
const conversationState = new ConversationState(mongoDbStorage);
const userState = new UserState(mongoDbStorage);
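For completeness, the rest of the wiring stays the same as with MemoryStorage. A rough sketch (the MyBot class, restify server and environment variables below are placeholders, not part of the answer above):

const restify = require('restify');
const { BotFrameworkAdapter } = require('botbuilder');

const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword
});

// The bot receives the state objects that are now backed by MongoDbStorage.
const bot = new MyBot(conversationState, userState);

const server = restify.createServer();
server.listen(process.env.port || 3978);

server.post('/api/messages', (req, res) => {
  adapter.processActivity(req, res, async (context) => {
    await bot.run(context);
    // Persist any state changes to MongoDB at the end of the turn.
    await conversationState.saveChanges(context);
    await userState.saveChanges(context);
  });
});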
I am having a hard time interpreting the documentation from Neo4j about transactions. Their documentation seems to indicate a preference for doing it this way rather than explicitly calling tx.commit() and tx.rollback().
Does this look best practice with respect to multi-statement transactions and neo4j-driver?
const register = async (container, user) => {
  const session = driver.session()
  const timestamp = Date.now()
  const saltRounds = 10
  const pwd = await utils.bcrypt.hash(user.password, saltRounds)
  try {
    // Start registration transaction
    const registerUser = session.writeTransaction(async (transaction) => {
      const initialCommit = await transaction
        .run(`
          CREATE (p:Person {
            email: '${user.email}',
            tel: '${user.tel}',
            pwd: '${pwd}',
            created: '${timestamp}'
          })
          RETURN p AS Person
        `)
      const initialResult = initialCommit.records
        .map((x) => {
          return {
            id: x.get('Person').identity.low,
            created: x.get('Person').properties.created
          }
        })
        .shift()
      // Generate serial
      const data = `${initialResult.id}${initialResult.created}`
      const serial = crypto.sha256(data)
      const finalCommit = await transaction
        .run(`
          MATCH (p:Person)
          WHERE p.email = '${user.email}'
          SET p.serialNumber = '${serial}'
          RETURN p AS Person
        `)
      const finalResult = finalCommit.records
        .map((x) => {
          return {
            serialNumber: x.get('Person').properties.serialNumber,
            email: x.get('Person').properties.email,
            tel: x.get('Person').properties.tel
          }
        })
        .shift()
      // Merge both results for complete person data
      return Object.assign({}, initialResult, finalResult)
    })
    // Commit or rollback transaction
    return registerUser
      .then((commit) => {
        session.close()
        return commit
      })
      .catch((rollback) => {
        console.log(`Transaction problem: ${JSON.stringify(rollback, null, 2)}`)
        throw [`reg1`]
      })
  } catch (error) {
    session.close()
    throw error
  }
}
Here is the reduced version of the logic:
const register = (user) => {
  const session = driver.session()
  const performTransaction = session.writeTransaction(async (tx) => {
    const statementOne = await tx.run(queryOne)
    const resultOne = statementOne.records.map((x) => x.get('node')).slice()
    // Do some work that uses data from statementOne
    const statementTwo = await tx.run(queryTwo)
    const resultTwo = statementTwo.records.map((x) => x.get('node')).slice()
    // Do final processing
    return finalResult
  })
  return performTransaction.then((commit) => {
    session.close()
    return commit
  }).catch((rollback) => {
    throw rollback
  })
}
Neo4j experts, is the above code the correct use of neo4j-driver?
I would rather do this, because it's more linear and synchronous:
const register = async (user) => {
  const session = driver.session()
  const tx = session.beginTransaction()
  const statementOne = await tx.run(queryOne)
  const resultOne = statementOne.records.map((x) => x.get('node')).slice()
  // Do some work that uses data from statementOne
  const statementTwo = await tx.run(queryTwo)
  const resultTwo = statementTwo.records.map((x) => x.get('node')).slice()
  // Do final processing
  const finalResult = { obj1, ...obj2 }
  let success = true
  if (success) {
    tx.commit()
    session.close()
    return finalResult
  } else {
    tx.rollback()
    session.close()
    return false
  }
}
I'm sorry for the long post, but I cannot find any references anywhere, so the community needs this data.
After much more work, this is the syntax we have settled on for multi-statement transactions:
Start session
Start transaction
Use a try/catch block after that (so the transaction is in scope in the catch block)
Perform queries in the try block
Rollback in the catch block
const someQuery = async () => {
  const session = Neo4J.session()
  const tx = session.beginTransaction()
  try {
    const props = {
      one: 'Bob',
      two: 'Alice'
    }
    const tx1 = await tx
      .run(`
        MATCH (n:Node)-[r:REL]-(o:Other)
        WHERE n.one = $props.one
        AND n.two = $props.two
        RETURN n AS One, o AS Two
      `, { props })
      .then((result) => {
        return {
          data: '...'
        }
      })
      .catch((err) => {
        throw 'Problem in first query. ' + err
      })
    // Do some work using tx1
    const updatedProps = {
      _id: 3,
      four: 'excellent'
    }
    const tx2 = await tx
      .run(`
        MATCH (n:Node)
        WHERE id(n) = toInteger($updatedProps._id)
        SET n.four = $updatedProps.four
        RETURN n AS One
      `, { updatedProps })
      .then((result) => {
        return {
          data: '...'
        }
      })
      .catch((err) => {
        throw 'Problem in second query. ' + err
      })
    // Do some work using tx2
    if (problem) throw 'Rollback ASAP.'
    await tx.commit()
    session.close()
    return Object.assign({}, tx1, { tx2 })
  } catch (e) {
    tx.rollback()
    session.close()
    throw 'someQuery# ' + e
  }
}
I will just note that if you are passing numbers into Neo4j, you should wrap them inside the Cypher Query with toInteger() so that they are parsed correctly.
I also included examples of query parameters and how to use them; I find it cleans up the code a little.
Besides that, you can basically chain as many queries inside the transaction as you want, but keep in mind two things:
Neo4j write-locks all involved nodes during a transaction, so if you have several processes all performing operations on the same node, you will see that only one process can complete a transaction at a time. We made our own business logic to handle write issues and opted to not even use transactions. It is working very well so far, writing 100,000 nodes and creating 100,000 relationships in about 30 seconds spread over 10 processes. It took 10 times longer to do in a transaction. We experience no deadlocking or race conditions using UNWIND.
You have to await the tx.commit() or it won't commit before it nukes the session.
My opinion is that this type of transaction works great if you are using Polyglot (multiple databases) and need to create a node, and then write a document to MongoDB and then set the Mongo ID on the node.
It's very easy to reason about, and extend as needed.
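As a rough illustration of that polyglot flow (the label, collection name and mongoDb handle below are made up), the Neo4j transaction only commits once the MongoDB write has succeeded:

const createWithDocument = async (props) => {
  const session = driver.session()
  const tx = session.beginTransaction()
  try {
    // 1. Create the node inside the open transaction.
    const result = await tx.run(
      `CREATE (n:Thing { name: $props.name }) RETURN id(n) AS nodeId`,
      { props }
    )
    const nodeId = result.records[0].get('nodeId')

    // 2. Write the companion document to MongoDB (hypothetical collection handle).
    const doc = await mongoDb.collection('things').insertOne({ neoId: nodeId, ...props })

    // 3. Store the Mongo ID back on the node, still in the same transaction.
    await tx.run(
      `MATCH (n:Thing) WHERE id(n) = toInteger($nodeId) SET n.mongoId = $mongoId`,
      { nodeId, mongoId: doc.insertedId.toString() }
    )

    // Nothing is visible in Neo4j until this commit succeeds.
    await tx.commit()
    session.close()
    return { nodeId, mongoId: doc.insertedId.toString() }
  } catch (e) {
    // The Mongo write may need manual cleanup; the Neo4j side rolls back cleanly.
    await tx.rollback()
    session.close()
    throw e
  }
}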