Oracle DB | Node.js - Insertion Event Tracking

The DB team inserts new data into a table. Whenever new data is inserted, I need to send messages.
Is there any way I can track new inserts using Node.js? There is no fixed schedule for data insertion.

If your DB is a remote DB, you need a full-duplex connection.
After verifying connectivity with telnet from both servers, do the following:
const connection = await oracledb.getConnection({
    user          : user,
    password      : password,
    connectString : connectString,
    events        : true
});

function myCallback(message) {
    console.log('CQN Triggered');
}

const options = {
    callback : myCallback,  // method called by notifications
    sql      : `SELECT ID FROM table WHERE STATUS = 0`,  // query of interest
    timeout  : 600,
    qos      : oracledb.SUBSCR_QOS_ROWIDS,  // SUBSCR_QOS_QUERY: generate notifications when new rows with STATUS = 0 are found
    clientInitiated : true  // false by default
};

await connection.subscribe('mysub', options);

See the node-oracledb documentation on Continuous Query Notification, which lets your Node.js app be notified if data has changed in the database.
There are examples in cqn1.js and cqn2.js. When using Oracle Database and Oracle client libraries 19.4 or later, you'll find testing easier if you set the optional subscription attribute clientInitiated to true:
const connection = await oracledb.getConnection({
    user          : "hr",
    password      : mypw,  // mypw contains the hr schema password
    connectString : "localhost/XEPDB1",
    events        : true
});

function myCallback(message) {
    console.log(message);
}

const options = {
    sql             : `SELECT * FROM mytable`,  // query of interest
    callback        : myCallback,               // method called by notifications
    clientInitiated : true
};

await connection.subscribe('mysub', options);
You could also look at Advanced Queuing as another way to propagate messages, though you would still need to use something like a table trigger or CQN to initiate an AQ message.


Is it okay if I intentionally let my Google Cloud Functions throw multiple errors?

I have a collection and sub-collection like this:
users/{userID}/followers/{followerID}
Every time a follower document is deleted in the followers sub-collection, it triggers the Firestore trigger below to decrease the numberOfFollowers field in the user document. This is normally triggered when a user clicks the unfollow button:
exports.onDeleteFollower = functions
    .firestore.document("users/{userID}/followers/{followerID}")
    .onDelete((snapshot, context) => {
        // normally triggered after a user pushes the unfollow button
        // then update the user document
        const userID = context.params.userID;
        const updatedData = {
            numberOfFollowers: admin.firestore.FieldValue.increment(-1),
        };
        return db.doc(`users/${userID}`).update(updatedData);
    });
Now I have a case like this: if a user deletes their account, I delete the user document (users/{userID}). But deleting a user document does not automatically delete the documents inside its sub-collections, right? So after deleting the user document, I have another function that deletes all documents inside the followers sub-collection.
The problem is that the onDeleteFollower trigger above will then be executed multiple times, and it will throw an error each time, because the user document has already been deleted (the function above updates a field in the now-deleted user doc).
I get this error in the functions emulator:
⚠ functions: Error: 5 NOT_FOUND: no entity to update: app: "myApp"
path <
Element {
type: "users"
name: "u1-HuWQ5hoCQnOAwh0zRQM0nOe96K03"
}
>
⚠ Your function was killed because it raised an unhandled error.
I could write logic to check whether the user document still exists, and only update the numberOfFollowers field if it does. But deleting a user document is very rare compared to users clicking the unfollow button, so I don't think that check is very efficient.
My plan is to intentionally let the errors happen. Say a user has 1000 followers; deleting them triggers the onDeleteFollower function above, and I will get 1000 function errors.
My question is: is it okay to have multiple errors in a short time like that? Will Google Cloud Functions terminate my function, or will something bad happen that I'm not aware of? As far as I know, Cloud Functions automatically makes the function available again after it is killed, so will my function always be ready again after an error like that?
I can't let the follower update the organizer (user) document directly from the client app, because it is not safe, and creating security rules to facilitate this is complicated and seems error-prone.
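If you do keep the current trigger design, another option is to tolerate the expected failure inside onDeleteFollower itself: catch the NOT_FOUND error (gRPC code 5) and treat it as a no-op, so the function exits cleanly instead of being killed. A minimal plain-JavaScript sketch of that pattern; `decrementFollowers`, `updateFn`, and the in-memory Map are hypothetical stand-ins for the real Firestore update call:

```javascript
// Sketch: swallow the expected NOT_FOUND error that occurs when the
// parent user document was already deleted; rethrow anything else.
async function decrementFollowers(updateFn, userID) {
  try {
    return await updateFn(userID, { numberOfFollowers: -1 });
  } catch (err) {
    if (err.code === 5) {
      // Parent user doc is gone (account deleted): nothing to update, not a failure.
      return null;
    }
    throw err; // any other error is still unexpected
  }
}

// Tiny in-memory demo of the behavior:
const docs = new Map([['u1', { numberOfFollowers: 3 }]]);
const fakeUpdate = async (id, delta) => {
  const doc = docs.get(id);
  if (!doc) {
    const e = new Error('5 NOT_FOUND: no entity to update');
    e.code = 5; // mimic the gRPC NOT_FOUND status code
    throw e;
  }
  doc.numberOfFollowers += delta.numberOfFollowers;
  return doc;
};

decrementFollowers(fakeUpdate, 'u1').then(d => console.log(d.numberOfFollowers)); // 2
decrementFollowers(fakeUpdate, 'deleted-user').then(d => console.log(d)); // null
```

This avoids both the unhandled-error kills and the extra existence-check read on every unfollow.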
Have you considered instead of setting/removing users/{userID}/followers/{followerID} directly, that you create a "follow request" system?
"users/{userID}/followRequests/{requestID}": { // requestID would be auto-generated
    user: "{followerID}",
    follow: true // true = add user as follower, false = remove user as follower
}
This then allows you to use a single onCreate trigger to update your followers list, eliminating the need for your current onCreate and onDelete triggers on users/{userID}/followers/{followerID}. From this function you can implement restrictions on following other users, like follow limits or denying follow requests from blocked users.
export const newFollowRequest = functions.firestore
    .document('users/{userId}/followRequests/{requestId}')
    .onCreate(async (snap, context) => {
        const request = snap.data();
        const followingUserId = request.user;
        const followedUserId = context.params.userId;
        const db = admin.firestore();
        const batch = db.batch();
        const userDocRef = db.doc(`users/${followedUserId}`);
        const followerDocRef = db.doc(`users/${followedUserId}/followers/${followingUserId}`);
        // /users/${followingUserId}/following/${followedUserId} ?
        try {
            if (request.follow) {
                // Example restriction: Is the user who is attempting to follow
                // blocked by followedUserId?
                // await assertUserIDHasNotBlockedUserID(followedUserId, followingUserId);
                // following
                batch.update(userDocRef, {
                    numberOfFollowers: admin.firestore.FieldValue.increment(1),
                });
                batch.set(followerDocRef, {
                    /* ... */
                });
            } else {
                // unfollowing
                batch.update(userDocRef, {
                    numberOfFollowers: admin.firestore.FieldValue.increment(-1),
                });
                batch.delete(followerDocRef);
            }
            // delete this request when successful
            batch.delete(snap.ref);
            // commit database changes atomically
            await batch.commit();
            console.log(`#${followingUserId} ${request.follow ? "followed" : "unfollowed"} #${followedUserId} successfully`);
        } catch (err) {
            // something went wrong, update this document with a failure reason (to show on the client)
            let failureReason = undefined;
            switch (err.message) {
                case "other user is blocked":
                    failureReason = "You are blocked by #otherUser";
                    break;
                case "user is blocked":
                    failureReason = "You have blocked #otherUser";
                    break;
            }
            return snap.ref
                .update({
                    failureReason: failureReason || "Unknown server error"
                })
                .then(() => {
                    if (failureReason) {
                        console.log("REQUEST REJECTED: " + failureReason);
                    } else {
                        console.error("UNEXPECTED ERROR:", err);
                    }
                },
                (err) => {
                    console.error("UNEXPECTED FIRESTORE ERROR:", err);
                });
        }
    });

Using wildcards in firestore get query

I want to create a cloud function in firebase that gets triggered whenever a user logs in for the first time. The function needs to add the UID from the authentication of the specific user to a specific, already existing document in firestore. The problem is that the UID needs to be added to a document of which I do not know the location. The code I have right now doesn't completely do that, but this is the part where it goes wrong. The database looks like this when simplified
organisations
    [randomly generated id]
        people
            [randomly generated id] (in here, a specific document needs to be found based on a known email address)
There are multiple different organisations and it is unknown to which organisation the user belongs. I thought of using a wildcard, something like the following:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

console.log('function ready');

//Detect first login from user
//if(firebase.auth.UserCredential.isNewUser()){
if (true) {
    //User is logged in for the first time
    //const userID = firebase.auth().currentUser.uid;
    //const userEmail = firebase.auth().currentUser.email;
    const userID = '1234567890';
    const userEmail = 'example@example.com';

    //Get email, either personal or work
    console.log('Taking a snapshot...');
    const snapshot = db.collection('organisations/{orgID}/people').get()
        .then(function (querySnapshot) {
            querySnapshot.forEach(function (doc) {
                console.log(doc.data());
            });
        });
}
I commented out some authentication-based lines for testing purposes. I know the code still runs, because hardcoding the orgID does return the right values. Also, looping through every organisation is not an option, because there may eventually be a large number of organisations.
A lot of solutions are based on firestore triggers, like onWrite, where you can use wildcards like this.
However, I don't think that's possible in this case
The solution to the problem above:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

//Add UID to document in DB[FMIS-94]
//Detect first login from user
//if(firebase.auth.UserCredential.isNewUser()){
if (true) {
    //User is logged in for the first time
    //const userID = firebase.auth().currentUser.uid;
    //const userEmail = firebase.auth().currentUser.email;
    const userID = '1234567890';
    const userEmail = 'example@example.com';
    let docFound = false;

    //Get email, either personal or work
    console.log('Taking a snapshot...');

    //Test for work email
    db.collectionGroup('people').where('email.work', '==', userEmail).get()
        .then(function (querySnapshot) {
            querySnapshot.forEach(function (doc) {
                //work email found
                console.log('work email found');
                console.log(doc.data());
                docFound = true;
                const organisationID = doc.ref.parent.parent.id;
                writeUID(doc.id, userID, organisationID);
            });
            //Only fall back to the personal email after the first query resolves,
            //otherwise docFound would be checked before it can be set
            if (!docFound) {
                //Test for personal email
                return db.collectionGroup('people').where('email.personal', '==', userEmail).get()
                    .then(function (querySnapshot) {
                        querySnapshot.forEach(function (doc) {
                            //personal email found
                            console.log('personal email found');
                            console.log(doc.data());
                            const organisationID = doc.ref.parent.parent.id;
                            writeUID(doc.id, userID, organisationID);
                        });
                    });
            }
        });
}

async function writeUID(doc, uid, organisationID) {
    await db.collection(`organisations/${organisationID}/people`).doc(doc).set({
        userId: uid
    }, { merge: true });
}
This was exactly what I needed, thanks for all your help everyone!
It is not possible to trigger a Cloud Function when a user logs in to your frontend application. There is no such trigger among the Firebase Authentication triggers.
If you want to update a document based on some characteristics of the user (uid or email), you can do that from the app, after the user has logged in.
You mention, in your question, "in here, a specific document needs to be found based on known email address". You should first build a query to find this document and then update it, all of that from the app.
Another classical approach is to create, for each user, a specific document which uses the user uid as document ID, for example in a users collection. It is then very easy to identify/find this document, since, as soon as the user is logged in, you know their uid.
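The uid-keyed layout above can be sketched in plain JavaScript, with a Map standing in for the users collection so it runs anywhere (the field names are illustrative, not from the question):

```javascript
// A Map stands in for a `users` collection whose document IDs are the auth uid.
const users = new Map();

// On sign-up: create the document under the user's uid.
function createUserDoc(uid, email) {
  users.set(uid, { email, userId: uid });
}

// On login: no query needed -- the uid IS the document ID,
// so the lookup is a direct O(1) fetch rather than a filtered scan.
function getUserDoc(uid) {
  return users.get(uid);
}

createUserDoc('abc123', 'example@example.com');
console.log(getUserDoc('abc123').email); // example@example.com
```

The same shape applies to Firestore: a document fetch by ID is simpler and cheaper than a query over an email field.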
I'm not sure I understand you correctly, but if you want to search across all people collections no matter which organisations document they're under, the solution is to use a collection group query:
db.collectionGroup('people').get()
    .then(function (querySnapshot) {
        querySnapshot.forEach(function (doc) {
            console.log("user: " + doc.id + " in organization: " + doc.ref.parent.parent.id);
        });
    });
This will return a snapshot across all people collections in your entire Firestore database.
First set up Cloud Functions according to the official documentation.
Then create functions like this:
exports.YOURFUNCTIONNAME = functions.firestore
    .document('organisations/{orgID}/people/{personID}')
    .onCreate(res => {
        const data = res.data();
        const email = data.email; // ---- your field name goes here ----
        // ---- then apply your logic here ----
    });
This triggers the function whenever a new document is created under a people sub-collection.

InputException Occurring in GetStream.io when Trying to Add Reactions for an Activity from NodeJS

When I try to add a reaction to an activity, it shows an error message saying that user_id cannot be empty. As per the doc, there was no field to pass the user id in the JS/Node sample code. Please help.
Language: Node JS
Code Used :
await client.reactions.add("like", activity.id );
also tried
await client.reactions.add("like", activity.id, "jack" );
Response: Error
Details : '{"detail":"Errors for fields \'user_id\'","status_code":400,"code":4,"exception":"InputException","exception_fields":{"user_id":["user_id is a required field"]},"duration":"0.16ms"} with HTTP status code 400' }
If your client is a server-side client (initialized with API key and API secret), you need to specify the user_id in the call to reactions.add. If you use a client-side integration (initialized with API key, user token and app_id), the user_id doesn't need to be specified.
For server-side your code will be:
const userId = 'bob';
await client.reactions.add('like', activity.id, null, { userId });
For clientside you will have:
const token = srvClient.createUserToken('bob'); // srvClient is initialized with apiKey and apiSecret
const client = stream.connect(
    apiKey,
    token,
    appId,
);
await client.reactions.add('like', activity.id);

Live Notification - DB Polling - Best Practice

I'm providing my users with live notifications.
I'm debating two options and can't decide which is the best way to go when polling the DB.
(The notifications are transmitted using WebSockets.)
Option 1 (current):
I hold a list of all the logged in users.
Every 1000 ms, I check the db for new notifications and if there is any, I send a message via WS to the appropriate user.
Pros:
This task is rather not expensive on resources
Cons:
In off-times, when there's only a new notification every minute or so, I poll the DB 60 times for no reason.
In a sense, it's not real-time because it takes 1 full second for new notifications to update. Had it been a chat service, 1 second is a very long time to wait.
Option 2:
Create a hook so that whenever a new notification is saved (or deleted), the DB gets polled.
Pros:
The db does not get polled when there are no new notifications
Actual real-time response.
Cons:
In rush-hour, when there might be as many as 10 new notifications generated every second, the db will be polled very often, potentially blocking its response time for other elements of the site.
In case a user was not logged in when their notification was generated, the notification update will be lost (since I only poll the DB for logged-in users), unless I also perform a count whenever a user logs in to retrieve the notifications from when they were offline. So now I poll the DB not only whenever my notification hook is triggered, but also every time a user logs in. If I have notifications generated every second, and 10 log-ins every second, I will end up polling my DB 20 times a second, which is very expensive for this task.
Which option would you choose? 1, 2? or neither? Why?
Here is the code I am currently using (option 1):
var activeSockets = {} //whenever a user logs in or out, this map is updated to only contain the logged-in users at any given moment

var count = function () {
    process.nextTick(function () {
        var ids = Object.keys(activeSockets) //the ids of all the logged in users
        //every user document has a field called newNotifications that updates whenever a new notification is available. 0=no new notifications, >0=there are new notifications
        User.find({_id: {$in: ids}}).select({newNotifications: 1}).lean().exec(function (err, users) {
            if (err) users = []
            for (var i = 0; i < users.length; i++) {
                var ws = activeSockets[String(users[i]._id)]
                if (ws != undefined) {
                    if (ws.readyState === ws.OPEN) {
                        //send the ws message only if it wasn't sent before
                        if (ws.notifCount != users[i].newNotifications) {
                            ws.send(JSON.stringify({notifications: users[i].newNotifications}))
                            activeSockets[String(users[i]._id)].notifCount = users[i].newNotifications
                        }
                    } else {
                        //if the user logged out while I was polling, remove them from the active users map
                        delete activeSockets[String(users[i]._id)]
                    }
                }
            }
            setTimeout(function () {
                count()
            }, 1000)
        })
    })
}
The implementation of Option 2 would be just as simple. Instead of rescheduling count() with setTimeout(), I only call it in my "new notification", "delete notification", and "log-in" hooks.
Code:
var activeSockets = {} //whenever a user logs in or out, this map is updated to only contain the logged-in users at any given moment

var count = function () {
    process.nextTick(function () {
        var ids = Object.keys(activeSockets) //the ids of all the logged in users
        //every user document has a field called newNotifications that updates whenever a new notification is available. 0=no new notifications, >0=there are new notifications
        User.find({_id: {$in: ids}}).select({newNotifications: 1}).lean().exec(function (err, users) {
            if (err) users = []
            for (var i = 0; i < users.length; i++) {
                var ws = activeSockets[String(users[i]._id)]
                if (ws != undefined) {
                    if (ws.readyState === ws.OPEN) {
                        //send the ws message only if it wasn't sent before
                        if (ws.notifCount != users[i].newNotifications) {
                            ws.send(JSON.stringify({notifications: users[i].newNotifications}))
                            activeSockets[String(users[i]._id)].notifCount = users[i].newNotifications
                        }
                    } else {
                        //if the user logged out while I was polling, remove them from the active users map
                        delete activeSockets[String(users[i]._id)]
                    }
                }
            }
            //setTimeout was removed
        })
    })
}
Hooks:
hooks = {
    notifications: {
        add: function () {
            count()
            //and other actions
        },
        remove: function () {
            count()
            //and other actions
        }
    },
    users: {
        logIn: function () {
            count()
            //and other actions
        }
    }
}
So, which option would you choose? 1, 2? Or neither? Why?
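One way to blunt Option 2's rush-hour concern is to coalesce bursts of hook calls into a single poll: if several notifications arrive within a short window, the DB is queried once for the whole burst. A minimal plain-Node.js sketch of that pattern; the `makeCoalescer` name and the 50 ms window are illustrative, not from the question:

```javascript
// Coalesce bursts of events into a single call to fn.
// If 10 hooks fire within the window, fn runs once, not 10 times.
function makeCoalescer(fn, windowMs) {
  let pending = false;
  return function schedule() {
    if (pending) return;   // a run is already queued for this burst
    pending = true;
    setTimeout(() => {
      pending = false;
      fn();                // single run for the whole burst
    }, windowMs);
  };
}

// Demo with a counter standing in for the real count():
let polls = 0;
const scheduleCount = makeCoalescer(() => { polls++; }, 50);

// Simulate a rush-hour burst: 10 hook invocations back to back.
for (let i = 0; i < 10; i++) scheduleCount();

setTimeout(() => console.log(polls), 100); // 1
```

The hooks would call `scheduleCount()` instead of `count()` directly, trading up to `windowMs` of latency for a bounded polling rate.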

Local NoSQL database for desktop application

Is there a NoSQL database solution for desktop applications similar to Sqlite where the database is a file on the user's machine? This database would be called by a nodejs application that is used on the desktop.
I see this is an old question, but a newer option for you would be AceBase, which is a fast, low-memory, transactional, index- and query-enabled NoSQL database engine and server for Node.js and the browser. Definitely a good NoSQL alternative to SQLite and very easy to use:
const { AceBase } = require('acebase');
const db = new AceBase('mydb');

// Add question to database:
const questionRef = await db.ref('stackoverflow/questions').push({
    title: 'Local NoSQL database for desktop application',
    askedBy: 'tt9',
    date: new Date(),
    question: 'Is there a NoSQL database solution for desktop applications similar to Sqlite where the database is a file on the user\'s machine? ..'
});
// questionRef is now a reference to the saved database path,
// eg: "stackoverflow/questions/ky9v13mr00001s7b829tmwk1"

// Add my answer to it:
const answerRef = await questionRef.child('answers').push({
    text: 'Use AceBase!'
});
// answerRef is now a reference to the saved answer in the database,
// eg: "stackoverflow/questions/ky9v13mr00001s7b829tmwk1/answers/ky9v5atd0000eo7btxid7uic"

// Load the question (and all answers) from the database:
const questionSnapshot = await questionRef.get();
// A snapshot contains the value and relevant metadata, such as the used reference:
console.log(`Got question from path "${questionSnapshot.ref.path}":`, questionSnapshot.val());

// We can also monitor data changes in realtime.
// To monitor new answers being added to the question:
questionRef.child('answers').on('child_added').subscribe(newAnswerSnapshot => {
    console.log(`A new answer was added:`, newAnswerSnapshot.val());
});

// Monitor any answer's number of upvotes:
answerRef.child('upvotes').on('value').subscribe(snapshot => {
    const prevValue = snapshot.previous();
    const newValue = snapshot.val();
    console.log(`The number of upvotes changed from ${prevValue} to ${newValue}`);
});

// Updating my answer text:
await answerRef.update({ text: 'I recommend AceBase!' });

// Or, using .set on the text itself:
await answerRef.child('text').set('I can really recommend AceBase');

// Adding an upvote to my answer using a transaction:
await answerRef.child('upvotes').transaction(snapshot => {
    let upvotes = snapshot.val();
    return upvotes + 1; // Return new value to store
});

// Query all given answers sorted by upvotes:
let querySnapshots = await questionRef.child('answers')
    .query()
    .sort('upvotes', false) // descending order, most upvotes first
    .get();

// Limit the query results to the top 10 with "AceBase" in their answers:
querySnapshots = await questionRef.child('answers')
    .query()
    .filter('text', 'like', '*AceBase*')
    .take(10)
    .sort('upvotes', false) // descending order, most upvotes first
    .get();

// We can also load the question in memory and make it "live":
// The in-memory value will automatically be updated if the database value changes, and
// all changes to the in-memory object will automatically update the database:
const questionProxy = await questionRef.proxy();
const liveQuestion = questionProxy.value;

// Changing a property updates the database automatically:
liveQuestion.tags = ['node.js', 'database', 'nosql'];
// ... db value of tags is updated in the background ...

// And changes to the database will update the liveQuestion object:
let now = new Date();
await questionRef.update({ edited: now });

// In the next tick, the live proxy value will have updated:
process.nextTick(() => {
    liveQuestion.edited === now; // true
});
I hope this is of help to anyone reading this, AceBase is a fairly new kid on the block that is starting to make waves!
Note that AceBase is also able to run in the browser, and as a remote database server with full authentication and authorization options. It can synchronize with server and other clients in realtime and upon reconnect after having been offline.
For more info and documentation check out AceBase at GitHub
If you want to quickly try the above examples, you can copy/paste the code into the editor at RunKit: https://npm.runkit.com/acebase
I use a local MongoDB instance. It is super easy to set up. Here is an easy how-to guide on setting up MongoDB.
You can also try Couchbase. There is an example of building a desktop app with it and Electron:
http://blog.couchbase.com/build-a-desktop-app-with-github-electron-and-couchbase
