How to update the cache when deleting, updating, or adding data? - node.js

I use cache-all for data caching. Suppose I add a new record: it gets saved, but when a request to display all data comes in, the new record is missing from the results. It only appears once the cache entry's expiry timer runs out. How can I make the cache update whenever data is added, updated, or deleted?
index.js:
const express = require('express')
const cache = require('cache-all')

const app = express()
const port = 3000 // example port

cache.init({
  expireIn: 90,
  isEnable: true
})

app.listen(port, () => {
  console.log(`Server has been started on ${port}`)
})
route:
const cache = require('cache-all')
const router = require('express').Router()

router.get('/get_all', cache.middleware(90), controller.getAll)
Alternatively, can you recommend a decent data-caching module that is easy to use?

You've stumbled upon one of the great problems in computer science: cache invalidation.
The approach will depend greatly on your application, but usually a good way to get going is to manually invalidate the cache when you know you've changed the underlying data. For example, if you cache user profiles, you might read from the cache on a /user/:user_id call that fetches that user's profile. To invalidate the cache (or update it), you would remove or overwrite the cache entry whenever a call changes the user's profile. Here's some pseudo code to illustrate.
const cache = require('cache-all');

router.get('/user/:username', (req, res) => {
  const username = req.params.username;
  return cache.get('user:' + username).then(userProfile => {
    if (userProfile) {
      return userProfile;
    }
    // There was no entry in the cache, so we had a "cache miss".
    // Look the profile up in the database, then add it to the cache
    // for subsequent requests (findByUsername is a stand-in for your
    // own data-access call).
    return database.user.findByUsername(username).then(profile =>
      cache.set('user:' + username, profile).then(() => profile)
    );
  }).then(userProfile => res.json(userProfile));
});
router.patch('/user/:username', (req, res) => {
  const username = req.params.username;
  const profileChanges = req.body.profile;
  let profileToReturn = {};
  return database.user.update(username, profileChanges).then(newProfile => {
    profileToReturn = newProfile;
    // We have updated something we know will be in the cache, so we need
    // to either invalidate it (removing the entry) or update it. In this
    // case we've decided to update the cache since we think it'll be used
    // again very quickly.
    return cache.set('user:' + username, profileToReturn);
  }).then(cacheResult => {
    return res.json(profileToReturn);
  });
});
You can see from this example that we have two endpoints: one reads from the cache if it can (otherwise it goes to the database), and the other updates a value and also updates the cache. Much of this will depend on your application, your reasons for caching, your load, and so on, but this should help you along.
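For completeness, here is a hedged sketch of the "invalidate" alternative mentioned above, dropping the entry when the resource is deleted. It assumes cache-all exposes a remove(key) method and that database.user.delete exists in your (hypothetical) data layer, mirroring the pseudo code above.
const cache = require('cache-all');

router.delete('/user/:username', (req, res) => {
  const username = req.params.username;
  return database.user.delete(username)
    // Remove the stale entry; the next GET will re-populate the cache.
    .then(() => cache.remove('user:' + username))
    .then(() => res.status(204).end());
});
Invalidate-on-write is usually the safer default: it costs one extra cache miss, but you never risk serving a stale or half-built entry.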

Can't you use cache.set('foo', 'bar') to set the new value? Would that not update the cache?

Related

How to load a selected item from the database?

So I have a default model set up for viewing my data, and a form for inputting the data. I want to know what the best practice is for retrieving a single selected item. It's for a MERN stack.
Currently I am using the window hash: I add the id onto the URL and retrieve the record from the database that way. This feels janky, though, and as I try to add update functionality it seems like it might get confusing.
I've thought about adding a currentID to Redux, but I can see problems occurring when that is persisted: you go to create a recipe after viewing one and end up editing instead of creating.
Retrieving the id from the URL:
const recipeId = window.location.hash.substr(1);
const recipe = useSelector((state) =>
  state.recipes.find((r) => r._id === recipeId)
);
I get my recipes from Mongo:
export const recipeList = async (req, res) => {
  try {
    const recipes = await recipeSheet.find();
    res.status(200).json(recipes);
  } catch (error) {
    res.status(404).json({ message: error.message });
  }
};
and store them in Redux:
export const getRecipes = () => async (dispatch) => {
  try {
    const { data } = await api.fetchRecipes();
    dispatch({ type: "FETCH_ALL_RECIPES", payload: data });
  } catch (error) {
    console.log(error.message);
  }
};
It depends on how large your data is. It'd be better to define a new GET path that retrieves a single record, like BASE_URL/api/recipes/123, or you can make the current endpoint accept a query parameter, find the specific id in the DB, and return it, like BASE_URL/api/recipes?id=123. The reason, besides optimization for large data sets, is that the record may change after you store all records in the Redux store; with the current solution you would show stale data to the user. Best practice says to choose the first option; the query-parameter style is usually reserved for filtering. Then, when the user navigates to the new URL, simply trigger an API call to the new endpoint and fetch the single record, as in the sketch below.
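A minimal sketch of the first option, reusing the question's recipeSheet Mongoose model (the handler name and route wiring are assumptions):
export const recipeById = async (req, res) => {
  try {
    // findById looks up a single document by its _id
    const recipe = await recipeSheet.findById(req.params.id);
    if (!recipe) {
      return res.status(404).json({ message: "Recipe not found" });
    }
    res.status(200).json(recipe);
  } catch (error) {
    res.status(404).json({ message: error.message });
  }
};

// e.g. router.get("/api/recipes/:id", recipeById);
On the client you would then dispatch a fetch-one action keyed by the id from the route, instead of searching the whole recipes array in the store.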

Node / Express generate calendar URL

I have a database with a bunch of dates and an online overview where you can view them. I know I can copy a URL from my Google Agenda and import it into other calendar clients so I can view the events there.
I want to create an Express endpoint that fetches every event each time it is called and returns the data in a format other calendar clients can import. With packages like iCal-generator I could generate, read, and return the file whenever a user requests the URL, but it feels redundant to write a file to storage only to read it, return it, and delete it on every request.
What is the most efficient way to go about this?
Instead of generating the file/calendar data on every request, you could implement a simple caching mechanism: upon start of your node app, generate the calendar data and put it in your cache with a corresponding time-to-live value. Once the data has expired, or new entries are inserted into your DB, you invalidate the cache, re-generate the data, and cache it again.
Here's a very simple example for an in-memory cache that uses the node-cache library:
const NodeCache = require('node-cache');
const cacheService = new NodeCache();

// ...

const calendarDataCacheKey = 'calendar-data';

// at the start of your app, generate the calendar data and cache it with a ttl of 30 min
cacheCalendarData(generateCalendarData());

function cacheCalendarData (calendarData) {
  cacheService.set(calendarDataCacheKey, calendarData, 1800);
}

// in your express handler, first try to get the value from the cache;
// if it's not there, generate it and cache it
app.get('/calendar-data', (req, res) => {
  let calendarData = cacheService.get(calendarDataCacheKey);
  if (calendarData === undefined) {
    calendarData = generateCalendarData();
    cacheCalendarData(calendarData);
  }
  res.send(calendarData);
});
If your app is scaled horizontally, you should consider using Redis instead, so that all instances share the same cache.
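For reference, a minimal sketch of the same pattern backed by Redis, using the node redis client's v4 API (the key name and TTL mirror the node-cache example above; error handling is omitted for brevity):
const { createClient } = require('redis');

const redisClient = createClient(); // defaults to localhost:6379
// call `await redisClient.connect()` once at startup before using the client

async function cacheCalendarData(calendarData) {
  // EX sets the time-to-live in seconds, like node-cache's third argument
  await redisClient.set('calendar-data', JSON.stringify(calendarData), { EX: 1800 });
}

async function getCachedCalendarData() {
  const cached = await redisClient.get('calendar-data');
  return cached === null ? undefined : JSON.parse(cached);
}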
100% untested, but I have code similar to this that exports to a .csv from a db query, and it might get you close:
const { Readable } = require('stream');

async function getCalendar(req, res) {
  const events = await db.getCalendarEvents();
  const filename = 'some_file.ics';
  res.set({
    'Content-Type': 'text/calendar',
    'Content-Disposition': `attachment; filename=${filename}`,
  });
  // A Readable we push into manually; the no-op read() is required so the
  // stream doesn't throw when it is asked for data.
  const input = new Readable({ read() {} });
  input.pipe(res)
    .on('error', (err) => {
      console.error('SOME ERROR', err);
      res.status(500).end();
    });
  // Push string chunks: an HTTP response can't consume raw objects, so
  // toIcalString is a placeholder for your event-to-iCal transform.
  events.forEach(e => input.push(toIcalString(e)));
  input.push(null); // signal end of stream
}
If you were going to use the iCal-generator package, you would do your transforms within the forEach callback before pushing to the stream.
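A hedged sketch of that route with ical-generator (v1-style CommonJS API; newer majors changed the import, so check your installed version). The event field names (e.start, e.end, e.title) are assumptions about your DB schema:
const ical = require('ical-generator');

async function getCalendar(req, res) {
  const events = await db.getCalendarEvents();
  const cal = ical({ name: 'My events' });
  events.forEach(e => cal.createEvent({
    start: e.start,
    end: e.end,
    summary: e.title,
  }));
  // serve() sets the Content-Type/Content-Disposition headers and
  // writes the serialized calendar to the response.
  cal.serve(res);
}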

Am I having a local Firestore database?

I want to understand what kind of Firestore database is installed on my box.
The code is running on Node.js 9.
If I remove the internet for X minutes and put it back, I can see all the cached transactions going to Firestore (adds, updates, deletes).
If I add the firebase.firestore().enablePersistence() line after firebase.initializeApp(fbconfig), I get this error:
Error enabling offline persistence. Falling back to persistence
disabled: FirebaseError: [code=unimplemented]: This platform is either
missing IndexedDB or is known to have an incomplete implementation.
Offline persistence has been disabled.
Now, my question is: if I don't have persistence enabled, or can't have it, how come I still see internal transactions going on while my device is disconnected from the internet? Am I really interpreting this the right way?
To me, the fact that the console.log() inside the then() of batch.commit or transaction.update doesn't appear right away (only when the internet comes back) suggests there is some kind of internal database persistence, don't you think?
Thanks in advance for your help.
UPDATE
When sendUpdate is called, it looks like the batch.commit is executed, because I can see something going on in listenMyDocs(), but the console.log "Commit successfully!" is not shown until the internet comes back:
function sendUpdate(response) {
  const db = firebase.firestore();
  let batch = db.batch();
  let ref = db.collection('my-collection')
    .doc('my-doc')
    .collection('my-doc-collection')
    .doc('my-new-doc');
  batch.update(ref, { "variable": response.state });
  batch.commit().then(() => {
    console.log("Commit successfully!");
  }).catch((error) => {
    console.error("Commit error: ", error);
  });
}
function listenMyDocs() {
  const firebase = connector.getFirebase();
  const db = firebase.firestore()
    .collection('my-collection')
    .doc('my-doc')
    .collection('my-doc-collection');
  const query = db.where('var1', '==', "true")
    .where('var2', '==', false);

  query.onSnapshot(snapshot => {
    snapshot.docChanges().forEach(change => {
      if (change.type === 'added') {
        console.log('ADDED');
      }
      if (change.type === 'modified') {
        console.log('MODIFIED');
      }
      if (change.type === 'removed') {
        console.log('DELETED');
      }
    });
  });
}
the console.log "Commit successfully!" is not shown until the internet comes back
This is the expected behavior. Completion listeners fire once the data is committed on the server.
Local events may fire before completion, in an effort to allow your UI to update optimistically. If the server changes the behavior that the client raised events for (for example, if the server rejects a write), the client will fire reconciliatory events (so if an add was rejected, it will fire a change.type = 'removed' event once that is detected).
I am not entirely sure if this applies to batch updates though, and it might be tricky to test that from a Node.js script as those usually bypass the security rules.
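If you want to observe this from code, here is a small hedged sketch reusing the question's query: snapshot metadata distinguishes optimistic local events from server-confirmed ones.
query.onSnapshot({ includeMetadataChanges: true }, snapshot => {
  snapshot.docChanges().forEach(change => {
    // hasPendingWrites is true for latency-compensated local events
    // that the server has not yet acknowledged
    const source = change.doc.metadata.hasPendingWrites ? 'local (optimistic)' : 'server';
    console.log(change.type, 'from', source);
  });
});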

What is the reason for using GET instead of POST in this instance?

I'm walking through the JavaScript demos of pg-promise-demo and I have a question about the route /api/users/:name.
Running this locally works, the user is entered into the database, but is there a reason this wouldn't be a POST? Is there some sort of advantage to creating a user in the database using GET?
// index.js
// --------
app.get('/api/users/:name', async (req, res) => {
  try {
    const data = await db.task('add-user', async (t) => {
      const user = await t.users.findByName(req.params.name);
      return user || t.users.add(req.params.name);
    });
    res.json(data);
  } catch (err) {
    // do something with error
  }
});
For brevity I'll omit the code for t.users.findByName(name) and t.users.add(name) but they use QueryFile to execute a SQL command.
EDIT: Update link to pg-promise-demo.
The reason is explained right at the top of that file:
IMPORTANT:
Do not re-use the HTTP-service part of the code from here!
It is an over-simplified HTTP service with just GET handlers, because:
This demo is to be tested by typing URL-s manually in the browser;
The focus here is on a proper database layer only, not an HTTP service.
I think it is pretty clear that you are not supposed to follow the HTTP implementation of the demo, but rather only its database layer. The demo's purpose is to teach you how to organize the database layer in a large application, not how to develop HTTP services.
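In a real service, creating a user would indeed be a POST. Here is a hedged sketch of what that could look like, reusing the question's db.task code (the response shape is an assumption):
app.post('/api/users/:name', async (req, res) => {
  try {
    const data = await db.task('add-user', async (t) => {
      const user = await t.users.findByName(req.params.name);
      // create the user only if it doesn't already exist
      return user || t.users.add(req.params.name);
    });
    res.json({ success: true, data });
  } catch (err) {
    res.status(500).json({ success: false, error: err.message || err });
  }
});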

Firebase Cloud Function, getting a 304 error

I have a Firebase cloud function that resets a number under every user's UID back to 0 every day. I have about 600 users, and so far it's been working perfectly fine.
But today it's giving me a 304 error and not resetting the value.
Here is the function code:
export const resetDailyQuestsCount = functions.https.onRequest((req, res) => {
  const ref = db.ref('users');
  ref.once('value').then(snap => {
    snap.forEach(item => {
      const uid = item.child('uid').val();
      ref.child(uid).update({ dailyQuestsCount: 0 }).catch(err => {
        res.status(500).send(err);
      });
    });
  }).catch(err => {
    res.status(500).send(err);
  });
  res.status(200).send('daily quest count reset');
});
Could this be my userbase growing too large? I doubt it, 600 is not that big.
Any help would be really appreciated! This is really affecting my users.
An HTTP function must send only a single response to the client, which means a single call to send(). Your function can possibly attempt to send multiple responses in the event that several updates fail. Your logging isn't complete enough to demonstrate this, but it's a very real possibility with what you've shown. Note also that the final res.status(200).send() runs before the asynchronous work completes; see the sketch after this paragraph.
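As an illustration only (not the answerer's code): a minimal rewrite that collects all the update promises and sends exactly one response once they have all settled.
export const resetDailyQuestsCount = functions.https.onRequest(async (req, res) => {
  try {
    const ref = db.ref('users');
    const snap = await ref.once('value');
    const updates = [];
    snap.forEach(item => {
      const uid = item.child('uid').val();
      updates.push(ref.child(uid).update({ dailyQuestsCount: 0 }));
    });
    // wait for every update before responding, so send() is called exactly once
    await Promise.all(updates);
    res.status(200).send('daily quest count reset');
  } catch (err) {
    res.status(500).send(String(err));
  }
});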
Also bear in mind that this function is very much not scalable, since it reads all of your users before processing them. For a large number of users, this presents memory problems. You should look into ways to limit the number of nodes read by your query in order to prevent future problems.
