KNEX: How to get nested data from foreign key using join? - node.js

I've been trying to structure a fetch call with a query to return joined data between the two tables in my database. I have one table for projects, and another for palettes that includes a foreign key of "project_id".
Below is one of many iterations I've tried so far that isn't working (it's probably a total mess by now). I tried a join for a while and then totally gave up, because fields with the same name were overwriting each other.
I also couldn't figure out how to get the palette data nested inside the project data, which would also resolve the issue of names overwriting. Finally I got to this point, just forgetting joins altogether and trying to manually structure the output, but I don't get data back or even any error message.
database('projects')
  .select()
  .then(projects => {
    return projects.map(async project => {
      return database('palettes')
        .where({ project_id: project.id })
        .then(palettes => ({ ...project, palettes }))
    })
  })
  .then(projects => res.status(200).json(projects))
  .catch(error => res.status(500).json({ error }))

You did not provide your database type and schema structure.
Assuming projects (project_id, name) and palettes (palette_id, name, project_id), and that you want to find all projects with a 1:1 relation to their palette, this should suffice:
knex
  .select(
    'projects.project_id',
    'projects.name as project_name',
    'palettes.palette_id',
    'palettes.name as palette_name'
  )
  .from('projects')
  .innerJoin('palettes', 'projects.project_id', 'palettes.project_id')
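If you would rather keep the palettes nested inside each project (as in your original attempt), the per-project queries can work too; the main issue in the posted snippet is that the array of pending promises is never resolved before it is serialized. A minimal sketch of that approach, assuming database is your configured knex instance and the table/column names from the question:

// Sketch only: resolve every per-project palette query with Promise.all
// before sending the response.
database('projects')
  .select()
  .then(projects =>
    Promise.all(
      projects.map(project =>
        database('palettes')
          .where({ project_id: project.id })
          .then(palettes => ({ ...project, palettes }))
      )
    )
  )
  .then(projects => res.status(200).json(projects))
  .catch(error => res.status(500).json({ error }));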

Related

What happens in CouchDB when I create an index repeatedly?

To implement sorting, in CouchDB we have to create an index (otherwise the corresponding mango query fails). I haven't found a way to do this in Fauxton (if I have missed something, please comment in Github), so I've decided to create it programmatically. As I'm using couchdb-nano, I've added:
this.clientAuthPromise.then(async () => {
  try {
    await this.client.use('test_polling_storage').createIndex({
      index: {
        fields: [
          'isoDate',
        ],
      },
      name: 'test_polling_storage--time_index',
    })
    console.log('index created?')
  } catch (error) {
    console.log(`failed to create index:`, error)
  }
})
into the storage class constructor, where
this.clientAuthPromise = this.client.auth(connectionParams.auth.user, connectionParams.auth.password)
Now, on each run of the server, I'm getting index created?, so the createIndex method (which presumably POSTs to /db/_index) doesn't fail (and sorting works, too). But as I haven't found an index viewer in Fauxton either, I wonder what actually happens on each call of createIndex: does it create a new index? Does it rebuild the index? Or does it see that an index with that name already exists and do nothing? It's annoying to deal with this blindly, so please clarify or suggest a way to find out.
Ok, as the docs suggest that the response will contain "created" or "exists", I've tried
const result = await this.client.use('test_polling_storage').createIndex({
...
console.log('index created?', result.result)
got index created? exists and concluded that if the index was created before, it won't be re-created. It's not clear what will happen if I try to change the index, but at least now I have a means to find out.
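For completeness, a small sketch (based only on the documented "created"/"exists" values in the response body) that makes the two outcomes visible:

// Sketch: distinguish first-time creation from an already existing index.
// The index definition is the same one used above.
const { result } = await this.client.use('test_polling_storage').createIndex({
  index: { fields: ['isoDate'] },
  name: 'test_polling_storage--time_index',
})
if (result === 'created') {
  console.log('index was created for the first time')
} else {
  console.log('an index with this name already existed, nothing was changed') // result === 'exists'
}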

Firebase cloud function to count and update collections

I have three collections in my Firebase project: one contains the locations that users have checked in from, and the other two are intended to hold leaderboards of the cities and suburbs with the most check-ins.
However, as a bit of a newbie to NoSQL databases, I'm not quite sure how to write the queries I need to get and set the data I want.
Currently, my checkins collection has this structure:
{
  Suburb: ...,
  City: ...,
  Leaderboard: ...
}
The leaderboard entry is a boolean to mark if the check in has already been added to the leaderboard.
What I want to do is query for all results where leaderboard is false, count the entries for all cities, count the entries for all suburbs, then add the city and suburb data to a separate collection, then update the leaderboard boolean to indicate they've been counted.
exports.updateLeaderboard = functions.pubsub.schedule('30 * * * *').onRun(async context => {
  db.collection('Bears')
    .where('Leaderboard', '==', 'false')
    .get()
    .then(snap => {
      snap.forEach(x => {
        // Count unique cities and return object: SELECT cities, COUNT(*) AS `count` FROM Bears GROUP BY cities
      })
    })
    .then(() => {
      console.log({ result: 'success' });
    })
    .catch(error => {
      console.error(error);
    });
})
Unfortunately, I've come to about the limit of my knowledge here and would love some help.
Firebase is meant to be a real-time platform, and most of your business logic is going to be expressed in Functions. Because the ability to query is so limited, lots of problems like this are usually solved with triggers and data denormalization.
For instance, if you want a count of all mentions of a city, then you have to maintain that count at event-time.
// On document create
await firestore()
  .collection("city-count")
  .doc(doc.city)
  .set({
    count: firebase.firestore.FieldValue.increment(1),
  }, { merge: true });
Since it's a serverless platform, it's built to run a lot of very small, very fast functions like this. Firebase is very bad at doing large computations -- you can quickly run into MB-per-minute and document-per-minute write limits.
Edit: here is how Firebase solved this exact problem from the perspective of a SQL-trained developer: https://www.youtube.com/watch?v=vKqXSZLLnHA
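As a rough illustration of that event-time approach (a sketch, not a drop-in solution: the collection name checkins and the City field come from the question, the "city-count" collection from the snippet above, and the rest is assumed), an onCreate trigger could look like this:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Sketch: every new check-in increments a per-city counter document,
// so the "leaderboard" is maintained at write time instead of by a batch job.
exports.countCheckin = functions.firestore
  .document('checkins/{checkinId}')
  .onCreate(snap => {
    const data = snap.data();
    return admin.firestore()
      .collection('city-count')
      .doc(data.City)
      .set(
        { count: admin.firestore.FieldValue.increment(1) },
        { merge: true }
      );
  });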
As clarified in this other post from the Community, Firestore doesn't have a built-in API for counting the documents returned by a query. You will need to read the matching documents, load them into a variable, and work with the data from there, counting how many have false in their Leaderboard field. While doing this, you can start adding the cities and suburbs to arrays that will later be written to the database, updating the other two collections.
The sample code below (untested) returns the documents where Leaderboard is false, increments a count, and shows where you would copy the City and Suburb values to the other collections. I basically reordered parts of your code and renamed the variables to generic ones for clarity, adding a comment where the values should be copied to the other collections.
...
// Create a reference to the check-in collection
let checkinRef = db.collection('cities');
// Create a query against the collection
let queryRef = checkinRef.where('Leaderboard', '==', false);
var count = 0;
queryRef.get()
  .then(snap => {
    snap.forEach(x => {
      // add the cities and suburbs to their collections here and update the counter
      count++;
    })
  })
...
You are very close to the solution; you just need to copy the values from one collection to the others once you have all the documents that have false in Leaderboard. You can find some good examples of copying documents from one collection to another in this other post from the Community: Cloud Functions: How to copy Firestore Collection to a new document?
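A rough, untested sketch of that last step, under the same assumptions as the code above (db and admin are the initialized Admin SDK handles; the Bears collection and the City, Suburb and Leaderboard fields come from the question; CityLeaderboard and SuburbLeaderboard are purely illustrative names for the two leaderboard collections):

// Sketch: run inside your scheduled function (async context => { ... }).
// Count the unflagged check-ins in memory, write the counts to the two
// leaderboard collections, then mark the check-ins as counted.
const snap = await db.collection('Bears').where('Leaderboard', '==', false).get();

const cityCounts = {};
const suburbCounts = {};
snap.forEach(doc => {
  const { City, Suburb } = doc.data();
  cityCounts[City] = (cityCounts[City] || 0) + 1;
  suburbCounts[Suburb] = (suburbCounts[Suburb] || 0) + 1;
});

const batch = db.batch(); // note: a single batch is limited to 500 writes
Object.keys(cityCounts).forEach(city => {
  batch.set(db.collection('CityLeaderboard').doc(city),
    { count: admin.firestore.FieldValue.increment(cityCounts[city]) },
    { merge: true });
});
Object.keys(suburbCounts).forEach(suburb => {
  batch.set(db.collection('SuburbLeaderboard').doc(suburb),
    { count: admin.firestore.FieldValue.increment(suburbCounts[suburb]) },
    { merge: true });
});
snap.forEach(doc => batch.update(doc.ref, { Leaderboard: true }));
await batch.commit();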
Let me know if the information helped you!

How to change and save the association of a model instance?

I'm using sequelize for node.js. I define this relationship:
// 1:M - Platform table with Governance status table
dbSchema.Platform_governance.belongsTo(dbSchema.Governance_status, {foreignKey: 'platform_status'});
dbSchema.Governance_status.hasMany(dbSchema.Platform_governance, {foreignKey: 'platform_status'});
So that means I have a table called platform_governance which has a foreign key that points to a governance_status row. A user wants to change this foreign key to point to a different governance_status row.
So I first want to check that the governance_status row the user selected actually exists and is unique, and then make the foreign key point to it. This is what I currently have:
// first I select the platform_governance whose foreign key I want to change
dbSchema.Platform_governance.findById(5, { include: [dbSchema.Governance_status] }).then(result => {
  // Now I search for the user-requested governance_status
  dbSchema.Governance_status.findAll({ where: { user_input } }).then(answer => {
    // I check that one and only one row was found:
    if (answer.length != 1) {
      console.log('error')
    } else {
      // Here I want to update the foreign key
      // I want and need to do it through the associated model, not the foreign key name
      result.set('governance_status', answer[0])
      result.save().then(result => console.log(result.get({ plain: true }))).catch(err => console.log(err))
    }
  })
})
The result.save() promise returns successfully and the object printed in console is correct, with the new governance_status correctly set. But if I go to the database NOTHING has changed. Nothing was really saved.
Oops, just found the problem. When setting associations like this you shouldn't use the generic set() method. Instead, Sequelize generates a setter for each association. In my case I had to use setGovernance_status():
// Update corresponding foreign keys
result.setGovernance_status(answer[0])
If anyone can point to where this is documented, I would appreciate it :)
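As an aside, the generated association setter issues the UPDATE itself and returns a promise, so it should be awaited; a separate result.save() is no longer needed for the foreign key. A sketch of the corrected flow, keeping the names from the question (findById was renamed findByPk in Sequelize v5+):

dbSchema.Platform_governance
  .findById(5, { include: [dbSchema.Governance_status] })
  .then(async result => {
    const answer = await dbSchema.Governance_status.findAll({ where: { user_input } });
    if (answer.length !== 1) {
      console.log('error');
      return;
    }
    // setGovernance_status() persists the new foreign key on its own
    await result.setGovernance_status(answer[0]);
    console.log(result.get({ plain: true }));
  })
  .catch(err => console.log(err));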

improve mongo query performance

I'm using a node based CMS system called Keystone, which uses MongoDB for a data store, giving fairly liberal control over data and access. I have a very complex model called Family, which has about 250 fields, a bunch of relationships, and a dozen or so methods. I have a form on my site which allows the user to enter in the required information to create a new Family record, however the processing time is running long (12s on localhost and over 30s on my Heroku instance). The issue I'm running into is that Heroku emits an application error for any processes that run over 30s, which means I need to optimize my query. All processing happens very quickly except one function. Below is the offending function:
const Family = keystone.list( 'Family' );

exports.getNextRegistrationNumber = ( req, res, done ) => {
  console.time( 'get registration number' );
  const locals = res.locals;

  Family.model.find()
    .select( 'registrationNumber' )
    .exec()
    .then( families => {
      // get an array of registration numbers
      const registrationNumbers = families.map( family => family.get( 'registrationNumber' ) );
      // get the largest registration number
      locals.newRegistrationNumber = Math.max( ...registrationNumbers ) + 1;
      console.timeEnd( 'get registration number' );
      done();
    }, err => {
      console.timeEnd( 'get registration number' );
      console.log( 'error setting registration number' );
      console.log( err );
      done();
    });
};
The processing in my .then() happens in milliseconds; however, the Family.model.find() takes way too long to execute. Any advice on how to speed things up would be greatly appreciated. There are about 40,000 Family records the query has to dig through, and there is already an index on the registrationNumber field.
It makes sense that the then() executes quickly while the find() takes a while: computing the largest value over records you already have in memory is cheap, whereas fetching the whole set from the database can be very time-consuming depending on a number of factors.
If you are simply reading the data and presenting it to the user via REST or some sort of visual interface, you can make use of lean(), which returns plain JavaScript objects. By default, the query returns mongoose.Document instances, which in your case is unnecessary, as there does not appear to be any data manipulation after your read query; you are just getting the data.
More importantly, it appears that all you need is one record: the record with the largest registrationNumber. You should always use findOne() when you are looking for one record in any set of records to maximize performance.
See this previous answer detailing the use of findOne() in a Node.js implementation, or see the MongoDB documentation for general information about this collection method.
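Putting both suggestions together, a sketch of what that could look like in the function from the question (untested; it assumes registrationNumber is numeric and indexed, as described):

// Sketch: ask MongoDB for only the single largest registrationNumber
// instead of loading all ~40,000 documents into memory.
Family.model
  .findOne()
  .select( 'registrationNumber' )
  .sort( { registrationNumber: -1 } ) // largest first, served by the existing index
  .lean()                             // plain object instead of a full mongoose.Document
  .exec()
  .then( family => {
    locals.newRegistrationNumber = ( family ? family.registrationNumber : 0 ) + 1;
    done();
  }, err => {
    console.log( 'error setting registration number' );
    console.log( err );
    done();
  });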

Need Suggestion in designing Redis structure

We are developing a system that deals with millions of records.
Redis Structure
Regarding the Redis structure, we are planning to use hashes for each device, user, etc., and to index every searchable/queryable field with sets that point back to those hash keys, like below.
Below is the node.js code (it's POC code):
var key = "dvc:" + data.id;
client.hmset(key, data, function (err, item) {
  client.sadd("tag:" + data.Tag, key, function (err, result1) {
  })
  client.sadd("serialNo:" + data.SerialNo, key, function (err, result1) {
  })
  client.sadd("currentStatus:" + data.Status, key, function (err, result1) {
  })
  client.sadd("createdOn:" + data.CreatedDate, key, function (err, result1) {
  })
  if (data.Boxes && data.Boxes.length > 0) {
    // loop to add all boxes
    for (var i = 0; i < data.Boxes.length; i++) {
      client.sadd("boxes:" + data.Boxes[i].id, key, function (err, result1) {
      })
    }
  }
})
Once this is done, the plan is to use set queries to get all the different tags, statuses, dates, and box ranges. Do you recommend this? We are planning to run a load test as well, but wanted to check if there is a better approach.
Requirement 2: we also need quick search (autocomplete).
For this we are planning to put all searchable text in a ZSET with a score and use
ZRANGEBYLEX for searching. Below is sample code:
client.zadd("zset",1,data.Tag+"|tag",function(err, result1){
console.log("imei "+result1);
})
client.zadd("zset",1,data.SerialNo+"|SerialNo",function(err, result1){
console.log("zset "+result1);
})
For querying:
zrangebylex zset [2073 "[2073\xff" LIMIT 0 10
Everything looks good so far, but do you have any suggestions?
Indeed, you store the data in hashes and then organize it in sets by different criteria, using both plain and sorted sets.
In effect, when you store data in Redis this way, you are directly building and consuming your own indexes.
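To make that concrete, a couple of example reads against the structures from the question (a sketch; the key names are the ones used in the POC code, while "foo" and "active" are placeholder values):

// All device keys tagged "foo" that currently have status "active":
client.sinter("tag:foo", "currentStatus:active", function (err, keys) {
  // keys is an array of "dvc:<id>" hash keys; fetch each hash as needed
  keys.forEach(function (key) {
    client.hgetall(key, function (err, device) {
      console.log(device);
    });
  });
});

// Autocomplete: the first 10 ZSET entries starting with "2073":
client.zrangebylex("zset", "[2073", "[2073\xff", "LIMIT", 0, 10, function (err, matches) {
  console.log(matches);
});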
