Updating a many-to-many join table using Sequelize for Node.js

I have a Products table and a Categories table. A single Product can have many Categories and a single Category can have many Products, so I have a ProductsCategories table to handle the many-to-many join.
In the example below, I'm trying to associate one of my products (that has an ID of 1) with 3 different categories (that have IDs of 1, 2, & 3). I know something is off in my code snippet below because I'm getting an ugly SQL error message indicating that I'm trying to insert an object into the ProductsCategories join table. I have no idea how to fix the snippet below or if I'm even on the right track here. The Sequelize documentation is pretty sparse for this kind of thing.
models.Product.find({ where: { id: 1 } }).on('success', function(product) {
  models.Category.findAll({ where: { id: [1, 2, 3] } }).on('success', function(category) {
    product.setCategories([category]);
  });
});
I'd really appreciate some help here, thanks. Also, I'm using Postgres, not sure if that matters.

models.Category.findAll returns an array. By doing setCategories([category]) you are wrapping that array in another array. Try changing it to setCategories(category) instead.
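In other words, a minimal sketch of the corrected snippet (same models and old-style event API as the question):

models.Product.find({ where: { id: 1 } }).on('success', function(product) {
  models.Category.findAll({ where: { id: [1, 2, 3] } }).on('success', function(categories) {
    // findAll already yields an array of Category instances, so pass it straight through
    product.setCategories(categories);
  });
});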

I think you are close. I had a similar issue with some of my code. Try iterating over your found categories and adding each one. I think this might do the trick.
models.Category.findAll({ where: { id: [1, 2, 3] } }).on('success', function(category) {
  for (var i = 0; i < category.length; i++) {
    // addCategory appends each association; setCategories would replace the whole list on every pass
    product.addCategory(category[i]);
  }
});

Related

Select one column from a TypeORM query - Node

I have a TypeORM query that returns five columns. I just want the company column returned, but I need to select all five columns to generate the correct response.
Is there a way to wrap my query in another select statement or transform the results to just get the company column I want?
See my code below:
This is what the query returns currently:
https://i.stack.imgur.com/MghEJ.png
I want it to return:
https://i.stack.imgur.com/qkXJK.png
const qb = createQueryBuilder(Entity, 'stats_table');
qb.select('stats_table.company', 'company');
qb.addSelect('stats_table.title', 'title');
qb.addSelect('city_code');
qb.addSelect('country_code');
qb.addSelect('SUM(count)', 'sum');
qb.where('city_code IS NOT NULL OR country_code IS NOT NULL');
qb.addGroupBy('company');
qb.addGroupBy('stats_table.title');
qb.addGroupBy('country_code');
qb.addGroupBy('city_code');
qb.addOrderBy('sum', 'DESC');
qb.addOrderBy('company');
qb.addOrderBy('title');
qb.limit(3);
qb.cache(true);
return qb.getRawMany();
};
TypeORM didn't meet my criteria, so I'm not experienced with it, but as long as it doesn't cause problems with TypeORM, I see an easy SQL solution and an almost as easy TypeScript solution.
The SQL solution is to simply not select the undesired columns. SQL will allow you to use fields you did not select in WHERE, GROUP BY, and/or ORDER BY clauses, though obviously you'll need to use 'SUM(count)' instead of 'sum' for the order. I have encountered some ORMs that are not happy with this though.
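A rough sketch of that first option against the same query builder (assuming TypeORM and your driver are happy with grouping and ordering on unselected columns):

const qb = createQueryBuilder(Entity, 'stats_table');
qb.select('stats_table.company', 'company');   // the only column that comes back
qb.where('city_code IS NOT NULL OR country_code IS NOT NULL');
qb.addGroupBy('company');
qb.addGroupBy('stats_table.title');
qb.addGroupBy('country_code');
qb.addGroupBy('city_code');
qb.addOrderBy('SUM(count)', 'DESC');           // the 'sum' alias no longer exists, so order by the expression
qb.addOrderBy('company');
qb.addOrderBy('title');
qb.limit(3);
return qb.getRawMany();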
The TS solution is to map the return from qb.getRawMany() so that you only keep the field you're interested in. Assuming getRawMany() resolves to an array of objects, that would look something like this:
(await qb.getRawMany()).map(companyRecord => ({ company: companyRecord.company }));
That may not be exactly correct; I've taken the day off precisely because I'm sick, and my brain is fuzzy enough that I was making too many stupid mistakes, but the concept should work even if the code itself doesn't.
EDIT: Also note that map returns a new array; it does not modify the existing array, so you would use this in place of the getRawMany() call when assigning, not after the assignment.

Speeding up my Cloudant query

I was wondering whether someone could provide some advice on my Cloudant query below. It is now taking upwards of 20 seconds to execute against a DB of 50,000 documents - I suspect I could be getting better speed than this.
The purpose of the query is to find all of my documents with the attribute "searchCode" equalling a specific value plus a further list of specific IDs.
Both searchCode and _id are indexed - any ideas why my query would be taking so long / what I could do to speed it up?
mydb.find({ selector: { "$or": [ { "searchCode": searchCode }, { "_id": { "$in": idList } } ] } }, function (err, result) {
  if (!err) {
    fulfill(result.docs);
  } else {
    console.error(err);
  }
});
Thanks,
James
You could try doing separate calls for the two queries:
find me documents where the searchCode = 'some value'
find me documents whose ids match a list of ids
The first can be achieved with a find call and a query like so:
{ selector: {"searchCode": searchCode} }
The second can be achieved by hitting the databases's _all_docs endpoint, passing in the list of ids as a keys parameter e.g.
GET /db/_all_docs?keys=["a","b","c"]
You might find that running both requests in parallel and merging the results gives you better performance.
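A rough sketch of that, assuming mydb is a nano/Cloudant database handle whose find and fetch methods return promises when no callback is passed:

Promise.all([
  // 1) selector query: documents whose searchCode matches
  mydb.find({ selector: { searchCode: searchCode } }),
  // 2) key lookup: POSTs the id list as the keys parameter to _all_docs (with include_docs)
  mydb.fetch({ keys: idList })
]).then(function (results) {
  var bySearchCode = results[0].docs;
  var byId = results[1].rows.map(function (row) { return row.doc; });
  // naive merge; de-duplicate on _id if the two sets can overlap
  fulfill(bySearchCode.concat(byId));
}).catch(console.error);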

Is there a way to only read a certain field from Mongoose?

I have a DB with documents nested a couple of levels deep, some of them pretty big.
I have searched the docs and Google/SO, but couldn't find a simple answer.
If the schema is like:
{
  roomId: String,
  created: Date,
  teacher: String,
  students: Object,
  problems: Array
}
Is there a way to just read the roomId of every entry?
Not return the whole thing, but just an array of the roomIds?
(Use case: I want to make a list of all saved rooms, so I need none of the other data, just the IDs. I want to avoid that overhead.)
I'm pretty sure it can be done, but I couldn't find out how.
Yes, use a projection
Model.findOne({...}, {roomId: 1})....
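For the "array of roomIds" use case, a sketch along those lines, assuming a model named Room built from the schema in the question:

Room.find({}, { roomId: 1, _id: 0 }, function (err, docs) {
  if (err) return console.error(err);
  // docs look like [{ roomId: 'a1' }, { roomId: 'b2' }, ...]; flatten to plain IDs
  var roomIds = docs.map(function (doc) { return doc.roomId; });
});

Room.distinct('roomId') would also give you a plain array directly, though it collapses any duplicate values.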

Using more than one geospatial index per collection in MongoDB

Current MongoDB documentation states the following:
You may only have 1 geospatial index per collection, for now. While MongoDB may allow to create multiple indexes, this behavior is unsupported. Because MongoDB can only use one index to support a single query, in most cases, having multiple geo indexes will produce undesirable behavior.
However, when I create two geospatial indices in a collection (using Mongoose), they work just fine:
MySchema.index({
  'loc1': '2d',
  extraField1: 1,
  extraField2: 1
});

MySchema.index({
  'loc2': '2d',
  extraField1: 1,
  extraField2: 1
});
My question is this: while it seems to work, the MongoDB documentation says this could "produce undesirable behavior". So far, nothing undesirable has been discovered in either testing or use.
Should I be concerned about this? If the answer is yes then what would you recommend as a workaround?
It is still not supported, so even though you can create two of them, it doesn't mean they are actually used properly. I would investigate the explain output in the mongo shell and issue a few queries that make use of the loc1 and loc2 fields in a geospatial way. For example with:
use yourDbName
db.yourCollection.find( { loc1: { $nearSphere: [ 0, 0 ] } } ).explain();
and:
db.yourCollection.find( { loc2: { $nearSphere: [ 0, 0 ] } } ).explain();
And then compare what the explain information gives you. You will likely see that only the first created geo index is used for both searches. There are a few tickets in JIRA for this that you might want to vote on:
https://jira.mongodb.org/browse/SERVER-2331
https://jira.mongodb.org/browse/SERVER-3653

Node.js + Mongoose / Mongo & a shortened _id field

I'd like the unique _id field in one of my models to be relatively short: 8 letters/numbers, instead of the usual Mongo _id which is much longer. Having a short unique-index like this helps elsewhere in my code, for reasons I'll skip over here. I've successfully created a schema that does the trick (randomString is a function that generates a string of the given length):
new Schema('Activities', {
  '_id': { type: String, unique: true, 'default': function() { return randomString(8); } },
  // ... other definitions
});
This works well so far, but I am concerned about duplicate IDs generated from the randomString function. There are 36^8 possible IDs, so right now it is not a problem... but as the set of possible IDs fills up, I am worried about insert commands failing due to a duplicate ID.
Obviously, I could do an extra query to check if the ID was taken before doing an insert... but that makes me cry inside.
I'm sure there's a better way to be doing this, but I'm not seeing it in the documentation.
The shortid lib, https://github.com/dylang/shortid, is being used by Doodle or Die, so it seems to be battle-tested.
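If you go that route, wiring it into the schema from the question is roughly this (shortid.generate returns a new short, URL-friendly id on each call):

var shortid = require('shortid');

new Schema('Activities', {
  '_id': { type: String, unique: true, 'default': shortid.generate },
  // ... other definitions
});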
By creating a unique index on _id you'll get an error if you try to insert a document with a duplicate key. So wrap error handling around any inserts you do that looks for that error, generates another ID, and retries the insert in that case. You could add a method to your schema that implements this enhanced save to keep things clean and DRY.
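A rough sketch of that retry wrapper, assuming the schema object from the question is named ActivitySchema and reusing its randomString helper; the method name saveWithRetries is made up:

ActivitySchema.methods.saveWithRetries = function (attemptsLeft, callback) {
  var doc = this;
  doc.save(function (err) {
    // 11000/11001 are MongoDB's duplicate-key error codes
    if (err && (err.code === 11000 || err.code === 11001) && attemptsLeft > 0) {
      doc._id = randomString(8);   // pick a new candidate id and try again
      return doc.saveWithRetries(attemptsLeft - 1, callback);
    }
    callback(err, doc);
  });
};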
