I insert an Entity:
datastore.save({
  key: datastore.key(['Users', 'bob']),
  method: 'insert',
  data: [
    {
      name: 'email',
      value: 'bob@gmail.com',
      excludeFromIndexes: false
    }
  ]
}, function(err) {
  if (!err) {
    console.log('insert was a success');
  } else {
    console.log(err);
  }
});
Then I want to query the user by email:
var query = datastore.createQuery('Users').filter('email', 'bob@gmail.com');
datastore.runInTransaction(function(transaction, done) {
  transaction.runQuery(query, function(err, entities) {
    if (!err) {
      // insert another thing into the datastore here ...
    } else {
      console.log('err = ' + err);
      transaction.rollback(done);
      return;
    }
  });
});
But I get the error:
global queries do not support strong consistency
I saw in the docs that I can't modify the consistency in Node, so how do I query?
Datastore queries are eventually consistent by default. This means results that you have recently written may not show up in your query.
You can make Datastore queries strongly consistent by adding an ancestor filter, which restricts the query to a single entity group (the unit of consistency in Datastore).
When you are running in a transaction, you can only perform strongly consistent queries. Since you are running in a transaction but not specifying an ancestor filter, you get this error.
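For illustration, here is a minimal sketch of the same flow with an ancestor (the parent kind UserGroup and name default are made up; any entity-group layout of your own works the same way):

var ancestorKey = datastore.key(['UserGroup', 'default']);

// Save the user under that parent so it belongs to the entity group.
datastore.save({
  key: datastore.key(['UserGroup', 'default', 'Users', 'bob']),
  method: 'insert',
  data: [{ name: 'email', value: 'bob@gmail.com' }]
}, function(err) {
  if (err) console.log(err);
});

// An ancestor query is allowed inside a transaction and is strongly consistent.
var query = datastore.createQuery('Users')
  .hasAncestor(ancestorKey)
  .filter('email', 'bob@gmail.com');

datastore.runInTransaction(function(transaction, done) {
  transaction.runQuery(query, function(err, entities) {
    if (err) {
      transaction.rollback(done);
      return;
    }
    // insert another thing into the datastore here, then finish:
    done();
  });
});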
I'm diving into NoSQL injection, so I'm trying to attack my own DB with Postman to see whether there is any vulnerability. My requests use query parameters to filter fields, like this:
User.find({
  // name: req.params.name
  name: req.query.name
}, function (err, result) {
  if (err) {
    console.log('Mongoose findUsers name error: ', err);
    res.status(505).send({ error: "Internal error." });
    return;
  }
  if (result != null) {
    console.log("Mongoose findUsers name: ", result);
    res.status(200).send({
      message: "Found users :",
      data: result
    });
  } else {
    console.log("Mongoose findUsers name: No user found");
    res.status(404).send({
      message: "No user found."
    });
  }
});
So I'm trying to pass a name parameter as {"$ne": null} or {"$gt": ""}, so the request will be localhost:5000/api/users?name={"$ne": null}. The response I get from the find call is not null, though; it returns an empty array. Does that mean I'm already protected against NoSQL injection and don't need to sanitize query parameters, or am I just not using the right value to perform an injection? What other tests could I run to properly check whether NoSQL injection is possible?
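For reference, here is a minimal sketch of what I think Express hands to Mongoose in each case (assuming the default extended query parser; toSafeString is just a made-up guard):

// ?name={"$ne": null}  -> req.query.name is the STRING '{"$ne": null}',
// so Mongoose compares it literally and matches nothing, hence the empty array.

// ?name[$ne]=x         -> the extended query parser builds a real object:
// req.query.name deep-equals { '$ne': 'x' },
// and that object reaches the find() filter unchanged: the classic injection shape.

// A simple guard: only ever pass plain strings into the filter.
function toSafeString(value) {
  return typeof value === 'string' ? value : '';
}

User.find({ name: toSafeString(req.query.name) }, function (err, result) {
  // ... same handling as above
});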
As always many thanks for your help.
Cheers.
I am using Sails.js with MongoDB.
My model is:
module.exports = {
  attributes: {
    title: { type: "string", required: true },
    content: { type: "string", required: true },
    date: { type: "string", required: true },
    filename: { type: "string", required: true },
  },
};
My Controller is:
fetchposts: function (req, res) {
  console.log("in fetch posts");
  var mysort = { $id: -1 };
  Cyberblog.find().sort(mysort).limit(5).exec(function (err, result) {
    if (err || !result) {
      var message = "no records fetched";
      console.log(message);
      res.redirect('/newpost');
    } else {
      console.log(result);
    }
  });
}
I am getting a warning that says:
"Warning: The sort clause in the provided criteria is specified as a dictionary (plain JS object),
meaning that it is presumably using Mongo-Esque semantics (something like { fullName: -1, rank: 1 }).
But as of Sails v1/Waterline 0.13, this is no longer the recommended usage. Instead, please use either
a string like 'fullName DESC', or an array-like [ { fullName: 'DESC' } ].
(Since I get what you mean, tolerating & remapping this usage for now...)
and I am unable to fetch any records; it logs "no records fetched". So I have a warning on the sort and no records coming from the DB. Please help me resolve the issue.
The sort clause allows you to pass a string:
var users = await User.find({ name: 'Jake' })
  .sort('age ASC');
return res.json(users);
Or an array:
var users = await User.find({ name: 'Finn' })
  .sort([
    { age: 'ASC' },
    { createdAt: 'ASC' },
  ]);
return res.json(users);
Check this out in the documentation:
https://sailsjs.com/documentation/reference/waterline-orm/queries/sort
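Applied to the controller above, that could look something like this (a sketch; 'id DESC' assumes you meant to sort by the primary key, so substitute whichever field you actually order by):

fetchposts: function (req, res) {
  // String form of the sort clause, as recommended since Sails v1/Waterline 0.13.
  Cyberblog.find()
    .sort('id DESC')
    .limit(5)
    .exec(function (err, result) {
      if (err || !result || result.length === 0) {
        console.log('no records fetched');
        return res.redirect('/newpost');
      }
      console.log(result);
      return res.json(result);
    });
}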
I am trying to query an existing Azure Cosmos DB database with Node.js.
getItems: function (callback) {
  var self = this;
  var querySpec = {
    query: 'SELECT * FROM root'
  };
  self.client.queryDocuments(self.collection._self, querySpec).toArray(function (err, results) {
    if (err) {
      console.log(err.body);
      callback(err);
    } else {
      callback(null, results);
    }
  });
}
For some reason it keeps complaining about a cross-partition query. I am not really sure what that is. Any idea where I can find this partition key and how to set it? Also, how can I avoid this exception in the query?
Full error message: Cross partition query is required but disabled. Please set x-ms-documentdb-query-enablecrosspartition to true, specify x-ms-documentdb-partitionkey, or revise your query to avoid this exception.
P.S. I know there are a few similar questions already, but none of them addresses Node.js.
You're getting this error because the document collection you're querying against is a partitioned collection.
In order to query against a partitioned collection, you would either need to specify a partition against which to execute the query or specify that you want to query across partitions. BTW, the former is recommended.
For cross-partition queries, you would need to specify the same in options. So your code would be:
getItems: function (callback) {
  var self = this;
  var querySpec = {
    query: 'SELECT * FROM root'
  };
  const options = { // query options
    enableCrossPartitionQuery: true
  };
  self.client.queryDocuments(self.collection._self, querySpec, options).toArray(function (err, results) {
    if (err) {
      console.log(err.body);
      callback(err);
    } else {
      callback(null, results);
    }
  });
}
If you're querying against a single partition, you will need to specify the partition key value in the query options. In this case, your options would look something like:
const options = {
  partitionKey: 'partition-key-value'
};
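The options object is then passed to queryDocuments exactly as in the cross-partition example above, for instance:

self.client.queryDocuments(self.collection._self, querySpec, options).toArray(function (err, results) {
  // ... handle err / results as before
});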
What is an efficient approach to maintaining data history for models in Sails? For instance, if a user updates the database, how do we keep versions of the data so it can be reverted as it changes?
I've seen numerous examples of using a "changes" field that keys each version by date. Is this the most efficient?
On model update we can copy previous information. Are there any better suggestions or links with examples?
{
  text: "This is the latest paragraph",
  changes: {
    text: {
      "1470685677694": "This was the paragraph",
      "1470685577694": "This was the original paragraph"
    }
  }
}
I am hoping to find a good solution to search and optimize this data so the user can revert if necessary.
You can define your model like the following to preserve the history as well, which you can then use to restore any previous values.
api/models/Test.js
module.exports = {
  connectionName: "someMongodbServer",
  tableName: "test",
  autoCreatedAt: false,
  autoUpdatedAt: false,
  autoPk: false,
  attributes: {
    id: {
      type: "integer",
      primaryKey: true,
      autoIncrement: true
    },
    name: "string",
    history: {
      type: 'json',
      defaultsTo: []
    }
  },
  beforeCreate: function (value, cb) {
    // Assign a sequential id based on the current record count.
    Test.count().exec(function (err, cnt) {
      if (err)
        return cb(err);
      value.id = cnt + 1;
      cb();
    });
  },
  beforeUpdate: function (values, cb) {
    // Before each update, push the current values (minus _id and history)
    // onto the record's history array using the native Mongo collection.
    Test.native(function (err, collection) {
      if (err)
        return cb(err);
      collection.findOne({ _id: values.id }, { history: 0, _id: 0 }, function (err, data) {
        if (err)
          return cb(err);
        collection.updateOne({ _id: values.id }, { $push: { history: data } }, function (err, res) {
          if (err)
            return cb(err);
          console.log(res);
          cb();
        });
      });
    });
  }
};
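A rough sketch of how a revert could then look (restoreLast is a made-up helper; going through Waterline's update means beforeUpdate also records the state you are reverting away from):

function restoreLast(id, cb) {
  Test.findOne({ id: id }).exec(function (err, record) {
    if (err || !record)
      return cb(err || new Error('record not found'));
    var history = record.history || [];
    if (history.length === 0)
      return cb(new Error('no history to restore'));
    // Re-apply the most recent snapshot (e.g. { name: 'old value' }).
    Test.update({ id: id }, history[history.length - 1]).exec(cb);
  });
}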
I'm trying to use the Sequelize ORM for my project. I've integrated it following the example at https://github.com/sequelize/express-example. So, cool: for now it's working, with all the relations and other goodies. The problem is that pm2 shows my memory usage growing and never coming back down.
This is my test script, which eats 100 MB of RAM per launch. Have I missed something?
router.get('/test', hutils.authChecker, function(req, res, next) {
  Project.findById(1, { include: [Player] }).then(function(project) {
    return Promise.denodeify(async.map)(project.Players, function(player, callback) {
      Player.create({
        project_id: 1,
        name: 'iter_' + Math.floor(Math.random() * 1000000) + Math.floor(Math.random() * 1000000)
      }).then(function(gamer) {
        callback(null, gamer);
      });
    });
  }).then(function(plrs) {
    return Promise.denodeify(async.map)(plrs, function(guy, callback) {
      guy.update({ name: sqlRequest + 'zzzzz' + Math.random() }).then(function(number) {
        callback(null, number);
      });
    });
  }).then(function(numbers) {
    return Player.findAll({ where: { name: { $like: '%zzzzz%' } } });
  }).then(function(zets) {
    return Promise.denodeify(async.map)(zets, function(zet, callback) {
      zet.destroy().then(function(number) {
        callback(null, number);
      });
    });
  }).catch(function(err) {
    next(err);
  });
});
P.S. The script itself makes no sense; it's just to see how the ORM works. If it matters, I have 1k players for this project.
For queries that return a lot of rows, all of them get loaded into memory before the callback runs.
So in this example, the query Project.findById(1, { include: [Player] }) deserializes project 1 and all of its 1,000 players into JavaScript objects before handing them to .then(function(project). Furthermore, the arrays plrs, numbers and zets are all similarly held in memory before they are returned, further increasing memory usage.
A way around this is to let the database do the heavy lifting. For example, don't return each gamer that gets created; instead, perform an update query on the DB.
Player.update({
  name: sqlRequest + 'zzzzz' + Math.random(),
}, {
  where: {
    createdAt: {
      $gte: new Date() // or some other filter condition that identifies the records you need
    }
  }
});
And then instead of destroying each zet, perform a delete query on the db.
Player.destroy({
  where: {
    name: {
      $like: '%zzzzz%'
    }
  }
});
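The same idea applies to the creation step: build plain objects and let bulkCreate insert them in one round trip, instead of keeping a resolved instance per player inside the async.map closure (a sketch reusing the original script's naming):

var rows = project.Players.map(function () {
  return {
    project_id: 1,
    name: 'iter_' + Math.floor(Math.random() * 1000000)
  };
});

return Player.bulkCreate(rows); // one bulk INSERT instead of one query per player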