Firestore security rules with set {merge} - node.js

In my Firestore data I have the following
ItemID
amountOfItemsToPurchase: 1
itemsLeft: 3
If a user wants to update the items to purchase for the same ItemID, or create the document if the ItemID is not present, I use set(..., {merge: true}). However, in terms of the Firestore security rules, things get complicated.
I've written the following test:
const initialUserDoc = adminFirestore.collection("Users").doc(VALID_USER_ID).collection("Cart").doc(documentID);
await initialUserDoc.set({
  "amountOfItemsToPurchase": 1,
  "itemsLeft": 3
});
// Get the user's node and grab the example user
const userTestRef = db.collection("Users").doc(VALID_USER_ID).collection("Cart").doc(documentID);
await firebase.assertFails(userTestRef.set({
  "amountOfItemsToPurchase": firebase.firestore.FieldValue.increment(1000),
}, { merge: true }));
This test fails with the following: Error: Expected request to fail, but it succeeded
What I want, in every scenario (update or create), is to prevent amountOfItemsToPurchase from exceeding itemsLeft. For that I use the following in the allow create portion:
request.resource.data.amountOfItemsToPurchase <= request.resource.data.itemsLeft
This raises the following questions: will the allow create or the allow update rule be called? And why is the itemsLeft field not taken into account?

From your description and the comments, your security rule looks OK.
Let me write it out for further discussion:
match /items/{id} {
  allow create, update:
    if request.resource.data.amountOfItemsToPurchase < request.resource.data.itemsLeft;
}
First gotcha: use request.resource to read the value a field will have after the write.
Second gotcha: when using set without merge, the FieldValue.increment sentinel does not increment but sets the value; you would need update to actually increment it.
Third gotcha: FieldValue.increment with set and {merge: true} does increment the value!
So in the end the rule works fine as is, I can confirm it works for me.
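To make the gotchas concrete, here is a minimal sketch (assuming the v8 namespaced client SDK used in the question, run inside an async function, with the same document reference as in the test above):
const ref = db.collection('Users').doc(VALID_USER_ID)
    .collection('Cart').doc(documentID);
// Plain set(): the document is replaced first, so the sentinel has no
// existing value to add to; the field is effectively set to 1 and
// itemsLeft is dropped (second gotcha).
await ref.set({ amountOfItemsToPurchase: firebase.firestore.FieldValue.increment(1) });
// update(): increments in place, but fails if the document is missing.
await ref.update({ amountOfItemsToPurchase: firebase.firestore.FieldValue.increment(1) });
// set() with {merge: true}: increments in place and creates the document
// if it doesn't exist (third gotcha); this is the upsert behavior the
// question needs, and the write the rule above evaluates via request.resource.
await ref.set(
    { amountOfItemsToPurchase: firebase.firestore.FieldValue.increment(1) },
    { merge: true }
);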


How to chain express-validator based on query values?

I am trying to find a solution to chain conditions based on the query values passed to the route.
Task 1:
// if the value for queryA = noIdNeeded, then I do not need to check for a second query, queryB
/endpoint?queryA=noIdNeeded
Task 2:
// if the value for queryA = idNeeded, then I need to ensure the second query, queryB, exists
/endpoint?queryA=idNeeded&queryB=SomeId
I am having trouble writing the validator for Task 2.
For Task 1 I have used this logic and it works fine: [query('page').exists().notEmpty().isIn(seoPageTypes),]
So far I have seen there is an if clause (link) that we can probably use, but implementing it has been a challenge due to the lack of examples and my lack of prior experience.
If anyone can guide me on how to do this correctly, any hint is much appreciated.
Make sure you have a new version of express-validator installed.
The following should do the job.
query('queryA').exists().notEmpty().isIn(seoPageTypes),
query('queryB')
  .if(query('queryA').equals('idNeeded'))
  .exists().notEmpty().withMessage('queryB must exist'),
Another approach is to use a custom validator
query('queryA').exists().notEmpty().isIn(seoPageTypes)
  .custom((value, { req }) => {
    if (value === "idNeeded" && !req.query.queryB) {
      throw new Error('queryB must exist');
    }
    return true;
  }),
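For context, here is a hedged usage sketch wiring the first variant into an Express route and reporting failures with validationResult() (the seoPageTypes values below are hypothetical placeholders; adjust them to your own):
const express = require('express');
const { query, validationResult } = require('express-validator');

const app = express();
const seoPageTypes = ['idNeeded', 'noIdNeeded']; // hypothetical example values

app.get('/endpoint',
  query('queryA').exists().notEmpty().isIn(seoPageTypes),
  query('queryB')
    .if(query('queryA').equals('idNeeded'))
    .exists().notEmpty().withMessage('queryB must exist'),
  (req, res) => {
    // collect the results of the validators above
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    res.json({ ok: true });
  }
);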
Use what suits you more :)

How to update user permission discord js

So, I want to update a permission for some users.
First, I tried to create a text channel and overwrite a user's permissions. Then I do a for loop to update the permissions, but I don't know how.
Things I've tried to update the permission:
channel.overwritePermissions({
  permissionOverwrites: [
    {
      id: guild.id,
      deny: ['VIEW_CHANNEL'],
    },
    {
      id: playerData[i],
      allow: ['VIEW_CHANNEL'],
    },
  ],
});
// it said: channel is not defined?
message.guild.channels.cache.find('🐺werewolves').overwritePermissions({
  permissionOverwrites: [
    {
      id: playerData[i],
      allow: ['VIEW_CHANNEL'],
    }
  ]
})
// it said: fn is not a function
I have seen this solution and read the documentation, but the instructions aren't clear.
PS: The permission update must happen in a loop, because the number of users who get the permission is always changing.
Regarding first codeblock
channel is not defined because you never defined it. You cannot use variables without defining them; that's how JavaScript works. To solve it, define it, for example const channel = ..., or access it from other variables you have, just like you are trying to do with find in your second codeblock, where you access it from message (as you're most likely inside the message event).
Regarding second codeblock
This is not a proper way to use find, neither the old way (removed in your version) nor the new way. The old way was find(property, value), and you aren't providing a value ('🐺werewolves' would be treated as the property you're trying to search by). The new way, which you have to use, allows far more flexibility by requiring you to pass a function, just like in the example for the method. Since what you passed was a string and not a function, the internal code throws fn is not a function.
For your example above, the correct way to use find would be
message.guild.channels.cache.find(channel => channel.name === '🐺werewolves');
Additionally, note that ideally you shouldn't call any methods on that result directly: if no channel with that name is found, your code will throw an error. The snippet below avoids that possibility.
const channel = message.guild.channels.cache.find(channel => channel.name === '🐺werewolves');
if (channel) channel.overwritePermissions(...)
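Since the number of users keeps changing, here is a hedged sketch of the loop the question asks for. It assumes discord.js v12, where updateOverwrite() (a different method than overwritePermissions, which replaces all overwrites) edits a single overwrite while keeping the rest, and it assumes playerData holds user IDs; run it inside an async function.
const channel = message.guild.channels.cache.find(c => c.name === '🐺werewolves');
if (channel) {
  for (const playerId of playerData) {
    // merge this user's overwrite with the channel's existing ones
    await channel.updateOverwrite(playerId, { VIEW_CHANNEL: true });
  }
}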

How to lock table with pg-promise

I have
db.result('DELETE FROM categories WHERE id = ${id}', category).then(function (data) { ...
and
db.many('SELECT * FROM categories').then(function (data) { ...
Initially delete is called from one API call and select from a following API call, but the database callbacks come back in reverse order, so I get a list of categories that still includes the removed category.
Is there a way how to lock categories table with pg-promise?
If you want the result of the SELECT to always reflect the result of the previous DELETE, then you have two approaches to consider...
The standard approach is to unify the operations into one, so you end up executing all your dependent queries against the same connection:
db.task(function * (t) {
    yield t.none('DELETE FROM categories WHERE id = ${id}', category);
    return yield t.any('SELECT * FROM categories');
})
    .then(data => {
        // data = only the categories that weren't deleted
    });
You can, of course, also use either the standard promise syntax or even ES7 await/async.
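For example, the same task written with async/await (a sketch; pg-promise tasks accept async callbacks as well):
const data = await db.task(async t => {
    await t.none('DELETE FROM categories WHERE id = ${id}', category);
    return t.any('SELECT * FROM categories');
});
// data = only the categories that weren't deleted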
The second approach would be to organize an artificial lock inside your service that would hold off on executing any corresponding SELECT until the DELETE requests are all done.
However, this is a very awkward solution, typically pointing at a flaw in the architecture. Also, as the author of pg-promise, I won't even get into that solution, as it would be way outside of my library anyway.

How can i pass input argument when writing loopback-testing

I am writing test-driven development tests for my StrongLoop API code with the help of loopback-testing.
There is no detailed documentation on this, so I am stuck on the case of passing arguments with the API call.
For example, I have the below case:
Method : PUT
URL : /api/admin/vineyard/<vineyard_id>
I need to pass the below arguments with this URL:
1. 'vineyard_id' is the id of a vine; it should be an integer.
2. in the header = 'token'
3. in the body = {'name':'tastyWine','price':200}
How can I pass these three arguments with this API?
I can easily handle it if there are only two types of arguments.
Example :
Method : POST
/api/user/members/<test_username>/auth
arguments : test_username and password
I can handle it like this:
lt.describe.whenCalledRemotely('POST',
  '/api/user/members/' + test_username + '/auth', {
    'password': test_passwords
  },
But how can I handle the above case? Many thanks for your answers.
I'm not entirely sure what your specific problem is, but I will attempt to walk through everything you should need.
I am assuming you are using the predefined prototype.updateAttributes() method for your model as described here.
Next assumption is that you want to use the built-in authentication and authorization to allow the user to call this method. Given that assumption, you need something like this in your test code:
var vineyard_id = 123; // the id of the test item you want to change
var testUser = { email: 'test@test.com', password: 'test' };
lt.describe.whenCalledByUser(testUser, 'PUT', '/api/admin/vineyard/' + vineyard_id,
  {
    'name': 'tastyWine',
    'price': 200
  },
  function () {
    it('should update the record and return ok', function () {
      assert.equal(this.res.statusCode, 200);
    });
  }
);
If you are using the out-of-the-box user model, you should be fine, but if you extended the model as is commonly done, you may need something like this early on in your test file:
lt.beforeEach.withUserModel('user');
Also, be aware of a few (currently incomplete) updates that will allow for better handling of built-in model extensions: Suggestions #56, Add support for non-default models #57, and givenLoggedInUser() function throws error #59.

MongoDB Database Semaphores and Node.js Process.NextTick()

This may be a very bad idea, or a possible solution to a database concurrency problem we have.
We have a method that is called to update a mongo record. We are seeing some concurrency problems: process A reads the record, process B reads the record, process A makes its mods and saves the record, then process B makes its mods and saves the record. Because B reads after A but before A writes, it doesn't know about the changes A made, and we lose the data from A.
I'm wondering if we could not use a database semaphore, basically a boolean field on the collection. If we read the record at the start of the method and the field is true, the record is being edited; at that point, re-call the method using process.nextTick() with the same data. Otherwise, set the semaphore and carry on.
There would still be a bit of time between the read and the save, but it should be/could be faster than what we are doing now.
It would be something like this. Any thoughts? Has anyone done anything like this? Will it even work?
function remove_source(service_id, session, next)
{
    var User = Mongoose.model("User");
    /* get the user, based on the session user id */
    User.findById(session.me, function (err, user_info)
    {
        if (user_info.semaphore === true)
        {
            // wrap the retry in a function; otherwise remove_source would
            // be invoked immediately instead of on the next tick
            process.nextTick(function ()
            {
                remove_source(service_id, session, next);
            });
        }
        else
        {
            user_info.semaphore = true;
            user_info.save(function (err, user_new)
            {
                if (err) next(err, user_new);
                else continue_on(null, user_new);
            });
        }
    });
    function continue_on(err, user_new)
    {
        // etc.......
    }
}
Edit: New Code:
The function now looks as follows. I'm doing individual updates to the arrays. This of course means that if the transaction fails between the first and second updates, I could end up with data out of sync. I'm thinking that I could simply re-save the user object that I retrieved on entry into the function, overwriting my changes. I don't know whether Mongoose/Mongo will skip the save if I have not changed the object; I will have to try and see. Any more thoughts?
var User = Mongoose.model("User");
/* get the user, based on the session user id */
User.findById(session.me, function (err, user_info)
{
    if (err)
    {
        next(err, user_info, null);
        return;
    }
    if (!user_info || user_info.length === 0)
    {
        next(_e("ACCOUNT_NOT_FOUND"), "user_id: " + session.me);
        return;
    }
    // _.where returns an array of matches, so take the first one
    var source_service_info = _.where(user_info.credentials, { "source_service_id": service_id })[0];
    var source_service = source_service_info.source_service;
    User.findByIdAndUpdate(session.me, { $pull: { "credentials": { "source_service_id": service_id } } }, {}, function (err, user_credential_removed)
    {
        if (err)
        {
            next(err, user_info, null);
            return;
        }
        User.findByIdAndUpdate(session.me, { $pull: { "criteria": { "source_service": source_service } } }, {}, function (err, user_criteria_removed)
        {
            if (err)
            {
                next(err, user_info, null);
                return;
            }
            else
            {
                next(null, user_criteria_removed);
            }
        });
    });
});
The problem with your approach is that it just shortens the time during which the data could be read by a second process, it doesn't eliminate the problem.
The solution to this would be to set your semaphore in the same action as the read. I haven't used Mongoose, but in MongoDB you can use findAndModify to only return a User record if the semaphore is false, and if it is false, in one atomic operation, set the semaphore to true.
If you don't want to use findAndModify, you could first do an update that sets the semaphore true (or to some specific ID value so you know that it is YOUR semaphore) only if the semaphore is not set. Then, if that process succeeds, you could do the find (perhaps passing your semaphore ID as a criterion in the find). However, findAndModify, if it is available in Mongoose, would do that in one step.
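A hedged sketch of that atomic check-and-set, using Mongoose's findOneAndUpdate (which wraps findAndModify) and the semaphore field from the question's code: the semaphore is flipped to true only if it is currently false, in one atomic operation, and the matched document is returned.
User.findOneAndUpdate(
    { _id: session.me, semaphore: false },   // match only if not locked
    { $set: { semaphore: true } },           // ...and lock it atomically
    function (err, user_info)
    {
        if (err) return next(err);
        if (!user_info)
        {
            // someone else holds the semaphore; retry on the next tick
            return process.nextTick(function ()
            {
                remove_source(service_id, session, next);
            });
        }
        // we now hold the semaphore; do the work, then clear it
    }
);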
A variation of that is described here: http://docs.mongodb.org/manual/tutorial/isolate-sequence-of-operations/ where you do a form of optimistic locking that checks that the old values are unchanged before changing them to the new values.
There is a variation on this that uses a separate table to simulate a two-phase commit: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
Edited: After the interchange below, this seems to be a schema and updating issue. The question may become something like: I have some entries in an array, and the ordinal index of those entries relates to some other arrays as well. How do I perform deletes without ending up with mismatches?
Three off-the-top possibilities occur, depending on frequency in the real world vs. QA test scenarios.
1. Consider adding a deleted flag but keeping the records in the same order. If someone toggles, reuse the same record, but fix it however you want.
2. Use an associative array (JS object) for each element (not a feature of the relational world). If you need an order, add an array that lists the keys in order. Both have syntax to update without touching anything other than what has changed, and will not overwrite changes to different fields.
3. Use an associative array where the keys are numbers. Actual deletion won't hurt retrieval.
stuff = {}
stuff[1] = {some:'details'}
stuff[2] = {some:'details2'}
The earlier version of this answer was:
1) Are you making changes to the same field? Make that into an array, push changes onto it, and pop the latest to read the current value.
2) Are you changing different fields, but the data is getting trounced? Then there is better syntax to use for the updating: you can update field by field.
$set: { 'fielda': 'valuea' }
won't lose edits on other fields
3) Change your schema.
4) Change the timing on the processes so they don't overlap, or so they overlap in smaller subsets that you can manage to keep from colliding.
I'd like to know, just out of interest, what setup needs multiple processes making updates to the same record; I don't work with anything that looks like that.
