How to tell if a Firestore operation succeeded or failed? - python-3.x

I need to be able to tell if a set(), delete(), or update() operation on a document has succeeded before I do a transaction operation to increment or decrement a counter.
I've tried printing what set(), delete() and update() return, but they always just return "seconds" and "nanos", whether or not the operation succeeded. I've tried doing operations on document IDs that don't exist, or collections that don't exist, but they always return the same thing with no indication of whether anything actually happened.
collection.("some_col").document("SoM3DoC").delete()
collection.("some_other_col").document("SoM30tH3RDoC").collection("some_col_ref").document("SoM3DoC").delete()
Then, ONLY if the above succeeded (the document existed and was deleted):
some_transaction(transaction, db.collection("some_other_col").document("SoM30tH3RDoC")) # decrement a counter in this doc
I'm expecting the operation methods either to throw an error when they can't complete the operation, or to return some message indicating it, but I can't seem to get any response. I even tried starting with some random collection like db.collection("asdfsergreasg").document... but there's still no response.

The API documentation indicates what to expect from the various operations, so use that as a reference. Some operations on documents and collections that don't exist don't yield errors; some do. For example, calling get on a document that doesn't exist isn't an error, but the returned snapshot makes it clear that no document exists. Calling update on a document that doesn't exist, on the other hand, should raise an error.
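To illustrate, here is a minimal sketch assuming the google-cloud-firestore Python client and the document names from the question; the counter field name is made up:
from google.api_core import exceptions
from google.cloud import firestore

db = firestore.Client()
doc_ref = db.collection("some_col").document("SoM3DoC")

# get() never raises for a missing document; check the snapshot instead
if doc_ref.get().exists:
    doc_ref.delete()  # delete() returns a timestamp whether or not the document existed
    # (for an atomic "check then delete", wrap the read and the delete in a transaction)

# update() on a missing document raises NotFound, so it can serve as a success signal
try:
    db.collection("some_other_col").document("SoM30tH3RDoC").update({"counter": firestore.Increment(-1)})
except exceptions.NotFound:
    print("document does not exist, nothing to decrement")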

Related

How to unify these two instructions?

Is it possible to unify these two Firestore document set/update calls into a single instruction?
await batchArray[batchIndex].set(finref, doc2.data());
await batchArray[batchIndex].update(finref, {"esito" : 1, "timestamp": currentTime});
Where "finref" is a document reference and doc2 is a DocumentSnapshot
You can merge those two objects using spread syntax and pass the result to a single command.
await batchArray[batchIndex].set(finref, {
    ...doc2.data(),
    ...{
        "esito": 1,
        "timestamp": currentTime
    }
});
If you want to perform both operations at once, then you can execute multiple write operations as a single batch that contains any combination of set(), update(), or even delete() operations. A batch of writes completes atomically, meaning that all operations will succeed or all will fail.
As you can see in the docs, the first argument is always a document reference. If you already have a document snapshot, then you need to get the document reference out of that object in order to commit the batch.
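For comparison, since the main question on this page is tagged python-3.x, here is a minimal sketch of the same batched-write idea with the Python client; the collection and document names are placeholders:
from google.cloud import firestore

db = firestore.Client()
batch = db.batch()

# any mix of set / update / delete; the commit is atomic
batch.set(db.collection("some_col").document("doc_a"), {"esito": 1})
batch.update(db.collection("some_col").document("doc_b"), {"timestamp": firestore.SERVER_TIMESTAMP})
batch.delete(db.collection("some_col").document("doc_c"))

write_results = batch.commit()  # all three writes succeed together or none do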
Edit:
If you try to update a document that doesn't exist, you'll indeed get an error that says "No document to update". In that case, you should consider using set() with merge: true. This means that if the document does not exist it will be created, and if it does exist the data will be merged into the existing document.
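The Python client exposes the same option as a merge keyword argument; a small sketch, reusing the db client from the batch sketch above:
# creates the document if it is missing, otherwise merges these fields into it
db.collection("some_col").document("doc_a").set(
    {"esito": 1, "timestamp": firestore.SERVER_TIMESTAMP},
    merge=True,
)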

Consecutive calls to updateOne of mongodb: 3rd one does not work

I receive 3 POST calls from the client, let's say within a second, and with nodejs-mongodb I immediately (without any pause, sleep, etc.) try to insert the posted data into the database using updateOne. All the data is new, so every call should result in an insert.
Here is the code (js):
const myCollection = mydb.collection("mydata")
myCollection.updateOne(
    {name: req.data.name},
    {$set: {name: req.data.name, data: req.data.data}},
    {upsert: true},
    function(err, result) { console.log("UPDATEONE err: " + err) }
)
Calling this updateOne just once works; calling it twice in succession also works. But if I call it three or more times in succession, only the first two are correctly inserted into the database and the rest are not.
The error I get after updateOne is MongoWriteConcernError: No write concern mode named 'majority;' found in replica set configuration. However, I always get this error, even when the insertion is done correctly, so I don't think it is related to my problem.
You will probably suggest that I use updateMany, bulkWrite, etc., and you would be right, but I want to know why the insertion stops working after the first two calls.
Keep in mind that .updateOne() returns a Promise, so it should be handled properly (awaited or chained) in order to avoid concurrency issues; a sketch of this is below. More info about it here.
The MongoWriteConcernError is probably related to the connection string you are using; the stray semicolon in 'majority;' suggests the w=majority option picked up an extra character. Check whether there is any &w=majority in the connection string and remove it, as recommended here.
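For illustration, a minimal sketch of the "await each write before issuing the next one" idea using Python's async driver (motor), since the main question on this page is tagged python-3.x; the connection string, database, and payload are made up:
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def save(collection, item):
    # awaiting ensures the previous upsert has finished before the next one starts
    result = await collection.update_one(
        {"name": item["name"]},
        {"$set": {"name": item["name"], "data": item["data"]}},
        upsert=True,
    )
    print("upserted:", result.upserted_id, "modified:", result.modified_count)

async def main():
    collection = AsyncIOMotorClient("mongodb://localhost:27017")["mydb"]["mydata"]
    for item in [{"name": "a", "data": 1}, {"name": "b", "data": 2}, {"name": "c", "data": 3}]:
        await save(collection, item)

asyncio.run(main())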

Mongodb node.js $out with aggregation only working if calling toArray()

Saving the result of an aggregation query with the $out operator (using "mongodb": "^3.0.6") only works when calling .toArray().
The aggregation step(s):
let aggregationSteps = [{
    $group: {
        _id: '$created_at',
    }
}, {'$out': 'ProjectsByCreated'}];
Executing the aggregation:
await collection.aggregate(aggregationSteps, {'allowDiskUse': true})
Expected result: a new collection called ProjectsByCreated.
Actual result: no collection; the query does not throw an exception but does not seem to be executed (it takes only 1 ms).
Appending toArray() results in the expected behaviour:
await collection.aggregate(aggregationSteps, {'allowDiskUse': true}).toArray();
Why does MongoDB only create the result collection when .toArray() is called, and where does the documentation say so? How can I fix this?
The documentation doesn't seem to provide any information about this:
https://mongodb.github.io/node-mongodb-native/3.0/api/Collection.html#aggregate
https://docs.mongodb.com/manual/reference/operator/aggregation/out/index.html
MongoDB acknowledges this behaviour, but they also say it is working as designed.
It has been logged in the MongoDB JIRA as $out aggregation stage doesn't take effect, and the responses say it is not a fault:
This behavior is intentional and has not changed in some time with the node driver. When you "run" an aggregation by calling Collection.prototype.aggregate, we create an intermediary Cursor which is not executed until some sort of I/O is requested. This allows us to provide the chainable cursor API (e.g. cursor.limit(..).sort(..).project(..)), building up the find or aggregate options in a builder before executing the initial query.
... Chaining toArray in order to execute the out stage doesn't feel quite right. Is there something more natural that I haven't noticed?
Unfortunately not, the chained out method there simply continues to build your aggregation. Any of the following methods will cause the initial aggregation to be run: toArray, each, forEach, hasNext, next. We had considered adding something like exec/run for something like change streams, however it's still in our backlog. For now you could theoretically just call hasNext which should run the first aggregation and retrieve the first batch (this is likely what exec/run would do internally anyway).
So it looks like you do have to call one of those methods to start iterating the cursor before $out will do anything. Adding .toArray(), as you're already doing, is probably safest. Note that toArray() will not load the entire result into RAM as it normally would; because the pipeline includes $out, the aggregation returns an empty cursor.
This is because the aggregation operation returns a cursor, not the results.
In order to return all the documents from the cursor, we need to use the toArray method.
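As an aside, if you run the same pipeline from Python, pymongo sends the aggregate command to the server when aggregate() is called, so (as far as I can tell) the $out stage takes effect without the cursor ever being iterated. A minimal sketch with a placeholder connection string and collection name:
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["mydb"]["projects"]

pipeline = [
    {"$group": {"_id": "$created_at"}},
    {"$out": "ProjectsByCreated"},
]

# the command runs on the server here; the returned cursor is empty because of $out
collection.aggregate(pipeline, allowDiskUse=True)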

node-mongodb show update results

When I run
db.collection('example').update({"a":1},{"$set":{"b":2}},{multi:true},function(e,r){...})
I get r:
{
    n: 3,
    nModified: 3,
    ok: 1
}
This works; if I look at my db I can see that I have successfully updated 3 documents, but where are my results?
Quoted from https://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html
callback is the callback to be run after the records are updated. Has three parameters, the first is an error object (if error occured), the second is the count of records that were modified, the third is an object with the status of the operation.
I've tried with 3 parameters in the callback, but then I just get null as the result:
db.collection('example').update({"a":1},{"$set":{"b":2}},{multi:true},function(e,n,r){...})
My documents have been successfully updated but r is null!
I am expecting this to return my updated documents.
It doesn't look like this operation ever does, so how can I manually return the documents that got changed?
You can use findAndModify to get the updated document in the result. Its callback has 2 parameters:
1- error
2- the updated document
I am not sure this would work for you, but check the documentation (https://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html#find-and-modify) for more info.
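For what it's worth, the equivalent in the Python driver (the main question on this page is tagged python-3.x) is find_one_and_update, which can hand back the document as it looks after the modification; the connection string and names below are placeholders:
from pymongo import MongoClient, ReturnDocument

collection = MongoClient("mongodb://localhost:27017")["mydb"]["example"]

# updates one matching document and returns it as it looks *after* the update
updated = collection.find_one_and_update(
    {"a": 1},
    {"$set": {"b": 2}},
    return_document=ReturnDocument.AFTER,
)
print(updated)  # None if nothing matched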
Note that a plain update (and likewise bulkWrite) only reports counts like the ones above; it never returns the modified documents. To get the updated documents back, you need something like findOneAndUpdate (findAndModify) per document, or a follow-up find on the same filter.

StackExchange Redis Client StringSet return false?

We have been using the StackExchange Redis .NET client for several months without issue. Our logs indicate that StringSet returned false thousands of times over the course of an hour recently, but it is working as expected again.
I can't find what FALSE means anywhere. I assume it means that the value was not put in the cache, but if that is correct, how do I tell why? The client is not throwing an exception. Can someone show me the API specification that describes the return value, and how to troubleshoot this?
We are running against Redis in Azure if that matters.
result = cache.StringSet(fullKey, value, GetCacheTime(cacheType));
if (!result)
{
    if (_logger != null)
    {
        _logger.LogError("Failed to Set Cache");
    }
}
http://redis.io/commands/set
Simple string reply: OK if SET was executed correctly. Null reply: a Null Bulk Reply is returned if the SET operation was not performed because the user specified the NX or XX option but the condition was not met.
It looks like you are using SETEX (http://redis.io/commands/setex), though; are you setting a valid timespan as the third argument?
SETEX is atomic, and can be reproduced by using the previous two commands inside an MULTI / EXEC block. It is provided as a faster alternative to the given sequence of operations, because this operation is very common when Redis is used as a cache. An error is returned when seconds is invalid.
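If it helps, the same semantics are easy to see from Python with redis-py, where the null reply comes back as None rather than false; the host, key names, and TTLs below are placeholders:
from datetime import timedelta
import redis

r = redis.Redis(host="localhost", port=6379)

# plain SET with an expiry (roughly what StringSet with a TTL does); returns True on success
print(r.set("some:key", "value", ex=timedelta(minutes=5)))

# with nx=True the SET is skipped when the key already exists;
# the client reports the null reply as None instead of raising an exception
print(r.set("some:key", "other value", ex=60, nx=True))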
