What happens in CouchDB when I create an index repeatedly?

To implement sorting in CouchDB, you have to create an index first (otherwise the corresponding Mango query fails). I haven't found a way to do this in Fauxton (if I've missed something, please comment on GitHub), so I've decided to create it programmatically. As I'm using couchdb-nano, I've added:
this.clientAuthPromise.then(async () => {
  try {
    await this.client.use('test_polling_storage').createIndex({
      index: {
        fields: [
          'isoDate',
        ],
      },
      name: 'test_polling_storage--time_index',
    })
    console.log('index created?')
  } catch (error) {
    console.log(`failed to create index:`, error)
  }
})
into the storage class constructor, where
this.clientAuthPromise = this.client.auth(connectionParams.auth.user, connectionParams.auth.password)
Now, on each run of the server, I'm getting index created?, so the createIndex method (which presumably POSTs to /db/_index) doesn't fail (and sorting works, too). But as I haven't found an index viewer in Fauxton either, I wonder what actually happens on each call of createIndex: does it create a new index? Does it rebuild the index? Or does it see that an index with that name already exists and do nothing? It's annoying to deal with this blindly, so please clarify or suggest a way to find out.

Ok, as the docs suggest that the response will contain "created" or "exists", I've tried
const result = await this.client.use('test_polling_storage').createIndex({
  // ... same index definition as above
})
console.log('index created?', result.result)
got index created? exists and concluded that if the index was created before, it won't be re-created. It's not clear what will happen if I try to change the index definition, but at least now I have a means to find out.
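As another way to see what actually exists, CouchDB lists a database's indexes at GET /db/_index. A minimal sketch of querying it from Node (assuming Node 18+ for global fetch, a local CouchDB at localhost:5984, and the credentials from connectionParams):

// List all indexes on the database via GET /db/_index.
// The host, port, and basic-auth credentials here are assumptions.
const response = await fetch('http://localhost:5984/test_polling_storage/_index', {
  headers: {
    Authorization: 'Basic ' + Buffer.from(`${user}:${password}`).toString('base64'),
  },
})
const { total_rows, indexes } = await response.json()
// Each entry has a name, ddoc, and def, so you can check whether
// test_polling_storage--time_index is present and which fields it covers.
console.log(total_rows, indexes.map((i) => i.name))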

Related

Most performant way to Insert or Read (if record already exists) in Google Cloud Spanner

Assuming I have a cars table where vin is the primary key.
I want to insert a record (in a transaction) or read the record (if one already exists with the same PK).
What's the most performant way to insert the record, or read it if one already exists with the same PK?
This is my current approach:
Case A: Record does not exist
Insert record
Return record
Case B: Record already exists
Insert record
Check if error is due to the record already existing
Read the record
Return record
const car = { vin: '123', make: 'honda', model: 'accord' };
spannerDatabase.runTransactionAsync(async (databaseTransaction) => {
  try {
    // Try to insert car
    await databaseTransaction.insert('cars', car);
    await databaseTransaction.commit();
    return car;
  } catch (error) {
    await databaseTransaction.end();
    // Spanner "row already exists" error: the insert failed because there is
    // already a record with the same vin (PK)
    if (error.code === 6) {
      // Since the record already exists, I want to read it and return it.
      // What's the most performant way to do this?
      // (carsTable is the table handle, e.g. spannerDatabase.table('cars'))
      const existingRecord = await carsTable.read({
        columns: ['vin', 'make', 'model'],
        keys: [car.vin],
        json: true,
      });
      return existingRecord;
    }
  }
})
As @skuruppu mentioned in the comment above, your current example is mostly fine for what you are describing. It does, however, implicitly assume a couple of things, as you are not executing the read and the insert in the same transaction. That means that the two operations together are not atomic, and other transactions might update or delete the record between your two operations.
Also, your approach assumes that scenario A (record does not exist) is the most probable. If that is not the case, and it is just as probable that the record does exist, then you should execute the read in the transaction before the write.
You should also do that if there are other processes that might delete the record. Otherwise, another process might delete the record after you tried to insert the record, but before you try to read it (outside the transaction).
The above is only really a problem if there are other processes that might delete or alter the record. If that is not the case, and also won't be in the future, this is only a theoretical problem.
So to summarize:
1. Your example is fine if scenario A is the most probable and no other process will ever delete any records in the cars table.
2. You should execute the read before the write, using the same read/write transaction for both operations, if any of the conditions in 1 are not true.
3. The read operation that you are using in your example is the most efficient way to read a single row from a table.
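For the second case, a minimal sketch of the read-before-write pattern (assuming the same spannerDatabase handle and car object as in the question):

spannerDatabase.runTransactionAsync(async (tx) => {
  // Read first, inside the same read/write transaction as the insert.
  const [rows] = await tx.read('cars', {
    columns: ['vin', 'make', 'model'],
    keys: [car.vin],
  });
  if (rows.length > 0) {
    await tx.end(); // nothing to write, release the transaction
    return rows[0].toJSON(); // the record already exists
  }
  tx.insert('cars', car); // queue the mutation
  await tx.commit(); // commits atomically with the read above
  return car;
});

Because the read happens inside the read/write transaction, Spanner locks the keys it read, so no other process can delete or alter the row between the read and the commit.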

Having trouble incrementing objects inside an array with mongoose

So I've got an object that looks like this
{"_id":"5fb07ab6215679200cef0eb1","user":{"_id":"5fb07437538fcd2870e21a8e","email":"example#example.com","id":"5fb07437538fcd2870e21a8e"},"question":"question?","answers":[{"_id":"5fb07ab6215679200cef0eb2","answer":"Yes","votes":0}],"voters":[],"createdAt":"2020-11-15T00:47:50.156Z","updatedAt":"2020-11-15T00:47:50.156Z","__v":0,"id":"5fb07ab6215679200cef0eb1"}
and I'm trying to increment the votes field with this function, using findOneAndUpdate:
export const castVote = async (id, answersid) =>
  Poll.findOneAndUpdate(
    { id, 'answers._id': answersid },
    { $inc: { 'answers.$.votes': 1 } }
  );
As far as I can see, calling castVote("5fb07ab6215679200cef0eb1", "5fb07ab6215679200cef0eb2") works, as in it doesn't crash the server and doesn't give any errors back, but the votes variable in the answers object doesn't increase, so something must be wrong. Is there something obvious I'm missing here?
Got it working by simply dropping the id field from the filter, which I guess is enough since the _ids are uniquely created; the resulting query is sketched below.
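For reference, a minimal sketch of that fix (matching on the answer's _id alone, on the assumption that answer _ids are unique across polls):

export const castVote = async (answersid) =>
  Poll.findOneAndUpdate(
    // Match on the embedded answer's _id only; 'id' is a Mongoose virtual,
    // not a stored field, so filtering on it likely matched nothing.
    { 'answers._id': answersid },
    { $inc: { 'answers.$.votes': 1 } },
    { new: true } // return the updated document
  );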

Watch MongoDB to return changes along with a specified field value instead of returning fullDocument

I'm using MongoDB's watch() function to listen for changes made to a replica set. I know I can get the whole document (fullDocument) by passing { fullDocument: 'updateLookup' } to the watch method, like:
someModel.watch({ fullDocument: 'updateLookup' })
But what I really want to do is get just one extra field that isn't changed every time a new update is made.
Let's say a field called 'user_id': currently I only get the updatedFields and the fullDocument, which contains the 'user_id' along with a lot of other data that I would like to avoid.
What I have researched so far is the aggregation pipeline, but I couldn't figure out a way to implement it.
Can anybody help me figure out a way to do this?
Thanks everyone for suggesting; as @D.SM pointed out, I successfully implemented $project
Like this:
const filter = [
  { "$match": { "operationType": "update" } },
  {
    "$project": {
      "fullDocument.user_id": 1,
      "fullDocument.chats": 0,
      "fullDocument._id": 0,
      "fullDocument.first_name": 0,
      "fullDocument.last_name": 0
    }
  }
];
Then passed it to the watch() method, like:
const userDBChange = userChatModel.watch(filter, { fullDocument: 'updateLookup' });
Now I'm only getting user_id inside the fullDocument object when the operationType is update, hence reducing the data overhead returned from Mongo.
Thanks again @D.SM and others for trying to help me out ;)
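For completeness, a change stream set up this way is consumed like any other event emitter; a small sketch:

// Each change event now carries only fullDocument.user_id plus the
// standard change-stream metadata for update operations.
userDBChange.on('change', (change) => {
  console.log(change.fullDocument.user_id);
});
userDBChange.on('error', (err) => {
  // Handle resumable/non-resumable stream errors here.
  console.error('change stream error:', err);
});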

findOneAndUpdate works part of the time. MEAN stack

I'm working with the MEAN stack, and I'm trying to update the following object:
{
  _id: "the id",
  fields to be updated....
}
This is the function that does the updating:
function updateById(_id, update, opts) {
  var deferred = Q.defer();
  var validId = new RegExp("^[0-9a-fA-F]{24}$");
  if (!validId.test(_id)) {
    deferred.reject({ error: 'invalid id' });
  } else {
    collection.findOneAndUpdate({ "_id": new ObjectID(_id) }, update, opts)
      .then(function (result) {
        deferred.resolve(result);
      },
      function (err) {
        deferred.reject(err);
      });
  }
  return deferred.promise;
}
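(As an aside, a sketch of the same function without the deferred wrapper, since the driver's findOneAndUpdate already returns a promise; the deferred adds nothing here:)

function updateById(_id, update, opts) {
  var validId = /^[0-9a-fA-F]{24}$/;
  if (!validId.test(_id)) {
    // Q.reject produces an already-rejected promise.
    return Q.reject({ error: 'invalid id' });
  }
  // Wrap the driver's promise in a Q promise to keep the same return type.
  return Q(collection.findOneAndUpdate({ "_id": new ObjectID(_id) }, update, opts));
}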
This works with some of my objects, but doesn't work with others.
This is what is returned when it fails to update:
{
  ok: 1,
  value: null
}
When the function is successful in updating the object it returns this:
{
  lastErrorObject: {},
  ok: 1,
  value: {}
}
It seems like Mongo is unable to find the objects I'm trying to update when it fails. However, I can locate those objects within the Mongo shell using their _id.
Does anybody know why the driver would be behaving this way? Could my data have become corrupt?
Cheers!
I found the answer, and now this question seems more ambiguous, so I apologize if it was confusing.
The reason I was able to find some of the documents using ObjectID(_id) was that I had manually generated some _id fields using strings.
Now I feel like an idiot, but instead of deleting this question I decided to post the answer, just in case someone is running into a similar issue. If you save an _id as a string, the way you query the collection by _id changes.
Querying the collection with MongoDB-generated _ids:
collection.findOneAndUpdate({"_id": new ObjectID(_id)}, update, opts)
Querying the collection with manually generated _ids:
collection.findOneAndUpdate({"_id": _id}, update, opts)
In the second example _id is a string.
Hope this helps someone!
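If a collection can contain both kinds of _id, a small guard helps. A sketch (the helper name is hypothetical; it reuses the validity regex from the question):

// Hypothetical helper: use an ObjectID for 24-hex-char ids,
// and fall back to the raw string for manually generated ones.
function toQueryId(_id) {
  return /^[0-9a-fA-F]{24}$/.test(_id) ? new ObjectID(_id) : _id;
}

collection.findOneAndUpdate({ "_id": toQueryId(_id) }, update, opts);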

Saving subdocuments with mongoose

I have this:
exports.deleteSlide = function (data, callback) {
  customers.findOne(data.query, { 'files.$': 1 }, function (err, data2) {
    if (data2) {
      console.log(data2.files[0]);
      data2.files[0].slides.splice((data.slide - 1), 1);
      data2.files[0].markModified('slides');
      data2.save(function (err, product, numberAffected) {
        if (numberAffected == 1) {
          console.log("manifest saved");
          var back = { success: true };
          console.log(product.files[0]);
          callback(back);
          return;
        }
      });
    }
  });
}
I get the "manifest saved" message and a callback with success being true.
When I console.log the data when I first find it and compare that with a console.log after I save it, it looks like what I expect, and I don't get any errors.
However, when I look at the database after running this code, it looks like nothing was ever changed: the element that I should have deleted still appears.
What's wrong here?
EDIT:
For my query, I do {'name':'some string','files.name':'some string'}, and if the object is found, I get an array of files with one object in it.
I guess this is a subdoc.
I've looked around, and it says the rules for saving subdocs are different from saving the entire collection, or rather, that changes to subdocs are only applied when the root object is saved.
I've been getting around this by grabbing the entire root object, then doing loops to find the actual subdoc that I want, and after I manipulate that, I save the whole object.
Can I avoid doing this?
I'd probably just switch to using the native driver for this query, as it is much simpler. (For that matter, I recently dropped mongoose on my primary project and am happy with the speed improvements.)
You can find documentation on getting access to the native collection elsewhere.
Following advice here:
https://stackoverflow.com/a/4588909/68567
customersNative.update(data.query, { $unset: { "slides.1": 1 } }, function (err) {
  if (err) { return callback(err); }
  customersNative.findAndModify(data.query, [],
    { $pull: { 'slides': null } }, { safe: true, 'new': true },
    function (err, updated) {
      // 'updated' has the new object
    });
});
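The same two-step trick (unset the array slot, then pull the resulting null, since MongoDB has no "remove by index") should also work through Mongoose update queries without dropping to the native driver. A hedged sketch, assuming data.query matches the file by 'files.name' as in the edit above and that data.slide is 1-based:

// Step 1: $unset nulls out the array slot. The positional $ refers to the
// file matched by data.query's 'files.name' clause.
var slidePath = 'files.$.slides.' + (data.slide - 1);
var unset = {};
unset[slidePath] = 1;
customers.update(data.query, { $unset: unset }, function (err) {
  if (err) { return callback(err); }
  // Step 2: pull the null left behind by step 1.
  customers.update(data.query, { $pull: { 'files.$.slides': null } }, function (err) {
    if (err) { return callback(err); }
    callback({ success: true });
  });
});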