Deleted documents from local client are not reflected on remote server after replication - couchdb

I have written my own JavaScript-based CouchDB replication client.
I have a doc at my local client which I have deleted:
{"_id":"5ebe99c6b179d0be4ff05bd43d0038d1","_deleted":true,"_rev":"2-c6667ac72ec89a03496f6e402265a2eba6a1d695"}
I make a bulk docs request to push it to the CouchDB server. I get an empty array as the response.
I check on the remote CouchDB server using /{db}/doc_id?rev=deleted_doc_rev, and it's present there too.
However, I still see the doc at the remote server, in spite of the fact that the doc is deleted and has been replicated.

Ok guys,
The problem was in my replicator. I created it without considering the MVCC structure of CouchDB. In the replication protocol, when making the bulk request, one needs to send the MVCC structure (the revision history) present locally to the server, in order to override the current branch at the server (CouchDB).
Previously my bulk request looked like this:
POST http://0.0.0.0:5984/{db}/_bulk_docs
{
  "docs": [
    {
      "_id": "5ebe99c6b179d0be4ff05bd43d0038d1",
      "_deleted": true,
      "_rev": "2-c6667ac72ec89a03496f6e402265a2eba6a1d695"
    }
  ],
  "new_edits": false
}
However, to get correct MVCC-based replication, one needs to make the bulk request like this:
POST http://0.0.0.0:5984/{db}/_bulk_docs
{
  "docs": [
    {
      "_deleted": true,
      "_id": "5ebe99c6b179d0be4ff05bd43d0038d1",
      "_rev": "2-7a5de99d17a4504b2f74cda829bf7054",
      "_revisions": {
        "start": 2,
        "ids": [
          "7a5de99d17a4504b2f74cda829bf7054",
          "0c4f4e08acc7931c290193b1434f5e2b"
        ]
      }
    }
  ],
  "new_edits": false
}
I figured this out by studying the traffic between correct replicator implementations.
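In other words: fetch the source revision with ?revs=true so the response carries the _revisions field, then push it unchanged with new_edits: false. A minimal sketch of that flow, assuming Node 18+ for the global fetch(); the helper name and URLs are illustrative, not the original replicator:

```javascript
// Push one (possibly deleted) doc to a target CouchDB while preserving
// its revision tree.
async function pushDocWithHistory(sourceDb, targetDb, id, rev) {
  // ?revs=true makes CouchDB include _revisions (the MVCC ancestry)
  const res = await fetch(`${sourceDb}/${id}?rev=${rev}&revs=true`);
  const doc = await res.json();

  // new_edits:false stores the given revs verbatim instead of minting
  // new ones; this is what replicators must use.
  const push = await fetch(`${targetDb}/_bulk_docs`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ docs: [doc], new_edits: false }),
  });
  return push.json();
}
```

Note that _bulk_docs with new_edits: false returning an empty array is the success case, which is why the push above appeared to work.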

vscode: provide link to json schema via extension

I need to provide a JSON schema to other users without using the JSON Schema Store. For example, if you look at the following link, users are able to configure their own schema, but here I want everyone who installs my VS Code extension to get this JSON schema.
This is my question:
How should I link the schema for users? For example, for my internal usage, this is what I did:
"json.schemas": [
{
"fileMatch": [
"/*.tzr.json"
],
"url": "./tzrschema.json"
}
]
I put the schema in my workspace, link it via url, and it works for me.
Assuming my extension ships a folder with the file tzrschema.json, how should I link the user's
workspace to the file that I've provided via the extension?
You should use the jsonValidation contribution point in package.json:
{
  "contributes": {
    "jsonValidation": [
      {
        "fileMatch": "/*.tzr.json",
        "url": "./tzrschema.json"
      }
    ]
  }
}

Loopback: How to update multiple objects over REST?

On my current project I'm using a Loopback backend as the REST API. My question is actually quite simple, but I wasn't able to figure it out on my own.
On my client I have a batch of message objects which are updated by the user - these can add up to 50-100 messages.
Now I want to update the items using the Loopback backend. There are some default endpoints which support the PUT or PATCH methods. However, as soon as I pass an array, I receive an error message complaining that an item with that id already exists.
It seems wrong to me to fire off 100 HTTP requests just to update a bunch of items. Any suggestions?
For completeness, here is the error message:
{
  "error": {
    "name": "Error",
    "status": 500,
    "message": "Failed with multiple errors, see `details` for more information.",
    "details": [
      {
        "code": 11000,
        "index": 0,
        "errmsg": "E11000 duplicate key error collection: xxx.Message index: _id_ dup key: { : ObjectId('588bc0afcf8d8c7b13ff44e2') }",
        "op": {
          // message object
        }
      }
    ]
  }
}
I would create a custom remote method that receives all of your messages. On the server, Loopback then offers multiple options to update/insert in batch.
Disclaimer: remote methods are a LoopBack 3.x notion. I'm not sure what the 4.x equivalent is.

Proper way to update mongodb document from REST

Simply put: I have an AngularJS client. He wants to use every API through a CRUD architecture. For example:
GET /user
GET /user/:id
POST /user
PUT /user/:id
DEL /user/:id
These are all the endpoints he wants to use for my schema (using a MongoDB database).
I have a user schema like this (simplified):
{
  id: ObjectId("..."),
  name: "Foo fooer",
  itemIds: [
    ObjectId("..."),
    ObjectId("..."),
    ObjectId("...")
  ]
}
and an Items schema (no need to show it for this question).
We need to add/remove item ids from user.itemIds.
The client wants to create a new userItems schema:
{
  id: ObjectId("..."),
  userId: ObjectId("..."),
  itemID: ObjectId("...")
}
and he wants to remove user.itemIds from the user schema and create 4 CRUD endpoints under /userItems.
I think this is the wrong approach: it normalizes the Mongo database.
But I don't know which of these is better from both sides (client and server):
1) Create 2 endpoints, POST /UserItem and DEL /UserItem, to update the items in user.itemIds.
2) Update user.itemIds using the existing PUT /user endpoint, but the client needs to send the whole array of itemIds to update it (if there are many, this is probably a bad approach).
The client says these 2 approaches are bad, and he only knows his SQL REST architecture (where everything is normalized). How can I prove to him that he is wrong? Because he said this to me:
The server should adapt to the client and not vice versa.
Thank you.
This is a good article about RESTful API design.
In brief, your RESTful API should:
Focus on resources
Make sense
Be consistent
For example:
When you want to add an item to a user: find the user first, not the other way around. So POST /users/:userId/items/ to add a new item to a user is good. Or DELETE /users/:userId/items/:itemId to remove an item from the user.
When you want to find all the users who have this specific item: GET /items/:itemId/users/.
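On the MongoDB side, those nested routes map directly onto array-update operators, so the client never has to send the whole itemIds array. A sketch (the field name follows the schema above; the helper names are mine):

```javascript
// Build the update documents behind the nested routes.

// POST /users/:userId/items -> add an item id without creating duplicates
function addItemUpdate(itemId) {
  return { $addToSet: { itemIds: itemId } };
}

// DELETE /users/:userId/items/:itemId -> remove a single item id
function removeItemUpdate(itemId) {
  return { $pull: { itemIds: itemId } };
}

// e.g. db.collection('users').updateOne({ _id: userId }, addItemUpdate(itemId))
```

This avoids both the extra userItems collection and the "send the entire array" problem of option 2.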

Rewrite URLs in CouchDB/PouchDB-Server

If it is possible, how would I achieve the following URL rewrites using PouchDB Server?
At /index.html, display the HTML output of /index/_design/index/_show/index.html.
At /my_database/index.html, display /my_database/_design/my_database/_show/index.html.
My aim is to use PouchDB (and eventually CouchDB) as a stand-alone web server.
I am struggling to translate the rewrite documentation into working code.
Apache CouchDB uses an HTTP API and (consequently) can be used as a static web server--similar to Nginx or Apache HTTPD, but with the added bonus that you can also use MapReduce views, replication, and the other bits that make up Apache CouchDB.
Given just the core API, you could store an entire static site as attachments on a single JSON document and serve each file from its own URL. If that single document is a _design document, then you get the added value of the rewriter.
Here's an example faux JSON document that would do just that:
{
  "_id": "_design/site",
  "_attachments": {
    "index.html": {
      "content_type": "text/html",
      "data": "..."
    },
    "images/logo.png": {
      "content_type": "image/png",
      "data": "..."
    }
  },
  "rewrites": [
    {
      "from": "/",
      "to": "index.html"
    }
  ]
}
The actual value of the "data": "..." would be the base64 encoded version of the file. See the Creating Multiple Attachments example in the CouchDB Docs.
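For example, in Node the attachment stanza could be built like this (the helper function is my own illustration, not part of CouchDB):

```javascript
// Build one _attachments entry for an inline (base64) CouchDB attachment.
function attachmentEntry(contentType, contents) {
  return {
    content_type: contentType,
    // CouchDB expects standard base64 with no line breaks
    data: Buffer.from(contents).toString('base64'),
  };
}

// e.g. { "_attachments": { "index.html": attachmentEntry('text/html', html) } }
```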
You can also use an admin UI for CouchDB such as Futon or Fauxton--available at http://localhost:5984/_utils--both of which offer file upload features. However, those systems will require that the JSON document exist first and will PUT the attachment into the database directly.
Once that's completed, you can then setup a virtual host entry in CouchDB (or Cloudant) which points to the _rewrite endpoint within that design document. Like so:
[vhosts]
example.com = /example-com/_design/site/_rewrite/
If you're not hosting on port 80, then you'll need to request the site at http://example.com:5984/.
Using a _show function (as in your example) is only necessary if you want to transform the JSON into HTML (or different JSON, XML, CSV, etc.). If you only want static hosting, then the option above works fabulously. ^_^
There are also great tools for creating these documents. couchapp.py and couchdb-push are the ones I use most often and both support the CouchApp filesystem mapping "spec".
Hope that helps!

Authentication always failing when connecting to MongoDB

I am using node/express
node_modules:
"mongodb": "2.0.33",
"mongoose": "3.8.15"
mongo shell version: 3.0, and MongoDB server 3.0
I'm able to connect to my mongoDB just fine, but if I pass in any authentication parameters, it will fail:
connection error: { [MongoError: auth failed] name: 'MongoError', ok: 0, errmsg: 'auth failed', code: 18 }
The following shows up in the logs when this happens:
2015-06-13T15:10:09.863-0400 I ACCESS [conn8] authenticate db: mydatabase { authenticate: 1, user: "user", nonce: "xxx", key: "xxx" }
2015-06-13T15:10:09.863-0400 I ACCESS [conn8] Failed to authenticate user@mydatabase with mechanism MONGODB-CR: AuthenticationFailed UserNotFound Could not find user user@mydatabase
I've tried quite a few variations to get this to work.
Here's what happens when I do the show users command in the mongo shell while on the appropriate database:
{
  "_id": "mydatabase.user",
  "user": "user",
  "db": "mydatabase",
  "roles": [
    {
      "role": "readWrite",
      "db": "mydatabase"
    }
  ]
}
Here's my attempt to connect to this particular database while passing in the correct parameters:
mongoose.connect('mongodb://user:password@host:port/mydatabase');
For good measure, I also tried passing in an options hash instead of passing the params via the URI:
mongoose.connect('mongodb://host:port/mydatabase',{user: 'user',pass: 'password'});
Strangely enough, this works when done from the shell:
mongo mydatabase -u user -p password
so clearly, the credentials are right, and it's lining them up to the correct database, but something about the connection with Mongoose is not working...
Here is the shell command I passed in when creating that user:
db.createUser({
  user: "user",
  pwd: "password",
  roles: [
    { role: "readWrite", db: "mydatabase" }
  ]
});
I got a success message from this, and I confirmed it by running the show users command while on mydatabase.
I'm at a real loss here.... Here's some of the prior research I have done that hasn't yet given me success:
Cannot authenticate into mongo, "auth fails"
This answer suggests that it isn't working because authentication happens at the database level, so I'm missing some sort of config option for my mongo instance. However, the docs now say that that level of authentication is disabled by default, and the docs the answer links to have since been deprecated.
MongoDB & Mongoose accessing one database while authenticating against another (NodeJS, Mongoose)
uses an older version of Mongo that still has addUser.
On top of that, I don't see why that would work, given that it suggests adding a parameter to the 'auth' options that isn't listed in the documentation:
http://mongodb.github.io/node-mongodb-native/api-generated/db.html#authenticate
http://mongoosejs.com/docs/connections.html
This is basically what I'm trying now, but it isn't working.
authenticate user with mongoose + express.js
I've tried a number of answers that involved doing something of this sort and got the same error. Also, I'd rather avoid solutions that require 80+ lines of code just to authenticate, for now. I want to get basic authentication working first.
You mentioned that you are using MongoDB 3.0. In MongoDB 3.0, it now supports multiple authentication mechanisms.
Salted Challenge Response Authentication Mechanism (SCRAM-SHA-1) - the default in 3.0
MongoDB Challenge and Response (MONGODB-CR) - the previous default (< 3.0)
If you started with a new 3.0 database with new users created, they would have been created using SCRAM-SHA-1.
So you will need a driver capable of that authentication:
http://docs.mongodb.org/manual/release-notes/3.0-scram/#considerations-scram-sha-1-drivers
If you had a database upgraded from 2.x with existing user data, they would still be using MONGODB-CR, and the user authentication database would have to be upgraded:
http://docs.mongodb.org/manual/release-notes/3.0-scram/#upgrade-mongodb-cr-to-scram
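The upgrade itself is a single command run against the admin database; this is a mongo-shell sketch and only applies if your users were carried over from 2.x:

```
// In the mongo shell, as a user with admin privileges:
use admin
db.adminCommand({ authSchemaUpgrade: 1 })
```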
Specific to your particular environment: that version of Mongoose doesn't appear to support MongoDB 3.0, due to the MongoDB Node.js driver it uses (see the compatibility page on the Mongoose site--sorry, can't post more than 2 links currently).
I would suggest you update Mongoose.
For Mongo v3.0+:
The db field (outside of the role object) matters here. As far as I can tell, it is not settable by passing it into the createUser command.
The way to set it is to run use <database> before issuing the createUser command.
Unless you do it that way, the db field inside the role may have the correct and intended value, but auth will still not work.
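So a creation sequence that ends up with the right authentication database looks roughly like this (mongo-shell sketch, using the database and credentials from the question):

```
// Switch to the target database first, so the user is created in
// (and authenticates against) mydatabase rather than test or admin:
use mydatabase
db.createUser({
  user: "user",
  pwd: "password",
  roles: [{ role: "readWrite", db: "mydatabase" }]
})
```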
