CouchDB Replication overwrites Documents

I know that when you create a document on database A, replicate the database, then make changes to it on DB A and DB B and THEN replicate again, you get a conflict, but both versions exist in the revision tree.
But when you create a doc with an ID XY on DB A, then create a doc with the same ID but different content on DB B, and then replicate, only one of the versions exists. The other one gets overwritten.
Is the reason for that that, because both documents have no version they descend from, the replication algorithm can't know that they both exist?
And if yes, is there a way of saving both versions?
The use case is that there are two databases, one local, one online, which sync bidirectionally. Users create docs on both DBs. But I need to make sure that IF the connection fails for a while, both sides CAN still create docs and I can merge them whenever the connection is back. I guess the hard part here is the CREATE instead of the UPDATE, right?

Firstly, and for total clarity, CouchDB does not overwrite data. The only way for data you've written to be forgotten is to make a successful update to a document.
CouchDB will introduce new branches (aka conflicts) during replication to preserve all divergences of content. If what you've seen is reproducible, then it's a bug. Below is my transcript, though, which shows that CouchDB indeed preserves both revisions as expected:
curl 127.0.0.1:5984/db1 -XPUT
{"ok":true}
curl 127.0.0.1:5984/db2 -XPUT
{"ok":true}
curl 127.0.0.1:5984/db1/mydoc -XPUT -d '{"foo":true}'
{"ok":true,"id":"mydoc","rev":"1-89248382088d08ccb7183515daf390b8"}
curl 127.0.0.1:5984/db2/mydoc -XPUT -d '{"foo":false}'
{"ok":true,"id":"mydoc","rev":"1-1153b140e4c8674e2e6425c94de860a0"}
curl 127.0.0.1:5984/_replicate -Hcontent-type:application/json -d '{"source":"db1","target":"db2"}'
{"ok":true,...}
curl '127.0.0.1:5984/db2/mydoc?conflicts=true'
{"_id":"mydoc","_rev":"1-89248382088d08ccb7183515daf390b8","foo":true,"_conflicts":["1-1153b140e4c8674e2e6425c94de860a0"]}
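Once both revisions survive replication, merging them is an application-level step: read the document with conflicts=true, fetch the conflicting revisions, build your merged content, then write the merge and delete the losers in one _bulk_docs request. A minimal Python sketch of building that payload; the merge rule shown (union of keys, winner takes precedence) is an assumption for illustration, not a general recipe:

```python
def build_merge_payload(doc, conflict_docs):
    """Build a _bulk_docs payload that writes a merged winner and
    deletes the conflicting revisions in one request.

    doc: the current winning revision (dict containing _id/_rev)
    conflict_docs: the losing revisions, fetched individually by rev.
    The merge rule here (union of keys, winner wins ties) is only an
    illustration -- real apps need a domain-specific merge.
    """
    merged = {}
    for d in conflict_docs:  # losers first, so the winner wins ties
        merged.update({k: v for k, v in d.items() if not k.startswith("_")})
    merged.update({k: v for k, v in doc.items() if not k.startswith("_")})
    merged["_id"] = doc["_id"]
    merged["_rev"] = doc["_rev"]  # update the winning branch in place
    deletions = [
        {"_id": doc["_id"], "_rev": d["_rev"], "_deleted": True}
        for d in conflict_docs
    ]
    return {"docs": [merged] + deletions}
```

POST the returned payload to /db/_bulk_docs; CouchDB applies each edit independently, so check the per-document responses.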


How to initialize Alembic on an existing DB

I have an existing app which uses SQLAlchemy for DB access. It works well.
Now I want to introduce DB migrations, and I've read that Alembic is the recommended way. Basically I want to start with the current DB state (not an empty DB!) as revision 0, and then track further revisions going forward.
So I installed alembic (version 1.7.3) and put
from my_project.db_tables import Base
target_metadata = Base.metadata
into my env.py. The Base is just the standard SQLAlchemy Base = sqlalchemy.ext.declarative.declarative_base() within my project (which, again, works fine).
Then I ran alembic revision --autogenerate -m "revision0", expecting to see an upgrade() method that gets me to the current DB state from an empty DB. Or maybe an empty upgrade(), since it's the first revision, I don't know.
Instead, the upgrade() method is full of op.drop_index and op.drop_table calls, while downgrade() is all op.create_index and op.create_table. Basically the opposite of what I expected.
Any idea what's wrong?
What's the recommended way to "initialize" migrations from an existing DB state?
OK, I figured it out.
The existing production DB has lots of stuff that alembic revision --autogenerate is not picking up on. That's why its generated migration scripts are full of op.drop_* calls in upgrade() and op.create_* calls in downgrade().
So I'll have to manually clean up the generated scripts every time, or automate this cleanup somehow programmatically, outside of Alembic.
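Rather than hand-editing every generated script, autogenerate can be told to skip objects up front. Alembic's context.configure accepts an include_object hook; a sketch for env.py, where the filtering rule (ignore anything that only exists in the reflected database and has no counterpart in the model metadata) is an assumption matching the situation described above:

```python
def include_object(object, name, type_, reflected, compare_to):
    """Filter hook for Alembic autogenerate.

    Skip objects that autogenerate only knows from reflecting the live
    database (reflected=True) and that have no matching object in the
    model metadata (compare_to is None) -- exactly the ones that would
    otherwise become op.drop_* calls in upgrade().
    """
    if reflected and compare_to is None:
        return False
    return True

# In env.py, pass the hook to the migration context, e.g.:
# context.configure(
#     connection=connection,
#     target_metadata=target_metadata,
#     include_object=include_object,
# )
```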

What is the best way to export CouchDB?

I'm new to CouchDB, and sometimes I need to export my database.
Until now I used this command:
curl -X GET http://127.0.0.1:5984/nomeDB/_all_docs\?include_docs\=true > /Users/bob/Desktop/db.json
But this way, before importing my dump with this command:
curl -d @db.json -H "Content-type: application/json" -X POST http://127.0.0.1:5984/dbName/_bulk_docs
I have to correct the JSON with
"rows": [ =====> "docs": [
And this way my documents have one extra key, the doc key.
What is the best way to make a dump to pass, for example, to another developer?
The easiest export/import and backup/restore strategy is to simply copy the raw database file. Usually, this file is located at /var/lib/couchdb/my-database.couch. You can safely copy this file, even while the database is running. (Source)
Another option is to use replication to copy entire databases between servers. This option can be done incrementally, unlike the first.
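If you do want a JSON dump rather than a file copy or replication, the rows-to-docs rewrite described in the question is easy to script instead of doing by hand. A minimal sketch (the file names in the commented usage are placeholders; it also strips _rev so the docs import cleanly into a fresh database):

```python
import json

def alldocs_to_bulkdocs(dump):
    """Convert the JSON returned by GET /db/_all_docs?include_docs=true
    into a payload suitable for POST /db/_bulk_docs.

    Each row carries the full document under its "doc" key; we unwrap
    it and drop _rev so CouchDB assigns fresh revisions on import.
    """
    docs = []
    for row in dump["rows"]:
        doc = dict(row["doc"])
        doc.pop("_rev", None)
        docs.append(doc)
    return {"docs": docs}

# Example usage (file names are placeholders):
# with open("db.json") as f:
#     payload = alldocs_to_bulkdocs(json.load(f))
# with open("bulk.json", "w") as f:
#     json.dump(payload, f)
```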

How to add entries with slapadd in OpenLDAP replication (syncrepl)

We have OpenLDAP replication with syncrepl, and I don't know how to add entries to it with slapadd.
On a standalone server it works fine, but when I add entries on one of the machines in the replication setup, the second machine fails to start slapd.
Thanks
Unfortunately slapadd doesn't write to the accesslog and thus the modifications aren't replicable. This is especially bad, because some attributes can't be modified via ldapadd.
If you only need ordinary attributes, use ldapadd instead.
UPDATE:
It looks like you can use the -w switch:
Write syncrepl context information. After all entries are added, the
contextCSN will be updated with the greatest CSN in the database.

Clearing XHProf Data not working

I got XHProf working with XHGui. I'd like to clear the data or restart profiling fresh, for a certain site or globally. How do I clear/reset XHProf? I assume I have to delete the logs in MongoDB, but I'm not familiar with Mongo and I don't know the collections it stores the info in.
To clear XHProf with XHGui, log into the mongo console and drop the results collection as follows:
mongo
db.results.drop()
The first line opens the mongo console. The last command drops the results collection, which XHGui will recreate on the next request that is profiled.
Some other useful commands:
show collections //shows all collections
use results //the same meaning as in mysql, i believe
db.results.help() //to find out all commands available for the results collection
I hope it helps
I had a similar issue in my Drupal setup, using the Devel module in Drupal.
After a few checks and some reading on how the xhprof library saves the data, I was able to figure out where the data is being saved.
The library checks whether a path is defined in php.ini:
xhprof.output_dir
If there's nothing defined in your php.ini, it will use the system temp dir:
sys_get_temp_dir()
In short, print out these values to find the xhprof data:
$xhprof_dir = ini_get("xhprof.output_dir");
$systemp_dir = sys_get_temp_dir();
If $xhprof_dir doesn't return any value, check $systemp_dir; the xhprof data should be there with an .xhprof extension.
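Once you know the directory, clearing the file-based profiles just means deleting the .xhprof files there. A small sketch in Python (equivalent to rm /path/*.xhprof; the output_dir argument stands in for whichever of the two locations above your setup resolves to):

```python
import glob
import os

def clear_xhprof_runs(output_dir):
    """Delete all saved xhprof runs (*.xhprof files) in output_dir
    and return how many files were removed."""
    removed = 0
    for path in glob.glob(os.path.join(output_dir, "*.xhprof")):
        os.remove(path)
        removed += 1
    return removed
```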

CouchDB error while accessing a view: "file corruption"?

I have a CouchDB installation which generally opens fine in Futon; in the "All documents" view I can see all the objects. However, when I try accessing one of my views, after a couple of seconds I get the error "Error: file_corruption file corruption" in an alert dialog box.
The DB has been moved between disks in the past, as we changed disks to make sure that we have enough space.
If the file is corrupt, it shouldn't work at all; is there any way I could repair it?
The DB is quite big, already more than 150 GB; I even tried making the same view with a different name, but the error persisted.
I think that moving the files is a bad idea. You would do better to replicate your DB from the old server to the new one:
$ curl -H 'Content-Type: application/json' -X POST http://newserver:5984/_replicate -d ' {"source": "http://oldserver:5984/foo", "target": "bar", "create_target": true} '
Remove the CouchDB view index files and try again.
They are located at either:
/var/lib/couchdb/.{dbname}_design/
/usr/local/var/lib/couchdb/.{dbname}_design/
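Deleting that directory is safe in the sense that it only holds derived data: CouchDB rebuilds the view indexes from the database file on next access (which can take a while on a 150 GB database). A sketch of the cleanup; the base_dir argument stands in for whichever of the two default locations above your installation uses:

```python
import os
import shutil

def remove_view_indexes(base_dir, dbname):
    """Delete the view index directory .{dbname}_design under base_dir,
    forcing CouchDB to rebuild the views on next access.
    Returns True if a directory was found and removed."""
    index_dir = os.path.join(base_dir, ".%s_design" % dbname)
    if os.path.isdir(index_dir):
        shutil.rmtree(index_dir)
        return True
    return False
```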
