I have a CouchDB installation that generally opens fine in Futon; in the "All documents" view I can see all the objects. However, when I try accessing one of my views, after a couple of seconds I get the error "Error: file_corruption file corruption" in an alert dialog box.
The database has been moved between disks in the past, when we changed disks to make sure we had enough space.
If the file is corrupt, shouldn't it fail to work at all? Is there any way I could repair it?
The database is already quite big, more than 150 GB. I even tried creating the same view under a different name, but the error persisted.
I think that moving the files around is a bad idea. You would be better off replicating your database from the old server to the new one:
$ curl -H 'Content-Type: application/json' -X POST http://newserver:5984/_replicate -d ' {"source": "http://oldserver:5984/foo", "target": "bar", "create_target": true} '
Remove the CouchDB view index files and try again.
They are located at either
/var/lib/couchdb/.{dbname}_design/
/usr/local/var/lib/couchdb/.{dbname}_design/
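As a sketch of those steps, assuming the database is named mydb, that CouchDB runs as a system service, and that the design document and view are called mydesign/myview (all of these names are examples, not from the original question):

```shell
# Hypothetical database name; pick whichever index path exists on your system.
DB=mydb
INDEX_DIR="/var/lib/couchdb/.${DB}_design"

# Stop CouchDB, delete the view index files, then restart:
sudo service couchdb stop
sudo rm -rf "$INDEX_DIR"
sudo service couchdb start

# The first request to any view rebuilds the index from the main .couch file;
# on a 150 GB database this rebuild can take a long time:
curl "http://127.0.0.1:5984/$DB/_design/mydesign/_view/myview?limit=1"
```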
I'm trying to run repairDatabase on MongoDB on Ubuntu 16.04, but it fails with the error "errno:24 Too many open files" ("code" : 16818).
I've raised "ulimit -n" to 1024000 and restarted the server, but I still get the same error.
It does not seem possible to raise it any higher, and I'm stuck with no ideas. Please help!
We have faced a similar issue. First, check the number of file descriptors actually used by the "mongod" process while the repairDatabase() command is running; you can verify this with "lsof -p <mongod_pid>". Also note that if you want to change the maximum number of open files for the process, you need to edit the "/etc/security/limits.conf" file and add an entry for the user mongod runs as.
Edit:
There is also an open feature request to open one file per database; currently "wiredtiger" opens one file per collection and one per index. One should also seriously look into horizontal scaling by sharding if cost is not a serious issue.
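A sketch of those checks, plus the limits.conf entries (the user name "mongod" and the limit values are examples; your service may run as a different user):

```shell
# Count open file descriptors of the running mongod while the repair runs:
MONGOD_PID=$(pgrep -x mongod)
lsof -p "$MONGOD_PID" | wc -l

# Check the limit the process actually received; the ulimit of your login
# shell is not necessarily what the service got at startup:
grep "open files" /proc/"$MONGOD_PID"/limits

# Example entries for /etc/security/limits.conf (takes effect on the next
# login/session of that user, not for already-running processes):
#   mongod soft nofile 64000
#   mongod hard nofile 64000
```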
I'm new to CouchDB, and sometimes I need to export my database.
Until now I used this command:
curl -X GET http://127.0.0.1:5984/nomeDB/_all_docs\?include_docs\=true > /Users/bob/Desktop/db.json
But this way, before importing my dump with this command
curl -d @db.json -H "Content-Type: application/json" -X POST http://127.0.0.1:5984/dbName/_bulk_docs
I have to correct the JSON, changing
"rows": [ =====> "docs": [
and this way my documents end up with one extra key, the doc key.
What is the best way to make a dump to pass, for example, to another developer?
The easiest export/import and backup/restore strategy is to simply copy the raw database file. Usually, this file is located at /var/lib/couchdb/my-database.couch. You can safely copy this file, even while the database is running. (Source)
Another option is to use replication to copy entire databases between servers. This option can be done incrementally, unlike the first.
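If you do want a portable JSON dump rather than a file copy or replication, the rows-to-docs rewrite described in the question can be automated. A sketch using jq (assuming jq is installed; it also strips the _rev field so the documents import cleanly into a fresh database):

```shell
# Dump all documents, then reshape {"rows":[{"doc":...},...]}
# into the {"docs":[...]} payload that _bulk_docs expects:
curl -X GET 'http://127.0.0.1:5984/nomeDB/_all_docs?include_docs=true' > db.json
jq '{docs: [.rows[].doc | del(._rev)]}' db.json > bulk.json

# Import into the target database:
curl -d @bulk.json -H "Content-Type: application/json" \
     -X POST http://127.0.0.1:5984/dbName/_bulk_docs
```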
After a server crash, I get a weird problem with database fixup. The console constantly throws a block of errors: "Error checking database File does not exist". I did not find any databases with these names.
Here is an image, as I am not allowed to directly include pics:
https://pbs.twimg.com/media/CA87BQfUcAA21Cq.png:large
How does Domino know which databases to fixup?
How may I get rid of these errors?
Any idea appreciated.
Rene
Apparently I found a clue myself:
http://www-01.ibm.com/support/docview.wss?uid=swg1LO78425
So my next steps were fixup -j and compact at the command-line level, without the server being up. I also deleted dbdirman.nsf, as suggested by Torsten.
I stumbled over a corrupt database which caused the fixup to crash. After moving the database away and recreating it from backup, the server could be started without an issue.
For now, the problem seems to be solved.
I'm using psloglist to analyze the saved event log for my Windows 2003 server; however, the critical information I need is not retrieved properly and "message text not available. insertion strings" is appended instead. I've been searching for a long while and am still unable to find a solution or the root cause. Has anybody come across the same issue and could give some help? Thanks.
psloglist \\localhost -d 7 application -o "Source" | find "MessageText"
I know that when you create a document on database A, replicate the database, then make changes to it on both DB A and DB B and THEN replicate again, you get a conflict, but both versions exist in the revision tree.
But when you create a doc with an id XY on DB A and then create a doc with the same id but different content on DB B and then replicate, only one of the versions exists. The other one gets overwritten.
Is the reason for that that both documents have no version they descend from, so the replication algorithm can't know that they both exist?
And if yes, is there a way of saving both versions?
The use case is that there are two databases, one local and one online, which sync bidirectionally. Users create docs on both DBs. But I need to make sure that IF the connection fails for a while, both CAN still create docs and I can merge them whenever the connection is back. I guess the hard part here is the CREATE instead of UPDATE, right?
Firstly, and for total clarity, CouchDB does not overwrite data. The only way for data you've written to be forgotten is to make a successful update to a document.
CouchDB will introduce new branches (aka conflicts) during replication to preserve all divergences of content. If what you've seen is reproducible, then it's a bug. Below is my transcript, though, which shows that CouchDB indeed preserves both revisions as expected:
curl 127.0.0.1:5984/db1 -XPUT
{"ok":true}
curl 127.0.0.1:5984/db2 -XPUT
{"ok":true}
curl 127.0.0.1:5984/db1/mydoc -XPUT -d '{"foo":true}'
{"ok":true,"id":"mydoc","rev":"1-89248382088d08ccb7183515daf390b8"}
curl 127.0.0.1:5984/db2/mydoc -XPUT -d '{"foo":false}'
{"ok":true,"id":"mydoc","rev":"1-1153b140e4c8674e2e6425c94de860a0"}
curl 127.0.0.1:5984/_replicate -Hcontent-type:application/json -d '{"source":"db1","target":"db2"}'
{"ok":true,...}
curl '127.0.0.1:5984/db2/mydoc?conflicts=true'
{"_id":"mydoc","_rev":"1-89248382088d08ccb7183515daf390b8","foo":true,"_conflicts":["1-1153b140e4c8674e2e6425c94de860a0"]}
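To merge the two creations afterwards, you resolve the conflict the same way as any other: write the merged content as an update to the winning revision, then delete the losing revision so only one leaf remains. A sketch, continuing the transcript above (the revision ids are taken from it):

```shell
# Delete the losing conflict revision; mydoc in db2 then has a single leaf:
curl -X DELETE '127.0.0.1:5984/db2/mydoc?rev=1-1153b140e4c8674e2e6425c94de860a0'
```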