I have a master CouchDB database, which is replicated to a local DB every time the local application starts.
The user can modify the local docs, but I want these docs to be deleted when the replication starts if they have disappeared from the master db.
How can I achieve that?
This is already how replication works. When a document is modified (including deletion), that change gets replicated.
The only problem you may encounter is that if a local change is made at the same time a deletion occurs, you will get a conflict on the next sync.
So you need your local app to do some sort of conflict resolution, which selects the deleted revision. I suggest reading about the CouchDB Replication and Conflict Model as a starting place.
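If you do hit that case, one way to resolve it from the local side is with plain HTTP calls against the local database. A rough sketch; the database name, document id, and revision ids below are placeholders:

# Check whether the doc has a deleted conflicting revision, i.e. it was
# deleted on the master while being edited locally:
curl 'http://localhost:5984/localdb/measurement-42?deleted_conflicts=true'
# -> {"_id":"measurement-42","_rev":"3-def...","_deleted_conflicts":["3-abc..."],...}

# Resolve in favour of the deletion by deleting the surviving (edited) revision:
curl -X DELETE 'http://localhost:5984/localdb/measurement-42?rev=3-def...'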
Why is perforce giving this error when I try to create a checkpoint?
Can I restore the entire database from just a checkpoint file and the journal file?
What am I doing wrong, and how does this work?
Why is the perforce user guide a giant book, and why are there no video tutorials online?
Why is perforce giving this error when I try to create a checkpoint?
You specified an invalid prefix (//. is not a valid filename). If you want to create a checkpoint that doesn't have a particular prefix, just omit that argument:
p4d -jc
This will create a checkpoint called something like checkpoint.8 in the server root directory (P4ROOT), and back up the previous journal file (journal.7) in the same location.
Can I restore the entire database from just a checkpoint file and the journal file?
Yes. The checkpoint is a snapshot of the database at the moment in time when you took the checkpoint. The journal file records all transactions made after that point.
If you restore from just the checkpoint, you will recover the database as of the point in time at which the checkpoint was taken. If you restore from your latest checkpoint plus the currently running journal, you can recover the entire database up to the last recorded transaction.
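For example, a restore along those lines might look like this (checkpoint.8 here stands in for whatever your latest checkpoint is):

p4d -r $P4ROOT -jr checkpoint.8 journal

This replays the checkpoint first and then the current journal on top of it, leaving the database at the last recorded transaction.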
The old journal backups that are created as part of the checkpoint process provide a record of everything that happened in between checkpoints. You don't need these to recover the latest state, but they can be useful in extraordinary circumstances (e.g. you discover that important data was permanently deleted by a rogue admin a month ago and you need to recover a copy of the database to the exact moment in time before that happened).
The database (and hence the checkpoint/journal) does not include depot file content! Make sure that your depots are located on reasonably durable storage (e.g. a mirrored RAID) and/or have regular backups (ideally coordinated with your database checkpoints so that you can restore a consistent snapshot in the event of a catastrophe).
https://www.perforce.com/manuals/v15.1/p4sag/chapter.backup.html
As I understand it, doing a localDB.replicate.from(remoteDB) will cause PouchDB to pull all document revisions into the localDB.
Is there some way to make replicate.from only replicate/get the most recent version of every document, and only keep one revision of each document? I only ever do a replicate.from and never a replicate.to. I have some large documents in the remoteDB which get updated every week and want to minimise the amount of space that the localDB takes up.
I have set the localDB to auto_compact but I think this only affects documents that are created locally and not documents replicated from the remote DB?
Is there a way I can clear down the data in my local PouchDB without the changes being replicated to the online CouchDB?
I am currently using the db.sync function with live: true.
The context for this is that I have lots of users entering orders in an offline-first environment, and I would like to clear down the data every few days to keep the application quick but not lose the orders from CouchDB.
Unfortunately not. There is a long-running open issue for purge (https://github.com/pouchdb/pouchdb/issues/802) which would do what you want, but it has not been implemented yet.
What is your use case? Are you doing a two-way sync and seeing remote updates locally, or are you only doing push replication to send the orders? One way to work around this is to periodically create a fresh local database that contains only the orders you care about, as in the sketch below.
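A minimal sketch of that workaround with the PouchDB API (the database names, the syncHandler variable, and the isStillNeeded predicate are placeholders, not part of your app):

// assuming syncHandler is the handle returned by your original db.sync(...) call
syncHandler.cancel();

async function rotateLocalDB(remoteDB) {
  const oldDB = new PouchDB('orders');
  const newDB = new PouchDB('orders-fresh');

  // Copy only the documents you still need, keeping their existing _rev values
  // so they are not treated as new edits when syncing resumes
  const result = await oldDB.allDocs({ include_docs: true });
  const keep = result.rows.map(row => row.doc).filter(doc => isStillNeeded(doc));
  await newDB.bulkDocs(keep, { new_edits: false });

  // Drop the old database entirely; no per-document deletions reach the server
  await oldDB.destroy();

  // Resume syncing against the fresh database
  return newDB.sync(remoteDB, { live: true, retry: true });
}

Because the old database is destroyed rather than having its documents deleted, no deletions are replicated to CouchDB, and the local revision history for the old orders is discarded, which is what keeps the local database small.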
I have a MongoDB instance running on a Windows Azure Linux VM.
The database is located on the system drive, and I wish to move it to another hard drive since there is not enough space there.
I found this post:
Changing MongoDB data store directory
There seems to be a fine solution suggested there, yet another person mentioned something about copying the files.
My database is live and receiving data all the time; how can I do this while losing as little data as possible?
Thanks,
First, if this is a production system, you really need to be running this as a replica set. Running production databases on singleton MongoDB instances is not a best practice. I would consider 2 full members plus 1 arbiter the minimum production setup.
If you want to go the replica set route, you can first convert this instance to a replica set:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
This should involve minimal downtime.
Then add 2 new instances with the correct storage setup. After they sync, you will have a full 3-member set. You can then fail over to one of the new instances and remove the badly provisioned instance from the replica set. Finally, I'd add an arbiter instance to get you back up to 3 members of the replica set while keeping costs down.
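A rough outline of those steps in the mongo shell, with placeholder hostnames (newhost1, newhost2, arbiterhost, and oldhost are assumptions):

// On the existing instance, after restarting mongod with a --replSet name:
rs.initiate()

// Once the two new instances (on the correctly sized volumes) are up:
rs.add("newhost1:27017")
rs.add("newhost2:27017")

// After their initial sync completes, step the old primary down so a new
// member takes over, then remove it and add the arbiter:
rs.stepDown()
rs.remove("oldhost:27017")
rs.addArb("arbiterhost:27017")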
If, on the other hand, you do not want to run as a replica set, I'd shut down mongod on this instance, copy the files over to the new directory structure on another appropriate volume, change the config to point to it (either by changing dbpath or by using a symlink), and then start it up again. Downtime will be largely a factor of the size of your existing database, so the sooner you do this the better.
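If you go that route, the steps might look roughly like this (the paths and service name below are typical defaults, not necessarily yours):

# Stop mongod so the data files are quiescent
sudo service mongod stop

# Copy the data files to the new volume (e.g. an attached data disk)
sudo rsync -a /var/lib/mongodb/ /datadrive/mongodb/

# Point mongod at the new location: edit dbpath in /etc/mongod.conf,
# or keep the config unchanged and symlink the old path to the new one:
#   sudo mv /var/lib/mongodb /var/lib/mongodb.old
#   sudo ln -s /datadrive/mongodb /var/lib/mongodb

# Start mongod again and verify it is serving from the new directory
# before deleting the old copy
sudo service mongod start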
However - I will stress this again - if you are looking for little to no down time with mongoDB you need to use a replica set.
I'm having a problem with the replication of my CouchDB databases.
I have a remote database which gathers measurement data and replicates it to a central server.
On the server, I add some extra parameters to these documents.
Sometimes, a measurement goes wrong and I just want to delete it.
I do that on the central server and want to replicate it to the remote database.
Since I updated the document on the central server, there is a new revision which isn't synced to the remote.
If I want to delete that measurement, CouchDB deletes the latest revision.
Replicating this to the remote doesn't delete the document on the remote
(probably because it doesn't sync the latest revision first; it just wants to delete the latest revision, which isn't on the remote yet).
Replicating the database to the remote before I delete the document fixes this issue.
But sometimes the remote host is unreachable. I want to be able to delete the document on the central database and make sure that, once the remote comes back online, it also deletes the document. Is there a way to do this with default CouchDB commands?
You could configure continuous replication so that your remote listens for changes on the central database. If it goes offline and comes back online, restart continuous replication.
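A minimal sketch of starting that with curl on the remote once it is back online (the hostnames and database names are placeholders):

curl -X POST http://localhost:5984/_replicate \
     -H 'Content-Type: application/json' \
     -d '{"source": "http://central.example.com:5984/measurements",
          "target": "measurements",
          "continuous": true}'

A replication started via _replicate does not survive a restart of the remote CouchDB; if you want it to persist, put the same source/target/continuous fields in a document in the remote's _replicator database instead.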