I'm having a problem with the replication of my CouchDB databases.
I have a remote database which gathers measurement data and replicates it to a central server.
On the server, I add some extra parameters to these documents.
Sometimes, a measurement goes wrong and I just want to delete it.
I do that on the central server and want to replicate it to the remote database.
Since I updated the document on the central server, there is a new revision which isn't synced to the remote.
If I then delete that measurement, CouchDB deletes the latest revision.
Replicating this to the remote doesn't delete the documents on the remote.
(Probably because it doesn't sync the updated revision first; it just tries to delete the latest revision, which isn't on the remote yet.)
Replicating the database to the remote before I delete the document fixes this issue.
But sometimes the remote host is unreachable. I want to be able to delete the document on the central database and be sure that, once the remote comes back online, it also deletes the document there. Is there a way to do this with default CouchDB commands?
You could configure continuous replication so that your remote listens for changes on the central database. If it goes offline and comes back online, restart continuous replication.
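One way to set that up (a minimal sketch; the host names, database names and the missing credentials are assumptions) is to create a document in the remote server's _replicator database so the remote keeps pulling from the central server, for example from Node:

    // Minimal sketch: define a persistent pull replication on the *remote* CouchDB
    // so it keeps pulling (and retrying) from the central server.
    // Host names, database names and the missing credentials are assumptions.
    const response = await fetch('http://remote-host:5984/_replicator/pull-from-central', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        source: 'http://central-host:5984/measurements', // central database
        target: 'measurements',                          // local database on the remote
        continuous: true                                  // keep listening for changes
      })
    });
    console.log(await response.json());

Because the replication is stored as a document, CouchDB restarts it automatically when the remote server comes back up, and the deletion is then pulled across.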
I have a master CouchDB, which is replicated to a local DB every time the local application starts.
The user can modify the local docs, but I want these docs to be deleted when the replication starts if they have disappeared from the master db.
How can I achieve that?
This is already how replication works. When a document is modified (including deletion), that change gets replicated.
The only possible problem you may encounter is that if a local change is made at the same time as a deletion occurs, then upon sync there will be a conflict.
So you need your local app to do some sort of conflict resolution that selects the deleted revision. I suggest reading about the CouchDB Replication and Conflict Model as a starting place.
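For example, if the local side happens to be PouchDB (an assumption here; the same steps work over CouchDB's HTTP API), resolving a conflict in favour of the deletion could look roughly like this:

    // Rough sketch: make the deletion from the master win a conflict locally.
    // Assumes the local side is PouchDB and docId is a document that was
    // deleted on the master but edited locally.
    async function resolveInFavourOfDeletion(localDB, docId) {
      // Fetch the winning revision plus any live conflicting revisions.
      const doc = await localDB.get(docId, { conflicts: true });
      // Delete the winning local revision...
      await localDB.remove(doc._id, doc._rev);
      // ...and any other live conflicting revisions, so that only the deleted
      // revision from the master remains and the document counts as deleted.
      for (const rev of doc._conflicts || []) {
        await localDB.remove(docId, rev);
      }
    }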
As I understand it, doing a localDB.replicate.from(remoteDB) will cause PouchDB to get all document revisions into the localDB.
Is there some way to make replicate.from only replicate/get the most recent version of every document, and only keep one revision of each document? I only ever do a replicate.from and never a replicate.to. I have some large documents in the remoteDB which get updated every week and want to minimise the amount of space that the localDB takes up.
I have set the localDB to auto_compact, but I think this only affects documents that are created locally and not documents replicated from the remote DB?
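For reference, the setup described above would look roughly like this (the database names and remote URL are made up):

    // Sketch of the setup described above; names and the remote URL are made up.
    // auto_compaction discards old revision bodies after every write.
    const localDB = new PouchDB('local-copy', { auto_compaction: true });
    const remoteDB = new PouchDB('https://example.com:5984/big-documents');

    // One-way pull only; replicate.to is never called.
    localDB.replicate.from(remoteDB).on('complete', info => {
      console.log('pulled ' + info.docs_written + ' documents');
    });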
Is there a way I can clear down the data in my local PouchDB without the changes being replicated to the online CouchDB?
I am currently using the db.sync function with live: true
The context for this is that I have lots of users entering orders in an offline-first environment, and I would like to clear down the data every few days to keep the application quick, but not lose the orders from CouchDB.
Unfortunately not; there is a long-running open issue for purge (https://github.com/pouchdb/pouchdb/issues/802) which would do what you want, but it has not been implemented yet.
What is your use case: are you doing a two-way sync and seeing remote updates locally, or are you only doing push replication to send the orders? One way to work around this is to periodically create a fresh database locally that only contains the orders you care about.
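If it is push-only, a rough sketch of that workaround (the database names and remote URL are made up, and any live sync on the local database would need to be cancelled first) might be:

    // Rough sketch of the "fresh local database" workaround.
    // Names and the remote URL are made up; cancel any live sync on localDB first.
    async function clearDownLocalOrders() {
      const remoteDB = new PouchDB('https://example.com:5984/orders');
      let localDB = new PouchDB('orders');

      await localDB.replicate.to(remoteDB); // make sure every order has been pushed
      await localDB.destroy();              // drop local data; destroy() is never replicated
      localDB = new PouchDB('orders');      // start over with an empty local database
      return localDB;
    }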
I need to copy a depot from one Perforce server to another. The file revision history needs to be intact but the user information and workspace information can not be copied to the new server.
I've tried a standard checkpoint creation and restore procedure, but if there exist users or workspaces with the same name on both servers, the source server will overwrite this info on the destination server. This is pretty bad if those user accounts and workspaces do not have exactly identical details.
The goal of this sort of operation is to allow two separate, disconnected groups to view a versioned source tree with revision history. Updates would be single directional with one group developing and one just viewing. Each group's network is completely enclosed, no outside connections of any kind.
Any ideas would be appreciated; I've been busting my brains on this one for a while.
EDIT:
Ultimately my solution was to install an intermediate Perforce server on the same machine as my source server. Using that, I could do a standard backup/restore from the source server to the intermediate server, and then delete all unwanted metadata in the intermediate server before backing up from the intermediate server to the final destination server. Pretty complicated, but it got the job done, and it can all be done programmatically in Windows PowerShell.
There are a few ways, but I think you are going about this one the hard way.
Continue to do what you are doing, but delete the db.user, db.view (I think) and db.group. Then when you start the Perforce server, it will create these, but they will be empty, which will make it hard for anyone to log in. So you'll have to create users/groups. I'm not sure if you can take those db files from another server and copy them in; I've never tried that.
The MUCH easier way: make a replica. http://www.perforce.com/perforce/r10.2/manuals/p4sag/10_replication.html Make sure you look at the p4d -M flag to ensure it's a read-only replica. I assume you have a USB drive or something to move between networks, so you can just issue a p4 pull onto the USB drive, then move the drive, and either run the server off the USB drive or issue another p4 pull, pulling to a final server. I've never tried this, but with some work it should be possible; you'll have to run a server off the USB drive to issue the final p4 pull.
You could take a look at Perforce Git Fusion and make some Git clones.
You could also look at remote depots. Basically you create a new depot on your destination server and point it at a depot on your source server. This works if you have a fast connection between the two servers. Protections are handled by the destination server, which controls who has access to that new depot. The source server can be set up to share it out as read-only to the destination server. Here is some info:
http://answers.perforce.com/articles/KB_Article/Creating-A-Remote-Depot
Just make sure you test it during a slow period, as it can slow down the destination server. I tried it between two remote locations, both on the US East Coast, and it was acceptable but not too useful. If both servers are in the same building it would be fine.
I have built my Liferay website in the development environment and it is now ready to be published. I have also installed two Liferay nodes on two different servers where I want to put my website. Server1 is active and server2 is the backup.
The problem is that when I started development, I didn't know I would one day need this two-server structure, so I stored all the documents and images on the file system and not in a database. So basically with this setup, when I make changes on server1, I have to transfer the document library manually to server2, just like I would do for the themes.
I tried to change the document library location from the file system to the database in portal-ext.properties, but that didn't help.
So, my questions:
Is there a way to transfer these files to a database now, where they can be shared by both servers? And if not,
Is it possible to somehow transfer the document library from server1 to server2 automatically through some script?
Thanks,
Adia
If server2 is a cold-standby backup server, and assuming you have a consistent backup of server1's Liferay data directory and of the database from the same moment in time, you can just restore the backup of the Liferay data directory to server2, restore the DB to the point in time corresponding to the data directory backup, and start server2.
In hot-standby scenarios and clustered environments things get a little more complicated, as you would need a common place to store documents, images, search indexes, etc. The easiest way is to store everything in the database or on a common file system so that multiple nodes are always working on the same data.
If you want to get your current set of documents that is stored on disk into the database, the easiest way is to use the Server > Server Administration > Data Migration tab in the Control Panel. It has an option to migrate documents from the existing repository (i.e. the disk) to another one, which would be the JCRStore in your case, as that store can be configured to use the database.