I have a situation in which I have an offline database with a large amount of data stored.
I want this data to be synced with my server Realm.
In this case, would I have to copy the data from the offline database and paste it into a new database opened with a SyncConfiguration?
Or can I somehow sync the Realm database that was created with a RealmConfiguration? That way I could avoid data migration.
Currently, there is no way to sync a Realm that was originally created as standalone. You'll need to copy the data manually.
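For illustration, here's a minimal sketch of that manual copy using the Realm JavaScript SDK and the legacy Realm Object Server sync API; the 'Item' schema, file path, and server URL are placeholders, not anything from the original question:

const Realm = require('realm');

// a single example model; your real schema will differ
const ItemSchema = { name: 'Item', properties: { name: 'string' } };

// the standalone Realm that was created with a plain configuration
const localRealm = new Realm({ path: 'local.realm', schema: [ItemSchema] });

// a synced Realm (the SyncConfiguration equivalent); assumes a user
// has already logged in via Realm.Sync.User
const user = Realm.Sync.User.current;
const syncedRealm = new Realm({
  schema: [ItemSchema],
  sync: { user: user, url: 'realm://server.example.com/~/items' }
});

// copy every object across inside a single write transaction
syncedRealm.write(() => {
  localRealm.objects('Item').forEach(item => {
    syncedRealm.create('Item', { name: item.name });
  });
});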
How do I delete a single record from the local store on multiple phones? The initiating phone correctly deletes the record from its local store (SQLite) and Azure (SQL Server).
However, I incorrectly assumed that the other phones would delete the record from their local stores after performing a pull; they don't. Instead, the should-be-deleted record becomes orphaned until its entire table is purged and re-pulled. This seems like overkill for deleting a single record. How do I easily delete local-store records across multiple devices?
Use 'soft-delete' on the server.
In a Node-based server, set table.softDelete = true; in the table definition.
In an ASP.NET-based server, set enableSoftDelete: true in the constructor of the EntityDomainManager.
This adds a Deleted column to the model. When the client pulls, any records that are marked deleted will be pulled down as well, and the client will delete the records from the SQLite store. When a record is deleted on the client, it is marked deleted instead.
On the server, you will need to clean up the marked-deleted records on a regular basis.
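For the Node backend, the table definition with soft delete turned on looks roughly like this (a sketch assuming the standard azure-mobile-apps package and a table named todoitem):

// tables/todoitem.js
var azureMobileApps = require('azure-mobile-apps');

// define the table and enable soft delete, so a delete only sets the
// Deleted flag instead of removing the row from SQL
var table = azureMobileApps.table();
table.softDelete = true;

module.exports = table;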
Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb.
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary).
3. If users wish to opt in, they can later create an account and receive credentials for a "server-side" database to sync with.
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database? Or...
Do we instead need to create a new PouchDB database, copy all docs over from the existing (non-replicated) one to the new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
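A minimal sketch of that pattern, assuming a local database named 'favourites' and a per-user remote URL handed out after signup (both names are placeholders):

// runs in the browser with PouchDB loaded
var db = new PouchDB('favourites');

// store each favourite under a collision-proof ID like 'star_<timestamp>'
db.put({
  _id: 'star_' + Date.now(),
  propertyId: 'hotel-123' // placeholder field for the hearted property
});

// later, once the user opts in and receives credentials, push the
// existing local data up to their per-user CouchDB database
db.replicate.to('https://username:password@example.cloudant.com/userdb-abc')
  .on('complete', function () {
    console.log('existing favourites pushed to the server');
  })
  .on('error', function (err) {
    console.error('replication failed', err);
  });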
I'm trying to sync a database in a local CouchDB installation (v 1.3.1 on a Mac) and a database on Cloudant with master-master replication.
In my local Futon (http://localhost:5984/_utils) I've configured the Replicator to replicate my local database to Cloudant. Everything works fine replicating from the local database to the Cloudant one, but not backwards: if data changes in the Cloudant database, those changes are not replicated to my local database.
Local -> Cloudant = works
Cloudant -> Local = doesn't work
Is this possible? Can anyone help?
Thanks!
Finally I figured out that I only needed to configure two replications from my local CouchDB.
Here are both replications:
{
  "source": "https://username:pwd@username.cloudant.com/cloud_db",
  "target": "http://username:pwd@localhost:5985/local_db"
}
{
  "source": "http://username:pwd@localhost:5985/local_db",
  "target": "https://username:pwd@username.cloudant.com/cloud_db"
}
Now, in http://localhost:5984/_utils/status.html there are two replications running.
Note that I added the username and password to the local connection too. That's because you need to be an authorized user in order to replicate design documents.
Thanks a lot Mike, your answer helped a lot!
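For reference, here's a sketch of creating those same two documents from code instead of Futon; the document IDs, the continuous flag, and the Basic-auth handling are my own additions (fetch refuses credentials embedded in the request URL, so they go in a header):

// PUT one replication document into the local _replicator database
function putReplication(id, doc) {
  return fetch('http://localhost:5984/_replicator/' + id, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Basic ' + btoa('username:pwd') // local admin creds
    },
    body: JSON.stringify(doc)
  });
}

// Cloudant -> local
putReplication('cloudant_to_local', {
  source: 'https://username:pwd@username.cloudant.com/cloud_db',
  target: 'http://username:pwd@localhost:5985/local_db',
  continuous: true // assumption: keep replicating as changes arrive
});

// local -> Cloudant
putReplication('local_to_cloudant', {
  source: 'http://username:pwd@localhost:5985/local_db',
  target: 'https://username:pwd@username.cloudant.com/cloud_db',
  continuous: true
});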
Can you try specifying full URLs for both source and target? I think it would look like:
#create the replicator database for your local server
curl -X PUT 'http://127.0.0.1:5984/_replicator'
then upload this document:
{
  "source": "https://username:pwd@username.cloudant.com/source_db",
  "target": "http://127.0.0.1:5985/target_db"
}
Then you should be able to monitor it at:
http://127.0.0.1:5984/_active_tasks
If that doesn't work for you, can you please copy/paste:
the body of the document in the _replicator database
anything in the log file matching _replication
Also, there's a classic 'gotcha' here, that I think you need 'writer' permissions on both the source and target database. That may seem odd, but it's because the replicator is saving checkpoint documents on both the source and the target so that the next time you ask to replicate, it doesn't have to start from scratch.
I finished setting up the Azure hub and installing the Client Agent and the database.
Then I defined the dataset.
At that point, whichever database I chose, clicking "Get Latest Schema" produced an error.
The error is:
The get schema request is either taking a long time or has failed.
When I checked the log, it said:
Getting schema information for the database failed with the exception "There is already an open DataReader associated with this Command which must be closed first." For more information, provide tracing id 'xxxx' to customer support.
Any ideas?
The current release has a maximum of 500 tables in a sync group, and the drop-down for the table list is restricted to this same limit.
Here's a quick workaround:
1. Script the tables you want to sync.
2. Create a new temporary database and run the script to create the tables you want to sync.
3. Register and add the new temporary database as a member of the sync group.
4. Use the new temporary database to pick the tables you want to sync.
5. Add all the other databases that you want to sync with (on-premise databases and the hub database).
6. Once the provisioning is done, remove the temporary database from the sync group.
I am using CouchDB and I want to post data to localhost. I want to pass data appended to the CouchDB URL, without opening CouchDB, so that the data is saved to the CouchDB database. For example, I want to save a name via a URL like http://127.0.0.1:5984/address/_design/addressbook/index.html?name=lovesrivastava. This URL would be passed through localhost, save the data in the CouchDB database, and return true or false. How can I do this?
That question is very hard to understand, but I think your question might be this:
You have 2 CouchDB databases, one on the localhost, and another on a "server". You want to serve a CouchApp from the localhost, but you want saved documents to be saved on the server. You can't save directly from the browser to the server because that would be "cross domain".
Your idea to "pass through" the local database to the server is not the right approach. You always need to save your document back to the database where you got it.
What you need is "replication":
You save the document to your local CouchDB, and then "replicate" it up to the server.
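As a concrete sketch, a one-off push replication can be triggered against CouchDB's /_replicate endpoint; the database names and server URL below are placeholders:

// push the local 'address' database up to the server copy
fetch('http://127.0.0.1:5984/_replicate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    source: 'address', // local database
    target: 'https://username:pwd@server.example.com/address' // remote copy
  })
})
  .then(function (res) { return res.json(); })
  .then(function (result) {
    // result.ok is true once the replication has finished successfully
    console.log('replication ok:', result.ok);
  });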