I have a remote CouchDB named 'mydb', and a local PouchDB on the client side that syncs with it. The client can go offline and come back online. While the client was offline, I DELETED the remote 'mydb', re-created a database with the same name, and added some random new files to the new db.
When the client comes back online, is it going to sync the old files back and overwrite the new ones with the same names?
If you need bi-directional replication you might do:
// use "sync"
localDB.sync(remoteDB)
// another option is to use "replicate" with both "to" and "from"
localDB.replicate.to(remoteDB)
localDB.replicate.from(remoteDB)
If you need uni-directional replication, you might do:
// use "replicate" with only "to"
localDB.replicate.to(remoteDB)
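If you want the replication to run continuously and survive connection drops, you might use the live and retry options. A minimal sketch, assuming the same localDB and remoteDB handles as above:

// keep syncing until cancelled; retry automatically after network errors
const sync = localDB.sync(remoteDB, { live: true, retry: true })
  .on('change', (info) => console.log('replicated batch:', info.direction))
  .on('paused', () => console.log('caught up (or offline)'))
  .on('error', (err) => console.error('sync failed:', err));

// later, e.g. on logout:
// sync.cancel();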
Take a look at this.
My app uses CoreData+CloudKit with a public database.
By default, iCloud sync with a public database is only done at launch and then every half hour.
Since my app requires faster sync, all users are logged in to iCloud and subscribed to iCloud changes. Thus, an iCloud modification by one user sends a notification to all other users. This works.
The problem:
The notification should now trigger an update of the local persistent store, i.e. an iCloud insert or update should download the respective iCloud record, and insert it or update it in the persistent store (deletes don’t happen).
Possible solutions:
I could download the record from iCloud manually and then insert or update it in the managed context. However, an insert will then be treated as a new record, and uploaded later as a duplicate to iCloud. There will be a duplicate for every user who received the notification. While such dupes could be handled (there are only a few users), this is not so elegant.
Much better is simply to trigger a re-mirroring, as it is anyway done during launch and every half hour. But I did not find any reasonable way to do this. I found one suggestion to toggle iCloud sync off and on (which should trigger a sync), but this gives me a client error (re-registration of a mirroring agent). I found another suggestion to swap the persistent stores (one with iCloud mirroring and one without) but this seems to me a terrible hack for my problem.
My question:
What is a reasonable way to update the local store with the iCloud changes?
At the moment, the CloudKit public database is only replicated at startup or after about 20 minutes.
I was going through this question: socket.io determine if a user is online or offline.
In the answer, I saw that an object containing the online users is created. In a production app, should you store this data in a database like Redis, or is it okay if it stays in memory on the server?
I would not store the users in the server's memory. Imagine this case:
for some reason you need to restart the server (a crash, a version update, a new release), the server's memory gets reset, and you lose the users object.
So Redis looks like a great option for storing this user data.
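As a minimal sketch of that idea, assuming socket.io with the ioredis client and a userId sent in the connection handshake (both of these are assumptions, not from the original answer):

const { Server } = require('socket.io');
const Redis = require('ioredis');

const io = new Server(3000);
const redis = new Redis(); // defaults to localhost:6379

io.on('connection', (socket) => {
  // assumed: the client passes its userId via the auth payload
  const userId = socket.handshake.auth.userId;
  redis.sadd('online_users', userId); // the set survives a server restart

  socket.on('disconnect', () => {
    redis.srem('online_users', userId);
  });
});

// anywhere else in the app:
// const online = await redis.smembers('online_users');

Because the set lives in Redis rather than in the Node process, a crash or redeploy no longer wipes the list of online users.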
I'm using Azure Mobile Services and I want to clear the local database. How can I do that?
I have a problem with my local database. When I log out of the app and log in with another user, the data of the previous user is loaded for the current user, and I have no idea why this occurs. I debugged the server side and the server returns the correct data, so I believe the problem is the local database.
I'm using Azure Mobile Services and I want to clear the local database. How can I do that?
For deleting your SQLite file, you could follow Deleting the backing store. You could also leverage the capability provided by IMobileServiceSyncTable to purge records from your offline cache; for details, see Purging Records from the Offline Cache.
When I log out of the app and log in with another user, the data of the previous user is loaded for the current user, and I have no idea why this occurs. I debugged the server side and the server returns the correct data
Since you did not provide details about your implementation (e.g. user log in/log out, user data management, etc.), I would recommend you check whether both your server and client side enable a per-user data store. You could use Fiddler to capture the network traces when the other user logs in, make sure that the correct user identifier (e.g. UserId) is returned, then check the query against your local database. Moreover, I would recommend you follow Adrian Hall's book about Data Projection and Queries.
You can delete all of the local DB files by doing the following.
// find every SQLite database file under the client's default database path
var dbFiles = Directory.GetFiles(MobileServiceClient.DefaultDatabasePath, "*.db");
foreach (var db in dbFiles)
{
    // delete the file; a fresh database is created on the next sync
    File.Delete(db);
}
However, this deletes all data each time and causes performance issues: every time you do this, you'll pull a fresh copy of the data from your Azure DB rather than using the cached copy in the device's SQLite DB.
We typically use this only for debugging; the reason it's in a foreach is to catch every database created (see my last suggestion below).
There are a few other things you could try to get around your core issue of data cross-over.
There's another reason you might be seeing this behaviour: with your PullAsync, are you passing it a query ID? Your PullAsync call should look similar to this.
public async Task GetAllFoo(string userId)
{
    // the query ID ("allFoo" + userId) is unique per user, so incremental
    // sync keeps a separate checkpoint for each user's data
    await fooTable.PullAsync("allFoo" + userId, fooTable.Where(f => f.userId == userId));
}
Note that the query ID will be unique each time (or at least, for each user). This is used primarily by the offline sync portion of Azure, but in combination with the Where statement (be sure to import System.Linq), this should ensure only the correct data is brought back.
You can find more information about this here.
Also, something you may want to consider: store a separate database for each userId. We're doing this for our app (with a company ID) so that each database is separate. If you do this and use the correct database on login, there's no chance of any data cross-over.
I'm trying to write a simple node.js program to sync a few address books from a CardDAV server to a local MySQL database. I'm using the node dav client.
I know CardDAV supports syncing only the changes since the last sync via a sync-token, and I see some references to sync tokens when I browse through the source and readme of the dav client. But I'm very new to DAV, so I'm not 100% sure how to put it all together.
I'm guessing I need to store the sync token (and level?) the server sends back after I run a sync and then include that in my next sync request. Am I on the right track?
Building a CardDAV client is a great resource which describes how all that works, including WebDAV Sync, which is what you are looking for.
Note that a server is not required to provide WebDAV sync (and quite a few don't).
Also note that even if they support WebDAV sync, they can expire the tokens however/whenever they want (e.g. some only store a single token, or only for a limited time).
In short: do not rely on WebDAV sync alone. If it is not available, or the token has expired, you need to fall back to a full, regular sync (comparing hrefs and ETags).
I'm guessing I need to store the sync token (and level?) the server sends back after I run a sync and then include that in my next sync request. Am I on the right track?
Yes, you are on the right track. Sync-tokens are usually per collection (Depth: 1; I think they can be Depth: infinity, but I'm not sure).
So you need to store it alongside the URL of the collection you are syncing.
Then in the next sync-request, you embed it into the sync-report. If the token is still valid, you get back the new/deleted/changed records. If the token was invalidated, you need to perform a full sync.
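To make that concrete, here is a minimal sketch of a sync-collection REPORT (RFC 6578) in plain Node.js (18+, built-in fetch), independent of the dav client; collectionUrl, the stored syncToken, and the fullSync fallback are all assumptions for illustration:

// send a sync-collection REPORT; pass '' as syncToken for the initial sync
async function syncCollection(collectionUrl, syncToken = '') {
  const body = `<?xml version="1.0" encoding="utf-8"?>
<d:sync-collection xmlns:d="DAV:">
  <d:sync-token>${syncToken}</d:sync-token>
  <d:sync-level>1</d:sync-level>
  <d:prop><d:getetag/></d:prop>
</d:sync-collection>`;
  const res = await fetch(collectionUrl, {
    method: 'REPORT',
    headers: { 'Content-Type': 'application/xml; charset=utf-8', 'Depth': '0' },
    body,
  });
  if (!res.ok) {
    // e.g. 403 with DAV:valid-sync-token: the token expired, do a full sync
    return fullSync(collectionUrl); // hypothetical fallback, not shown here
  }
  // the multistatus body lists changed/deleted hrefs plus a new sync-token;
  // parse it, apply the changes, and store the new token for the next run
  return res.text();
}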
Hope that helps :-)
The best approach to storing multiple users' data is one database per user. I am using this same approach.
I have CouchDB on the server and PouchDB for the mobile application. I maintain each user's data by creating a separate database for that user in PouchDB and CouchDB. That means I have multiple databases in CouchDB and one database in PouchDB.
Usually, in a SQL database, user data is stored in different tables.
So in NoSQL PouchDB, I am creating a document for each table.
The actual problem I am facing is:
I have one document in each database that stores the user's transactions.
Client transactions are stored in PouchDB while the user is offline; when the application comes back online, they sync to the transaction document in the user's CouchDB database.
The data stored in the transaction document is as follows:
{
  "_id": "transaction",
  "_rev": "1-3e5e140d50bf6a4d873f0c0f3e3deb8c",
  "data": [
    {
      "transaction_id": "tran_1",
      "transaction_name": "approve item",
      "status": "Pending",
      "ResultMsg": ""
    },
    {
      "transaction_id": "tran_2",
      "transaction_name": "approve item",
      "status": "Pending",
      "ResultMsg": ""
    }
  ]
}
All these transactions are performed on the server side, and the results are updated in this document. Whenever any new transaction is performed, I store it in the transaction document's data attribute.
Now I have 1 transaction in both PouchDB and CouchDB, meaning both are in sync.
Now, while the mobile application is offline, it performs an offline transaction that is stored in the PouchDB transaction doc.
Meanwhile, on the server side, that first transaction is updated to success.
Now, when the application comes back online and the sync runs, I lose my server-side changes, and the final data in the transaction doc is the same as the client's PouchDB copy.
Here I am losing server-side data. What is a good approach, or how can I solve this?
What's happening is that you have conflicts on the same document, because it is modified one way by the server and another way by the client. One conflicting version wins arbitrarily, and the other loses.
You can either resolve the conflicts or (the more reasonable solution in your case) store multiple documents per user instead of one big document.
Just because you have one database per user doesn't mean you need to have one document per user. :) E.g. your docs could be:
{_id: "Tran_1", status: "Pending"}
{_id: "Tran_2", status: "Pending"}
// etc.
These documents would be created once on the client and updated once on the server. No possibility of conflicts. Piece of cake!
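For illustration, a minimal sketch of that flow with PouchDB; the localDB/remoteDB handles and the markSuccess helper are assumptions, not part of the original answer:

// client side: create one small doc per transaction while offline
await localDB.put({
  _id: 'tran_1',
  transaction_name: 'approve item',
  status: 'Pending',
  ResultMsg: ''
});

// server side: update only the status of that one doc
async function markSuccess(remoteDB, id) { // hypothetical helper
  const doc = await remoteDB.get(id);     // fetch the current revision
  doc.status = 'Success';
  doc.ResultMsg = 'done';
  return remoteDB.put(doc);               // writes a new revision of this doc only
}

Since the client only ever creates a transaction doc and the server only ever updates it afterwards, the two sides never produce competing revisions of the same document.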