Azure Mobile Services and Code First Migrations update

I have created an Azure Mobile Service project. From the beginning of the project I created my entities and enabled Code First Migrations. During the development process I never had any problem creating new entities, modifying existing ones and updating the database through data migrations. All sweet and nice.
I published my solution to Azure Mobile Services. My database schema was created automatically and everything was playing nice.
After a few days I needed to update a field in a table, so I updated the entity locally and ran the service locally. My local database was updated with the new field. I uploaded the service to Azure, expecting the online database to be updated as well, but I got this error:
The model backing the 'xxxxx' context has changed since the database was created. Consider using Code First Migrations to update the database.
That is strange, since Code First Migrations are already enabled; my database was initially created using them. After many days of trying almost everything, I deleted the schema of the online database. I ran the service again and it recreated the schema, including my latest change. So it seems Azure Mobile Services has no problem creating the schema from scratch, but cannot figure out how to apply schema updates.

I do not recommend this as an answer (so please don't accept it as such), but I ended up so frustrated with the code-first migrations (which, like you, I just could not get to work) that I did this as a work-around while I await enlightenment.
1) Update the data model
For me this was simply adding this line to my Item class:
public bool IsPublic { get; set; }
2) Manually update the SQL server
You'll find the connection details in the publish profile you can download from the mobile service's dashboard in the Azure Portal. My T-SQL command was simply
ALTER TABLE vcollectapi.Items
ADD IsPublic BIT NOT NULL DEFAULT(0)
3) Stop the service checking whether the model backing the context has changed since the last successful migration
There are several answers on how to do this; I followed this one and added the following static constructor to my data context class VCollectAPIContext:
static VCollectAPIContext()
{
    // Passing null removes the database initializer entirely, so EF
    // no longer checks whether the model matches the database schema.
    Database.SetInitializer<VCollectAPIContext>(null);
}
Now my service is back up-and-running and my data remained intact.
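For reference, the conventional fix (rather than this workaround) is to point EF at your migrations with a MigrateDatabaseToLatestVersion initializer, so pending migrations are applied automatically on startup. A minimal sketch, assuming a standard Migrations.Configuration class generated by Enable-Migrations; this would go in place of the null initializer above:

using System.Data.Entity;

static VCollectAPIContext()
{
    // Apply any pending Code First migrations the first time the context is used.
    Database.SetInitializer(
        new MigrateDatabaseToLatestVersion<VCollectAPIContext, Migrations.Configuration>());
}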

Related

Getting EF to automatically create 2 databases when running in Azure App Service

I recently deployed a web app in Azure App Service. This web app uses 2 separate databases (2 connection strings) to function fully. Upon deploying and browsing the web app, the first database was created automatically, but the 2nd one was not. I expected both to be created automatically, but I can't make it happen for the 2nd database.
I checked the connection strings and they are correct. What do I need to do so the 2nd database gets created as well?
EDIT:
Code was requested, but I am not really sure what relevant code I need to post. This is the only thing I can think of that might be involved in the database creation.
in ConfigureServices:
services.AddDbContext<FirstDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("FirstConnection")));

services.AddDbContext<SecondDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("SecondResourceConnection")));
Not sure if this is the correct code I need to post relating to this.
EDIT 2:
One thing to note is that this worked locally: both DBs were created.
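One possible cause, offered here only as an assumption: EF creates a database per context, and only for contexts that are actually used or migrated at startup, so if only FirstDbContext is touched, only its database appears. A minimal sketch that triggers creation for both contexts explicitly, assuming EF Core and the registrations above:

// In Startup.Configure (requires Microsoft.Extensions.DependencyInjection
// and Microsoft.EntityFrameworkCore). Each context is handled separately;
// creating FirstDbContext's database does nothing for SecondDbContext.
using (var scope = app.ApplicationServices.CreateScope())
{
    scope.ServiceProvider.GetRequiredService<FirstDbContext>().Database.Migrate();
    scope.ServiceProvider.GetRequiredService<SecondDbContext>().Database.Migrate();

    // Or, if not using migrations: .Database.EnsureCreated();
}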

Cosmos DB apply changes to existing data

I'm using Cosmos DB in an attempt to keep a Web App as cheap / free as possible. I'm not very familiar with it.
I've added a bunch of data: approx. 200 rows in a table called Members. I then added more fields, in particular this field:
public bool ArchiveMember { get; set; }
Any new Members I add have the ArchiveMember field, but existing data doesn't pick up the new field (set to false) as I expected.
Is there a way of applying migrations to all data?
Thank you rickvdbosch. Posting your comment as an answer to help other community members with this similar issue.
"You should update the data yourself using a script or tool. It might be simpler to have the entity have a default value for the ArchiveMember property instead of updating all data. You could also take a look at Table Storage which is a feature of Storage Accounts. The API is also supported by Cosmos DB, enabling you to start with a storage account and migrate over if requirements or performance change."

Syncing Problems with Xamarin Forms and Azure Easy Tables

I've been working on a Xamarin.Forms application in Visual Studio using Azure for the backend for a while now, and I've come across a really strange issue.
Please note that I am following the methods mentioned in this blog.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution. What I mean is that if I create another solution that accesses the exact same backend, it can perform its own create/sync operations, but will not bring over the data generated by the other solution, even though both appear to have exactly the same access. This looks like some kind of security feature/issue, but I can't quite make sense of it.
Has anyone else encountered this at all? Was there a work-around at all? This could potentially cause problems down the road if I were to ever want to create another solution that accesses the same system/data for whatever reason.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution.
According to the tutorial you provided, I found that the related PullAsync is using Incremental Sync.
await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
Incremental Sync:
the first parameter to the pull operation is a query name that is used only on the client. If you use a non-null query name, the Azure Mobile SDK performs an incremental sync. Each time a pull operation returns a set of results, the latest updatedAt timestamp from that result set is stored in the SDK local system tables. Subsequent pull operations retrieve only records after that timestamp.
Here is my test, you could refer to it for a better understanding of Incremental Sync:
Client: await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());
The client SDK checks whether the __config table of your SQLite local store contains a record whose id equals deltaToken|{table-name}|{query-id}.
If there is no such record, the SDK sends a request like the following to pull your records:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Note: the $filter would be set as (updatedAt ge datetimeoffset'1970-01-01T00:00:00.0000000+00:00')
If there is such a record, the SDK uses its value as the latest updatedAt timestamp and sends a request like this:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'2017-06-26T02%3A44%3A25.3940000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Per my understanding, if you run the same logical query with the same (non-null) query ID in different mobile clients, you need to make sure the local database is newly created by each client. Also, if you want to opt out of incremental sync, pass null as the query ID, as sketched below. In this case, all records are retrieved on every call to PullAsync, which is potentially inefficient. For more details, you could refer to How offline synchronization works.
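A minimal sketch of opting out, based on the behaviour described above:

// Passing null as the query ID disables incremental sync,
// so every pull retrieves the full (filtered) record set.
await todoTable.PullAsync(null, todoTable.CreateQuery());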
Additionally, you could use Fiddler to capture the network traffic when you invoke PullAsync, in order to troubleshoot your issue.

Fetching Initial Data from CloudKit

Here is a common scenario: an app is installed for the first time and needs some initial data. You could bundle it in the app and have it load from a plist or a CSV file, or you could go get it from a remote store.
I want to get it from CloudKit. Yes, I know that CloudKit is not to be treated as a remote database but rather a hub. I am fine with that. Frankly I think this use case is one of the only holes in that strategy.
Imagine I have an object graph I need to fetch, with one class at the base and then 3 or 4 related classes. I want the new user to install the app and then get the latest version of this graph. If I use CloudKit, I have to load each entity with a separate fetch and assemble the whole. It's ugly and not generic. Once I do that, I will go into change-tracking mode, listening for updates and syncing my local copy.
In some ways this is similar to the challenge that you have using Services on Android: suppose I have a service for the weather forecast. When I subscribe to it, I will not get the weather until tomorrow when it creates its next new forecast. To handle the deficiency of this, the Android Services SDK allows me to make 'sticky' services where I can get the last message that service produced upon subscribing.
I am thinking of doing something similar in a generic way: making it possible to hold a snapshot of some object graph, probably in JSON, with a version token, and then for initial loads, just being able to fetch those and turn them into Core Data object graphs locally.
The question is: does this strategy make sense, or should I hold my nose and write pyramid-of-doom code with nested queries? (Don't suggest using Core Data syncing, as that has been deprecated.)
Your question is a bit old, so you probably already moved on from this, but I figured I'd suggest an option.
You could create a record type called Data in the Public database in your CloudKit container. Within Data, you could have a field named structure that is a String (or a CKAsset if you wanted to attach a JSON file).
Then on every app load, you query the public database and pull down the structure string that has your classes definitions and use it how you like. Since it's in the public database, all your users would have access to it. Good luck!
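A rough sketch of that initial fetch using the Xamarin.iOS CloudKit bindings (C#, to match the rest of this page). The record type Data and the structure field come from the answer above; everything else is assumed:

using CloudKit;
using Foundation;

// Query every record of type "Data" in the container's public database.
var publicDb = CKContainer.DefaultContainer.PublicCloudDatabase;
var query = new CKQuery("Data", NSPredicate.FromValue(true));
CKRecord[] records = await publicDb.PerformQueryAsync(query, null); // null = default zone

// Read the JSON snapshot out of the "structure" string field.
var structure = records[0]["structure"] as NSString;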

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1) You're building an accommodation booking service, such as a hotel search or Airbnb
2) You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (the idea being to not break their flow by making them create an account when it isn't strictly necessary)
3) If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, thereby pushing all existing data up to the hosted CouchDB database, or...
Do we instead need to create a new PouchDB database, copy over all docs from the existing (non-replicated) one to the new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
