I have an Access project where the tables will be put up on Azure and the database then split so that the front end stays an Access form (I know, not very high tech :) ). The problem is that when I used SSMA previously, all the tables were found and everything worked nicely. Since then I have added more tables (to a fresh version of the program, not the one I already converted to Azure; that was just a test), but when I run SSMA it only finds the newest tables. What am I doing wrong? Thanks!
It's been quite some time since I worked in Access, but are the tables you have already ported to Azure now linked tables in this Access database? If they are linked tables (meaning they live in another data source), then I would not expect SSMA to show them.
Also, I'd be very wary of using Access with Azure in this manner. The Access front-end forms do not have retry logic built into them, which means that if you get transient errors while attempting to save, read, etc. on these records, they will not be handled well and you may end up in inconsistent states. One option, if moving away from Access as the front end is not possible, is not to rely on the binding the Access forms give you and instead manually write all the binding code, handling transient errors yourself; however, by the time you've done that you could likely have rewritten the forms as a web or desktop app as well.
Article on Transient faults: http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/
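If the front end does eventually get rewritten as a .NET web or desktop app, the kind of retry logic the article describes boils down to something like this hand-rolled sketch (the connection string, SQL, and retry counts are placeholders, and a real implementation should inspect the SqlException error number so that only genuinely transient faults are retried):

using System;
using System.Data.SqlClient;
using System.Threading;

static class SqlRetry
{
    // Executes a command against SQL Azure, retrying a few times on failure.
    public static void ExecuteWithRetry(string connectionString, string sql, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                    return; // success
                }
            }
            catch (SqlException)
            {
                if (attempt >= maxAttempts)
                    throw; // give up and surface the error
                // Simple linear back-off before the next attempt.
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}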
I have two Azure Websites set up - one that serves the client application with no database, another with a database and WebApi solution that the client gets data from.
I'm about to add a new table to the database and populate it with data using a temporary Seed method that I only plan on running once. I'm not sure what the best way to go about it is though.
Right now I have the database initializer set to MigrateDatabaseToLatestVersion and I've tested this update locally several times. Everything seems good to go but the update / seed method takes about 6 minutes to run. I have some questions about concurrency while migrating:
What happens when someone performs CRUD operations against the database while business logic and tables are being updated in this 6-minute window? I mean - the time between when I hit "publish" from VS, and when the new bits are actually deployed. What if the seed method modifies every entry in another table, and a user adds some data mid-seed that doesn't get hit by this critical update? Should I lock the site while doing it just in case (far from ideal...)?
Any general guidance on this process would be fantastic.
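(For context, the MigrateDatabaseToLatestVersion initializer mentioned above is typically wired up roughly like this; MyAppContext and the Migrations.Configuration class are placeholder names, not the actual project's types.)

using System.Data.Entity;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Apply any pending code-based migrations (and run Seed) on first database access.
        // MyAppContext and Migrations.Configuration are placeholders for the app's own classes.
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<MyAppContext, Migrations.Configuration>());
    }
}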
Operations like creating a new table or adding new columns should have only minimal impact on the performance and be transparent, especially if the application applies the recommended pattern of dealing with transient faults (for instance by leveraging the Enterprise Library).
Mass updates or reindexing could cause contention and affect the application's performance or even cause errors. Depending on the case, transient fault handling could work around that as well.
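For example, with the Enterprise Library's Transient Fault Handling Application Block (version 6 assumed here), data access calls can be wrapped in a retry policy along these lines (a sketch; the retry count, back-off times, and the wrapped action are placeholders):

using System;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

public static class ReliableDataAccess
{
    // Usage (illustrative): ReliableDataAccess.SaveWithRetries(() => SaveOrder(order));
    public static void SaveWithRetries(Action saveAction)
    {
        // Retry up to 5 times with an exponential back-off between 1 and 30 seconds.
        var backoff = new ExponentialBackoff(5,
            TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(2));

        var retryPolicy =
            new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(backoff);

        // Transient SQL Azure faults thrown inside saveAction are retried transparently.
        retryPolicy.ExecuteAction(saveAction);
    }
}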
Concurrent modifications to data that is being upgraded could cause problems that would be more difficult to deal with. These are some possible approaches:
Maintenance window
The simplest and safest approach is to take the application offline, back up the database, upgrade the database, update the application, test, and bring the application back online.
Read-only mode
This approach avoids making the application completely unavailable, by keeping it online but disabling any feature that changes the database. The users can still query and view data while the application is updated.
Staged upgrade
This approach is based on carefully planned sequences of changes to the database structure and data and to the application code so that at any given stage the application version that is online is compatible with the current database structure.
For example, let's suppose we need to introduce a "date of last purchase" field to a customer record. This sequence could be used:
Add the new field to the customer record in the database (without updating the application). Set the new field's default value to NULL.
Update the application so that for each new sale, the date of last purchase field is updated. For old sales the field is left unchanged, and the application at this point does not query or show the new field.
Execute a batch job on the database to update this field for all customers where it is still NULL. A delay could be introduced between updates so that the system is not overloaded.
Update the application to start querying and showing the new information.
There are several variations of this approach, such as the concept of "expansion scripts" and "contraction scripts" described in Zero-Downtime Database Deployment. This could be used along with feature toggles to change the application's behavior dynamically as the upgrade stages are executed.
New columns could be added to records to indicate that they have been converted. The application logic could be adapted to deal with records in the old version and in the new version concurrently.
The Entity Framework may impose some additional limitations on the options, because it generates the SQL statements on behalf of the application, so you would have to take that into consideration when planning the stages.
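To illustrate the first step of the example above with EF code-based migrations, the new field could be added as a nullable column roughly like this (the table, column, and class names are illustrative, not taken from your project):

using System.Data.Entity.Migrations;

// Stage 1: add the nullable "date of last purchase" column without touching existing rows.
public partial class AddLastPurchaseDate : DbMigration
{
    public override void Up()
    {
        // Nullable, so existing customer rows simply get NULL and the old application
        // version keeps working against the new schema.
        AddColumn("dbo.Customers", "LastPurchaseDate", c => c.DateTime(nullable: true));
    }

    public override void Down()
    {
        DropColumn("dbo.Customers", "LastPurchaseDate");
    }
}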
Staging environment
Changing the production database structure and executing mass data changes is risky business, especially when it must be done in a specific sequence while data is being entered and changed by users. Your options to revert mistakes can be severely limited.
It would be necessary to do extensive testing and simulation in a separate staging environment before executing the upgrade procedures on the production environment.
I agree with the maintenance window idea from Fernando. But here is the approach I would take given your question.
Make sure your database is backed up before doing anything (I am assuming it's SQL Azure)
Put up a maintenance page on the Client Application
Run the migration against your database via Visual Studio (I am assuming you are doing this through the console) or a unit test
Publish the website/web api websites
Verify your changes.
The main thing with running the Seed method via Entity Framework is that it's easy to get wrong, and without a proper backup while running against production you could get yourself in trouble very fast. I would probably run it through your test database/environment first (if you have one) to verify that what you want is actually happening.
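If you would rather not depend on the publish/initializer timing, the pending migrations (and the Seed method) can also be applied explicitly from a console app or a unit test, pointed first at the test environment. A sketch, assuming Configuration is the class generated by Enable-Migrations in your project:

using System.Data.Entity.Migrations;

public static class MigrationRunner
{
    public static void Run()
    {
        // Brings the target database up to the latest migration and then runs Seed.
        // Which database gets updated depends on the connection the Configuration
        // (or its TargetDatabase) points at, so aim it at test before production.
        var migrator = new DbMigrator(new Configuration());
        migrator.Update();
    }
}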
What is the best way to check iCloud for existing data?
I need to check that the data doesn't already exist on the local device or in iCloud, so that I can then download it.
Since you included the core-data tag I'm assuming you mean that you're using Core Data rather than iCloud file APIs or the ubiquitous key-value store.
With Core Data's built-in iCloud support, you check for existing data in exactly the same way as if you were not using iCloud. Once you create your Core Data stack, you check what data exists by doing normal Core Data fetches. There's no (exposed) concept of local vs. cloud data; there's just one data store that happens to know how to communicate with iCloud. You don't explicitly initiate downloads; they happen automatically.
At app launch time when you call addPersistentStoreWithType:configuration:URL:options:error:, Core Data internally initiates the download of any data that's not available locally yet. As a result this method may block for a while. If it returns successfully, then all current downloads can be assumed to be complete.
If new changes appear while your app is running, Core Data will download and import them and, when it's finished, will post NSPersistentStoreDidImportUbiquitousContentChangesNotification to tell you what just happened.
This all describes how Core Data's iCloud support is supposed to work. In practice you'll probably find that it doesn't always work as intended.
Thanks to @Tom Harrington for pointing out that this error has nothing to do with the developer/coding; it's purely down to iCloud/Apple/connection issues.
More in this SO answer I found.
I'm pretty new to Azure and trying to work on deploying an already existing MVC 3 website (I'm late to the project).
It has membership information (where the tables should be generated by aspnet_regsql) and it links those tables to application-specific tables. To get it into a working state I need to insert some form of "default data", as the code (unfortunately) makes some assumptions about what should be in the database.
No bother: I have an app that creates a default database and inserts the required data. I can then import that into Azure. Except this doesn't work, because Azure demands clustered indexes, and aspnet_regsql creates some of the auth table keys as nonclustered, so I'm now left having to alter those tables as part of the process to make the primary keys clustered.
I was just wondering if aspnet_regsql has been superseded somehow, given that Azure demands clustered indexes. Am I missing a trick here, or is writing a script to modify the clustering of these indexes the sensible approach?
Found the solution elsewhere here:
http://support.microsoft.com/kb/2006191/de
If you use the Universal Providers, you don't need the scripts.
Check out Hanselman's post. The Universal Providers will manage the database creation if you are working with SQL Server, Compact Edition, or Windows Azure Database.
There are a lot of references to updated scripts, including some on my own blog, that are no longer needed.
I recently submitted an upgrade of my app which included a lightweight Core Data migration (including new fields in existing tables and a couple of new tables). I followed every tip regarding this migration, including some I found on this site.
I thoroughly tested the update on three different devices and it all went ok!!!
However, this update is crashing on all my devices and probably for all my customers. I can't explain why this is happening.
Could you please help me understand this debacle?
To truly test your app and migration, you need to run your original app to create a data store according to the original data model. Then you need to run your new app, opening the data store that was generated with the original app. This can be a real pain, and it is easier (at least initially) to do in the Simulator because you have more control over the file system and can swap in a saved original data store. On an actual device you need to regenerate the original data store for each test.
If you are testing on your own development devices, then you have already migrated your data store. Is it possible that your test devices created their data stores with the new data model, and never actually performed a migration?
I generally only use automatic migration during beta testing, for quick revisions; other than that I always use a mapping model, so that you have control.
The other issue is that if your model shifts far enough between releases, automatic migration from v1 to v2 could be fine, and v2 to v3 could be OK, but v1 to v3 could be too drastic to be inferred. By making mapping models for them, you retain control of the migration.
I am working with Azure Table storage using the .NET API (TableServiceContext, WCF Data Service, etc). I have a simple graph of objects that I want to save to the table store. In the service context class, I have the following code.
_TableClient.CreateTableIfNotExist("AggRootTable");
this.AddObject("AggRoots", model);
foreach (var related in model.RelatedObjects)
{
    this.AddRelatedObject(model, "RelatedCollection", related);
}
this.SaveChanges();
I have used this style of code in WCF Data Services via EF and a SQL Server, but it doesn't work against Azure Tables. I would not expect it to, as there aren't real relationships between tables in Azure. However, the methods are there. Does anyone know how to use AddRelatedObject, AddLink, etc in the context of Azure Tables? Or can suggest approaches to storing object graphs in general? I haven't been able to find any docs, and Google hasn't been helpful.
Thanks,
Erick
You can't. ATS (Azure Table Storage) does not support relationships. There are many non-working methods available because it uses the WCF Data Services API.
What you can do, however, is store the full object tree in a single table. Not sure if this will work for your design/architecture.
Also, it is a bad idea to keep calling CreateTableIfNotExist before every write operation. First, you pay extra for the transactions incurred by the round trip; second, the call is not instantaneous and will slow down your writes.
Just precreate the tables before deployment or during role start.
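One way to flatten the graph into a single table is to give the root and its children the same PartitionKey and save them in one batch. A sketch against the old TableServiceContext/StorageClient API; the entity shapes, table name, and AggRootWriter helper are made up for illustration:

using System;
using System.Data.Services.Client;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical flattened entities: the root and its children live in the same table
// and share a PartitionKey so the whole graph can be written in one batch.
public class AggRootEntity : TableServiceEntity
{
    public string Name { get; set; }
}

public class RelatedEntity : TableServiceEntity
{
    public string Value { get; set; }
}

public static class AggRootWriter
{
    public static void Save(TableServiceContext context, string aggRootId,
                            string name, string[] relatedValues)
    {
        context.AddObject("AggRoots",
            new AggRootEntity { PartitionKey = aggRootId, RowKey = "root", Name = name });

        foreach (var value in relatedValues)
        {
            context.AddObject("AggRoots", new RelatedEntity
            {
                PartitionKey = aggRootId,              // same partition as the root
                RowKey = Guid.NewGuid().ToString(),
                Value = value
            });
        }

        // One entity group transaction; atomic because everything shares a PartitionKey.
        context.SaveChangesWithRetries(SaveChangesOptions.Batch);
    }
}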
The Table Storage Service is generally not a good place to store entire object graphs, since there's a size limit (of 1 MB, IIRC) on each row/entity. Obviously, if you know that your object graphs will never be large, you may not care...
A good alternative is often to store a serialized graph in Blob Storage. However, you must have a strategy for how to handle versioning.
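A rough sketch of that approach with the classic StorageClient library (SDK 1.x assumed), serializing the graph to JSON with Json.NET; the container name, blob naming, and graph type are placeholders:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using Newtonsoft.Json;

public static class GraphBlobStore
{
    public static void Save(CloudStorageAccount account, string id, object graph)
    {
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("object-graphs");
        container.CreateIfNotExist();   // better done once at startup rather than per write

        // Serialize the whole graph to JSON and store it as a single blob.
        var blob = container.GetBlobReference(id + ".json");
        blob.UploadText(JsonConvert.SerializeObject(graph));
    }

    public static T Load<T>(CloudStorageAccount account, string id)
    {
        var blob = account.CreateCloudBlobClient()
                          .GetContainerReference("object-graphs")
                          .GetBlobReference(id + ".json");
        return JsonConvert.DeserializeObject<T>(blob.DownloadText());
    }
}

Including a version number in the blob name or inside the serialized payload is one simple way to handle the versioning concern mentioned above.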