I'm pretty new to Azure and am working on deploying an already existing MVC 3 website (I'm late to the project).
It has membership information (the tables are meant to be generated by aspnet_regsql) and it links those tables to application-specific tables. To get it into a working state I need to insert some form of "default data", as the code (unfortunately) makes some assumptions about what should be in the database.
No bother: I have an app that creates a default database and inserts the required data. The plan was then to import that into Azure, but this doesn't work because Azure demands a clustered index on every table, and aspnet_regsql creates some of the auth tables' primary keys as nonclustered. So I'm now left having to alter those tables as part of the process to make the primary keys clustered.
I was just wondering if aspnet_regsql has been superseded somehow, given that Azure demands clustered indexes? Am I missing a trick here, or is writing a script to modify the clustering of these indexes the sensible approach?
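For reference, the kind of alteration I'm scripting would run from the same app that builds the default database; roughly like the sketch below, where the table, constraint, and column names are placeholders rather than the real aspnet_* names:

using System.Data.SqlClient;

static class ClusteredIndexFixup
{
    // Rough sketch: drop a nonclustered primary key and recreate it as
    // clustered so the table can be imported into Azure. The names here
    // are placeholders and would need to match the real aspnet_* schema.
    public static void MakePrimaryKeyClustered(string connectionString)
    {
        const string sql = @"
            ALTER TABLE dbo.SomeMembershipTable
                DROP CONSTRAINT PK_SomeMembershipTable;

            ALTER TABLE dbo.SomeMembershipTable
                ADD CONSTRAINT PK_SomeMembershipTable
                PRIMARY KEY CLUSTERED (SomeId);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}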
Found the solution elsewhere here:
http://support.microsoft.com/kb/2006191/de
If you use the Universal Providers, you don't need the scripts.
Check out Hanselman's post. The Universal Providers will manage database creation if you are working with SQL Server, SQL Server Compact Edition, or Windows Azure SQL Database.
There are a lot of references to updated scripts floating around, including some on my own blog, that are no longer needed.
I already know about the Azure App Configuration for storing application configurations such as connection strings for my Azure apps. However, I am now working on an Azure Functions app where I have to store a more complex configuration for my application.
The configuration consists of mappings where for each entry I have a key/id and multiple values associated with it. Ideally, I'd like to store this in a database table, but setting up a whole database just to store this configuration seems a bit excessive to me. There will be about 200 entries in this table and I don't expect this number to grow much in the future.
Is there a way to store this so it can easily be edited later, for example using Azure App Configuration, or do I really need to create a new database just for this purpose? Is there maybe another alternative that I haven't considered so far?
The following suggestion assumes that you are not going to edit the data frequently.
One way to do this is to create a hash table (key to values) and store it in the configuration section of the Function App; at run time you can read it back. For editing, you just copy the whole value out of the config section, edit it (in Notepad++, say), and update it back.
Though this is not an ideal approach, it's far better than having a dedicated DB just for this purpose (plus the DB cost).
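A minimal sketch of the runtime side, assuming the mapping is stored as a JSON string in an app setting (the setting name "EntryMappings" and the value shape are just examples):

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

static class MappingConfig
{
    // Reads an app setting containing something like: {"key1":["a","b"],"key2":["c"]}
    // Function App settings surface as environment variables at run time.
    public static Dictionary<string, string[]> Load()
    {
        string json = Environment.GetEnvironmentVariable("EntryMappings");
        return JsonConvert.DeserializeObject<Dictionary<string, string[]>>(json);
    }
}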
I'm attempting to make my existing SQL Server 2008 database compatible with the Windows Azure platform by using SSDT, however I am getting a whole bunch of errors when I build the project due to TVFs and views looking for an external database that sits in the same instance in SSMS.
I've added the database that it's looking for into Azure, which wasn't a problem.
I've found that if I load the offending piece of code I can add the Azure server address to the FROM clause, which resolves the error (shown below). However, I have a huge number of objects that rely on the external database and hoped there might be a quicker way.
FROM [myAzureserver.database.windows.net].[ExternalDBName].[dbo].[TableName] AS ALIAS
I understand that this issue would not exist if I merged the databases, however this isn't possible at present.
Thanks a lot for your help.
Why are you trying to make your local SQL Server Azure compliant? Are you planning to move it at some point in the cloud? If so, you won't be able to use linked servers. Your FROM clause will work as long as the database remains on an on-premise SQL Server instance.
Assuming that's what you want to do, you are asking if there is a quicker way to change your references to point to the cloud database, right? I am not sure if this will work for you, but I had a similar issue on another project and ended up using synonyms. Check out synonyms here: http://msdn.microsoft.com/en-us/library/ms177544.aspx. Although you can't create a synonym for a server, you can create synonyms for tables/views/procs.
Again, this may not work for you, but let's try this...
Assuming you have your primary database called DB1, the secondary database called DB2, and the cloud database of DB2 called AzureDB2, you could create synonyms in DB2 to point to the cloud database without changing any SQL statement from DB1.
So assume you have this statement today in DB1:
SELECT * FROM DB2.dbo.MyTable
You could create a synonym in DB2 called MyTable:
CREATE SYNONYM dbo.MyTable FOR [myAzureserver.database.windows.net].[AzureDB2].[dbo].[MyTable]
DB2 becomes a bridge basically. You don't need to change any statement in DB1; just create synonyms in DB2 that point to the cloud database.
Hopefully this works for you. :)
I have a RESTful service running on azure. Currently, it has zero persistence. (It is just a REST gateway to another api.) I run it in a single, minimal Azure instance, and expect this will handle all the load this will ever get.
I now need to add some very lightweight persistence to it. A simple table, of 40-200 rows, eight data columns. The data is very static.
Doing the whole SQL Azure thing seems big overkill for my needs.
My thoughts have been to use:
An XML file deployed with the code, loaded into memory as the db.
Some better way to deploy the XML file, so it can be rolled out/updated more easily.
SQL Compact (can I do this on Azure?)
___ ?
What is the right path here?
Thank you!
SQL Server Compact would need to store its data somewhere persistent, so you would have to sync it regularly to persistent storage. That's a lot of extra work, I have no idea how to do it reliably, and so it's likely not a very good idea.
For your simple table, Azure Table Storage might be just enough. If that's not enough, SQL Azure is the next choice.
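A rough sketch of what the Table Storage route could look like with the classic storage SDK (the entity shape, table name, and property names are made up for the example):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Illustrative entity: one row per key, a couple of data columns.
public class SettingEntity : TableEntity
{
    public SettingEntity() { }
    public SettingEntity(string key) { PartitionKey = "settings"; RowKey = key; }
    public string Value1 { get; set; }
    public string Value2 { get; set; }
}

public static class SettingsStore
{
    public static void Save(string connectionString, SettingEntity entity)
    {
        var table = CloudStorageAccount.Parse(connectionString)
                                       .CreateCloudTableClient()
                                       .GetTableReference("settings");
        table.CreateIfNotExists();
        table.Execute(TableOperation.InsertOrReplace(entity));
    }

    public static SettingEntity Load(string connectionString, string key)
    {
        var table = CloudStorageAccount.Parse(connectionString)
                                       .CreateCloudTableClient()
                                       .GetTableReference("settings");
        var result = table.Execute(TableOperation.Retrieve<SettingEntity>("settings", key));
        return (SettingEntity)result.Result;
    }
}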
You can use the XML file as your store; there is no harm in it, and it is a very easy and cost-efficient solution, but there is a catch. As you mentioned, you currently use only one Azure instance, so you can store the XML file in App_Data. But if in the future you want to move to two Azure instances, you will have to replicate the App_Data folder; in other words, you will need to keep the App_Data folders in sync.
Suggestion
Instead of storing the file in App_Data, store it in a BLOB; you can retrieve it using WebClient and then keep it in memory (see the sketch after the summary below).
Pros: you don't have to sync anything.
Cons: there is a cost associated with the number of storage transactions you make, which will depend on how many times you update (and re-read) the file.
Summary
If you are going to work with only one Azure instance, use App_Data.
With more than one Azure instance, use BLOB (no syncing needed) or use App_Data with sync.
Do not use Azure Table storage; BLOB is the store designated for exactly this purpose.
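A minimal sketch of the BLOB approach, assuming the file is reachable at a blob URL (the URL below is a placeholder for your own storage account and container):

using System.Net;
using System.Xml.Linq;

public static class XmlStore
{
    private static XDocument _data;

    // Downloads the XML from blob storage once and keeps it in memory.
    public static XDocument Data
    {
        get
        {
            if (_data == null)
            {
                using (var client = new WebClient())
                {
                    string xml = client.DownloadString(
                        "https://myaccount.blob.core.windows.net/config/data.xml");
                    _data = XDocument.Parse(xml);
                }
            }
            return _data;
        }
    }
}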
EDIT
From an MSDN forum post:
As far as I know, Windows Azure does not support SQL Compact Edition. SQL Compact Edition stores data in the file system, which will not be synchronized across multiple instances (a web role may be deployed to more than one instance; an instance is similar to a virtual machine). And files stored in the file system will be lost when the instance is restarted or reimaged.
Hope this helps you.
I am new to subsonic and I'd like to know about the best practices regarding the following scenario:
Subsonic supports multiple database systems, e.g. SQLServer and MySQL. Our customers need to decide while deploying our application to their servers, which database system should be used. Long story short: the providerName, normally specified within the application configuration, should be configurable after the application is finished.
How can this be done? Do I have to generate separate data libraries for each database system I want to support?
Thank you in advance
Marco
No, you do not need to generate separate libraries.
However, you cannot use raw SQL strings, as you might expect; you always need to go through SubSonic's query-generation code.
It is also a good idea to run some tests against the different databases, because not all of the code has been 100% tested in every case.
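For example, a query written against SubSonic's fluent query API, roughly like the sketch below (the table and column names are made up), lets the configured provider emit the right SQL dialect for whichever database is in use:

using System.Data;
using SubSonic;

class Example
{
    // Rough SubSonic 2.x-style query. Because the SQL is generated by the
    // provider rather than hand-written, the same code should work whether
    // the configured provider targets SQL Server or MySQL.
    static IDataReader LoadProducts()
    {
        return new Select("ProductID", "ProductName")
            .From("Products")
            .Where("CategoryID").IsEqualTo(5)
            .ExecuteReader();
    }
}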
I currently developed an app that connects to a SQL Server 2005 database, so my DAL objects were generated using information from that DB.
It will also be possible to connect to an Oracle and MySQL db, all with the same table structures (aside from the normal differences in fields, such as varbinary(max) in SQL Server and BLOB in Oracle, and so on). For this purpose, I already defined multiple connection strings and multiple SubSonic providers for the different DB's the app will run on.
My question is, if I generated my objects using a SQL Server database, should the generated objects work transparently with the other DB's or do I need to generate a different DAL for each database engine I use? Should I be aware of any possible bugs I may encounter while performing these operations?
Thanks in advance for any advice on this issue.
I'm using SubSonic 2.2 by the way....
From what I've been able to test so far, I can't see an easy way to achieve what I'm trying to do.
The ideal situation for me would have been to generate the SubSonic objects using SQL Server, for example, and be able to switch dynamically to MySQL just by creating the correct provider for it at runtime, along with its connection string. I got to a point where my app would correctly switch from SQL Server to a MySQL DB, but then the app fails because SubSonic internally generates queries of the form
SELECT * FROM dbo.MyTable
which MySQL obviously doesn't support. I also noticed queries that enclosed table names in brackets ([]), so it seems there are a number of factors that limit using one provider across multiple DB engines.
I guess my only other option is to sort it out with multiple generated providers, although I must admit it does not make me comfortable knowing that I'll have N copies of basically the same classes throughout my project.
I would really love to hear from anyone else if they've had similar experiences. I'll be sure to post my results once I get everything sorted out and working for my project.
Has any of this changed in 3.0? This would definitely be a worthy reason for me to upgrade if life is any easier on this matter...