Migrating database to SQL Azure

As far as I know, the key points to migrate an existing database to SQL Azure are:
Every table has to contain a clustered index. This is mandatory (see the sketch after this list).
Schema and data migration should be done through Data Sync, bulk copy, or the SQL Azure Migration Wizard, but not with the restore option in SSMS.
The .NET code should handle the transient error conditions related to SQL Azure.
Logins have to be created in the master database.
Some T-SQL features may not be supported.
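To illustrate the first and fourth points, here is a minimal sketch (the table, login, and user names are all hypothetical):

```sql
-- Every table needs a clustered index; a clustered primary key satisfies this.
CREATE TABLE dbo.Orders
(
    OrderId      INT           NOT NULL CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
    CustomerName NVARCHAR(100) NOT NULL
);

-- Logins are created while connected to the master database...
CREATE LOGIN MigrationLogin WITH PASSWORD = 'ReplaceWithAStrongPassword1!';

-- ...then, connected to the user database, map a user to that login.
CREATE USER MigrationUser FOR LOGIN MigrationLogin;
```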
And I think that's all, am I right? Am I missing any other consideration before starting a migration?
Kind regards.

Update 2015-08-06
The Web and Business editions are no longer available; they have been replaced by the Basic, Standard, and Premium tiers.
CLR stored procedure support is now available.
New: SQL Server now supports Linked Server and Distributed Queries against Windows Azure SQL Database; more info.
Additional considerations:
The Basic tier allows 2 GB.
The Standard tier allows 250 GB.
The Premium tier allows 500 GB.
The following features are NOT supported:
Distributed transactions; see the feature request on UserVoice.
SQL Service Broker; see the feature request on UserVoice.

I'd add in bandwidth considerations (for the initial population and for ongoing traffic); these have both cost and performance implications.
Another potential consideration is any long-running process or large transaction that could be subject to SQL Azure's rather cryptic throttling techniques.

Another key area to point out is SQL jobs. Since SQL Agent is not running, SQL jobs are not supported.
One way to migrate these jobs is to refactor them so that a worker role can kick off the tasks. The content of a job might be moved into a stored procedure to reduce re-architecture. The worker role can then be designed to wake up at the appropriate time and kick off the stored procedure.
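The worker-role side is .NET, but the stored-procedure side of that refactoring might look like this minimal sketch (the procedure name, table, and job body are hypothetical):

```sql
-- Former SQL Agent job body moved into a stored procedure, so that a
-- worker role (or any external scheduler) can invoke it on a schedule.
CREATE PROCEDURE dbo.NightlyMaintenance
AS
BEGIN
    SET NOCOUNT ON;

    -- Example task: purge audit rows older than 90 days.
    DELETE FROM dbo.AuditLog
    WHERE LoggedAt < DATEADD(DAY, -90, SYSUTCDATETIME());
END;
```

The scheduler then only needs to run `EXEC dbo.NightlyMaintenance;` at the appropriate time.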

Related

How to move from DTU to vCore in Azure SQL Database

At present we have three stages (Dev, QA, and Prod) in our Azure resources. All three use SQL Database 'Standard S6: 400 DTUs'. Because of the Dev and QA SQL databases, our monthly cost is running over 700 euros. I am planning to move from DTU to vCore serverless. Below are my queries:
Is going into the portal -> Compute and storage -> and changing from DTU to vCore serverless the right process?
Do I need to take care of anything else before doing this operation?
Is my existing Azure SQL DB going to be affected by this operation?
If things are not fine as per my requirements, can I come back to DTU mode the same way?
Thanks in advance.
You can have a look at this MS doc for details: Migrate Azure SQL Database from the DTU-based model to the vCore-based model
Is going into the portal -> Compute and storage -> and changing from DTU to vCore serverless the right process?
Yes! Just choose the required option from the dropdown and click Apply.
Migrating a database from the DTU-based purchasing model to the vCore-based purchasing model is similar to scaling between service objectives in the Basic, Standard, and Premium service tiers, with similar duration and a minimal downtime at the end of the migration process.
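If you prefer scripting over the portal, the same change can be made with ALTER DATABASE (a sketch; the database name and the serverless objective GP_S_Gen5_2 are examples, so pick the objective that fits your workload):

```sql
-- Run against the master database of the logical server.
-- Moves the database to the General Purpose serverless tier
-- (Gen5 hardware, max 2 vCores in this example).
ALTER DATABASE [MyAppDb]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');
```

The statement returns immediately; the actual scaling operation completes asynchronously in the background.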
Do I need to take care of anything else before doing this operation?
Some hardware generations may not be available in every region. Check availability under Hardware generations for SQL Database.
In the vCore model, the supported maximum database size may differ depending on the hardware generation. For large databases, check the supported maximum sizes in the vCore model for single databases and elastic pools.
If you have geo-replicated databases, you don't have to stop geo-replication during migration, but you must upgrade the secondary database first and then upgrade the primary. When downgrading, reverse the order. Also go through the doc once.
Is my existing Azure SQL DB going to be affected by this operation?
You can copy any database with a DTU-based compute size to a database with a vCore-based compute size without restrictions or special sequencing, as long as the target compute size supports the maximum database size of the source database. Database copy creates a transactionally consistent snapshot of the data as of a point in time after the copy operation starts. It doesn't synchronize data between the source and the target after that point in time.
If things are not fine as per my requirements, can I come back to DTU mode the same way?
A database migrated to the vCore-based purchasing model can be migrated back to the DTU-based purchasing model at any time in the same fashion, with the exception of databases migrated to the Hyperscale service tier.
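Reverting follows the same pattern (a sketch; S6 matches the tier mentioned in the question, and the database name is hypothetical):

```sql
-- Move the database back to the DTU-based Standard S6 objective.
ALTER DATABASE [MyAppDb]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S6');
```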

Minimal Logging in Azure SQL Database

While analyzing the performance of an Azure SQL database with a huge workload (Business Critical service tier), I noticed that the Log IO percentage is hammered and hits 100% for considerable periods, which consequently affects overall performance. The database is populated by several Data Factory pipelines that embody SSIS packages and stored procedures, and that use INSERT/UPDATE statements extensively.
Back in the on-premises world, I would change the database recovery model to Simple or Bulk-Logged and use the TABLOCK hint in my inserts, and minimal logging would be achieved (after satisfying some other conditions).
Is this kind of minimal logging (TABLOCK) also applicable to Azure SQL databases? (I read they use the Full recovery model by default.)
How can I achieve minimal logging in the Azure SQL database described above, using the same pipelines?
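For reference, the on-premises pattern being described looks roughly like this (a sketch; the database and table names are hypothetical):

```sql
-- On-premises SQL Server only: switch to a minimally logged recovery model.
ALTER DATABASE StagingDb SET RECOVERY BULK_LOGGED;

-- Bulk-load with a table lock so the insert qualifies for minimal logging
-- (subject to the other documented conditions, e.g. a heap or empty target).
INSERT INTO dbo.TargetTable WITH (TABLOCK) (Col1, Col2)
SELECT Col1, Col2
FROM dbo.SourceTable;
```

As the answer below notes, the ALTER DATABASE ... SET RECOVERY part is not available in Azure SQL Database.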
As Subbu comments, this is not supported for now. You can vote it up here to help the feature progress: https://feedback.azure.com/forums/217321-sql-database/suggestions/36400585-allow-recovery-model-to-be-changed-to-simple-in-az

Maintenance required for Azure SQL DB in the long term

What is the maintenance required from an organization when deploying an Azure SQL Database in the long term?
My current organization hopes to do as little database management as possible and has looked for products that fully manage our databases without much intervention needed from our end. Some of the products being considered include Snowflake (for its automated partitioning of tables) and Domo (for its data warehousing, connectors, and BI tool offerings).
I'm leaning towards using Azure SQL DB for multiple reasons (products offered, transparent pricing, integration ease, available documentation, SSO, etc.), but want to first understand the skills needed and ease in maintaining it in the long run.
Will we have to manually rebuild indexes and partition out tables as we scale up? Or is Azure intelligent enough that it'll do most of the heavy lifting of performance optimization itself?
Does Azure or other vendors provide services to optimize a DB?
Sorry for the vague prompts, but any additional considerations in choosing DB vendors would be great. Thanks!
To answer your questions, you should first know what Azure SQL Database is and what its capabilities are.
I'm leaning towards using Azure SQL DB for multiple reasons (products offered, transparent pricing, integration ease, available documentation, SSO, etc.), but want to first understand the skills needed and ease in maintaining it in the long run.
The document What is Azure SQL Database introduces almost everything you want to know. SQL Database is a general-purpose relational database managed service in Microsoft Azure that supports structures such as relational data, JSON, spatial, and XML. SQL Database delivers dynamically scalable performance within two different purchasing models: a vCore-based purchasing model and a DTU-based purchasing model. SQL Database also provides options such as columnstore indexes for extreme analytic analysis and reporting, and in-memory OLTP for extreme transactional processing. Microsoft handles all patching and updating of the SQL code base seamlessly and abstracts away all management of the underlying infrastructure.
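For example, the columnstore option mentioned above is a single statement to enable (a sketch; the table and index names are hypothetical):

```sql
-- Convert a (hypothetical) fact table to a clustered columnstore index
-- to speed up analytic queries.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```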
Will we have to manually rebuild indexes and partition out tables as we scale up? Or is Azure intelligent enough that it'll do most of the heavy lifting of performance optimization itself?
No, you don't. Scalability is one of the most important characteristics of PaaS: it enables you to dynamically add more resources to your service when needed. Azure SQL Database enables you to easily change the resources (CPU power, memory, IO throughput, and storage) allocated to your databases.
You can mitigate performance issues caused by increased application usage that cannot be fixed using indexing or query-rewrite methods. Adding more resources enables you to react quickly when your database hits the current resource limits and needs more power to handle the incoming workload. Azure SQL Database also enables you to scale down the resources when they are not needed, to lower the cost.
For more details, please reference: Scale Up/Down.
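Checking the current service objective before or after a scale operation can be done from T-SQL (a sketch using the documented DATABASEPROPERTYEX property):

```sql
-- Returns the current service objective, e.g. 'S1' or 'GP_Gen5_2'.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS ServiceObjective;
```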
Does Azure or other vendors provide services to optimize a DB?
As Woblli said, Azure SQL Database provides monitoring and tuning capabilities for you.
As a complement, you can also use Azure SQL Database automatic tuning to help you optimize the database automatically.
Hope this helps.
Azure SQL DB offers the services you're asking about.
You can enable automatic tuning, which will create and drop indexes based on performance gains, and will force good query plans, again based on performance. It will roll back a change if that specific change has decreased the overall database performance level.
It will not, however, partition or shard your database for you.
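Enabling these options can also be scripted (a sketch of the documented automatic-tuning options, run inside the target database):

```sql
-- Enable the three automatic tuning options for the current database.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);
```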
Official documentation:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-automatic-tuning

Azure SQL database and pool usage. When to use database pool

Hello, I have 14 Azure SQL databases on the DTU model, in the S0, S1, and S4 (prod) tiers.
So I am paying for some unused or infrequently used databases.
10 databases are for dev and test, 2 are for production.
I saw one post about Azure elastic pools. Can somebody suggest which kind of databases I should put in an elastic pool, and any tips for cost saving?
Also, I have an Azure storage account (classic). How should I take a backup of it weekly? Is that possible?
Help and tips will be appreciated.
Thanks
Regards
KP
To keep it simple, an elastic pool gives you a number of eDTUs that can be used by, and distributed among, however many databases you have, as per their need.
So currently, if you have 14 databases in the S1 tier (20 DTUs each), you are paying for 14 × 20 = 280 DTUs; if some databases are not in use, it's possible the DTUs are greatly underutilized.
In this case, if you opt for an elastic pool with 50 eDTUs, they will be distributed among the 14 databases and used as needed, which means you will save more and balance resources.
I have not verified all the numbers I have mentioned, but that's the general idea.
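Once a pool exists (pools themselves are created through the portal, PowerShell, or the CLI), moving a database into it is a single statement (a sketch; the database and pool names are hypothetical):

```sql
-- Run against the master database of the logical server.
ALTER DATABASE [DevDb01]
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [DevTestPool]));
```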
I will just add to the other answers and comments. For backups, take into consideration that Azure automated backups provide a retention period of 7 to 35 days. Additionally, you can use Azure long-term backup retention, which can store backups with a retention period of up to 10 years.
About choosing the correct pool size to save money one of the documents shared by Nick above states the following: "SQL Database automatically evaluates the historical resource usage of databases in an existing SQL Database server and recommends the appropriate pool configuration in the Azure portal. In addition to the recommendations, a built-in experience estimates the eDTU usage for a custom group of databases on the server. This enables you to do a "what-if" analysis by interactively adding databases to the pool and removing them to get resource usage analysis and sizing advice before committing your changes".
Additionally, "After adding databases to the pool, recommendations are dynamically generated based on the historical usage of the databases you have selected. These recommendations are shown in the eDTU and GB usage chart and in a recommendation banner at the top of the Configure pool page. These recommendations are intended to assist you in creating an elastic pool optimized for your specific databases".

SQL Azure reliability and scalability

I need to make sure the availability of my database is high, and working with SQL Azure does not make that clear.
Is there a way to run multiple servers (where one takes over if another fails) under SQL Azure? Beyond that, is there something equivalent to increasing memory on the DB server to speed up database processing?
Read High Availability in the Intro to Azure SQL, and then read Business Continuity in Windows Azure SQL Database. To summarize:
Data durability and fault tolerance is enhanced by maintaining multiple copies of all data on different physical nodes located across fully independent physical sub-systems such as server racks and network routers. At any one time, Windows Azure SQL Database keeps three replicas of data running: one primary replica and two secondary replicas.
Right now there is no way to specify the hardware configuration for SQL Azure databases. It's totally out of your control, and from a SaaS perspective that makes sense. The backend management services are responsible for making sure you get the best performance possible.
If you need dedicated, reserved hardware for your SQL deployment, you can take a look at the IaaS offerings in Azure and start a VM with SQL Server installed; however, make sure you know the main differences between an IaaS and a PaaS offering.
I do not know what your high-availability requirements are, but you should look at the SLAs provided by Microsoft. SQL Database offers 99.9% monthly availability.
