With a Small instance worker role containing a WCF service, I want it to auto-scale if memory usage goes above n%. The WCF application uses Azure SQL Database, which is a singleton in my application. If/when the application tier auto-scales, what is "different" between the two systems that can be tracked by the database? Is there a way to alter the "Application Name" in a DB connection string when things scale up? Is there an Azure-specific ID that can be trapped and logged in the DB? I could fall back on hacking the connection string and passing that into SQL myself, but I am hoping there is something built-in I can use.
I tried looking around on the Azure team's site(s) but have seen nothing clear/definitive.
Thanks.
Connections to SQL Azure are tracked by the host name, which differs from machine to machine. Is this what you're trying to achieve by passing the machine name into the connection string?
You can monitor the connections to a SQL Azure database by executing the following query:
SELECT
e.connection_id,
s.session_id,
s.login_name,
s.last_request_end_time,
s.cpu_time,
s.host_name
FROM
sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_connections e
ON s.session_id = e.session_id
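If you do fall back on setting the Application Name keyword in the connection string yourself (for example, stamping it with the role instance ID), it shows up in the same DMV as program_name, so the scaled-out instances can be told apart on the database side. A minimal sketch, assuming a connection string ending in something like ";Application Name=MyWorker_IN_0" (the name is a placeholder):
SELECT
s.session_id,
s.host_name,     -- machine name of the connecting role instance
s.program_name,  -- whatever was set in the Application Name keyword
s.login_time
FROM sys.dm_exec_sessions s
WHERE s.is_user_process = 1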
I also want to mention that Azure's native auto-scaling feature does not support scaling based on memory utilization, only on CPU utilization and queue counts. To auto-scale based on anything else, you'll need to use the WASABi application block or AzureWatch.
I have a VM with SQL Server and an application used by no more than 50 users. I don't require a zero-downtime application if my VM or the datacenter has an issue, but what I need to ensure, at a minimum, is that I can make the app available again in less than 30 minutes.
First approach: using an Availability Set with 2 VMs won't actually work, because my SQL Server lives in the same VM and I don't think an Availability Set will take care of real-time replication of my SQL Server data; it will only cover the web application itself and not the persistent data (if I'm wrong please let me know). Given that, an Availability Set is not for me. It would also be twice as expensive because of the 2 VMs.
Second approach: using a recovery site with Azure Site Recovery, I was reading that it won't guarantee zero data loss, because there is a minimum replication frequency (I think it is 1 hour), so you have to be prepared to deal with up to 1 hour of data loss, and I don't like that.
Third option: Azure Backup for SQL Server VM. This option could work; the only downside is that it has an RPO of 15 minutes, which is not that long, but the problem is that if for some reason the user generates critical records in the app during that window, we won't be able to get them back into the app, because users always destroy the source information right away once they register it in the app.
Fourth approach: because I don't really require a zero-downtime app, I was thinking of just keeping the current VM with 2 premium disks, one for the SQL Server data files and one for the SQL Server logs. In case of a VM failure I will be notified by users immediately, and what I can do is create snapshots of the OS disk and the SQL premium disks (3 in total) and then create a new VM from those snapshots, so I will get a working VM, possibly in a different region, with the very last data inserted into SQL before the failure happened.
Of course, I guess I will need a load balancer on top of the VM so I can just reroute traffic to the new VM. The failed VM I will just kill and use the new VM as my new system. If a failure happens again, I just follow the same process, so this way I only pay for one VM and not two.
Has anyone already tried this? Does it sound reasonable and doable, or am I missing something big, or maybe I won't get what I expect?
You would be better off using Azure SQL (PaaS) instead of a VM; it gives you many options for your needs. Running the OS/application and SQL in the same VM is not recommended; by changing to Azure SQL (PaaS) you can downsize the hardware of the application VM and size the SQL database to support 50 users. You can also use a load balancer as you said, either Traffic Manager (https://learn.microsoft.com/pt-br/azure/traffic-manager/traffic-manager-overview) or Application Gateway (https://learn.microsoft.com/pt-br/azure/application-gateway/overview), to route traffic to the VMs where the application is running. Depending on your application, you could also migrate it to an Azure Web App (https://learn.microsoft.com/en-us/azure/app-service/).
With Azure SQL (PaaS) you can certainly stay under 30 minutes; I would say almost zero downtime, even though you don't require it.
Automatic backups and Point-in-time restores
https://learn.microsoft.com/pt-br/azure/sql-database/sql-database-automated-backups
Active geo-replication
https://learn.microsoft.com/pt-br/azure/sql-database/sql-database-active-geo-replication
Zone-redundant databases
https://learn.microsoft.com/pt-br/azure/sql-database/sql-database-high-availability
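As an illustration, active geo-replication can be turned on with a single T-SQL statement run in the master database of the primary logical server; the database and server names below are placeholders, and the secondary server must already exist:
-- Placeholder names; run in master on the primary logical server.
ALTER DATABASE [MyAppDb]
ADD SECONDARY ON SERVER [my-secondary-server]
WITH (ALLOW_CONNECTIONS = ALL); -- readable secondary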
Finally, I don't think an Always On solution (https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-ver15) is a good fit, since it is expensive and there are only 50 users. That's why I believe you are better off thinking of a PaaS solution for both your application and your database. Your 4th option sounds fine, but you would need to create a new VM, configure the IP, install SQL Server, configure it and so on just to bring SQL back up.
And what are users going to do if the failure happens when you are not available to fix it immediately? Your 30-minute target won't be met :)
We have several websites hosted on a dedicated server with the following configuration:
Windows-based dedicated server
MS SQL Server 2012 for the database
IIS 7.5+
and other software for managing the websites, such as Plesk
We have developed websites in ASP.NET Web Forms on .NET Framework 4.0 or 4.5 (about 10 websites)
We also have a few ASP.NET MVC based websites on Framework 4.5 (about 5 websites)
We use the default InProc session state for sessions
Besides this we have other software installed for security, etc.
We also have a mobile application API running on the same server, and the same application is used to send push notifications to iOS & Android devices
Now I have a few questions regarding migration to the Azure environment.
First, my big concern is session state. I want to migrate to Azure without making any changes to code except changes to web.config; is this possible?
Second, we use MS SQL Server 2012 as the database, and on Azure we would have to use Azure SQL Database, which I believe is not the same as MS SQL Server. Can I use Azure SQL Database, or should I stick with MS SQL Server, since I have applications running against it and migrating to Azure SQL Database may create problems?
Third, let us say I choose Web + Mobile --> App Service Standard package (which comes with up to 10 instances): what are these instances, and will an individual session always connect to the same instance?
Fourth: I have about 20 databases; one of them is about 6 GB and the other databases are about 200 MB-700 MB. Which service should I use for the databases in case I go with Azure SQL Database?
Single Database or Elastic?
Can I create multiple databases under elastic mode?
Let us say I choose "100 eDTUs: 10 GB included storage per pool, 200 DBs per pool, $0.20/hour": will I have a total of 10 GB of space for all databases in the elastic pool? What does "per pool" mean, and how many pools will I get with this option?
Or is MS SQL Server on a virtual machine a better option, since I could run SQL Server based session state?
Fifth: disk space. Let us say I choose App Service "S2: 2 Cores(s), 3.5 GB RAM, 50 GB Storage, $0.200"; does the 50 GB of disk space include the OS, or is it space allocated to the files we upload?
Sixth: some of our applications are used to send push notifications to iOS & Android devices. I am not sure if they will work in the Azure environment, as they need certain ports to be open and also some sort of certificate to be installed on the server.
I have asked a lot of questions because I didn't get clarity from MS chat; they just passed along links, which can be confusing at times. I hope I get clarity here.
Q: First, my big concern is session state. I want to migrate to Azure without making any changes to code except changes to web.config; is this possible?
If one of your concerns is code refactoring, then the model you should choose is Infrastructure-as-a-Service. In this model there is no need to change code, because the infrastructure on Azure can mirror the on-premises one: you provision virtual machines to run Windows Server, SQL Server and IIS. Software versions are entirely your choice, with no limitation, as long as the version is still supported in the Microsoft product lifecycle when procuring new software licenses.
If you'd like to modernize your web application, Azure App Service can be a good destination. Azure App Service can run code compiled against the .NET 4.0 framework. InProc session state is not guaranteed in Azure App Service, so you need to look into an alternative if you use it, for example Azure Redis Cache.
Q: Second, we use MS SQL Server 2012 as the database, and on Azure we would have to use Azure SQL Database, which I believe is not the same as MS SQL Server. Can I use Azure SQL Database, or should I stick with MS SQL Server, since I have applications running against it and migrating to Azure SQL Database may create problems?
Without an impact analysis and a sense of how complex your data model is, it's hard to say whether Azure SQL Database is compatible with your database. Fortunately, Microsoft provides a tool named Data Migration Assistant (DMA) which helps you perform a compatibility analysis for Azure SQL Database. This link gives you more details on DMA (https://learn.microsoft.com/en-us/azure/sql-database/sql-database-cloud-migrate). Moving from SQL Server to Azure SQL Database gains you benefits in high availability, disaster recovery and scalability, and the administration effort for server management and OS patching is significantly reduced. With SQL Server in an Azure VM, on the other hand, the migration cost is lower, as you only need to lift and shift (provision the VM, then perform a database detach/attach or another backup/restore method).
Q: Third, let us say I choose Web + Mobile --> App Service Standard package (which comes with up to 10 instances): what are these instances, and will an individual session always connect to the same instance?
No, sessions are not guaranteed to stay on the same instance. When you choose Azure App Service, your web application runs on virtualized servers running Windows Server and IIS. The term "instance" means a server instance. Azure App Service handles scaling by allocating compute resources across multiple instances to make sure your application does not crash from inadequate memory or resources. The default when you first provision your web app is 1, but the number of instances is configurable.
Q: Fourth: I have about 20 databases; one of them is about 6 GB and the other databases are about 200 MB-700 MB. Which service should I use for the databases in case I go with Azure SQL Database?
Single Database or Elastic?
Can I create multiple databases under elastic mode?
Let us say I choose "100 eDTUs: 10 GB included storage per pool, 200 DBs per pool, $0.20/hour": will I have a total of 10 GB of space for all databases in the elastic pool? What does "per pool" mean, and how many pools will I get with this option?
Or is MS SQL Server on a virtual machine a better option, since I could run SQL Server based session state?
Choosing Single Database or Elastic Pool depends on the performance and peak load of your databases. A single database is meant for an independent database, where you specify the DTUs (Database Transaction Units) for predictable performance, while an Elastic Pool is best for managing a set of databases together and is the choice when performance and usage are unpredictable.
In your case, I'd recommend using an Elastic Pool. An Elastic Pool lets you set eDTUs for the whole pool regardless of how many DTUs a specific database in the pool needs at any moment. The pool also monitors and analyzes performance in depth to give you insight and an overall picture of each database's performance.
When it comes to a pool, you should not worry about how much storage is given to each individual database, nor about the number of databases you can place in a pool. Given that you have 20 databases in total, you need only one pool.
The eDTUs you need can be calculated via this website: http://dtucalculator.azurewebsites.net/. Just run one of the scripts provided on the website against your SQL Server (where your on-premises databases are running) to capture performance metrics, then upload the resulting file to the website. It will give you a number. For example, say the result is that the 20 databases together need 100 eDTUs. Then you just create an Elastic Pool and set 100 eDTUs for the pool. However, if you use a Basic Elastic Pool, you are only given 10 GB per pool, which is likely not enough for your databases (one 6 GB database plus 19 databases of 200-700 MB comes to roughly 10-19 GB in total), so you would need a Standard Elastic Pool at 100 eDTUs, which allows up to 750 GB. Note that you could also choose a Basic pool with 1,200 eDTUs to get 156 GB maximum, but that is never recommended, because storage space is much cheaper than eDTUs.
In a nutshell, with the rough information above, I'd recommend choosing a Standard Elastic Pool with 100 eDTUs. You can increase the number of eDTUs if it does not satisfy the performance needs of the 20 databases; no database downtime is needed when adjusting the eDTU number.
That said, creating only 1 pool is not necessarily my recommendation; it depends on your database workloads. For example, among 20 databases there might be 5 with heavy workloads, for an ERP or business-critical systems, while the rest are just normal databases. In that case you'd want two Elastic Pools: one pool set with a high number of eDTUs, and another with a low number.
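If it helps, once a pool exists (pools themselves are created through the portal, PowerShell or the CLI), you can move an existing database into it, or create a new database directly inside it, with plain T-SQL. A minimal sketch with placeholder database and pool names:
-- Move an existing database into an elastic pool (placeholder names).
ALTER DATABASE [WebsiteDb01]
MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = StandardPool100 ) );
-- Or create a new database directly inside the pool.
CREATE DATABASE [WebsiteDb21]
( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = StandardPool100 ) );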
Q: Fifth: disk space. Let us say I choose App Service "S2: 2 Cores(s), 3.5 GB RAM, 50 GB Storage, $0.200"; does the 50 GB of disk space include the OS, or is it space allocated to the files we upload?
With Azure App Service, the OS is not counted. The 50 GB of storage space is given directly to your application (to store images, compiled DLLs, videos, libraries, etc.).
Q: Sixth: some of our applications are used to send push notifications to iOS & Android devices. I am not sure if they will work in the Azure environment, as they need certain ports to be open and also some sort of certificate to be installed on the server.
Azure Notification Hubs can help you with push notifications. Notification Hubs lets you upload the certificate for each kind of platform (e.g. iOS) to manage devices. This is a sample reference if you are familiar with iOS: https://learn.microsoft.com/en-us/azure/notification-hubs/notification-hubs-ios-apple-push-notification-apns-get-started. Notification Hubs also supports token-based authentication for APNS if you need it.
For each case, please give more details (e.g. your mobile scenario) and specific questions if possible, so I and others here can elaborate more.
One of my customers is developing a multi-tenant solution, and I'm working as a developer on the automation of the resource provisioning part. The solution is designed so that each tenant's resources are separate from every other tenant's.
So, for example, a single tenant will require a SQL database (PaaS), a storage account, and many other resources.
One of the requirements the customer has set is to host X databases on a SQL server (a logical server, not a VM), which I'm not sure is a meaningful constraint when using SQL as PaaS.
So my question is: should we create a SQL server and a SQL database for each tenant?
Or
Should we create a SQL server, host X databases on that server, and when the server reaches its limit (X databases), create another server and apply the same logic?
In either scenario, what difference does it make from a database performance, pricing and database security point of view?
FYI, my thinking is that whether I host X databases on a single SQL logical server or create X SQL logical servers to host X SQL databases, it won't make any difference from a pricing and database performance point of view.
A few differences I can think of if you go with a single server for all clients:
1. The administrator password is per server, and using it, one client could gain access to the other clients' databases as well.
2. Azure has a limit on how many DTUs can be provisioned under one server, so if you have many databases under one server this may lead to a few issues, such as:
a.) frequent DTU increase requests
b.) sometimes automated backups may fail if there are no DTUs available (a backup has to copy the whole database, so the DTUs it needs scale with the database being backed up)
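To judge how much headroom one server or pool would actually need for a set of databases, you can look at each database's historical resource consumption from the logical server's master database. A sketch using the sys.resource_stats view available on Azure SQL logical servers:
-- Run in the master database of the logical server.
SELECT database_name,
start_time,
avg_cpu_percent,
avg_data_io_percent,
avg_log_write_percent,
storage_in_megabytes
FROM sys.resource_stats
WHERE start_time > DATEADD(day, -7, GETUTCDATE())
ORDER BY database_name, start_time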
Your question is quite broad, as there are many opinions on and approaches to it.
In any case, you should take a look at elastic database pools: https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-pool/, a feature designed exactly for multi-tenant SaaS solutions.
Your end solution may be a combination of both: you may want to use a dedicated server for "bigger" tenants, while hosting many small tenants together on a single server.
Security should not carry much weight in the decision, because when you use contained database credentials for application access, it does not really matter whether the databases are allocated on a single logical server or not.
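A contained database user is scoped to a single database, so even on a shared logical server one tenant's application login cannot reach another tenant's database. A minimal sketch (the user name and password are placeholders):
-- Run inside the tenant's database, not in master.
CREATE USER [tenant01_app] WITH PASSWORD = 'Pl@ceholder-Passw0rd!';
ALTER ROLE db_datareader ADD MEMBER [tenant01_app];
ALTER ROLE db_datawriter ADD MEMBER [tenant01_app];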
I have one Microsoft Azure subscription with one cloud service and one SQL Azure instance. Now I want to create another cloud service under a different subscription (using a different Microsoft account). With this second cloud service, can I use the same SQL Azure instance as the first subscription? (I need to share data between the two cloud services.)
Or might there be performance issues?
Thanks in advance
Yes. An Azure SQL DB instance can be accessed from a different subscription as long as you have the connection string, username and password for it. As long as both services are in the same region, there is no performance issue.
Yes, sure. From the user's perspective SQL Azure is mostly an ordinary SQL Server which you can access from anywhere in the world (given that the firewall rules allow that access): from Azure services, from VMs hosted elsewhere, from your desktop, from servers in your company's server room.
Network latency might kick in, more clients on the same instance mean more load, and there is a limit on the number of concurrent connections. Other than that, no problems.
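If the second cloud service's outbound IPs are not yet allowed through, a server-level firewall rule can be added from the master database with sp_set_firewall_rule; the rule name and IP range below are placeholders:
-- Run in the master database of the logical server (placeholder values).
EXECUTE sp_set_firewall_rule
@name = N'SecondCloudService',
@start_ip_address = '203.0.113.0',
@end_ip_address = '203.0.113.255'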
You need to make sure you are a member of each Azure subscription to be able to use the other's SQL DB.
How do I see if an SQL Azure database is being throttled?
I want to see data like: what percentage of the time it was throttled, the count of throttling events, and the top reasons for throttling.
See https://stackoverflow.com/questions/2711868/azure-performance/13091125#13091125
Throttling is the least of your troubles. If you need performance, you would be best served by building your own DB servers using VM roles; I found that their performance is vastly better than SQL Azure's. For fault tolerance you can provision a primary and a failover in a different VM, in a different region if necessary. Make sure that the DB resides on the local drive.
I don't believe that information is currently available. However, the team does share reasons why you could be throttled and how to handle it (see here).
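For what it's worth, on current Azure SQL Database you can at least see how close a database runs to its resource limits (sustained values near 100% are a strong hint that requests are being throttled) by querying the sys.dm_db_resource_stats view inside the database; a sketch:
-- Run inside the user database; one row per 15-second interval, about one hour of history.
SELECT end_time,
avg_cpu_percent,
avg_data_io_percent,
avg_log_write_percent,
max_worker_percent,
max_session_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC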