I'm deploying an eCommerce site for a customer in Spain, so my first idea was to deploy it to the Azure Northwest region.
The problem is that, even with a 99.99% SLA, the whole Azure datacenter could still go down (much like the Amazon S3 outage that lasted several hours a few months ago).
My question is: how do I protect against this? I know I can change my DNS CNAME to point the site at a different endpoint, but DNS changes take a long time to propagate, and I would need a very current backup of the database to be able to restore it onto another server.
I know I can use Traffic Manager too, but I still have the problem with the database...
What is the best approach to solve this problem?
Also, I have some doubts about whether it is even reasonable for a medium-sized company to take this into consideration.
Is anyone doing this and happy with the solution? 8-)
thanks in advance for your help,
luis
SQL Data Sync is a great way to synchronize data between Azure SQL Databases. It works across data centers and regions. Using SQL Data Sync you could create a second database in another data center and synchronize the data between the two. There will likely be a window during which you are exposed to data loss, however, since the interval between automatic syncs currently can't be lower than five minutes.
I have managed to get the C# and DB setup working using ListMappings. However, when I try to deploy the split/merge tool to Azure classic cloud services, it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion... no. Split/merge is no longer relevant with the maturation of elastic pools. Elastic pools with one database per tenant seem to be the sustainable way to implement multi-tenancy with legacy code. Our initial plan was to add keys to each of our tables to support multiple tenants per database; elastic pools give us the same flexibility without having to make breaking changes to our existing code.
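For anyone going the pool-per-tenant route, here is a rough Az PowerShell sketch of what that looks like. The resource names are placeholders (not from this thread), and it assumes the DTU-based purchasing model rather than vCores.

```powershell
# Sketch: create an elastic pool and move an existing per-tenant database into it.
# Resource names are placeholders; assumes the DTU-based purchasing model.

# Create the pool on an existing logical server
New-AzSqlElasticPool -ResourceGroupName "rg-saas" -ServerName "sql-saas-server" `
    -ElasticPoolName "tenants-pool" -Edition "Standard" -Dtu 100 `
    -DatabaseDtuMin 0 -DatabaseDtuMax 50

# Move one tenant's database into the pool (repeat per tenant)
Set-AzSqlDatabase -ResourceGroupName "rg-saas" -ServerName "sql-saas-server" `
    -DatabaseName "tenant-contoso" -ElasticPoolName "tenants-pool"
```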
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the SplitMerge tool will be used over the long term, just for the initial data migration from one db to many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to allow a single API to route queries to the appropriate shard(s) based on sharding key. Happy to compare notes with you if you are still working on this.
Is there any way to stop an Azure SQL DB so that it doesn't charge anything towards our account? I don't want to delete it; just while it's in testing and not being used, we'd like to set it to "stopped" like we can do with websites, cloud services, and VMs.
As of 10 February 2023, the answer is no.
They won't allow it, so billing for your Azure SQL Database continues from the day you create it. There really is no way to pause/stop billing for an Azure SQL Database.
Official source: feedback.azure.com, "Please add ability to temporarily turn off/on SQL Azure server to pause billing"
Microsoft's official answer appears to be: "Yes, you can export your database. Delete the Azure SQL database and that will pause billing. Then when you need it you can create a new database and import your previously exported DB."
I don't believe this is acceptable as an answer for "Allow me to temporarily turn off SQL Server to save on my billing"
This is not an option today; the only choice you have is to reduce the size of the Azure SQL Database, which will reduce the cost from the next hour of service. If you really don't want to pay for the DB, you could back it up to blob storage, delete the database, and then restore it when required. You could orchestrate this using PowerShell or similar.
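For what it's worth, here is a hedged sketch of that "export, delete, re-import later" flow using the current Az.Sql module. All names, URIs, keys and credentials are placeholders, and a production script would poll the export status in a loop rather than checking it once.

```powershell
# Sketch of the "export, delete, restore later" approach with the Az.Sql module.
# All names, keys and credentials below are placeholders.

$rg     = "my-rg"
$server = "my-sqlserver"
$db     = "my-db"
$bacpac = "https://mystorage.blob.core.windows.net/backups/my-db.bacpac"
$stgKey = "<storage-account-key>"
$cred   = Get-Credential   # SQL logical server admin login

# 1. Export the database to a .bacpac in blob storage
$export = New-AzSqlDatabaseExport -ResourceGroupName $rg -ServerName $server -DatabaseName $db `
    -StorageKeyType "StorageAccessKey" -StorageKey $stgKey -StorageUri $bacpac `
    -AdministratorLogin $cred.UserName -AdministratorLoginPassword $cred.Password

# Check (ideally poll) the export status and wait for success before deleting anything
Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $export.OperationStatusLink

# 2. Delete the database to stop the per-hour charge
Remove-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName $db

# 3. Later: import the .bacpac back into a new database
New-AzSqlDatabaseImport -ResourceGroupName $rg -ServerName $server -DatabaseName $db `
    -StorageKeyType "StorageAccessKey" -StorageKey $stgKey -StorageUri $bacpac `
    -AdministratorLogin $cred.UserName -AdministratorLoginPassword $cred.Password `
    -Edition "Standard" -ServiceObjectiveName "S0" -DatabaseMaxSizeBytes 250GB
```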
Update May 2019: There is a new Azure SQL Database "Serverless" tier coming that might meet some of the requirements around reducing costs by not billing when not in use. Official documentation is available to read.
The Azure SQL Database team is happy to announce that there is now an option that may address your request. We just announced a "serverless" option for Azure SQL DB that will pause your database when it is not in use. You can read more about the feature here:
SQL Database Serverless
The databases get backed up automatically just before a drop, so you can drop a database when you don't need it and restore it when needed.
Restores will take some time depending on the database size and how much log you generated, so it won't be fast for large databases.
Also, there is an expiration policy on how long the backups are retained (it depends on the service tier), so watch out for that.
https://msdn.microsoft.com/en-us/library/azure/jj650016.aspx
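A rough sketch of that drop-and-restore approach with the Az.Sql module; names are placeholders, and if the database has been deleted more than once you would pick the specific deleted backup you want rather than the first one.

```powershell
# Drop the database while it is not needed (per the answer above, a backup is taken before the drop)
Remove-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" -DatabaseName "my-db"

# Later: find the deleted-database backup and restore it
$deleted = Get-AzSqlDeletedDatabaseBackup -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" -DatabaseName "my-db" | Select-Object -First 1

Restore-AzSqlDatabase -FromDeletedDatabaseBackup `
    -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
    -TargetDatabaseName "my-db" -DeletionDate $deleted.DeletionDate `
    -ResourceId $deleted.ResourceID -Edition "Standard" -ServiceObjectiveName "S0"
```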
Agree with Shiva's answer.
But if you are simply trying out SQL Server on an Azure VM, you would not want to incur charges by accidentally leaving it running over a weekend or for weeks. One solution is to use the Auto-shutdown feature.
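Auto-shutdown is configured per VM in the portal; if you would rather script the same effect yourself (for example from a scheduled job), a minimal sketch with placeholder names is below. Note that only compute billing stops; disks are still charged.

```powershell
# Deallocate the VM so compute billing stops (disk storage is still billed)
Stop-AzVM -ResourceGroupName "my-rg" -Name "sql-vm-01" -Force

# Start it again when you need it
Start-AzVM -ResourceGroupName "my-rg" -Name "sql-vm-01"
```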
This is now possible and the preview feature is public.
Azure SQL Database serverless
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-serverless
That said, at this low scale (a scale-down/pause requirement rather than scale-up), SQL Server running within a VM might be a better answer for you...
As is, it is not supported; however, there are a few workarounds. It really depends on how long you want to stop it, how soon you will need it again, and the size of the database. If it is only for a couple of hours, it may not be worth it because billing is hourly and you may run into rounding issues. If it is for days, you can drop the database and restore it when needed. If it is for weeks, exporting the database and importing it later is another option. Also check the backup retention for the edition you choose before preferring export/restore.
The other thing to note is immediate versus planned need. If the need is immediate and the DB is big, make sure the SLAs in place work for you.
You could export the database to Azure storage and Import it when you want to re-enable it, as suggested here:
Temporarily turn off on sql
Here's what I did with my Azure database (4/20/19)
I scaled the database DTUs as low as possible while still reserving up to 250 GB, which turns out to be 10 DTUs at an estimated cost of $1.50 per DTU (roughly $15 per month). To me, that's almost as good as turning it off and on. I can scale it up to 100 DTUs when I want more processing power and scale it down when I don't. Even at 100 DTUs for a whole month, the cost was only $75.93 last month on my test database.
NOTE: I am NOT using a VM to run a database, just the standard SQL server you get when you create a database in Azure.
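If you want to script that scale-down/scale-up instead of clicking through the portal, a minimal Az PowerShell sketch follows; resource names are placeholders, with S0 being the 10 DTU tier and S3 the 100 DTU tier mentioned above.

```powershell
# Scale down to 10 DTUs (Standard S0, up to 250 GB) while idle
Set-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" -DatabaseName "my-db" `
    -Edition "Standard" -RequestedServiceObjectiveName "S0"

# Scale back up to 100 DTUs (Standard S3) when you need the throughput
Set-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" -DatabaseName "my-db" `
    -Edition "Standard" -RequestedServiceObjectiveName "S3"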
Yes you can, with Azure SQL Database serverless. Your compute resources are suspended when the database is not in use, so you save the compute costs; billing continues for storage, however. You can set the inactivity timeout after which compute is suspended, and it can be as low as one hour.
Read this: https://azure.microsoft.com/en-in/updates/update-to-azure-sql-database-serverless-providing-even-greater-price-optimization/
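As a rough illustration, creating a serverless database with an auto-pause delay looks something like the sketch below with the Az.Sql module. Names are placeholders, and the exact parameter names of the vCore parameter set may vary by module version, so verify against Get-Help New-AzSqlDatabase.

```powershell
# Sketch: a General Purpose serverless database that auto-pauses after 60 minutes idle.
# Server/database names are placeholders; verify parameter names for your Az.Sql version.
New-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" -DatabaseName "my-db" `
    -Edition "GeneralPurpose" -ComputeModel "Serverless" -ComputeGeneration "Gen5" `
    -VCore 2 -AutoPauseDelayInMinutes 60
```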
Elastic Pool: If you have more than one database, you can use the Elastic Pool option to bring your total cost down.
Others also mention the option to Drop your database, and rely on restore. That will also work, if you do not leave it deleted for too long...
We have TONS of websites hosted on Azure. Our VMs appear to be running now, but many of our Azure Websites are not. In an effort to bring our sites back up sooner rather than later, we have tried scaling UP, OUT, and changing our hosting plan, to no avail. Is there a way to force an Azure Website VM to move to another (working) datacenter? We don't want to destroy the site and bring it back up, as we would be forced to update DNS, which would cause an even longer delay in service to our customers.
Any help is greatly appreciated.
Sorry to everyone else experiencing a long night right along with me.
Your best bet is to run two instances of the site in two regions and use something like Traffic Manager (or AWS Route 53 if you want something external to Azure) to perform failover routing for you.
Depending on the type of site, you could also run a static holding site in a non-Azure environment and fail over to that. How you choose to solve this will depend on your budget (or the opportunity cost of your sites being offline).
Note that a 99.9% yearly SLA equates to almost 9 hours of downtime in a year.
If you want to understand how you could solve this intra-Azure here's a good guide: http://blog.kloud.com.au/2014/11/03/deploy-an-ultra-high-availablity-mvc-web-app-on-microsoft-azure-part-1/
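To make the Traffic Manager suggestion above concrete, here is a hedged Az PowerShell sketch of a priority (failover) profile in front of two deployments of the same site. All DNS names, regions and resource names are placeholders.

```powershell
# Sketch: a priority (failover) Traffic Manager profile in front of two site deployments.
# DNS names, regions and resource names are placeholders.

$rg = "my-rg"

$tmProfile = New-AzTrafficManagerProfile -Name "mysite-tm" -ResourceGroupName $rg `
    -TrafficRoutingMethod "Priority" -RelativeDnsName "mysite-failover" -Ttl 30 `
    -MonitorProtocol "HTTPS" -MonitorPort 443 -MonitorPath "/"

# Primary deployment (e.g. West Europe): receives all traffic while its probe is healthy
New-AzTrafficManagerEndpoint -Name "primary" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type "ExternalEndpoints" -Target "mysite-westeurope.azurewebsites.net" `
    -EndpointLocation "West Europe" -EndpointStatus "Enabled" -Priority 1

# Secondary deployment (e.g. North Europe): used only when the primary probe fails
New-AzTrafficManagerEndpoint -Name "secondary" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type "ExternalEndpoints" -Target "mysite-northeurope.azurewebsites.net" `
    -EndpointLocation "North Europe" -EndpointStatus "Enabled" -Priority 2
```

You would then point your site's CNAME at the Traffic Manager DNS name rather than at either region directly.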
I'm working on a quite large and critical application. It's been deployed to Azure with 3 web roles and a SQL Azure DB.
In case of disaster, we need to be able to restore both the web roles and SQL Azure to a different data center. Could someone please explain how we can restore a SQL Azure DB and web role(s) to a different data center?
The simple answer is that you take regular backups of your SQL Azure database, which can be restored to a database in another datacenter. You will have the problem of losing the data since the last backup, which is more difficult to resolve; the simplest approach may be to have a hot standby and use SQL Database Data Sync, but it may not be practical for all the data. Web roles are easier: you redeploy them somewhere else and change the connection strings to the database. You would also have to change the CNAME for your domain as they will be restored to a different cloudapp.net name.
You did ask for restore, and not failover, right? Performing a failover (where you have a hot standby) is a more difficult problem, particularly as far as data synchronisation is concerned.
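For reference, with today's tooling the "restore into another data center" part maps to geo-restoring the automatic geo-redundant backup. A rough Az PowerShell sketch, assuming the target logical server already exists in the secondary region; all names are placeholders.

```powershell
# Geo-restore the most recent geo-redundant backup into a server in another region
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "rg-primary" `
    -ServerName "sql-primary-server" -DatabaseName "app-db"

Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName "rg-dr" -ServerName "sql-dr-server" `
    -TargetDatabaseName "app-db" -ResourceId $geoBackup.ResourceID `
    -Edition "Standard" -ServiceObjectiveName "S2"
```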
I would go back and question 'disaster' and correlate it with known facts. I am not sure of the outage history of Azure in specific data centres, but there have been significant Azure-wide outages (leap year 2012 and the certificate problem this year). The ability to restore to a different Azure datacentre won't help you in these scenarios. (Although AWS seems to mostly have regional outages.) I don't think that a datacenter-specific recovery strategy is necessary on Windows Azure, but you may want to check the history and likelihood of datacenter-specific failures before making a final call. Having a multi-region architecture that distributes load and data across datacentres and handles live traffic across all of them (say, using Traffic Manager) has many benefits, a side effect being built-in disaster recovery, but it comes at an architectural, development, hosting and bandwidth cost.
Go back and write the business case for your datacenter disaster recovery scenario. You may find that it is not worth it financially, or doesn't solve your real problem.
We are experiencing a very serious unscheduled downtime of our Azure application today for what is now coming up to 9 hours. We reported to Azure support and the ops team is actively trying to fix the problem and I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at the instance so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network related within our data center (west europe), and indeed, later on in the day the service dash board has gone red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally threads within our application are logging sql connection exceptions into our storage account as it cannot contact the DB)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre, yet it has no issues contacting the DB and its external endpoint is fully available.
I would like to ask the community if there is anything I could have done better to avoid this downtime. I followed the guidance about having at least 2 instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? And how would I manage the fact that my SQL Azure DB is in the same datacentre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission-critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but it can be worked around (if you only run a website, it can be done very simply using Response.Redirect or similar).
Now, there is a data synchronization service from Microsoft that will sync multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from a SQL Azure perspective.
Also, it would be a good idea to employ a third-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose, when some of your instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities which will also improve the latency of your application. So you can map http://myapp.com to Traffic Manager, which routes to the data-center-specific URL, and a user in Europe should automatically be directed to the European data center and vice versa for the USA. Note: at the time of writing this post, there is no way to automatically detect a failure and fail over to another data center. Manual steps are involved once a failure is detected, and failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to check the availability of application-specific components by writing a diagnostic page or web service and then polling it from a local computer, as sketched below.
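A minimal sketch of that external polling idea in PowerShell; the /health URL and the alerting step are placeholders for whatever diagnostic page your application actually exposes.

```powershell
# Sketch: poll a site-specific health page from outside the data center and warn on failure.
# The URL and the alerting step are placeholders.

$url = "https://www.example.com/health"

while ($true) {
    try {
        $response = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 15
        if ($response.StatusCode -ne 200) {
            Write-Warning "Health check returned HTTP $($response.StatusCode)"
        }
    }
    catch {
        # Site unreachable or returned an error status: trigger your notification/failover here
        Write-Warning "Health check failed: $($_.Exception.Message)"
    }
    Start-Sleep -Seconds 60
}
```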
HTH
As you're deploying to Azure, you don't have much control over how SQL Server is set up; MS have already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice for about an hour each time, but one database in another affected data centre had no interruption.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS haven't prepared for (this latest technical problem, earthquakes, meteor strikes) would be to co-locate your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is an ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper though, this may not have helped you with the latest problem as it did affect multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
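(For readers finding this later: with the current Az module a database copy can also target a logical server in another region, so the "copy elsewhere and repoint the app" idea looks roughly like the sketch below. Names are placeholders and the target server must already exist.)

```powershell
# Sketch: copy a database to a logical server in another region using the Az.Sql module.
# Server/resource group names are placeholders; the target server must already exist.

New-AzSqlDatabaseCopy -ResourceGroupName "rg-westeurope" -ServerName "sql-primary-server" `
    -DatabaseName "app-db" `
    -CopyResourceGroupName "rg-northeurope" -CopyServerName "sql-standby-server" `
    -CopyDatabaseName "app-db-copy"
```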
(I would have posted this answer on server fault, but I couldn't find the question)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked at a hosting company we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and where we could, they had different physical routes to the building; the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. This might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc. These are typically only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so it couldn't be reconfigured at all.
Just a thought: your DC doesn't host any of the WikiLeaks mirrors, or PayPal/Mastercard/Amazon (who are getting DDoS'd by WikiLeaks supporters at the moment)?