How do I resize my SQL Azure Web Edition 5 GB database to a 1 GB database? I no longer need the extra capacity and do not want to be billed at the higher rate. I don't see anything in the Management Portal and a quick web search also turned up nothing.
I answered a similar question here. It should be as simple as running an ALTER DATABASE command:
ALTER DATABASE MyDatabase MODIFY (EDITION='WEB', MAXSIZE=1GB)
Just keep this in mind: As long as your usage is <= 1GB, you'll be billed at the 1GB rate. Billing is monthly but amortized daily. So you actually don't need to reduce your max size unless you're relying on SQL Azure to prevent you from growing beyond 1GB.
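If you just want to check how much space you're actually using before (or instead of) lowering MAXSIZE, a query along these lines should work from inside the database - a minimal sketch, assuming sys.dm_db_partition_stats is available (it is in SQL Azure) and the usual 8 KB pages:

-- Approximate space used by the current database, in MB
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS used_space_mb
FROM sys.dm_db_partition_stats;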
EDIT: As of Feb. 2012, there's a new 100MB pricing tier (at $4.99, vs. $9.99 at 1GB). While you can't set MAXSIZE to 100MB, your monthly cost will drop if you stay under 100MB. See this blog post for more details.
Related
I set up an Azure SQL database in the free tier for testing purposes. It has a 32 MB limit, but that should be fine, since my db is about 30 tables with a few rows of data in each (really just for testing purposes).
After a while, I hit the 32 MB limit. I was forced to delete (and drop) all the tables. Now the db takes up 87.5 % WITH NO TABLES IN IT.
I followed this post about data size investigations and here are the results:
(more rows here, but each with 0.1 MB or less)
I tried to run DBCC SHRINKFILE (log, 0); but nothing changed.
I also ran sp_spaceused, which resulted in:
The percentage shown in the Azure portal (87.5 %) changes from time to time for no apparent reason (sometimes it drops to 37.5 %).
So my question is: what am I doing wrong here? How do I avoid having most of the db filled without any data in it?
The Azure free tier account provides 250 GB of free storage with an S0 instance for 12 months.
Please create another database (up to 10 databases are allowed in the free tier) using an S0 instance. Refer to the steps given by @FabioSipoliMuller in this thread to deploy it.
Note: You need to make some configuration changes when deploying the database under the free service.
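Before spinning up a new database, it may also be worth checking whether the remaining space is held by the data file or the transaction log. Something along these lines should show that from inside the user database - a sketch that assumes sys.database_files and FILEPROPERTY behave in Azure SQL Database as they do in SQL Server, with 8 KB pages:

-- Allocated vs. actually used space per file
SELECT name,
       type_desc,
       size * 8.0 / 1024 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS used_mb
FROM sys.database_files;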
We currently have an elastic pool of databases in Azure that we would like to scale based on high eDTU usage. There are 30+ databases in the pool and they currently use 100GB of storage (although this is likely to increase).
We were planning on increasing the eDTUs allocated to the pool when we detect high eDTU usage. However, a few posts online have made me question how well this will work. The following quote is taken from the Azure docs - https://learn.microsoft.com/en-us/azure/sql-database/sql-database-resource-limits
The duration to rescale pool eDTUs can depend on the total amount of storage space used by all databases in the pool. In general, the rescaling latency averages 90 minutes or less per 100 GB.
If I am understanding this correctly, this means that if we want to increase the eDTUs we will have to wait for, on average, 90 minutes per 100 GB. If that is the case, scaling dynamically won't be suitable for us, as waiting 90 minutes for an increase in performance is far too long.
Can anyone confirm whether what I have said above is correct? And are there any alternative recommendations for increasing eDTUs dynamically without having to wait for such a long period of time?
This would also mean that if we wanted to scale on a schedule, i.e. scale up eDTUs at 8 am, we would actually have to initiate the scaling at 6:30 am to allow for the estimated 90 minutes of scaling time - if my understanding is correct.
When you scale the pool eDTUs, Azure may have to migrate data (this is a shared database service). That takes time when it's required. I have seen scaling be instant, and I have seen it take a long time. I think Microsoft's intent is to offer cost savings via Elastic Pools, not the ability to quickly change eDTUs.
The following is the answer provided by a Microsoft Azure SQL Database manager:
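On the detection side, if you want to see how close the pool is getting to its eDTU limit from T-SQL rather than from the portal, something like the following against the master database should work - a sketch using sys.elastic_pool_resource_stats, with 'MyPool' as a placeholder name and the column list written from memory, so double-check it against the docs:

-- Recent resource usage for one pool, newest first (run in master)
SELECT TOP (20)
       end_time,
       elastic_pool_name,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'MyPool'
ORDER BY end_time DESC;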
For rescaling a Basic/Standard pool within the same tier, some service optimizations have occurred so that the rescaling latency is now generally proportional to the number of databases in the pool and independent of their storage size. Typically, the latency is around 30 seconds per database for up to 8 databases in parallel, provided pool utilization isn't too high and there aren't long-running transactions. For example, a Standard pool with 500 databases, regardless of size, can often be rescaled in around 30+ minutes (i.e., ~ 500 databases * 30 seconds / 8 databases in parallel).
In the case of a Premium pool, the rescaling latency is still proportional to size-of-data.
This Azure SQL Database manager promised to update Azure documentation as soon as they finish implementing more improvements.
Thank you for your patience while waiting for this answer.
Sadly, last week Azure moved one of my databases from the Web tier to the S1 tier. I manually increased the tier to S2 and worked hard to change some things in the system so the DTU won't reach 100%.
Now I have a new situation - background jobs run and do work in the db, like deletes etc. The problem is that these background jobs consume 100 percent DTU and my website starts getting errors.
My question is: is there a way to tell SQL, per query/operation, to consume a max of X DTU? For example, I want to create an index, and again when I run that operation my DTU rises to 100 and stays there for a long time - I guess it's a big index to build - so again I'm stuck and I cancel the query because I don't want my end users to suffer lag.
None of these issues existed in the Web tier, and everything worked smoothly.
That's a very nice suggestion; unfortunately, limiting a particular query or operation to a capped amount of DTU is not possible. Maybe in future versions they will bring Resource Governor-like tools.
The closest thing I can think of to limit DTU for a query is to set
OPTION (MAXDOP 1)
A query may go parallel and consume more resources for each thread, so limiting MAXDOP will help limit DTU usage, with some caveats.
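For the index-build case specifically, the same idea can be applied via the index options - a hypothetical example with made-up table and index names, assuming your tier supports online index operations:

-- Build the index single-threaded and online so it competes less for DTU
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = ON, MAXDOP = 1);

-- The same hint on an ordinary query
SELECT CustomerId, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerId
OPTION (MAXDOP 1);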
I'm looking at performance improvements for an Azure Web Role and wondering if Diagnostics should be left on when publishing/deploying to the production site. This article says to disable it, but one of the comments says you lose critical data.
You should absolutely leave it enabled. How else will you do monitoring or auto-scaling of your application, once it is running in production?
Whether you use on-demand monitoring software like RedGate/Cerebrata's Diagnostic Manager or active monitoring/auto-scaling service like AzureWatch, you need to have Diagnostics enabled so that your instances are providing the external software with a way to monitor it and visualize performance data.
Just don't go crazy and enable every possible piece of diagnostic data at the most frequent rate possible; enable what you need on an as-needed basis.
Consider the reality that these "thousands of daily transactions" cost approximately 1 penny per 100k transactions. So, if you transfer data once per minute to table storage, that is 1,440 transactions per server per day, or 43,200 transactions per server per month - a whopping 0.43 cents per server per month. If the ability to quickly debug or be notified of a production issue is not worth 0.43 cents per server per month, then you should reconsider your cost models :)
HTH
I signed up for Windows Azure and was given a 1 GB database as part of my trial. So my max size is 1 GB, and once I reach that size inserts will start to fail until I update the max size to 10 GB. Now my question is: if I update the max size now to 10 GB and I only use 400 MB, will I still be charged at the 1 GB rate? I think the answer is yes, and if it is, then why don't I just set the max size to 50 GB so an insert never fails?
There are two editions: Web (1GB and 5GB) and Business (10GB through 50GB in 10GB increments). If you stay with a Web edition and go over the 1GB threshold on any given day, you'll be charged at the 5GB rate for that day. This is amortized daily over the month. So it's entirely possible you'll accrue costs just a little bit over the 1GB rate (if you upgrade to the 5GB Web Edition).
Moving to the Business edition, the lowest tier is 10GB, so that would be your baseline rate. Again, it's amortized daily.
If you want to set Web edition to 5GB (or Business edition to 50GB), you're going to avoid insert failures, as you pointed out. The tiers help when you're trying to manage cost.
See this MSDN blog post detailing the tiers, along with information on the ALTER DATABASE command.
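For reference, the corresponding ALTER DATABASE statements look roughly like this (the database name is a placeholder, and Business edition sizes go up in 10GB steps):

-- Stay on Web edition but raise the cap to 5GB
ALTER DATABASE MyDatabase MODIFY (EDITION='WEB', MAXSIZE=5GB)
-- Or move to Business edition at the 10GB baseline
ALTER DATABASE MyDatabase MODIFY (EDITION='BUSINESS', MAXSIZE=10GB)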