I signed up for Windows Azure and was given a 1 GB database as part of my trial. So my max size is 1 GB, and once I reach that size, inserts will start to fail until I update the max size to 10 GB. Now my question is: if I update the max size to 10 GB now and only use 400 MB, will I still be charged at the 1 GB rate? I think the answer is yes, and if it is, why don't I just set the max size to 50 GB so an insert never fails?
There are two editions: Web (1GB and 5GB) and Business (10GB through 50GB in 10GB increments). If you stay with a Web edition and go over the 1GB threshold on any given day, you'll be charged at the 5GB rate for that day. This is amortized daily over the month. So it's entirely possible you'll accrue costs just a little bit over the 1GB rate (if you upgrade to the 5GB Web Edition).
Moving to the Business edition, the lowest tier is 10GB, so that would be your baseline rate. Again, it's amortized daily.
If you set the Web edition to 5GB (or the Business edition to 50GB), you'll avoid insert failures, as you pointed out. The tiers help you manage cost.
See this MSDN blog post detailing the tiers, along with information on the ALTER DATABASE command.
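To make the daily amortization concrete, here is a small sketch in Python. The $9.99/month figure for the 1 GB tier matches the published pricing mentioned elsewhere in this thread; the 5 GB rate below is purely an assumed placeholder for illustration, not an official price.

```python
# Sketch of SQL Azure's daily-amortized Web edition billing.
# The 1 GB rate ($9.99/month) matches the published tier; the 5 GB
# rate is an assumed placeholder, not an official price.
MONTHLY_RATE = {1: 9.99, 5: 49.95}

def daily_charge(size_gb, days_in_month=30):
    """Charge for one day, based on which Web edition threshold
    the database crossed that day."""
    tier = 1 if size_gb <= 1 else 5
    return MONTHLY_RATE[tier] / days_in_month

# A month where the database exceeds 1 GB on only 3 of 30 days:
daily_sizes = [0.9] * 27 + [1.2] * 3
total = sum(daily_charge(s) for s in daily_sizes)
# The total ends up only a little above the flat 1 GB monthly rate.
```

This is why going over the 1 GB threshold on a handful of days accrues only a small surcharge rather than a full month at the higher rate.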
We currently have an elastic pool of databases in Azure that we would like to scale based on high eDTU usage. There are 30+ databases in the pool and they currently use 100GB of storage (although this is likely to increase).
We were planning on increasing the eDTUs allocated to the pool when we detect high eDTU usage. However, a few posts online have made me question how well this will work. The following quote is taken from the Azure docs - https://learn.microsoft.com/en-us/azure/sql-database/sql-database-resource-limits
The duration to rescale pool eDTUs can depend on the total amount of storage space used by all databases in the pool. In general, the rescaling latency averages 90 minutes or less per 100 GB.
If I am understanding this correctly, this means that if we want to increase the eDTUs, we will have to wait on average 90 minutes per 100 GB. If that is the case, scaling dynamically won't be suitable for us, as a 90-minute wait for an increase in performance is far too long.
Can anyone confirm whether what I have said above is correct? And are there any alternative recommendations for increasing eDTUs dynamically without having to wait for such a long period of time?
This would also mean that if we wanted to scale on a schedule - i.e., scale up eDTUs at 8am - we would actually have to initiate the scaling at 6:30am to allow for the estimated 90 minutes of scaling time, if my understanding is correct.
When you scale the pool eDTUs, Azure may have to migrate data (this is a shared database service), and that takes time when required. I have seen scaling be instant, and I have seen it take a long time. I think Microsoft's intent with Elastic Pools is to offer cost savings, not the ability to quickly change eDTUs.
The following is the answer provided by a Microsoft Azure SQL Database manager:
For rescaling a Basic/Standard pool within the same tier, some service optimizations have occurred so that the rescaling latency is now generally proportional to the number of databases in the pool and independent of their storage size. Typically, the latency is around 30 seconds per database for up to 8 databases in parallel, provided pool utilization isn't too high and there aren't long-running transactions. For example, a Standard pool with 500 databases, regardless of size, can often be rescaled in around 30+ minutes (i.e., ~500 databases * 30 seconds / 8 databases in parallel).
In the case of a Premium pool, the rescaling latency is still proportional to size-of-data.
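The 30-seconds-per-database rule of thumb quoted above can be turned into a quick back-of-the-envelope estimator. This is a sketch: the function name and the wave-based model are my own framing of the quoted numbers, not an Azure API.

```python
import math

def estimated_rescale_seconds(n_databases, secs_per_db=30, parallelism=8):
    """Rough Basic/Standard pool rescale time: databases are handled
    in waves of `parallelism`, each wave taking ~`secs_per_db` seconds."""
    return math.ceil(n_databases / parallelism) * secs_per_db

# The quoted example: 500 databases rescale in roughly half an hour.
minutes = estimated_rescale_seconds(500) / 60  # ~31.5 minutes
```

This matches the "~500 databases * 30 seconds / 8 databases in parallel" arithmetic in the quote, and it also shows why a schedule-based scale-up for a 30-database pool should need only a couple of minutes rather than 90.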
This Azure SQL Database manager promised to update Azure documentation as soon as they finish implementing more improvements.
Thank you for your patience waiting for this answer.
We have an SSAS tabular model that we want to add partitions to. The server is hosted on Azure with 100GB of memory (the highest tier). We manage to create 5 out of 20 partitions, but when we try to create the sixth partition we get the following error:
Failed to save modifications to the server. Error returned: 'Memory error: You have reached the maximum allowable memory allocation for your tier. Consider upgrading to a tier with more available memory.'
Technical Details:
RootActivityId: b2ae04c9-f0eb-4f62-93f9-adcda143a25d
Date (UTC): 9/13/2017 7:43:46 AM
The strange thing is that the memory usage is just around 17gb out of 100gb when we check the server monitoring logs.
I have seen a similar issue in Azure Analysis Services maximum allowable memory issue, but I don't think this is the same problem.
Another funny thing is that we have managed to process another model with the same type of data, but the tables used in that model are even bigger than the tables in this model. The server that is hosting that model has the same amount of memory as the server that is hosting the model that fails partitioning.
If it is of any help, we upgraded this server's tier, so perhaps there is a bug in Azure so it thinks we have the old pricing tier with the lower amount of memory?
The strange thing is that our on-premises data gateway computer was the cause of this problem. I don't know why, but the error went away once we restarted the gateway computer.
Sadly, last week Azure migrated one of our databases from the Web tier to the S1 tier. I manually increased the tier to S2 and worked hard to change some things in the system so the DTU wouldn't reach 100%.
Now I have a new situation: background jobs run against the database (deletes, etc.), and they consume 100% of the DTU, so my website starts getting errors.
My question is: is there a way to tell SQL, per query/operation, to consume a maximum of X DTU? For example, when I create an index, my DTU again goes to 100% and stays there for a long time - I guess it's a big index to build - so again I'm stuck, and I cancel the query because I don't want my end users to suffer lag.
None of these issues existed in the Web tier; everything worked smoothly.
That's a very nice suggestion; unfortunately, limiting a particular query or operation to a fixed amount of DTU is not possible. Future versions may bring Resource Governor-like tools.
The closest thing I can think of to limiting DTU for a query is the hint:
Option (MAXDOP 1)
A query may go parallel and consume more resources for each thread, so limiting MAXDOP will help limit DTU, with some caveats.
I am using an Azure Websites solution with 20 websites, hosted on a 4-core, 8 GB RAM standard instance. I would like to know how I can do scaling with Azure Websites, and when to do it.
Also, I am reading some values from the new Azure portal.
Can someone guide me on the values that I see here ?
Thank you
Averages
The Avg % tells you, on average, how much of that resource is being used. So, if you have 8 GB of RAM and you are typically using 66% of it, then you are averaging 5.28 GB of RAM used. The same goes for the CPU average listed below.
For the totals, I have no idea.
You're not using much of the CPU available to you here, but you are definitely taking advantage of the RAM. I'm not sure what kind of web application you are running, though, so it's difficult to determine what could be causing this.
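The averaging arithmetic above is simple enough to check directly (a small sketch; the helper name is mine):

```python
def average_usage(capacity, avg_percent):
    """Absolute average usage given a total capacity and an
    average usage percentage."""
    return capacity * avg_percent / 100

ram_used_gb = average_usage(8, 66)  # 66% of 8 GB -> 5.28 GB
```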
Scaling
In terms of scaling, I always suggest starting with a small machine, then gradually scaling up.
Based on your usage, I'd drop to a machine that has fewer CPU cores but more available RAM. From within your dashboard, you can see how to scale by clicking on your web app, then scrolling down. Click on the scale tab and it should appear as it does below:
You can now adjust what you want to scale by. The default setting is CPU Percentage, but that isn't particularly useful in this case. Instead, select Schedule and performance rules and a new panel will appear. On the right-hand side, select Metric name and look for Memory Percentage.
In your particular case, this is helpful as we saw that your RAM is consistently being used.
Look at Action: you will want Increase count by, with the number of VMs set to 1. What this does is, when your RAM reaches a certain usage percentage, Azure will auto-scale and spin up a new VM for you. After a cool-down period of 5 minutes (the default, listed at the bottom), the count will revert to 1 machine.
Conclusion
With these settings, each time your website's RAM usage reaches or exceeds the percentage you selected, Azure will increase the number of machines.
In your case, I suggest using fewer cores, but more RAM.
Make sure you save your settings, with the Save button above.
Scott Hanselman has a great blog post on how to make sense of all of this.
How do I resize my SQL Azure Web Edition 5 GB database to a 1 GB database? I no longer need the extra capacity and do not want to be billed at the higher rate. I don't see anything in the Management Portal and a quick web search also turned up nothing.
I answered a similar question here. It should be as simple as running an ALTER DATABASE command:
ALTER DATABASE MyDatabase MODIFY (EDITION='WEB', MAXSIZE=1GB)
Just keep this in mind: As long as your usage is <= 1GB, you'll be billed at the 1GB rate. Billing is monthly but amortized daily. So you actually don't need to reduce your max size unless you're relying on SQL Azure to prevent you from growing beyond 1GB.
EDIT: As of Feb. 2012, there's a new 100MB pricing tier (at $4.99, vs. $9.99 at 1GB). While you can't set MAXSIZE to 100MB, your monthly cost will drop if you stay under 100MB. See this blog post for more details.