I set up an Azure SQL database on the free tier for testing purposes. It has a 32 MB limit, but that should be fine, since my database is about 30 tables with only a few rows of data in each (really just for testing purposes).
After a while, I hit the 32 MB limit and was forced to delete (and drop) all the tables. Now the database is at 87.5 % of capacity WITH NO TABLES IN IT.
I followed this post about data size investigations and here are the results:
(more rows here, but each at 0.1 MB or less)
I tried to run DBCC SHRINKFILE (log, 0); but nothing changed.
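For anyone reproducing this, here is a minimal sketch of the check-and-shrink steps; the file_id used for the log (2) is an assumption and should be verified against sys.database_files first:

-- list each file's allocated size (size is reported in 8 KB pages)
SELECT file_id, name, type_desc, size * 8 / 1024 AS allocated_mb
FROM sys.database_files;

-- shrink the transaction log file (usually file_id 2) down to 1 MB
DBCC SHRINKFILE (2, 1);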
I also ran sp_spaceused, which returned:
The percentage shown in the Azure portal (87.5 %) changes from time to time for no apparent reason (sometimes it drops to 37.5 %).
So my question is: what am I doing wrong here? How should I proceed so that most of the database isn't filled when it holds no data?
The Azure free tier account provides 250 GB of free storage with an S0 instance for 12 months.
Please create another database (up to 10 databases are allowed in the free tier) using an S0 instance. Refer to the steps given by @FabioSipoliMuller in this thread to deploy it.
Note: You need to make some configuration changes when deploying the database on the free service.
We currently have an elastic pool of databases in Azure that we would like to scale based on high eDTU usage. There are 30+ databases in the pool and they currently use 100GB of storage (although this is likely to increase).
We were planning on increasing the eDTUs allocated to the pool when we detect high eDTU usage. However, a few posts online have made me question how well this will work. The following quote is taken from the Azure docs: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-resource-limits
The duration to rescale pool eDTUs can depend on the total amount of storage space used by all databases in the pool. In general, the rescaling latency averages 90 minutes or less per 100 GB.
If I am understanding this correctly, it means that if we want to increase the eDTUs, we will have to wait on average 90 minutes per 100 GB. If that is the case, scaling dynamically won't be suitable for us, as waiting 90 minutes for an increase in performance is far too long.
Can anyone confirm whether what I have said above is correct? And are there any alternative recommendations for increasing eDTUs dynamically without having to wait for such a long period of time?
This would also mean that if we wanted to scale on a schedule, e.g. scale up eDTUs at 8am, we would actually have to initiate the scaling at 6:30am to allow for the estimated 90 minutes of scaling time, if my understanding is correct.
When you scale the pool eDTUs, Azure may have to migrate data (this is a shared database service), and if that is required it will take time. I have seen scaling be instant, and I have seen it take a long time. I think Microsoft's intent is to offer cost savings via Elastic Pools, not the ability to quickly change eDTUs.
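If you want to see how long a scale operation actually took, one rough way (a sketch, not an official recipe) is to query sys.dm_operation_status against the logical server's master database:

-- run against the master database of the logical server
SELECT resource_type_desc,
       major_resource_id,   -- the database name
       operation,
       state_desc,
       percent_complete,
       start_time,
       last_modify_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;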
The following is the answer provided by a Microsoft Azure SQL Database manager:
For rescaling a Basic/Standard pool within the same tier, some service
optimizations have occurred so that the rescaling latency is now
generally proportional to the number of databases in the pool and
independent of their storage size. Typically, the latency is around
30 seconds per database for up to 8 databases in parallel provided
pool utilization isn’t too high and there aren’t long running
transactions. For example, a Standard pool with 500 databases
regardless of size can often be rescaled in around 30+ minutes (i.e.,
~ 500 databases * 30 seconds / 8 databases in parallel).
In the case of a Premium pool, the rescaling latency is still
proportional to size-of-data.
This Azure SQL Database manager promised to update Azure documentation as soon as they finish implementing more improvements.
Thank you for your patience waiting for this answer.
We run a web service that gets 6k+ requests per minute during peak hours and about 3k requests per minute during off hours. It serves lots of data feeds compiled from 3rd-party web services and custom-generated images. Our service and code are mature; we've been running this for years. A lot of work by good developers has gone into our service's code base.
We're migrating to Azure, and we're seeing some serious problems. For one, we are seeing our Premium P1 SQL Azure database routinely become unavailable for 1-2 full minutes. I'm sorry, but this seems absurd. How are we supposed to run a web service with requests waiting 2 minutes for access to our database? This is occurring several times a day. It occurs less often since switching from the Standard level to the Premium level, but we're nowhere near our DB's DTU capacity and we're getting throttled hard far too often.
Our SQL Azure DB is Premium P1, and our load according to the new Azure portal is usually under 20%, with a couple of spikes each hour reaching 50-75%. Of course, we can't even trust Azure's portal metrics. The old portal gives us no data for our SQL, and the new portal is very obviously wrong at times (our DB was not down for half an hour, as the graph suggests, but it was down for more than 2 full minutes):
Azure reports the size of our DB at a little over 12GB (in our own SQL Server installation, the DB is under 1GB - that's another of many questions: why is it reported as 12GB on Azure?). We've done plenty of tuning over the years and have good indices.
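One thing worth checking (a sketch, assuming the 12GB figure reflects reserved file space rather than used data pages) is how much of the reported size is actually occupied by data:

-- space reserved by the database files (8 KB pages)
SELECT SUM(size) * 8 / 1024 AS reserved_mb FROM sys.database_files;

-- space actually used by tables and indexes
SELECT SUM(used_page_count) * 8 / 1024 AS used_mb FROM sys.dm_db_partition_stats;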
Our service runs on two D4 cloud service instances. Our DB libraries all implement retry logic, waiting 2, 4, 8, 16, 32, and then 48 seconds before failing completely. Controllers are all async, and most of our external service calls are async. DB access is still largely synchronous, but our heaviest queries are async. We heavily utilize in-memory and Redis caching. The most frequent use of our DB is 1-3 records inserted per request (those tables are queried only once every 10 minutes to check error levels).
Aside from batching up those request-logging inserts, there's really not much more give in our application's DB access code. We're nowhere near our DTU allocation on this database, and the server our DB is on still has something like 2000 DTUs available to be allocated. If we have to live with 1+ minute periods of unavailability every day, we're going to abandon Azure.
Is this the best we get?
Querying stats in the database seems to show we are nowhere near our resource limits. Also, on the Premium tier we should be guaranteed our DTU level second by second. But, again, we go more than a full minute without being able to get a database connection. What is going on?
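For context, one way to compare consumption against the tier's limits is sys.dm_db_resource_stats, which reports 15-second averages as a percentage of the limits for roughly the last hour; a minimal example:

SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;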
I can also say that after we experience one of these longer delays, our stats seem to reset. The above image was taken a couple of minutes before a 1-minute-plus delay, and this one a couple of minutes after:
We have been in contact with Azure's technical staff and they confirm this is a bug in their platform that is causing our database to go through failover multiple times a day. They stated they will be deploying fixes starting this week and continuing over the next month.
Frankly, we're having trouble understanding how anyone can reliably run a web service on Azure. Our pool of Websites randomly goes down for a few minutes a few times a month, taking our public sites down. If our cloud service returns too many 500 responses, something in front of it cuts off all traffic and returns 502s (totally undocumented behavior as far as we can tell). SQL Azure has very limited performance and obviously isn't ready for prime time.
I'm trying to figure out the best performing approach when writing thousands of small Blobs to Azure Storage.
The application scenario is the following:
thousands of files are being created or overwritten by a constantly running Windows service installed on a Windows Azure VM
writing to the Temporary Storage available to the VM, the service can reach more than 9,000 file creations per second
file sizes range between 1 KB and 60 KB
on other VMs running the same software, other files are being created at the same rate and with the same criteria
given the need to build and keep updated a central repository, another service running on each VM copies the newly created files from the Temporary Storage to Azure Blobs
other servers should then read the Azure Blobs in their most recent version
Please note that, due to many constraints I'm not listing for brevity, it's not currently possible to modify the main service to create Blobs directly instead of files on the Temporary file system... and from what I'm currently seeing, that would mean a slower creation rate, which is not acceptable per the original requirements.
This copy operation, which I'm testing in a tight loop on 10,000 files, seems to be limited to 200 blob creations per second. I was able to reach this result after tweaking the sample code named "Windows Azure ImportExportBlob" found here: http://code.msdn.microsoft.com/windowsazure/Windows-Azure-ImportExportB-9d30ddd5 with the async suggestions found in this answer: Using Parallel.Foreach in a small azure instance
I obtained this apparent maximum of 200 blob creations per second on an extra-large VM with 8 cores, setting the "maxConcurrentThingsToProcess" semaphore accordingly. The network utilization during the test is at most 1% of the available 10Gb shown in Task Manager. This means roughly 100 Mb of the 800 Mb that should be available on that VM size.
I see that the total size copied during the elapsed time gives me around 10 MB/sec.
Is there some limitation on the Azure Storage traffic you can generate, or should I use a different approach when writing so many small files?
@breischl Thank you for the scalability targets. After reading that post, I started searching for more target figures possibly prepared by Microsoft and found 4 posts (too many links for my "reputation" to post here; the other 3 are parts 2, 3 and 4 of the same series):
http://blogs.microsoft.co.il/blogs/applisec/archive/2012/01/04/windows-azure-benchmarks-part-1-blobs-read-throughput.aspx
The first post contains an important hint: "You may have to increase the ServicePointManager.DefaultConnectionLimit for multiple threads to establish more than 2 concurrent connections with the storage."
I set this to 300, reran the test, and saw a significant increase in MB/s. As I previously wrote, I thought I was hitting a limit in the underlying blob service when "too many" threads were writing blobs, and this confirmed my suspicions. So I removed all the changes made to the code to work with a semaphore and replaced them again with a Parallel.For to start as many blob upload operations as possible. The result was awesome: 61 MB/s writing blobs and 65 MB/s reading.
The scalability target is 60 MB/s and I'm finally happy with the result.
Thank you all again for your answers.
How do I resize my SQL Azure Web Edition 5 GB database to a 1 GB database? I no longer need the extra capacity and do not want to be billed at the higher rate. I don't see anything in the Management Portal and a quick web search also turned up nothing.
I answered a similar question here. It should be as simple as running an ALTER DATABASE command:
ALTER DATABASE MyDatabase MODIFY (EDITION='WEB', MAXSIZE=1GB)
Just keep this in mind: As long as your usage is <= 1GB, you'll be billed at the 1GB rate. Billing is monthly but amortized daily. So you actually don't need to reduce your max size unless you're relying on SQL Azure to prevent you from growing beyond 1GB.
EDIT: As of Feb. 2012, there's a new 100MB pricing tier (at $4.99, vs. $9.99 at 1GB). While you can't set MAXSIZE to 100MB, your monthly cost will drop if you stay under 100MB. See this blog post for more details.
I signed up for Windows Azure and was given a 1 GB database as part of my trial. So my max size is 1 GB, and once I reach that size, inserts will start to fail until I update the max size to 10 GB. Now my question is: if I update the max size to 10 GB now and only use 400 MB, will I still be charged at the 1 GB rate? I think the answer is yes, and if so, why don't I just set the max size to 50 GB so an insert never fails?
There are two editions: Web (1GB and 5GB) and Business (10GB through 50GB in 10GB increments). If you stay with a Web edition and go over the 1GB threshold on any given day, you'll be charged at the 5GB rate for that day. This is amortized daily over the month. So it's entirely possible you'll accrue costs just a little bit over the 1GB rate (if you upgrade to the 5GB Web Edition).
Moving to the Business edition, the lowest tier is 10GB, so that would be your baseline rate. Again, it's amortized daily.
If you set the Web edition to 5GB (or the Business edition to 50GB), you'll avoid insert failures, as you pointed out. The tiers are there to help you manage cost.
See this MSDN blog post detailing the tiers, along with information on the ALTER DATABASE command.
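For reference, a minimal sketch of the corresponding statements (using the same placeholder database name as in the earlier answer; the Business form assumes the 10GB baseline):

ALTER DATABASE MyDatabase MODIFY (EDITION='WEB', MAXSIZE=5GB)

ALTER DATABASE MyDatabase MODIFY (EDITION='BUSINESS', MAXSIZE=10GB)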