I'm planning to build a Node.js app with Express and an SQL database and deploy it all to Heroku. I am going to get the Postgres Hobby Basic plan.
On the Heroku website it says that my database is limited to 10,000,000 rows, but I don't know if there are any memory limits, for example whether I would be unable to store more than 0.5 GB of data in my database. I would be grateful if someone could tell me: is my database limited only by the 10,000,000-row limit, or is there a memory limit as well?
Storage (disk) and memory (RAM) are different things. Dynos have a memory limit; for example,
free, hobby and standard-1x dynos have 512 MB of RAM.
Heroku Postgres plans have different types of limits by tier. Hobby tier limits are based on row count. Standard and above tiers have no row limits, but they do have storage limits. For example, Standard-0 plans have a storage limit of 64 GB.
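If you want to check how close you are to those limits, a couple of queries along these lines (a sketch; pg_database_size and pg_stat_user_tables are standard PostgreSQL, and n_live_tup is only an estimate of the row count) can be run from heroku pg:psql:

-- Approximate on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- Estimated row count per table, largest first
SELECT relname AS table_name, n_live_tup AS approx_rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

-- Estimated total rows across all user tables
SELECT SUM(n_live_tup) AS approx_total_rows
FROM pg_stat_user_tables;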
Related
I've been looking into Azure Container Apps and their limits don't make sense to me.
They say "a total of 2 CPUs for all container instances of this app". What does that mean? It seems to be some kind of total limit per revision, rather than for the apps in a Container Apps environment.
Why? Either this limit makes no sense or the wording is odd.
As per the current quota limitations, the maximum number of cores allocated to each container app replica is 2 vCPUs, and you can have around 20 container apps per environment.
If you want to increase the quota for your container apps, you need to raise a support ticket.
It's worth mentioning that there are certain limitations on requests for quota increases. For instance, you can't increase the number of revisions you can run per container app or the vCPU cores per replica, even if you raise a support ticket.
I set up an Azure SQL database on the free tier for testing purposes. It has a 32 MB limit, but that should be fine, since my DB is about 30 tables with a few rows of data in each (really just for testing purposes).
After a while, I reached the 32 MB limit. I was forced to delete (and drop) all the tables. Now the DB is at 87.5 % WITH NO TABLES IN IT.
I followed this post about data size investigations and here are the results:
(more rows followed here, but each at 0.1 MB or less)
I tried running DBCC SHRINKFILE (log, 0); but nothing changed.
I also ran sp_spaceused,
which returned:
The percentage shown in the Azure portal (87.5 %) changes from time to time for no apparent reason (sometimes it drops to 37.5 %).
So my question is: what am I doing wrong here? How should I proceed so that most of the DB isn't consumed while it holds no data?
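A query along these lines (a sketch; sys.database_files and FILEPROPERTY('SpaceUsed') are standard SQL Server/Azure SQL features, and sizes are reported in 8 KB pages) shows how much of each file is allocated versus actually used, which helps distinguish data-file growth from log growth:

-- Allocated vs. used space per database file (size values are 8 KB pages)
SELECT
    name AS file_name,
    type_desc AS file_type, -- ROWS (data) or LOG
    size * 8.0 / 1024 AS allocated_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS used_mb
FROM sys.database_files;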
The Azure free account provides 250 GB of free storage with an S0 instance for 12 months.
Please create another database (up to 10 databases are allowed in the free tier) using an S0 instance. Refer to the steps given by #FabioSipoliMuller in this thread to deploy it.
Note: You need to make some configuration changes while deploying the database under the free offer.
We currently have an elastic pool of databases in Azure that we would like to scale based on high eDTU usage. There are 30+ databases in the pool and they currently use 100 GB of storage (although this is likely to increase).
We were planning on increasing the eDTUs allocated to the pool when we detect high eDTU usage. However, a few posts online have made me question how well this will work. The following quote is taken from the Azure docs - https://learn.microsoft.com/en-us/azure/sql-database/sql-database-resource-limits
The duration to rescale pool eDTUs can depend on the total amount of storage space used by all databases in the pool. In general, the rescaling latency averages 90 minutes or less per 100 GB.
If I am understanding this correctly, this means that if we want to increase the eDTUs we will have to wait, on average, 90 minutes per 100 GB. If this is the case, scaling dynamically won't be suitable for us, as 90 minutes to wait for an increase in performance is far too long.
Can anyone confirm whether what I have said above is correct? And are there any alternative recommendations for increasing eDTUs dynamically without having to wait for such a long period of time?
This would also mean that if we wanted to scale on a schedule, e.g. scale up eDTUs at 8am, we would actually have to initiate the scaling at 6:30am to allow for the estimated 90 minutes of scaling time - if my understanding of this is correct.
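For the detection side, one option is to poll the pool's resource statistics from the logical server's master database. A sketch along these lines (sys.elastic_pool_resource_stats and its avg_* columns are documented for Azure SQL Database; 'MyPool' is a placeholder name) approximates eDTU utilization as the highest of the CPU, data IO and log write percentages over the last 15 minutes:

-- Run against the master database of the logical server.
-- eDTU utilization is roughly the highest of the three resource percentages.
SELECT
    end_time,
    (SELECT MAX(v)
     FROM (VALUES (avg_cpu_percent),
                  (avg_data_io_percent),
                  (avg_log_write_percent)) AS t(v)) AS approx_edtu_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'MyPool'
  AND end_time > DATEADD(MINUTE, -15, GETUTCDATE())
ORDER BY end_time DESC;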
When you scale the pool eDTUs, Azure may have to migrate data (this is a shared database service). That will take time, if it's required. I have seen scaling be instant and I have seen it take a long time. I think Microsoft's intent is to offer cost savings via elastic pools rather than the ability to quickly change eDTUs.
The following is the answer provided by a Microsoft Azure SQL Database manager:
For rescaling a Basic/Standard pool within the same tier, some service optimizations have occurred so that the rescaling latency is now generally proportional to the number of databases in the pool and independent of their storage size. Typically, the latency is around 30 seconds per database for up to 8 databases in parallel, provided pool utilization isn't too high and there aren't long-running transactions. For example, a Standard pool with 500 databases, regardless of size, can often be rescaled in around 30+ minutes (i.e., ~500 databases * 30 seconds / 8 databases in parallel).
In the case of a Premium pool, the rescaling latency is still proportional to size-of-data.
This Azure SQL Database manager promised to update Azure documentation as soon as they finish implementing more improvements.
Thank you for your patience waiting for this answer.
How do I resize my SQL Azure Web Edition 5 GB database to a 1 GB database? I no longer need the extra capacity and do not want to be billed at the higher rate. I don't see anything in the Management Portal and a quick web search also turned up nothing.
I answered a similar question here. It should be as simple as running an ALTER DATABASE command:
ALTER DATABASE MyDatabase MODIFY (EDITION='WEB', MAXSIZE=1GB)
Just keep this in mind: As long as your usage is <= 1GB, you'll be billed at the 1GB rate. Billing is monthly but amortized daily. So you actually don't need to reduce your max size unless you're relying on SQL Azure to prevent you from growing beyond 1GB.
EDIT: As of Feb. 2012, there's a new 100MB pricing tier (at $4.99, vs. $9.99 at 1GB). While you can't set MAXSIZE to 100MB, your monthly cost will drop if you stay under 100MB. See this blog post for more details.
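If you want to confirm that your usage really is under 1GB before (or instead of) lowering MAXSIZE, a query along these lines (a sketch; sys.dm_db_partition_stats is a standard DMV and reserved pages are 8 KB each) returns the reserved size of the current database:

-- Total reserved space in the current database, in MB (8 KB per page)
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats;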
I signed up for Windows Azure and was given a 1GB database as part of my trial. So my max size is 1GB, and once I reach that size inserts will start to fail until I update the max size to 10GB. Now my question is: if I update the max size now to 10GB and I only use 400MB, will I still be charged at the 1GB rate? I think the answer is yes, and if it is, then why don't I just set the max size to 50GB so an insert never fails?
There are two editions: Web (1GB and 5GB) and Business (10GB through 50GB in 10GB increments). If you stay with a Web edition and go over the 1GB threshold on any given day, you'll be charged at the 5GB rate for that day. This is amortized daily over the month. So it's entirely possible you'll accrue costs just a little bit over the 1GB rate (if you upgrade to the 5GB Web Edition).
Moving to the Business edition, the lowest tier is 10GB, so that would be your baseline rate. Again, it's amortized daily.
If you set the Web edition to 5GB (or the Business edition to 50GB), you're going to avoid insert failures, as you pointed out. The tiers are there to help you manage cost.
See this MSDN blog post detailing the tiers, along with information on the ALTER DATABASE command.
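For example, changing the cap uses that ALTER DATABASE command (a sketch; MyDatabase is a placeholder, and the edition/size values must be one of the tiers listed above):

-- Raise the Web edition cap to 5GB
ALTER DATABASE MyDatabase MODIFY (EDITION='web', MAXSIZE=5GB)

-- Or move to the Business edition with a 10GB cap
ALTER DATABASE MyDatabase MODIFY (EDITION='business', MAXSIZE=10GB)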