What is an Azure AppFabric Service Bus connection pack?

When creating a new Service Namespace for Access Control and Service Bus, the portal offers a CONNECTION PACK SIZE option of 0, 5, 25, 100 or 500.
What does this indicate, i.e. the number of active connections? If so, how does a connection pack size of 0 work?

It is complicated, but this is how I understand it:
Azure AppFabric Service Bus connections are billed per "average connection month". The way it's calculated: each day, the maximum number of simultaneous connections at any given time is recorded, and that daily maximum is then averaged over the month.
If you buy a pack, you get the connections at a rate of around $2 per connection per month; if you don't buy a pack, connections are charged pro rata at around $4 per connection per month.
For example: if you have no packs and you open 40 connections for one hour on one day (say, month-end reporting), that hour counts as a daily maximum of 40 even though it only lasted an hour. Averaged over the month, you pay $4 x 40 / 30.
Now consider packs: even though you hit a peak of 40 connections, your average is only around 1.3 connections per month, roughly $6 at the pay-as-you-go rate, so even purchasing a 5-connection pack at $10 is unnecessary.
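A minimal sketch of that calculation in Python (the $2/$4 rates, the pack price and the 30-day month are taken from the example above, not from official pricing):
# Estimate Service Bus connection charges under the "average connection month" model.
# The rates below are illustrative assumptions from the example above.
PAYG_RATE = 4.0   # $ per connection-month without a pack (assumed)
PACK_RATE = 2.0   # $ per connection-month inside a pack (assumed)

def monthly_charge(daily_max_connections, pack_size=0):
    """Average the daily peak connection counts over the month, then price them."""
    avg_connections = sum(daily_max_connections) / len(daily_max_connections)
    if pack_size == 0:
        return avg_connections * PAYG_RATE
    # With a pack you pre-pay for pack_size connections at the pack rate;
    # overage beyond the pack is not modelled here.
    return pack_size * PACK_RATE

# One day with a 40-connection peak, 29 quiet days:
peaks = [40] + [0] * 29
print(monthly_charge(peaks))               # ~5.33 -> pay-as-you-go is cheaper
print(monthly_charge(peaks, pack_size=5))  # 10.0  -> the 5-connection pack costs more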


How to change retention duration for Azure Application Insights?

At the moment most of the data is retained for 90 days by default. I was wondering if there is a way to change this setting to 30-40 days. I know that I can export the data to keep it longer, but what I'm looking for is mainly to keep the data for a shorter duration, because of upcoming regulations.
Update
The default retention for Application Insights resources is 90 days. Different retention periods can be selected for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days.
Note: If you need to keep data longer than 730 days, you can use Continuous Export to copy it to a storage account during data ingestion.
To change the retention, from your Application Insights resource, go to the Usage and Estimated Costs page and select the Data Retention option:
Reference
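If you need to script the change rather than use the portal, here is a rough sketch against the ARM REST API in Python. The RetentionInDays property name and the api-version are assumptions on my part; verify them against the current ARM schema for microsoft.insights/components before relying on this.
# Sketch: set Application Insights data retention via the ARM REST API.
# Assumption: the component resource accepts a "RetentionInDays" property at
# this api-version -- check the current ARM schema before using.
import requests

subscription = "<subscription-id>"      # placeholders, not real values
resource_group = "<resource-group>"
component = "<app-insights-name>"
token = "<aad-bearer-token>"            # e.g. obtained via azure-identity

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
    f"/providers/microsoft.insights/components/{component}"
    "?api-version=2015-05-01"
)
headers = {"Authorization": f"Bearer {token}"}

# Read the existing component, change only the retention, then write it back.
body = requests.get(url, headers=headers).json()
body["properties"]["RetentionInDays"] = 30
requests.put(url, headers=headers, json=body).raise_for_status()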
Sometimes the only answer is a no. In this case, you can't. From the docs:
Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 90 days. If you need to keep data longer than that, you can use continuous export to copy it to a storage account.
Aggregated data (that is, counts, averages and other statistical data that you see in Metric Explorer) are retained at a grain of 1 minute for 90 days.
I remember that a long time ago the pricing tier dictated the maximum retention period but it is now fixed to 90 days for all plans.
You can give your feedback / ask for this feature here.
Update: this is now available as an option in the Azure portal. If you don't see it, get in touch with Azure support to have it activated.

Azure SQL Database replication - 9 1/2 hour delay + can't remove the replica

Note
This is getting quite long so I will try and re-edit parts through the day.
These databases are no longer active, which means I can play with them to work out what is going wrong.
The only thing left to answer: given two databases running on Azure SQL Database at S3 (100 DTU), should a secondary ever get significantly behind the primary, even while the DTU is hammered at 100% for over half the day (the DTU being hammered mostly by IO writes)?
The Start: a few problems.
DTU limits were hit on Monday, Tuesday and, to some extent, Wednesday for a significant amount of time (roughly 3PM UTC - 6AM UTC).
Problem 1 (data lag on the secondary): this appeared to cause the secondary to lag by about 9 1/2 hours. The servers were effectively being spammed with updates, causing a lot of IO writes (6-8 million records on one table over the 24-hour period, for example). This problem is what prompted the post:
Shouldn't these be much more in sync?
The data went out of sync on Monday morning and stayed out of sync until Friday. On Thursday some new databases were started up to replace these Standard SQL databases, so the old ones were left to rot (well, left for me to experiment with, at least).
The application causing the redundant queries couldn't be fixed for a few days (I doubt they will ever fix it now), so that left changing the service tier. That was attempted on the current instance, but a database must disconnect from all Standard-tier replicas before it can be raised to the Premium tier. This led to the second problem (see below): the replica taking its time to be removed. That began on Wednesday morning and did not complete until Friday.
Problem 2 (can't remove the replica):
(Solved itself after two days)
The process of disconnecting the secondary database began around Wednesday 8:00 UTC (when the primary was at about 80 GB), with the secondary about 11 GB behind in size at that point.
Setup
The databases (primary and secondary) are S3 and geo-replicated (North + West Europe).
There is an audit log table (which I read from the secondary, normally with a SQL query), but it is currently 9 1/2 hours behind the last entry in the primary database. Running the query on the secondary again a few seconds later shows it slowly catching up, but the progress appears to be proportional to the refresh interval rather than genuine catch-up.
Both primary and secondary (read-only) databases are S3 (about to be bumped to P2).
The Azure documentation states:
Active Geo-Replication (readable secondaries) is now available for all databases in all service tiers. In April 2017, the non-readable secondary type will be retired and existing non-readable databases will automatically be upgraded to readable secondaries.
How has the secondary got so far behind? Seconds to minutes would be acceptable; hours, not so much. The documentation above describes it as "slightly" behind:
While at any given point, the secondary database might be slightly behind the primary database, the secondary data is guaranteed to always be transactionally consistent with changes committed to the primary database.
Given that the secondary is about to be destroyed and replaced at a higher level (replicas must be removed when upgrading from Standard -> Premium), I'm curious to know whether it will happen again, and what the definition of "slightly" might be in this instance.
Notes: the primary did reach maximum DTU for a few hours, but that didn't harm its performance significantly, which is when the 9-hour difference was noticed.
Stats
Update for TheGameiswar:
I can't query it right now, as it started being removed as a replica (so the primary can be moved up to the P2 level); that began hours ago at ~8:30 UTC and five hours later it is still going. I think it's quite broken now.
Query - nothing special:
SELECT TOP 1000 [ID]
,[DateCreated]
,[SrcID]
,[HardwareID]
,[IP_Port]
,[Action]
,[ResponseTime]
,[ErrorCode]
,[ExtraInfo]
FROM [dbo].[Audit]
ORDER BY [DateCreated] DESC
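For reference, a minimal sketch (assuming pyodbc and placeholder connection strings) of one way to quantify the lag by comparing the newest audit row on each side:
# Sketch: quantify geo-replication lag by comparing the newest Audit row on the
# primary and on the read-only secondary. Connection strings are placeholders.
import pyodbc

PRIMARY = "Driver={ODBC Driver 17 for SQL Server};Server=<primary>.database.windows.net;Database=Cloud4;Uid=<user>;Pwd=<password>"
SECONDARY = "Driver={ODBC Driver 17 for SQL Server};Server=<secondary>.database.windows.net;Database=Cloud4;Uid=<user>;Pwd=<password>"

def newest_audit_timestamp(conn_str):
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(
            "SELECT MAX([DateCreated]) FROM [dbo].[Audit]"
        ).fetchone()[0]

lag = newest_audit_timestamp(PRIMARY) - newest_audit_timestamp(SECONDARY)
print(f"Secondary is behind by: {lag}")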
I can't compare the tables anymore as it's quite stuck and refusing connections.
The 586 hours (10-14 GB) are inserts into the primary database's audit table. It was similar yesterday, when the 9 1/2 hour difference in data was noticed.
When the attempt to remove the replica was made (another person started the process), there was about a 10 GB difference in size.
I can't compare the data, but I can show the DB size at the equivalent time:
Primary DB Size (last 24 hours):
Secondary DB Size (last 24 hours):
Primary database size - week view
Secondary database size - week view
As mentioned, it is being removed as a replica, but it is still playing catch-up with the DB size, as the charts above show.
Stop replication errored for serverName: ---------, databaseName: Cloud4
ErrorCode: 400
ErrorMessage: A terminate operation is already in progress for database 'Cloud4'.
Update 2 - Replication - dm_continuous_copy_status
Replication is still removing ... moving on...
select * from sys.dm_continuous_copy_status
sys.dm_exec_requests
Querying from Thursday
Appears to be quite empty. The only record being
Replica removed itself at last.
The replica has finally removed itself after two days, at the 80 GB mark I predicted. It waited to replay the transactions (up to the point at which it was removed as a replica) before the replica was actually dropped.
A week after, on the P2 databases
DTU is holding between 20-40% at busy periods, and the database is currently taking ~12 million data entries every 24 hours (a similar amount of reads, but the writes are much more expensive because of the indexes and the table), 70-100% more inserts than a week ago. This time the replica is not struggling to keep up, which is good, but that is likely because it is not reaching 100% DTU.
Conclusion
Replicas are useful, but not in this case. This one caused degraded performance for several days that could have been averted by a simple increase to a higher performance tier until the cause of the problem was fixed. If the replica looks like it is dragging behind and you are on the border of Basic -> Standard or Standard -> Premium, it is safer to remove the replica as soon as possible and move up a tier.
Now we are on P2. The database is growing at 20 GB a day... and they say they have fixed the problem that was sending 15 thousand redundant updates per minute. Thanks to Query Performance Insight for highlighting that, as querying the table is extremely painful on the DTU (even querying the last minute of data in that table is expensive, with ~15 thousand new records arriving every minute).
62617: insert ...
62618: select ...
62619: select ...
A positive from the above is that the combined time for the insert statements has gone from 586 hours on S3 (7.5 million rows per day) to 3 hours on P2 (12.4 million rows per day), an extremely significant decrease in processing time. It did start with an empty table on Thursday, but that table has surpassed the previous size within a week, whereas the previous one took a few months to get there.
It's doing well on the new tier. It should be at ~5% if the applications were using the database responsibly, and the secondary is up to date.
Spoke too soon. Now on P2
Someone thought it was a good idea to run a repeating SQL query that deletes a thousand rows at a time, against a table gaining 12 million new rows a day.
Between 10 AM and 12 AM it managed to remove about 5.2 million rows. Now the database is showing signs of being in the same state as last week. I'm curious now whether that is what happened before.

DynamoDB Downscales within the last 24 hours?

Is it possible to get a count of how often DynamoDB throughput (write units / read units) was scaled down within the last 24 hours?
My idea is to downscale as soon as a huge drop (e.g. 50%) in the required provisioned write units occurs. I have really peaky traffic, so it is interesting for me to downscale after every peak. However, I also have an analytics job running at night which provisions a huge number of read units, making it necessary to be able to downscale after it as well. I therefore need to limit downscales to 3 times within 24 hours.
The number of decreases is returned in a DescribeTable result as part of the ProvisionedThroughputDescription (the NumberOfDecreasesToday field).
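For example, with boto3 (the table name and the limit of 3 are placeholders taken from the question), you could check the counter before deciding whether to scale down. Note that NumberOfDecreasesToday counts decreases within the current UTC calendar day rather than a rolling 24-hour window:
# Check how many times this table's provisioned throughput has already been
# decreased today before triggering another downscale.
import boto3

dynamodb = boto3.client("dynamodb")

def decreases_today(table_name):
    table = dynamodb.describe_table(TableName=table_name)["Table"]
    # Resets per UTC calendar day, not per rolling 24-hour window.
    return table["ProvisionedThroughput"]["NumberOfDecreasesToday"]

MAX_DOWNSCALES_PER_DAY = 3  # leave headroom for the nightly analytics job

if decreases_today("my-peaky-table") < MAX_DOWNSCALES_PER_DAY:
    # Safe to call update_table() here with lower read/write capacity.
    pass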

Understanding Azure SQL Performance

The facts:
1 Azure SQL S0 instance
a few tables, one of them containing ~8.6 million rows and 1 PK
Running a Count-query on this table takes nearly 30 minutes (!) to complete.
Upscaling the instance from S0 to S1 reduces the query time to 13 minutes:
Looking at the Azure Portal (new version), the resource usage monitor shows the following:
Questions:
Does anyone else consider even 13 minutes ridiculous for a simple COUNT()?
Does the second screenshot mean that during the 100% period my instance isn't responding to other requests?
Why are my metrics limited to 100% on both S0 and S1? (See the section "Which Service Tier is Right for My Database?", which states: "These values can be above 100% (a big improvement over the values in the preview that were limited to a maximum of 100).") I'd expect the S0 to be at 150% or so if the quoted statement is true.
I'm interested in other people's experiences with databases of more than 1.000 records or so. I don't see how an S*-scaled Azure SQL database at 22-55 € per month could help me with upscaling strategies at the moment.
Azure SQL Database editions provide increasing levels of DTUs from Basic -> Standard -> Premium (CPU, IO, memory and other resources; see https://msdn.microsoft.com/en-us/library/azure/dn741336.aspx). Once your query reaches its DTU limit (100%) in any of these resource dimensions, it continues to receive resources at that level (but no more), and that may increase the latency of the request. It looks like in your scenario the query is hitting its DTU limit (10 DTUs for S0 and 20 for S1). You can see the individual resource usage percentages (CPU, Data IO or Log IO) by adding those metrics to the same graph, or by querying the DMV sys.dm_db_resource_stats.
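As a rough sketch (the connection string is a placeholder; sys.dm_db_resource_stats reports roughly the last hour in 15-second intervals), that DMV check might look like this in Python:
# Sketch: see which resource dimension (CPU, data IO, log IO) is hitting the
# DTU cap. The connection string is a placeholder.
import pyodbc

CONN = "Driver={ODBC Driver 17 for SQL Server};Server=<server>.database.windows.net;Database=<db>;Uid=<user>;Pwd=<password>"

QUERY = """
SELECT TOP 20 end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
"""

with pyodbc.connect(CONN) as conn:
    for row in conn.cursor().execute(QUERY):
        print(row.end_time, row.avg_cpu_percent,
              row.avg_data_io_percent, row.avg_log_write_percent)
Whichever column sits near 100% is the dimension throttling the query.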
Here is a blog that provides more information on appropriately sizing your database performance levels. http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/
To your specific questions:
1) As you have 8.6 million rows, the database needs to scan the index entries to return the count, so it may be hitting the IO limit for the edition here.
2) If you have multiple concurrent queries running against your DB, they will be scheduled appropriately so that no single request starves the others, but latencies may increase further for all queries, since you will be hitting the available resource limits.
3) For the older Web/Business editions, you may see metric values go beyond 100% (they are normalized to the limits of an S2 level), as those editions don't have specific limits and run in a resource-shared environment alongside other customers' loads. For the new editions, metrics will never exceed 100%, because the system guarantees you resources up to 100% of that edition's limits, but no more. This provides a predictable, guaranteed amount of resources for your DB, unlike Web/Business, where you may get very little or a lot more at different times depending on the other customer workloads running on the same machine.
Hope this helps.
-- Srini

Comparing the new SQL Azure tiers to old ones [closed]

Now that Microsoft made the new SQL Azure service tiers available (Basic, Standard, Premium) we are trying to figure out how they map to the existing ones (Web and Business).
Essentially, there are six performance levels in the new tier breakdown: Basic, S1, S2, P1, P2 and P3 (details here: http://msdn.microsoft.com/library/dn741336.aspx)
Does anyone know how the old database tiers map to those six levels? For instance, is Business the equivalent of an S1? An S2?
We need to be able to answer this question in order to figure out what service tiers/levels to migrate our existing databases to.
We just finished a performance comparison.
I can't publish our SQL queries, but we used 3 different test cases that match our normal activity. In each test case, we performed several queries with table joins and aggregate calculations (SUM, AVG, etc) for a few thousand rows. Our test database is modest - about 5GB in size with a few million rows.
A few notes: for each test case we used my local machine, a 5-year-old iMac running Windows/SQL Server in a virtual machine ("Local"), as well as SQL Azure Business ("Business"), SQL Azure Premium P1, SQL Azure Standard S2, and SQL Azure Standard S1. The Basic tier seemed so slow that we didn't test it. All of these tests were done with no other activity on the system. The queries did not return data, so network performance was hopefully not a factor.
Here were our results:
Test One
Local: 1 second
Business: 2 seconds
P1: 2 seconds
S2: 4 seconds
S1: 14 seconds
Test Two
Local: 2 seconds
Business: 5 seconds
P1: 5 seconds
S2: 10 seconds
S1: 30 seconds
Test Three
Local: 5 seconds
Business: 12 seconds
P1: 13 seconds
S2: 25 seconds
S1: 77 seconds
Conclusions:
After working with the different tiers for a few days, our team concluded a few things:
P1 appears to perform at the same level as SQL Azure Business. (P1 is 10x the price)
Basic and S1 are way too slow for anything but a starter database.
The Business tier is a shared service, so performance depends on what other users are on your server. Our database shows a maximum of 4.01% CPU, 0.77% Data IO and 0.14% Log IO, yet we're experiencing major performance problems and timeouts. Microsoft Support confirmed that we are "just on a really busy server."
The Business tier delivers inconsistent service across servers and regions. In our case, we moved to a different server in a different region and our service is back to normal. (we view that as a temporary solution)
The S1, S2 and P1 tiers seem to provide consistent performance across regions. We tested West and North Central.
Considering the results above, we're generally worried about the future of SQL Azure. The Business tier has been great for us for a few years, but it's scheduled to go out of service in 12 months. The new tiers seem overpriced compared to the Business tier.
I'm sure there are 100 ways this could be more scientific, but I'm hoping those stats help others getting ready to evaluate.
UPDATE:
Microsoft Support sent us a very helpful query to assess your database usage:
SELECT
avg(avg_cpu_percent) AS 'Average CPU Percentage Used',
max(avg_cpu_percent) AS 'Maximum CPU Percentage Used',
avg(avg_physical_data_read_percent) AS 'Average Physical IOPS Percentage',
max(avg_physical_data_read_percent) AS 'Maximum Physical IOPS Percentage',
avg(avg_log_write_percent) AS 'Average Log Write Percentage',
max(avg_log_write_percent) AS 'Maximum Log Write Percentage',
--avg(avg_memory_percent) AS 'Average Memory Used Percentage',
--max(avg_memory_percent) AS 'Maximum Memory Used Percentage',
avg(active_worker_count) AS 'Average # of Workers',
max(active_worker_count) AS 'Maximum # of Workers'
FROM sys.resource_stats
WHERE database_name = 'YOUR_DATABASE_NAME' AND
start_time > DATEADD(day, -7, GETDATE())
The most useful part is that the percentages represent a percentage of an S2 instance. According to Microsoft Support, if you're at 100% you're using the full capacity of an S2; 200% would be equivalent to a P1 instance.
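A small helper to turn that rule of thumb into a tier suggestion. The DTU figures below are the values published for these tiers at the time (e.g. 50 DTU for S2, 100 for P1); treat them as illustrative assumptions rather than current numbers:
# Convert a sys.resource_stats "percent of S2" figure into an approximate DTU
# requirement and a matching tier. DTU values are 2014-era figures, assumed
# here purely for illustration.
S2_DTU = 50
TIERS = [("S0", 10), ("S1", 20), ("S2", 50), ("P1", 100), ("P2", 200), ("P3", 800)]

def suggest_tier(max_percent_of_s2):
    needed_dtu = S2_DTU * max_percent_of_s2 / 100.0
    for name, dtu in TIERS:
        if dtu >= needed_dtu:
            return name, needed_dtu
    return "P3+", needed_dtu

print(suggest_tier(100))  # ('S2', 50.0)
print(suggest_tier(200))  # ('P1', 100.0) -> matches the "200% is roughly a P1" guidance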
We're having very good luck with P1 instances now, although the price difference has been a shocker.
I am the author of the Azure SQL Database Performance Testing blog posts mentioned above.
Making IOPS to DTU comparisons is quite difficult for Azure SQL Database, which is why I focussed on row counts and throughput rates (in MB per second) in my tests.
I would be cautious about using the transaction rates quoted by Microsoft: their benchmark databases are rather small. For example, for the Standard tier, which has a capacity of 250 GB, their benchmark databases for S1 and S2 are only 2 GB and 7 GB respectively. At these sizes I suggest SQL Server is caching much or most of the database, and as such their benchmark avoids the worst of the read throttling that is likely to impact real-world databases.
I have added a new post regarding the new Service Tiers hitting General Availability and making some estimates of the changes in performance around S0 and S1 at GA.
http://cbailiss.wordpress.com/2014/09/16/performance-in-new-azure-sql-database-performance-tiers/
There is not really any kind of mapping between the old and new offerings, as near as I can tell. In the old offerings, the only real difference between the "Web" and "Business" editions was the maximum database size.
On the new offerings, however, each tier has performance metrics associated with it. So in order to decide which offering to move your existing databases to, you need to figure out what kind of performance your application needs.
In terms of size, Web and Business appear to fall between Basic and S1. Here's a link with a chart comparing the new and old tiers. It seems a little apples-to-oranges honestly, so there isn't a direct mapping. There's also a link specifically addressed to people currently on the Web and Business tiers.
Comparison of Tiers
Web and Business Edition Sunset FAQ
