Cloud service and SQL database in two different Azure subscriptions

I have two subscriptions in an Azure account.
After I moved only the SQL database from one subscription to the other, the website slowed down.
Now the SQL database is in subscription A, and the cloud service is in subscription B.
Could this be related?

Now the SQL database is in subscription A, and the cloud service is in subscription B.
Could this be related?
The short answer: no.
The slightly longer answer: there are multiple factors that can impact performance between the Cloud Service and the database: location, network, hardware, SKU/tier and so on. The subscription the database is in should not be one of them.
As long as all other properties of the database and the server it runs on are the same as they were previously, there should be little to no difference in the performance of the connection between the two.
Azure continuously monitors the latency (speed) of core areas of its network using internal monitoring tools as well as measurements collected by ThousandEyes, a third-party synthetic monitoring service.
and
Monthly latency numbers across Azure regions do not change regularly.
Also, this might be an interesting read: Microsoft global network.
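If you want to sanity-check the network path after such a move, one option is to time TCP connects to the database endpoint from the cloud service and compare them with numbers taken before the move. A minimal sketch in Python, assuming a hypothetical server name yourserver.database.windows.net:

```python
import socket
import statistics
import time

HOST = "yourserver.database.windows.net"  # hypothetical server name
PORT = 1433  # default Azure SQL port

def connect_time() -> float:
    """Time a single TCP handshake to the SQL endpoint, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

samples = sorted(connect_time() for _ in range(20))
print(f"median: {statistics.median(samples):.1f} ms")
print(f"p95:    {samples[int(len(samples) * 0.95) - 1]:.1f} ms")
```

Comparable numbers before and after the move would confirm that the subscription change itself is not the cause.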

Related

Trying to find out Azure latency between on-premises clients and an Azure cloud application

I am trying to accomplish the task below.
What I am doing:
All my users are on-premises.
The application is hosted on an Azure VM (IaaS).
Question:
The Azure cloud application talks to the Internet, downloads huge packages, and shares them with clients that are on-premises. So I am trying to understand the risk and latency matrix between on-premises users and the Azure cloud application.
Has anyone done this sort of thing and encountered latency issues, and what are the possible fixes?
Note: I can't migrate users to the Azure cloud as of now.
To address latency issues, please try the following:
To reduce latency between an on-premises client and an Azure cloud application, make use of Azure HPC Cache.
Azure HPC Cache reduces latency for applications where data may be tethered to existing infrastructure because of dataset sizes and operational scale.
Azure HPC Cache automatically caches active data that is present both on-premises and in Azure.
You can make use of Accelerated Networking so that communication is faster.
Try eliminating network congestion.
Try reducing the number of network nodes that must be traversed from one stage to another.
Make use of Azure ExpressRoute and Azure Analysis Services to reduce network latency.
Azure ExpressRoute creates a private connection between on-premises sources and Azure.
Azure Analysis Services avoids the need for an on-premises data gateway and generally eliminates network latency.
For more detail, please refer to the links below:
https://azure.microsoft.com/en-us/blog/azure-hpc-cache-reducing-latency-between-azure-and-on-premises-storage/
https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/
https://viniciusdeschamps.com.br/3-ways-to-reduce-network-latency-in-azure/#how-can-I-measure-network-latency
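To actually put numbers on the latency matrix the question asks about, you can sample round-trip times from an on-premises machine to the application. A minimal sketch in Python using only the standard library, assuming a hypothetical endpoint https://yourapp.example.com/health:

```python
import statistics
import time
import urllib.request

# Hypothetical application endpoint; replace with your Azure-hosted URL.
URL = "https://yourapp.example.com/health"

def request_time() -> float:
    """Time a single HTTP round trip, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

samples = sorted(request_time() for _ in range(30))
print(f"min:    {samples[0]:.1f} ms")
print(f"median: {statistics.median(samples):.1f} ms")
print(f"p95:    {samples[int(len(samples) * 0.95) - 1]:.1f} ms")
```

Running this from each on-premises site, and at different times of day, shows whether congestion or routing (rather than raw distance) dominates; the spread between median and p95 is usually more telling than the average.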

Function App consumption plan - Failover/Redundancy

I am a year into using Azure resources.
I have an HTTP-trigger function app and Cosmos DB as the backend database. I am using a Consumption plan for my function app, which I understand makes the system serverless. And by serverless, I understand I don't have to worry about the infrastructure.
So is this serverless system redundant within the region by default, or do I need to handle failover by having an extra instance of the function app / Cosmos DB in the same region to serve as a backup instance?
Note - I'm talking about same-region failover, not multi-region redundancy.
Availability zone (AZ) support isn't currently available for function apps on Consumption plans.
On the Consumption plan, the Functions SLA is 99.95%.
It won't fail over to a different geo region, no matter the hosting SKU. Use Traffic Manager to achieve that level of redundancy.
Needless to say, all resources your function depends on need to exist in the 2nd region as well (SQL instances, Redis, Event Hubs, ...). You are responsible for keeping state consistent across regions (think Azure SQL auto-failover groups).
One notable exception is Cosmos DB, which (if configured) can have automagically-managed replicas in multiple regions. Your function will always use the same connection string.
The best you can do (IMHO) is to define the data redundancy as Geo-zone-redundant storage and let Azure handle this for you.
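To illustrate the Cosmos DB point above: once the account is configured with additional read regions, the application keeps a single endpoint and simply lists the regions it prefers. A minimal sketch with the azure-cosmos Python SDK; the account, database, container, and region names are all hypothetical:

```python
from azure.cosmos import CosmosClient

# Hypothetical endpoint and key; the endpoint stays the same
# no matter which replica region actually serves the requests.
ENDPOINT = "https://mycosmos.documents.azure.com:443/"
KEY = "<account-key>"

client = CosmosClient(
    ENDPOINT,
    credential=KEY,
    # If East US becomes unavailable, the SDK fails over to West US
    # without any change to the endpoint above.
    preferred_locations=["East US", "West US"],
)

db = client.get_database_client("appdb")
container = db.get_container_client("orders")
item = container.read_item(item="order-1", partition_key="customer-42")
```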

Azure SQL Elastic Pool

Just wanted to be sure if this is possible:
I have all my app services in a separate subscription. I want to use SQL elastic pools, for which I will create a separate subscription in which only my DBs will reside, and I will add all my DBs to the pool I create in that subscription.
Problems:
1. Is it possible for my apps in a different subscription to access the DB in another subscription?
2. If the above scenario is possible, will it hamper the performance of my apps?
3. Will I be charged a data transfer cost for this? What if the app and DB are in the same region?
Thanks in advance.
1. Is it possible for my apps in a different subscription to access the DB in another subscription?
Yes. Firewall rules and authentication are not related to the subscription at all.
2. If the above scenario is possible, will it hamper the performance of my apps?
No.
3. Will I be charged a data transfer cost for this? What if the app and DB are in the same region?
There is no data egress charge between resources in the same region.
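Concretely, nothing in the connection identifies a subscription; the app just uses the server's FQDN and credentials. A minimal sketch with the pyodbc package (one possible driver choice), assuming a hypothetical server pool-server in the database subscription:

```python
import pyodbc

# The server lives in the "database" subscription; the app running this code
# lives in another subscription. Only the FQDN and credentials matter here.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:pool-server.database.windows.net,1433;"  # hypothetical server
    "Database=appdb;"                                    # hypothetical DB in the pool
    "Uid=appuser;Pwd=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT @@SERVERNAME").fetchone()
    print(row[0])
```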

Is it possible to use the same SQL Azure instance from two different cloud services in two different subscriptions?

I have one Microsoft Azure subscription with one cloud service and one SQL Azure instance. Now I want to create another cloud service under a different subscription (using a different Microsoft account). Can this second cloud service use the same SQL Azure instance from the first subscription? (I need to share data between the two cloud services.)
Or might there be performance issues?
Thanks in advance.
Yes. An Azure SQL DB instance can be accessed from a different subscription as long as you have the connection string, username and password for the Azure SQL instance. As long as both services are in the same region, there is no performance issue.
Yes, sure. From a user's perspective, SQL Azure is mostly an ordinary SQL Server which you can access from anywhere in the world (given that the firewall rules allow that access): from Azure services, from VMs hosted elsewhere, from your desktop, from servers in your company server room.
Network latency might kick in. Also, more clients on the same instance mean more load, and there is a limit on the number of concurrent connections. Other than that - no problems.
You need to make sure you are a member of each Azure subscription to be able to use the other's SQL DB.
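Since the firewall is the one thing that actually gates cross-subscription access, here is a hedged sketch of adding a server-level firewall rule with the azure-mgmt-sql Python SDK; the resource group, server name, and IP values are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import FirewallRule

# Management operations run against the subscription that OWNS the server,
# even when the connecting cloud service lives in a different subscription.
client = SqlManagementClient(DefaultAzureCredential(), "<owning-subscription-id>")

client.firewall_rules.create_or_update(
    resource_group_name="rg-shared-data",   # hypothetical
    server_name="shared-sql-server",        # hypothetical
    firewall_rule_name="allow-second-service",
    parameters=FirewallRule(
        start_ip_address="203.0.113.10",  # outbound IP of the second cloud service
        end_ip_address="203.0.113.10",
    ),
)
```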

How do I make my Windows Azure application resistant to a catastrophic Azure datacenter event?

AFAIK Amazon AWS offers so-called "regions" and "availability zones" to mitigate the risk of partial or complete datacenter outage. It looks like if I have copies of my application in two "regions" and one "region" goes down, my application can continue working as if nothing happened.
Is there something like that with Windows Azure? How do I address the risk of a catastrophic datacenter outage with Windows Azure?
Within a single data center, your Windows Azure application has the following benefits:
Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.
Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:
For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center) or an on-premises SQL Server database. This feature is still considered a preview feature and has to be set up separately.
For Azure storage (tables, blobs), you'll need to handle replication to a second data center yourself, as there is no built-in facility today. This can be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else (see the sketch after this answer). EDIT: Per Ryan's answer, there is data geo-replication for blobs and tables. HOWEVER: aside from a blog-post mention in December, and possibly at PDC, this is not live.
For Compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP - visit the Beta area of the Windows Azure portal to sign up.
Remember that, with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage costs for duplicate data in a secondary data center, and Compute instances in additional data centers).
Just like with on-premises environments, DR needs to be carefully thought out and implemented.
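As a sketch of the do-it-yourself replication mentioned above (the built-in geo-replication Ryan describes in the next answer has since made this largely unnecessary), a periodic copy task could look like this with the azure-storage-blob Python package; the container and connection strings are hypothetical:

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical connection strings for the primary and secondary accounts.
src = BlobServiceClient.from_connection_string("<primary-connection-string>")
dst = BlobServiceClient.from_connection_string("<secondary-connection-string>")

def replicate(container_name: str) -> None:
    """Copy every blob in the container to the secondary account."""
    src_container = src.get_container_client(container_name)
    for blob in src_container.list_blobs():
        source_url = src_container.get_blob_client(blob.name).url
        # For a private source container, append a read SAS token to source_url
        # so the destination service is authorized to pull from it.
        dst.get_blob_client(container_name, blob.name).start_copy_from_url(source_url)

# Run this from a scheduled job, e.g. hourly.
replicate("app-data")
```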
David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an async process that has a target of about a 10 min lag or so. This process is also out of your control and is purely for a data center loss. In total, your data is replicated 6 times in 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).
If a data center was lost, they would flip over your DNS for blob and table storage to the other sub-region and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc).
So, for a true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter was lost, Traffic Manager would route to the new sub-region and you would find your data there as well.
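For the compute side of that setup, a Traffic Manager profile with priority routing sends all traffic to the primary region while it is healthy and fails over to the standby when it stops responding. A hedged sketch using the much newer azure-mgmt-trafficmanager Python SDK as an illustration; all names and endpoints are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import (
    DnsConfig, Endpoint, MonitorConfig, Profile,
)

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.profiles.create_or_update(
    "rg-dr",      # hypothetical resource group
    "myapp-tm",   # hypothetical profile name
    Profile(
        location="global",
        traffic_routing_method="Priority",  # all traffic to priority 1 while healthy
        dns_config=DnsConfig(relative_name="myapp-tm", ttl=60),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/health"),
        endpoints=[
            Endpoint(
                name="primary",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="myapp-north.cloudapp.net",  # hypothetical primary
                priority=1,
            ),
            Endpoint(
                name="standby",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="myapp-south.cloudapp.net",  # hypothetical hot standby
                priority=2,
            ),
        ],
    ),
)
```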
The one failure that you didn't account for is the possibility of an error being replicated across data centers. In that scenario, you may want to consider running Azure PaaS as part of the HP Cloud offering, in either a load-balanced or failover scenario.
