I have been tasked with building a PoC in Azure to "simulate" a future global deployment where data transfer time is an important factor. The actual deployment will use fully on-prem resources. So, as odd as it sounds, I am looking for the worst performance possible between the two options.
Architecture A (single tenant):
Create a single Azure tenant in the US region
Create a Resource Group with a US-based location
Create another Resource Group with an EU-based location
Architecture B (dual tenant):
Create an Azure tenant in the US region with a US-based RG
Create an entirely separate Azure tenant in an EU region with an EU-based RG
Would the dual-tenant structure above make any measurable difference one way or the other from the single-tenant one (assuming all vNets, VMs, etc. are identical)? I am thinking the single-tenant setup would be faster since (presumably) the traffic never leaves the Azure backbone. But that's just speculation.
Here is what I got back from a colleague. She is (obviously) far more versed in Azure IaaS than I am. Answer #3 below indicates that the closest analog to the client's MPLS connection is via VPN/ExpressRoute. Not really worth the cost, but still good to know.
Can a single subscription be used to provision US and European region located resources? Yes
Can resources in US and European located regions be managed from a US based portal? Yes
When allowing resources in US and European regions to communicate with one another, what are our options? A couple of primary ways...
Inter-regional (tenant to tenant, region to region):
Communications can be provisioned to travel across the Microsoft Azure backbone. The traffic never hits the open Internet.
VPN or ExpressRoute:
Traffic travels either over the open Internet in a TLS-like encrypted tunnel (VPN) or over a private, MPLS-like route from one region to another (ExpressRoute). ExpressRoute does, however, require advanced routing (BGP) and dedicated circuits at either end from different connectivity providers. It is also expensive.
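If you want to settle the A-vs-B question empirically rather than by speculation, a crude round-trip probe run from a VM in one region against a VM in the other is enough. Here is a minimal PowerShell sketch; the host name and port are placeholders for whatever endpoint your EU-side VM exposes:

    # Time 20 TCP connection attempts from the US VM to the EU VM and
    # summarize the latency in milliseconds. Each sample includes DNS
    # resolution and handshake overhead, so treat it as a rough probe.
    $samples = 1..20 | ForEach-Object {
        (Measure-Command {
            Test-NetConnection -ComputerName "eu-vm.example.com" -Port 443 | Out-Null
        }).TotalMilliseconds
    }
    $samples | Measure-Object -Average -Minimum -Maximum

Run it under both architectures; any consistent difference (or the absence of one) answers the question more reliably than guessing about the backbone.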
Kind of a simple question, but puzzling...
Is there a stat in Azure services to monitor how many times Data Factory is/was accessed?
So, as an example: if an automated system is set up to make persistent API calls to ADF with the malicious intent to exhaust it, is there a way to monitor for that and gather some kind of stats?
The monitoring built into the Azure Data Factory PaaS itself only monitors legitimate, authenticated usage. You can see this on the https://adf.azure.com/en/monitoring/pipelineruns?factory=%2Fsubscriptions%... dashboard.
Notice how the root domain is adf.azure.com - this is the same for all tenants using Data Factory around the world. Your specific subscription / instance are mere query parameters in the URL. Microsoft Azure fully manages the actual hosting of this PaaS, which means they are entirely responsible for thwarting any DDoS or similar bad-actor attempts on this service. It's not something you have to worry about, and therefore not something you have much visibility into.
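If you want those same authenticated run stats programmatically rather than through the dashboard, the Az.DataFactory PowerShell module can query them. A minimal sketch, assuming placeholder resource group and factory names:

    # Pull the last 24 hours of (authenticated) pipeline runs.
    $runs = Get-AzDataFactoryV2PipelineRun `
        -ResourceGroupName "my-rg" `
        -DataFactoryName "my-adf" `
        -LastUpdatedAfter (Get-Date).AddDays(-1) `
        -LastUpdatedBefore (Get-Date)

    # One row per run: which pipeline ran, how it ended, when it started.
    $runs | Select-Object PipelineName, Status, RunStart | Format-Table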
If you ever need or want to check in on how Microsoft is doing with this, head on over to https://status.azure.com/status and search for the "Azure Data Factory" row.
This is really one of the biggest selling points of using a fully-hosted cloud PaaS such as Data Factory. You are no longer responsible for the hardware, or even the range of IP addresses that back this service - no more than you have to worry about someone DDoS'ing outlook.office.com, which probably serves your entire organisation's email. It could happen, but if it did, it would affect all of Microsoft's customers around the world, not just you personally, so there should be no expectation that you personally do anything special to mitigate it.
Note that, more generically, if you want to monitor network traffic within your NSGs, interfaces, VNets, etc. on Azure, the thing to use is Azure Monitor's Network Insights at https://portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/networkInsights
This applies to all provisioned resources and services on Azure, though - it is not something specific to Azure Data Factory.
I've been reading through the Microsoft Cloud Adoption Framework for some time now. In our company, we have a similar implementation (hub-spoke), but a lot less modular than what is depicted in the docs. We don't have an identity or management subscription, for example.
When looking at our own hub-spoke architecture, we basically only have 2 spokes: non-prod and prod, in which we deploy all applications (VMs) inside one big VNet (one per spoke). Since we have hundreds of VMs, ranging from very small tools on a single VM up to large, complex setups with dozens of VMs, we would eventually also have many landing zones (and therefore VNets), I suppose? Our hub contains central shared services like the firewall, domain controllers, etc.
It's important to know that we don't do any in-house application development or let other departments like marketing deploy Azure resources themselves. We basically set up Azure infrastructure in the spokes from within our central IT infrastructure department and give external partners access to deploy their applications into it.
What I'm particularly curious about is when you would decide to create a new landing zone in this architecture? Would you have a landing zone for each application? One for each department to enable self-service? Is our approach a good idea?
Very interested to learn how other companies are implementing this architecture.
The big part of an enterprise-scale landing zone is probably the modules you mentioned that are missing in your implementation (or in any "start small" type of landing zone):
Enroll in an EA so you have an account to manage multiple subscriptions.
Have the proper resource organization in place (management groups and subscriptions). This ensures that you can deploy policies, RBAC, etc. at the proper level (see the sketch below).
A Landing Zone sits at a much higher level than the applications (resource groups). There is no need for more than one Landing Zone; you may just need to extend it, either by creating a new spoke and/or, in some cases, a new subscription.
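To illustrate the "proper level" point above: once management groups are in place, a single policy assignment at management-group scope flows down to every subscription and spoke beneath it. A minimal Az PowerShell sketch, assuming a hypothetical management group ID of "corp":

    # Assign the built-in "Allowed locations" policy at management-group
    # scope so every landing-zone subscription below "corp" inherits it.
    $mg  = "/providers/Microsoft.Management/managementGroups/corp"
    $def = Get-AzPolicyDefinition -Builtin |
           Where-Object { $_.Properties.DisplayName -eq "Allowed locations" }

    New-AzPolicyAssignment -Name "allowed-locations" -Scope $mg `
        -PolicyDefinition $def `
        -PolicyParameterObject @{ listOfAllowedLocations = @("westeurope") }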
I have an Azure subscription, and there are a number of services available. If I configure VMs, web apps, applications, etc., there are a few high-end resources which are very expensive. In order to avoid unwanted billing, I want to create a policy that allows only a few services and lower-configuration resources. Is there an Azure policy that can do that?
Do take a look at Azure Policy. In short, Azure Policy enables cloud governance: by defining proper policies, you can restrict creation of certain kinds of resources, disallow certain SKUs for resources, and more.
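For the VM-size part specifically, there is a built-in "Allowed virtual machine size SKUs" policy you can assign directly. A custom definition would look roughly like this sketch (the names, scope, and allow-list are just examples):

    # Deny creation of any VM whose size is not in a small allow-list.
    $rule = @{
        if = @{
            allOf = @(
                @{ field = "type"; equals = "Microsoft.Compute/virtualMachines" },
                @{ not = @{ field = "Microsoft.Compute/virtualMachines/sku.name";
                            in = @("Standard_B1s", "Standard_B2s") } }
            )
        }
        then = @{ effect = "deny" }
    } | ConvertTo-Json -Depth 10

    $def = New-AzPolicyDefinition -Name "allowed-vm-sizes" `
            -DisplayName "Allowed VM sizes" -Policy $rule
    New-AzPolicyAssignment -Name "allowed-vm-sizes" `
        -Scope "/subscriptions/<subscription-id>" -PolicyDefinition $def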
However, as a good practice, you should have only a few people in your organization with the capability to provision resources, and there should be a formal procedure for provisioning them. A friend of mine burned $180,000 in Azure in just 3 months because every developer on his team had the capability to create resources in the company's Azure subscription. The developers created resources as they pleased without thinking about the pricing implications.
Hi!
We have just started using Windows Azure and are now in the phase of designing our infrastructure. A question that I haven't really found a straight answer to is whether there is a limit on how many endpoints I can have per subscription. Some research told me 25, and then I found another place saying 150. I haven't found anything on Microsoft's official Azure site or blog.
Does anyone know? And has the limit been confirmed?
Thanks in advance,
Lucas
I think you're confusing subscription with deployment (a subscription is really a billing model for your Azure resources: compute, storage, bandwidth, etc.). A deployment will have a collection of VMs (or web/worker roles) living behind a single xxx.cloudapp.net namespace. You'd then configure endpoints at the deployment level. For a Virtual Machine deployment, you'll only worry about external-facing (input) endpoints, since VMs can communicate internally across all ports. For web/worker Cloud Service deployments, you'll also have input endpoints.
Regarding the number of endpoints per deployment: this number has grown over the years and will continue to evolve. I'm not sure of the current limit, but it's very simple to create an endpoint with PowerShell. With a simple loop, you should be able to create endpoints until an error is thrown, as in the sketch below.
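A rough sketch using the classic (ASM-era) cmdlets that match this question's vintage; the service and VM names are placeholders:

    # Keep adding input endpoints until the platform refuses; the count
    # reached before the error is the effective per-deployment limit.
    $i = 0
    while ($true) {
        $i++
        try {
            Get-AzureVM -ServiceName "my-svc" -Name "my-vm" |
                Add-AzureEndpoint -Name "probe$i" -Protocol tcp `
                    -LocalPort (10000 + $i) -PublicPort (10000 + $i) |
                Update-AzureVM -ErrorAction Stop | Out-Null
        } catch {
            Write-Host "Failed at endpoint #$i with: $($_.Exception.Message)"
            break
        }
    }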
I've created a Hosted Service that talks to a Storage Account in Azure. Both have their regions set to Anywhere US, but looking at the bills for the last couple of months, I've found that I'm being charged for communication between the two, as one is in North-Central US and the other in South-Central US.
Am I correct in thinking there would be no charge if they were both hosted in the same sub-region?
If so, is it possible to move one of them and how do I go about doing it? I can't see anywhere in the Management Portal that allows me to do this.
Thanks in advance.
Adding to what astaykov said: my advice is to always select a specific region, even if you don't use affinity groups. You'll then be assured that your storage and services are in the same data center, and you won't incur outbound bandwidth charges.
There isn't a way to move a storage account; you'll need to either transfer your data (and incur bandwidth costs), or re-deploy your hosted service to the region currently hosting your data (no bandwidth costs). To minimize downtime if your site is live, you can push your new hosted service up (to a new .cloudapp.net name), then change your DNS information to point to the new hosted service.
EDIT 5/23/2012 - If you re-visit the portal and create a new storage account or hosted service, you'll notice that the Anywhere options are no longer available. This doesn't impact existing accounts (although they'll now be shown at their current subregion).
In order to avoid such charges, the best guideline is to use Affinity Groups. You define the affinity group once, and then choose it when creating a new storage account or hosted service. You can still have the Affinity Group in "Anywhere US", but as long as both the storage account and the hosted service are in the same affinity group, they will be placed in one data center.
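As a concrete sketch of that workflow with the classic cmdlets (names and location are placeholders): create the affinity group once, then reference it from both resources.

    # One affinity group, referenced by both the storage account and the
    # hosted service, pins them to the same data center.
    New-AzureAffinityGroup -Name "my-ag" -Location "West US"
    New-AzureStorageAccount -StorageAccountName "mystorageacct" -AffinityGroup "my-ag"
    New-AzureService -ServiceName "my-svc" -AffinityGroup "my-ag"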
As for moving an account from one region to another - I don't think it is possible. You might have to create a new account and migrate the data if required. You can use a 3rd-party tool such as Cerebrata's Cloud Storage Studio to first export your data and then import it into the new account.
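For what it's worth, the modern equivalent of that export/import is a server-side blob copy between the two accounts, so the data never flows through your own machine. A sketch with placeholder account names, keys, and container (the destination container must already exist):

    # Server-side copy of every blob in "data" from the old account to the new.
    $src = New-AzStorageContext -StorageAccountName "oldacct" -StorageAccountKey "<key1>"
    $dst = New-AzStorageContext -StorageAccountName "newacct" -StorageAccountKey "<key2>"

    Get-AzStorageBlob -Container "data" -Context $src | ForEach-Object {
        Start-AzStorageBlobCopy -SrcContainer "data" -SrcBlob $_.Name `
            -Context $src -DestContainer "data" -DestBlob $_.Name `
            -DestContext $dst
    }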
Don't forget - use affinity groups! This is the way to make 100% sure there will be no traffic charges between Compute, Storage, and SQL Azure.