I am trying to understand the Azure Databricks architecture based on this link. I understand the purpose of the control plane and the data plane in the Azure Databricks architecture, but I couldn't answer the following questions.
How do the control plane and the data plane communicate?
How do the control plane and the data plane authenticate each other?
There are two ways of communication between the control plane and the data plane:
Legacy: VMs running in the data plane must have public IPs, and the control plane reaches them directly. This approach was always a security headache. Azure still supports it and shows it in the UI, but it shouldn't be used.
"No Public IP" (NPIP), also known as "Secure Cluster Connectivity" (doc and more technical details). In this case, when VMs in the data plane start, they open a bi-directional tunnel to a relay in the control plane, and that tunnel is always used for controlling the VMs and Spark. In this setup the VMs don't need public IPs, and it's much more secure and easier to control.
Regarding authentication: it's an internal detail, but it provides a way of ensuring that the VMs communicating with the control plane really are the VMs that form the cluster.
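If you want the NPIP / Secure Cluster Connectivity mode, it has to be enabled at workspace creation time. Below is a minimal sketch of doing that with the azure-mgmt-databricks Python SDK; all names and IDs are placeholders, and the exact model names may differ between SDK versions, so treat it as illustrative rather than authoritative.

```python
# Illustrative sketch: create an Azure Databricks workspace with
# "No Public IP" / Secure Cluster Connectivity enabled.
# All names/IDs below are placeholders; model names follow the
# azure-mgmt-databricks package and may vary between SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.databricks import AzureDatabricksManagementClient
from azure.mgmt.databricks.models import (
    Workspace,
    Sku,
    WorkspaceCustomParameters,
    WorkspaceCustomBooleanParameter,
)

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-rg"                # placeholder
workspace_name = "my-databricks-ws"     # placeholder

client = AzureDatabricksManagementClient(DefaultAzureCredential(), subscription_id)

workspace = Workspace(
    location="westeurope",
    sku=Sku(name="premium"),
    # Managed resource group where Databricks places the data-plane resources.
    managed_resource_group_id=(
        f"/subscriptions/{subscription_id}/resourceGroups/{workspace_name}-managed"
    ),
    parameters=WorkspaceCustomParameters(
        # The flag that turns on NPIP / Secure Cluster Connectivity.
        enable_no_public_ip=WorkspaceCustomBooleanParameter(value=True),
    ),
)

poller = client.workspaces.begin_create_or_update(
    resource_group, workspace_name, workspace
)
print(poller.result().id)
```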
Out of the box, Azure Advisor includes Cost recommendations for the resource type of Virtual Machines, based on resource utilization.
If I look at them under our subscription they have the following information:
Is there any way to get similar advisory for the Virtual Machine Scale Set resource type? Is there any included out of the box?
Or, if I want to get the average resource consumption (say, CPU percentage) of all or individual Virtual Machine instances inside a Virtual Machine Scale Set, to help decide whether the SKU of the scale set is appropriate, do I need to make a query for this inside of Monitor Logs or similar?
Could one create their own custom-made advisories (inside Azure Advisor, or if not, anywhere else?) to get this functionality in place (if it isn't already provided)?
Thanks!
Is there any way to get similar advisory for the Virtual Machine Scale Set resource type? Is there any included out of the box?
As per the Azure Advisor documentation, Advisor provides recommendations for the following resource types:
Application Gateway, App Services, availability sets, Azure Cache, Azure Data Factory, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, Azure ExpressRoute, Azure Cosmos DB, Azure public IP addresses, Azure Synapse Analytics, SQL servers, storage accounts, Traffic Manager profiles, and Virtual machines.
Although Azure Advisor also includes recommendations from Azure Security Center, which may cover additional resource types, this list does not include cost recommendations for VMSS as of today, AFAIK.
I need to make a query for this inside of Monitor Logs or similar?
To monitor your virtual machine scale sets, you can leverage Azure Monitor. The performance views in the VM Insights feature are powered by Log Analytics queries, offering “Top N”, aggregate, and list views to quickly find outliers or issues in your scale set based on guest-level metrics for CPU, available memory, bytes sent and received, and logical disk space used.
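For the "average CPU of all or individual instances" part, here is a minimal sketch of pulling guest-level CPU from the VM Insights InsightsMetrics table with the azure-monitor-query Python package. The workspace ID is a placeholder, and it assumes VM Insights is enabled on the scale set so that InsightsMetrics is being populated.

```python
# Sketch: average guest CPU per instance of a scale set over the last 7 days,
# read from the Log Analytics workspace that VM Insights writes to.
# Assumes VM Insights is enabled; <workspace-id> is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<workspace-id>"  # placeholder

# Guest-level CPU utilization collected by VM Insights.
QUERY = """
InsightsMetrics
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AvgCpuPercent = avg(Val) by Computer
| order by AvgCpuPercent desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))

for table in response.tables:
    for computer, avg_cpu in table.rows:
        print(f"{computer}: {avg_cpu:.1f}% average CPU")
```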
You can also deploy the Azure Monitor Application Insights Agent on Azure virtual machine scale sets to enable monitoring for your .NET or Java based web applications and get all the benefits of using Application Insights without modifying your code.
Could one create their own custom-made advisories (inside Azure Advisor, or if not, anywhere else?) to get this functionality in place (if it isn't already provided)?
Nope, that is not doable as of today. Azure Advisor is a managed offering that analyzes your resource configuration and usage telemetry and then recommends solutions that can help you optimize your Azure resources. Feel free to share your feedback and ideas here for the Advisor team to evaluate and prioritize.
I have been tasked with building a PoC in Azure to "simulate" a future global deployment where data transfer time is an important factor. The actual deployment will use fully on-prem resources. So, as odd as it sounds, I am looking for the worst performance possible between the two options.
Architecture A (single tenant):
Create a single Azure tenant in the US region
Create a Resource Group with a US-based location
Create another Resource Group with an EU-based location
Architecture B (dual tenant):
Create an Azure tenant in the US region with a US-based RG
Create an entirely separate Azure tenant in an EU region with an EU-based RG
Would the dual-tenant structure above make any measurable difference one way or the other compared to the single-tenant setup (assuming all vNets, VMs, etc. are identical)? I am thinking the single-tenant setup would be faster since (presumably) the traffic never leaves the Azure Service Fabric. But that's just speculation.
Here is what I got back from a colleague. She is (obviously) far more versed in Azure IaaS than I am. Answer #3 below indicates that the closest analog to the client MPLS connection is via VPN/ER. Not really worth the cost but still good to know.
Can a single subscription be used to provision US and European region located resources? Yes
Can resources in US and European located regions be managed from a US based portal? Yes
When allowing resources in US and European located regions to communicate with one another, what are our options? A couple of primary ways:
Intra-Azure (tenant to tenant, region to region): communications can be provisioned to travel across the Microsoft Azure backbone; the traffic never hits the open Internet.
VPN or ExpressRoute: traffic travels either the open internet (VPN) or a private, MPLS-like route (ExpressRoute) from one region to another. ExpressRoute, the MPLS-like option, does require advanced routing (BGP) and dedicated circuits at either end from connectivity providers. It is also expensive.
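Since the whole point of the PoC is to compare transfer time, whichever tenant layout you pick, the simplest way to get hard numbers is to measure round-trip time directly from a VM in one region to an endpoint in the other. A rough sketch (the hostname and port are placeholders for whatever endpoint you expose in the remote region):

```python
# Rough latency probe: measure TCP connect round-trips from a VM in one
# region to an endpoint (VM, load balancer, etc.) in the other region.
# The hostname and port below are placeholders for your own endpoint.
import socket
import statistics
import time

TARGET_HOST = "eu-vm.example.com"  # placeholder
TARGET_PORT = 443                  # placeholder
SAMPLES = 20

def tcp_connect_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Time a single TCP handshake in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = [tcp_connect_ms(TARGET_HOST, TARGET_PORT) for _ in range(SAMPLES)]
print(f"min {min(samples):.1f} ms, median {statistics.median(samples):.1f} ms, "
      f"max {max(samples):.1f} ms over {SAMPLES} connects")
```

Run it from a VM in each layout (single tenant and dual tenant) against the same remote endpoint and compare; if the numbers are indistinguishable, the tenant boundary is not what matters for transfer time.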
I am trying to understand how to design the distribution of an application.
The plan is to replicate the whole app in different geographic regions (EU, US, Asia) and use Azure Traffic Manager to handle request distribution.
The thing is that the app has a special requirement: requests should be isolated within a region. US users should be directed only to the US data center, EU users to the EU data center, and so on.
The requirement is to prevent traffic from randomly going to different data centers, for example a US user making a few requests to the US data center and then a few requests to the EU data center.
Also, it is important to note that this is not about request stickiness. What I need to achieve is that all users from the same city/country always get directed to the same data center.
Only in the event of a data center failure should ALL the requests be directed to another region.
Is it possible to create such configuration?
From my understanding it seems you can do this via the "Performance" load balancing method. This Hands-on-Lab provided by Microsoft gives a step-by-step process for setting up Traffic Manager to handle requests based on geography.
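For completeness, here is a rough sketch of what that profile could look like when created with the azure-mgmt-trafficmanager Python SDK instead of the portal. All names (resource group, profile, DNS prefix, endpoint targets) are placeholders, and the model and field names are based on that SDK, so double-check them against the version you install.

```python
# Sketch: Traffic Manager profile with the "Performance" routing method and
# two external endpoints (one per regional deployment). Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import (
    Profile, DnsConfig, MonitorConfig, Endpoint,
)

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-rg"                # placeholder
profile_name = "myapp-tm"               # placeholder

client = TrafficManagerManagementClient(DefaultAzureCredential(), subscription_id)

profile = client.profiles.create_or_update(
    resource_group,
    profile_name,
    Profile(
        location="global",
        traffic_routing_method="Performance",  # route users to the closest endpoint
        dns_config=DnsConfig(relative_name="myapp-tm", ttl=30),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/health"),
        endpoints=[
            Endpoint(
                name="us",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="myapp-us.example.com",   # placeholder
                endpoint_location="East US",
            ),
            Endpoint(
                name="eu",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="myapp-eu.example.com",   # placeholder
                endpoint_location="West Europe",
            ),
        ],
    ),
)
print(profile.dns_config.fqdn)
```

The failover behaviour the question asks about comes from the health probe in monitor_config: if one region's endpoint stops answering the probe, Traffic Manager stops handing it out and the other region picks up the traffic.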
AFAIK Amazon AWS offers so-called "regions" and "availability zones" to mitigate risks of partial or complete datacenter outage. Looks like if I have copies of my application in two "regions" and one "region" goes down my application still can continue working as if nothing happened.
Is there something like that with Windows Azure? How do I address risk of datacenter catastrophic outage with Windows Azure?
Within a single data center, your Windows Azure application has the following benefits:
Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.
Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:
For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center), or an on-premises SQL Server database. More info here. Since this feature is still considered a Preview feature, you have to go here to set it up.
For Azure storage (tables, blobs), you'll need to handle replication to a second data center yourself, as there is no built-in facility today. This can be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else (a rough sketch of such a copy task follows this list). EDIT: Per Ryan's answer, there's data geo-replication for blobs and tables. HOWEVER: aside from a mention in this blog post in December, and possibly at PDC, this is not live.
For Compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP - visit the Beta area of the Windows Azure portal to sign up.
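As a concrete illustration of the "background copy task" idea above, here is a minimal sketch using the current azure-storage-blob Python package (which obviously did not exist when this answer was written). Account URLs and the container name are placeholders, and it assumes the destination copy service can read the source blobs (e.g. via a SAS token, omitted here for brevity).

```python
# Sketch of a cross-region copy task: copy every blob in a container from a
# primary storage account to a secondary account in another data center.
# Account URLs and container name are placeholders; the source must be
# readable by the copy service (public access or a SAS appended to src_url).
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

SRC_ACCOUNT_URL = "https://primaryaccount.blob.core.windows.net"    # placeholder
DST_ACCOUNT_URL = "https://secondaryaccount.blob.core.windows.net"  # placeholder
CONTAINER = "app-data"                                              # placeholder

credential = DefaultAzureCredential()
src = BlobServiceClient(SRC_ACCOUNT_URL, credential=credential)
dst = BlobServiceClient(DST_ACCOUNT_URL, credential=credential)

src_container = src.get_container_client(CONTAINER)
dst_container = dst.get_container_client(CONTAINER)
if not dst_container.exists():
    dst_container.create_container()

for blob in src_container.list_blobs():
    src_url = f"{SRC_ACCOUNT_URL}/{CONTAINER}/{blob.name}"  # append a SAS here if needed
    # Server-side, asynchronous copy; no data flows through this machine.
    dst_container.get_blob_client(blob.name).start_copy_from_url(src_url)
    print(f"started copy of {blob.name}")
```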
Remember that, with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage costs for duplicate data in a secondary data center, and Compute instances in additional data centers).
Just like with on-premises environments, DR needs to be carefully thought out and implemented.
David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an async process that has a target of about a 10 min lag or so. This process is also out of your control and is purely for a data center loss. In total, your data is replicated 6 times in 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).
If a data center was lost, they would flip over your DNS for blob and table storage to the other sub-region and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc).
So, for a true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter was lost, Traffic Manager would route to the new sub-region and you would find your data there as well.
The one failure that you didn't account for is an error being replicated across data centers. In that scenario, you may want to consider running Azure PaaS as part of the HP Cloud offering, in either a load-balanced or failover scenario.
We are experiencing a very serious unscheduled downtime of our Azure application today, now coming up to 9 hours. We reported it to Azure support and the ops team is actively trying to fix the problem, and I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at that instance, so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network related within our data center (west europe), and indeed, later on in the day the service dash board has gone red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally threads within our application are logging sql connection exceptions into our storage account as it cannot contact the DB)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre, yet has no issues contacting the DB, and its external endpoint is fully available.
I would like to ask the community if there is anything that I could have done better to avoid this downtime? I followed the guidance with respect to having at least 2 role instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL Azure DB is in the same datacentre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but can be worked around (if you only run a website it can be done very simply using Response.Redirects or similar)
Now, there is a data synchronization service from Microsoft that will sync up multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from a SQL Azure perspective.
Also, it would be a good idea to employ a 3rd-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose to, when some of the instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities, which will also improve the latency of your application. So you can map http://myapp.com to the internal URL of your data center, and a user in Europe should automatically get redirected to the European data center and vice versa for the USA. Note: at the time of writing this post, there is no way to automatically detect and fail over to another data center. Manual steps will be involved once a failure is detected, and the failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to create a page or web service that checks the availability of application-specific components (a diagnostic page) and then poll it from a local computer; a minimal poller is sketched below.
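For instance, a bare-bones poller along these lines; the URL and interval are placeholders, and it assumes your diagnostic page returns HTTP 200 only when its dependencies are healthy:

```python
# Minimal availability poller for a diagnostic/health page.
# The URL and interval are placeholders; the page is assumed to return
# HTTP 200 only when the app's dependencies (DB, storage, ...) are healthy.
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://myapp.cloudapp.net/diagnostics/health"  # placeholder
INTERVAL_SECONDS = 60

def check_once(url: str, timeout: float = 10.0) -> bool:
    """Return True if the health page answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

while True:
    if check_once(HEALTH_URL):
        print("OK")
    else:
        # Hook your alerting (email, SMS, manual failover runbook) in here.
        print("ALERT: health check failed")
    time.sleep(INTERVAL_SECONDS)
```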
HTH
As you're deploying to Azure, you don't have much control over how SQL Server is set up. MS have already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice for about an hour each time, but one database in another affected data centre had no interruption.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS haven't prepared for (this latest technical problem, earthquakes, meteor strikes) would be to keep a copy of your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is the ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper, though, it may not have helped you with the latest problem, as it affected multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
(I would have posted this answer on server fault, but I couldn't find the question)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked in a hosting company, we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and, where we could, they had different physical routes to the building; the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. This might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc., which are typically only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so it couldn't be reconfigured at all.
Just a thought - your DC doesn't host any of the Wikileaks mirrors, or Paypal/Mastercard/Amazon (who are getting DDOS'd by wikileaks supporters at the moment)?