I have tried several times, on several subscriptions using a couple of different accounts, and I keep running into the same exact issue when attempting to deploy a new Service Fabric cluster through the Azure portal. I tried this with both secure and unsecured clusters (to rule out my certificate setup) and with both 5-node clusters and single-node test clusters. In all cases the error was exactly the same.
At step 4, in all cases, the portal indicates that the portal-generated ARM template is valid and allows me to start the deployment process. After about 10 minutes I get the dreaded Deployment Failed icon on my dashboard for the 20th time!
Clicking on the icon takes me to the error logs, which indicate that there was an issue with "Write Deployments".
I also see that all the necessary resource types have been generated (storage accounts, VM scale sets, etc.).
However, looking at the VM Scale Set I see another (more descriptive) issue: a provisioning error with the code "ProvisioningState/failed/InternalDiskManagementError", stating that an internal disk management error occurred.
I am at a complete loss. I am not doing anything custom; this is all through the Azure portal, and as I mentioned I tried both simple test clusters without security or logging and 5-node clusters with security and logging enabled. In all cases I get the same exact error, across 3 different Azure accounts.
The only other things I might try are different regions (I've only been targeting West US 2) and maybe some variants on the VM size (I've been targeting A0 for cost).
Has anyone else run into similar issues? I've been able to deploy clusters before (a few months back) but ever since then I keep getting stopped by this bug!
UPDATE 1
I attempted a deployment in West US 2 using the A1_V2 VM Size and I again got the Write Deployment failure, but this time on the VM Scale Set I have a different error:
ProvisioningState/failed/VMExtensionHandlerNonTransientError
Handler 'Microsoft.Azure.Diagnostics.IaaSDiagnostics' has reported failure for VM Extension 'VMDiagnosticsVmExt_vmNodeType0Name' with terminal error code '1007' and error message: 'Install failed for plugin (name: Microsoft.Azure.Diagnostics.IaaSDiagnostics, version 1.10.0.0) with exception Command C:\Packages\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.10.0.0\DiagnosticsInstall.cmd of Microsoft.Azure.Diagnostics.IaaSDiagnostics has not exited on time! Killing it...'
UPDATE 2
I made a deployment in Central US using a D sized VM and was able to deploy just fine. At this point it seems that either the Region or the VM Size is what is causing issues. Going to make a few more deployments using various VM sizes and regions and will continue updating here with my findings...
UPDATE 3
Was able to create a single node Standard_D1_v2 cluster in West US 2.
UPDATE 4
Was able to create a 3 node Standard_A2_v2 cluster in West US 2.
Region is not the issue.....
UPDATE 5
A second attempt at deploying A1_V2 VM in West US 2 resulted in the same error as the last time this VM size was used:
ProvisioningState/failed/VMExtensionHandlerNonTransientError
FINAL UPDATE
The issue seems to be that the VMs I was using are underpowered.
I hope Microsoft updates the portal so the next developer does not run into the same issues as me. Right now the portal makes you think that your setup is valid (it even passes the validation in step 4) and then fails without any clarity. I opened a support ticket and even the Azure techs are giving me the runaround, having me check my Resource Provider settings! They have no clue that I'm using insufficient VM sizes!
I also think it's way too expensive for developers to have to pay so much just to get some test nodes up in the cloud. And I'm still perplexed that I was able to get a 5-node A0 cluster up and running before, but no longer can! Maybe there was a Service Fabric software update since then?
The recommended VM SKU is Standard D3 or Standard D3_V2 or equivalent with a minimum of 14 GB of local SSD.
The minimum supported use VM SKU is Standard D1 or Standard D1_V2 or equivalent with a minimum of 14 GB of local SSD.
Partial core VM SKUs like Standard A0 are not supported for production workloads.
Standard A1 SKU is not supported for production workloads for performance reasons.
Source
These errors are usually caused by using unsupported VM sizes. As a workaround for test clusters, you can first deploy using something like D3_V2 and after successful deployment, scale down.
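For a throwaway test cluster, a rough Azure CLI sketch of that "deploy big, then shrink" workaround might look like the following. The resource group and scale set names are placeholders, and for anything beyond a test cluster you should follow the documented Service Fabric vertical-scaling procedure rather than editing the scale set SKU directly.

# Shrink the VM size of the scale set behind a *test* cluster after a successful deployment (placeholder names).
az vmss update --resource-group MySfTestRg --name vmNodeType0Name --set sku.name=Standard_D1_v2
# Roll the updated model out to the existing instances
az vmss update-instances --resource-group MySfTestRg --name vmNodeType0Name --instance-ids "*"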
We're looking for an automated way to horizontally and vertically scale the pool of self-hosted integration runtime (IR) virtual machines used in ADF.
Reading the Microsoft docs does not provide an answer.
Well, I don't have the experience, so I can only give you a theoretical answer, but maybe it's helpful for you.
AFAIK, neither direction is configurable out-of-the-box. For scale-out you'll have to deploy an additional IR machine yourself. So you'll probably want to create an image that you can provision from Docker or Kubernetes and that has the IR and its prerequisites installed. The IR installation provides a PowerShell script that can be used to create an automated connection.
For scale-up/down, you'll have to run some script that resizes your VM. In an IaaS solution (e.g. an Azure VM), that should be doable with an API call to change the VM size.
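For an Azure IaaS VM, that resize call could look roughly like the following with the Azure CLI (the resource group, VM name, and sizes are placeholders; keep in mind a resize restarts the VM):

# Hypothetical names; resizing restarts the VM, so expect a short outage for jobs running on that node.
az vm resize --resource-group MyIrRg --name MyIrVm --size Standard_D4s_v3
# Later, to scale back down:
az vm resize --resource-group MyIrRg --name MyIrVm --size Standard_D2s_v3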
For both cases you'll have to have some kind of monitor in place that watches the IR load and makes changes as needed. I think the metrics provided by Data Factory should do. Maybe you can use Log Analytics to monitor the loads.
I'm curious about your use case for this.
My solution is just for scaling out/in since the VM must be restarted if you are scaling up/down, which causes downtime and job failures etc.
At a high level this solution requires just 3 simple things:
Azure Metric Alert that fires when Scale-Out should occur (VM Start)
Azure Metric Alert that fires when Scale-In should occur (VM Deallocation)
Logic App that is triggered by the Azure Alert and actually executes the start/stop of the VM, along with any other automation associated with this (e.g. posting to a Teams channel when a scale in/out occurs)
Here are more details on how we set up the conditions for the alerts, but the main metrics to keep in mind are IR CPU %, IR queue length, number of nodes, and possibly IR memory.
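As a rough idea of what such alerts could look like from the CLI (the factory name, resource group, action group, thresholds, and the dimension filter below are all placeholders, and the metric names should be double-checked against your Data Factory's Metrics blade), a Scale-Out alert might be created like this, with the Scale-In alert mirroring it using the opposite condition (e.g. a low-CPU / empty-queue threshold):

# Hypothetical Scale-Out alert: fire when the self-hosted IR CPU stays high.
az monitor metrics alert create --name ir-scale-out --resource-group MyAdfRg \
  --scopes /subscriptions/<sub-id>/resourceGroups/MyAdfRg/providers/Microsoft.DataFactory/factories/MyFactory \
  --condition "avg IntegrationRuntimeCpuPercentage > 75 where IntegrationRuntimeName includes MySelfHostedIr" \
  --window-size 15m --evaluation-frequency 5m \
  --action /subscriptions/<sub-id>/resourceGroups/MyAdfRg/providers/microsoft.insights/actionGroups/ir-scaling-logic-app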
Scale-Out
Scale-In
Actions for Alerts
As you can see below, we have both alerts triggering a single Logic App. Using the payload that is passed to the Logic App, you can determine whether the Logic App should be starting the VM or stopping the VM (as well as performing any other additional actions).
Logic App
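The Logic App itself is built in the designer, but conceptually the decision it makes boils down to something like the following sketch (this assumes the common alert schema is enabled on the action group; the alert and resource names are placeholders carried over from the hypothetical alerts above):

# Decide start vs. deallocate based on which alert fired (common alert schema payload).
ALERT=$(jq -r '.data.essentials.alertRule' payload.json)
case "$ALERT" in
  ir-scale-out) az vm start      --resource-group MyIrRg --name MyIrNode2Vm ;;
  ir-scale-in)  az vm deallocate --resource-group MyIrRg --name MyIrNode2Vm ;;
esac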
There is a small chance that, due to timing (and depending on how many ADFs the IR is shared with), pipeline activities could be sent to Node 2 at the same time a deallocation command is sent to the VM for Node 2. I have not seen this yet, but adjusting the alert conditions to your needs could help avoid it. Feel free to play around with the conditions of the alerts, granularity, thresholds, etc. This is not a one-size-fits-all solution.
I have an app service running that has 8 instances running in the service plan.
The app is written in asp dotnet core, it's an older version than is currently available.
Occasionally I have an issue where the servers start returning a high number of 5xx errors after a period of sustained load.
It appears that only one instance is having an issue - which is causing the failed request rate to climb.
I've noticed that there is a corresponding increase in the "locally written bytes" on the instance that is having problems - I am not writing any data locally so I am confused as to what this metric is actually measuring. In addition the number of open connections goes high and then stays high - rebooting the problematic instance doesn't seem to achieve anything.
The only thing I suspect is that we are copying data from a user's request straight into Azure Blob Storage using UploadFromStreamAsync on the HttpRequest.Body, with the data coming from a mobile phone app.
Microsoft support suggested we switch to using local cache as an option to reduce issues with storage; however, this has not resolved the issue.
Can anyone tell me what "locally written bytes" is actually measuring? There is little documentation on this metric that I can find on Google.
My question (to MS and anyone else) is: Why is this issue occurring and what work around can be implemented by the users / customers themselves as opposed to by Microsoft Support?
There have obviously been 'a few' other questions about this issue:
Managed Azure Kubernetes connection error
Can't contact our Azure-AKS kube - TLS handshake timeout
Azure Kubernetes: TLS handshake timeout (this one has some Microsoft feedback)
And multiple GitHub issues posted to the AKS repo:
https://github.com/Azure/AKS/issues/112
https://github.com/Azure/AKS/issues/124
https://github.com/Azure/AKS/issues/164
https://github.com/Azure/AKS/issues/177
https://github.com/Azure/AKS/issues/324
Plus a few twitter threads:
https://twitter.com/ternel/status/955871839305261057
TL;DR
Skip to workarounds in Answers below.
The current best solution is to post a help ticket — and wait — or re-create your AKS cluster (maybe more than once; cross your fingers, see below...), but there should be something better. At least please grant AKS preview customers, regardless of support tier, the ability to upgrade their support request severity for THIS specific issue.
You can also try scaling your Cluster (assuming that doesn't break your app).
What about GitHub?
Many of the above GitHub issues have been closed as resolved but the issue persists. Previously there was an announcements document regarding the problem but no such status updates are currently available even though the problem continues to present itself:
https://github.com/Azure/AKS/tree/master/annoucements
I am posting this as I have a few new tidbits that I haven't seen elsewhere and I am wondering if anyone has ideas as far as other potential options for working around the issue.
Affected VM / Node Resource Usage
The first piece I haven't seen mentioned elsewhere is Resource usage on the nodes / vms / instances that are being impacted by the above Kubectl 'Unable to connect to the server: net/http: TLS handshake timeout' issue.
Production Node Utilization
The node(s) on my impacted cluster look like this:
The drop in utilization and network io correlates strongly with both the increase in disk utilization AND the time period we began experiencing the issue.
The overall Node / VM utilization is generally flat prior to this chart for the previous 30 days with a few bumps relating to production site traffic / update pushes etc.
Metrics After Issue Mitigation (Added Postmortem)
To the above point, here are the metrics for the same Node after scaling up and then back down (which happened to alleviate our issue, but does not always work — see answers at bottom):
Notice the 'dip' in CPU and Network? That's where the net/http: TLS issue impacted us — and when the AKS server was unreachable from Kubectl. It seems like it wasn't talking to the VM / Node in addition to not responding to our requests.
As soon as we were back (scaled the # of nodes up by one, and back down — see answers for the workaround) the metrics (CPU etc.) went back to normal — and we could connect from Kubectl. This means we can probably create an alarm off of this behavior (and I have an open issue asking about this on the Azure DevOps side: https://github.com/Azure/AKS/issues/416)
Node Size Potentially Impacts Issue Frequency
Zimmergren over on GitHub indicates that he has fewer issues with larger instances than he did running bare-bones smaller nodes. This makes sense to me and could indicate that the way the AKS servers divvy up the workload (see next section) could be based on the size of the instances.
"The size of the nodes (e.g. D2, A4, etc) :)
I've experienced that when running A4 and up, my cluster is healther than if running A2, for example. (And I've got more than a dozen similar experiences with size combinations and cluster failures, unfortunately)." (https://github.com/Azure/AKS/issues/268#issuecomment-375715435)
Other Cluster size impact references:
giorgited (https://github.com/Azure/AKS/issues/268#issuecomment-376390692)
An AKS server responsible for a larger number of smaller Clusters may possibly get hit more often?
Existence of Multiple AKS Management 'Servers' in one Az Region
The next thing I haven't seen mentioned elsewhere is the fact that you can have multiple Clusters running side by side in the same Region, where one Cluster (production for us in this case) gets hit with 'net/http: TLS handshake timeout' while the other is working fine and can be connected to normally via Kubectl (for us this is our identical staging environment).
The fact that users (Zimmergren etc above) seem to feel that the Node size impacts the likelihood that this issue will impact you also seems to indicate that node size may relate to the way the sub-region responsibilities are assigned to the sub-regional AKS management servers.
That could mean that re-creating your cluster with a different Cluster size would be more likely to place you on a different management server — alleviating the issue and reducing the likelihood that multiple re-creations would be necessary.
Staging Cluster Utilization
Both of our AKS Clusters are in U.S. East. As a reference to the above 'Production' Cluster metrics our 'Staging' Cluster (also U.S. East) resource utilization does not have the massive drop in CPU / Network IO — AND does not have the increase in disk etc. over the same period:
Identical Environments are Impacted Differently
Both of our Clusters are running identical ingresses, services, pods, containers so it is also unlikely that anything a user is doing causes this problem to crop up.
Re-creation is only SOMETIMES successful
The above existence of multiple AKS management server sub-regional responsibilities makes sense with the behavior described by other users on github (https://github.com/Azure/AKS/issues/112) where some users are able to re-create a cluster (which can then be contacted) while others re-create and still have issues.
Emergency could = Multiple Re-Creations
In an emergency (ie your production site... like ours... needs to be managed) you can PROBABLY just re-create until you get a working cluster that happens to land on a different AKS management server instance (one that is not impacted) but be aware that this may not happen on your first attempt — AKS cluster re-creation is not exactly instant.
That said...
Resources on the Impacted Nodes Continue to Function
All of the containers / ingresses / resources on our impacted VM appear to be working well, and I don't have any alarms going off for up-time / resource monitoring (other than the utilization weirdness listed above in the graphs).
I want to know why this issue is occurring and what work around can be implemented by the users themselves as opposed to by Microsoft Support (currently have a ticket in). If you have an idea let me know.
Potential Hints at the Cause
https://github.com/Azure/AKS/issues/164#issuecomment-363613110
https://github.com/Azure/AKS/issues/164#issuecomment-365389154
Why no GKE?
I understand that Azure AKS is in preview and that a lot of people have moved to GKE because of this problem. That said, my Azure experience has been nothing but positive thus far and I would prefer to contribute a solution if at all possible.
And also... GKE occasionally faces something similar:
TLS handshake timeout with kubernetes in GKE
I would be interested to see if scaling the nodes on GKE also solved the problem over there.
Workaround 1 (May Not Work for Everyone)
An interesting solution (worked for me) to test is scaling the number of nodes in your cluster up, and then back down...
Log into the Azure Console — Kubernetes Service blade.
Scale your cluster up by 1 node.
Wait for scale to complete and attempt to connect (you should be able to).
Scale your cluster back down to the normal size to avoid cost increases.
Alternately you can (maybe) do this from the command line:
az aks scale --name <name-of-cluster> --node-count <new-number-of-nodes> --resource-group <name-of-cluster-resource-group>
Since it is a finicky issue and I used the web interface I am uncertain if the above is identical or would work.
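For reference, the CLI sequence I would expect to be equivalent to the portal steps looks roughly like this (untested for this particular workaround; the resource group and cluster names are placeholders):

# Read the current node count, scale up by one, verify connectivity, then scale back down.
COUNT=$(az aks show --resource-group MyAksRg --name MyAksCluster --query "agentPoolProfiles[0].count" -o tsv)
az aks scale --resource-group MyAksRg --name MyAksCluster --node-count $((COUNT + 1))
kubectl get nodes   # should respond again once the scale operation completes
az aks scale --resource-group MyAksRg --name MyAksCluster --node-count "$COUNT"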
Total time it took me ~2 minutes — for my situation that is MUCH better than re-creating / configuring a Cluster (potentially multiple times...)
That being Said....
Zimmergren brings up some good points that Scaling is not a true Solution:
"It worked sometimes, where the cluster self-healed a period after scaling. It failed sometimes with the same errors. I don't consider scaling a solution to this problem, as that causes other challenges depending on how things are set up. I wouldn't trust that routine for a GA workload, that's for sure. In the current preview, it's a bit wild west (and expected), and I'm happy to blow up the cluster and create a new one when this fails continuously." (https://github.com/Azure/AKS/issues/268#issuecomment-395299308)
Azure Support Feedback
Since I had a support ticket open at the time I ran into the above scaling solution, I was able to get feedback (or rather a guess) on why the above might have worked; here's a paraphrased response:
"I know that scaling the cluster can sometimes help if you get into a state where the number of nodes is mismatched between “az aks show” and “kubectl get nodes”. This may be similar."
Workaround References:
A GitHub user scaled nodes from the console and fixed the problem: https://github.com/Azure/AKS/issues/268#issuecomment-375722317
Workaround Didn't Work?
If this DOES NOT work for you, please post a comment below as I am going to try to keep an up to date list of how often the issue crops up, whether it resolves itself, and whether this solution works across Azure AKS users (looks like it doesn't work for everyone).
Users Scaling Up / Down DID NOT work for:
omgsarge (https://github.com/Azure/AKS/issues/112#issuecomment-395231681)
Zimmergren (https://github.com/Azure/AKS/issues/268#issuecomment-395299308)
sercand — the scale operation itself failed — not sure if it would have impacted connectivity (https://github.com/Azure/AKS/issues/268#issuecomment-395301296)
Scaling Up / Down DID work for:
Me
LohithChanda (https://github.com/Azure/AKS/issues/268#issuecomment-395207716)
Zimmergren (https://github.com/Azure/AKS/issues/268#issuecomment-395299308)
Email Azure AKS Specific Support
If after all the diagnosis you still suffer from this issue, please don't hesitate to send an email to aks-help@service.microsoft.com
Adding another answer since this is now the Azure Support official solution when the above attempts do not work. I haven't experienced the issue in a while so I can't verify this one but it seems like it would make sense to me (based on previous experience).
Credit on this one / full thread found here (https://github.com/Azure/AKS/issues/14#issuecomment-424828690)
Check for Tunneling Issues
SSH to the agent node that is running the tunnelfront pod
Get the tunnelfront logs: "docker ps" -> "docker logs <tunnelfront-container-id>"
Run "nslookup <ssh-server-fqdn>", where the FQDN can be taken from the log output above -> if it resolves to an IP, DNS works, so go to the following step
Run "ssh -vv azureuser@<ssh-server-fqdn> -p 9000" -> if the port is working, go to the next step
Run "docker exec -it <tunnelfront-container-id> /bin/bash" and type "ping google.com"; if there is no response, the tunnelfront pod doesn't have external network access, so do the following step
Restart kube-proxy using "kubectl delete po <kube-proxy-pod> -n kube-system", choosing the kube-proxy that is running on the same node as tunnelfront (you can find it with "kubectl get po -n kube-system -o wide")
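(The angle-bracket values above are placeholders you substitute on your own cluster.) If it helps, the last step can also be scripted roughly like this; the awk column index assumes the default layout of "kubectl get po -o wide" (NODE is the 7th column), so adjust if your kubectl version prints different columns:

# Find the node hosting tunnelfront, then delete the kube-proxy pod on that same node.
NODE=$(kubectl get po -n kube-system -o wide | awk '/tunnelfront/ {print $7}')
KPROXY=$(kubectl get po -n kube-system -o wide | awk -v n="$NODE" '/kube-proxy/ && $7 == n {print $1}')
kubectl delete po "$KPROXY" -n kube-system   # the DaemonSet recreates it, restarting kube-proxy on that node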
I feel like this particular work-around could PROBABLY be automated (certainly on the Azure side, and probably on the community side too).
Email Azure AKS Specific Support
If after all the diagnosis you still suffer from this issue, please don't hesitate to send an email to aks-help@service.microsoft.com
Workaround 2 Re-Create Cluster (Somewhat Obvious)
I am adding this one because there are some details to keep in mind and even though I touched on it in my original Question, that thing got long, so I am adding specific details on re-creation here.
Cluster Re-Creation Doesn't Always Work
Per the above in my original question there are multiple AKS Server instances that divide up responsibilities for a given Azure region (we think). Some, or all, of these can be impacted by this bug resulting in your Cluster being un-reachable via Kubectl.
That means that if you re-create your Cluster and it somehow lands on the same AKS server, that new Cluster will probably ALSO not be reachable, requiring...
Additional Re-creation Attempts
Probably re-creating multiple times will result in you eventually landing your new Cluster on one of the other AKS servers (which is working fine).
As far as I can tell, I don't see any indication that ALL AKS servers get hit with this problem at once very often (if ever).
Different Cluster Node Size
If you are in a pinch and want the highest possible probability (we haven't confirmed this) that your re-creation lands on a different AKS management server — choose a different Node size when you create your new Cluster (see the Node Size section of the initial Question above).
I have opened this ticket asking Azure DevOps whether or not the Node Size is ACTUALLY related to deciding which Clusters are administered by which AKS management servers: https://github.com/Azure/AKS/issues/416
Support Ticket Fix vs. Self Healing
Since there are a lot of users who indicate that the problem occasionally solves itself and just goes away I think that it is reasonable to guess that Support actually fixes the offending AKS server (which may result in other users having their Clusters fixed — 'Self Heal') as opposed to fixing the individual user's Cluster.
Creating Support Tickets
To me the above would likely mean that creating a Ticket is probably a good thing since it would fix other user Clusters experiencing the same issue — it might also be an argument for allowing support issue severity escalation for this specific issue.
I think this is also a decent indicator that maybe Azure support hasn't figured out how to fully alarm for the problem yet, in which case creation of a support ticket serves that purpose as well.
I also asked Azure DevOps whether they Alarm for the issue (based on my experience easily visualizing the issue based on CPU and Network IO metric changes) on their side: https://github.com/Azure/AKS/issues/416
If NOT (I haven't heard back), then it makes sense to create a ticket EVEN IF you plan to re-create your cluster, since that ticket would make Azure DevOps aware of the issue, resulting in a fix for other users on that Cluster management server.
Things to make Cluster Re-Creation Easier
I will add to this (feedback / ideas are appreciated) but off the top of my head:
Be diligent (obvious) about how you store all YAML files used to create your Cluster (even if you don't re-deploy often for your app by design).
Script your DNS modifications in order to speed up pointing to the new instance, if you have a public-facing app / service that utilizes DNS (maybe something like this example for Google Domains?: https://gist.github.com/cyrusboadway/5a7b715665f33c237996, full docs here: https://cloud.google.com/dns/api/v1/). An Azure DNS sketch follows below.
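If your zone happens to be hosted in Azure DNS instead of Google Domains, a hypothetical version of that script could be as small as the following (zone, record set, and IPs are placeholders):

# Point the A record at the new cluster's ingress / load-balancer IP
az network dns record-set a remove-record --resource-group MyDnsRg --zone-name example.com --record-set-name www --ipv4-address <old-ip>
az network dns record-set a add-record --resource-group MyDnsRg --zone-name example.com --record-set-name www --ipv4-address <new-ip>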
We just had this issue for one of our clusters. We sent a support ticket and got called back 5 minutes later by an engineer asking if it was OK for them to restart the API Server. Two minutes later it was working again.
The reason was something about timeouts in their messaging queue.
This morning I found 5 of my Azure Virtual machines to be stuck in Starting mode.
All other VMs are working ok.
I managed to stop the VMs using the Azure command shell and then start them again, but they are still stuck in Starting mode with no end in sight.
It has now been over 5 1/2 hours and they are still stuck in Starting mode.
I have contacted Microsoft support but they are taking hours to respond :(((
The Azure Status page doesn't show anything is wrong in my region.
Is anybody else experiencing this problem?
We've had the same issue and it's linked to a big issue Azure is having this morning.
The trick we used in order to get the instances running again (an equivalent Azure CLI sketch follows the list):
1. stop the VMs via Powershell
2. change the size of the VM and back (preferably from an A-series to a D-series size, as this is different hardware)
3. start the VM
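We used the PowerShell cmdlets, but with today's Azure CLI the same sequence would look roughly like this untested sketch (resource group, VM name, and sizes are placeholders; the two resizes mirror the "change size and back" step above):

az vm deallocate --resource-group MyRg --name MyStuckVm
az vm resize --resource-group MyRg --name MyStuckVm --size Standard_D2_v3    # switch to different hardware
az vm resize --resource-group MyRg --name MyStuckVm --size Standard_A2_v2    # optionally switch back to the original size
az vm start --resource-group MyRg --name MyStuckVm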
We also have people complaining about RDP not working where reboots fixed the problem.
There are currently some problems with Azure, including the VM service. Also, the status page does not reflect all of the problems. Keep in mind that this page only shows impacts affecting most of the service's customers; it does not reflect minor outages for single customers. You should keep an eye on the Azure blog, which may give a statement about the current problems.
What works for me is a redeploy of the virtual machine within the Azure Portal whenever it gets stuck at "Starting...". Although it takes half an hour to redeploy, it solves the issue. More details here.
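The same redeploy can also be triggered from the Azure CLI, which is handy if the portal is misbehaving (resource group and VM names are placeholders):

# Moves the VM to a new host and restarts it; data on the temporary disk is lost.
az vm redeploy --resource-group MyRg --name MyStuckVm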
I experienced the same problem, and what I did was resize the virtual machine's disk. You can also increase the whole VM size / power, but for me resizing the disk fixed it; it was probably updating and ran out of disk space.
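On current ARM VMs with managed disks, growing the OS disk the way this answer describes could look roughly like this hedged sketch (names and size are placeholders; the VM has to be deallocated before the disk can be resized):

az vm deallocate --resource-group MyRg --name MyVm
# Look up the OS disk name, then grow it
OSDISK=$(az vm show --resource-group MyRg --name MyVm --query "storageProfile.osDisk.name" -o tsv)
az disk update --resource-group MyRg --name "$OSDISK" --size-gb 256
az vm start --resource-group MyRg --name MyVm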
I am on a Windows Azure trial to evaluate migrating a number of commercial ASP.NET sites to Azure from dedicated hosting. All was going OK ... until just now!
Some background - the sites are set up under Web Roles (i.e. as opposed to Web Sites) using SQL Azure and SQL Reporting. The site content was under the X: drive (there was also a B: drive that seemed to be mapped to the same location). There are several days left of the trial.
Without any apparent warning my test sites suddenly stopped working. Examining the server (through RDP) I saw that the B: and X: drives had disappeared (just C:, D: and E: were left, I think), and in IIS the application pools and sites had disappeared. In the Portal, however, nothing seemed to have changed - the same services & config seemed to be there.
Then about 20 minutes later the missing drives, app pools and sites reappeared and my test sites started working again! However, the B: drive was gone and now there was an F: drive (showing the same as X:); also the MS ReportViewer 2008 control that I had installed earlier in the day was gone. It is almost as if the server had been replaced with another (but the IIS config was restored from the original).
As you can imagine, this makes me worried! If this is something that could happen in production there is no way I would consider hosting commercial sites for clients on Azure (unless there is some redundancy system available to keep a site up when such a failure occurs).
Can anyone explain what may have happened, if this is possible/predictable under a live subscription, and if so how to work around it?
One other thing to keep in mind is that an Azure Web Role is not persistent. I'm not sure how you installed the MS Report Viewer 2008 control but anything you add or install outside of a deployment package when you push your solution to Azure is not guaranteed to be available at some future point.
I admit that I don't fully understand the full picture when it comes to the overall architecture of Azure, but I do know that Web Roles can and do re-create themselves from time to time. When the role recycles, it returns to the state it was in when it was deployed. This is why Microsoft suggests using at least 2 instances of your role: while one or the other may recycle, they will never both recycle at the same time, which is part of what guarantees the 99.9% uptime.
You might also want to consider an Azure VM. They are persistent but require you to maintain the server in terms of updates and software much in the way I suspect you are already doing with your dedicated hosting.
I've been hosting my solution in a large (4 core) web role, also using SQL Azure, for about two years and have had great success with it. I have roughly 3,000 users and rarely see the utilization of my web role go over 2% (meaning I've got a lot of room to grow). Overall it is a great hosting solution in my opinion.
According to the Azure SLA, Microsoft guarantees uptime of 99.9% or higher on all its products per billing month. (20 minutes in the month would be roughly 0.05% downtime, not being critical, just suggesting that they are still within their SLA.)
The current status shows that SQL databases were having issues in US North last night, but all services appear to be up currently.
Personally, I have seen the dashboard go down and report very weird problems, but the services that I programmed worked just fine all the way through it. When I experienced this problem it was reported on the Azure Status page, the platform status and the twitter feed.
While I have seen bumps, they are few and far between, and I find reliability to be perceptibly higher than other providers that I have worked with.
As for workarounds, I would suggest Standard mode for your websites and increasing the number of instances of the site. You might also try looking into the new add-ons that are available with the latest Azure release. Active Cloud Monitoring by MetricsHub might be what you require.
It sounds like you're expecting the web role to act as a Virtual Machine instance.
Web Roles aren't persistent (the machine can be destroyed and recreated at any time), so you should do any additional required set up as a 'startup task' in your Azure project (never install software manually).
Because of this you need at least 2 instances so that rolling upgrades (i.e. Windows security patches, hotfixes and so on) can be performed automatically without having your entire deployment taken offline.
If this doesn't suit your use case then you should look at Azure Virtual Machines, but you'll need to manage updates and so on yourself. It's usually better to use Web Roles properly as you can then do scaling and so on a lot more easily.