I'm looking to create Cassandra tables through Terraform, on Azure. I already have the relevant keyspaces in place.
My deployment leverages azurerm; however, their provider lacks a Cassandra tables resource.
As of now, I can only deploy Cassandra tables through the Azure portal UI or with Azure CLI scripting, but this isn't the best solution for a variety of reasons.
Is there a provider that could help me with this? I've been looking around, but it seems there isn't much I can leverage.
For whatever reason it looks like HashiCorp never implemented Cassandra tables in their provider; the source code is simply missing the implementation.
I suggest filing a feature request on their repo. You can do that here.
A workaround is to deploy the resource to Azure as an ARM template through the azurerm provider, using an "Incremental" deployment:
resource "azurerm_resource_group_template_deployment" "example" {
depends_on = [module.cassandratest]
name = "example-cassandra-tables"
resource_group_name = azurerm_resource_group.test.name
deployment_mode = "Incremental"
template_content = templatefile("resources/templatecosmos.json", {
cosmos_db_account_name = "test-cassandra-2",
keyspace_name = "keyspace1",
table_name = "test-table-2",
autoscale_max_throughput = 4000
})
}
CosmosDB Cassandra templates are documented here. An example of the contents of resources/templatecosmos.json:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"accountName": {
"type": "string",
"defaultValue": "${cosmos_db_account_name}",
"metadata": {
"description": "Cosmos DB account name, max length 44 characters"
}
},
"keyspaceName": {
"type": "string",
"defaultValue": "${keyspace_name}",
"metadata": {
"description": "The name for the Cassandra Keyspace"
}
},
"tableName": {
"type": "string",
"defaultValue": "${table_name}",
"metadata": {
"description": "The name for the Cassandra table"
}
},
"autoscaleMaxThroughput": {
"type": "int",
"defaultValue": "[int(${autoscale_max_throughput})]",
"minValue": 4000,
"maxValue": 1000000,
"metadata": {
"description": "Maximum autoscale throughput for the Cassandra table"
}
}
},
"variables": {
"accountName": "[toLower(parameters('accountName'))]",
"databaseRef": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]",
"keyspaceRef": "[resourceId('Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces', parameters('accountName'), parameters('keyspaceName'))]"
},
"resources": [
{
"type": "Microsoft.DocumentDb/databaseAccounts/cassandraKeyspaces/tables",
"name": "[concat(variables('accountName'), '/', parameters('keyspaceName'), '/', parameters('tableName'))]",
"apiVersion": "2020-04-01",
"properties": {
"resource": {
"id": "[concat(parameters('tableName'))]",
"schema": {
"columns": [
{
"name": "loadid",
"type": "uuid"
}
],
"partitionKeys": [
{ "name": "machine" },
{ "name": "cpu" },
{ "name": "mtime" }
],
"clusterKeys": [
{
"name": "loadid",
"orderBy": "asc"
}
]
}
},
"options": {
"autoscaleSettings": {
"maxThroughput": "[parameters('autoscaleMaxThroughput')]"
}
}
}
}
]
}
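For clarity, the ${...} placeholders in the template above are filled in by Terraform's templatefile() before the JSON ever reaches Azure, so with the values from the Terraform block the rendered parameter defaults end up as plain literals, roughly:
"accountName": { "type": "string", "defaultValue": "test-cassandra-2" },
"keyspaceName": { "type": "string", "defaultValue": "keyspace1" },
"tableName": { "type": "string", "defaultValue": "test-table-2" },
"autoscaleMaxThroughput": { "type": "int", "defaultValue": "[int(4000)]" }
ARM then evaluates "[int(4000)]" to the integer 4000 at deployment time.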
Related
I'm trying to manage CA certificates in Azure APIM through ARM, but everything I tried gave no positive result.
When I look at the Microsoft.ApiManagement/service schema, there's a certificates section where I can set the storeName property, but setting it has no effect.
As a sanity check, I uploaded the certificate through PowerShell and manually; both options worked, but the CA certificate got wiped from the APIM at each deployment of my ARM template, even when I used the "Incremental" mode.
First I tried to modify the APIM ARM template by adding this block to the "properties" section:
"certificates": [
{
"encodedCertificate": "[parameters('RootCertificateBase64Content')]",
"certificatePassword": "[parameters('RootCertificatePassword')]",
"storeName": "Root"
}]
Here's my first test snippet for complete traceability:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"apimName": {
"type": "string",
"metadata": {
"description": "Name of the apimanagement"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"sku": {
"type": "string",
"allowedValues": [
"Developer",
"Standard",
"Premium"
],
"defaultValue": "Developer",
"metadata": {
"description": "The pricing tier of this API Management service"
}
},
"skuCapacity": {
"type": "string",
"allowedValues": [
"1",
"2"
],
"defaultValue": "1",
"metadata": {
"description": "The instance size of this API Management service."
}
},
"subnetResourceId": {
"type": "string",
"metadata": {
"description": ""
}
},
"RootCertificateBase64Content": {
"type": "string",
"metadata": {
"description": "The Root certificate content"
}
},
"RootCertificatePassword": {
"type": "string",
"metadata": {
"description": "The Root certificate password"
}
}
},
"variables": {
"publisherEmail": "whatever#heyho.com",
"publisherName": "Whatever Team",
"notificationSenderEmail": "whatever#heygo.com"
},
"resources": [
{
"apiVersion": "2019-12-01",
"name": "[parameters('apimName')]",
"type": "Microsoft.ApiManagement/service",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('sku')]",
"capacity": "[parameters('skuCapacity')]"
},
"properties": {
"notificationSenderEmail": "[variables('notificationSenderEmail')]",
"publisherEmail": "[variables('publisherEmail')]",
"publisherName": "[variables('publisherName')]",
"virtualNetworkConfiguration": {
"subnetResourceId": "[parameters('subnetResourceId')]"
},
"virtualNetworkType": "Internal",
"certificates": [
{
"encodedCertificate": "[parameters('RootCertificateBase64Content')]",
"certificatePassword": "[parameters('RootCertificatePassword')]",
"storeName": "Root"
}]
},
"identity": {
"type": "SystemAssigned"
}
}
],
"outputs": {
"apiManagementPrivateHostIp": {
"type": "string",
"value": "[reference(concat(resourceId('Microsoft.ApiManagement/service', parameters('apimName')))).privateIPAddresses[0]]"
}
}
}
The second alternative I tried was the Microsoft.ApiManagement/service/certificates schema. There is no option there to specify the storeName, so I assumed it wasn't the right schema, but I tried anyway. All attempts created the certificate in the built-in Certificates store instead of the CA Certificates store.
Here's my second attempt's snippet:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"apimName": {
"type": "string",
"metadata": {
"description": "The parent APIM name"
}
},
"certificateName": {
"type": "string",
"metadata": {
"description": "The certificate name"
}
},
"CertificateBase64Content": {
"type": "string",
"metadata": {
"description": "The content of the certificate"
}
},
"CertificatePassword": {
"type": "string",
"metadata": {
"description": "The certificate password"
}
}
},
"resources": [
{
"name": "[concat(parameters('apimName'), '/Root/', parameters('certificateName'))]",
"type": "Microsoft.ApiManagement/service/certificates",
"apiVersion": "2019-01-01",
"properties": {
"data": "[parameters('CertificateBase64Content')]",
"password": "[parameters('CertificatePassword')]"
}
}
],
"outputs": {}
}
Looking at the Terraform documentation, it seems possible to manage these certificates through the base schema, and I confirmed that in the Terraform azurerm provider source code (unfortunately I cannot use Terraform and MUST use ARM in this scenario).
Any clues on how to manage CA certificates in Azure APIM through ARM?
I assume you want to update the CA certificates section of an already existing APIM? If so, just provide all the required properties for Microsoft.ApiManagement/service, but for name use the name of the existing APIM you want to update, and choose the same resource group.
That way the deployment just updates the existing APIM with the properties you provide instead of creating a new one. The required properties are name, type, apiVersion, location, sku and properties. Within properties you need to provide publisherEmail and publisherName, and of course certificates - that is what you want to update after all. So the absolute minimum for an update looks like this:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters":{
"base64EncodedCertificate":{
"defaultValue":"base64 encoded certificate",
"type":"String"
},
"certificatePassword":{
"defaultValue":"certificate password",
"type":"String"
}
},
"variables": {},
"resources": [
{
"name": "existing-apim-name",
"type": "Microsoft.ApiManagement/service",
"apiVersion": "2021-01-01-preview",
"location": "West Europe",
"sku": {
"name": "Developer",
"capacity": 1
},
"properties": {
"publisherEmail": "publisher#gmail.com",
"publisherName": "Publisher Name",
"certificates": [
{
"encodedCertificate": "[parameters('base64EncodedCertificate')]",
"certificatePassword": "[parameters('certificatePassword')]",
"storeName": "Root"
}
]
}
}
]
}
Watch out: the certificates array must contain every certificate that you want to keep on this APIM. Any existing CA certificates that are not in this array will be deleted.
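For example, if the APIM were supposed to keep both a root CA and an intermediate CA, both entries would have to appear in the array on every deployment. A sketch, assuming two extra, hypothetical parameters for the intermediate certificate (Resource Manager tolerates // comments in templates):
"certificates": [
  {
    "encodedCertificate": "[parameters('base64EncodedCertificate')]",
    "certificatePassword": "[parameters('certificatePassword')]",
    "storeName": "Root"
  },
  {
    // hypothetical parameters for an intermediate CA kept in the CA certificates store
    "encodedCertificate": "[parameters('base64EncodedIntermediateCertificate')]",
    "certificatePassword": "[parameters('intermediateCertificatePassword')]",
    "storeName": "CertificateAuthority"
  }
]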
I am working on creating alerts in Azure for various resources using ARM templates, but I want to create a custom alert for Azure Data Factory using the Log Analytics query below:
"alertLogQuery": "ADFPipelineRun\r\n| where ResourceId has 'df-xxx-xxx-xxxx'\r\n| where TimeGenerated > ago(15m)\r\n| where Status has 'Queued'\r\n| where PipelineName in ('pl_xxx_Business_xxx_Check' , 'pl_xxx_xxxx_Date_Check')\r\n| summarize by PipelineName, TimeGenerated\n",
Template file:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"isEnabled": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Specifies whether the alert is enabled"
}
},
"rgNameOfActionGroup": {
"type": "string",
"metadata": {
"description": "The resource group name of the action group"
}
},
"actionGroupName": {
"type": "string",
"metadata": {
"description": "The name of the action group"
}
},
"rgNameOfLogAnalyticsWorkspace": {
"type": "string",
"metadata": {
"description": "The resource group name of the log analytics workspace"
}
},
"logAnalyticsWorkspaceName": {
"type": "string",
"metadata": {
"description": "The name of the log analytics workspace"
}
},
"alertTypes": {
"type": "array",
"metadata": {
"description": "An array that contains objects with properties for the metric alerts."
}
}
},
"variables": {
"actionGroupResourceId": "[concat('/subscriptions/',subscription().subscriptionId, '/resourceGroups/', parameters('rgNameOfActionGroup'), '/providers/Microsoft.insights/actionGroups/', parameters('actionGroupName'))]",
"workspaceResourceId": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', parameters('rgNameOfLogAnalyticsWorkspace'), '/providers/Microsoft.OperationalInsights/workspaces/', parameters('logAnalyticsWorkspaceName'))]",
"copy": [
{
"name": "alertTypes",
"count": "[length(parameters('alertTypes'))]",
"input": "[parameters('alertTypes')[copyIndex('alertTypes')].alertName]"
}
],
"alertSource": {
"Type": "ResultCount"
},
"alertEvaluation": {
"Frequency": 15,
"Time": 15
},
"alertActions": {
"SuppressTimeinMin": 20
}
},
"resources": [
{
"copy": {
"name": "alertTypes",
"count": "[length(parameters('alertTypes'))]"
},
"name": "[parameters('alertTypes')[copyIndex('alertTypes')].alertName]",
"type": "Microsoft.Insights/scheduledQueryRules",
"apiVersion": "2018-04-16",
"location": "global",
"tags": {},
"properties": {
"description": "[parameters('alertTypes')[copyIndex('alertTypes')].alertDescription]",
"enabled": "[parameters('isEnabled')]",
"source": {
"query": "[parameters('alertTypes')[copyIndex('alertTypes')].alertLogQuery]",
"dataSourceId": "[variables('workspaceResourceId')]",
"queryType": "[variables('alertSource').Type]"
},
"schedule": {
"frequencyInMinutes": "[variables('alertEvaluation').Frequency]",
"timeWindowInMinutes": "[variables('alertEvaluation').Time]"
},
"action": {
"odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
"severity": "[parameters('alertTypes')[copyIndex('alertTypes')].alertSeverity]",
"throttlingInMin": "[variables('alertActions').SuppressTimeinMin]",
"aznsAction": {
"actionGroup": "[array(variables('actionGroupResourceId'))]",
"emailSubject": "[parameters('alertTypes')[copyIndex('alertTypes')].alertName]"
},
"trigger": {
"thresholdOperator": "[parameters('alertTypes')[copyIndex('alertTypes')].operator]",
"threshold": "[parameters('alertTypes')[copyIndex('alertTypes')].thresholdValue]",
"metricTrigger": {
"thresholdOperator": "[parameters('alertTypes')[copyIndex('alertTypes')].operator]",
"threshold": "[parameters('alertTypes')[copyIndex('alertTypes')].thresholdValue]",
"metricColumn": "Classification",
"metricTriggerType": "Consecutive"
}
}
}
}
}
],
"outputs": {
"alertNames": {
"type": "array",
"value": "[variables('alertTypes')]"
}
}
}
I'm getting the below error:
Template validation failed: The template resource 'df-xx-xx-xxx-Queued Demo ADF pipelines alert/report' for type 'Microsoft.WindowsAzure.ResourceStack.Frontdoor.Common.Entities.TemplateGenericProperty`1[System.String]' at line '71' and column '60' has incorrect segment lengths. A nested resource type must have identical number of segments as its resource name. A root resource type must have segment length one greater than its resource name.
Can anyone suggest how to fix this issue?
Please refer to this link. In the variables -> alertSource section, you can add your custom alert query there:
"alertSource":{
"Query":"write your query here",
"SourceId": "xxxxx",
"Type":"xxxx"
},
Note that you need to escape characters such as double quotes in your query, if it contains any.
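For instance, if the ADF query from the question were written with double quotes, the alertSource variable would look roughly like this (SourceId reuses the workspaceResourceId variable already defined in the template above):
"alertSource": {
  "Query": "ADFPipelineRun | where TimeGenerated > ago(15m) | where Status has \"Queued\" | summarize by PipelineName, TimeGenerated",
  "SourceId": "[variables('workspaceResourceId')]",
  "Type": "ResultCount"
},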
I'm trying to assign a role to a Cosmos DB account by using the following template.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"principalId": {
"type": "string",
"defaultValue": "gb9e32f1-678f-4552-ae0a-0000f765aaaa",
"metadata": {
"description": ""
}
},
"CosmosDbAccountName": {
"type": "string",
"defaultValue": "cosmosdbaccount",
"metadata": {
"description": "Cosmos Db Account name"
}
},
"RoleType": {
"defaultValue" : "Contributor",
"type": "string",
"metadata": {
"description": "Built-in role to assign"
},
"allowedValues" : [
"Contributor"
]
}
},
"variables": {
"Scope": "[concat(parameters('CosmosDbAccountName'),'/Microsoft.Authorization/',guid(subscription().subscriptionId))]"
},
"resources": [
{
"type": "Microsoft.DocumentDB/databaseAccounts/providers/roleAssignments",
"name": "[variables('Scope')]",
"apiVersion":"2020-04-01-preview",
"properties": {
"RoleDefinitionId":"/subscriptions/[subscription().subscriptionId]/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
"principalId": "[parameters('principalId')]"
}
}
]
}
I am currently getting this error:
{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"BadRequest","message":"{\r\n "error": {\r\n "code": "RoleAssignmentUpdateNotPermitted",\r\n "message": "Tenant ID, application ID, principal ID, and scope are not allowed to be updated."\r\n }\r\n}"}]}
I think there is an existing role assignment with the same name as the one you are trying to create through this template, and that is what produces the "RoleAssignmentUpdateNotPermitted" error.
A few changes to your template can solve the problem, such as generating a unique GUID and concatenating it with the Cosmos DB account name, and building the roleDefinitionId with an ARM expression. Please try the updated template below:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"principalId": {
"type": "string",
"defaultValue": "gb9e32f1-678f-4552-ae0a-0000f765aaaa",
"metadata": {
"description": ""
}
},
"CosmosDbAccountName": {
"type": "string",
"defaultValue": "cosmosdbaccount",
"metadata": {
"description": "Cosmos Db Account name"
}
},
"RoleType": {
"defaultValue" : "Contributor",
"type": "string",
"metadata": {
"description": "Built-in role to assign"
},
"allowedValues" : [
"Contributor"
]
},
"guid": {
"defaultValue": "[newGuid()]",
"type": "String"
}
},
"variables": {
"Scope": "[concat(parameters('CosmosDbAccountName'),'/Microsoft.Authorization/', parameters('guid'))]"
},
"resources": [
{
"type": "Microsoft.DocumentDB/databaseAccounts/providers/roleAssignments",
"name": "[variables('Scope')]",
"apiVersion":"2020-04-01-preview",
"properties": {
"RoleDefinitionId":"/subscriptions/[subscription().subscriptionId]/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
"principalId": "[parameters('principalId')]"
}
}
]
}
I am working on building a CI/CD pipeline for AKS. The first task set is "Azure resource group deployment", which is used for creating the vnet/subnet for the AKS cluster.
The intention is to skip this task from the second run onwards, since the vnet and subnet are already in place. Instead, from the second run onwards I get the following error -
BadRequest: { "error": { "code": "InUseSubnetCannotBeDeleted", "message": "Subnet AKSSubnet is in use by /subscriptions/***************************************/resourceGroups/MC_**************-CLUSTER_eastus/providers/Microsoft.Network/networkInterfaces/aks-agentpool-
########-nic-0/ipConfigurations/ipconfig1 and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet.", "details": [] } }
Error: Task failed while creating or updating the template deployment.
Looks like the task is trying to delete the subnet instead of skipping it. What is the resolution?
It is using the following ARM templates: azuredeploy.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vnetName": {
"type": "string",
"defaultValue": "GEN-VNET-NAME",
"metadata": {
"description": "Name of the virtual Network"
}
},
"vnetAddressPrefix": {
"type": "string",
"defaultValue": "10.10.0.0/16",
"metadata": {
"description": "Address prefix"
}
},
"subnetPrefix": {
"type": "string",
"defaultValue": "10.10.0.0/24",
"metadata": {
"description": "Subnet Prefix"
}
},
"subnetName": {
"type": "string",
"defaultValue": "Subnet",
"metadata": {
"description": "GEN-SUBNET-NAME"
}
}
},
"variables": {},
"resources": [
{
"apiVersion": "2018-06-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[parameters('vnetName')]",
"location": "[resourceGroup().location]",
"properties": {
"addressSpace": {
"addressPrefixes": [
"[parameters('vnetAddressPrefix')]"
]
}
},
"resources": [
{
"apiVersion": "2018-06-01",
"type": "subnets",
"location": "[resourceGroup().location]",
"name": "[parameters('subnetName')]",
"dependsOn": [
"[parameters('vnetName')]"
],
"properties": {
"addressPrefix": "[parameters('subnetPrefix')]"
}
}
]
}
],
"outputs": {
"vnetName": {
"type": "string",
"value": "[parameters('vnetName')]"
},
"subnetName": {
"type": "string",
"value": "[parameters('subnetName')]"
}
}
}
azuredeploy.parameters.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vnetName": {
"value": "###########"
},
"vnetAddressPrefix": {
"value": "10.10.0.0/16"
},
"subnetPrefix": {
"value": "10.10.0.0/24"
},
"subnetName": {
"value": "######"
}
}
}
What is happening right here: your template is coded in the following fashion:
vnet resources
empty subnets property
subnet resource(s)
bla-bla-bla
and what happens is that each deployment tries to coerce the vnet to have 0 subnets, due to how you authored your template. You have 2 options:
put a condition on the vnet resource definition and pass a parameter to it if the build number is greater than 1 (or just manually specify at build time whether to skip it or not).
modify your template to look like so (see the sketch after this answer):
vnet resource
subnets property populated with subnets
bla-bla-bla
Essentially, this has nothing to do with Azure DevOps.
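A sketch of the second option, with the subnet declared inline under the vnet's properties so that a redeployment never tries to strip it; the condition line illustrates the first option and assumes a hypothetical boolean parameter named deployVnet (Resource Manager tolerates // comments in templates):
{
  "condition": "[parameters('deployVnet')]",  // option 1: skip the vnet when the (hypothetical) deployVnet parameter is false
  "apiVersion": "2018-06-01",
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[parameters('vnetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [ "[parameters('vnetAddressPrefix')]" ]
    },
    "subnets": [  // option 2: the subnet lives inside the vnet definition instead of being a separate child resource
      {
        "name": "[parameters('subnetName')]",
        "properties": { "addressPrefix": "[parameters('subnetPrefix')]" }
      }
    ]
  }
}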
I have an ACS Kubernetes cluster that was created with an agent count of 1. I went to the portal to increase the agent count to 2 and received a generic error saying the provisioning of resource(s) for container service failed.
Looking at the activity logs, there is a bit more information.
Write ContainerServices - PreconditionFailed - Provisioning of resource(s) for container service 'xxxxxxx' in
resource group 'xxxxxxxx' failed.
Validate - InvalidTemplate - Deployment template validation failed: 'The resource 'Microsoft.Network/networkSecurityGroups/k8s-master-3E4D5818-nsg' is not defined in the template. Please see https://aka.ms/arm-template for usage details.'.
Trying to change it via the Azure CLI 2.0 also returns the same error.
Update: The cluster was stood up using an ARM template with a single container service resource based on the sample in the quickstart templates repo.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"dnsNamePrefix": {
"type": "string",
"metadata": {
"description": "Sets the Domain name prefix for the cluster. The concatenation of the domain name and the regionalized DNS zone make up the fully qualified domain name associated with the public IP address."
}
},
"agentCount": {
"type": "int",
"defaultValue": 1,
"metadata": {
"description": "The number of agents for the cluster. This value can be from 1 to 100 (note, for Kubernetes clusters you will also get 1 or 2 public agents in addition to these seleted masters)"
},
"minValue":1,
"maxValue":100
},
"agentVMSize": {
"type": "string",
"defaultValue": "Standard_D2_v2",
"allowedValues": [
"Standard_A0", "Standard_A1", "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5",
"Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11",
"Standard_D1", "Standard_D2", "Standard_D3", "Standard_D4",
"Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14",
"Standard_D1_v2", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2",
"Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2",
"Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5",
"Standard_DS1", "Standard_DS2", "Standard_DS3", "Standard_DS4",
"Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14",
"Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5"
],
"metadata": {
"description": "The size of the Virtual Machine."
}
},
"linuxAdminUsername": {
"type": "string",
"defaultValue": "azureuser",
"metadata": {
"description": "User name for the Linux Virtual Machines."
}
},
"orchestratorType": {
"type": "string",
"defaultValue": "Kubernetes",
"allowedValues": [
"Kubernetes",
"DCOS",
"Swarm"
],
"metadata": {
"description": "The type of orchestrator used to manage the applications on the cluster."
}
},
"masterCount": {
"type": "int",
"defaultValue": 1,
"allowedValues": [
1
],
"metadata": {
"description": "The number of Kubernetes masters for the cluster."
}
},
"sshRSAPublicKey": {
"type": "string",
"metadata": {
"description": "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser#linuxvm'"
}
},
"servicePrincipalClientId": {
"metadata": {
"description": "Client ID (used by cloudprovider)"
},
"type": "securestring",
"defaultValue": "n/a"
},
"servicePrincipalClientSecret": {
"metadata": {
"description": "The Service Principal Client Secret."
},
"type": "securestring",
"defaultValue": "n/a"
}
},
"variables": {
"adminUsername":"[parameters('linuxAdminUsername')]",
"agentCount":"[parameters('agentCount')]",
"agentsEndpointDNSNamePrefix":"[concat(parameters('dnsNamePrefix'),'agents')]",
"agentVMSize":"[parameters('agentVMSize')]",
"masterCount":"[parameters('masterCount')]",
"mastersEndpointDNSNamePrefix":"[concat(parameters('dnsNamePrefix'),'mgmt')]",
"orchestratorType":"[parameters('orchestratorType')]",
"sshRSAPublicKey":"[parameters('sshRSAPublicKey')]",
"servicePrincipalClientId": "[parameters('servicePrincipalClientId')]",
"servicePrincipalClientSecret": "[parameters('servicePrincipalClientSecret')]",
"useServicePrincipalDictionary": {
"DCOS": 0,
"Swarm": 0,
"Kubernetes": 1
},
"useServicePrincipal": "[variables('useServicePrincipalDictionary')[variables('orchestratorType')]]",
"servicePrincipalFields": [
null,
{
"ClientId": "[parameters('servicePrincipalClientId')]",
"Secret": "[parameters('servicePrincipalClientSecret')]"
}
]
},
"resources": [
{
"apiVersion": "2016-09-30",
"type": "Microsoft.ContainerService/containerServices",
"location": "[resourceGroup().location]",
"name":"[resourceGroup().name]",
"properties": {
"orchestratorProfile": {
"orchestratorType": "[variables('orchestratorType')]"
},
"masterProfile": {
"count": "[variables('masterCount')]",
"dnsPrefix": "[variables('mastersEndpointDNSNamePrefix')]"
},
"agentPoolProfiles": [
{
"name": "agentpools",
"count": "[variables('agentCount')]",
"vmSize": "[variables('agentVMSize')]",
"dnsPrefix": "[variables('agentsEndpointDNSNamePrefix')]"
}
],
"linuxProfile": {
"adminUsername": "[variables('adminUsername')]",
"ssh": {
"publicKeys": [
{
"keyData": "[variables('sshRSAPublicKey')]"
}
]
}
},
"servicePrincipalProfile": "[variables('servicePrincipalFields')[variables('useServicePrincipal')]]"
}
}
],
"outputs": {
"masterFQDN": {
"type": "string",
"value": "[reference(concat('Microsoft.ContainerService/containerServices/', resourceGroup().name)).masterProfile.fqdn]"
},
"sshMaster0": {
"type": "string",
"value": "[concat('ssh ', variables('adminUsername'), '#', reference(concat('Microsoft.ContainerService/containerServices/', resourceGroup().name)).masterProfile.fqdn, ' -A -p 22')]"
},
"agentFQDN": {
"type": "string",
"value": "[reference(concat('Microsoft.ContainerService/containerServices/', resourceGroup().name)).agentPoolProfiles[0].fqdn]"
}
}
}
This is a known service issue for old clusters. A fix is currently rolling out and is being tracked in this github issue, https://github.com/Azure/ACS/issues/16
Jack (a dev on the ACS team)
I tested in my lab with this template, but I can't reproduce your error.
Please try using Azure Resource Explorer to edit the count of the agent pool:
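In Resource Explorer the property to edit is count under agentPoolProfiles of the container service resource, which for the template above looks roughly like this (the dnsPrefix value is illustrative):
"agentPoolProfiles": [
  {
    "name": "agentpools",
    "count": 2,  // bumped from 1 to 2
    "vmSize": "Standard_D2_v2",
    "dnsPrefix": "yourdnsprefixagents"  // illustrative value
  }
]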