Provision Azure Redis Cache using ARM and specify Access Keys

I'm trying to provision an Azure Redis Cache using an ARM template. This is working as expected, except that I can't specify the access keys.
Usually I work with the generated keys, which is probably the recommended way - but in this case I want to provide those keys within my deployment (for some legacy reasons).
Q: Is it possible to provide the access keys within an ARM template? Or can I set them after the deployment using PowerShell?
Here is a snippet of my ARM template:
"resources": [
{
"type": "Microsoft.Cache/Redis",
"name": "[parameters('myRedis_name')]",
"apiVersion": "2016-04-01",
"location": "West Europe",
"tags": {},
"properties": {
"redisVersion": "3.2",
"sku": {
"name": "Standard",
"family": "C",
"capacity": 1
},
"enableNonSslPort": false,
"redisConfiguration": {
"maxclients": "1000",
"maxmemory-reserved": "50",
"maxmemory-delta": "50"
}
},
"resources": [],
"dependsOn": []
},

Just as with Azure Storage keys (or DocumentDB keys, etc.), you have no ability to specify the keys. You can either use what's provided or, at any time, regenerate the keys (either primary or secondary). This is how keys are managed, regardless of whether you're using ARM or the portal. In the portal you'll see the regenerate options for the primary and secondary keys, but there's no way to enter a specific key of your own.
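If it helps, here's a rough sketch of how you could read or rotate the generated keys after the deployment using the Azure CLI (Get-AzRedisCacheKey / New-AzRedisCacheKey are the PowerShell equivalents); the resource group and cache names below are placeholders:
# List the generated access keys after deployment:
az redis list-keys --resource-group my-rg --name my-redis-cache
# Regenerate the primary (or secondary) key if you need to rotate it:
az redis regenerate-keys --resource-group my-rg --name my-redis-cache --key-type Primary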

Related

The specified name is not available Azure EventHub namespace

I have tried to deploy an ARM template with a new Event Hubs namespace, but it is failing with a BadRequest error: "The specified name isn't available." The name has not been used previously for anything under that resource group. When I tried to create a similar resource manually from the portal it worked fine, so it should not be a permissions issue. Can anyone suggest what the issue is here, please?
{
    "type": "Microsoft.EventHub/namespaces",
    "apiVersion": "2021-11-01",
    "name": "xxxx-xxxx-xxx-000",
    "location": "[variables('location')]",
    "sku": {
        "name": "Standard",
        "tier": "Standard",
        "capacity": 1
    },
    "properties": {
        "isAutoInflateEnabled": false,
        "maximumThroughputUnits": 0
    }
}
We tried the same thing in our environment, creating an Event Hubs namespace with a name similar to yours, and it worked fine.
Below is what we found:
We first created a namespace with the same name through the portal and then tried to deploy it through ARM, and we got the same issue.
Yes, this is expected behaviour: the namespace name has to be globally unique across all of Azure, not just unique within your resource group.
If you have already created the namespace through the portal and are now deploying it with ARM, delete the previous one (if you are the owner of that namespace) and then try again with the same name after the deletion.
We passed a namespace name just like yours to check whether the value is picked up, and the deployment succeeded.
template.json
"resources": [
{
"type": "Microsoft.EventHub/namespaces",
"apiVersion": "2022-01-01-preview",
"name": "[parameters('namespaces_ajletter_test_111_something_name')]",
"location": "Central India",
"sku": {
"name": "Standard",
"tier": "Standard",
"capacity": 1
},
For more information, please refer to the Microsoft documentation on the Event Hubs ARM exception and on creating an Event Hubs namespace using an ARM template.
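As a rough sketch, you can also check up front whether a namespace name is still globally available with the Azure CLI (the namespace name below is just the placeholder from the question):
# Returns whether the namespace name is still available anywhere in Azure.
az eventhubs namespace exists --name xxxx-xxxx-xxx-000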

How to create a folder inside of an Azure Data Lake container using an ARM template?

I have an Azure ADLS storage account called eventcoadltest and I have a container called eventconnector-transformed-data-fs.
I have deployed this ADLS account through an ARM template, but I need to create a directory inside eventconnector-transformed-data-fs (the folder debugging was created through the UI, but I need to achieve the same with an ARM template).
I have found some posts indicating that this is not possible, but that it can be worked around:
How to create empty folder in azure blob storage
Use ARM template to create directories in Azure Storage Containers?
How to create a folder inside container in Azure Data Lake Storage Gen2 with the help of 'azure-storage' Package
ARM template throws incorrect segments lengths for array of storage containers types
how to create blob container inside Storage Account using ARM templates
Microsoft Azure: How to create sub directory in a blob container
How to create an azure blob storage using Arm template?
How to create directories in Azure storage container without creating extra files?
I have tried to modify my ARM template as well to achieve a similar result but I haven't had any success.
{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountDLName": {
            "type": "string"
        },
        "sku": {
            "type": "string"
        },
        "contact": {
            "type": "string"
        },
        "directoryOutput": {
            "type": "string"
        }
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-02-01",
            "sku": {
                "name": "[parameters('sku')]",
                "tier": "Standard"
            },
            "kind": "StorageV2",
            "name": "[parameters('storageAccountDLName')]",
            "location": "[resourceGroup().location]",
            "tags": {
                "Contact": "[parameters('contact')]"
            },
            "scale": null,
            "properties": {
                "isHnsEnabled": true,
                "networkAcls": {
                    "bypass": "AzureServices",
                    "virtualNetworkRules": [],
                    "ipRules": [],
                    "defaultAction": "Allow"
                }
            },
            "dependsOn": [],
            "resources": [
                {
                    "type": "storageAccounts/blobServices/containers",
                    "name": "[concat('default/', 'eventconnector-raw-data-fs/test')]",
                    "apiVersion": "2021-02-01",
                    "properties": {},
                    "dependsOn": [
                        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountDLName'))]"
                    ]
                }
            ]
        }
    ]
}
The following part of the template is what I modified to try to create the folders inside the containers.
"type": "storageAccounts/blobServices/containers",
"name": "[concat('default/', 'eventconnector-raw-data-fs/test')]"
The reason I am trying to solve this problem is that I won't have access to create folders in our production environment, which is why the deployment needs to be done fully through ARM. How can I create this folder as part of the deployment? Is there another alternative for achieving my desired result? Any idea or suggestion is welcome :)
This doesn't really make sense, because you cannot create folders in Azure Blob Storage; there are no folders. Blobs are individual objects/entities. Folders only appear to exist because the UI renders blob name prefixes as folders, but there are no actual folders in an Azure Storage blob container.
TL;DR: you cannot do this at all, no matter how hard you try.
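For what it's worth, the "empty folder" workaround from the posts linked in the question boils down to uploading a placeholder blob whose name carries the folder prefix; a minimal sketch with the Azure CLI (the placeholder file name is illustrative):
# The "debugging/" prefix in the blob name is what the portal renders as a folder.
az storage blob upload --account-name eventcoadltest \
    --container-name eventconnector-transformed-data-fs \
    --name "debugging/placeholder.txt" --file ./placeholder.txt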
After some research I found out that it is possible to create a folder via Databricks with the following command:
dbutils.fs.mkdirs("dbfs:/mnt/folder_desktop/test/uploads")
I had to configure Databricks with my Azure Data Factory in order to run this command.
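Since the account in the template has isHnsEnabled set to true (ADLS Gen2), another option outside Databricks, sketched here with the Azure CLI under that assumption, is to create the directory through the Data Lake Gen2 endpoint:
# Creates the "debugging" directory in the given file system (container).
az storage fs directory create --account-name eventcoadltest \
    --file-system eventconnector-transformed-data-fs \
    --name debugging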

Use Azure Container Instances via ARM to create an indeterminate number of containers

I am attempting to deploy an Azure Storage account along with an indeterminate number of tables via an ARM template.
Since Microsoft has yet to provide a tables resource type for ARM, I'm instead using Azure Container Instances to spin up a container running azure-cli and create the tables that way.
As you can see in my example below, I'm using property iteration to create multiple containers - one for each table. This seemed to be working until the number of tables to create changed, and then I started getting errors.
The updates on container group 'your-aci-instance' are invalid. If you are going to update the os type, restart policy, network profile, CPU, memory or GPU resources for a container group, you must delete it first and then create a new one.
I understand what it's saying, but it does seem strange to me that you can create a container group yet not alter the group of containers within.
As ARM doesn't allow you to delete resources, I'd have to add a manual step to my deployment process to ensure that the ACI doesn't exist, which isn't really desirable.
Equally undesirable would be to use resource iteration to create multiple ACIs - there would be the possibility of many ACIs being strewn about the resource group that will never be used again.
Is there some ARM magic that I don't yet know about which can help me achieve the creation of tables that meets the following criteria?
Only a single ACI is created.
The number of tables to be created can change.
Notes
I have tried to use variable iteration to create a single 'command' array for a single container, but it seems that ACI treats all of the commands as a single one-liner, so this caused an error.
Further reading suggests that it is only possible to run one command on container startup.
How do I run multiple commands when deploying a container group?
Current attempt
Here is a snippet from my ARM template showing how I used property iteration to try and achieve my goal.
{
    "condition": "[not(empty(variables('tables')))]",
    "type": "Microsoft.ContainerInstance/containerGroups",
    "name": "[parameters('containerInstanceName')]",
    "apiVersion": "2018-10-01",
    "location": "[resourceGroup().location]",
    "properties": {
        "copy": [
            {
                "name": "containers",
                "count": "[max(length(variables('tables')), 1)]",
                "input": {
                    "name": "[toLower(variables('tables')[copyIndex('containers')])]",
                    "properties": {
                        "image": "microsoft/azure-cli",
                        "command": [
                            "az",
                            "storage",
                            "table",
                            "create",
                            "--name",
                            "[variables('tables')[copyIndex('containers')]]"
                        ],
                        "environmentVariables": [
                            {
                                "name": "AZURE_STORAGE_KEY",
                                "value": "[listkeys(parameters('storageAccount_Application_Name'), '2019-04-01').keys[0].value]"
                            },
                            {
                                "name": "AZURE_STORAGE_ACCOUNT",
                                "value": "[parameters('storageAccount_Application_Name')]"
                            },
                            {
                                "name": "DATETIME",
                                "value": "[parameters('dateTime')]"
                            }
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "1",
                                "memoryInGb": "1.5"
                            }
                        }
                    }
                }
            }
        ],
        "restartPolicy": "OnFailure",
        "osType": "Linux"
    },
    "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount_Application_Name'))]"
    ],
    "tags": {
        "displayName": "Application Storage Account - Tables",
        "Application": "[parameters('tagApplication')]",
        "environment": "[parameters('tagEnvironment')]",
        "version": "[parameters('tagVersion')]"
    }
}
If it says the field is immutable, then it is; there's not much you can do about that. You could generate a unique name for the container instance, use complete deployment mode, and deploy only the ACI to a dedicated resource group; that way the group will only ever contain the latest ACI instance, older ones will get deleted, and you work around the immutability.
Alternatively, you can call an Azure Function (HTTP trigger) from inside the template, pass in the names of the storage tables to create, and have the function create them.
But either way, it's a hack.
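As for the one-command-per-container limitation mentioned in the question's notes, a possible workaround (sketched here; the table names are placeholders) is to give a single container one shell invocation that chains the az calls:
# Assumes AZURE_STORAGE_ACCOUNT / AZURE_STORAGE_KEY are set as in the template above.
/bin/sh -c "az storage table create --name table1 && az storage table create --name table2"
In the container group this would be expressed as the command array ["/bin/sh", "-c", "az storage table create --name table1 && az storage table create --name table2"] on one container.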

Since it's not possible to create Blob container in Azure ARM, then how can I enable Archive using ARM?

According to the documentation I can enable the Azure Event Hubs Archive feature using an Azure Resource Manager template. The template takes a blobContainerName argument:
"The blob container where you want your event data be archived."
But as far as I know it's not possible to create a blob container using an ARM template, so how am I supposed to enable the Archive feature on an Event Hub?
The purpose of the ARM template is to provision everything from scratch, not to manually create some of the resources using the portal.
It wasn't possible to create containers in your storage account before, but this has changed. New functionality has been added to the Storage Account ARM resource which enables you to create containers.
To create a storage account with a container called theNameOfMyContainer, add this to your resources block of the ARM template.
{
    "name": "[parameters('storageAccountName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2018-02-01",
    "location": "[resourceGroup().location]",
    "kind": "StorageV2",
    "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
    },
    "properties": {
        "accessTier": "Hot"
    },
    "resources": [
        {
            "name": "[concat('default/', 'theNameOfMyContainer')]",
            "type": "blobServices/containers",
            "apiVersion": "2018-03-01-preview",
            "dependsOn": [
                "[parameters('storageAccountName')]"
            ],
            "properties": {
                "publicAccess": "Blob"
            }
        }
    ]
}
To my knowledge, you can use None, Blob or Container for your publicAccess.
It's still not possible to create queues and tables, but hopefully this will be added soon.
Just like you said, there is no way to create a blob container in an ARM template, so the only logical answer to this question is: supply an existing container at deployment time. One way to do that would be to create the container with PowerShell and pass its name as a parameter to the ARM deployment.
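A minimal sketch of that idea using the Azure CLI (New-AzStorageContainer would be the PowerShell equivalent); the account, container, and template names are placeholders:
# Create the container up front, then pass its name into the deployment.
az storage container create --account-name mystorageaccount --name eventhub-archive
az deployment group create --resource-group my-rg --template-file azuredeploy.json \
    --parameters blobContainerName=eventhub-archive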

Autoscaling IaaS VMs in ARM mode from a template

I've created a template-based deployment that over-provisions a number of Linux VMs. I'd like to autoscale them as with classic instances, where Azure turns instances on and off according to CPU load.
Is this possible with ARM mode? And if not, is there a suggested alternative method? The only examples I can find are around using Application Insights and PaaS functionality. I've got a Python app running in Docker on Ubuntu hosts.
For IaaS, you must use virtual machine scale sets to use autoscale; otherwise you need to stick with PaaS (web apps).
For this you would first need to create an availability set for the VMs. The resource declaration in the ARM template looks something like this:
{
    "type": "Microsoft.Compute/availabilitySets",
    "name": "[variables('availabilitySetName')]",
    "apiVersion": "2015-05-01-preview",
    "location": "[parameters('location')]",
    "properties": {
        "platformFaultDomainCount": "2"
    }
}
Then the declaration for the virtual machine resource in the ARM template would look something like this:
{
    "apiVersion": "2015-05-01-preview",
    "type": "Microsoft.Compute/virtualMachines",
    "name": "[concat(variables('vmName'), '0')]",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[concat('Microsoft.Storage/storageAccounts/', parameters('newStorageAccountName'))]",
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), '0')]",
        "[concat('Microsoft.Compute/availabilitySets/', variables('availabilitySetName'))]"
    ],
    "properties": {
        "availabilitySet": {
            "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]"
        },
        ...},
The quickstart templates are a good reference: https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-2-vms-2-FDs-no-resource-loops/azuredeploy.json
Once you have two or more VMs of the same size in an availability set, you would configure autoscale using microsoft.insights/autoscalesettings, which I believe you referenced in the question. This is done at the cloud service level, so it works similarly to PaaS... like so:
{
    "apiVersion": "2014-04-01",
    "name": "[concat(variables('vmName'), '-', resourceGroup().name)]",
    "type": "microsoft.insights/autoscalesettings",
    "location": "East US",
    ...},
A pretty good example is here: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/6abc9f320e39d9d75dffb60846e88ab80d3ff33a/201-web-app-sql-database/azuredeploy.json
I also set up autoscale using the portal first and reviewed ARM Explorer (Azure Resource Explorer) to get a better idea of how things should look in my code.
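If you do go the scale set route mentioned at the start of this answer, a rough sketch of wiring up CPU-based autoscale with the Azure CLI (resource names and thresholds are placeholders) looks like this:
# Attach an autoscale profile to an existing VM scale set.
az monitor autoscale create --resource-group my-rg --resource my-vmss \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name cpu-autoscale --min-count 2 --max-count 5 --count 2
# Scale out by one instance when average CPU exceeds 70% over 5 minutes.
az monitor autoscale rule create --resource-group my-rg --autoscale-name cpu-autoscale \
    --condition "Percentage CPU > 70 avg 5m" --scale out 1
# Scale back in when average CPU drops below 30%.
az monitor autoscale rule create --resource-group my-rg --autoscale-name cpu-autoscale \
    --condition "Percentage CPU < 30 avg 5m" --scale in 1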
