Is it possible to create an Azure Storage file share using Ansible?
There is an Ansible module to create an Azure Storage Account, so I am good there.
I don't see anything for File Share.
There is something called azure_rm_resource, but I have not been able to get it to work.
Any input will be appreciated.
Custom script solution
If you need a custom implementation, I would suggest using the Azure CLI to develop a custom script and then running it from your Ansible playbook.
E.g. reference a fileshare.ps1 script whose content is az storage share create --account-name MyAccount --name MyFileShare ...etc.
Documentation: https://learn.microsoft.com/en-us/cli/azure/storage/share?view=azure-cli-latest#az_storage_share_create
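For example, a minimal sketch of an Ansible task that shells out to the Azure CLI (untested; the account name, share name and the storage_account_key variable are placeholders, and it assumes the az CLI is installed on the host running the playbook):

- hosts: localhost
  connection: local
  tasks:
    - name: Create the file share with the Azure CLI
      ansible.builtin.command: >
        az storage share create
        --account-name MyAccount
        --name MyFileShare
        --account-key "{{ storage_account_key }}"
      register: share_result
      # az prints {"created": true} the first time and {"created": false} if the share already exists
      changed_when: "'\"created\": true' in share_result.stdout"

Wrapping the CLI call like this keeps the play reasonably idempotent, but it will not report changes as precisely as a native module would.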
Simple solution with Ansible module (Edit: not sufficient at the moment)
Edit: this solution only creates the storage account as of now, so it is not sufficient in most scenarios.
There is an Ansible module for this, already implemented, called azure_rm_storageaccount in the azure namespace. See the link at the bottom.
Make sure to use the kind property and specify the FileStorage value. Example:
- name: Create an account with kind of FileStorage
  azure_rm_storageaccount:
    resource_group: souser_resource_group
    name: souser_file_storage
    type: Premium_LRS
    kind: FileStorage
    tags:
      testing: testing
Documentation:
https://docs.ansible.com/ansible/latest/collections/azure/azcollection/azure_rm_storageaccount_module.html
Posting my answer as it will help someone in the community:
The latest commit contains this enhancement!
https://github.com/ansible-collections/azure/pull/603/commits/83a6600417faccf56752a8ae5b1e52a0bac37dfb
Copying the example from the link above:
- name: Create storage share
  azure_rm_storageshare:
    name: testShare
    resource_group: myResourceGroup
    account_name: testStorageAccount
    state: present
    access_tier: Cool
    quota: 2048
    metadata:
      key1: value1
      key2: value2
I'm currently porting some infrastructure as code scripts from Azure CLI to Azure Bicep. Among many other things, the Bicep files should create a subnet and allow access from this subnet to an existing Azure SQL Server and an existing Storage Account.
For the SQL Server, this is simple - I can reference the existing server resource and declare a child resource representing the VNET rule:
resource azureSqlServer 'Microsoft.Sql/servers@2021-05-01-preview' existing = {
  name: azureSqlServerName

  resource vnetRule 'virtualNetworkRules' = {
    name: azureSqlServerVnetRuleName
    properties: {
      virtualNetworkSubnetId: subnetId
    }
  }
}
However, with the Storage Account, the network rules are not child resources, but a property of the Storage Account resource (properties.networkAcls.virtualNetworkRules). I cannot declare all the details of the Storage Account in my Bicep file because that resource is way out of scope from the deployment I'm currently working on. In essence, I want to adapt the existing resource, just ensuring a single rule is present.
The following does not work because existing cannot be combined with properties:
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
  name: storageAccountName
  properties: {
    networkAcls: {
      virtualNetworkRules: [
        {
          id: subnetId
          action: 'Allow'
        }
      ]
    }
  }
}
Is there any way I can adapt just a tiny bit of an existing resource using Bicep?
UPDATE: I just realized you came from Azure CLI and were trying to find a way in Bicep - sorry for not answering your actual question. Anyway, your post made me think about this in a way other than Bicep, so my "answer" is what I came up with...
...sounds like we thought about this in the same manner: using Bicep to pimp an existing Storage Account, granting a new subnet access. However, I ended up using the Azure CLI command az storage account network-rule add
e.g.
newSubnet='/subscriptions/<subscr-guid>/resourceGroups/<rg-name-where-vnet-resides>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>'
az storage account network-rule add -g <rg-name-where-sa-resides> --account-name <storage-account-name> --subnet $newSubnet
Run this from a terminal or put it in an Azure CLI task in a DevOps pipeline (which is what I needed), for example:
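A rough sketch of what such a pipeline step could look like (untested; the service connection name and the newSubnetId pipeline variable are placeholders I made up, not values from the question):

- task: AzureCLI@2
  displayName: 'Allow new subnet on the storage account'
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az storage account network-rule add \
        -g <rg-name-where-sa-resides> \
        --account-name <storage-account-name> \
        --subnet "$(newSubnetId)"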
The existing keyword in Bicep is used to tell Bicep that the resource already exists and you just want a symbolic reference to that resource in the code. If the resource doesn't exist, it's likely that the deployment will fail in some way.
Your first snippet is equivalent to:
resource vnetRule 'Microsoft.Sql/servers/virtualNetworkRules@2021-05-01-preview' = {
  name: '${azureSqlServerName}/${azureSqlServerVnetRuleName}'
  properties: {
    virtualNetworkSubnetId: subnetId
  }
}
In your second snippet, since you want to update properties, you have to provide the complete declaration of the resource; in other words, you have to define and deploy the storageAccount. This isn't unique to Bicep, it's the way the declarative model in Azure works.
That said, if you want to deploy to another scope in Bicep, you can use a module with the scope property. E.g.
module updateStorage 'storage.bicep' = {
  scope: resourceGroup(storageResourceGroupName)
  name: 'updateStorage'
}
The downside is that you need to make sure you define/declare all the properties needed for that storageAccount, which is not ideal. There are some ways you can author around that, but if the storageAccount doesn't exist, the deployment is guaranteed to fail. For example, you could assert that the storageAccount exists, fetch its properties, and then union or modify the properties in the module. You might be able to make that work (depending on the extent of your changes), but it's a bit of an anti-pattern in a declarative model.
That help?
How to configure an Azure Blob Storage container in YAML
- name: scripts-file-share
  azureFile:
    secretName: dev-blobstorage-secret
    shareName: logs
    readOnly: false
The above configures the logs file share in YAML.
But what if I need to mount a blob container? How do I configure it?
Instead of azureFile, do I need to use azureBlob?
And what is the configuration that I need to have below azureBlob? Please help.
After the responses I got on the above post, and after going through articles online, I see there is no option to mount Azure Blob storage on Azure AKS for my problem, except to use azcopy or REST API integration, considering the limitations I have in my environment.
So, after a little bit of research and taking the articles below as references, I was able to create a Docker image.
1.) Created the Docker image following the reference article. But I also needed support for running a bash script, as I run the azcopy command from a bash file, so I copied the azcopy tool to /usr/bin.
2.) Created SAS tokens for Azure File Share & Azure Blob. (Make sure you give required access permissions only)
3.) Created a bash file that runs the below command.
azcopy <FileShareSASTokenConnectionUrl> <BlobSASTokenConnectionUrl> --recursive=true
4.) Created a deployment YAML that runs on AKS and added the command to run the bash file in it (see the sketch below).
This gave me the ability to copy files from Azure File Share folders to an Azure Blob container.
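For reference, a minimal sketch of what that manifest could look like (untested; the image name, script path and secret name are placeholders, not from the original setup). It uses a Job rather than a Deployment because the copy is a one-off run, but the same container spec works inside a Deployment template:

apiVersion: batch/v1
kind: Job
metadata:
  name: fileshare-to-blob-copy
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: azcopy
          image: myregistry.azurecr.io/azcopy-copy:latest   # placeholder image built in step 1
          command: ["/bin/bash", "/scripts/copy.sh"]        # runs the bash file from step 3
          env:
            - name: FILESHARE_SAS_URL
              valueFrom:
                secretKeyRef:
                  name: azcopy-sas-urls                     # placeholder secret holding the SAS URLs from step 2
                  key: fileshareSasUrl
            - name: BLOB_SAS_URL
              valueFrom:
                secretKeyRef:
                  name: azcopy-sas-urls
                  key: blobSasUrl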
References:
1.) https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#obtain-a-static-download-link
2.) https://github.com/Azure/azure-storage-azcopy/issues/423
I'm just getting started with Ansible. I've created simple VMs with the Azure module in Ansible.
Now I'm trying to take a snapshot of an Azure disk with Ansible.
Is that possible?
There is no such module according to the module list. You can use the azure_rm_resource module to perform a custom POST request to Azure with Ansible. Here's an API call you could mimic with Ansible:
- hosts: localhost
  connection: local
  tasks:
    - name: Create a snapshot by copying an existing managed disk.
      azure_rm_snapshot:
        resource_group: ****
        name: mySnapshot
        location: centralindia
        creation_data:
          create_option: Copy
          source_id: '******'
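If azure_rm_snapshot is not available in your installed collection, the same call can be expressed through azure_rm_resource as a raw request against the Snapshots REST API. The sketch below is untested; the resource group, api_version and source disk ID are placeholders I made up:

- hosts: localhost
  connection: local
  tasks:
    - name: Create a snapshot through the Snapshots REST API
      azure_rm_resource:
        resource_group: myResourceGroup        # placeholder
        provider: Compute
        resource_type: snapshots
        resource_name: mySnapshot
        api_version: '2021-04-01'              # placeholder, pick a version your subscription supports
        body:
          location: centralindia
          properties:
            creationData:
              createOption: Copy
              sourceResourceId: '<managed-disk-resource-id>'   # placeholder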
I'm using Azure for my continuous deployment. My secret name is "cisecret", created using:
kubectl create secret docker-registry cisecret --docker-username=XXXXX --docker-password=XXXXXXX --docker-email=SomeOne@outlook.com --docker-server=XXXXXXXXXX.azurecr.io
In my Visual Studio Online release task:
kubectl run
Under the Secrets section:
Type of secret: dockerRegistry
Container Registry type: Azure Container Registry
Secret name: cisecret
My release succeeds, but when I proxy into Kubernetes:
Failed to pull image xxxxxxx unauthorized: authentication required.
Could this possibly be due to your container name? I had an issue where I wasn't properly prepending the ACR domain in front of the image name in my Kubernetes YAML, which meant I wasn't pointed at the container registry/image, and therefore my secret (which was working) appeared to be broken.
Can you post your YAML? Maybe there is something simple amiss since it seems you are on the right track from the secrets perspective.
I need to grant AKS access to ACR.
Please refer to the link here
How to pass image pull secret while using 'kubectl run' command?
This should help; you need to override the kubectl command with "imagepullsecrets":"cisecret".
Add the following to the YAML file (a fuller pod-spec sketch follows below):
imagePullSecrets:
- name: acr-auth
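For context, a minimal sketch of where that key sits in a pod spec, using the cisecret name from the question and a placeholder image name (note the image must be prefixed with the ACR domain, as mentioned in the earlier answer):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: xxxxxxxxxx.azurecr.io/my-app:latest   # placeholder; must include the ACR domain
  imagePullSecrets:
    - name: cisecret   # must match the secret created with kubectl create secret docker-registry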
I am currently creating a CI for the FrontEnd of one of our clients.
We need to copy the files coming from our repo to the storage container of the company that manages the operational part (we are only providing the code).
So, the company that will manage the infrastructure has given us the storage account name (testdeploy), the container name (artifact-deply) and the key (securekey).
I have managed to connect to the storage via Azure Storage Explorer, but now I need to deploy the artifact to this container via the CI.
The problem is, I don't know how, and I can't find documentation on how to proceed; every doc talks about deploying to a container in the same subscription.
But I do not have access to this container; I only have its name and key.
Here is the YAML of what I have already set up; I do not know if it can help:
steps:
- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_listes-Azure/buildtest'
    azureSubscription: 'Paiement à l''utilisation(my_subscription)'
    Destination: AzureBlob
    storage: testdeploy
    ContainerName: 'artifact-deploy/front'
    AdditionalArgumentsForBlobCopy: 'securekey'
    outputStorageUri: 'https://testdeply.blob.core.windows.net/'
    outputStorageContainerSasToken: 'securekey'
Of course, when I do this I get this error message:
2019-10-25T10:45:51.1809999Z ##[error]Storage account: fprplistesdeploy not found. The selected service connection 'Service Principal' supports storage accounts of Azure Resource Manager type only.
Since it's not in my subscription scope, it can't access it.
What am I doing wrong?
I am using the AzureFileCopy task; is it the right one?
How can I set up the AzureFileCopy task to target a storage container that is not in my subscription scope, knowing that the only things I have are an account name and a key?
Thanks in advance!
What you basically have to do is create and use a Shared Access Signature (SAS) to deploy resources into this blob container. Since you have the storage account key, you can create a SAS token with Azure Storage Explorer.
Then use Azure Cloud Shell or the Azure CLI on a local machine for testing purposes. Try to copy a file into the blob container using a SAS token for authorization. If you have problems with authorization using a SAS token, you can also test access using Azure Storage Explorer. Such basic tasks are widely known and well documented.
Finally, find a way to run the file copy command you used while testing in an Azure Pipelines task. If the Azure File Copy task does not fit your use case, use a more generic task like the Azure CLI task; from reading over the docs, it may not support your use case even though the task name suggests it does, so I see your point. Find out how to access the artifact provided by the build pipeline and copy the file resources into the storage account. If that basically works, find out how to improve it. Voilà.
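As a rough sketch (untested; the file name, container name and the blobSasToken secret variable are placeholders), such a step could even be a plain Bash task, since authentication happens through the SAS token rather than a service connection and the az CLI is preinstalled on Microsoft-hosted agents:

steps:
- bash: |
    az storage blob upload \
      --account-name testdeploy \
      --container-name artifact-deploy \
      --name front.zip \
      --file "$(System.DefaultWorkingDirectory)/_listes-Azure/buildtest/front.zip" \
      --sas-token "$BLOB_SAS_TOKEN"
  displayName: 'Upload artifact to the external blob container'
  env:
    BLOB_SAS_TOKEN: $(blobSasToken)   # secret pipeline variable holding the SAS token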
So I managed to do it.
It turns out you can't do it via AzureFileCopy; this task can't upload to a container outside your subscription.
You must use an Azure CLI task. Here is the script I used:
#!/bin/bash
az storage blob upload --container-name artifact --file $(System.DefaultWorkingDirectory)/artifact_deply/buildtest/front.zip --name front --account-key securekey
I changed all the variables, but the idea is here (I declared the account name in the variable panel of Azure DevOps).
I used the account key because I had errors with the SAS URL, but I think you can easily use an Azure DevOps variable to pass the SAS token URL.
And I created a task before this one to zip the whole folder, so it's easier to manage.