I have two Azure Bicep files:
Storage.bicep
cdn.bicep
Now, I have successfully created a storage account and blob container from Storage.bicep. Is it possible to get the value of the storage account properties from Storage.bicep to use them in cdn.bicep?
Yes, you can reference the storage account in cdn.bicep as an existing resource if you pass the storage account name in to it:
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: 'examplestorage'
}

output blobEndpoint string = stg.properties.primaryEndpoints.blob
Just make sure you declare that cdn.bicep has a dependency on Storage.bicep and is executed after it. The example above is taken from the documentation.
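For example, a parent template can wire the two files together as modules so the ordering falls out automatically. This is only a minimal sketch; the parameter names and the storageAccountName output are assumptions about how your two files are written, not something taken from them:

module storage 'Storage.bicep' = {
  name: 'storageDeploy'
  params: {
    storageAccountName: 'examplestorage' // assumed parameter in Storage.bicep
  }
}

module cdn 'cdn.bicep' = {
  name: 'cdnDeploy'
  params: {
    // referencing a module output creates an implicit dependency,
    // so cdn.bicep is deployed after Storage.bicep
    storageAccountName: storage.outputs.storageAccountName // assumed output
  }
}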
I'm currently porting some infrastructure as code scripts from Azure CLI to Azure Bicep. Among many other things, the Bicep files should create a subnet and allow access from this subnet to an existing Azure SQL Server and an existing Storage Account.
For the SQL Server, this is simple - I can reference the existing server resource and declare a child resource representing the VNET rule:
resource azureSqlServer 'Microsoft.Sql/servers@2021-05-01-preview' existing = {
  name: azureSqlServerName

  resource vnetRule 'virtualNetworkRules' = {
    name: azureSqlServerVnetRuleName
    properties: {
      virtualNetworkSubnetId: subnetId
    }
  }
}
However, with the Storage Account, the network rules are not child resources, but a property of the Storage Account resource (properties.networkAcls.virtualNetworkRules). I cannot declare all the details of the Storage Account in my Bicep file because that resource is way out of scope from the deployment I'm currently working on. In essence, I want to adapt the existing resource, just ensuring a single rule is present.
The following does not work because existing cannot be combined with properties:
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
  name: storageAccountName
  properties: {
    networkAcls: {
      virtualNetworkRules: [
        {
          id: subnetId
          action: 'Allow'
        }
      ]
    }
  }
}
Is there any way I can adapt just a tiny bit of an existing resource using Bicep?
UPDATE: I just realized you came from Azure CLI and were trying to find a way in Bicep. Sorry for not answering your actual question; anyway, your post made me think about this in another way than Bicep, so my "answer" is what I came up with...
...sounds like we thought about this in the same manner: using Bicep to pimp an existing Storage Account, granting a new subnet access. However, I ended up using the Azure CLI command az storage account network-rule add, e.g.
newSubnet='/subscriptions/<subscr-guid>/resourceGroups/<rg-name-where-vnet-resides>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>'
az storage account network-rule add -g <rg-name-where-sa-resides> --account-name <storage-account-name> --subnet $newSubnet
Run this from a terminal or put it in an AzureCLI task in a DevOps pipeline (which is what I needed).
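If you want to verify that the rule was added, az storage account network-rule list should show it; note the --query path here assumes the default output shape:

az storage account network-rule list -g <rg-name-where-sa-resides> --account-name <storage-account-name> --query virtualNetworkRules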
The existing keyword in Bicep is used to tell Bicep that the resource already exists and you just want a symbolic reference to that resource in the code. If the resource doesn't exist, it's likely that the deployment will fail in some way.
Your first snippet is equivalent to:
resource vnetRule 'Microsoft.Sql/servers/virtualNetworkRules@2021-05-01-preview' = {
  name: '${azureSqlServerName}/${azureSqlServerVnetRuleName}'
  properties: {
    virtualNetworkSubnetId: subnetId
  }
}
In your second snippet, since you want to update properties, you have to provide the complete declaration of the resource; in other words, you have to define and deploy the storageAccount. This isn't unique to Bicep, it's the way the declarative model in Azure works.
That said, if you want to deploy to another scope in bicep, you can use a module with the scope property. E.g.
module updateStorage 'storage.bicep' = {
  scope: resourceGroup(storageResourceGroupName)
  name: 'updateStorage'
}
The downside is that you need to make sure you define/declare all the properties needed for that storageAccount, which is not ideal. There are some ways you can author around that, but if the storageAccount doesn't exist, the deployment is guaranteed to fail. For example, you could assert the storageAccount exists, fetch its properties and then union or modify the properties in the module. You might be able to make that work (depending on the extent of your changes) but it's a bit of an anti-pattern in a declarative model.
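For illustration, such a storage.bicep module might look like the sketch below. Every concrete value in it (location, SKU, kind, defaultAction) is an assumption that must match the real account, since whatever you declare here replaces what exists:

param storageAccountName string
param subnetId string

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' = {
  name: storageAccountName
  location: resourceGroup().location // assumption: account lives in the resource group's region
  sku: {
    name: 'Standard_LRS' // assumption: must match the existing account's SKU
  }
  kind: 'StorageV2' // assumption: must match the existing account's kind
  properties: {
    networkAcls: {
      defaultAction: 'Deny'
      virtualNetworkRules: [
        {
          id: subnetId
          action: 'Allow'
        }
      ]
      // any other existing network rules must be redeclared here, or they are removed
    }
  }
}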
That help?
I'm using the EventProcessorHost() constructor in the Microsoft.Azure.EventHubs.Processor library, and it seems that in addition to the Event Hub details, I need to specify a storage connection string and a lease container name as parameters.
Do I need to create an Azure blob storage account, and where can I find the lease container name?
Yes, you need a storage account, and the lease container name is just your actual container name.
So create a container within your blob storage account, and specify that container name as the lease container name in the constructor of EventProcessorHost.
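For example, here is a minimal sketch of wiring up the constructor; the hub name, connection strings, lease container name, and the no-op processor class are all placeholders:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

class Program
{
    static async Task Main()
    {
        var host = new EventProcessorHost(
            "my-event-hub",                             // Event Hub name (entity path)
            PartitionReceiver.DefaultConsumerGroupName, // consumer group ($Default)
            "<event-hubs-connection-string>",           // Event Hubs namespace connection string
            "<storage-connection-string>",              // blob storage used for leases/checkpoints
            "myleases");                                // lease container name = the container you created

        await host.RegisterEventProcessorAsync<MyEventProcessor>();
    }
}

// Placeholder processor; your real implementation handles the events
class MyEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) => Task.CompletedTask;
    public Task CloseAsync(PartitionContext context, CloseReason reason) => Task.CompletedTask;
    public Task ProcessErrorAsync(PartitionContext context, Exception error) => Task.CompletedTask;
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages) => Task.CompletedTask;
}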
Yes, you need an Azure Storage container to maintain the leases and checkpoints for your application. You just need to provide the connection string of the storage account and a container name in your application; your application will create the blob container automatically.
I'm trying to set up a function to take a snapshot of a blob container every time a change is pushed to it. There is some pretty simple functionality in Azure Functions to do this, but it only works for general purpose storage accounts. I'm trying to do this with a blob only storage account. I'm very new to Azure so I may be approaching this all wrong, but I haven't been able to find much helpful information. Is there any way to do this?
As @joy-wang mentioned, the Azure Functions Runtime requires a general purpose storage account.
A general purpose storage account is required to configure the AzureWebJobsStorage and the AzureWebJobsDashboard settings (local.settings.json or Appsettings Blade in the Azure portal):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "my general purpose storage account connection string",
    "AzureWebJobsDashboard": "my general purpose storage account connection string",
    "MyOtherStorageAccountConnectionstring": "my blob only storage connection string"
  }
}
If you want to create a BlobTrigger Function, you can specify another connection string and create a snapshot every time a blob is created/updated:
[FunctionName("Function1")]
public static async Task Run([BlobTrigger("test-container/{name}",
    Connection = "MyOtherStorageAccountConnectionstring")] CloudBlockBlob myBlob,
    string name, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed blob\n Name: {name}");
    await myBlob.CreateSnapshotAsync();
}
In Visual Studio:
I tried to create a snapshot for a blob-only storage account named joyblobstorage, but it failed. You should get the same error:
Microsoft.Azure.WebJobs.Host: Storage account 'joyblobstorage' is of unsupported type 'Blob-Only/ZRS'. Supported types are 'General Purpose'.
In the portal:
I tried to create a Function App and use the existing storage, but it could not find my blob-only storage account; the Function App setup in the portal does not allow you to select a blob-only storage account.
Conclusion:
It is not possible to create a Function App backed by a blob-only storage account. In the official documentation, you can see the storage account requirements:
When creating a function app in App Service, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage.
Also, in the App settings reference, you can see:
AzureWebJobsStorage
The Azure Functions runtime uses this storage account connection string for all functions except for HTTP triggered functions. The storage account must be a general-purpose one that supports blobs, queues, and tables.
AzureWebJobsDashboard
Optional storage account connection string for storing logs and displaying them in the Monitor tab in the portal. The storage account must be a general-purpose one that supports blobs, queues, and tables.
Here is the feedback item where the Azure App Service Team explained the requirements on the storage account; you can refer to it.
I can't find an example of how to create a new container/bucket with a specific location (Singapore) using JClouds. All the examples I found on Google use null as the default location:
azureBlobStore.createContainerInLocation(null, containerName);
Could any of you, JClouds veterans, help me out here?
I haven't used JClouds, but just went and looked at the docs for Azure storage. First thing they show is creation of a blob context:
BlobStoreContext context = new BlobStoreContextFactory().createContext("azureblob", accesskeyid, secretkey);
According to the Javadocs, the params are provider, identity, and credential. That being the case, you probably need to pass the storage account and key from the Windows Azure portal into the 2nd and 3rd parameters. Once you do this, your location is set for you, to the data center where you set up the storage account (in Windows Azure, a storage account is associated with a specific data center upon creation; all containers and objects are then created in that data center as well). I don't think the Location parameter is meaningful when setting up your Azure blob container. That Location parameter is nullable, since it only applies to a subset of cloud providers, based on each provider's API (see the Javadocs for more details).
I was looking for the same answer the other day and just wanted to echo what David said. Here is the code for AzureBlobStore.java in jclouds 1.5
@Override
public boolean createContainerInLocation(Location location, String container) {
  return sync.createContainer(container);
}
As you can see, the location is ignored because your Azure account is already tied to a specific location.
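So passing null is the expected usage. Here is a minimal sketch against the jclouds 1.5-era API; the account name, key, and container name are placeholders:

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.BlobStoreContextFactory;

public class CreateAzureContainer {
    public static void main(String[] args) {
        // identity = storage account name, credential = access key from the portal
        BlobStoreContext context = new BlobStoreContextFactory()
                .createContext("azureblob", "mystorageaccount", "myaccesskey");
        try {
            BlobStore blobStore = context.getBlobStore();
            // the location is ignored for azureblob; the account's data center decides
            blobStore.createContainerInLocation(null, "mycontainer");
        } finally {
            context.close();
        }
    }
}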