Azure Bicep: upload file from pipeline drop to storage blob

I'm trying to write a simple Azure Bicep file for my Azure pipeline, but I'm stuck on a simple task.
In the end, I need to create an Azure Functions app running from a package.
There will be one Azure function that runs on a cron expression.
Here is my workflow: I created an Azure pipeline with the following steps:
Task: .NET Core Restore
Task: .NET Core Build
Task: .NET Core Publish
Publish Artifact: drop
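In YAML the build part looks roughly like this (a simplified sketch; project paths and task versions are placeholders):
steps:
- task: DotNetCoreCLI@2
  displayName: 'Restore'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
- task: DotNetCoreCLI@2
  displayName: 'Build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
- task: DotNetCoreCLI@2
  displayName: 'Publish'
  inputs:
    command: 'publish'
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '--output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'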
Now I have an a.zip file that contains the Azure Functions project.
The next step is an ARM template deployment based on my Bicep file.
In the Bicep file I create:
a storage account
a container inside the storage account
Now I need to put a.zip from the drop into this newly created container.
Why do I need this?
Because later, when I create the function app plan and the function application, I will need to provide the following parameter: WEBSITE_RUN_FROM_PACKAGE = the SAS URL of the blob in the storage account created earlier.
I can't find a way to copy the zip file from the drop to a blob inside the Bicep file.
I tried creating a deployment script like this:
resource fileCopy 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: take('deployscript-upload-blob-${resourceGroup().id}', 24)
  location: location
  kind: 'AzureCLI'
  dependsOn: [
    container
  ]
  properties: {
    azCliVersion: '2.40.0'
    retentionInterval: 'PT1H'
    environmentVariables: [
      {
        name: 'AZURE_STORAGE_ACCOUNT'
        value: storageAccount.name
      }
      {
        name: 'AZURE_STORAGE_KEY'
        secureValue: storageAccount.listKeys().keys[0].value
      }
    ]
    scriptContent: 'az storage blob upload -f ${functionsZip} -c ${containerName} -n functions.zip'
  }
}
where functionsZip is the parameter containing the path to the Drop\a.zip file,
but I always get an error saying the file or folder is not found.
How can I accomplish my end goal?
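The only workaround I can think of is to upload the zip from the pipeline itself after the Bicep deployment, with something like the AzureCLI task below (the service connection, storage account and container names are placeholders, and the connection's service principal would need rights to list the account keys), but I was hoping to keep everything inside the Bicep file:
- task: AzureCLI@2
  displayName: 'Upload function package to blob'
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Runs on the pipeline agent, so the drop folder is available locally.
      az storage blob upload \
        --auth-mode key \
        --account-name mystorageaccount \
        --container-name deployments \
        --file "$(Pipeline.Workspace)/drop/a.zip" \
        --name functions.zip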

Related

Cloud Build: avoid billing by changing the eu.artifacts.<project>.appspot.com bucket to single-region

Using the App Engine standard environment for Python 3.7.
When running the app deploy command, container images are uploaded to Google Cloud Storage in the bucket eu.artifacts.<project>.appspot.com.
This message is printed during app deploy:
Beginning deployment of service [default]...
#============================================================#
#= Uploading 827 files to Google Cloud Storage =#
#============================================================#
File upload done.
Updating service [default]...
The files are uploaded to a multi-region bucket (eu); how do I change this so they are uploaded to a single region?
I'm guessing that a configuration file should be added to the repository to instruct App Engine, Cloud Build or Cloud Storage that the files should be uploaded to a single region.
Is the eu.artifacts.<project>.appspot.com bucket required, or could all files be ignored using the .gcloudignore file?
The issue is similar to this issue How can I specify a region for the Cloud Storage buckets used by Cloud Build for a Cloud Run deployment?, but for app engine.
I'm triggering the cloud build using a service account.
I tried to implement the changes from the solution in the link above, but wasn't able to get rid of the multi-region bucket.
substitutions:
  _BUCKET: unused
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--promote', '--stop-previous-version']
artifacts:
  objects:
    location: 'gs://${_BUCKET}/artifacts'
    paths: ['*']
Command: gcloud builds submit --gcs-log-dir="gs://$BUCKET/logs" --gcs-source-staging-dir="gs://$BUCKET/source" --substitutions=_BUCKET="$BUCKET"
I delete the whole bucket after deploying, which prevents billing:
gsutil -m rm -r gs://us.artifacts.<project-id>.appspot.com
-m - multi-threading/multi-processing (instead of deleting objects one by one, this flag deletes them in parallel)
rm - command to remove objects
-r - recursive
https://cloud.google.com/storage/docs/gsutil/commands/rm
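If you want to automate that cleanup, one option (an untested sketch; the project ID and region prefix are placeholders) is to append a gsutil step to the cloudbuild.yaml from the question:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--promote', '--stop-previous-version']
# Hypothetical cleanup step: remove the multi-region artifacts bucket right
# after the deploy finishes so its contents are not billed.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rm', '-r', 'gs://eu.artifacts.my-project.appspot.com']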
After investigating a little bit more, I want to mention that this kind of bucket is created by the Container Registry product when you deploy a new container (i.e. when you deploy your App Engine application): when you push an image to a registry with a new hostname, Container Registry creates a storage bucket in the specified multi-regional location. This bucket is the underlying storage for the registry. Within a project, all registries with the same hostname share one storage bucket.
Based on this, it is not accessible by default and contains the container images that are written when you deploy a new container. It's not recommended to modify it, because the artifacts bucket is meant to contain deployment images, and changing it may affect your app.
Finally, something curious that I found: when you create a default bucket (as is the case with the aforementioned bucket), you also get a staging bucket with the same name, prefixed with staging. You can use this staging bucket for temporary files used for staging and test purposes; it has a 5 GB limit and is automatically emptied on a weekly basis.

How to save file into Azure Storage Account in the release pipeline?

I am trying to publish 2 files (1.ext, 2.ext) into a general-purpose v2 storage account that I've just created; I've created a file share inside of it.
Question: How do I save/publish a file into a storage account from an Azure DevOps pipeline? Which task should I use? The Azure file copy task seems to have only two types of storage available:
Yes, you can use the Azure file copy task.
Run the pipeline, and the file will be uploaded to the target storage account.
Alternatively, you can also use an Azure PowerShell or Azure CLI task to upload files to the storage account. Here are the tutorials for PowerShell and CLI.
Update
The Source could be a file or a folder path, so you can:
Filter the target files with a PowerShell task in a previous step and copy them to a temporary folder.
Upload the whole folder.
For example, I just uploaded the whole project source tree by setting the path to $(Build.SourcesDirectory).
All the files were then uploaded to the storage account.
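For reference, a minimal YAML sketch of the file copy task (the service connection, storage account and container names are placeholders, and the task version may differ):
- task: AzureFileCopy@4
  displayName: 'Copy files to storage account'
  inputs:
    SourcePath: '$(Build.ArtifactStagingDirectory)/files/*'   # e.g. folder where 1.ext and 2.ext were staged
    azureSubscription: 'my-service-connection'
    Destination: 'AzureBlob'
    storage: 'mystorageaccount'
    ContainerName: 'mycontainer'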

How to save a blob in a non-existent container

I am developing a Logic App which is scheduled every minute and creates a storage blob with logging data. My problem is that I must create a container for the blob manually to get it working. If I create a blob within a non-existent container,
I get the following error:
"body": {
"status": 404,
"message": "Specified container tmp does not exist.\r\nclientRequestId: 1111111-2222222-3333-00000-4444444444",
"source": "azureblob-we.azconn-we.p.azurewebsites.net"
}
This Stack Overflow question suggested putting the container name in the blob name,
but if I do so I get the same error message (also with /tmp/log1.txt):
{
  "status": 404,
  "message": "Specified container tmp does not exist.\r\nclientRequestId: 1234-8998989-a93e-d87940249da8",
  "source": "azureblob-we.azconn-we.p.azurewebsites.net"
}
You may say that is not a big deal, but I have to deploy this Logic App multiple times with an ARM template, and there is no possibility to create a container in a storage account from the template (see this link).
Do I really need to create the container manually, or write an extra Azure function to check whether the container exists?
I have run into this before; you have a couple of options:
You write something that runs after the ARM template to provision the container, for example an extra release step as sketched below. This is simple enough if you are provisioning through VSTS release management, where you can just add another step.
You move from ARM templates to provisioning in PowerShell, where you have the power to create the container. See New-AzureStorageContainer.
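For option 1, a rough sketch of such an extra step (the service connection, account and container names are placeholders; an Azure CLI step is just one way to do it):
- task: AzureCLI@2
  displayName: 'Ensure blob container exists'
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Idempotent: does nothing if the container already exists.
      # The service principal needs a data-plane role such as Storage Blob Data Contributor.
      az storage container create \
        --name tmp \
        --account-name mystorageaccount \
        --auth-mode login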

Build ARM template in VSTS fails with error about '_artifactsLocation'

Normally, when I deploy through Visual Studio, _artifactsLocation shows up when editing the parameters, so what should this be in VSTS and how do I set it?
2018-02-21T08:49:46.1918199Z ##[error]Deployment template validation failed: 'The value for the template parameter '_artifactsLocation' at line '1' and column '182' is not provided. Please see https://aka.ms/arm-deploy/#parameter-file for usage details.'.
2018-02-21T08:49:46.1919769Z ##[error]Task failed while creating or updating the template deployment.
You can specify it in the parameters file, then specify the file path in the Template parameters input box of the Azure Resource Group Deployment task, if that is what you are using.
The parameters can also be overridden by specifying them in the Override template parameters input box of the Azure Resource Group Deployment task.
If you are calling a script through the Azure PowerShell task, you can specify it in the arguments: -ArtifactStagingDirectory. Related issue: The value for the template parameter '_artifactsLocation' is not provided.
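For example, a rough sketch of overriding both values from the task itself (the service connection, resource group, URL and variable names are placeholders; the task version may differ):
- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy ARM template'
  inputs:
    azureSubscription: 'my-service-connection'
    resourceGroupName: 'my-rg'
    location: 'West Europe'
    csmFile: 'azuredeploy.json'
    csmParametersFile: 'azuredeploy.parameters.json'
    overrideParameters: '-_artifactsLocation "https://mystorage.blob.core.windows.net/artifacts/" -_artifactsLocationSasToken "$(artifactsSasToken)"'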
This sounds like you are using the Azure Resource Group deployment template from VS to deploy via VSTS.
It uses MSDeploy as part of the ARM template deployment to deploy your service.
The PowerShell script generated by the VS project template uploads a ZIP file containing your service to blob storage, and puts the URL and other information into _artifactsLocation and the other ARM template parameters.
Instead of doing that, you can remove the artifacts-related parameters and the MSDeploy resource from the ARM template. Then the template contains ONLY infrastructure-related resources.
After this, add a "Deploy to App Service" step in the VSTS release pipeline after the ARM template deployment, as sketched below. That can then be used to deploy your service code.
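A rough sketch of such a step (the service connection, app name and package path are placeholders; the classic "Azure App Service deploy" task would work the same way):
- task: AzureWebApp@1
  displayName: 'Deploy service code to App Service'
  inputs:
    azureSubscription: 'my-service-connection'
    appType: 'webApp'
    appName: 'my-app-service'
    package: '$(Pipeline.Workspace)/drop/*.zip'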
If you are using a separate parameters JSON file, you'll need to initialise _artifactsLocation and _artifactsLocationSasToken there. You can give them empty strings, like:
"_artifactsLocation": {
  "value": ""
},
"_artifactsLocationSasToken": {
  "value": ""
},
They should automatically get their values from a PowerShell script. I'm using the AzureResourceManagerTemplateDeployment@3 task; it would probably work with AzureResourceGroupDeployment@2 as well.

Task to deploy an artifact to a storage container outside of my account

I am currently creating a CI pipeline for the front end of one of our clients.
We need to copy the files coming from our repo to the storage account of the company that manages the operational part (we are only providing the code).
So, the company that will manage the infrastructure has given us the storage account name (testdeploy), the container name (artifact-deploy) and the key (securekey).
I have managed to connect to the storage via Azure Storage Explorer, but now I need to deploy the artifact to this container via the CI.
The problem is, I don't know how, and I can't find documentation on how to proceed; every doc talks about deploying to a container in the same subscription.
But I do not have access to this container; I only have its name and key.
Here is the YAML for what I have already set up; I don't know if it helps:
steps:
- task: AzureFileCopy@2
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/_listes-Azure/buildtest'
    azureSubscription: 'Paiement à l''utilisation(my_subscription)'
    Destination: AzureBlob
    storage: testdeploy
    ContainerName: 'artifact-deploy/front'
    AdditionalArgumentsForBlobCopy: 'securekey'
    outputStorageUri: 'https://testdeply.blob.core.windows.net/'
    outputStorageContainerSasToken: 'securekey'
Of course, when I do this I get this error message:
2019-10-25T10:45:51.1809999Z ##[error]Storage account: fprplistesdeploy not found. The selected service connection 'Service Principal' supports storage accounts of Azure Resource Manager type only.
Since it's not in my subscription scope, the task can't access it.
What am I doing wrong?
I am using the AzureFileCopy task; is it the right one?
How can I set up the AzureFileCopy task for a storage account that is not in my subscription scope, knowing that the only things I have are an account name and a key?
Thanks in advance!
What you basically have to do is create and use a Shared Access Signature (SAS) to deploy resources into this blob container. Since you have the storage account key, you can create a SAS token with Azure Storage Explorer.
Then use Azure Cloud Shell or the Azure CLI on a local machine for testing purposes. Try to copy a file into the blob container using a SAS token for authorization. If you have problems with authorization using a SAS token, you can also test access using Azure Storage Explorer. Such basic tasks are widely known and well documented.
Finally, find a way to run the file copy command you used while testing in an Azure Pipelines task. If the Azure File Copy task does not fit your use case, use a more generic task like the Azure CLI task. From reading the docs, it may be that the Azure File Copy task does not support your use case even though its name suggests it should; I see your point. Find out how to access the artifact produced by the build pipeline and copy the file resources into the storage account. If that basically works, find out how to improve it. Voilà.
So I managed to do it.
It turns out you can't do it via Azure File Copy; that task can't upload to a container outside your subscription.
You must use an Azure CLI task. Here is the script I used:
#!/bin/bash
az storage blob upload --container-name artifact --file $(System.DefaultWorkingDirectory)/artifact_deply/buildtest/front.zip --name front --account-key securekey
I changed all the variables, but the idea is here (I declared the account name in the variable panel of Azure DevOps).
I used the account key because I had errors with the SAS URL, but I think you can easily use an Azure DevOps variable to pass the SAS token.
And I created a task before this one to zip the whole folder, so it's easier to manage.
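For completeness, a rough sketch of the same upload using a SAS token held in a secret pipeline variable instead of the account key (the variable, container and file names are placeholders; the Azure CLI is preinstalled on Microsoft-hosted agents):
- bash: |
    # Upload to the external storage account using only its name and a SAS token.
    az storage blob upload \
      --account-name testdeploy \
      --container-name artifact-deploy \
      --name front.zip \
      --file "$(System.DefaultWorkingDirectory)/artifact_deply/buildtest/front.zip" \
      --sas-token "$SAS_TOKEN"
  displayName: 'Upload artifact to external storage account'
  env:
    SAS_TOKEN: $(externalStorageSasToken)   # secret pipeline variable holding the SAS token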
