How to save a blob in a non-existing container - Azure

I am developing a Logic App which is scheduled every minute and creates a storage blob with logging data. My problem is that I must create the container for the blob manually to get it working. If I create a blob within a non-existing container,
I get the following error:
"body": {
"status": 404,
"message": "Specified container tmp does not exist.\r\nclientRequestId: 1111111-2222222-3333-00000-4444444444",
"source": "azureblob-we.azconn-we.p.azurewebsites.net"
}
This Stack Overflow question suggested putting the container name in the blob name,
but if I do so I get the same error message (also with /tmp/log1.txt):
{
    "status": 404,
    "message": "Specified container tmp does not exist.\r\nclientRequestId: 1234-8998989-a93e-d87940249da8",
    "source": "azureblob-we.azconn-we.p.azurewebsites.net"
}
So you may say that is not a big deal, but I have to deploy this Logic App multiple times with an ARM template, and there is no way to create a container in a storage account from the template (see this link).
Do I really need to create the container manually or write an extra Azure Function to check whether the container exists?

I have run into this before; you have a couple of options:
You write something that runs after the ARM template to provision the container. This is simple enough if you are provisioning through VSTS release management, where you can just add another step.
You move from ARM templates to provisioning in PowerShell, where you can create the container yourself. See New-AzureStorageContainer.
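For example, a post-deployment step along these lines (a minimal sketch with hypothetical resource names; with the newer Az module the equivalents are New-AzStorageContext, Get-AzStorageContainer and New-AzStorageContainer) creates the container only when it is missing:

# Sketch only: create the "tmp" container after the ARM deployment if it does not exist yet.
# "my-rg" and "mystorageaccount" are placeholders for your own resource group and account.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "my-rg" -Name "mystorageaccount")[0].Value
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $key
if (-not (Get-AzureStorageContainer -Name "tmp" -Context $ctx -ErrorAction SilentlyContinue)) {
    New-AzureStorageContainer -Name "tmp" -Context $ctx -Permission Off
}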

Related

Unable to Deploy Flatcar OS on Azure

I was trying to deploy a Flatcar image on Azure, but I am not able to deploy it. The following are the steps I performed:
I downloaded the latest Azure-supported VHD from https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2.
I uploaded this VHD to an Azure storage blob and converted it to an image, as recommended by the Azure guides.
I tried creating a VM from this image. The VM gets created successfully, but one error is shown during creation and the VM deployment is reported as failed (even though it actually succeeds). This is the error I see:
{
    "code": "DeploymentFailed",
    "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
    "details": [
        {
            "code": "VMExtensionHandlerNonTransientError",
            "message": "The handler for VM extension type 'Microsoft.Azure.Diagnostics.LinuxDiagnostic' has reported terminal failure for VM extension 'LinuxDiagnostic' with error message: '[ExtensionOperationError] Non-zero exit code: 1, /var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-3.0.141/diagnostic.py -install\n[stdout]\n\n\n[stderr]\n File \"/var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-3.0.141/diagnostic.py\", line 54\n print 'A local import (e.g., waagent) failed. Exception: {0}\\n' \\\n ^\nSyntaxError: invalid syntax\n'.\r\n \r\n'Install handler failed for the extension. More information on troubleshooting is available at https://aka.ms/VMExtensionLinuxDiagnosticsTroubleshoot'"
        }
    ]
}
I tried going through the link provided, but it didn't help much.
I also tried another approach:
Deployed a Flatcar VM through the Azure Marketplace
Captured a generalized image from this VM
Deployed a VM using the image created in the above step
Even with this approach I am getting the same error.

For now, waagent (the Azure Linux agent) does not support Python 3.x, hence this syntax error. You need to have Python 2.x on your OS to avoid this issue.

Creating Azure Elastic job agent throwing "DatabaseDoesNotExist" Error

I am creating an Azure Elastic Job agent and attaching a newly created, fresh Azure SQL Database to it, but when I execute the process it gives me this error:
"error": {
"code": "DatabaseDoesNotExist",
"message": "Database 'mydatabase' does not exist."
}
Previously we had created it successfully.
Update:
Actually, the issue was that we were selecting the wrong database tier (Hyperscale). When we selected a different tier, it worked perfectly.

I tried to deploy the same resource and it deployed successfully for me (please find the screenshot below). I simply created the SQL server on Azure, then created a database inside it and used that database for the Elastic Job agent.
As per the official document Troubleshoot common Azure deployment errors, the Conflict error occurs when:
You're requesting an operation that isn't allowed in the resource's current state. For example, disk resizing is allowed only when creating a VM or when the VM is deallocated.
You can check whether your database has been deployed successfully without any errors, and once it is deployed, create the Elastic Job agent using that database. Make sure your database configuration matches the Elastic Job requirements and that you have the appropriate permissions.
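As a rough sketch (hypothetical names, assuming the Az.Sql module), you would provision a non-Hyperscale job database first and only then create the agent on top of it:

# Sketch only: create a Standard-tier job database, then the Elastic Job agent that uses it.
New-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
    -DatabaseName "mydatabase" -RequestedServiceObjectiveName "S1"
New-AzSqlElasticJobAgent -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
    -DatabaseName "mydatabase" -Name "my-job-agent"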

ARM Template deploymentScripts custom container instance fileshare name

I have created a template which deploys a deploymentScripts resource to do some work in SQL. Since we locked down SQL at the network level, I needed to create a custom container instance that is connected to my VNet. That container instance needs to have a volume mounted on an Azure Storage Account. The file share on that account is created by the deploymentScripts deployment, but you don't seem to be able to set its name. How do I reliably get that value so I can create the share myself and mount it? Here's the part of the container instance I'm talking about; it's the "shareName": "hvtqyj3nqhygoazscripts" I'm looking for. It seems to be the uniqueString() function concatenated with 'azscripts'. What is the input to the uniqueString() function?
"volumes": [
{
"name": "azscripts",
"azureFile": {
"shareName": "hvtqyj3nqhygoazscripts",
"storageAccountName": "<storename>",
"storageAccountKey": "<key>"
}
}
]
In deploymentScripts (DS), ACI is not treated as a shared resource, so you cannot create it in advance and use it in DS (there is one ACI per DS), but you can use an existing Storage Account, as seen here. File shares are controlled by DS to isolate script content and outputs, so you cannot control the name.

Unable to deploy VMSS as part of an ARM deployment

I hope somebody can guide me with this issue. I do not have issues deploying resources via the web interface. This time I am trying to automate my infrastructure and I am deploying via ARM. All the resources for the Service Fabric cluster I am trying to create are deployed with no issue, except for the VMSS, which throws this error:
{
    "status": "Failed",
    "error": {
        "code": "LinkedAuthorizationFailed",
        "message": "The client has permission to perform action 'Microsoft.KeyVault/vaults/deploy/action' on scope '/subscriptions/xxxxxx/resourcegroups/AllyStage-v2/providers/Microsoft.Compute/virtualMachineScaleSets/StageNode', however the linked subscription 'xxxxx' was not found. "
    }
}
Thanks.
It would help to see the ARM template that you're trying to deploy. However, I suspect the problem is that the resource ID for a resource or subscription isn't resolving correctly. Here is a similar issue from the past.
Also, if you are deploying the ARM template from within another bash/PowerShell script, I suggest you ensure that you have the correct context/subscription set before initiating the template deployment, and verify the scope of permissions of the principal performing the deployment.
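For instance (a sketch with hypothetical template file names, assuming the Az module), explicitly pinning the subscription before the deployment:

# Sketch only: select the subscription that holds the Key Vault and the VMSS, then deploy.
Set-AzContext -Subscription "<subscription-id>"
New-AzResourceGroupDeployment -ResourceGroupName "AllyStage-v2" `
    -TemplateFile ".\cluster.json" -TemplateParameterFile ".\cluster.parameters.json"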

'InvalidContainerGroupUpdate' when using 'Create group container' in Azure Logic Apps

I have been looking into the "Logic Apps designer" of Microsoft Azure for a couple of days. Thank you for your help! I am stuck on the following:
Context
I wanted to perform some actions involving multiple files in a Dropbox. The Logic App was not offering an off-the-shelf solution, so I created a Python script that did exactly what I wanted.
I then decided to create an image of this script in order to be able to use it from the Azure platform within Logic Apps.
The container registry contains the image I pushed to Azure, and I created a container instance that includes only one image, which is the Python script.
Everything works.
Current structure
From what I read, it seems that we can run the container instance by using the action called 'Create group container', then adding an 'Until' action (run until the state is equal to Succeeded), and finally using 'Delete the container group'.
I have a trigger that has been tested and that works.
Issue
When running the Logic App, the 'Create group container' action fails:
"code": "InaccessibleImage",
"message": "The image '<name_of_the_image>' in container group '<name_of_the_group>' is not accessible. Please check the image and registry credential."
Question
How can I correct what seems to be a basic error on my part?
Where can this registry credential be appropriately corrected?
Update
I have tried removing everything, assigning myself the "Owner" role on the container registry, then adding the container instance, assigning myself the "Owner" role on the container instance, and then rebuilding the Logic App. I ran it again and I get the same error.

I figured out the issue.
Since in my case it is a private container registry, I needed to add the following to the action 'Create group container': properties.imageRegistryCredentials.
In it, you are required to enter the following information, which is available under the Access keys of the container registry:
[
    {
        "password": "<yourpassword>",
        "server": "<yourloginserver>",
        "username": "<yourusername>"
    }
]
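If it helps, those values can also be read with PowerShell (a sketch assuming the Az.ContainerRegistry module and that the registry's admin user is enabled; depending on the module version the registry parameter may be -RegistryName instead of -Name):

# Sketch only: read the login server and the admin credentials shown under Access keys.
$registry = Get-AzContainerRegistry -ResourceGroupName "my-rg" -Name "myregistry"
$cred = Get-AzContainerRegistryCredential -ResourceGroupName "my-rg" -Name "myregistry"
$registry.LoginServer   # "server" value
$cred.Username          # "username" value
$cred.Password          # "password" value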
So glad and I hope it helps others!
To set the credentials of ACI inside the 'Create or update container group' task in the Logic App, you need to add a parameter (see the picture).
[Image: add parameter for ACI credentials]
