Use Azure CLI commands within VM - azure

I am writing a program that uploads a file to Azure Blob storage, creates a virtual machine, and now I want to download and execute that file within the VM. For that I am trying to execute the command az storage blob download <parameters> in the VM, but unfortunately az is not recognized. How can I ensure that the Azure CLI is preinstalled on each new VM? Is there such a possibility provided by Azure? Or should I install the Azure CLI with yum on each VM within my script? Any information or ideas are highly appreciated, thank you.

You need to install the Azure CLI yourself. See the steps here. You can run the installation steps one by one, or put the commands in a script and execute that script at creation time through a VM extension or cloud-init.
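For example, a minimal cloud-init sketch, assuming an Ubuntu image (the aka.ms URL is Microsoft's official Debian/Ubuntu install script; the file name cloud-init.txt is just an assumption, and on RHEL-based images you would script the yum/dnf steps instead):

#cloud-config
runcmd:
  # Install the Azure CLI via Microsoft's install script (Debian/Ubuntu)
  - curl -sL https://aka.ms/InstallAzureCLIDeb | bash

Then pass it when creating the VM:

az vm create --resource-group my-rg --name my-vm --image Ubuntu2204 --custom-data cloud-init.txt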

Related

Get Mount script from Azure file share - Terraform

For every new file share that we create in an Azure Storage account, we get a Connect option; if we click Connect, we get the generated mount scripts for Windows, Linux, and macOS. Is it possible to get that piece of code to mount this file share through Terraform? I could not find it anywhere. Any help on this would be appreciated.
Of course, it's possible. You just need to copy the code into a script and then use the VM extension to execute it inside the VM. It's not complex at all. Here is an example.
But there is one thing you need to pay attention to: the VM extension only supports non-interactive scripts. For example, in the Linux connect code, the command sudo is interactive, so it's not recommended in the VM extension. You can get more details about the VM extension here.
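For reference, the connect code the portal generates looks roughly like the sketch below (storage account, share name, and key are placeholders, not the portal's actual output); the extension runs the script as root, so the sudo prefix can simply be dropped:

# create the mount point and mount the share over SMB 3.0
mkdir -p /mnt/myshare
mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare -o vers=3.0,username=mystorageaccount,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino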

How to upload a file from azure blob storage to Linux VM created on azure

I have one large file in my Azure Blob storage container. I want to move the file from Blob storage to a Linux VM created on Azure. How can I do that using Data Factory, or any PowerShell command?
The easiest way, without any tools, is to generate a SAS token for the blob and run curl.
Generate the SAS, and then run curl:
curl "<blob_sas_url>" -o output.txt
If you need this automated every time, you can generate the SAS URL from a script or just use AzCopy.
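If you have the Azure CLI available, a hedged sketch of scripting it (account, container, and blob names are placeholders; this assumes you are already authenticated or also pass --account-key):

# generate a read-only SAS valid for one hour
end=$(date -u -d '1 hour' '+%Y-%m-%dT%H:%MZ')
sas=$(az storage blob generate-sas --account-name mystorageaccount --container-name mycontainer --name myfile.txt --permissions r --expiry $end --output tsv)
# download the blob with curl
curl "https://mystorageaccount.blob.core.windows.net/mycontainer/myfile.txt?$sas" -o output.txt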
Please reference this blog: How to copy data to VM from blob storage. It gives you a way to solve the problem with Data Factory:
"To anyone who might get into same problem in future, I solved my problem by using 'copy wizard' present in ADF.
We need to install Data Management Gateway on VM and register it before we use 'copy wizard'.
We need to specify blob storage as source and in destination we need to choose 'File Server Share' option. In 'File Server Share' option we need to specify user credentials which I suppose pipeline uses to login to VM, folder on VM where pipeline will copy the data."
From the Azure Blob storage documentation, there is another way that can help you: Mount Blob storage as a file system with blobfuse on Linux.
Blobfuse is a virtual file system driver for Azure Blob storage. Blobfuse allows you to access your existing block blob data in your storage account through the Linux file system. Blobfuse uses the virtual directory scheme with the forward-slash '/' as a delimiter.
This guide shows you how to use blobfuse, and mount a Blob storage container on Linux and access data. To learn more about blobfuse, read the details in the blobfuse repository.
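For reference, mounting with blobfuse looks roughly like this (the paths and config values are placeholders; the config keys are blobfuse's documented ones):

# fuse_connection.cfg
accountName mystorageaccount
accountKey <storage-account-key>
containerName mycontainer

# create the mount point and temp cache, then mount the container
mkdir -p /mnt/blobfuse /mnt/resource/blobfusetmp
blobfuse /mnt/blobfuse --tmp-path=/mnt/resource/blobfusetmp --config-file=/path/to/fuse_connection.cfg -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120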
If you want to use AzCopy, you can reference this document: Transfer data with AzCopy and Blob storage. You can download AzCopy for Linux. It provides commands for uploading and downloading files.
For example, upload file:
azcopy copy "<local-file-path>" "https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-name>"
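The download direction just swaps source and destination, for example:

azcopy copy "https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-name>" "<local-file-path>"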
For PowerShell, you need PowerShell Core 6.x or later to run on all platforms. It works with Windows and Linux virtual machines using Windows PowerShell 5.1 (Windows only) or PowerShell 6 (Windows and Linux).
You can find the PowerShell commands in this document: Quickstart: Upload, download, and list blobs by using Azure PowerShell
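As a hedged sketch of what the download looks like with the Az module (account name, key, and paths are placeholders):

# build a storage context and pull the blob down to the local disk
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
Get-AzStorageBlobContent -Container "mycontainer" -Blob "myfile.txt" -Destination "/tmp/myfile.txt" -Context $ctx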
Here is another link that talks about Copy Files to Azure VM using PowerShell Remoting 6 (Windows and Linux).
Hope this helps.
You have many options to copy content from the blob store to the disk on the VM:
1. Use AzCopy
2. Use Azure Pipelines - File copy task
3. Use PowerShell cmdlets
A lot of content is available on these approaches on SO!
It seems this is not properly documented anywhere, so I am sharing the most basic approach, which is to use the azcopy tool that is available for both Windows and Linux. This approach doesn't need the complexity of creating credentials/tokens.
Download azcopy
It's a simple executable that can be run directly after extraction.
Create a managed identity (system-assigned identity) for your virtual machine. Navigate to VM -> Identity -> turn the Status to 'On' -> Save.
Now the VM can be assigned permission at the following levels:
Storage account
Container (file system)
Resource group
Subscription
For this case, navigate to Storage account -> Access control (IAM) -> Add role assignment -> select the role 'Storage Blob Data Contributor' -> assign access to 'Virtual machine' -> select the desired VM -> Save (a CLI equivalent is sketched after the note below).
NOTE: If you grant access to the VM on the IAM properties of a resource group, the VM will be able to access all the storage accounts in that RG.
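The portal steps above can also be scripted; a hedged Azure CLI sketch (resource names are placeholders):

# enable the system-assigned identity and capture its principal id
principalId=$(az vm identity assign --resource-group my-rg --name my-vm --query systemAssignedIdentity --output tsv)
# grant it Storage Blob Data Contributor scoped to the storage account
az role assignment create --assignee $principalId --role "Storage Blob Data Contributor" --scope $(az storage account show --resource-group my-rg --name mystorageaccount --query id --output tsv)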
Log in to the VM and assume the identity (run the command from the same location where azcopy is located):
For windows : azcopy login --identity
For linux : ./azcopy login --identity
Upload or download the files now:
azcopy cp "source-file" "storageUri/blob-container/" --recursive=true
Example: azcopy cp "C:\test.txt" "https://mystorageaccount.blob.core.windows.net/backup/" --recursive=true
IAM permissions can take a few minutes to propagate. If you change or add permissions/access levels anywhere, run the azcopy login --identity command again to pick up the updated identity.
More info on Azcopy is available here

Creating Windows Image from VM using Packer

I have to create an image from an existing VM using Packer. This is the link I'm following to do so.
Now I have a few doubts about this before proceeding further.
Can I run all these commands remotely?
If yes, where should I install Packer: on the client machine or the remote machine?
If it has to be installed on the remote machine from which the image is being created, is there any workaround for that? I will not have access to install anything on the remote machine.
Nowhere are the VM details mentioned. Does that mean it will automatically pick up the VM details when we run the commands on the VM?
Where can I see the output of the whole process? Will it be available in the Azure portal?
Any inputs on the above questions are appreciated. Thanks!
First of all, there is something you have misunderstood about Packer.
The Azure builder can create either a VHD, or a managed image. If you
are creating a VHD, you must start with a VHD. Likewise, if you want
to create a managed image you must start with a managed image.
It means you must create the image from an image or a VHD, not from a VM.
The answers to your questions:
Yes, you can run the commands remotely, just like the Azure CLI.
You can install Packer on your on-premises machine.
From the description of Packer, it just needs the image information, not the VM details.
You can see the output where you run the Packer command.
Update
When you want to create the image from a VHD file, you can change:
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04.0-LTS",
into:
"image_url": "https://my-storage-account.blob.core.windows.net/path/to/your/custom/image.vhd",
If your VM uses managed disks, pay attention to the custom_managed_image options and to images in Azure. Hope this will be helpful.
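A hedged sketch of what the builder might look like when starting from a managed image (the resource and image names are placeholders; the authentication settings such as client_id, client_secret, subscription_id, and tenant_id are omitted):

"builders": [{
    "type": "azure-arm",
    "custom_managed_image_resource_group_name": "my-rg",
    "custom_managed_image_name": "my-source-image",
    "managed_image_resource_group_name": "my-rg",
    "managed_image_name": "my-new-image",
    "os_type": "Windows",
    "location": "East US",
    "vm_size": "Standard_DS2_v2"
}]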

Unable to locate the repository cloned from git using Azure cloud shell

I opened Azure Cloud Shell and once the command prompt was ready, I tried git clone https://github.com/Azure-Samples/python-docs-hello-world and it was cloned successfully. However, I am unable to locate where the cloned files are. I need help with locating them using Azure Cloud Shell.
The Azure Cloud shell stores the files in a file share within a storage account that you either specified or Azure created for you.
When you use basic settings and select only a subscription, Cloud
Shell creates three resources on your behalf in the supported region
that's nearest to you:
Resource group: cloud-shell-storage-<region>
Storage account: cs<uniqueGuid>
File share: cs-<user>-<domain>-com-<uniqueGuid>
Source.
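In practice, a clone run from the prompt lands in your current working directory, which is your home directory by default, while only the clouddrive folder maps directly to that file share. A quick sketch to locate things (the repository name comes from the question):

ls ~                             # the clone lands here by default
ls ~/python-docs-hello-world     # the cloned repository
ls ~/clouddrive                  # this path is the mounted file share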

Copy and install exe on azure vm via powershell

I'm trying to create an Azure VM, copy an install file to the VM, and then silently install it. I have created a basic Azure Resource Group project and can create and deploy the VM, but I can't figure out how to do everything from the PowerShell script.
It sounds like you could use a custom script extension to do what you want. In your ARM template, you can specify the url for a file and the command to run; Azure will handle getting the file onto your VM and running it based on your command. Here is an example from the Azure Quickstart Templates: https://github.com/Azure/azure-quickstart-templates/tree/master/windows-vm-custom-script
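If you would rather drive it from PowerShell than from the ARM template, a hedged sketch using the Az module (resource names and the installer URL are placeholders):

# attach the Custom Script Extension: it downloads the file and runs the given command on the VM
Set-AzVMCustomScriptExtension `
    -ResourceGroupName "my-rg" `
    -VMName "my-vm" `
    -Location "eastus" `
    -Name "InstallMyApp" `
    -FileUri "https://mystorage.blob.core.windows.net/scripts/install.ps1" `
    -Run "install.ps1"

The install.ps1 script would then launch your installer with whatever silent switch it supports (e.g. /S or /quiet).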
Hope this helps! :)
