I have created a VMSS and added an additional disk from disk storage, but it's not available in the VM's Disk Management the way it usually is for a normal VM (one not from a VMSS).
So how can I attach the new data disk to my VM from the VMSS?
There are a bunch of tutorials (in the official docs) available for this:
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-disks-cli
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-disks-powershell
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks
PowerShell:
# Get scale set object
$vmss = Get-AzVmss `
-ResourceGroupName "myResourceGroup" `
-VMScaleSetName "myScaleSet"
# Attach a 128 GB data disk to LUN 2
Add-AzVmssDataDisk `
-VirtualMachineScaleSet $vmss `
-CreateOption Empty `
-Lun 2 `
-DiskSizeGB 128
# Update the scale set to apply the change
Update-AzVmss `
-ResourceGroupName "myResourceGroup" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $vmss
Azure CLI:
az vmss disk attach \
--resource-group myResourceGroup \
--vmss-name myScaleSet \
--size-gb 128
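Note that Update-AzVmss and az vmss disk attach only change the scale set model; if the upgrade policy is Manual, each instance must be upgraded to the latest model before the disk appears. Even then the new disk arrives raw, which is why Disk Management shows nothing ready to use. A minimal sketch, assuming a Windows instance (the names and volume label are examples):
# If the upgrade policy is Manual, push the new model to an instance first.
Update-AzVmssInstance -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -InstanceId "0"
# Then, inside the instance: initialize the raw disk, partition it, and format it as NTFS.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -AssignDriveLetter -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisk" -Confirm:$false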
Check this link for the ARM template: Data Disk Link
You can find detailed options for how to use data disks, including prepopulated data disks, here:
Azure virtual machine scale sets and attached data disks
Azure VM Scale Sets attach-detach disk preview
I have a class that does some extract, transform, and load work on a dataset spread across several JSON files.
This process works OK, but I have to run it manually every month: I submit a Spark application from IntelliJ (submitting a Scala singleton object with the transformation).
So I'm trying to automate this process, but I haven't found documentation or a tutorial that explains which service is best suited to accomplish this.
The process should:
Create an HDInsight Spark cluster
Run the process (a Scala class)
Delete the HDInsight Spark cluster created earlier
I've searched, but the links I find (looking for "Create on demand HD insight spark cluster") are the following:
Access datalake from Azure datafactory V2 using on demand HD Insight cluster
How to create Azure on demand HD insight Spark cluster using Data Factory
Other options I've searched:
Host and run your PowerShell scripts in Azure
Azure Logic Apps
Azure Automation
Thanks!
Here are the steps for the process you want:
Create an HDInsight Spark cluster
Using PowerShell, it is easy to create an HDInsight cluster; here is sample code:
### Create a Spark 2.3 cluster in Azure HDInsight
# Default cluster size (# of worker nodes), version, and type
$clusterSizeInNodes = "1"
$clusterVersion = "3.6"
$clusterType = "Spark"
# Create the resource group
$resourceGroupName = Read-Host -Prompt "Enter the resource group name"
$location = Read-Host -Prompt "Enter the Azure region to create resources in, such as 'Central US'"
$defaultStorageAccountName = Read-Host -Prompt "Enter the default storage account name"
New-AzResourceGroup -Name $resourceGroupName -Location $location
# Create an Azure storage account and container
# Note: Storage account kind BlobStorage can only be used as secondary storage for HDInsight clusters.
New-AzStorageAccount `
-ResourceGroupName $resourceGroupName `
-Name $defaultStorageAccountName `
-Location $location `
-SkuName Standard_LRS `
-Kind StorageV2 `
-EnableHttpsTrafficOnly 1
$defaultStorageAccountKey = (Get-AzStorageAccountKey `
-ResourceGroupName $resourceGroupName `
-Name $defaultStorageAccountName)[0].Value
$defaultStorageContext = New-AzStorageContext `
-StorageAccountName $defaultStorageAccountName `
-StorageAccountKey $defaultStorageAccountKey
# Create a Spark 2.3 cluster
$clusterName = Read-Host -Prompt "Enter the name of the HDInsight cluster"
# Cluster login is used to secure HTTPS services hosted on the cluster
$httpCredential = Get-Credential -Message "Enter Cluster login credentials" -UserName "admin"
# SSH user is used to remotely connect to the cluster using SSH clients
$sshCredentials = Get-Credential -Message "Enter SSH user credentials" -UserName "sshuser"
# Set the storage container name to the cluster name
$defaultBlobContainerName = $clusterName
# Create a blob container. This holds the default data store for the cluster.
New-AzStorageContainer `
-Name $defaultBlobContainerName `
-Context $defaultStorageContext
$sparkConfig = New-Object "System.Collections.Generic.Dictionary``2[System.String,System.String]"
$sparkConfig.Add("spark", "2.3")
# Create the HDInsight cluster
New-AzHDInsightCluster `
-ResourceGroupName $resourceGroupName `
-ClusterName $clusterName `
-Location $location `
-ClusterSizeInNodes $clusterSizeInNodes `
-ClusterType $clusterType `
-OSType "Linux" `
-Version $clusterVersion `
-ComponentVersion $sparkConfig `
-HttpCredential $httpCredential `
-DefaultStorageAccountName "$defaultStorageAccountName.blob.core.windows.net" `
-DefaultStorageAccountKey $defaultStorageAccountKey `
-DefaultStorageContainer $clusterName `
-SshCredential $sshCredentials
Get-AzHDInsightCluster `
-ResourceGroupName $resourceGroupName `
-ClusterName $clusterName
Run the process (a Scala class)
You can refer to this link to submit an application job remotely to the Spark cluster:
https://learn.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-create-standalone-application#run-the-application-on-the-apache-spark-cluster
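If you want this step scripted as well, one option is to post the job to the cluster's Livy REST endpoint instead of submitting interactively. A minimal sketch, reusing $clusterName and $httpCredential from the creation script above; the JAR path and class name are hypothetical placeholders for your own artifact, which is assumed to already be in the cluster's default storage container:
# Submit a Spark batch job through the cluster's Livy endpoint.
$livyUrl = "https://$clusterName.azurehdinsight.net/livy/batches"
$body = @{
file = "wasbs:///example/jars/my-etl-assembly.jar" # placeholder path
className = "com.example.MyEtlJob" # placeholder class
} | ConvertTo-Json
Invoke-RestMethod -Uri $livyUrl `
-Method Post `
-Body $body `
-ContentType "application/json" `
-Headers @{ "X-Requested-By" = "admin" } `
-Credential $httpCredential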
Delete the HDInsight Spark cluster created earlier
To clean up the cluster, you can again use PowerShell; here is sample code for that:
# Removes the specified HDInsight cluster from the current subscription.
Remove-AzHDInsightCluster `
-ResourceGroupName $resourceGroupName `
-ClusterName $clusterName
# Removes the specified storage container.
Remove-AzStorageContainer `
-Name $clusterName `
-Context $defaultStorageContext
# Removes a Storage account from Azure.
Remove-AzStorageAccount `
-ResourceGroupName $resourceGroupName `
-Name $defaultStorageAccountName
# Removes a resource group.
Remove-AzResourceGroup `
-Name $resourceGroupName
Additional reference:
https://learn.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-spark-sql-use-powershell
https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-powershell.md
Hope it helps.
Is it possible to replace the OS disk on a Linux Azure Scale Set VM? I'm trying to restore a many-node cluster from snapshots where each VM's OS and data disks have unique information. I was able to replace the original data disks by modifying the scale set model to have no data disks, manually updating the individual VMs to the latest model, and then adding the recovered data disks to the VM. I have been unsuccessful in modifying the scale set model to have no OS disk (I tried updating with empty StorageProfile or StorageProfile.OsDisk sections: no error, but the model is unchanged). I also tried copying the snapshot over the OS disk, but received a 'Disk xxx not found' error. Is there a way to restore a scale set from snapshots?
You can take a snapshot of a virtual machine scale set instance and create a managed disk from that snapshot.
Steps to achieve that with Azure PowerShell:
Create a snapshot from an instance of a virtual machine scale set:
$rgname = "myResourceGroup"
$vmssname = "myVMScaleSet"
$Id = 0
$location = "East US"
$vmss1 = Get-AzVmssVM -ResourceGroupName $rgname -VMScaleSetName $vmssname -InstanceId $Id
$snapshotconfig = New-AzSnapshotConfig -Location $location -AccountType Standard_LRS -OsType Linux -CreateOption Copy -SourceUri $vmss1.StorageProfile.OsDisk.ManagedDisk.id
New-AzSnapshot -ResourceGroupName $rgname -SnapshotName 'mySnapshot' -Snapshot $snapshotconfig
Create a managed disk from the snapshot:
$snapshotName = "mySnapshot"
$snapshot = Get-AzSnapshot -ResourceGroupName $rgname -SnapshotName $snapshotName
$diskConfig = New-AzDiskConfig -AccountType Premium_LRS -Location $location -CreateOption Copy -SourceResourceId $snapshot.Id
$osDisk = New-AzDisk -Disk $diskConfig -ResourceGroupName $rgname -DiskName ($snapshotName + '_Disk')
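The scale set model itself won't let you swap an instance's OS disk, but you can bring the restored disk up as a standalone VM to recover the node. A minimal sketch, assuming a network interface already exists ($nic is a placeholder you would create or fetch yourself; -Linux matches the Linux scale set in the question):
# Attach the restored managed OS disk to a new standalone VM.
$vmConfig = New-AzVMConfig -VMName "myRestoredVM" -VMSize "Standard_DS1_v2"
$vmConfig = Set-AzVMOSDisk -VM $vmConfig -ManagedDiskId $osDisk.Id -CreateOption Attach -Linux
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
New-AzVM -ResourceGroupName $rgname -Location $location -VM $vmConfig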
Reference doc: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq#how-do-i-take-a-snapshot-of-a-virtual-machine-scale-set-instance
I have an image that I created in Azure that is located in eastus. I want to deploy a VM from that image in a different region (westeurope). I tried this CLI command, but nothing happens:
az vm create --resource-group Automationsystem --name VMEurope --location westeurope --image MyCustomImage --admin-username azureuser --size Standard_F4S --no-wait --ssh-key-value ~/mykey.pub
Does an option exist to deploy a VM from an image in another region?
As Lech Migdal said, you must have the image in the same region as the VM.
For now, images do not support copying to another location. You need to create a new image in the westeurope location. Please refer to the following steps.
1. Use the image to create a VM in the current location.
2. Create a storage account in westeurope.
3. Stop the VM and copy the VM's managed disk to the new storage account.
$sas = Grant-AzureRmDiskAccess -ResourceGroupName "[ResourceGroupName]" -DiskName "[ManagedDiskName]" -DurationInSecond 3600 -Access Read
$destContext = New-AzureStorageContext -StorageAccountName "[StorageAccountName]" -StorageAccountKey "[StorageAccountAccessKey]"
$blobcopy=Start-AzureStorageBlobCopy -AbsoluteUri $sas.AccessSAS -DestContainer "[ContainerName]" -DestContext $destContext -DestBlob "[NameOfVhdFileToBeCreated].vhd"
Note: when you use an image to create a VM, the OS disk is a managed disk.
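The copy runs asynchronously, so before moving on to step 4 you may want to wait for it to finish; the matching legacy cmdlet can poll it:
# Wait for the asynchronous blob copy started above to complete.
$blobcopy | Get-AzureStorageBlobCopyState -WaitForComplete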
4. Use the VHD to create a new VM; you could use a template to do this.
5. Use the VM to create a new image. Please refer to this link; a sketch follows below.
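For step 5, here is a minimal sketch of capturing the new VM as a managed image with the same legacy AzureRm cmdlets; all bracketed names are placeholders, and the guest must already be generalized (sysprep on Windows, waagent -deprovision on Linux) before you run this:
# Deallocate and mark the VM as generalized, then capture it as a managed image.
Stop-AzureRmVM -ResourceGroupName "[ResourceGroupName]" -Name "[VmName]" -Force
Set-AzureRmVM -ResourceGroupName "[ResourceGroupName]" -Name "[VmName]" -Generalized
$vm = Get-AzureRmVM -ResourceGroupName "[ResourceGroupName]" -Name "[VmName]"
$imageConfig = New-AzureRmImageConfig -Location "westeurope" -SourceVirtualMachineId $vm.Id
New-AzureRmImage -Image $imageConfig -ImageName "[NewImageName]" -ResourceGroupName "[ResourceGroupName]"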
I am trying to create a new VM in Azure RM based on the sysprepped capture of an existing VM installation. That is:
$urlOfCapturedImage = <I cannot find this>
...
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $newOsDiskUri `
-CreateOption fromImage -SourceImageUri $urlOfCapturedImage -Windows
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm
My current problem is finding the correct URL for the stored VM image, since it doesn't appear to be stored as a VHD blob in my storage account. Instead, I find it in the Images category, with only limited information shown.
I have tried using the following URL/URIs, but none of them work:
https://<storage-account-name>.blob.core.windows.net/system/Microsoft.Compute/Images/jira-7-sysprep-20170724133831.vhd
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/images/jira-7-sysprep-20170724133831
Does anyone know how to get the proper URL for my VM image? Or could it simply be that I am using the wrong method altogether?
Does anyone know how to get the proper URL for my VM image?
For an Azure VM image, the VHD is managed by Azure, so you cannot get its URL.
Your command is used to create a VM from a storage account (an unmanaged VHD). If you want to create a VM from an image, you can use the following commands to create a VM from a custom image.
$image = Get-AzureRmImage `
-ImageName myImage `
-ResourceGroupName myResourceGroupImages
# Create the initial VM configuration and OS profile ($nic is assumed to already exist)
$vmConfig = New-AzureRmVMConfig -VMName myVMfromImage -VMSize Standard_DS1_v2
$vmConfig = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName myVM -Credential (Get-Credential)
# Here is where we specify that we want to create the VM from an image and provide the image ID
$vmConfig = Set-AzureRmVMSourceImage -VM $vmConfig -Id $image.Id
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
New-AzureRmVM `
-ResourceGroupName myResourceGroupFromImage `
-Location EastUS `
-VM $vmConfig
For more information, please refer to this link.
I'm currently running into difficulty creating an Azure VM from a custom VM image. I am following the guide from Azure here: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-capture-image/
I've used waagent and deprovisioned the machine as instructed, then deallocated, generalized, and captured my machine image (I made some software modifications to the core Ubuntu 16.04 LTS image available from Azure). I have successfully created the template.json file (I can provide it if needed). I then completed all the tasks in the PowerShell script below, as outlined in the article, just extracting the parameters into variables to make things a bit easier.
## Global
$rgName = "testrg"
$location = "eastus"
## Storage
$storageName = "teststore"
$storageType = "Standard_GRS"
## Network
$nicname = "testnic"
$subnetName = "subnet1"
$vnetName = "testnet"
$vnetAddressPrefix = "10.0.0.0/16"
$vnetSubnetAddressPrefix = "10.0.0.0/24"
$ipName = "TestIP"
## Compute
$vmName = "testvm"
$computerName = "testcomputer"
$vmSize = "Standard_D1_v2"
$osDiskName = $vmName + "osDisk"
#template
$fileTemplate = "C:\AzureTemplate\template.json"
azure group create $rgName -l $location
azure network vnet create $rgName $vnetName -l $location
azure network vnet subnet create --resource-group $rgName --vnet-name $vnetName --name $subnetName --address-prefix $vnetSubnetAddressPrefix
azure network public-ip create $rgName $ipName -l $location
azure network nic create $rgName $nicName -k $subnetName -m $vnetName -p $ipName -l $location
azure network nic show $rgName $nicname
azure group deployment create $rgName $computerName -f $fileTemplate
I am able to successfully run all the commands to create the resource group and the network components. However, when I try to run the deployment command at the bottom of the script, I get the following output and it just hangs there indefinitely. Am I using the right approach to create a VM from a custom image? Or is that Azure guide outdated?
azure group deployment create $rgName $computerName -f $fileTemplate
info: Executing command group deployment create
info: Supply values for the following parameters
EDIT: Link to image showing the issue: http://imgur.com/a/Fgh8K
I believe your understanding is not complete. If you look at the last line, it says Supply values for the following parameters.
You need to pass the values for the VM name, the admin user name and password, and the Id of the NIC you created previously. Maybe you should re-read the documentation. Here is the screenshot for your reference from https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-capture-image/#deploy-a-new-vm-from-the-captured-image
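Alternatively, you can supply those values up front so the command does not block on interactive prompts. A sketch using the legacy xplat CLI's -p option; the exact parameter names depend on what your template.json declares, so the ones below are placeholders:
azure group deployment create $rgName $computerName -f $fileTemplate -p '{"vmName":{"value":"testvm"},"adminUserName":{"value":"azureuser"},"adminPassword":{"value":"<password>"},"networkInterfaceId":{"value":"<NIC resource id>"}}'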