Azure Managed Disks - how to create a snapshot

I have an application which deploys a VM with a storage account and disks, and I want to convert it to use managed disks, since that is the future of Azure storage. I am looking at the REST API and I am missing two things:
1. How can I create a snapshot from an existing managed disk? There is an API to create a snapshot, but it only creates an empty one or one from an old unmanaged disk.
2. Can I choose the LUN on which the disk is created?

How can I create a snapshot from an existing managed disk? There is an API to create a snapshot, but it only creates an empty one or one from an old unmanaged disk.
According to your description, I created a test demo to take a snapshot of an existing managed disk (the OS disk), and it works well.
I created a Windows VM that uses a managed disk as the OS disk, then created another managed disk and attached it to the VM.
If you want to create a snapshot of an existing managed disk (one that contains data), I suggest you send a request to the URL below.
Url: https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/snapshots/{snapshotName}?api-version={api-version}
Method: PUT
Parameters:
subscriptionId - The identifier of your subscription where the snapshot is being created.
resourceGroup - The name of the resource group that will contain the snapshot.
snapshotName - The name of the snapshot being created. The name can't be changed after the snapshot is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The maximum name length is 80 characters.
api-version - The version of the API to use. The current version is 2016-04-30-preview.
Request content:
{
  "properties": {
    "creationData": {
      "createOption": "Copy",
      "sourceUri": "/subscriptions/{subscriptionId}/resourceGroups/{YourResourceGroup}/providers/Microsoft.Compute/disks/{YourManagedDiskName}"
    }
  },
  "location": "eastasia"
}
For more details, you can refer to the following C# code:
json.txt:
{
  "properties": {
    "creationData": {
      "createOption": "Copy",
      "sourceUri": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/BrandoSecondTest/providers/Microsoft.Compute/disks/BrandoTestVM"
    }
  },
  "location": "eastasia"
}
Code:
static void Main(string[] args)
{
    // Read the request body from the JSON file.
    string body = File.ReadAllText(@"D:\json.txt");

    string tenantId = "xxxxxxxxxxxxxxxxxxxxxxxx";
    string clientId = "xxxxxxxxxxxxxxxxxxxxxxxx";
    string clientSecret = "xxxxxxxxxxxxxxxxxxxx";
    string authContextURL = "https://login.windows.net/" + tenantId;
    var authenticationContext = new AuthenticationContext(authContextURL);
    var credential = new ClientCredential(clientId, clientSecret);
    var result = authenticationContext.AcquireTokenAsync(resource: "https://management.azure.com/", clientCredential: credential).Result;
    if (result == null)
    {
        throw new InvalidOperationException("Failed to obtain the JWT token");
    }
    string token = result.AccessToken;

    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxxxxx/providers/Microsoft.Compute/snapshots/BrandoTestVM_snapshot2?api-version=2016-04-30-preview");
    request.Method = "PUT";
    request.Headers["Authorization"] = "Bearer " + token;
    request.ContentType = "application/json";

    // Write the JSON body to the request stream.
    try
    {
        using (var streamWriter = new StreamWriter(request.GetRequestStream()))
        {
            streamWriter.Write(body);
            streamWriter.Flush();
            streamWriter.Close();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Get the response.
    var httpResponse = (HttpWebResponse)request.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        Console.WriteLine(streamReader.ReadToEnd());
    }
    Console.ReadLine();
}
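If the deployment is managed with Terraform instead of calling the REST API directly, the same Copy operation can be sketched with the azurerm provider's azurerm_snapshot resource; the {Your...} placeholders below mirror the REST example and are not from the original post:

```hcl
# Sketch: snapshot an existing managed disk with Terraform.
# All {Your...} values are placeholders.
resource "azurerm_snapshot" "example" {
  name                = "{YourSnapshotName}"
  location            = "eastasia"
  resource_group_name = "{YourResourceGroup}"
  create_option       = "Copy"
  source_uri          = "/subscriptions/{subscriptionId}/resourceGroups/{YourResourceGroup}/providers/Microsoft.Compute/disks/{YourManagedDiskName}"
}
```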
Can I choose the LUN on which the disk is created?
Do you mean you want to use an azuredeploy template to select the LUN of the disk?
If so, I suggest you refer to the following JSON example to see how to build the deployment content of the VM and select its LUN.
For more details, you can refer to the deployment template JSON below (partial):
"diskArray": [
  {
    "name": "datadisk1",
    "lun": 0,
    "vhd": {
      "uri": "[concat('http://', variables('storageAccountName'),'.blob.core.windows.net/vhds/', 'datadisk1.vhd')]"
    },
    "createOption": "Empty",
    "caching": "[variables('diskCaching')]",
    "diskSizeGB": "[variables('sizeOfDataDisksInGB')]"
  }
]
For more details, you can refer to the following link:
201-vm-dynamic-data-disks-selection/azuredeploy.json
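For comparison, if the VM is managed with Terraform rather than a raw ARM template, the LUN can be chosen explicitly when attaching a managed data disk. A minimal sketch, assuming the azurerm provider; the resource group and VM id are placeholders:

```hcl
resource "azurerm_managed_disk" "datadisk1" {
  name                 = "datadisk1"
  location             = "eastasia"
  resource_group_name  = "{YourResourceGroup}" # placeholder
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 100
}

resource "azurerm_virtual_machine_data_disk_attachment" "datadisk1" {
  managed_disk_id    = azurerm_managed_disk.datadisk1.id
  virtual_machine_id = "{YourVmId}" # placeholder: id of the target VM
  lun                = 0            # the LUN the disk is attached on
  caching            = "ReadWrite"
}
```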

New VMs can be created in the same resource group or in a different resource group using a snapshot.
If the VM is to be created in a different RG, make sure the locations are the same; Azure supports this only within the same location.
Take a snapshot of the OS disk:
Disks -> click on the OS disk -> Create snapshot
Enter the name.
Select the resource group where the snapshot has to be created.
(In case of a different RG, make sure the locations are the same.)
Once the snapshot is created, update the commands below as required and run them from a terminal (Azure CLI - ref: [link][1]).
# Provide the subscription Id of the subscription
subscriptionId=88888-8888-888888-8888-8888888

# Provide the name of your resource group
resourceGroupName=RG_name

# Provide the name of the snapshot that will be used to create the managed disk
# (this was created in the previous step)
snapshotName=OSDISK_snapshot

# Provide the name of the managed disk which will be created
osDiskName=MY_OSDISK

# Provide the size of the disk in GB. It should be greater than the VHD file size.
diskSize=30

# Provide the storage type for the managed disk: Premium_LRS or Standard_LRS.
storageType=Premium_LRS

# Set the context to the subscription Id where the managed disk will be created
az account set --subscription $subscriptionId

# Get the snapshot Id
snapshotId=$(az snapshot show --name $snapshotName --resource-group $resourceGroupName --query [id] -o tsv)

# Create the managed disk from the snapshot
az disk create -n $osDiskName -g $resourceGroupName --size-gb $diskSize --sku $storageType --source $snapshotId
The above command will create a managed disk in the RG.
Select the created disk from the list under the RG and click on Create VM.
Enter the name, select the RG, select the size, and so on, then click on Create.
Make sure the NSG has all the inbound ports open:
HTTP - 80, SSH - 22
Once the VM is created, select the VM from the RG's resource list.
Scroll down to "Run command" and run the commands below in case SSH and HTTP are not accessible:
sudo service apache2 restart
sudo service ssh restart
This should resolve the issue with access from the browser and the terminal.
In case SSH is still not working, run the command below:
rm /run/nologin
Now access the VM from the terminal via SSH.
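As an aside, the snapshot-to-managed-disk step from the walkthrough above can also be expressed in Terraform. A sketch, assuming the snapshot OSDISK_snapshot already exists in the resource group and the azurerm provider is configured:

```hcl
data "azurerm_snapshot" "os_snapshot" {
  name                = "OSDISK_snapshot"
  resource_group_name = "RG_name"
}

resource "azurerm_managed_disk" "os_disk" {
  name                 = "MY_OSDISK"
  location             = "eastus" # must match the snapshot's location
  resource_group_name  = "RG_name"
  storage_account_type = "Premium_LRS"
  create_option        = "Copy"
  source_resource_id   = data.azurerm_snapshot.os_snapshot.id
  disk_size_gb         = 30
}
```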

Related

Unable to create a blob in a container

Scenario: unable to deploy a blob within a container that is created in a storage account, read in as a data source. An obscure error is produced in the GitHub Actions workflow.
Error:
Error: creating Blob "xrdpdeploy.sh" (Container "morpheus-tinkering-csecontainer" / Account "***"): opening: open # update available packages
[ERROR] provider.terraform-provider-azurerm_v3.35.0_x5: Response contains error diagnostic: @caller=github.com/hashicorp/terraform-plugin-go@v0.14.1/tfprotov5/internal/diag/diagnostics.go:55 @module=sdk.proto diagnostic_detail= diagnostic_severity=ERROR tf_proto_version=5.3 diagnostic_summary="creating Blob "xrdpdeploy.sh" (Container "morpheus-tinkering-csecontainer" / Account "***"): opening: open # update available packages
Hi folks, given the above scenario I am stuck; do you have any suggestions to help me upload the blob object correctly? The Terraform config is shown below:
resource "azurerm_storage_container" "cse_container" {
  name                  = "${local.naming_prefix}csecontainer"
  storage_account_name  = data.azurerm_storage_account.storage_account.name
  container_access_type = "blob"
}

#-------------------------------------------------------------
# storage container blob script used to run as a custom script extension
resource "azurerm_storage_blob" "cse_blob" {
  name                   = "xrdpdeploy.sh"
  storage_account_name   = data.azurerm_storage_account.storage_account.name
  storage_container_name = azurerm_storage_container.cse_container.name
  type                   = "Block"
  access_tier            = "Hot"
  # absolute path to file on local system
  source = file("${path.module}/cse-script/xrdpdeploy.sh")
  # explicit dependency on the storage container, so it is deployed first prior to uploading the block blob
  depends_on = [
    azurerm_storage_container.cse_container
  ]
}

#-------------------------------------------------------------
data "azurerm_storage_account" "storage_account" {
  name                = "morpheuszcit10394"
  resource_group_name = "zimcanit-morpheus-tinkering-rg"
}
#-------------------------------------------------------------
I'm using Azure Terraform provider version 3.35.0.
What I've done: ensured the container is being created correctly, explicitly set the access tier, and even tried dropping the Azure provider version to 3.30.0.
Thanks in advance!
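One thing worth checking, based on the error (the provider appears to be opening the file contents as if they were a path): in the azurerm provider, the source argument of azurerm_storage_blob expects a path to a local file, while source_content takes the content itself. A possible fix, sketched and not verified against this environment:

```hcl
resource "azurerm_storage_blob" "cse_blob" {
  name                   = "xrdpdeploy.sh"
  storage_account_name   = data.azurerm_storage_account.storage_account.name
  storage_container_name = azurerm_storage_container.cse_container.name
  type                   = "Block"
  access_tier            = "Hot"

  # Pass the path itself; the provider opens and uploads the file.
  source = "${path.module}/cse-script/xrdpdeploy.sh"

  # Or, alternatively, upload the content inline:
  # source_content = file("${path.module}/cse-script/xrdpdeploy.sh")
}
```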

Azure Disk Encryption with Terraform for multiple disks

So I can encrypt the OS disk with Terraform, from what I have seen on this site. But how do I encrypt the data disks as well? I thought "VolumeType": "All" would cover all disks, but that did not happen. The code below works for encrypting the OS disk; what do I need to do for multiple disks? I am stuck.
Thanks!
provider "azurerm" {
  features {}
}

data "azurerm_key_vault" "keyvault" {
  name                = "testkeyvault1"
  resource_group_name = "testRG1"
}

resource "azurerm_virtual_machine_extension" "vmextension" {
  name                 = "DiskEncryption"
  virtual_machine_id   = "/subscriptions/<sub id>/resourceGroups/TESTRG1/providers/Microsoft.Compute/virtualMachines/testvm-1"
  publisher            = "Microsoft.Azure.Security"
  type                 = "AzureDiskEncryption"
  type_handler_version = "2.2"
  #auto_upgrade_minor_version = true

  settings = <<SETTINGS
    {
      "EncryptionOperation": "EnableEncryption",
      "KeyVaultURL": "${data.azurerm_key_vault.keyvault.vault_uri}",
      "KeyVaultResourceId": "${data.azurerm_key_vault.keyvault.id}",
      "KeyEncryptionKeyURL": "https://testkeyvault1-1.vault.azure.net/keys/testKey/314c507de8a047a5bfeeb477efcbff60",
      "KekVaultResourceId": "${data.azurerm_key_vault.keyvault.id}",
      "KeyEncryptionAlgorithm": "RSA-OAEP",
      "VolumeType": "All"
    }
SETTINGS

  tags = {
    Environment = "test"
  }
}
I tested your code on a newly created VM with 2 data disks and it was the same for me as well: if I keep "VolumeType": "All", only the OS disk gets ADE enabled and not the data disks, as verified from the portal or the Azure CLI.
The solution is as follows:
Make sure the attached data disks are added as volumes and are formatted from within the VM before adding the extension from Terraform.
Once the above is done and you run terraform apply on your code, after a successful apply it will be reflected in the portal as well as inside the VM.

Terraform - When creating an EBS snapshot, how to provide permission for an account number?

How do I add permissions for a different account when creating an EBS snapshot with Terraform?
resource "aws_ebs_snapshot" "example_snapshot" {
  volume_id = "${aws_ebs_volume.example.id}"

  tags = {
    Name = "HelloWorld_snap"
  }
}
From the console we can add the account number directly; the question is how to do the same in Terraform.
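In Terraform, the console action of sharing a snapshot with another account maps to the AWS provider's aws_snapshot_create_volume_permission resource; a sketch, with a placeholder account number:

```hcl
# Grant another AWS account permission to create volumes from the snapshot.
resource "aws_snapshot_create_volume_permission" "example_perm" {
  snapshot_id = aws_ebs_snapshot.example_snapshot.id
  account_id  = "111122223333" # placeholder: target account number
}
```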

Provider information in Azure to run a Terraform deployment?

I am unable to find information regarding azure provider information in ARM or using azure CLI?
I looked in the portal and google but none are providing the information?
I need the provider information so I can connect to Azure and run terraform deployment via azure terraform.
I want to place in a tf file. Can I separate the below into a single tf file and place other resources, such as the actual deployment of vnet, subnets, Iaas deployment, public IP, etc. all in separate tf files?
provider "azurerm" {
  subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

resource "azurerm_resource_group" "myterraformgroup" {
  name     = "myResourceGroup"
  location = "eastus"

  tags = {
    environment = "Terraform Demo"
  }
}
I am trying to find the client_id and client_secret.
Is id the same as subscription_id?
What is isDefault = true? What is the difference from isDefault = false?
Can I assume false is the free trial and true is the actual pay-as-you-go?
This output appeared automatically when I logged in from the Azure CLI:
[
  {
    "cloudName": "AzureCloud",
    "id": "21eb90c5-a6ed-4819-a2d0-XXXXXXXXXXXXXX",
    "isDefault": true,
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "1d6cd91f-d633-4291-8eca-XXXXXXXXXXX",
    "user": {
      "name": "samename01@yahoo.com",
      "type": "user"
    }
  },
  {
    "cloudName": "AzureCloud",
    "id": "b6d5b1ee-7327-42a0-b8e3-XXXXXXXXXXXXXX",
    "isDefault": false,
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "1d6cd91f-d633-4291-8eca-XXXXXXXXXXXX",
    "user": {
      "name": "samename01@yahoo.com",
      "type": "user"
    }
  }
]
Can I separate the below into a single tf file and place other resources, such as the actual deployment of the vnet, subnets, IaaS deployment, public IP, etc., all in separate tf files?
Yes, you can separate Terraform configuration out into as many files as you'd like, as long as those files are in the same directory. So in your case it sounds like you might want a file for each resource (providers.tf, vnet.tf, subnet.tf, etc.)
I am trying to find the "client_id" and "client_secret". Is "id" the same as subscription_id? What is isDefault = true? What is the difference from isDefault = false? Can I assume false is the free trial and true is the actual pay-as-you-go?
This has nothing to do with billing and payments; you can ignore the isDefault flag, since it's not important. What is important is the subscription id: a basic Terraform configuration authenticates against one Azure subscription, and billing/payments in Azure are at the subscription level.
client_id is the application id of the service principal which you will use to authenticate Terraform in Azure. The instructions for setting that up are fairly simple.

Issue installing the DSC extension on an Azure VM during deployment using Terraform

I am trying to use the information in this article:
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-template#default-configuration-script
to onboard a VM to Azure Automation at deployment time and apply a configuration.
I am using Terraform to do the deployment; below is the code I am using for the extension:
resource "azurerm_virtual_machine_extension" "cse-dscconfig" {
  name                 = "${var.vm_name}-dscconfig-cse"
  location             = "${azurerm_resource_group.my_rg.location}"
  resource_group_name  = "${azurerm_resource_group.my_rg.name}"
  virtual_machine_name = "${azurerm_virtual_machine.my_vm.name}"
  publisher            = "Microsoft.Powershell"
  type                 = "DSC"
  type_handler_version = "2.76"
  depends_on           = ["azurerm_virtual_machine.my_vm"]

  settings = <<SETTINGS
    {
      "configurationArguments": {
        "RegistrationUrl": "${var.endpoint}",
        "NodeConfigurationName": "VMConfig"
      }
    }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "configurationArguments": {
        "registrationKey": {
          "userName": "NOT_USED",
          "Password": "${var.key}"
        }
      }
    }
PROTECTED_SETTINGS
}
I am getting the RegistrationURL value at execution time by running the command below and passing the value into Terraform:
$endpoint = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).Endpoint
I am getting the Password value at execution time by running the command below and passing the value into Terraform:
$key = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).PrimaryKey
I can tell from the logs on the VM that the extension is getting installed but never registers with the Automation Account.
Figured out what the problem was. The documentation is thin on details in some areas, so it really was by trial and error that I discovered what was causing the problem: I had the wrong value in the NodeConfigurationName property. What the documentation says about this property: "Specifies the node configuration in the Automation account to assign to the node." Not having much experience with DSC, I interpreted this to mean the name of the configuration as seen in the Configurations section of the State configuration (DSC) blade of the Automation account in the Azure portal.
What the NodeConfigurationName property is really referring to is the node definition inside the configuration, and it should be in the format ConfigurationName.NodeName. As an example, the name of my configuration is VMConfig, and in the config source I have a Node block defined called localhost. So the value of the NodeConfigurationName property should be VMConfig.localhost.
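To make the fix concrete, only the NodeConfigurationName value in the extension's settings block needs to change (assuming, as described above, a configuration named VMConfig with a Node block called localhost):

```hcl
  settings = <<SETTINGS
    {
      "configurationArguments": {
        "RegistrationUrl": "${var.endpoint}",
        "NodeConfigurationName": "VMConfig.localhost"
      }
    }
SETTINGS
```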
