How to stream Azure PowerShell output to Azure Table storage

I have PowerShell cmdlets whose output is either a list or formatted as a table.
I would like to stream that output to Azure Table storage so it can be accessed by Power BI.
Is this possible?
Thanks

When working with Azure Table storage there is no way to stream data to it directly, but you can write data row by row.
Writing to a table looks like this:
# Install the necessary modules, if needed. The second module was not installed for me
# by running the first command.
# Install-Module AzureRM
# Install-Module AzureRMStorageTable
# Login to Azure RM model
Login-AzureRmAccount
# Some variables
$storageAccountName = "<account name>"
$tableName = "<table name>"
$partitionKey1 = "<partition name>"
$rg = "<resource group name>"
# Get the storage account and its context
$storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $rg `
-Name $storageAccountName
$ctx = $storageAccount.Context
# Create a new table, if needed
# New-AzureStorageTable -Name $tableName -Context $ctx
# Get a reference to the table
$storageTable = Get-AzureStorageTable -Name $tableName -Context $ctx
# Add some rows into the table
Add-StorageTableRow `
    -table $storageTable `
    -partitionKey $partitionKey1 `
    -rowKey "CA" -property @{"username"="Chris";"userid"=1}
See this page for more detailed information on how to work with Azure Storage Tables using PowerShell.
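Since the goal is to push cmdlet output into the table, here is a minimal sketch of that pattern. It reuses $storageTable and $partitionKey1 from above and assumes a hypothetical Get-MyReport cmdlet whose objects expose Region, UserName, and UserId properties; map whatever properties your own cmdlet actually emits.
# Write each object emitted by a cmdlet as one table row
# (Get-MyReport and its properties are placeholders for your own cmdlet)
Get-MyReport | ForEach-Object {
    Add-StorageTableRow `
        -table $storageTable `
        -partitionKey $partitionKey1 `
        -rowKey ([guid]::NewGuid().ToString()) `
        -property @{
            "region"   = $_.Region
            "username" = $_.UserName
            "userid"   = $_.UserId
        }
}
Power BI can then read the rows through its Azure Table Storage connector.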

Related

New automation variable by CLI or Ansible

After creating a runbook and editing its content, I want to create variables and set values for them. How can I do that with Ansible or the Azure CLI?
Please help me.
Azure Automation stores each encrypted variable securely. When you create a variable, you can specify that it be encrypted and stored by Azure Automation as a secure asset.
You set the value with the Set-AzAutomationVariable cmdlet or the internal Set-AutomationVariable cmdlet. Set-AutomationVariable is used in runbooks that run in the Azure sandbox environment or on a Windows Hybrid Runbook Worker.
You can create variables and set their values with a PowerShell script:
$rgName = "ResourceGroup01"
$accountName = "MyAutomationAccount"
$vm = Get-AzVM -ResourceGroupName $rgName -Name "VM01" | Select-Object Name, Location, Tags, Extensions
New-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyComplexVariable" -Encrypted $false -Value $vm
$vmValue = Get-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyComplexVariable"
$vmName = $vmValue.Value.Name
$vmTags = $vmValue.Value.Tags
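To update the variable afterwards, a short sketch (reusing the same resource group and account names from above; the new value is just an example):
# Update the value from outside Automation with the Az module
Set-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyComplexVariable" -Encrypted $false -Value "NewValue"
# Inside a runbook, use the internal cmdlet instead
Set-AutomationVariable -Name "MyComplexVariable" -Value "NewValue"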
Reference: Manage variables in Azure Automation | Microsoft Docs

Resetting an Azure VM deployed from an image using a JSON template

I am deploying Azure VMs (with unmanaged disks) that are based on a VHD image. The JSON templates used for deployment are stored in my Azure subscription.
Sometimes I need to reset the machine to its original state. The manual way to achieve this through the Azure web portal is:
Open the resource group and delete the VM (while keeping the other resources).
Go to the storage account and delete the VHD that served as the OS disk for the machine.
Go back to the Resource group -> Deployment -> select last Deployment -> Redeploy.
I want to do this programmatically using PowerShell. All the steps are quite easily achievable except for the last one - running redeployment.
This is my PowerShell code:
# Authenticate to Azure Account
Login-AzAccount
$vm = Get-AzVM | Out-GridView -Title "Select machine to be reset to factory state" -PassThru
$groupName = $vm.ResourceGroupName
#Stop the VM
Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
#Delete VM
#Remove-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
#Getting storage context, blob name and deleting VHD (blob)
$disk = $vm.StorageProfile.OsDisk
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myStorageAccountResourceGroupName" -Name "myStorageAccountName"
#Get storage context
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageAccount.ResourceGroupName -Name $storageAccount.StorageAccountName)[0].Value
$context = New-AzStorageContext -StorageAccountName $storageAccount.StorageAccountName -StorageAccountKey $storageKey
$container = Get-AzStorageContainer -Context $context -Name 'vhds'
$blobName = $disk.Name + ".vhd"
$blob = Get-AzStorageBlob -Container $container.Name -Context $context -Blob $blobName
#Delete Blob
$blob | Remove-AzStorageBlob
Now for the last step: I can get the last resource group deployment and set up a new deployment with the -RollbackToLastDeployment parameter.
#Redeploy Group
$deployments = Get-AzResourceGroupDeployment -ResourceGroupName $groupName
$deployment = $deployments[$deployments.Count - 1]
New-AzResourceGroupDeployment -Name $deployment.DeploymentName -ResourceGroupName $groupName -TemplateFile <Expects template in local storage> -RollbackToLastDeployment
The problem is that the New-AzResourceGroupDeployment command expects a JSON template that is on my local disk, but I have my templates stored in the Azure subscription.
Is there any way to use a template that is located in Azure subscription for redeployment of a resource group?
No matter where the template file is located, you can copy the template into a local .json file, upload it to storage, and then use the -TemplateUri parameter to deploy the remote template.
Sample:
Set-AzCurrentStorageAccount -ResourceGroupName ManageGroup -Name {your-unique-name}
# get the URI with the SAS token
$templateuri = New-AzStorageBlobSASToken -Container templates -Blob storage.json -Permission r `
-ExpiryTime (Get-Date).AddHours(2.0) -FullUri
# provide URI with SAS token during deployment
New-AzResourceGroup -Name ExampleGroup -Location "South Central US"
New-AzResourceGroupDeployment -ResourceGroupName ExampleGroup -TemplateUri $templateuri
For more details, you could refer to this link.
Update:
It seems we cannot find the URI of the Template (preview) in the portal. My workaround is to copy the template into a local .json file manually, upload it to Azure Blob storage, and then use the sample above.
Follow these steps:
1. In the portal, click View Template; you can copy the template and save it as a local .json file.
2. Go to a container in your storage account and upload the .json file.
3. Click the ... of your .json file -> Generate SAS -> Generate blob SAS token and URL, and copy the Blob SAS URL; that is the $templateuri you need in New-AzResourceGroupDeployment -ResourceGroupName ExampleGroup -TemplateUri $templateuri. Alternatively, you can generate it with New-AzStorageBlobSASToken as in the sample above.
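Tying this back to the original reset scenario, the final redeploy step against the remote template could look roughly like the sketch below ($groupName and $templateuri come from the snippets above; the deployment name is just an example, and -RollbackToLastDeployment is optional):
# Redeploy the resource group from the template stored in blob storage
New-AzResourceGroupDeployment -Name "ResetDeployment" `
    -ResourceGroupName $groupName `
    -TemplateUri $templateuri `
    -RollbackToLastDeployment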

How to properly set up Continuous Exports on App Insights with powershell script?

I'm following the Microsoft Documentation on how to set up Continuous Exports for AppInsights on Azure.
My current script looks like this:
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$True)]
    [String]$resourceGroupName,
    [Parameter(Mandatory=$True)]
    [String]$appInsightsName,
    [Parameter(Mandatory=$True)]
    [String[]]$docTypes,
    [Parameter(Mandatory=$True)]
    [String]$storageAccountName,
    [Parameter(Mandatory=$True)]
    [String]$continuousExportContainerName
)
Login-AzureSubscription > $Null
$storage = Get-AzureRmStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName
$continuousExportContainer = Get-AzureStorageContainer -Context $storage.Context -Name $continuousExportContainerName
$sasToken = New-AzureStorageContainerSASToken -Name testcontainer -Context $storage.Context -ExpiryTime (Get-Date).AddYears(50) -Permission "rwdl"
$sasUri = $continuousExportContainer.CloudBlobContainer.Uri.AbsoluteUri + $sasToken
$defaultLocation = Get-DataCenterLocation us AppInsights
New-AzureRmApplicationInsightsContinuousExport -ResourceGroupName $resourceGroupName -Name $appInsightsName -DocumentType $docTypes -StorageAccountId $storage.Id -StorageLocation $defaultLocation -StorageSASUri $sasUri
When running the script and checking the portal, I can see it was created.
The problem:
The script turned on Request and Exception (supplied by me for the $docTypes parameter), but neither the storage location nor the storage container was set up properly.
I'm not sure what is happening here.
This is by design (even though I don't know why; it is weird).
Even when you manually create the continuous export through the UI in the Azure portal, you see the same behavior. But it works, and data will be sent to the storage container you defined.
And as far as I know, you can use the PowerShell cmdlet Get-AzApplicationInsightsContinuousExport to check the storage container / storage location.
Sample PowerShell code:
$s = Get-AzApplicationInsightsContinuousExport -ResourceGroupName your_resourceGroupName -Name your_app_insights_name
# get the storage container name
$s.ContainerName
# get the Storage location name
$s.DestinationStorageLocationId
# get the storage account name
$s.StorageName

How do I clone an Azure Managed Disk into a different subscription?

Using Azure VMs and managed disks (using the ARM deployment model), I have recently run into the following problem I would like to solve: In order to get production data out from a managed disk for testing purposes, I would like to clone a production data disk from the "Production Subscription" into a managed disk in the "Development Subscription", where I can play around with the data in a safe way.
We are talking about quite a lot of data (200 GB+), so an actual "copying" process would take far too much time. I want to be able to automate things and provision new environments in, let's say, under half an hour.
Cloning a managed disk within a subscription (given it is in the same region) is very simple and fast; I just have to specify a --source to the az disk create command. This obviously does not work across subscriptions, at least because the logged-in user/service principal for the development subscription does not have access to the production subscription's resources.
What I have tried so far:
Using az disk grant-access to retrieve a SAS URI for the managed disk; this is not accepted as a --source for az disk create, though (it says VHD SAS links would work...).
Any ideas?
I did this:
$RG = "youresourcegroup"
$Location = "West US 2"
$StorageAccName = "yourstorage"
$SkuName = "Standard_LRS"
$Containername = "images"
$Destdiskname = "yourblob.vhd"
$SourceSASurl = "https://yoursaasurl"
Login-AzureRmAccount
New-AzureRmResourceGroup -Name $RG -Location $Location
New-AzureRmStorageAccount -ResourceGroupName $RG -Name $StorageAccName -SkuName $SkuName -kind Storage -Location $Location
$Storageacccountkey = Get-AzureRmStorageAccountKey -ResourceGroupName $RG -Name $StorageAccName
$Storagectx = New-AzureStorageContext -StorageAccountName $StorageAccName -StorageAccountKey $Storageacccountkey[0].Value
$Targetcontainer = New-AzureStorageContainer -Name $Containername -Context $storagectx -Permission Blob
# If you generated the SAS with Grant-AzureRmDiskAccess in the production subscription,
# use its AccessSAS property instead: $SourceSASurl = $mdiskURL.AccessSAS
$ops = Start-AzureStorageBlobCopy -AbsoluteUri $SourceSASurl -DestBlob $Destdiskname -DestContainer $Containername -DestContext $Storagectx
Get-AzureStorageBlobCopyState -Container $Containername -Blob $Destdiskname -Context $Storagectx -WaitForComplete
After this you will have a copy of the managed disk in your subscription, stored as a regular blob.
Be careful: you should obtain the SAS URL from the production subscription, but in the script you should log in to the development subscription.
Next you can go to the Azure portal and convert the blob to a managed disk (or script it, as sketched after the steps below).
Go to Azure portal --> More Services --> Disks or directly browse this URL https://portal.azure.com/#create/Microsoft.ManagedDisk-ARM
Click +Add
Select source as storage blob
Select your VHD using the source blob field.
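For reference, a minimal PowerShell sketch of that conversion step, reusing the variables from the copy script above (the blob URI construction and the disk name "cloned-data-disk" are assumptions; newer module versions may also require a storage account id for the Import option):
# Build the blob URI of the copied VHD and import it as a managed disk
$vhdUri = (Get-AzureRmStorageAccount -ResourceGroupName $RG -Name $StorageAccName).PrimaryEndpoints.Blob + "$Containername/$Destdiskname"
$diskConfig = New-AzureRmDiskConfig -Location $Location -CreateOption Import -SourceUri $vhdUri
New-AzureRmDisk -ResourceGroupName $RG -DiskName "cloned-data-disk" -Disk $diskConfig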
Here's the script I wrote to migrate all managed disks for each VM from one subscription to another. I hope this helps you.
# This script will get ALL VMs in a subscription and then migrate the disks
# if the VM has managed disks
# Created by Joey Brakefield -- #kfprugger & https://www.linkedin.com/in/joeybrakefield/

# Set global variables
$sourceSubscriptionId = '6a1b5e5e-df06-4608-a7a2-6984f7abacd8'
Select-AzureRmSubscription -SubscriptionId $sourceSubscriptionId
$vms = Get-AzureRmVM
$targetSubscriptionId = '929e0340-bf36-45a2-8347-47f86b4715de'

# Looping logic for each of the VMs that have managed disks
foreach ($vm in $vms) {
    Select-AzureRmSubscription -SubscriptionId $sourceSubscriptionId
    $vmrg = Get-AzureRmResourceGroup -Name $vm.ResourceGroupName
    $vmname = $vm.Name
    Write-Host "Working with: " $vmname " in " $vmrg.ResourceGroupName -ForegroundColor Green
    Write-Host ""

    # This command will only target managed disks because unmanaged disks use the
    # storage account locations rather than the /disks provider URIs
    if (Get-AzureRmDisk | ? {$_.OwnerId -like "/subscriptions/" + $sourceSubscriptionId + "/resourceGroups/" + $vmrg.ResourceGroupName + "/providers/Microsoft.Compute/virtualMachines/" + $vm.Name})
    {
        # Sanity check
        # Read-Host "Look correct? If not, CTRL-C to break"
        $manageddisk = Get-AzureRmDisk | ? {$_.OwnerId -like "/subscriptions/" + $sourceSubscriptionId + "/resourceGroups/" + $vmrg.ResourceGroupName + "/providers/Microsoft.Compute/virtualMachines/" + $vm.Name}
        Select-AzureRmSubscription -SubscriptionId $targetSubscriptionId

        # Check to see if the resource group exists in the new CSP/subscription
        Get-AzureRmResourceGroup -Name $vmrg.ResourceGroupName -ev notPresent -ea 0
        Write-Host "Checking to see if" $vmrg.ResourceGroupName "exists in subscription" $targetSubscriptionId -ForegroundColor Cyan
        Write-Host ""
        if ($notPresent)
        {
            New-AzureRmResourceGroup -Name $vmrg.ResourceGroupName -Location $vmrg.Location
            "Resource Group " + $vmrg.ResourceGroupName + " has been created"
        } else {"Resource Group " + $vmrg.ResourceGroupName + " already exists"}

        # Move the disks after all checks are done
        foreach ($disk in $manageddisk) {
            $managedDiskName = $disk.Name
            $targetResourceGroupName = $vmrg.ResourceGroupName
            $diskConfig = New-AzureRmDiskConfig -SourceResourceId $disk.Id -Location $disk.Location -CreateOption Copy
            New-AzureRmDisk -Disk $diskConfig -DiskName $disk.Name -ResourceGroupName $targetResourceGroupName
        }
    }
}
You can use the following commands in the classic Azure CLI:
# Source storage account name
STORAGE1=sourcestorage
# Security key of the source storage account
STORAGEKEY1=SampleKey0qNzttE/EX3hHfcFIzkQQmqXklRU2Z2uANICw==
# Container containing the source VHD
CONTAINER1=sourcevhds
# Name of VHD to be copied (name only, not full URL)
DISK=DiskToBeCopied.vhd
# Specify the same properties for the target
STORAGE2=targetstorage
STORAGEKEY2=SampleKeyAb6FYP3EqFVEcN2cc5wOQHzXvdc7Gzh1qRt0FXKq6w==
CONTAINER2=targetvhds
After setting the above parameters, execute the following command:
azure storage blob copy start --account-name $STORAGE1 --account-key $STORAGEKEY1 --source-container $CONTAINER1 --source-blob $DISK --dest-account-name $STORAGE2 --dest-account-key $STORAGEKEY2 --dest-container $CONTAINER2

How to export an AzureVM config using PowerShell from the portal

I am trying to restore a VM (http://blogs.technet.com/b/keithmayer/archive/2014/02/04/step-by-step-perform-cloud-restores-of-windows-azure-virtual-machines-using-powershell-part-2.aspx), which requires exporting the VM config before deleting the VM. Now I am trying to achieve all of this through a runbook.
Export-AzureVM saves the config details to a file on the local machine when run from Windows PowerShell on that machine. Since I am running this in the Azure portal, is there a way to save the config file from Azure PowerShell?
EDIT:
This is working as expected in the Azure portal, but I am not sure where the C: drive it writes to comes from.
$exportFolder = "C:\ExportVMs"
New-Item -Path $exportFolder -ItemType Directory
$exportPath = $exportFolder + "\" + $vm.Name + ".xml"
$vm | Export-AzureVM -Path $exportPath
Output:
    Directory: C:\
Mode          LastWriteTime      Length  Name       PSComputerName
----          -------------      ------  ----       --------------
d----         8/20/2015 2:15 PM          ExportVMs  localhost
You have to copy your exported files to a place that you can access from anywhere. Of course, you can choose from all the file storage services on the internet. My first approach would be to copy those exported files to an Azure Storage account.
Here is some sample PowerShell code which shows how to copy files to an Azure Storage account. That sample even creates a new Storage account. If you already have an Azure Storage account to use, just remove the lines that create a new one.
# Used settings
$subscriptionName = "Your Subscription Name" # Get-AzureSubscription
$location = "West Europe" # Get-AzureLocation
$storageAccountName = "mystorageaccount123"
# Create a storage account and set it as the current one.
New-AzureStorageAccount -Location $location -StorageAccountName $storageAccountName -Type Standard_LRS
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccountName $storageAccountName
$container = "exportedfiles"
# Create destination container in storage if it does not exist.
$containerList = Get-AzureStorageContainer -Name $container -ErrorAction Ignore # Ignore error if container not found.
if ($containerList.Length -eq 0) {
New-AzureStorageContainer -Name $container -Permission Off
}
$exportedFile = "C:\file.xml" # The path where you exported your file.
# Upload the exported file
Set-AzureStorageBlobContent -Container $container -File $exportedFile -Force
I took the sample code from my own GitHub Gist and adapted it to your question. You can find the whole sample here, if you like.
If you need a tool to access an Azure Storage account, see this list. There are some good tools. I personally use ClumsyLeaf CloudXplorer.
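To complete the restore flow later, you could download the exported config from the same container and feed it to Import-AzureVM. A minimal sketch, assuming the container and file names from the snippet above and a placeholder cloud service name:
# Download the exported config from blob storage and recreate the VM from it
Get-AzureStorageBlobContent -Container $container -Blob "file.xml" -Destination "C:\file.xml" -Force
Import-AzureVM -Path "C:\file.xml" | New-AzureVM -ServiceName "<your cloud service name>"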
