How do I remove a property from Azure Table Storage with PowerShell?

I tried this code to remove property from table:
$propertiesToRemove = @("Prop1", "Prop2")
$row = Get-AzTableRow -Table $cloudTable -PartitionKey "PK" -RowKey $id
$propertiesToRemove | %{ $row.PSObject.Properties.Remove($_) }
Update-AzTableRow -Table $cloudTable -entity $row
but it leaves them untouched. How do I do this with powershell?

Assuming you're using AzureRmStorageTable, I don't think it is possible to remove properties this way, because Update-AzTableRow performs an InsertOrMerge operation rather than a Replace or InsertOrReplace operation.
From the source code:
return ($Table.Execute([Microsoft.Azure.Cosmos.Table.TableOperation]::InsertOrMerge($updatedEntity)))
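If you actually need the properties removed from the stored entity, one possible workaround (a sketch, not tested here, that assumes the Microsoft.Azure.Cosmos.Table types bundled with the module are loaded) is to rebuild the entity yourself and execute a Replace operation, since Replace overwrites the stored row rather than merging into it:
$propertiesToRemove = @("Prop1", "Prop2")
$row = Get-AzTableRow -Table $cloudTable -PartitionKey "PK" -RowKey $id
$propertiesToRemove | ForEach-Object { $row.PSObject.Properties.Remove($_) }
# Rebuild a DynamicTableEntity containing only the remaining properties
$entity = New-Object Microsoft.Azure.Cosmos.Table.DynamicTableEntity($row.PartitionKey, $row.RowKey)
$entity.ETag = $row.Etag
foreach ($prop in $row.PSObject.Properties) {
    if ($prop.Name -notin 'PartitionKey','RowKey','TableTimestamp','Etag') {
        $entity.Properties.Add($prop.Name, [Microsoft.Azure.Cosmos.Table.EntityProperty]::CreateEntityPropertyFromObject($prop.Value))
    }
}
# Replace overwrites the entity in the table, so the removed properties disappear
$cloudTable.Execute([Microsoft.Azure.Cosmos.Table.TableOperation]::Replace($entity))
Alternatively, deleting the row with Remove-AzTableRow and re-adding it with Add-AzTableRow, minus the unwanted properties, has the same effect.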

Related

Query Multi-Domain for SamAccountName using PowerShell

I'm trying to populate an employee ID column in a CSV file by querying Active Directory against another column in the CSV file called "OwnerEmail" (which is the user principal name). The problem is that users in the OwnerEmail column do not all belong to the same domain. How can I query 2 domains at one time?
Table for reference:

Employee ID | OwnerEmail      | DeptNumber | Costs
            | test@mail.com   | 0894       | 4654.45
            | test2@mail.com  | 3453       | 4994.15
This is what I have tried so far. The script isn't working and there are no error messages. Any ideas?
$Domains = 'us.domain.corp', 'uk.domain.corp'
$CSVImport | Select-Object @{
    Name = "employeeID"
    Expression = {
        foreach($user in $CSVImport)
        {
            foreach($Domain in $Domains){
                $user = (Get-ADUser -Filter "UserPrincipalName -eq '$($_.OwnerEmail)'" -Server $Domain -Properties 'SamAccountName').SamAccountName
            }
        }
    }}, * | Select-Object employeeID, DepartmentNumber, OwnerEmail, @{Name="Costs"; Expression={"$ $($_.Cost)"}} | Export-Csv "$Env:temp/$OutputFile" -NoTypeInformation
How can I query 2 domains at one time?
There is no need to do this. You could query both domains at once with multithreading, but that seems like overkill. What I would recommend is to query all users at once, per domain; the code below may look complicated but should be pretty efficient. See the inline comments for details.
# Import the Csv
$CSVImport = Import-Csv path\to\thecsv.csv

# Create a LDAP Filter to query all users at once
# This filter would look like this for example:
# (|(userPrincipalName=test@mail.com)(userPrincipalName=test2@mail.com))
$filter = "(|"
foreach($email in $CSVImport.OwnerEmail) {
    if(-not [string]::IsNullOrWhiteSpace($email)) {
        $filter += "(userPrincipalName=$email)"
    }
}
$filter += ")"

# For each Domain, use the same filter and get all existing users
'us.domain.corp', 'uk.domain.corp' | ForEach-Object { $map = @{} } {
    foreach($user in Get-ADUser -LDAPFilter $filter -Server $_) {
        # and store them in a hashtable where
        # the Keys are their `UserPrincipalName`
        # and the Values are the attribute of interest (`SamAccountName`)
        $map[$user.UserPrincipalName] = $user.SamAccountName
    }
}

# Now we can simply use a calculated property with `Select-Object`
$CSVImport | Select-Object @{N='EmployeeID'; E={ $map[$_.OwnerEmail] }}, * |
    Export-Csv "$Env:temp/$OutputFile" -NoTypeInformation

Need to apply an if condition based on a check in Powershell

I am new to PowerShell. I am getting the details of the Azure Data Factory linked services, but after the get I need to use contains to check whether an element exists. In Python I would just check if a string is in a list, but in PowerShell I'm not quite sure how. Please check the code below.
$output = Get-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName "xxxxxxxx" | Format-List
The output of the above is (sample output given below):
LinkedServiceName : abcdef
ResourceGroupName : ghijk
DataFactoryName : lmnopq
Properties : Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService
So now I try to do this:
if ($output.Properties -contains "Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService") {
Write-Output "test output"
}
But $output.Properties gives us the properties of that json.
I need to check if "Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService" exists in output variable and perform the required operations. Please help me on this.
The -contains operator requires a collection and an element. Here's a basic example of its proper use:
$collection = @(1,2,3,4)
$element1 = 5
$element2 = 3
if ($collection -contains $element1) {'yes'} else {'no'}
if ($collection -contains $element2) {'yes'} else {'no'}
What you've done is ask PowerShell to look in an object that isn't a collection for a [string] element whose value equals the type name of that same object.
What you need to do is inspect this object:
$output.Properties | format-list *
Then once you figure out what needs to be present inside of it, create a new condition.
$output.Properties.something -eq 'some string value'
...assuming that your value is a string, for example.
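For this specific case, a minimal sketch (assuming $output is captured without the trailing | Format-List, since Format-List replaces the output objects with formatting data) might check the type name of the Properties object instead:
$output = Get-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName "xxxxxxxx"
# Keep only the linked services whose Properties object is the Databricks linked service type
$databricks = $output | Where-Object {
    $_.Properties.GetType().FullName -eq 'Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService'
}
if ($databricks) {
    Write-Output "test output"
}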
I would recommend watching some beginner tutorials.

Azure Portal - Assign tags to Resource Groups using Powershell and reading data from Excel spreadsheet or CSV

I am looking to apply a new set of tags to a number of Azure Resource Groups. The tags must adhere to a naming convention that is a concatenation of several Microsoft Excel column values, read row by row for each Resource Group.
For example, let's use the below as an example of the first 6 columns:
Column A: Azure Subscription Name
Column B: Resource Group Name
Column C: Business Unit
Column D: Cost Center
Column E: Project Code
Column F: Service or Application Name
The proposed tag will then be a concatenation of for example:
[Column A] - [Column B] - [Column C] - [Column E] - [Column F]
The resultant tag for the first row should therefore look like the below example:
"Subscription 1 : RG-01 : BU-A1 : 1001 : WebApp-1"
Powershell would probably be the preferred solution to complete this task and I'd welcome any suggestions or ideas on how to achieve this.
Updated answer 0406:
The script below works for the multi-row scenario:
Note: to use the Set-AzContext cmdlet, please install the Az.Accounts 2.2.7 module.
#load the .csv file
$file_path = "D:\test\t1.csv"
#define the tag name, you can change it as per your need
$tag_name="mytag111"
#loop the values in the .csv file
Import-Csv -Path $file_path | ForEach-Object{
    #define a tag value with empty string
    $tag_value=""
    #define a variable for subscription name
    $subscription_name = ""
    #define a variable for resource group name
    $resource_group_name = ""
    foreach($property in $_.PSObject.Properties)
    {
        #here, get the subscription name from the csv file
        if($property.name -eq "Az Sub Name"){
            $subscription_name = $property.Value
        }
        #here, get the resource group name from the csv file
        if($property.name -eq "RG Name"){
            $resource_group_name = $property.Value
        }
        #exclude the "Cost Ctr" column
        if(!($property.name -eq "Cost Ctr"))
        {
            #then we loop all the values from each row, and then concatenate them
            #here, we just don't want to add the colon(:) at the end of the value from "Svc/App" column
            if(!($property.name -eq "Svc/App"))
            {
                $tag_value += $property.Value + " : "
            }
            else
            {
                $tag_value += $property.Value
            }
        }
    }
    #change the context as per different subscription
    Set-AzContext -Subscription $subscription_name
    #get the existing tags for the resource group
    $tags = (Get-AzResourceGroup -Name $resource_group_name).Tags
    #add the new tags to the existing tags, so the existing tags will not be removed
    $tags += @{$tag_name=$tag_value}
    #set the tags
    Set-AzResourceGroup -Name $resource_group_name -Tag $tags
}
"completed********"
Here is a hypothetical sample of the test data (the column names follow the question and the script above; the values are placeholders):
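"Az Sub Name","RG Name","Business Unit","Cost Ctr","Project Code","Svc/App"
"Subscription 1","RG-01","BU-A1","CC-123","1001","WebApp-1"
With a row like that, the script builds the tag value "Subscription 1 : RG-01 : BU-A1 : 1001 : WebApp-1", which matches the example in the question.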
The scripts work fine as per my testing, and please let me know if you have any issues.
Original answer:
Please correct me if I misunderstood you.
I wrote some simple code to read data from a .csv file and then add tags to the resource group.
Note that in my testing there is only one row of data in the .csv file and the resource group name is hard-coded; please feel free to modify the code to meet your requirements.
And to use the Set-AzResourceGroup cmdlet, you should make sure the Az.Resources module is installed.
The powershell code:
#load the .csv file
$file_path = "D:\test\t1.csv"
#define the tag name, you can change it as per your need
$tag_name="mytag111"
#define a tag value with empty string
$tag_value=""
#loop the values in the .csv file
Import-Csv -Path $file_path | ForEach-Object{
    foreach($property in $_.PSObject.Properties)
    {
        #exclude the "Cost Ctr" column
        if(!($property.name -eq "Cost Ctr"))
        {
            #then we loop all the values from each row, and then concatenate them
            #here, we just don't want to add the colon(:) at the end of the value from "Svc/App" column
            if(!($property.name -eq "Svc/App"))
            {
                $tag_value += $property.Value + " : "
            }
            else
            {
                $tag_value += $property.Value
            }
        }
    }
    #get the existing tags for the resource group
    $tags = (Get-AzResourceGroup -Name yyrg11).Tags
    #add the new tags to the existing tags, so the existing tags will not be removed
    $tags += @{$tag_name=$tag_value}
    #set the tags
    Set-AzResourceGroup -Name yyrg11 -Tag $tags
}
"completed********"
Test result: in the Azure portal, the tag is added to the resource group.

Powershell runspace output behaves differently depending on how returning custom object is defined

I am experimenting with Powershell runspaces and have noticed a difference in how output is written to the console depending on where I create my custom object. If I create the custom object directly in my script block, the output is written to the console in a table format. However, the table appears to be held open while the runspace pool still has open threads, i.e. it creates a table but I can see the results from finished jobs being appended dynamically to the table. This is the desired behavior. I'll refer to this as behavior 1.
The discrepancy occurs when I add a custom module to the runspace pool and then call a function contained in that module, which then creates a custom object. This object is printed to the screen in a list format for each returned object. This is not the desired behavior. I'll call this behavior 2.
I have tried piping the output from behavior 2 to Format-Table, but this just creates a new table for each returned object. I can achieve the desired effect somewhat by using Write-Host to print a line of the object values, but I don't think this is appropriate, considering there seems to be a built-in behavior that can achieve my desired result if I can understand it.
My thoughts on the matter are that it has something to do with the asynchronous behavior of the runspace. I'm new to PowerShell, but perhaps when the custom object comes directly from the script block there is a hidden method or type declaration telling PowerShell to hold the table open and wait for results? This would be overridden when using the second technique because it's coming from my custom function?
I would like to understand why this is occurring and how I can achieve behavior 1 while still using the custom module, which will eventually be very large. I'm open to a different technique as well, so long as it's possible to essentially see the table of outputs grow as jobs finish. The code used is below.
$ISS = [InitialSessionState]::CreateDefault()
[void]$ISS.ImportPSModule(".\Modules\Test-Item.psm1")
$Pool = [RunspaceFactory]::CreateRunspacePool(1, 5, $ISS, $Host)
$Pool.Open()
$Runspaces = @()

# Script block to run code in
$ScriptBlock = {
    Param ( [string]$Server, [int]$Count )
    Test-Server -Server $Server -Count $Count
    # Uncomment the three lines below and comment out the two
    # lines above to test behavior 1.
    #[int] $SleepTime = Get-Random -Maximum 4 -Minimum 1
    #Start-Sleep -Seconds $SleepTime
    #[pscustomobject]@{Server=$Server; Count=$Count;}
}

# Create runspaces and assign to runspace pool
1..10 | ForEach-Object {
    $ParamList = @{
        Server = "Server A"
        Count  = $_
    }
    $Runspace = [PowerShell]::Create()
    [void]$Runspace.AddScript($ScriptBlock)
    [void]$Runspace.AddParameters($ParamList)
    $Runspace.RunspacePool = $Pool
    $Runspaces += [PSCustomObject]@{
        Id     = $_
        Pipe   = $Runspace
        Handle = $Runspace.BeginInvoke()
        Object = $Object
    }
}

# Check for things to be finished
while ($Runspaces.Handle -ne $null)
{
    $Completed = $Runspaces | Where-Object { $_.Handle.IsCompleted -eq $true }
    foreach ($Runspace in $Completed)
    {
        $Runspace.Pipe.EndInvoke($Runspace.Handle)
        $Runspace.Handle = $null
    }
    Start-Sleep -Milliseconds 100
}
$Pool.Close()
$Pool.Dispose()
The custom module I'm using is as follows.
function Test-Server {
    Param ([string]$Server, [int]$Count )
    [int] $SleepTime = Get-Random -Maximum 4 -Minimum 1
    Start-Sleep -Seconds $SleepTime
    [pscustomobject]@{Server = $Server; Item = $Count}
}
What you have mentioned sounds completely normal to me. That is how PowerShell is designed: it shares the burden of display, and if the user has not specified how to display output, PowerShell decides how to.
I couldn't reproduce your issue with the code provided, but I think this will solve your problem.
$FinalTable = foreach ($Runspace in $Completed)
{
    $Runspace.Pipe.EndInvoke($Runspace.Handle)
    $Runspace.Handle = $null
}
$FinalTable will now have the table format you expect.
It appears that my primary issue, aside from errors in my code, was a lack of understanding of PowerShell's default object formatting. PowerShell displays objects as a table when they have four or fewer properties and as a list when they have five or more.
The custom object returned by my test module had more than four key-value pairs, while the custom object I returned directly only had two. This resulted in what I thought was odd behavior. I compounded the issue by removing some key-value pairs in my posted code to shorten it and then didn't test it (sorry).
This Stack Overflow post has a lengthy answer explaining the behavior and providing examples for changing the default output.
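As a rough illustration (a sketch reusing the $Runspaces collection and the Test-Server module function from the question), piping the streamed output of the collection loop through Format-Table is one way to keep a single, growing table even when the returned objects have many properties:
# Wrap the collection loop in a script block so its output streams into Format-Table,
# which prints the header once and then appends a row as each runspace finishes
& {
    while ($Runspaces.Handle -ne $null)
    {
        foreach ($Runspace in ($Runspaces | Where-Object { $_.Handle.IsCompleted }))
        {
            $Runspace.Pipe.EndInvoke($Runspace.Handle)   # emits the [pscustomobject] result
            $Runspace.Handle = $null
        }
        Start-Sleep -Milliseconds 100
    }
} | Format-Table Server, Item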

Powershell 2.0: how to -match or -like a string in hash table keys

For Powershell 2.0:
I have a hash table with several strings as keys. Unlike @{}.ContainsKey, is it possible to find a key (e.g., "examplekey") using wildcards (e.g., "*xampl*")?
I managed to accomplish what I wanted by making a list of the keys and using Where-Object as a filter. But is there a simpler way to do it? I think this method is especially bad when I'm adding new keys, because I need to recreate the list every time.
Use the .Keys property with the -like operator; applied to a collection it returns the matching keys (an array, or a single key), so negate the result to test that no key matches:
if (-not ($hash.Keys -like '*xampl*')) {
    $hash.example = 1
}
Store the keys in an array for multiple checks:
$keys = $hash.Keys
if (-not ($keys -like '*xampl*')) {
    $hash.example = 1
}
if (-not ($keys -like '*foo*')) {
    $hash.example = 1
}
Combine the checks:
if (-not ($hash.Keys -like '*xampl*') -and -not ($hash.Keys -like '*123*')) {
    $hash.example = 1
}
Or use regexp in case there are lots of keys and you want to perform lots of checks:
if ($hash.keys -join "`n" -match '(?mi)xampl|foo|bar|^herp\d+|\wDerp$|^and$|\bso\b|on') {
echo 'Already present'
} else {
$hash.foo123 = 'bar'
# ......
}
(?mi) means multiline case-insensitive mode: each key is tested individually.
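And if you need the matching keys themselves rather than just a yes/no answer, the same collection behavior returns them directly (a small sketch with hypothetical key names):
$hash = @{ example = 1; sample = 2; other = 3 }
# -like applied to the key collection returns every key that matches the wildcard
$matching = $hash.Keys -like '*ampl*'
foreach ($key in $matching) { "$key = $($hash[$key])" }   # e.g. example = 1, sample = 2 (order may vary)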
