Apparently, in PowerShell (ver. 3) not all $null's are the same:
>function emptyArray() { @() }
>$l_t = @() ; $l_t.Count
0
>$l_t1 = @(); $l_t1 -eq $null; $l_t1.count; $l_t1.gettype()
0
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
>$l_t += $l_t1; $l_t.Count
0
>$l_t += emptyArray; $l_t.Count
0
>$l_t2 = emptyArray; $l_t2 -eq $null; $l_t2.Count; $l_t2.gettype()
True
0
You cannot call a method on a null-valued expression.
At line:1 char:38
+ $l_t2 = emptyArray; $l_t2 -eq $null; $l_t2.Count; $l_t2.gettype()
+ ~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull
>$l_t += $l_t2; $l_t.Count
0
>$l_t3 = $null; $l_t3 -eq $null;$l_t3.gettype()
True
You cannot call a method on a null-valued expression.
At line:1 char:32
+ $l_t3 = $null; $l_t3 -eq $null;$l_t3.gettype()
+ ~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull
>$l_t += $l_t3; $l_t.count
1
>function addToArray($l_a, $l_b) { $l_a += $l_b; $l_a.count }
>$l_t = @(); $l_t.Count
0
>addToArray $l_t $l_t1
0
>addToArray $l_t $l_t2
1
So how and why is $l_t2 different from $l_t3? In particular, is $l_t2 really $null or not? Note that $l_t2 is NOT an empty array ($l_t1 is, and $l_t1 -eq $null returns nothing, as expected), but neither is it truly $null, like $l_t3. In particular, $l_t2.count returns 0 rather than an error, and furthermore, adding $l_t2 to $l_t behaves like adding an empty array, not like adding $null. And why does $l_t2 suddenly seem to become "more $null" when it gets passed into the function addToArray as a parameter?
Can anyone explain this behaviour, or point me to documentation that would explain it?
Edit:
The answer by PetSerAl below is correct. I have also found this Stack Overflow post on the same issue.
PowerShell version info:
>$PSVersionTable
Name Value
---- -----
WSManStackVersion 3.0
PSCompatibleVersions {1.0, 2.0, 3.0}
SerializationVersion 1.1.0.1
BuildVersion 6.2.9200.16481
PSVersion 3.0
CLRVersion 4.0.30319.1026
PSRemotingProtocolVersion 2.2
In particular, is $l_t2 really $null or not?
$l_t2 is not $null, but [System.Management.Automation.Internal.AutomationNull]::Value. It is a special instance of PSObject. It is returned when a pipeline returns zero objects. This is how you can check it:
$a=&{} #shortest, I know, pipeline, that returns zero objects
$b=[System.Management.Automation.Internal.AutomationNull]::Value
$ReferenceEquals=[Object].GetMethod('ReferenceEquals')
$ReferenceEquals.Invoke($null,($a,$null)) #returns False
$ReferenceEquals.Invoke($null,($a,$b)) #returns True
I call ReferenceEquals through Reflection to prevent the conversion from AutomationNull to $null by PowerShell.
$l_t1 -eq $null returns nothing
For me it returns an empty array, as I expect from it.
$l_t2.count returns 0
It is a new feature of PowerShell v3:
You can now use Count or Length on any object, even if it didn't have the property. If the object didn't have a Count or Length property, it will return 1 (or 0 for $null). Objects that have Count or Length properties will continue to work as they always have.
PS> $a = 42
PS> $a.Count
1
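And, correspondingly, for the $null case mentioned in the quote (a quick check, PSv3+):
PS> $null.Count
0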
And why does $l_t2 suddenly seem to become "more $null" when it gets passed into the function addToArray as a parameter?
It seems that PowerShell converts AutomationNull to $null in some cases, like calling .NET methods. In PowerShell v2, even when saving AutomationNull to a variable it gets converted to $null.
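You can see this conversion directly: when ReferenceEquals is called as a regular .NET method rather than through reflection, PowerShell converts the AutomationNull argument to $null during parameter binding, so the two are reported as identical:
$a = & {}                              # AutomationNull
[object]::ReferenceEquals($a, $null)   # returns True, because the argument was converted to $null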
To complement PetSerAl's great answer with a pragmatic summary:
Commands that happen to produce no output do not return $null, but the [System.Management.Automation.Internal.AutomationNull]::Value singleton,
which can be thought of as an "array-valued $null" or, to coin a term, null enumeration. It is sometimes also called "AutomationNull", for its type name.
Note that, due to PowerShell's automatic enumeration of collections, even a command that explicitly outputs an empty collection object such as @() has no output (unless enumeration is explicitly prevented, such as with Write-Output -NoEnumerate).
In short, this special value behaves like $null in scalar contexts, and like an empty array in enumeration contexts, notably in the pipeline, as the examples below demonstrate.
Given that $null and the null enumeration situationally behave differently, distinguishing between the two may be necessary, which is currently far from trivial; GitHub issue #13465 proposes implementing a test that would allow you to use $someValue -is [AutomationNull].
As of PowerShell 7.3.0, the following obscure test is required:
$null -eq $someValue -and $someValue -is [psobject]
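For example, applied to values held in variables (the test must be applied in place; wrapping it in a function would not work, because parameter binding converts the value to a true $null, as the caveats below explain):
$automationNull = & {}   # no output -> AutomationNull
$trueNull = $null
$null -eq $automationNull -and $automationNull -is [psobject]   # -> $true
$null -eq $trueNull -and $trueNull -is [psobject]               # -> $false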
Caveats:
Passing [System.Management.Automation.Internal.AutomationNull]::Value as a cmdlet / function parameter value invariably converts it to $null.
See GitHub issue #9150.
In PSv3+, even an actual (scalar) $null is not enumerated in a foreach loop; it is enumerated in a pipeline, however - see bottom.
In PSv2-, saving a null enumeration in a variable quietly converted it to $null and $null was enumerated in a foreach loop as well (not just in a pipeline) - see bottom.
# A true $null value:
$trueNull = $null
# An operation with no output returns
# the [System.Management.Automation.Internal.AutomationNull]::Value singleton,
# which is treated like $null in a scalar expression context,
# but behaves like an empty array in a pipeline or array expression context.
$automationNull = & {} # calling (&) an empty script block ({}) produces no output
# In a *scalar expression*, [System.Management.Automation.Internal.AutomationNull]::Value
# is implicitly converted to $null, which is why all of the following commands
# return $true.
$null -eq $automationNull
$trueNull -eq $automationNull
$null -eq [System.Management.Automation.Internal.AutomationNull]::Value
& { param($param) $null -eq $param } $automationNull
# By contrast, in a *pipeline*, $null and
# [System.Management.Automation.Internal.AutomationNull]::Value
# are NOT the same:
# Actual $null *is* sent as data through the pipeline:
# The (implied) -Process block executes once.
$trueNull | % { 'input received' } # -> 'input received'
# [System.Management.Automation.Internal.AutomationNull]::Value is *not* sent
# as data through the pipeline, it behaves like an empty array:
# The (implied) -Process block does *not* execute (but -Begin and -End blocks would).
$automationNull | % { 'input received' } # -> NO output; effectively like: @() | % { 'input received' }
# Similarly, in an *array expression* context
# [System.Management.Automation.Internal.AutomationNull]::Value also behaves
# like an empty array:
(@() + $automationNull).Count # -> 0 - contrast with (@() + $trueNull).Count, which returns 1.
# CAVEAT: Passing [System.Management.Automation.Internal.AutomationNull]::Value to
# *any parameter* converts it to actual $null, whether that parameter is an
# array parameter or not.
# Passing [System.Management.Automation.Internal.AutomationNull]::Value is equivalent
# to passing true $null or omitting the parameter (by contrast,
# passing @() would result in an actual, empty array instance).
& { param([object[]] $param)
[Object].GetMethod('ReferenceEquals').Invoke($null, @($null, $param))
} $automationNull # -> $true; would be the same with $trueNull or no argument at all.
The [System.Management.Automation.Internal.AutomationNull]::Value documentation states:
Any operation that returns no actual value should return AutomationNull.Value.
Any component that evaluates a Windows PowerShell expression should be prepared to deal with receiving and discarding this result. When received in an evaluation where a value is required, it should be replaced with null.
PSv2 vs. PSv3+, and general inconsistencies:
PSv2 offered no distinction between [System.Management.Automation.Internal.AutomationNull]::Value and $null for values stored in variables:
Using a no-output command directly in a foreach statement / pipeline did work as expected - nothing was sent through the pipeline / the foreach loop wasn't entered:
Get-ChildItem nosuchfiles* | ForEach-Object { 'hi' }
foreach ($f in (Get-ChildItem nosuchfiles*)) { 'hi' }
By contrast, if a no-output command was saved in a variable or an explicit $null was used, the behavior was different:
# Store the output from a no-output command in a variable.
$result = Get-ChildItem nosuchfiles* # PSv2-: quiet conversion to $null happens here
# Enumerate the variable.
$result | ForEach-Object { 'hi1' }
foreach ($f in $result) { 'hi2' }
# Enumerate a $null literal.
$null | ForEach-Object { 'hi3' }
foreach ($f in $null) { 'hi4' }
PSv2: all of the above commands output a string starting with hi, because $null is sent through the pipeline / being enumerated by foreach:
Unlike in PSv3+, [System.Management.Automation.Internal.AutomationNull]::Value is converted to $null on assigning to a variable, and $null is always enumerated in PSv2.
PSv3+: The behavior changed in PSv3, both for better and worse:
Better: Nothing is sent through the pipeline for the commands that enumerate $result: The foreach loop is not entered, because the [System.Management.Automation.Internal.AutomationNull]::Value is preserved when assigning to a variable, unlike in PSv2.
Possibly Worse: foreach no longer enumerates $null (whether specified as a literal or stored in a variable), so that foreach ($f in $null) { 'hi4' } perhaps surprisingly produces no output.
On the plus side, the new behavior no longer enumerates uninitialized variables, which evaluate to $null (unless prevented altogether with Set-StrictMode).
Generally, however, not enumerating $null would have been more justified in PSv2, given its inability to store the null-collection value in a variable.
In summary, the PSv3+ behavior:
takes away the ability to distinguish between $null and [System.Management.Automation.Internal.AutomationNull]::Value in the context of a foreach statement
thereby introduces an inconsistency with pipeline behavior, where this distinction is respected.
For the sake of backward compatibility, the current behavior cannot be changed. This comment on GitHub proposes a way to resolve these inconsistencies for a (hypothetical) potential future PowerShell version that needn't be backward-compatible.
When you return a collection from a PowerShell function, by default PowerShell determines the data type of the return value as follows:
If the collection has more than one element, the return result is an array. Note that the data type of the return result is System.Array even if the object being returned is a collection of a different type.
If the collection has a single element, the return result is the value of that element, rather than a collection of one element, and the data type of the return result is the data type of that element.
If the collection is empty, the return result is effectively $null (strictly speaking, the "AutomationNull" value discussed in the answers above, which behaves like $null in most contexts)
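A quick sketch of the three cases (the function names are made up for illustration):
PS> function Get-Two { @(1, 2) }   # more than one element -> System.Array
PS> (Get-Two).GetType().Name
Object[]
PS> function Get-One { @(42) }     # single element -> unwrapped to that element
PS> (Get-One).GetType().Name
Int32
PS> function Get-None { @() }      # empty collection -> no return result
PS> $null -eq (Get-None)
True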
$l_t = @() assigns an empty array to $l_t.
$l_t2 = emptyArray assigns $null to $l_t2, because the function emptyArray returns an empty collection, and therefore the return result is $null.
$l_t2 and $l_t3 both test as $null, and when they are passed to the addToArray function they behave the same way: because the parameter $l_a receives your pre-declared empty array $l_t, adding either value to it with the += operator appends an element whose value is $null, so the count becomes 1.
If you want to force the function to preserve the data type of the collection object you're returning, use the comma operator:
PS> function emptyArray {,@()}
PS> $l_t2 = emptyArray
PS> $l_t2.GetType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
PS> $l_t2.Count
0
Note: The empty parentheses after emptyArray in the function declaration are superfluous. You only need parentheses after the function name if you're using them to declare parameters.
An interesting point to be aware of is that the comma operator doesn't necessarily make the return value an array.
Recall that as I mentioned in the first bullet point, by default the data type of the return result of a collection with more than one element is System.Array regardless of the actual data type of the collection. For example:
PS> $list = New-Object -TypeName System.Collections.Generic.List[int]
PS> $list.Add(1)
PS> $list.Add(2)
PS> $list.Count
2
PS> $list.GetType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True List`1 System.Object
Note that the data type of this collection is List`1, not System.Array.
However, if you return it from a function, within the function the data type of $list is List`1, but it's returned as a System.Array containing the same elements.
PS> function Get-List {$list = New-Object -TypeName System.Collections.Generic.List[int]; $list.Add(1); $list.Add(2); return $list}
PS> $l = Get-List
PS> $l.Count
2
PS> $l.GetType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
If you want the return result to be a collection of the same data type as the one within the function that you're returning, the comma operator will accomplish that:
PS> function Get-List {$list = New-Object -TypeName System.Collections.Generic.List[int]; $list.Add(1); $list.Add(2); return ,$list}
PS> $l = Get-List
PS> $l.Count
2
PS> $l.GetType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True List`1 System.Object
This isn't limited to array-like collection objects. As far as I've seen, any time PowerShell changes the data type of the object you're returning, and you want the return value to preserve the object's original data type, you can do that by preceding the object being returned with a comma. I first encountered this issue when writing a function that queried a database and returned a DataTable object. The return result was an array of hashtables instead of a DataTable. Changing return $my_datatable_object to return ,$my_datatable_object made the function return an actual DataTable object.
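A minimal sketch of that pattern (the table contents here are invented; the point is only the trailing comma):
function Get-DataTable {
    $dt = New-Object System.Data.DataTable
    [void]$dt.Columns.Add('Name', [string])
    [void]$dt.Rows.Add('example')
    return ,$dt   # the comma preserves the DataTable type
}
(Get-DataTable).GetType().Name   # -> DataTable; without the comma, the table's rows would be enumerated instead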
Related
I am new to PowerShell. I am getting the details of the Azure Data Factory linked services, and after the get I need to use -contains to check whether an element exists. In Python I would just check whether a string is in a list, but in PowerShell I'm not sure how. Please check the code below.
$output = Get-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName "xxxxxxxx" | Format-List
The sample output of the above command is:
LinkedServiceName : abcdef
ResourceGroupName : ghijk
DataFactoryName : lmnopq
Properties : Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService
So now I try to do this:
if ($output.Properties -contains "Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService") {
Write-Output "test output"
}
But $output.Properties just gives the properties of that object.
I need to check if "Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService" exists in output variable and perform the required operations. Please help me on this.
The -contains operator requires a collection and an element. Here's a basic example of its proper use:
$collection = @(1,2,3,4)
$element1 = 5
$element2 = 3
if ($collection -contains $element1) {'yes'} else {'no'}
if ($collection -contains $element2) {'yes'} else {'no'}
What you've done is ask PowerShell to look inside an object that isn't a collection for a [string] element whose value equals the type name of that same object.
What you need to do is inspect this object:
$output.Properties | format-list *
Then once you figure out what needs to be present inside of it, create a new condition.
$output.Properties.something -eq 'some string value'
...assuming that your value is a string, for example.
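For instance, here is a hedged sketch of what such a check might look like, assuming $output comes straight from Get-AzDataFactoryV2LinkedService (without the | Format-List, which would replace the data with formatting objects) and holds a single linked service:
$output = Get-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName "xxxxxxxx"
if ($output.Properties.GetType().FullName -eq 'Microsoft.Azure.Management.DataFactory.Models.AzureDatabricksLinkedService') {
    Write-Output "test output"
}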
I would recommend watching some beginner tutorials.
Let's say that I'm trying to write a Powershell function that prints a result set to an Excel worksheet, like this:
function Write-ToWorksheet {
param (
[Parameter( Position = 0, Mandatory = $true )]
[MyLibrary.MyCustomResultType[]]
$ResultSet,
[Parameter( Position = 1, Mandatory = $true )]
[Excel.Worksheet]
$Worksheet
)
# ... Implementation goes here ...
}
And let's say that I'm calling it in a way something like this:
$excel = New-Object -ComObject Excel.Application
$wb = $excel.Workbooks.Add()
$results = Get-MyResults # Never mind what this does.
Write-ToWorksheet -ResultSet $results -Worksheet $wb.Sheets[ 1 ]
And this code will almost work, except that it chokes on my type specification of [Excel.Worksheet].
I realize that it is not necessary to specify the parameter type, and that the code will work just fine without it, as this answer points out.
But to please my inner pedant, is there any way to constrain the parameter type using a reference to a COM object type like Excel.Worksheet?
The reason that PowerShell is complaining about your Excel.Worksheet type is that it's not the name of the true .NET class/interface.
The parameter type you'd need to specify is Microsoft.Office.Interop.Excel.Worksheet instead (once the Excel interop assembly has been loaded, either directly via Add-Type or after the call to New-Object -ComObject Excel.Application, as that will load the desired library too).
With that said, I don't believe this will work as intended, because of the way that PowerShell handles COM objects: it creates a transparent COM adapter layer between the underlying COM object and the variable exposed in PowerShell.
Interestingly there appear to be differences in the way that PowerShell handles parameter conversions when supplying them via Named arguments vs Positional arguments as can be seen with my demo code below:
function Get-WorksheetName {
param (
[Parameter( Position = 1, Mandatory = $true )]
[Microsoft.Office.Interop.Excel.Worksheet]
$Worksheet
)
return $Worksheet.Name
}
In my testing, calling the function using named arguments fails, whereas calling it via positional arguments works as expected.
If positional arguments aren't something you'd like to use, then another alternative would be to drop the parameter type constraint and check the type using the ValidateScript attribute instead. This still ensures type safety:
function Get-WorksheetName {
param (
[Parameter(Position = 1, Mandatory = $true)]
[ValidateScript({$_ -is [Microsoft.Office.Interop.Excel.Worksheet]})]
$Worksheet
)
return $Worksheet.Name
}
Passing a different type of object then fails the validation with an error.
The following code:
$xl = New-Object -ComObject Excel.Application
$constants = $xl.gettype().assembly.getexportedtypes() | GM
where-object {$_.IsEnum -and $_.name -eq 'constants'}
$pso = new-object psobject
[enum]::getNames($constants) | foreach { $pso | Add-Member -MemberType NoteProperty $_ ($constants::$_) }
$xlConstants = $pso
Fails at the [enum]::GetNames call with the following message from PowerShell 5.1 ISE:
Cannot convert argument "enumType", with value: "System.Object[]", for "GetNames" to type "System.Type":
"Cannot convert the "System.Object[]" value of type "System.Object[]" to type "System.Type"."
At line:9 char:1
Would be grateful for some guidance.
The code was copied from a 2010 answer to a post about extracting the Excel Enum constants.
There's an extraneous GM (Get-Member) call in your code, and the Where-Object call - which should be where GM is - is disconnected from the pipeline above (which makes it a no-op).
$constants is therefore an array of objects (output by Get-Member), and passing an array to [enum]::GetNames() fails with the error you saw.
Find a corrected version of your code below, but your problem can be solved more simply, combining the solutions from the post your code came from as shown in this answer.
Here's a corrected version of your code that also shows you a faster PSv5+ solution for creating the custom object whose properties are named for the enumeration values' symbolic names and whose property values are the enumeration values themselves.
As the linked simpler solution shows, this isn't really necessary, however.
# Get the [enum]-derived type named 'Constants' from among
# the types that the Excel interop assembly exports.
$xl = New-Object -ComObject Excel.Application
$constantsType = $xl.GetType().Assembly.GetExportedTypes() |
Where-Object { $_.IsEnum -and $_.Name -eq 'constants' }
# Construct a custom object that reflects the enum type's
# enumeration values as properties.
$xlConstants = New-Object pscustomobject
[enum]::GetNames($constantsType).ForEach({
$xlConstants.psobject.properties.Add([psnoteproperty]::new($_, $constantsType::$_))
})
If you'd rather use the raw [int] values as the property values, use [psnoteproperty]::new($_, $constantsType::$_.value__) instead.
.psobject.properties provides access to any object's properties, and the .Add() method allows creating properties. [psnoteproperty]::new() creates a note property (shown as type NoteProperty by Get-Member), i.e., a property with a static value; the first argument is the property name, and the 2nd the property value, which can be of any type.
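A self-contained illustration of that mechanism (PSv5+; the property name and value are made up):
$obj = [pscustomobject]@{}
$obj.psobject.Properties.Add([psnoteproperty]::new('Answer', 42))
$obj.Answer   # -> 42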
I am experimenting with Powershell runspaces and have noticed a difference in how output is written to the console depending on where I create my custom object. If I create the custom object directly in my script block, the output is written to the console in a table format. However, the table appears to be held open while the runspace pool still has open threads, i.e. it creates a table but I can see the results from finished jobs being appended dynamically to the table. This is the desired behavior. I'll refer to this as behavior 1.
The discrepancy occurs when I add a custom module to the runspace pool and then call a function contained in that module, which then creates a custom object. This object is printed to the screen in a list format for each returned object. This is not the desired behavior. I'll call this behavior 2
I have tried piping the output from behavior 2 to Format-Table but this just creates a new table for each returned object. I can achieve the desired effect somewhat by using Write-Host to print a line of the object values but I don't think this is appropriate considering it seems there is a built in behavior that can achieve my desired result if I can understand it.
My thoughts on the matter are that it has something to do with the asynchronous behavior of the runspace. I'm new to PowerShell, but perhaps when the custom object comes directly from the script block there is a hidden method or type declaration telling PowerShell to hold the table open and wait for results? This would be overridden when using the second technique because it's coming from my custom function?
I would like to understand why this is occurring and how I can achieve behavior 1 while being able to use the custom module, which will eventually be very large. I'm open to a different method or technique as well, so long as it's possible to essentially see the table of outputs grow as jobs finish. The code used is below.
$ISS = [InitialSessionState]::CreateDefault()
[void]$ISS.ImportPSModule(".\Modules\Test-Item.psm1")
$Pool = [RunspaceFactory]::CreateRunspacePool(1, 5, $ISS, $Host)
$Pool.Open()
$Runspaces = @()
# Script block to run code in
$ScriptBlock = {
Param ( [string]$Server, [int]$Count )
Test-Server -Server $Server -Count $Count
# Uncomment the three lines below and comment out the two
# lines above to test behavior 1.
#[int] $SleepTime = Get-Random -Maximum 4 -Minimum 1
#Start-Sleep -Seconds $SleepTime
#[pscustomobject]@{Server=$Server; Count=$Count;}
}
# Create runspaces and assign to runspace pool
1..10 | ForEach-Object {
$ParamList = @{ Server = "Server A"; Count = $_ }
$Runspace = [PowerShell]::Create()
[void]$Runspace.AddScript($ScriptBlock)
[void]$Runspace.AddParameters($ParamList)
$Runspace.RunspacePool = $Pool
$Runspaces += [PSCustomObject]@{
Id = $_
Pipe = $Runspace
Handle = $Runspace.BeginInvoke()
Object = $Object
}
}
# Check for things to be finished
while ($Runspaces.Handle -ne $null)
{
$Completed = $Runspaces | Where-Object { $_.Handle.IsCompleted -eq $true }
foreach ($Runspace in $Completed)
{
$Runspace.Pipe.EndInvoke($Runspace.Handle)
$Runspace.Handle = $null
}
Start-Sleep -Milliseconds 100
}
$Pool.Close()
$Pool.Dispose()
The custom module I'm using is as follows.
function Test-Server {
Param ([string]$Server, [int]$Count )
[int] $SleepTime = Get-Random -Maximum 4 -Minimum 1
Start-Sleep -Seconds $SleepTime
[pscustomobject]@{Server = $Server; Item = $Count}
}
What you have mentioned sounds completely normal to me. That is how PowerShell is designed: it shares the burden of display, and if the user has not specified how to display the output, PowerShell decides how to.
I couldn't reproduce your issue with the code provided but I think this will solve your problem.
$FinalTable = foreach ($Runspace in $Completed)
{
$Runspace.Pipe.EndInvoke($Runspace.Handle)
$Runspace.Handle = $null
}
$FinalTable will now have the table format you expect.
It appears that my primary issue, aside from errors in my code, was a lack of understanding related to PowerShell's default object handling. PowerShell displays objects in a table format when they have four or fewer properties and in a list format when they have five or more.
The custom object returned by my test module had more than four key-value pairs, while the custom object I returned directly only had two. This resulted in what I thought was odd behavior. I compounded the issue by removing some key-value pairs in my posted code to shorten it and then didn't test it (sorry).
This Stack Overflow post has a lengthy answer explaining the behavior and providing examples for changing the default output.
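A quick way to see the heuristic in action, using throwaway custom objects that have no predefined format data:
[pscustomobject]@{ A = 1; B = 2; C = 3; D = 4 }          # 4 properties -> rendered as a table
[pscustomobject]@{ A = 1; B = 2; C = 3; D = 4; E = 5 }   # 5 properties -> rendered as a list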
Take this code:
$logged_on_user = get-wmiobject win32_computersystem | select username
If I want to output the value into a new string I'd do something like:
$A = $logged_on_user.username
However, if I do the following:
$logged_on_user = get-wmiobject win32_computersystem | select *
...to try to assign all the values to a new "object", do I use one of the following?
$logged_on_user.items
$logged_on_user.value
$logged_on_user.text
$logged_on_user.propertry
I've tried them all and they don't work.
Anybody got any ideas?
Thanks
P.S. I think I may have got the title of this question wrong.
In your example:
$logged_on_user = get-wmiobject win32_computersystem | select username
creates a new PSCustomObject with a single property - username. When you do the following:
$A = $logged_on_user.username
you are assigning the return value of the PSCustomObject's username property to a variable $A. Because the return type of the username property is a string, $A will also be a string.
When executing the following:
$cs = get-wmiobject win32_computersystem
If you assign $cs to a new variable like in the following:
$newVariable = $cs
Then $newVariable will reference the same object $cs does, so all properties and methods that are accessible on $cs will also be accessible on $newVariable.
If you don't specify any properties or call any methods on an object when assigning a return value to another variable, then the return value is the object itself, not the return value of one of the object's properties or methods.
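A quick check of that reference behavior, reusing the assignment above:
$cs = Get-WmiObject win32_computersystem
$newVariable = $cs
[object]::ReferenceEquals($cs, $newVariable)   # -> True: both variables point to the same object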
Additional info, but not directly related to the question:
When you pipe the output of get-wmiobject to select-object, like in the following:
$cs = get-wmiobject win32_computersystem | select-object *
The variable $cs is of type PSCustomObject, as opposed to ManagementObject (which is what it is when you do not pipe to Select-Object), and it has all of the same properties and values that the piped-in ManagementObject did.
So, if you only want the property values contained by the ManagementObject, there is no need to pipe the output to Select-Object as this just creates a new object (of type PSCustomObject) with the values from the MangementObject. Select-Object is useful when you either want to select a subset of the properties of the object that is being piped in, or if you want to create a new PSCustomObject with different properties that are calculated through expressions.
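A small sketch contrasting the two (this assumes a Windows machine where Get-WmiObject is available):
$raw = Get-WmiObject win32_computersystem
$selected = Get-WmiObject win32_computersystem | Select-Object *
$raw.GetType().Name        # -> ManagementObject
$selected.GetType().Name   # -> PSCustomObject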
I'm not sure if you're asking about copying the results of Get-WmiObject or PowerShell objects in general. In the former case, Get-WmiObject returns instances of the ManagementObject class, which implements the ICloneable interface that provides a Clone method. You can use it like this...
$computerSystem = Get-WmiObject -Class 'Win32_ComputerSystem';
$computerSystemCopy = $computerSystem.Clone();
After the above code executes, $computerSystem and $computerSystemCopy will be identical but completely separate ManagementObject instances. You can confirm this by running...
$areSameValue = $computerSystem -eq $computerSystemCopy;
$areSameInstance = [Object]::ReferenceEquals($computerSystem, $computerSystemCopy);
...and noting that $areSameValue is $true and $areSameInstance is $false.