I am using a program to export hundreds of rows of an Excel sheet into separate documents. The problem is that a PLC will be reading the files, and they only save in the CSV (Macintosh) format, with no option for Windows. Is there a way to bulk-convert multiple files with different names into the correct format?
I have used this code for a single file, but I do not have the knowledge to adapt it for multiple files in a directory:
$path = 'c:\filename.csv';
[System.IO.File]::WriteAllText($path.Remove($path.Length-3)+'txt',[System.IO.File]::ReadAllText($path).Replace("`n","`r`n"));
Thank you
The general PowerShell idiom for processing multiple files one by one:
Use Get-ChildItem (or Get-Item) to enumerate the files of interest, as System.IO.FileInfo instances.
Pipe the result to a ForEach-Object call, whose script-block argument ({ ... }) is invoked once for each input object received via the pipeline, reflected in the automatic $_ variable.
Specifically, since you're calling .NET API methods, be sure to pass full, file-system-native file paths to them, because .NET's working directory usually differs from PowerShell's. $_.FullName does that.
Therefore:
Get-ChildItem -LiteralPath C:\ -Filter *.csv |
  ForEach-Object {
    [IO.File]::WriteAllText(
      [IO.Path]::ChangeExtension($_.FullName, 'txt'),
      [IO.File]::ReadAllText($_.FullName).Replace("`n", "`r`n")
    )
  }
Note:
In PowerShell type literals such as [System.IO.File], the System. part is optional and can be omitted, as shown above.
[System.IO.Path]::ChangeExtension(), as used above, is a more robust way to obtain a copy of a path with the original filename extension changed to a given one than the manual string manipulation ($path.Remove(...)) in your original command.
While Get-ChildItem -Path C:\*.csv or even Get-ChildItem C:\*.csv would work too (Get-ChildItem's first positional parameter is -Path), -Filter, as shown above, is usually preferable for performance reasons.
Caveat: While -Filter is typically sufficient, it does not use PowerShell's wildcard language, but delegates matching to the host platform's file-system APIs. This means that range or character-set expressions such as [0-9] and [fg] are not supported, and, on Windows, several legacy quirks affect the matching behavior - see this answer for more information.
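One caveat regarding line endings: Excel's CSV (Macintosh) format has historically used CR-only line endings, which the literal "`n" replacement above would leave untouched. If your files turn out to be CR-only, a variant that normalizes any line-ending style (CRLF, CR, or LF) to CRLF may be safer; a sketch using PowerShell's regex-based -replace operator:
Get-ChildItem -LiteralPath C:\ -Filter *.csv |
  ForEach-Object {
    # Normalize all line endings to CRLF; the alternation tries CRLF first,
    # so already-correct line endings are left as-is.
    [IO.File]::WriteAllText(
      [IO.Path]::ChangeExtension($_.FullName, 'txt'),
      ([IO.File]::ReadAllText($_.FullName) -replace '\r\n|\r|\n', "`r`n")
    )
  }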
Related
I have a process where files containing data are generated in separate locations, saved to a networked location, and merged into a single file.
At the end of the process, I would like to check that all locations are present in that merged file, and be notified if not.
I am having trouble finding a way to identify that a string specific to each location isn't present (to be used in an if statement); my attempt doesn't seem to identify the string correctly.
I have tried :
get-childitem -filter *daily.csv.ready \\x.x.x.x\data\* -recurse | where-object {$_ -notin 'D,KPI,KPI,1,'}
I know it's probably easier to do nothing if it is present, and perform the warning action if not, but I'm curious if this can be done in the reverse.
Thank you,
As Doug Maurer points out, your command does not search through the content of the files output by the Get-ChildItem command, because what that cmdlet emits are System.IO.FileInfo (or, potentially, System.IO.DirectoryInfo) instances containing metadata about the matching files (directories) rather than their content.
In other words: the automatic $_ variable in your Where-Object command refers to an object describing a file rather than its content.
However, you can pipe System.IO.FileInfo instances to the Select-String cmdlet, which indeed searches the input files' content:
Get-ChildItem -Filter *daily.csv.ready \\x.x.x.x\data\* -Recurse |
  Where-Object { -not ($_ | Select-String -Quiet 'D,KPI,KPI,1,') }
Note that the test is negated with -not rather than with Select-String's -NotMatch switch: combined with -Quiet, -NotMatch would return $true whenever at least one line lacks the string, even if other lines do contain it.
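To also get the notification you mention, you could, for instance, emit a warning for each file in which the string is missing (a minimal sketch; path and string as in the question):
Get-ChildItem -Filter *daily.csv.ready \\x.x.x.x\data\* -Recurse |
  Where-Object { -not ($_ | Select-String -Quiet 'D,KPI,KPI,1,') } |
  ForEach-Object { Write-Warning "String not found in: $($_.FullName)" }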
I have an XML file that I'm trying to read with PowerShell. However, when I read it, the output of some of the XML objects contains the following characters: â€‹
I simply downloaded an XML file I needed from a third-party, which opens in Excel. Then I grab the columns I need and paste them into a new Excel Workbook. Then I map the fields with an XML Schema and then export it as an XML file, which I then use for scripting.
In the Excel spreadsheet my data looks clean, but then when I export it and run the PS script, these strange characters appear in the output. The characters even appear in the actual XML file after exporting. What am I doing wrong?
I tried using -Encoding UTF8, but I'm relatively new to PowerShell and am not sure how to appropriately apply it to my script. Appreciate any help!
PowerShell
$xmlpath = 'Path\To\The\File.xml'
[xml]$xmldata = (Get-Content $xmlpath)
$xmldata.applications.application.name
Example of Output
â€‹ABC_DEF_GHIâ€‹.comâ€‹â€‹
â€‹JKL_MNO_PQRSâ€‹.comâ€‹
TUV_WXY_Z.com
AB_CD_EF_GHâ€‹.com
This is a prime example of why you shouldn't use the idiom [xml]$xmldata = (Get-Content $xmlpath), as convenient as it is.[1] The problem is indeed one of character encoding: your file is UTF-8-encoded, but Windows PowerShell's Get-Content cmdlet interprets it as ANSI-encoded in the absence of a BOM; this answer explains the encoding part in detail (thanks, choroba).
Instead, to ensure that the XML file's character encoding is interpreted correctly, use the following:
# Note: If you know that $xmlPath contains a *full*, native path,
# you don't need the Convert-Path call.
($xmlData = [xml]::new()).Load((Convert-Path -LiteralPath $xmlPath))
This delegates interpretation of the character encoding to the System.Xml.XmlDocument.Load .NET API method, which not only assumes the proper default for XML (UTF-8), but also respects any explicit encoding specification in the XML declaration, if present (e.g., <?xml version="1.0" encoding="iso-8859-1"?>).
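Once loaded this way, the document can be queried exactly as before, e.g. with the element path from the question:
$xmlData.applications.application.name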
See also:
the bottom section of this answer for background information.
GitHub proposal #14505, which proposes introducing a New-Xml cmdlet that robustly parses XML files.
[1] If you happen to know the encoding of the input file ahead of time, you can get away with using Get-Content's -Encoding parameter in your original approach ([xml]$xmldata = Get-Content -Encoding utf8 $xmlpath), but the .Load()-based approach is much more robust.
Problem:
Update a specific string within numerous configuration files that are found within the subfolders of a partial path using PowerShell.
Expanded Details:
I have multiple configuration files that need a specific string to be updated; however, I do not know the name of these files and must begin my search from a partial path. I must scan each file for the specific string. Then I must replace the old string with the new string, but I must make sure it saves the file with its original name and in the same location it was found. I must also be able to display the results of the script (number of files affected and their names/path). Lastly, this must all be done in PowerShell.
So far I have come up with the following on my own:
$old = "string1"
$new = "string2"
$configs = Get-ChildItem -Path C:\*\foldername\*.config -Recurse
$configs | %{(Get-Content $_) -Replace $old, $new | Set-Content $_FullName
When I run this, something seems to happen.
If the files are open, they will tell me that they were modified by another program.
However, nothing seems to have changed.
I have attempted various modifications of the below code as well. To my dismay, it only seems to be opening and saving each file rather than actually making the change I want to happen.
$configFiles = GCI -Path C:\*\Somefolder\*.config -Recurse
foreach ($config in $configFiles) {
    (GC $config.PSPath) | ForEach-Object {
        $_ -Replace "oldString", "newString"
    } | Set-Content $config.PSPath
}
To further exacerbate the issue, all of my attempts to perform a simple search against the specified string are posing issues as well.
Discussing with several others, and based on what I have learned via SO, the following code SHOULD return results:
GCI -Path C:\*\Somefolder\*.config -Recurse |
    Select-String -Pattern "string" |
    Select Name
However, nothing seems to happen. I do not know if I am missing something or if the code itself is wrong...
Some questions I have researched and tried that are similar can be found at the below links:
Replacing a text at specified line number of a file using powershell
Find and replacing strings in multiple files
PowerShell Script to Find and Replace for all Files with a Specific Extension
Powershell to replace text in multiple files stored in many folders
UPDATE:
It is possible that I am being thwarted by special characters such as + and /. For example, my string might be: "s+r/ng"
I have applied the escape character that PowerShell says to use, but it seems this is not helping either.
I will continue my research and continue making modifications. I'll be sure to note anything that gets me to my goal or even a step closer. Thank you all in advance.
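For reference, a corrected sketch of the first attempt that addresses the issues above (the unclosed script block, the missing dot in $_.FullName, and regex metacharacters in the search string, neutralized here via [regex]::Escape); the paths and example string are taken from the question:
$old = [regex]::Escape('s+r/ng')  # escape regex metacharacters such as + before using -replace
$new = 'string2'
$changed = @()
Get-ChildItem -Path C:\*\foldername\*.config -Recurse | ForEach-Object {
    $content = Get-Content -LiteralPath $_.FullName -Raw
    if ($content -match $old) {
        # Replace and save under the original name, in the original location.
        $content -replace $old, $new | Set-Content -LiteralPath $_.FullName
        $changed += $_.FullName
    }
}
"{0} file(s) updated:" -f $changed.Count  # report the results
$changed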
What is the benefit of using Join-Path to create a file path instead of just doing something like the following?
"$Folder\$FileName"
Join-Path benefits:
Uses the path separator defined by the provider it runs against
Supports more than just the filesystem (certificate store, registry, etc.)
Accepts multiple parent paths to join with a child path
Accepts credentials (for providers that support them)
Can resolve the result to a full path if it's relative (-Resolve)
Accepts arguments from the pipeline without a ForEach-Object intermediary
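A few quick illustrations of these points (a sketch; the registry example assumes Windows, and -Resolve requires the resulting path to exist):
Join-Path C:\Users Public              # -> C:\Users\Public
Join-Path C:\Users, D:\Data logs       # multiple parent paths: C:\Users\logs, D:\Data\logs
Join-Path HKLM:\SOFTWARE Microsoft     # provider-aware: HKLM:\SOFTWARE\Microsoft
Join-Path C:\Users Public -Resolve     # verifies the path and returns it in full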
I was wondering if anybody knows a way to achieve this without breaking/messing with the data itself?
I have a CSV file delimited by '|', which was created by retrieving data from SharePoint with an SPQuery and exported using Out-File (Export-Csv is not an option, since I would have to store the data in a variable, which would eat at the server's RAM; querying remotely unfortunately does not work either, so I have to do this on the server itself). Nevertheless, I have the data I need, but I want to perform some manipulations, move and auto-calculate certain data within an Excel file, and export that Excel file.
The problem I have right now is that I sort of need a header to the file. I have tried using the following code:
$header ="Name|GF_GZ|GF_Title|GF_UniqueId|GF_OldId|GFURL|GF_ITEMREDIRECTID"
$file = Import-Csv inputfilename.csv -Header $header | Export-Csv D:\outputfilename.csv
In PowerShell, but the issue here is that the subsequent Export-Csv delimits on commas again and thereby breaks up the data; I need the data to remain intact.
I have tried playing with the -Delimiter '|' parameter on both the import and the export side, but no matter what I do, it seems to be cutting off the data. Is there a better way to simply add a row at the top (a header) without messing with the already existing file structure?
I have found that using a delimiter such as -Delimiter '°' or some other special character removes my problem entirely, but I can never be sure that such a character won't show up in the dataset, and thus (as stated already) I am looking for a more "elegant" solution.
Thanks
One option you have is to create the original CSV with the header first. Then, when you are exporting the SharePoint data, use the -Append switch of Out-File to append the SP data to the CSV.
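A minimal sketch of that approach (the header string is taken from the question; $spData stands in for whatever your SPQuery pipeline emits):
$header = "Name|GF_GZ|GF_Title|GF_UniqueId|GF_OldId|GFURL|GF_ITEMREDIRECTID"
$header | Out-File .\outputfilename.csv          # write the header line first
$spData | Out-File .\outputfilename.csv -Append  # then append the SharePoint data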
I wouldn't even bother messing with it in CSV format.
$header ="Name|GF_GZ|GF_Title|GF_UniqueId|GF_OldId|GFURL|GF_ITEMREDIRECTID"
$in_file = '.\inputfilename.csv'
$out_file = '.\outputfilename.csv'
$x = Get-Content $in_file
Set-Content $out_file -Value $header,$x
There's probably a more elegant/refined two-liner for some of this, but this should get you what you need.
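Once the header row is in place, the file can be read back with the correct delimiter, e.g. (using the output file from above):
Import-Csv $out_file -Delimiter '|'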