I am trying to use PowerShell to run through a list of wiki pages that were originally on a separate wiki site. I want to migrate these to a SharePoint 2010 wiki site. I have a PowerShell script that goes through all the files, creates a wiki page for each, sets the Page Content to the body HTML of the old page, and updates the item, but for some reason none of the layout is shown. The page is just blank, with no boxes shown in edit mode or in SharePoint Designer. Here is my script:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$SPSite = New-Object Microsoft.SharePoint.SPSite("http://dev-sharepoint1/sites/brandonwiki");
$OpenWeb = $SPSite.OpenWeb("/sites/brandonwiki");
$List = $OpenWeb.Lists["Pages"];
$rootFolder = $List.RootFolder
$files = Get-ChildItem ./testEx -rec | Where-Object {!($_.PSIsContainer)}
foreach ($file in $files) {
    $name = $file.Name.Substring(0, $file.Name.IndexOf(".htm"))
    $strDestURL = "/sites/brandonwiki"
    $strDestURL += $rootFolder
    $strDestURL += "/"
    $strDestURL += $name
    $strDestURL += ".aspx"
    $Page = $rootFolder.Files.Add($strDestURL, [Microsoft.SharePoint.SPTemplateFileType]::WikiPage)
    $Item = $Page.Item
    $Item["Name"] = $name
    $cont = Get-Content ./testEx/$file
    $cont = $cont.ToString()
    $Item["Page Content"] = $cont
    $Item.UpdateOverwriteVersion();
}
$OpenWeb.Dispose()
$SPSite.Dispose()
Here is what a good page, added manually, looks like in edit mode:
http://imageshack.us/photo/my-images/695/goodpage.png/
And what the pages look like done through Powershell:
http://imageshack.us/photo/my-images/16/badpage.png/
I found the answer. The script needs to create a publishing page instead of a list item; the publishing API then adds the appropriate wiki page layout and sets everything up for you. Here is my new script, which works, for anyone who may also have this issue. Note that it also replaces the DokuWiki links with SharePoint ones:
##################################################################################################################
# Wiki Page Creator #
##################################################################################################################
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") #Setup SharePoint functions
#Commonly Used Variables To Set
$baseSite = "http://sharepointSite" #Main SharePoint site
$subSite = "/sites/subSite" #SharePoint wiki sub-site
$pageFolder = "Pages" #Directory of wiki pages
$fileFolder = "Media" #Directory of files
## Setup basic sites and pages
$site = New-Object Microsoft.SharePoint.SPSite($baseSite+$subSite) #SharePoint site
$psite = New-Object Microsoft.SharePoint.Publishing.PublishingSite($baseSite+$subSite) #Publishing site
$ctype = $psite.ContentTypes["Enterprise Wiki Page"] #Get Enterprise Wiki page content type
$layouts = $psite.GetPageLayouts($ctype, $true) #Get Enterprise Wiki page layouts
$layout = $layouts[0] #Choose first layout
$web = $site.OpenWeb(); #Site.RootWeb
$pweb = [Microsoft.SharePoint.Publishing.PublishingWeb]::GetPublishingWeb($web) #Get the publishing web
$pages = $pweb.GetPublishingPages() #Get collection of pages from the web
## Get files in exported folder and parse them
$files=get-childitem ./testEx -rec|where-object {!($_.psiscontainer)} #Get all files recursively
foreach ($file in $files) { #Foreach file in folder(s)
$name=$file.Name.Substring(0, $file.Name.IndexOf(".htm")) #Get file name, stripped of extension
#Prep Destination url for new pages
$strDestURL=$subSite+"/"+$pageFolder+"/"+$name+".aspx"
#End Prep
$page = $pages.Add($strDestURL, $layout) #Add a new page to the collection with Wiki layout
$item = $page.ListItem #Get list item of the page to access fields
$item["Title"] = $name; #Set Title to file name
$cont = Get-Content ./testEx/$file #Read contents of the file
[string]$cont1 = "" #String to hold contents after parsing
### HTML PARSING
foreach($line in $cont){ #Get each line in the contents
$mod = $line #Copy of the line that will be modified during parsing
#############################################
# Replacing Doku URI with Sharepoint URI #
#############################################
## Matching for hyperlinks and img src
$mat = $mod -match ".*href=`"/Media.*`"" #Find href since href and img src have same URI Structure
if($mat){ #If a match is found
foreach($i in $matches){ #Cycle through all matches
[string]$j = $i[0] #Set match to a string
$j = $j.Substring($j.IndexOf($fileFolder)) #Shorten string for easier parsing
$j = $j.Substring(0, $j.IndexOf("amp;")+4) #Remove end for easier parsing
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf("id="))+$j.Substring($j.IndexOf("&")+5)) #Replace all occurrences of original with two sections
}
}
## Matching for files and images
$mat = $mod -match "`"/Media.*`"" #Find all Media resources
if($mat){ #If match is found
[string]$j = $matches[0] #Set match to a string
$j = $j.Substring(0, $j.IndexOf("class")) #Shorten string for easier parsing
$j = $j.Substring($j.IndexOf($fileFolder)) #Shorten
$j = $j.Substring(0, $j.LastIndexOf(":")+1) #Remove end for easier parsing
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf($fileFolder)+5)+$j.Substring($j.LastIndexOf(":")+1)) #Replace all occurrences of original with two sections
}
$mod = $mod.Replace("/"+$fileFolder, $subSite+"/"+$fileFolder+"/") #make sure URI contains base site
## Matching for links
$mat = $mod -match "`"/Page.*`"" #Find all Page files
if($mat){ #If match is found
[string]$j = $matches[0] #Set match to a string
if($j -match "start"){ #If it is a start page,
$k = $j.Replace(":", "-") #replace : with a - instead of removing to keep track of all start pages
$k = $k.Replace("/"+$pageFolder+"/", "/"+$pageFolder) #Remove following / from Pages
$mod = $mod.Replace($j, $k) #Replace old string with the remade one
}
else{ #If it is not a start page
$j = $j.Substring(0, $j.IndexOf("class")) #Stop the string at the end of the href so not to snag any other :
$j = $j.Substring($j.IndexOf($pageFolder)) #Start at Pages in URI
$j = $j.Substring(0, $j.LastIndexOf(":")+1) #Now limit down to last :
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf($pageFolder)+5)+$j.Substring($j.LastIndexOf(":")+1)) #Replace all occurrences of original with two sections
}
[string]$j = $mod #Set the match to a new string
$j = $j.Substring(0, $j.IndexOf("class")) #Stop at class to limit extra "
$j = $j.Substring($j.IndexOf($pageFolder)) #Start at Pages in URI
$j = $j.Substring(0, $j.LastIndexOf("`"")) #Grab ending "
$rep = $j+".aspx" #Add .aspx to the end of URI
$mod = $mod.Replace($j, $rep) #Replaced old URI with new one
}
$mod = $mod.Replace("/"+$pageFolder, $subSite+"/"+$pageFolder+"/") #Add base site to URI
$cont1 += $mod #Add parsed line to the new HTML string
}
##### END HTML PARSING
$item["Page Content"] = $cont1 #Set Wiki page's content to new HTML
$item.Update() #Update the page to set the new fields
}
$site.Dispose() #Dispose of the open link to site
$web.Dispose() #Dispose of the open link to the webpage
Related
My SharePoint list has too many items, and I need to delete 10,000 files from it. How do I do that using a PowerShell script?
Create the query:
$list = (Get-Spweb http://devmy101).GetList("http://devmy101/Lists/smarEnteredTerritorialWaters")
$query = New-Object Microsoft.SharePoint.SPQuery;
$query.ViewAttributes = "Scope='Recursive'";
$query.RowLimit = 2000;
$query.Query = '<Where><Gt><FieldRef Name="Created"/><Value Type="DateTime" IncludeTimeValue="TRUE">2013-07-10T14:20:00Z</Value></Gt></Where>';
Build the command. Note that the query is limited to returning 2000 items at a time, and uses the ListItemCollectionPosition property to keep retrieving items in batches of 2000 until all the items have been queried; see the MSDN documentation on SPQuery.ListItemCollectionPosition for more info.
$itemCount = 0;
$listId = $list.ID;
[System.Text.StringBuilder]$batchXml = New-Object "System.Text.StringBuilder";
$batchXml.Append("<?xml version=`"1.0`" encoding=`"UTF-8`"?><Batch>");
$command = [System.String]::Format( "<Method><SetList>{0}</SetList><SetVar Name=`"ID`">{1}</SetVar><SetVar Name=`"Cmd`">Delete</SetVar></Method>", $listId, "{0}" );
do
{
$listItems = $list.GetItems($query)
$query.ListItemCollectionPosition = $listItems.ListItemCollectionPosition
foreach ($item in $listItems)
{
if($item -ne $null){$batchXml.Append([System.String]::Format($command, $item.ID.ToString())) | Out-Null;$itemCount++;}
}
}
while ($query.ListItemCollectionPosition -ne $null)
$batchXml.Append("</Batch>");
$itemCount;
And lastly (and most importantly!), run the query
$web = Get-Spweb http://inceweb/HKMarineDB;
$web.ProcessBatchData($batchXml.ToString()) | Out-Null;
You will want to run this in SharePoint Management Shell as Admin.
This is taken verbatim from a blog post by Matthew Yarlett. I posted the majority of the post here in case his blog ever goes away. http://matthewyarlett.blogspot.com/2013/07/well-that-was-fun-bulk-deleting-items.html
I am using PowerShell to search a document for a key word (TTF) and then import some data. I am searching a few thousand Excel documents, and about halfway through it started picking up unwanted data.
The code I have is as follows:
$condition1 = "TTF"
$fs1 = $ws.cells.find($condition1)
It started getting unwanted data when the Excel documents started using "TTF All Day" in another cell near the start of the document.
How do I get PowerShell to look only for "TTF" exactly, and not "TTF" followed by more characters?
Thanks
Try using the LookAt parameter with the xlWhole value to match only cells whose entire contents equal "TTF". Note that VBA's named-argument syntax (LookAt:=xlWhole) is not valid in PowerShell; the optional COM parameters have to be supplied positionally, using [Type]::Missing for the ones you want to skip:
$condition1 = "TTF"
$fs1 = $ws.cells.find($condition1, [Type]::Missing, [Type]::Missing, [Microsoft.Office.Interop.Excel.XlLookAt]::xlWhole)
This will work (the find result must not be cast to [void], or the found range is discarded, and xlWhole must be passed in the LookAt position, not the After position):
$condition1 = "TTF"
$fs1 = $ws.cells.find($condition1, [Type]::Missing, [Type]::Missing, [Microsoft.Office.Interop.Excel.XlLookAt]::xlWhole)
# This script illustrates how to use the Range.Find method
# with parameters to find exact match
# ------------------------------------------------------------------------------------
# Learn More:
# Range.Find Method: https://learn.microsoft.com/en-us/office/vba/api/Excel.Range.Find
# XlFindLookIn: https://learn.microsoft.com/en-us/office/vba/api/excel.xlfindlookin
# XlLookAt: https://learn.microsoft.com/en-us/office/vba/api/excel.xllookat
# Open Excel
$Excel = New-Object -ComObject Excel.Application
$Excel.Visible=$false
$Excel.DisplayAlerts=$false
# Open Spreadsheet you want to test
$xlSource = "C:\Temp\YourSpreadsheetNameGoesHere.xlsx"
$xlBook = $Excel.Workbooks.Open($xlSource)
$xlSheet = $xlBook.worksheets.item("SheetNameGoesHere")
# What you want to search
$searchString = "John Smith"
# Column or Range you want to search
$searchRange = $xlSheet.Range("A1").EntireColumn
# Search
$search = $searchRange.find($searchString,
$searchRange.Cells(1,1),
[Microsoft.Office.Interop.Excel.XlFindLookIn]::xlValues,
[Microsoft.Office.Interop.Excel.XlLookAt]::xlWhole
)
# NOT FOUND
if ($search -eq $null) {
Write-Host "Not found"
}
else { # FOUND
Write-Host "Found at row #" -NoNewline
Write-Host $search.Row
}
# Close Objects
if ($xlBook) { $xlBook.close() }
if ($Excel) { $Excel.quit() }
I have a list of files whose names I am breaking up into parts. The splitting works, but I want to be able to reference each part individually. The issue is that my variables will not accept the string parts by index. The file naming format is custID_invID_prodID or custID_invID_prodID_Boolvalue.
$files = Get-ChildItem test *.txt
[string]$custId = $files.Count
[string]$invID = $files.Count
[string]$prodID = $files.Count
[int]$Boolvalue = $files.Count
foreach($file in (Get-ChildItem test *.txt)) {
for($i = 0; $i -le $files.Count; $i++){
$custId[$i], $invID[$i], $prodID[$i], $Boolvalue[$i] = $file.BaseName -split "_"
Write-Host $custId, $invID, $prodID, $Boolvalue
}
}
The error message I am seeing is:
Unable to index into an object of type System.String.
How can I do this?
I'd suggest working with objects instead of a lot of string arrays. I have an example below in which, since I don't have the file structure in place, I have replaced the file listing with an ordinary array. Just remove that array declaration, put in your Get-ChildItem call, and it should work just fine.
function ConvertTo-MyTypeOfItem
{
PARAM (
[ValidatePattern("([^_]+_){3}[^_]+")]
[Parameter(Mandatory = $true, ValueFromPipeline = $true)]
[string]$StringToParse
)
PROCESS {
$custId, $invId, $prodId, [int]$value = $StringToParse -split "_"
$myObject = New-Object PSObject -Property @{
CustomerID = $custId;
InvoiceID = $invId;
ProductID = $prodId;
Value = $value
}
Write-Output $myObject
}
}
# In the test scenario I have replaced getting the list of files
# with an array of names. Just uncomment the first and second lines
# following this comment and remove the other $baseNames setter, to
# get the $baseNames from the file listing
#$files = Get-ChildItem test *.txt
#$baseNames = $files.BaseName
$baseNames = @(
"cust1_inv1_prod1_1";
"cust2_inv2_prod2_2";
"cust3_inv3_prod3_3";
"cust4_inv4_prod4_4";
)
$myObjectArray = $baseNames | ConvertTo-MyTypeOfItem
$myObjectArray
The above function will return objects with the CustomerID, InvoiceID, ProductID and Value properties. In the sample above, the function is called and the returned array is assigned to the $myObjectArray variable. When output in the console it will give the following output:
InvoiceID CustomerID ProductID Value
--------- ---------- --------- -----
inv1 cust1 prod1 1
inv2 cust2 prod2 2
inv3 cust3 prod3 3
inv4 cust4 prod4 4
Seems to me that you're doing it the hard way. Why keep four arrays, one for every "field" of a file? It's better to create an array of arrays - the first index indicates a file, and the second a field within that file:
$files = Get-ChildItem test *.txt
$arrFiles = @(,@());
foreach($file in $files ) {
$arrFile = $file.BaseName -split "_"
$arrFiles += ,$arrFile;
}
Write-Host "listing all parts from file 1:"
foreach ($part in $arrFiles[1]) {
Write-Host $part
}
Write-Host "listing part 0 from all files:"
for ($i=0; $i -lt $arrFiles.Count ; $i++) {
Write-Host $arrFiles[$i][0];
}
Because you're trying to index into an object of type System.String! You set all your variables as strings to start with and then try to assign to an index, which I presume would attempt to assign a string to the character position at the index you provide.
This is untested but should be in the right direction.
$custIdArr = @()
$invIDArr = @()
$prodIDArr = @()
$BoolvalueArr = @()
foreach($file in (Get-ChildItem test *.txt)) {
$split = $file.BaseName -split "_"
$custId = $split[0]; $custIdArr += $custId
$invID = $split[1]; $invIDArr += $invId
$prodID = $split[2]; $prodIDArr += $prodID
$boolValue = $split[3]; $boolValueArr += $boolValue
Write-Host $custId, $invID, $prodID, $Boolvalue
}
Create a set of empty arrays, loop through your directory, split the filename for each file, append the results of the split into the relevant array.
I'm assigning to $custId, $invID, $prodID, $Boolvalue for the sake of clarity above, you may choose to directly add to the array from the $split var i.e. $invIDArr += $split[1]
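The failing line can also be reproduced in isolation. Reading from a string by index works, but assigning by index does not, because .NET strings are immutable (a minimal sketch with a hypothetical value):

```
$custId = "5"     # [string]$custId = $files.Count stores the count as a string like this
$custId[0]        # reading by index is fine: returns the [char] '5'
$custId[0] = "9"  # fails with "Unable to index into an object of type System.String."
```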
I'm trying to use PowerShell and the SharePoint 2013 CSOM to copy attachments from one item to a new item in another list. I've been able to successfully generate an attachments folder for the new item, so in theory all I need to do is move the files from the old attachments folder to the new one. CopyTo and MoveTo only seem to work for moving files within a list, so I thought to use OpenBinaryDirect and SaveBinaryDirect with the site context. However, in PowerShell, calling either of these methods results in the following error: Method invocation failed because [System.RuntimeType] doesn't contain a method named 'OpenBinaryDirect'.
$attachments = $item.AttachmentFiles
if($attachments.Count -gt 0)
{
#Creates a temporary attachment for the new item to generate a folder; it will be deleted later.
$attCI = New-Object Microsoft.SharePoint.Client.AttachmentCreationInformation
$attCI.FileName = "TempAttach"
$enc = New-Object System.Text.ASCIIEncoding
$buffer = [byte[]] $enc.GetBytes("Temp attachment contents")
$memStream = New-Object System.IO.MemoryStream (,$buffer)
$attCI.contentStream = $memStream
$newItem.AttachmentFiles.Add($attCI)
$ctx.load($newItem)
$sourceIN = $sourceList.Title
$archIN = $archList.Title
$sourcePath = "/" + "Lists/$sourceIN/Attachments/" + $item.Id
$archPath = "/" + "Lists/$archIN/Attachments/" + $newItem.Id
$sFolder = $web.GetFolderByServerRelativeUrl($sourcePath)
$aFolder = $web.GetFolderByServerRelativeURL($archPath)
$ctx.load($sFolder)
$ctx.load($aFolder)
$ctx.ExecuteQuery()
$sFiles = $sFolder.Files
$aFiles = $aFolder.Files
$ctx.load($sFiles)
$ctx.load($aFiles)
$ctx.ExecuteQuery()
foreach($file in $sFiles)
{
$fileInfo = [Microsoft.SharePoint.Client.File].OpenBinaryDirect($ctx, $file.ServerRelativeUrl)
[Microsoft.Sharepoint.Client.File].SaveBinaryDirect($ctx, $archPath, $fileInfo.Stream, $true)
}
}
$ctx.ExecuteQuery()
Any help on getting the BinaryDirect methods to work, or a generalized strategy for copying attachments across lists using PowerShell + CSOM, would be greatly appreciated.
You have the wrong syntax for invoking a static method. You want [Microsoft.SharePoint.Client.File]::OpenBinaryDirect( ... )
Note the double-colon syntax :: between the type name and the method name. The same applies to the SaveBinaryDirect invocation.
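With the static invocation corrected, the copy loop from the question would look like the sketch below. Note that SaveBinaryDirect expects a full server-relative file URL rather than a folder path, so the file name is appended to $archPath (all other variables are as defined in the question's script):

```
foreach($file in $sFiles)
{
    # :: invokes the static method on the type itself
    $fileInfo = [Microsoft.SharePoint.Client.File]::OpenBinaryDirect($ctx, $file.ServerRelativeUrl)
    $destUrl = $archPath + "/" + $file.Name
    [Microsoft.SharePoint.Client.File]::SaveBinaryDirect($ctx, $destUrl, $fileInfo.Stream, $true)
}
```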
I'm trying to create a link to a document in SharePoint 2010 using PowerShell 2.0. I've already enabled other content types and added the 'Link to a Document' content type to the document library in question.
The document that I'm trying to link to is in a different shared document list on another web in the same site collection. The actual file is nested in a subfolder called "PM". The file types may range from excel files to word files to PDFs.
I've tested the process manually (Shared Documents -> New Document -> Link to a Document -> ...) and it worked fine (as was indicated by the document icon with an arrow over the bottom right corner when I was done), but I cannot seem to get it to work with PowerShell. Any ideas?
This is the only non-PowerShell solution that I've come across so far:
http://howtosharepoint.blogspot.com/2010/05/programmatically-add-link-to-document.html
I got it working finally by porting the aforementioned solution. There's superfluous detail here, but the gist of it is easy enough to parse out:
# Push file links out to weekly role page
$site = New-Object Microsoft.SharePoint.SPSite($roleURL)
$web = $site.OpenWeb()
$listName = "Shared Documents"
$list = $web.Lists[$listName]
$fileCollection = $list.RootFolder.files
ForEach ($doc in $docLoadList) {
$parsedFileName = $doc.Split("\")
$index = $parsedFileName.Length
$index = $index - 1
$actualFileName = $parsedFileName[$index]
$existingFiles = Get-ExistingFileNames $homeURL
If ($existingFiles -Match $actualFileName) {
$existingFileObject = Get-ExistingFileObject $actualFileName $homeURL
$docLinkURL = Get-ExistingFileURL $actualFileName $homeURL
# Create new aspx file
$redirectFileName = $actualFileName
$redirectFileName += ".aspx"
$redirectASPX = Get-Content C:\Test\redirect.aspx
$redirectASPX = $redirectASPX -Replace "#Q#", $docLinkURL
$utf = new-object System.Text.UTF8Encoding
$newFile = $fileCollection.Add($redirectFileName, $utf.GetBytes($redirectASPX), $true)
# Update the newly added file's content type
$lct = $list.ContentTypes["Link to a Document"]
$lctID = $lct.ID
$updateFile = $list.Items | ? {$_.Name -eq $redirectFileName}
$updateFile["ContentTypeId"] = $lctID
$updateFile.SystemUpdate()
$web.Dispose()
}
}
I may end up stringing together the .aspx file in the script too at some point...