I've changed the path (the polling URI) of the XML file behind my live tile, but Windows still requests the old XML URL.
I tried to update the XML URL with the following steps:
Turning live tiles off
Unpinning the tile
Clearing the MS Edge browser cache and history
Deleting all content within C:\Users\user_name\AppData\Local\Packages\Microsoft.MicrosoftEdge_randomized_hash\LocalState\PinnedTiles
Deleting the iconcache.db file inside C:\Users\user_name\AppData\Local
Running Disk Cleanup
Then I start MS Edge again and pin the tiles to the Start menu, and the server logs show that Windows still requests the old XML path.
How can I update it? There must be some system cache, I suppose ...
I've spent a lot of time and would appreciate any advice!
Microsoft Support replied to my question: the cause was the MS Edge cache.
These steps helped me; I hope they'll help someone else.
Please try the steps below to reset the Edge browser and check. Please
note that resetting Microsoft Edge will remove your bookmarks and
history. Follow the instructions provided below and check.
a. Navigate to the location:
C:\Users\%username%\AppData\Local\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe
b. Delete everything in this folder.
c. Type Windows PowerShell in the search box.
d. Right-click on Windows PowerShell and select Run as administrator.
e. Copy and paste the following command:
Get-AppXPackage -AllUsers -Name Microsoft.MicrosoftEdge | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml" -Verbose}
The above worked for me. However, I have since discovered that after unpinning the tile(s) you don't want, you can simply delete the corresponding tile folder(s) here: C:\Users\YOUR USERNAME\AppData\Local\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\LocalState\PinnedTiles
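If you prefer doing that from PowerShell, here's a minimal sketch (the folder-name placeholder is one of the numeric tile IDs you'll find there; inspect each folder's contents first to identify the tile you want gone):
# List the per-tile folders, then remove the one you no longer want
$tiles = "$env:LOCALAPPDATA\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\LocalState\PinnedTiles"
Get-ChildItem $tiles -Directory
Remove-Item -Recurse -Force (Join-Path $tiles "<TILE FOLDER NAME>")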
Not to hijack the topic, but if you'd like to see all your bookmarks/favorites, here's a starter PowerShell script to give you that information.
You'll need Newtonsoft.Json.dll.
cls;
# Load Newtonsoft.Json so we can deserialize the favorites files
[Reflection.Assembly]::LoadFile("C:\Users\<YOUR USER FOLDER>\Documents\WindowsPowerShell\Newtonsoft.Json.dll") | Out-Null;
$source = "C:\Users\<YOUR USER FOLDER>\AppData\Local\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\RoamingState";
# Edge stores each favorite as a GUID-named JSON file, e.g. {guid}.json
$filter = "{*}.json";
$files = Get-ChildItem -Recurse -Path $source -Filter $filter -File;
foreach ($f in $files)
{
    # Read the whole file as a single string so it deserializes cleanly
    $json = Get-Content -Path $f.FullName -Raw
    $result = [Newtonsoft.Json.JsonConvert]::DeserializeObject($json);
    $result.Title.ToString()
    $result.URL.ToString()
}
I am trying to create a site script from an existing team site, but when I run the script that I followed from this Microsoft document it asks me for the WebUrl (see the "WebURL Prompt" screenshot), even though it is in the script; then, when I provide it at the prompt, it gives me the error shown in the "Error" screenshot.
I am not new to site scripts but have not used them much. I would like to create one from an existing site that I have created for the PM team. Please advise. I am using the latest SharePoint Online Management Shell and I am logged in. I am also a Global Admin.
Any assistance would be helpful as I have done everything I can think of to do and Googled my heart out but cannot figure out what is going on.
Welcome to StackOverflow!
From the screenshot you posted, it looks like you did not include a ` (backtick) at the end of each row in your command. Those are necessary for PowerShell to understand a command that spans multiple rows as a single command, like in the following example:
Get-SPOSiteScriptFromWeb `
-WebUrl https://contoso.sharepoint.com/sites/template `
-IncludeBranding `
-IncludeTheme `
-IncludeRegionalSettings `
-IncludeSiteExternalSharingCapability `
-IncludeLinksToExportedItems `
-IncludedLists ("Shared Documents", "Lists/Project Activities")
An alternative would be to enter the entire command and all parameters on a single row, like this:
Get-SPOSiteScriptFromWeb -WebUrl https://contoso.sharepoint.com/sites/template -IncludeBranding -IncludeTheme -IncludeRegionalSettings -IncludeSiteExternalSharingCapability -IncludeLinksToExportedItems -IncludedLists ("Shared Documents", "Lists/Project Activities")
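Once the command returns the extracted JSON, you would typically register it as a site script, roughly like this (a sketch only; the title is a placeholder):
$extracted = Get-SPOSiteScriptFromWeb -WebUrl https://contoso.sharepoint.com/sites/template -IncludeBranding -IncludeTheme
Add-SPOSiteScript -Title "PM team site template" -Content $extracted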
Hope this helps :)
Dragan
I'm extremely new to this, so apologies if it's a dumb question, but I couldn't find anything about it here, at help.octopusdeploy.com, or on Google.
Additionally, I'm a DevOps engineer, not a developer, and have been using TeamCity and Octopus for about 3 weeks. I'm loving it so far, but it's probably best if you consider me a total rookie ;)
I currently have a build configuration in TeamCity that, on a successful build, creates a release in Octopus and deploys the project to a test server. It is kept separate but deployed alongside the master build. So, in IIS it looks like:
IIS Sites
site.domain.com (master build)
featurebuild1-site.domain.com (feature branch 1)
featurebuild2-site.domain.com (feature branch 2)
etc...
Obviously, this makes life really easy for the devs when testing their feature builds, but it leaves a hell of a mess on the test and integration servers. I can go in and clean them up manually, but I'd vastly prefer it to not leave crap lying around after they've removed the branch in TeamCity.
So, the Project in TeamCity looks like:
Project Name
Feature
/Featurebuild1
/Featurebuild2
/Featurebuild3
Master
Assuming all three feature builds run successfully, I will have 3 feature build IIS sites on the test server alongside the master. If they decide they're done with Featurebuild3 and remove it, I want to somehow automate the removal of featurebuild3-site.domain.com in IIS on my test server. Is this possible? If so, how?
My initial thoughts are to have another Octopus project that will go in and remove the site(s), but I can't figure out if I can/how to trigger it.
Relevant details:
TeamCity version: 9.1.1 (build 37059)
Octopus Deploy version: 3.0.10.2278
Ok, it took me a little while to figure it out, but here's what I ended up doing (just in the event that anyone else is attempting to do the same thing).
I ended up bypassing TeamCity entirely and using our Stash repositories as the source. Also, as I didn't need it to clean up IMMEDIATELY upon deletion, I was happy to have it run nightly. Once I'd decided that, it was then down to a bunch of nested REST API calls to loop through each project and team to enumerate all the different repositories (apologies if I'm butchering terminology here).
$stashroot = "http://<yourstashsite>/rest/api/1.0"
$stashsuffix = "/repos/"
$stashappendix = "/branches"
# List all projects/teams (curl is an alias for Invoke-WebRequest here)
$teamquery = curl ($stashroot + "/projects") -ErrorAction SilentlyContinue
At this point I started using jq (https://stedolan.github.io/jq/) to do some better parsing of the JSON text I was getting back:
$teams = $teamquery.Content | jq -r ".values[].link.url"
foreach ($team in $teams)
{
    # Get the list of repositories in the project/team.
    # Feature branch URL format be like: http://<yourstashsite>/projects/<projectname>/repos/<repositoryname>/branches
    $project = $stashroot + $team + $stashsuffix
    $projectquery = curl $project -ErrorAction SilentlyContinue
    $repos = $projectquery.Content | jq -r ".values[].name"
    foreach ($repo in $repos)
    {
        Try
        {
            $repository = $stashroot + $team + $stashsuffix + $repo + $stashappendix
            $repositoryquery = curl $repository -ErrorAction SilentlyContinue
            $reponames = $repositoryquery.Content | jq -r ".values[].displayId"
            foreach ($reponame in $reponames)
            {
                # write-host $team "/" $repo "/" $reponame -ErrorAction SilentlyContinue
                $NewObject = New-Object PSObject
                $NewObject | Add-Member -MemberType NoteProperty -Name "Team" -Value $team
                $NewObject | Add-Member -MemberType NoteProperty -Name "Repository" -Value $repo
                $NewObject | Add-Member -MemberType NoteProperty -Name "Branch" -Value $reponame
                $NewObject | Export-Csv <desiredfilepath> -NoType -Append
            }
        }
        Catch {} # Yes, I know this is terrible; it makes me sad too :(
    }
}
After that, it was simply a matter of running Compare-Object against the CSV files from two different days (I have logic in place to look for a pre-existing CSV and rename it to append "_yesterday" to it) and outputting to a file all the repositories/builds that have been nuked since yesterday.
After that, it strips out the feature branch names (which we use to prefix test site names in IIS) and loops through IIS looking for any sites that match that prefix, removes them and the associated application pool, and deletes the directory on the server that stored the site content.
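For illustration, here's roughly what that diff-and-cleanup stage could look like (a sketch only: the CSV file names and the "<branch>-site.domain.com" naming are assumptions based on the description above):
Import-Module WebAdministration
# Branches present yesterday but missing today are the ones that were deleted
$gone = Compare-Object (Import-Csv "branches_yesterday.csv") (Import-Csv "branches.csv") -Property Branch |
    Where-Object { $_.SideIndicator -eq "<=" }
foreach ($g in $gone)
{
    # Site names are prefixed with the branch name, e.g. featurebuild3-site.domain.com
    foreach ($site in (Get-Website | Where-Object { $_.Name -like "$($g.Branch)-*" }))
    {
        $path = $site.PhysicalPath
        Remove-Website -Name $site.Name
        Remove-WebAppPool -Name $site.ApplicationPool
        Remove-Item -Recurse -Force $path
    }
}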
I'm sure there are far better ways to achieve this, especially if you know how to code. I'm just a poor little script monkey though, so I have to make do with what I have :)
I've been writing a script to connect to SharePoint Online, create a document library, and add some AD accounts to the permissions. I've written all the code using snippets I have found through many searches, but am having an issue with the permissions part.
I keep getting an error when adding the user and permission type to the RoleDefinitionBindingCollection.
The error is:
Collection has not been initialized at line 0 char 0.
Here is my code:
$collRdb = new-object Microsoft.SharePoint.Client.RoleDefinitionBindingCollection($ctx)
$collRdb.Add($role)
$collRoleAssign = $web.RoleAssignments;
$ctx.Load($principal)
$collRoleAssign.Add($principal, $collRdb)
$ctx.ExecuteQuery()
The issue is when it runs the $collRoleAssign.Add($principal, $collRdb) part and stops with the error above.
I would really appreciate a hand with this before my PC gets launched out of the window.
Thanks
Mark
EDIT
All of the code is taken from this page:
http://jeffreypaarhuis.com/2012/06/07/scripting-sharepoint-online-with-powershell-using-client-object-model/
The only change is that I'm using the get-principal function instead of the get-group one, but I'm not sure if that's what has done it. I'm very new to PowerShell.
Thanks
I don't think you can add something to $collRoleAssign if it hasn't been loaded first.
You get an error because it has a null value.
I would have written it like this:
$collRoleAssign = $web.RoleAssignments
$ctx.Load($collRoleAssign)
# I suppose you already set $principal before
$ctx.Load($principal)
# Here I suppose $collRdb is set and loaded
$collRoleAssign.Add($principal, $collRdb)
$ctx.ExecuteQuery()
By the way, there is a ";" in your code which should not be there.
I didn't try it but that should help!
Sylvain
I am very new to PowerShell and have a small amount of Linux bash scripting experience. I have been looking for a way to get a list of files on a server that contain Social Security Numbers. I found the following in my research, and it performed exactly as I wanted when testing on my home computer, except that it did not return results from my Word and Excel test documents. Is there a way to use a PowerShell command to get results from the various Office documents as well? This server is almost all Word and Excel files with a few PowerPoints.
PS C:\Users\Stephen> Get-ChildItem -Path C:\Users -Recurse -Exclude *.exe, *.dll | `
Select-String "\d{3}[-| ]\d{2}[-| ]\d{4}"
Documents\SSN:1:222-33-2345
Documents\SSN:2:111-22-1234
Documents\SSN:3:111 11 1234
PS C:\Users\Stephen> Get-childitem -rec | ?{ findstr.exe /mprc:. $_.FullName } | `
select-string "[0-9]{3}[-| ][0-9]{2}[-| ][0-9]{4}"
Documents\SSN:1:222-33-2345
Documents\SSN:2:111-22-1234
Documents\SSN:3:111 11 1234
Is there a way to use a PowerShell command to get results from the various Office documents as well? This server is almost all Word and Excel files with a few PowerPoints.
When interacting with MS Office files, the best way is to use the COM interfaces to grab the information you need.
If you are new to PowerShell, COM will definitely be somewhat of a learning curve for you, as very little "beginner" documentation exists on the internet.
Therefore I strongly advise starting off small:
First, focus on opening a single Word doc and reading its contents into a string for now.
Once you have this ready, focus on extracting the relevant info (the PowerShell -match operator is very helpful).
Once you are able to work with a single Word doc, try to locate all files named *.docx in a folder and repeat your process on them: foreach ($file in (ls *.docx)) { # work on $file } (see the sketch after this list).
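Putting those steps together, a minimal sketch could look like this (assuming Word is installed on the machine doing the scanning; the C:\Scan folder is a placeholder, and the SSN pattern is the one from your question):
# Open each .docx via COM, read its text, and report files matching the SSN pattern
$word = New-Object -ComObject Word.Application
$word.Visible = $false
foreach ($file in (Get-ChildItem "C:\Scan" -Recurse -Filter *.docx))
{
    $doc = $word.Documents.Open($file.FullName, $false, $true)  # open read-only
    if ($doc.Content.Text -match "\d{3}[-| ]\d{2}[-| ]\d{4}")
    {
        $file.FullName
    }
    $doc.Close($false)  # close without saving
}
$word.Quit()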
Here's some reading (admittedly, all this is for Excel as I build automated Excel charting tools, but the lessons will be very helpful for automating any Office application)
Powershell and Excel - Introduction
A useful document from a deleted link (link points to the Google Cache for that doc) - http://dev.donet.com/automating-excel-spreadsheets-with-powershell
Introduction to working with "Objects" in PS - CodeProject
If you only want to cover .docx and .xlsx, you might also want to consider simply unzipping the files and searching through the contents, ignoring any XML tags (so allow one or more XML elements between each digit).
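A rough sketch of that approach (the C:\Scan folder is again a placeholder; .docx/.xlsx files are just ZIP archives of XML, so stripping the tags leaves the document text):
Add-Type -AssemblyName System.IO.Compression.FileSystem
foreach ($file in (Get-ChildItem "C:\Scan" -Recurse -Include *.docx, *.xlsx))
{
    $zip = [System.IO.Compression.ZipFile]::OpenRead($file.FullName)
    foreach ($entry in ($zip.Entries | Where-Object { $_.Name -like "*.xml" }))
    {
        $reader = New-Object System.IO.StreamReader($entry.Open())
        $text = $reader.ReadToEnd() -replace "<[^>]+>", ""   # drop the XML tags
        $reader.Close()
        if ($text -match "\d{3}[-| ]\d{2}[-| ]\d{4}") { $file.FullName; break }
    }
    $zip.Dispose()
}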
I'm trying to create a Node.js web app hosted on a Linux server. The app must read and parse a table in a Word document.
I've looked around and seen that PowerShell can trivially accomplish this. The problem is that PowerShell is an MS scripting language, and its Mac port (Pash) is very unstable and chokes whenever I want to execute something as simple as this:
$wd = New-Object -ComObject Word.Application
$wd.Visible = $true
$doc = $wd.Documents.Open($filename)
$doc.Tables | ForEach-Object {
$_.Cell($_.Rows.Count, $_.Columns.Count).Range.Text
}
I've looked into other solutions like Docsplit, but it's too generic (i.e. it converts an entire Word doc to plain text, which is not granular enough for my purposes).
Some suggested using the Saaspose API, but it costs a lot of money! I think I can do this myself.
Ideas?
Here's a Python module that can read/write docx files:
https://github.com/mikemaccana/python-docx
If you're deploying on a Linux machine, it's probably best to use Docsplit and then parse the output text, or you could try Apache POI.
Another option would be to try the MS COM API running on Wine, but I'm not sure whether that's compatible.