I have a SharePoint 2019 high-availability environment in which Distributed Cache is up and running on WFE1. However, when I run the Configuration Wizard on WFE2, it fails with the error message: Configuration Wizard failed with error 'CacheHostInfo is null'.
I tried multiple PowerShell commands on WFE2 and went through several articles, but the issue still persists.
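For reference, the commands I tried on WFE2 were along these lines (a sketch from the SharePoint Management Shell; the type-name filter is my own, and the exact sequence may differ in your farm):

# See how the farm registers the Distributed Cache service instance on each server
Get-SPServiceInstance | Where-Object { $_.TypeName -like "*Distributed Cache*" } | Select-Object Server, Status

# Remove and re-provision the Distributed Cache service instance on this server
Remove-SPDistributedCacheServiceInstance
Add-SPDistributedCacheServiceInstance

# Verify the cache cluster from the AppFabric side
Use-CacheCluster
Get-CacheHost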
I have been facing this issue today while deploying my application to the Azure environment using Azure DevOps.
One of the deployment tasks fails with the following error message:
##[debug]Deployment Failed with Error: TypeError: Cannot read property 'scmUri' of undefined
When I retry the deployment a couple of times, it eventually succeeds.
What is causing this error?
Microsoft is aware:
We're investigating intermittent failures of Release Pipelines in West Europe.
Customers using the Azure App Service Deploy task in their Release Pipelines might see intermittent errors like: "##[error]TypeError: Cannot read property 'scmUri' of undefined"
Retrying the release may succeed.
Next Update: Before Wednesday, October 17th 2018 15:50 UTC
Source: https://blogs.msdn.microsoft.com/vsoservice/?p=17835.
Hopefully we'll get a response soon.
The latest report from https://blogs.msdn.microsoft.com/vsoservice/?p=17835:
Final Update: Wednesday, October 17th 2018 20:14 UTC
We've confirmed that all systems are back to normal as of 2018/10/17 19:35 UTC. Our logs show the incident started on 2018/10/17 13:42 UTC and lasted the 5 hours and 53 minutes it took to resolve the issue. During that time, customers using the Azure App Service Deploy task in their Release Pipelines might have seen intermittent errors like: "##[error]TypeError: Cannot read property 'scmUri' of undefined". Sorry for any inconvenience this may have caused.
Root Cause: Initial diagnosis from the Azure Web Apps team points to issues with a few of their back-end nodes sending bad API responses.
Chance of Re-occurrence: Medium
Lessons Learned: We are working on both minimizing resource-intensive activities in our post-deployment steps and targeting monitors specifically to detect post-deployment issues in the future.
Incident Timeline: 5 hours & 53 minutes – 2018/10/17 13:42 UTC through 2018/10/17 19:35 UTC
Sincerely,
Randy
There are several WSPs that I am trying to install, all of which upgraded properly in the SP2013 environment, but when I try to upgrade them in SP2016 it does not work.
I am using the Solution.Upgrade command to upgrade them.
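For reference, the call is roughly equivalent to this (a sketch; Update-SPSolution is the PowerShell counterpart of Solution.Upgrade, and the solution name and path here are hypothetical):

# Upgrade an already-deployed WSP in place; -GACDeployment because the package contains a global assembly
Update-SPSolution -Identity "MySolution.wsp" -LiteralPath "C:\Deploy\MySolution.wsp" -GACDeployment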
Here's the error message received:
Type: Core Solution
Contains Web Application Resource: No
Contains Global Assembly: Yes
Contains Code Access Security Policy: No
Deployment Server Type: Front-end Web server
Deployment Status: Error
Deployed To: Globally deployed.
Last Operation Result: Some of the files failed to copy during deployment of the solution.
Last Operation Details: The solution has not been upgraded.
Last Operation Time: 5/17/2017 5:23 AM
Can anyone tell me why?
Does it work if you retract the solution and deploy again? Try this if you have not.
How did you migrate your SP2013 sites to SP2016? Was it through the PowerShell backup and restore commands? There can be a version conflict, so try migrating your sites using database backup and restore instead, as sketched below.
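A sketch of the database-attach approach (assuming the SP2013 content database has already been restored onto the SP2016 SQL instance; the database name and web application URL are hypothetical):

# Check the restored content database for upgrade blockers, then attach it to the SP2016 farm
Test-SPContentDatabase -Name "WSS_Content_Sites" -WebApplication "http://sp2016-webapp"
Mount-SPContentDatabase -Name "WSS_Content_Sites" -WebApplication "http://sp2016-webapp"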
Also, rebuild your solution in Visual Studio 2015 and then deploy it; VS2015 has SP2016 templates. Target .NET Framework 4.5 or above.
I was able to resolve this issue by reading the logs carefully. While looking into them, I found that for some reason the Timer Service was stopped before my upgrade code ran, so I added code to check the Timer Service status and start it again if it was stopped.
If you don't want to check whether the service is stopped, you can simply stop it and start it again.
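Something along these lines (a sketch, assuming the standard SPTimerV4 service name for the SharePoint Timer Service; the timeout is arbitrary):

# Ensure the SharePoint Timer Service is running before the upgrade code executes
$timer = Get-Service -Name "SPTimerV4"
if ($timer.Status -eq "Stopped") {
    Start-Service -Name "SPTimerV4"
    $timer.WaitForStatus("Running", (New-TimeSpan -Seconds 60))
}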
This hack resolved my issue, and now every WSP deploys successfully.
We are using MS Azure Backup to back up files from a specific folder on a local disk to an Azure backup service; however, it is not updating the cloud version of some files when they have been updated locally.
The error log has recorded a number of the following errors:
Failed: Hr: = [0x80070005] : CreateFile failed \?\Volume{...}\ with error :5
More worryingly, the jobs in question show as successful in the jobs list, with no indication of any issues.
I only discovered this because one job from three days ago was tagged with a warning, which appears to be a connectivity issue somewhere, and I came across these entries in the log.
Would someone be able to:
Indicate how we can get these changed files to be backed up?
Answer why the MS Azure Backup jobs are listed as successful when these warnings have been recorded?
Thanks
Gavin
It seems like there is a permissions issue with the files and folders you are trying to back up to Azure. Please check whether the folders or the drive you are backing up are formatted as NTFS.
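As a quick way to check (the drive letter is hypothetical):

# The FileSystem column should report NTFS for the volume being backed up
Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystem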
Hope this helps.
I am getting the following error:
Error: The process cannot access the file 'C:\DWASFiles\Sites\mywebsitename\VirtualDirectory0\site\wwwroot\newrelic\NewRelic.Agent.Core.dll' because it is being used by another process.
in the "Running deployment command..." log file when attempting to deploy an Azure website from GitHub.
Would appreciate any pointers as to what could be causing this.
UPDATE: Turns out this is also failing when publishing directly from VS.NET with the following:
1>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.targets(4196,5): Warning : An error was encountered when processing operation 'Create File' on 'NewRelic.Agent.Core.dll'.
1>Retrying operation 'Update' on object filePath (mywebsitename\newrelic\NewRelic.Agent.Core.dll). Attempt 1 of 2.
1>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.targets(4196,5): Error : Web deployment task failed. ((06/07/2013 23:54:58) An error occurred when the request was processed on the remote computer.)
This was working before and I am not sure why it would have stopped.
New Relic recommends stopping the website to unload the file and allow the deployment to go through.
As an alternative, you can set COR_ENABLE_PROFILING to 0 in your app settings on the Configure tab to temporarily disable profiling, which should then allow you to continue with the deployment while leaving the website operational throughout.
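For example, with the classic Azure PowerShell module from that era (a sketch; the site name is hypothetical, and note that changing app settings recycles the site):

# Fetch the current settings, flip the profiler flag, and write the collection back
$site = Get-AzureWebsite -Name "mywebsitename"
$site.AppSettings["COR_ENABLE_PROFILING"] = "0"
Set-AzureWebsite -Name "mywebsitename" -AppSettings $site.AppSettings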
Instead of stopping the website you can temporarily turn off New Relic monitoring via the Configure tab on manage.windowsazure.com:
Configure > developer analytics > select "OFF" > Save
Deploy
Configure > developer analytics > select "ADD-ON" > Choose Add-on from dropdown > Save
Worked for me, both with a regular deployment from VS and an automatic build from VSO.
This is a known issue with the New Relic .NET agent for Azure Websites when performing an upgrade of the agent. The workaround is to stop the website to release the dll, finish the deployment and then restart the instance.
https://newrelic.com/docs/dotnet/azure-web-sites#h2-1
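As a sketch of that stop/deploy/restart sequence with the classic Azure PowerShell cmdlets of that era (the site name is hypothetical):

# Stop the site to release the locked DLL, deploy, then bring it back up
Stop-AzureWebsite -Name "mywebsitename"
# ... run the deployment here ...
Start-AzureWebsite -Name "mywebsitename"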
Not really a solution but more of a workaround: in the publish dialog, view a preview of the changes and uncheck the NewRelic.Agent.Core.dll file so that it doesn't get published.
None of these answers work for me anymore. I have an Azure Basic tier website plan, which hosts multiple actual websites.
If I don't stop the website, I get the error mentioned above (newrelic.agent.core.dll is in use)...
If I do stop the website (or all of them), I get an error saying that the publishing endpoint isn't available.
If I go to the Configure tab and disable the add-on, I still get the error mentioned above (newrelic.agent.core.dll is in use)...
Pretty much we just republish over and over again with different permutations of the above until it works. It took me hours the other day; today it took 10 minutes.
If you are using Web Deploy, you can configure your settings so that the file is ignored, as sketched below. However, if you do that, you will have to deploy any updates to the New Relic agent manually.
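A sketch of that approach, assuming Web Deploy is driven from PowerShell through the Microsoft.Web.Deployment API (the rule name is arbitrary, and $destBaseOptions is a Microsoft.Web.Deployment.DeploymentBaseOptions instance as in the answer below):

# Skip the locked New Relic agent DLL during the sync; agent updates must then be deployed manually
$skipDll = New-Object Microsoft.Web.Deployment.DeploymentSkipDirective("NewRelicAgentDll", "objectName=filePath,absolutePath=.*\\NewRelic\.Agent\.Core\.dll$")
$destBaseOptions.SkipDirectives.Add($skipDll)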
I had a similar issue with the New Relic log file being locked, and solved it by:
Moving the New Relic log file to a subdirectory of the web root (e.g. \newreliclogs)
Adding two lines to my PowerShell script to configure a skip directive that ignores that whole directory, e.g. (where $destBaseOptions is of type Microsoft.Web.Deployment.DeploymentBaseOptions):
# Skip any directory whose absolute path ends in \newreliclogs so its locked log files are never touched
$skipDirective = New-Object Microsoft.Web.Deployment.DeploymentSkipDirective("NewRelicLog", "objectName=dirPath,absolutePath=.*\\newreliclogs$")
$destBaseOptions.SkipDirectives.Add($skipDirective)
Depending on how you are using Web Deploy, the configuration is achieved slightly differently; I used the following links to help me piece it together:
https://technet.microsoft.com/en-us/library/dd569089(WS.10).aspx
https://msdn.microsoft.com/en-us/library/microsoft.web.deployment.deploymentskipdirective(v=vs.90).aspx
https://msdn.microsoft.com/en-us/library/dd543313(v=vs.90).aspx
http://blogs.iis.net/jamescoo/archive/2009/11/03/msdeploy-api-scenarios.aspx
http://forums.iis.net/p/1192163/2031814.aspx#2031813
And I used the PowerShell script from the Octopus Deploy Library at https://library.octopusdeploy.com/#!/step-template/actiontemplate-web-deploy-publish-website-(msdeploy).