I have 12.0.0.6421 showing in Central Admin, which would seem to indicate that SP2 was installed. However, when I run an STSADM command to back up a site collection, I do not see the message informing me that it's "setting the site collection to be read-only for the duration of the backup" as described here:
http://bobfox.securespsite.com/FoxBlog/Lists/Posts/Post.aspx?ID=121
I simply get the "Operation completed successfully" message I used to get pre-SP2. Does this mean that SP2 wasn't installed correctly?
Anthony,
Showing a build of 6421 does indeed indicate that SP2 is in place. Just to make sure, I checked my own farm and VMs as well as a reliable external source (an entry from Todd Klindt's blog: http://www.toddklindt.com/blog/Lists/Posts/Post.aspx?ID=154). I didn't doubt the build number, but it never hurts to confirm :-)
At first, I thought I understood where the issue might be, so I ran some tests. First, I ran an STSADM backup in catastrophic mode to back up my entire farm. Since this isn't a site collection backup, no locking should occur:
stsadm -o backup -directory \\ss-nas3\backups\test -backupmethod full
My catastrophic backup ran without issue, and I didn't receive any message about a lock or read-only behavior. I looked at my ULS logs as well and confirmed that no lock was being established (searching for "sitelock" and "lock"). That was what I expected, since I was doing a catastrophic backup -- not a site collection backup.
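For anyone wanting to repeat that ULS check, the keyword search can be scripted; here is a minimal illustrative sketch in Python (the log directory and keywords are placeholders -- any scripting tool or even findstr would do the same job):

```python
import os

def find_lock_entries(log_dir, keywords=("sitelock", "lock")):
    """Scan the .log files in log_dir for lines mentioning any keyword."""
    hits = []
    for name in sorted(os.listdir(log_dir)):
        if not name.lower().endswith(".log"):
            continue
        path = os.path.join(log_dir, name)
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            for line_no, line in enumerate(f, 1):
                if any(k in line.lower() for k in keywords):
                    hits.append((name, line_no, line.rstrip()))
    return hits
```

Pointing this at the 12\LOGS folder and getting no hits during a site collection backup is the same negative result I saw by hand.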
Next, I tried a site collection backup:
stsadm -o backup -url https://www.sculpted-system.com/pictures -filename \\ss-nas3\backups\test\SiteCollectionBackupTest.bak
Strangely enough, I didn't see a locking message here, either. I took a look at the ULS logs, and I saw nothing to indicate that a lock was put in place. Finally, I performed an
stsadm -o getsitelock ...
... while the backup was running and was greeted with this:
<SiteLock Lock="none" />
ARGH! That's not what I wanted (or expected) to see! Clearly, there was a problem ... so, I tried coming at it from a different angle. I took a look at the MSDN documentation for the STSADM -o backup command, and it clearly indicated that a lock should occur by default. It also indicated that the -nositelock switch should work to override the behavior. So, I tried adding -nositelock to my site collection backup command line.
Guess what: it choked on -nositelock with a command line error (invalid parameter).
Doing an STSADM -help backup indicated that -nositelock was not a valid switch for my environment. None of the new switches I expected (e.g., -nositelock and -force) were present. It's as if my production farm was stuck in pre-SP2 with regard to backups.
When I checked a development VM I had that was also build 6421 (but a different image -- amongst other things, Win2K8 instead of Win2K3 R2), I saw that -nositelock was a valid command line option. So I checked another development VM that was also build 6421 (but Win2K3 R2, like my "regular farm"). -nositelock was a valid option there, too.
I had applied SP2 the same way across all three environments when upgrading (WSSv3 SP2 bits, followed by MOSS 2007 SP2 bits, followed by a run of the configuration wizard), so I wasn't sure what was going on.
For fun, I ran a site collection backup on each of the VMs that correctly displayed that -nositelock was a valid command line switch for site collection backups, and I was met with the locking message I didn't see earlier (and that you weren't seeing, either). Clearly, the SP2 updates were operating as I expected them to everywhere except my primary (production) farm.
I concluded I must have somehow done something wrong as part of upgrading my farm, so I tried re-running the WSSv3 SP2 update (first) and the MOSS 2007 SP2 update (second) on each box. With each update on each box, I was told that the update had already been applied. So, I dropped back and punted: I re-ran the configuration wizard to see if it would do anything. I then rebooted the two (virtual) boxes in the farm.
No change.
At this point, I can only confirm that you aren't losing your mind. Two of my all-in-one development VMs at SP2 build 6421 operate as expected, but my two-server farm, also at build 6421, is not locking on site collection backup the way it should.
I think I'll probably follow up with a friend who is a Microsoft TAM. If I learn anything, I'll post it here and probably on my blog. In the meantime, you might want to follow up with Microsoft, as well. Clearly, something isn't working as expected.
For what it's worth!
There is a list of SharePoint Versions maintained by the SharePoint community here:
http://www.sharepointdevwiki.com/display/SharePointAdministrationWiki/SharePoint+Versions
Your version is correct for SP2; I wouldn't worry about the STSADM message not appearing -- it's a pretty inconsistent tool.
An automated Windows update this morning left my Windows Server 2012 R2 Classic Virtual Machine on Azure in a semi-crashed state. The VM is a web server, and all the files and applications in it are still accessible via the browser. In other words, IIS and a number of other services are still running. Unfortunately, however, the VM is not accessible via Remote Desktop and is unresponsive to commands from the Azure management interface on the portal.azure.com website.
This type of error is quite common and can be found reported on many other websites. The error has been happening to Windows users (not just Windows Server) for many years already, and none of the solutions online will work for Azure users, because they involve restarting from a CD, pressing shift-f8 during boot, issuing DOS commands, restoring from backup, or unchecking certain properties in VMWare or other software.
Does anybody have a real solution for this problem on Microsoft Azure?
After struggling with this for weeks, I think I was able to fix this with the help of Microsoft support! I decided to post the solution here in case it helps someone in the future. Here are the three things you need to do to fix this:
1-Restore the VM from a backup prior to the crash. The VM with the "Undoing Changes" crash is pretty much toast at this point. Now, proceed to steps 2 and 3 to ensure that the next batch of Windows Updates won't crash it again!
2-On your new VM, ensure that the Environment Variables for TEMP and TMP both point to C:\Windows\TEMP. In my case, they were both pointing to a temporary folder in the logged-in user's profile.
3-Ensure that C:\Windows\TEMP is always empty. I achieved this by setting up a scheduled task that runs a simple BAT file that deletes all files and folders inside C:\Windows\TEMP once a day. I spoke with a Microsoft representative who said that even though you may have plenty of hard drive space on your C:\ drive, the Windows TEMP folder is really not supposed to get much bigger than 500 MB. When it gets very large, you may have issues with Windows Updates (mine was just under 500 MB when the updates were failing).
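To illustrate the delete logic in step 3, here is a minimal sketch in Python (purely illustrative -- the actual scheduled task would typically run a BAT or PowerShell script, and the folder path is whatever you point it at):

```python
import os
import shutil

def empty_folder(folder):
    """Delete every file and subfolder inside `folder`, leaving the folder itself."""
    for entry in os.listdir(folder):
        path = os.path.join(folder, entry)
        try:
            if os.path.isdir(path) and not os.path.islink(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
        except OSError:
            # Skip anything locked by a running process, as Windows would refuse it.
            pass
```

Scheduled daily against the TEMP folder, this keeps it from ever growing toward the ~500 MB range mentioned above.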
I would recommend contacting Azure support as something may have to be done by an engineer to fix the issue and unfortunately classic VMs don't have the redeploy feature.
I added only inbound port 3389 (RDP), and it works well now.
I have a SharePoint 2013 site collection backup, and I am trying to restore this backup to another SharePoint 2013 site collection. Both SharePoint sites are on the same domain. But when I try to restore the site collection from the backup, I get this error:
Restore-SPSite : <nativehr>0x80070003</nativehr><nativestack></nativestack>
At line:1 char:1
+ Restore-SPSite -Identity http://ksptestinst2:9999 -Path "E:\SiteBackup\BackupSPS ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (Microsoft.Share...dletRestoreSite:SPCmdletRestoreSite) [Restore-SPSite], DirectoryNotFoundException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletRestoreSite
The command I use to restore the site collection backup is:
Restore-SPSite -Identity http://ksptestinst2:9999 -Path "E:\SiteBackup\BackupSPSite.bak" -Force
I tried using
Restore-SPSite -Identity "http://ksptestinst2:9999/" -Path "E:\SiteBackup\BackupSPSite.bak" -Force -DatabaseServer KSQL2012SP\SQLTESTDB -DatabaseName WSS_Content_KSPTESTINST2_9999
but both commands give the same error.
Can anyone suggest how to proceed?
A couple of approaches you could try:
1: Run SharePoint Config Wizard on both the servers
There could be server patches installed, SharePoint services installed, SQL patches installed, pending restarts or any other factor that you might want to rule out initially. Then perform a backup and restore operation. This is an easy one to cross off the list (quite often overlooked).
2: Match Environment Patch Levels
The best and recommended fix is to ensure that the SharePoint Configuration Database versions and/or Cumulative Update/patch levels match across both environments -- the environment where you took the backup might be at a different patch level than the environment you are going to restore it to. (Go to Central Admin –> System Settings –> Manage Servers in this Farm and verify whether there is any pending action. Keep note of the version to check for any version mismatches between environments.)
In the same page, double check that you do not see any “Upgrade Required” mentioned against any of those servers. If it is mentioned there, please ensure that you run the SP Config Wizard before you proceed.
Once things look good and you have compared the versions, download the latest KB from Microsoft and install it to match the SharePoint Configuration Database schema versions. Perform all server remediation and CU (Cumulative Update) installations. Remember to run the Configuration Wizard after each CU.
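One small pitfall when comparing builds by eye: the four-part build numbers compare as tuples of integers, not as strings (a plain text sort would put "15.0.999.0" after "15.0.4569.1506"). A tiny illustrative sketch in Python (the version strings below are placeholders, not your actual builds):

```python
def parse_build(version):
    """Turn a build string like '15.0.4569.1506' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def needs_update(source_build, target_build):
    """True when the restore target is at a lower build than the backup source."""
    return parse_build(target_build) < parse_build(source_build)
```

If `needs_update` comes back True for your two farms, patching the target up to the source's build is the long-term fix described above.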
3: Using STSADM
This is a pretty interesting workaround, but sometimes I feel "Old is Gold". Power up your SP management console and try to perform the restore operation using the good old STSADM command line. At times, when the new PowerShell cmdlets fail, STSADM has worked for me.
stsadm -o restore -url "site url" -filename "backup filename"
4: Content DB Restore
Try a content database backup and restore. Before doing this, you might want to check in Central Admin (View All Site Collections –> select the site collection and check which content DB it lives on) to see which site collections will be affected if you restore a particular content DB. You do not want to lose any other site collections that share the same content DB.
5: Editing Backup File
(Not Recommended - But works like a Charm)
This is one of those quick and untidy fixes you could possibly try. To start, open the backup file (the file you got from the Backup-SPSite command) in Notepad++ (or any other text editor; avoid plain Notepad, though). It might look funny with special characters, but ignore those for now.
If the file size is too large to open in Notepad++ (>100 MB), you can use any standard file splitter program to split the file into multiple smaller files of, say, 10 MB. I have had success with FFSJ.
Open the file (if you split it, the first chunk) in Notepad++ and look for a version number that looks something like 15.0.XXXX.XXXX. It should appear somewhere in the beginning lines.
DO NOT MODIFY ANYTHING ELSE. Interestingly, this is the version number that the Restore-SPSite cmdlet checks first; if it sees a server version different from the backup version, it just throws the error.
Now, to know which version number to put there, open your ULS logs (15\Logs\{latestlogfile}) and search for the text "schema version". You should see a message similar to this:
Could not deserialize site from E:\SiteCollection1.bak.
Microsoft.SharePoint.SPException: Schema version of backup 15.0.YYYY.YYYY does not match current schema version 15.0.XXXX.XXXX at Microsoft.SharePoint.SPSite.Restore(String filename, Boolean isADMode, Boolean& readOnlyMode, Boolean& hadWriteLock)
If you are unable to find the above message in the logs, then the issue is probably something else, and there is little to no chance of getting this option to work.
If you succeed in finding the error message, pick the version number it was expecting from the above ULS error message and update the VERSION NUMBER ONLY in the backup file you were editing in Notepad++. Save it. If you had split the files using a file splitter tool, merge the files back into a single backup file.
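If you find yourself doing the edit-split-merge dance often, the version swap can also be scripted. A hedged sketch in Python, assuming the schema version is stored as a plain ASCII string in the file, as it appears when inspected in Notepad++ (the filenames and version numbers are placeholders; keep a copy of the original backup first):

```python
def patch_schema_version(path, old_version, new_version):
    """Replace the backup's schema version string in place.

    Assumes the version appears as plain ASCII text in the file.
    Returns the number of occurrences replaced.
    """
    if len(old_version) != len(new_version):
        # Keep the file size identical so no offsets inside the backup shift.
        raise ValueError("version strings must be the same length")
    old_bytes = old_version.encode("ascii")
    new_bytes = new_version.encode("ascii")
    with open(path, "rb") as f:
        data = f.read()
    count = data.count(old_bytes)
    if count:
        with open(path, "wb") as f:
            f.write(data.replace(old_bytes, new_bytes))
    return count
```

For example, patch_schema_version(r"E:\SiteCollection1.bak", "15.0.YYYY.YYYY", "15.0.XXXX.XXXX") with the real numbers taken from the ULS message; a return value of 0 means the string wasn't found as plain text and the manual Notepad++ route is safer.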
Now run the restore command with the new backup file and see if it works.
Restore-SPSite -Identity {{SiteCollectionURL}} -Path "E:\SiteCollection1-New.bak" -Force
These are just a few of the options (not an exhaustive list, but hopefully a good start) that you could try.
I have had the greatest success with options #1 and #5. Option #2 is something that should be done in the long run.
You could read a bit more on this from my post here as well.
The following issue just crept up on me. The steps mentioned below had worked just fine until about 2 days ago.
When I deploy an update to a solution (of web parts) to a SharePoint 2010 server, I don't see the update. The solution does get installed, but from what I can tell, the installed web parts are over a month old (nothing new is installed).
I do the following steps through PowerShell:
retract the solution from the web app
remove the solution
add the solution
install the solution to the web app
I have tried restarting the Web App, restarting IIS and also restarting the server. Nothing seems to work.
I notice that after I remove the solution, it does get removed from the GAC. After I add/install it, the solution does reappear in the GAC.
Am I missing something? Am I overlooking a step that I should be doing? Something to try?
I never deactivated/reactivated the Feature.
After following the same steps I mentioned in my question, I just deactivated, then reactivated, the Feature, and everything started working fine.
This is an easy step I can start adding to my solution updates. However, why did I never have to do this step before?
In general, you should check your ULS log to see which version of your solution is running. If you see the old one, then you can be sure that your activated site feature is still bound to the old version. In this case, you do have to deactivate the site feature to lose that tie, and then activate it to bind to the new one (it appears activation always ties the site feature to the newest version of the solution).
Maybe you did not have to do this earlier because you did not change the version number of your solution, so it appeared as the same version in the GAC on the server. In that case, your site feature was already pointing to the correct version of your solution, so you didn't have to reset the feature.
You have probably checked, but just in case: make sure that the PowerShell script is not adding a month-old package.
Is the problem in the web part code or the configuration? The configuration usually unghosts itself sooner or later and refuses to update from the solution - you can update the file in the gallery manually if anything has changed there. For most updates there won't be any changes because existing web parts won't get updates applied anyway - they will use new code but old configuration.
If the problem is the code itself, does the assembly appear to the system to be unchanged? All the hardcoded full name references in SharePoint config files mean that usually you are deploying a new assembly but with the same version numbers. This can mean that the system doesn't bother making the update. I have found it very useful to update AssemblyFileVersion (which does not affect binding) on every build and have a page in _layouts that displays the file versions of all the loaded assemblies so I know exactly what is running.
I have several custom web parts that I'm in the process of deploying to production. During this process I've found a handful of minor things that need to be tweaked in the various parts. To deploy the new code I create a new solution package, deactivate then delete the features, retract then delete the solution, then do it all again in reverse order with the new package. Needless to say, this can be time consuming. Is it necessary to completely remove a web part in order to upgrade it, or can a web part/feature/solution be upgraded in place?
It depends on what exactly is changing in your solution. There is an stsadm operation specifically for upgrading solutions, but it has some limitations as far as what it takes care of for you, most notably the removal of old features and the addition of new ones. However, if all your new functionality lives in the web part DLLs, running a solution upgrade will deploy your changes without the need for you to do anything further.
http://msdn.microsoft.com/en-us/library/aa543659.aspx
We have used Visual Studio 2008 extensions for Windows SharePoint Services 3.0, v1.3 - Mar 2009 CTP. It has given us some problems, but when you get used to it and make sure that you do things in the right order it works.
http://www.microsoft.com/downloads/details.aspx?FamilyID=FB9D4B85-DA2A-432E-91FB-D505199C49F6&displaylang=en
This tool automates the retract / delete / deploy / activate job.
Another thing that we try to do is keep as little functionality in the web parts as possible. Move what can be moved to separate DLLs; then it is often possible to upgrade just by copying in a new version of the DLL.
If you are making minor changes to your web parts, then you can just replace the DLLs as long as the assembly version remains the same.
Of course use some discretion here about what is a minor change and won't break anything.
See this topic for how to use FileVersion and AssemblyVersion correctly.
Basically, you keep the AssemblyVersion the same for minor updates while the FileVersion changes with every compile.
This is exactly how Microsoft do it with things like Microsoft.SharePoint.dll - the AssemblyVersion is fixed at 12.0.??? while the FileVersion changes with every hotfix/service pack.
Oh - I just read the "production" part of your question; this shortcut may be more appropriate for dev/test rather than QA/production.
Use this:
stsadm -o upgradesolution -name "WSPName.wsp" -filename "c:/WSPName.wsp" -immediate -allowgacdeployment -allowcaspolicies
and then run the SharePoint admin job:
stsadm -o execadmsvcjobs
On the other hand, you can update the DLLs using SharePoint PowerShell commands:
Set-Location "C:\Users\Documents\WSP"
[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")
$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("C:\Users\Documents\WSP\wspcustom.dll")
My event logs on my production front end servers are getting filled with error messages:
"Failed to determine definition for Feature with ID"
Now, I've found the offending feature on one of the development servers - it is an InfoPath form with some code behind. But, it is nowhere to be found on the production servers.
I've tried running the following command on the production servers:
stsadm -o uninstallfeature -id (your GUID) -force
There was no change - the error is still being generated.
How do I get rid of the error?
I'm not sure, but I think copying that feature definition to the production's 12/TEMPLATES/FEATURES folder and then uninstalling it may help.
But it is not clear from the error message "Failed to determine definition for Feature with ID" what part of your production system is tied to the feature and what action leads to the error. Increasing the verbosity of the SharePoint logs could help you determine more precisely what exactly causes it.
Try this: SharePoint Feature Administration and Clean Up Tool
It can find faulty Feature definitions and cleanly uninstall them.
It can also find Feature remainders in sites, site collections, web apps, and the farm -- e.g., from Features forcefully uninstalled from the farm without being deactivated first, which causes errors.
It can also activate/deactivate Features farm-wide.