FileSystemWatcher no longer reports the old filename on rename on some Windows 7 machines

This one is too bizarre for me. In my Framework 4.0 WinForms app, FileSystemWatcher recently started giving me a null for OldName and only the parent folder for OldFullPath, not the full path of the old filename. However, some of the Windows 7 computers do this while others do not. I tried temporarily uninstalling our company anti-virus program, but that didn't make any difference. I rolled back my code, but that made no difference either.
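Here is a minimal sketch of the kind of handler involved (the watched path is a placeholder; substitute a real folder):

using System;
using System.IO;

class WatcherRepro
{
    static void Main()
    {
        // Watch a placeholder folder; substitute a real path.
        FileSystemWatcher watcher = new FileSystemWatcher(@"C:\WatchedFolder");
        watcher.Renamed += (sender, e) =>
        {
            // On the problem machines, e.OldName arrives as null and
            // e.OldFullPath contains only the parent folder.
            Console.WriteLine("Old: " + e.OldFullPath + " -> New: " + e.FullPath);
        };
        watcher.EnableRaisingEvents = true;
        Console.ReadLine(); // keep the process alive while watching
    }
}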
I tried switching my application from Framework 4.0 to 4.5.2 but the problem persisted. In fact, I believe the problem is at a lower level than .NET because I wrote a test C++ program that uses ReadDirectoryChangesW() and a similar problem occurs: the problem computer never receives the FILE_ACTION_RENAMED_OLD_NAME notification, only the FILE_ACTION_RENAMED_NEW_NAME one.
I compared running processes and killed those running on the problem computer but not on the non-problem one. Both computers are up to date with Windows Updates; I am hoping not to have to start uninstalling them.
I have one Windows 8 computer and the problem is not there; however, upgrading from 7 to 8 is not an option for several other deployments.
It just occurred to me to look at kernel32.dll on the respective machines, since that is where ReadDirectoryChangesW() lives. The versions are different:
Worky: v6.1.7601.18798
No worky: v6.1.7601.18869
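(For anyone wanting to compare the same way, a quick sketch for reading the version from managed code; the default system path is assumed:)

using System;
using System.Diagnostics;

class Kernel32Version
{
    static void Main()
    {
        // Read the version resource of kernel32.dll (default install path assumed).
        FileVersionInfo info = FileVersionInfo.GetVersionInfo(@"C:\Windows\System32\kernel32.dll");
        Console.WriteLine(info.FileVersion); // e.g. 6.1.7601.18798
    }
}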
Was there a recent change to the API that I need to accommodate?
Update: I found a non-working machine with v6.1.7601.18409, so that's not the problem.

In a word, Kaspersky.
To elaborate: I thought I had already tested removing KAS, but maybe I didn't reboot afterwards. It's also odd because the same version of KAS is installed on a computer at work that does not present the problem.
Note that this version is a corporate version, which installs:
Kaspersky Endpoint Security 10 for Windows
and
Kaspersky Security Center Network Agent
A central policy is pushed out to each client computer and enforced. It has control over settings, like trusted applications (a whitelist). When IT pushed out a whitelist entry for my specific application, it fixed the problem.
Note that there are several checkboxes to select for each trusted application entry. This fix only needed one of them (marked below).
Under Settings | Anti-Virus protection | Exclusions and trusted applications | Settings, there is a list that can be added to. The checkboxes for each entry are:
[ ] Do not scan opened files
[x] Do not monitor application activity
[ ] Do not inherit restrictions of the parent process (application)
[ ] Do not monitor child application activity
[ ] Allow interaction with application interface
[ ] Do not scan network traffic
Honourable mention must go to my co-worker, Arti Chauhan, who suggested more than once that KAS might be the problem. I thought I had fully tested that, but I guess I hadn't.

Related

System crashes while using ClearCase 8.0.1.x/9.0.1.x (checking out files) on the Windows 10 (1803) platform

After upgrading the system to Windows 10 (1803), we are getting the issues below while working with ClearCase 8.0.1.x/9.0.1.x:
Unable to check in/check out.
Not able to create views.
Not able to add any file to source control.
The system hangs and crashes while performing any ClearCase operation.
There is no error message, but I have attached a screenshot for reference.
Please let us know if there is any known issue with Windows 10 (1803), or any security setting that needs to be enabled?
Or has ClearCase provided any fix?
We have tried 9.0.1.5 and the issue still persists.
This is what we got from the Windows event log:
The computer has rebooted from a bugcheck. The bugcheck was:
0x000000c2 (0x0000000000000004, 0x00000000535be990, 0x000000000004efd3, 0xfffff803e01848b1)
(Bugcheck 0xC2 is BAD_POOL_CALLER, which generally points to a kernel-mode driver corrupting pool memory.)
This is happening for most people who have upgraded to the Windows 1803 version :( For people who are still using version 1709 it is working perfectly fine.
Then I would recommend contacting IBM support: only they can update their ClearCase 9/Windows 10 compatibility matrix and confirm whether MVFS is supported on a more recent (1803) Windows 10 edition.
We are also facing the same problem and I have raised a case with IBM. It is still not resolved. IBM said there are some limitations when running ClearCase with Windows 10 and Windows Server 2016.
We tried all the options except disabling Secure Boot. If possible, disable the Secure Boot option in Windows 10 and try to check in/check out code from ClearCase.
Note: it works for snapshot views, which means the issue is related to MVFS.
I'm seconding @VonC's recommendation to open a ticket with IBM. When you do that, save a step and collect a clearbug2 and a kernel memory dump to send in as soon as the case is opened. It will save the turnaround time of us asking you for them. If the installed-programs list doesn't include installed security software (DLP, privilege-management software like Avecto, or other endpoint security tools), please list those separately as well.
I would also love to know who at IBM told you there are "limitations" with Win10-1803.
There are a few issues with Windows 10 "version upgrades" breaking things, but they generally don't cause system crashes. Windows 10 upgrades are actually full OS installs that then (imperfectly) migrate application settings. Anything that uses custom network providers (ClearCase is one example) will find that the network providers will be broken or partially broken. Reinstalling is usually required. Again, that has not yet been reported as a cause of a BSOD.
If the upgrade/reinstall didn't fix view creation, please post a separate question on the view creation issue. There may be things we can do to the SMB 2 caches to allow view creation to work in cases where the view storage is not on the client host.
I noticed that the screen shot you posted is a Terminal Services disconnect screenshot. Does the issue only occur over a Terminal Services client connection or does it also happen on a local connection?

Azure Classic VM. How to fix Error: "We couldn't complete the updates. Undoing changes. Don't turn off your computer."

An automated Windows update this morning left my Windows Server 2012 R2 Classic Virtual Machine on Azure in a semi-crashed state. The VM is a web server, and all the files and applications in it are still accessible via the browser. In other words, IIS and a number of other services are still running. Unfortunately, however, the VM is not accessible via Remote Desktop and is unresponsive to commands from the Azure management interface on the portal.azure.com website.
This type of error is quite common and is reported on many other websites. The error has been happening to Windows users (not just Windows Server) for many years already, and none of the solutions online will work for Azure users, because they involve restarting from a CD, pressing Shift+F8 during boot, issuing DOS commands, restoring from backup, or unchecking certain properties in VMware or other software.
Does anybody have a real solution for this problem on Microsoft Azure?
After struggling with this for weeks, I think I was able to fix it with the help of Microsoft support! I decided to post the solution here in case it can help someone in the future. Here are the three things you need to do to fix this:
1. Restore the VM from a backup taken prior to the crash. The VM with the "Undoing changes" crash is pretty much toast at this point. Then proceed to steps 2 and 3 to ensure that the next batch of Windows updates won't crash it again!
2. On your new VM, ensure that the TEMP and TMP environment variables both point to C:\Windows\TEMP. In my case, they were both pointing to a temporary folder in the logged-in user's profile.
3. Ensure that C:\Windows\TEMP stays empty. I achieved this with a scheduled task that runs a simple BAT file once a day to delete all files and folders inside C:\Windows\TEMP. A Microsoft representative told me that even though you may have plenty of space on your C: drive, the Windows TEMP folder is really not supposed to get much bigger than 500 MB; when it gets very large you may have issues with Windows updates (mine was just under 500 MB when the updates were failing).
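I used a BAT file for this, but any equivalent cleanup will do; here is a rough C# sketch of the same idea (it assumes the scheduled task's account has rights to C:\Windows\TEMP, and it simply skips anything that is in use):

using System;
using System.IO;

class TempCleaner
{
    static void Main()
    {
        DirectoryInfo temp = new DirectoryInfo(@"C:\Windows\TEMP");
        foreach (FileInfo file in temp.GetFiles())
        {
            // Files held open by running processes will throw; skip them.
            try { file.Delete(); } catch (IOException) { } catch (UnauthorizedAccessException) { }
        }
        foreach (DirectoryInfo dir in temp.GetDirectories())
        {
            try { dir.Delete(true); } catch (IOException) { } catch (UnauthorizedAccessException) { }
        }
    }
}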
I would recommend contacting Azure support as something may have to be done by an engineer to fix the issue and unfortunately classic VMs don't have the redeploy feature.
I added only inbound port 3389 (RDP), and it works well now.

Windows Server 2008 won't let me create a log source, telling me it already exists (it does not)

I have a small winforms app that creates a new event log source.
I run it as administrator for the elevated privileges.
The code checks to ensure the specified event log does not exist and then creates the source. This worked fine on my Windows 7 machine, but when I run the app on Windows Server 2008 R2 SP1 it tells me the source already exists. I know it doesn't, because a) this is a fresh install of Windows Server 2008 R2, and b) I added code to return a list of all the log sources and my new one was not in the list.
I know about the "first 8 characters" being the significant ones, and I made sure my source name was completely unique.
Here's the super-easy code (of course I have try/catches around this):
// Only create the source if it doesn't already exist;
// CreateEventSource throws if the source is already registered.
if (!EventLog.SourceExists(sourceName))
{
    EventLog.CreateEventSource(sourceName, logName);
}
Can anyone tell me why Windows Server 2008 is lying to me?
Local (or domain) administrators are not the most powerful accounts on a Windows box.
There are other accounts that have higher (though also more limited) access.
SourceExists() will return false if the source exists but you don't have the access rights to know about it, and it's perfectly possible for an administrator to be denied access to something.
Also, there are reserved names for things in odd places that can trip you up. Creating folders with the names CON, COM, or LPT used to cause odd issues on Server 2003.
So there is also a whole bunch of reasons why CreateEventSource() can fail. Dig into the inner exception(s) as well; often those provide critical detail.
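For example, a quick sketch of walking the chain (using sourceName and logName from the question's snippet):

using System;
using System.Diagnostics;

class SourceDiagnostics
{
    static void TryCreateSource(string sourceName, string logName)
    {
        try
        {
            EventLog.CreateEventSource(sourceName, logName);
        }
        catch (Exception ex)
        {
            // The root cause is often buried a level or two down.
            for (Exception e = ex; e != null; e = e.InnerException)
            {
                Console.WriteLine(e.GetType().Name + ": " + e.Message);
            }
        }
    }
}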
Which event log source name was failing for you?
Would you post the exception stack?

Are there recent Microsoft changes affecting the behavior of the AppInit_DLLs registry entry?

I support a product that detects when unique key combinations are pressed and launches a notification alert.
This monitoring is done by an injected DLL. Originally the DLL was injected specifically into winlogon.exe, but due to some changes in Vista we added a reference to our DLL in AppInit_DLLs to have it injected into every running process.
This is not working on my newest development machine, and some behavior on client machines mimics it. Another DLL listed there, C:\Windows\system32\nvinitx.dll, is still being loaded correctly, but mine is not.
Are there any known recent security patches that may affect this?
There are no new security changes as far as I know. You can inject any DLL, but it must be compatible with the process you are injecting into: if the process is 32-bit your DLL must be 32-bit, and if the process is 64-bit you need to inject a 64-bit DLL, or odd behavior will appear. Another thing: there is a new registry value (a DWORD) that must be set on Windows 7 (not sure about Vista):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\LoadAppInit_DLLs must be set to 1.

Test deployment for Sharepoint by multiple developers on a single server

We are starting with SharePoint development with a team of three and are currently setting up our development environments. We would like to avoid installing Server 2008 for each developer, so a single terminal server has been set up, and each developer starts a VS2008 instance on it through a remote session. Now we would like to separate the developers' testing environments (i.e. a different site collection per developer), but we have realized that the assemblies need to be installed into the GAC to show up properly on the site. And since there is, AFAIK, only one GAC, developers wouldn't be able to test their stuff independently.
Is there any way we could create separate testing environments without installing a bunch of 2008 Servers?
So you're all going to remote in and fire up Visual Studio, compile stuff, restart IIS, etc.?
You're going to be stamping on each other's toes.
A wiser choice nowadays is to use Hyper-V (or some other virtualisation).
We use Windows Server 2008 on our laptops and use Hyper-V to run our dev environments. We then have a dev environment (sandbox) each, and these have VS2008, SVN, NUnit, etc.
Our changes are tested against each other thanks to CruiseControl on the single shared Hyper-V VM.
This has been great for us... we distribute the load, we can work on the move, we don't step on each other's toes, and if we need to do a demo we can switch VMs and demo from the demo VM (branched from the dev one early on so that the environments are known).
Go virtual and don't look back.
PS: I've just seen your comment about one server... just put Hyper-V on that and run 3 instances. That's also what we do ;)
I don't know about installing the server on everything, but this sounds like an ideal task for virtual machines rather than physical ones. Where I work we use VMware a great deal for this kind of work, and it does very well.
It's also useful to be able to roll back to a snapshot when it comes to testing installation processes and so on.
No. In addition to the GAC there are all the SharePoint files in the 12 hive, such as features and site templates. It's not worth what you save on server costs.
(Of course if you don't use the GAC, but deploy to the bin folder, and you don't touch anything in the 12 hive, you can give each developer their own web application on the same server. But this approach puts a lot of restrictions on what they can do. It's still not worth it.)
Virtual machines will work, but they can be slow to develop on. For instance, you'll need to restart the application pool for every GAC deploy, which means a pause of maybe 15-60 seconds to reload the application (depending on the hardware). This will become annoying.
Virtual machines work better for test and production, where you don't restart the application so often.
I recommend a physical server for each developer. This will minimize the code-deploy-test cycle time and make sure they don't have to worry about stepping on each other's toes.
You are on the wrong track with Terminal Services: it's just not going to give you any separation.
A lot of people recommend developing directly on Windows 2003/2008 Server, and it does simplify some things like remote debugging.
I prefer the more traditional method of using VMware to run virtual machines. These can run on a local or remote host. Remote debugging is a little more complex to set up, but still possible.
Finally, if possible, deploy to the bin directory rather than the GAC. This will make it much easier to deploy automatically after compilation.
The contributors are right that there are lots of stumbling blocks to multi-developer single server environments.
Number one: developers will be trying to attach to the same web application worker process (w3wp.exe), so creating separate web applications on different ports is a must unless you are prepared to share debugging time. See: How to setup a development environment for sharepoint 2013
The second problem comes when you try to collaborate and use shared components/features. Whether working separately is even desirable is debatable; I believe team developers should be collaborating and sharing, so combining work is desirable to ensure seamless integration into a single final solution and that no work is duplicated. The multi-developer single-server environment works perfectly until you try to collaborate: 'One common mistake is to have one "development server" used by all team developers. Unless team members are working on totally unrelated components and never need to do common things such as restart IIS or attach a debugger to an IIS process, this type of environment generally doesn't work well.' http://technet.microsoft.com/en-us/magazine/dn145990.aspx We made this mistake through lack of experience and knowledge, but once you make it, it's possible to work around it.
My first attempt to share features was to copy developer 1's project into developer 2's solution, add a reference to it in developer 2's project, and add all the features to developer 2's package. Deploying this works fine for developer 2, until (as I discovered) developer 1 detaches their solution from the debugger: that retracts the solution, identified by the duplicated solution ID, from the farm and therefore from each developer's web application. Developer 2 then has the rug pulled out from underneath them. Although this partly worked for a while, it took me a while to figure out what was happening and which combinations of dev 1 and dev 2 deployments broke each other's work.
So I found a better solution. In the project properties in Visual Studio, under the SharePoint tab, there is an option called 'Auto-retract after debugging'. By default this retracts the solution when the developer stops the attached debugger, which pulls the features out from underneath the other developers. Unticking this box prevents the retract and leaves each developer's individual solution deployed at farm level; on reattaching the debugger, the solution is simply replaced with minimal fuss.
In my experience, recycling the IIS application pool is so fast that other developers don't even notice, but with a team larger than two this might become more noticeable, so perhaps someone else can add their experiences. I also guess it will be fine unless another developer tries to attach at exactly the moment the recycle is happening, so there is a really small chance of a clash, and simply detaching and reattaching will fix it if it is ever experienced.
