The Question:
Why can't the LocalSystem account (NT Authority\System) see files in the Recycle Bins or the Temporary Internet Files directory?
Background:
I created a scheduled task to run using the System account. The purpose of the task is to execute the Disk Cleanup utility with predefined settings (for example: cleanmgr.exe /sagerun:1). When it executes, it seems to run with no errors. But when I check the resources it's supposed to clean (Temporary Internet Files, Recycle Bin, etc.), they're still there.
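For reference, here is roughly how the profile and the task can be set up; /sageset:1 stores the chosen cleanup categories in the registry, and the task name and schedule below are illustrative:

cleanmgr.exe /sageset:1
schtasks.exe /Create /TN "DiskCleanup" /TR "cleanmgr.exe /sagerun:1" /SC DAILY /ST 03:00 /RU SYSTEM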
So I thought maybe cleaning up the two resources manually might work. I developed a console application in C# that clears the Recycle Bin and the Temporary Internet Files. I tested it and it works just fine. But again, when I run it as a scheduled task under the System account, I hit the same issue.
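The Recycle Bin part of that app boils down to a single documented shell32 call; a minimal sketch of the equivalent (done here from PowerShell via P/Invoke, since the exact C# source isn't shown) is:

Add-Type @'
using System;
using System.Runtime.InteropServices;
public static class RecycleBin {
    [DllImport("shell32.dll")]
    public static extern int SHEmptyRecycleBin(IntPtr hWnd, string pszRootPath, uint dwFlags);
}
'@
# Flags: 0x1 = no confirmation, 0x2 = no progress UI, 0x4 = no sound.
# A null root path empties the bins for all drives.
[RecycleBin]::SHEmptyRecycleBin([IntPtr]::Zero, $null, 0x7) | Out-Null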
Following the log, it looks like the application, when running under the System account, sees no files in the Recycle Bin or the Temporary Internet Files directory.
Checking the Security tab for the Temporary Internet Files directory shows that System has full control over it.
I'm puzzled by this issue. I may be missing something, but I assumed the LocalSystem account has the highest privileges on a machine. Is that not the case?
Related
Release Management Server defaults to the current user's local temp directory on the system drive. I scanned through its various configuration files but could not figure out whether you can repoint its working directory to another drive. The builds are eating up space on my C: drive. Is there any way to repoint it to another drive?
Sure, this is actually really easy: Change the %TEMP% environment variable for the deployer user.
That said, the defaults for temp file retention are a bit crazy: 7 days!
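A sketch of that change, run as (or while impersonating) the deployer account - the target path D:\RMTemp is illustrative:

[Environment]::SetEnvironmentVariable('TEMP', 'D:\RMTemp', 'User')
[Environment]::SetEnvironmentVariable('TMP',  'D:\RMTemp', 'User')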
Release Management Server is mostly a database that can be put wherever you like using standard SQL Server techniques for moving databases. Pretty sure it doesn't have a working directory - or are you doing something exotic?
The RM Deployment Agent receives builds on target nodes at C:\Users\$AccountNameUsedToRunDeployerService$\AppData\Local\Temp\RM\T\RM. I haven't seen anywhere to change this, but you can configure cleanup settings from the RM Client (Administration > Settings > Deployer Settings), and choosing something more aggressive may help.
Dear community members,
We have three Windows 7 Professional computers with identical hardware. None of them is connected to a domain, directory service, etc.
We run the same executable image on all three computers. On one of them, I had to rename it, because under my application's original filename it has no write access to its working directory.
I manually granted full-access permissions to the Users group on the working directory, but this did not solve the problem.
I suspect some kind of deny mechanism in Windows based on the executable's name.
I searched the registry for the executable's name but did not find anything relevant or meaningful.
This situation occurred after a lot of crashes and updates of my program on that computer (I am a developer). One day, it suddenly stopped being able to open its files. I did not touch the registry or change anything else in the OS.
My executable's name is karbon_tart.exe
When it starts, it calls CreateFile (open mode if the file exists, create mode if not) to open a karbon_tart.log file and a karbon_tart.ini file.
I tried it both with the files already existing and with them absent, and in neither case can the program open the files.
But if I simply rename the executable to karbon_tart_a.exe, the program can open the files whether they exist or not.
Thank you for your interest
Regards
Ömür Ölmez.
I figured it out in the end.
It was because of an old copy of my application in the Virtual Store.
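For anyone hitting the same thing: UAC file virtualization silently redirects a legacy app's writes to %LOCALAPPDATA%\VirtualStore, and a stale copy there shadows the real files. A quick way to check for redirected copies (the filter matches my executable's name):

Get-ChildItem "$env:LOCALAPPDATA\VirtualStore" -Recurse -ErrorAction SilentlyContinue |
    Where-Object { $_.Name -like 'karbon_tart*' }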
What I'm trying to do:
I want to push files to a .NET-based website. Any time the DLLs change, Windows recycles the web app. When I rsync files over, the app can recycle several times because of the delay, instead of the preferred single recycle. This takes the site out of commission for a longer period of time.
How I tried to solve it:
I attempted to remedy this by using --delay-updates, which is supposed to stage all of the file changes in temporary files before switching them over. This appeared to be exactly what I wanted; however, the --delay-updates argument does not appear to behave as advertised. There is no discernible difference in the output (with -vv), and the end behavior is identical (the app recycles multiple times rather than once).
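The invocation looks along these lines (host and paths are placeholders); --delay-updates is documented to stage each changed file in a .~tmp~/ subdirectory and rename everything into place at the end of the transfer:

rsync -az -vv --delay-updates ./site/ user@webhost:/path/to/wwwroot/site/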
I don't want to run Cygwin on all of the production machines, for stability reasons; otherwise I could rsync to a local staging directory and then perform a local rsync, which would be fast enough to be "atomic".
I'm running Cygwin 1.7.17, with rsync 3.0.9.
I came across atomic-rsync (http://www.opensource.apple.com/source/rsync/rsync-40/rsync/support/atomic-rsync), which accomplishes this by rsyncing to a staging directory, renaming the existing directory, and then renaming the staging directory. Sadly this does not work in a Windows setting, because you cannot rename folders containing DLL files that are in use (permission denied).
You are able to remove folders containing running binaries; however, this results in recycling the app every time, rather than only when there are updates to the DLLs, which is worse.
Does anyone know how to either:
Verify that --delay-updates is actually working
Accomplish my goal of updating all the files atomically (or rather, very very quickly)?
Thanks for the help.
This is pretty ancient, but I eventually discovered that --delay-updates was actually working as intended. The app only appeared to be recycling multiple times due to other factors.
I have a PowerShell command which deletes a folder (i.e. Summer) from the wwwroot directory and recreates it with the necessary files (images, CSS, DLLs, etc.) in it. The problem is that every once in a while IIS locks some of the images or files in the directory, so the PowerShell command fails to delete them. I do recycle/stop the app pool used by the site before running the PowerShell script, but the problem still persists. The issue is random: the script can delete the folder sometimes but not other times. The weird thing is that if I start deleting the contents (subfolders, files) inside Summer, at the end I am able to delete the Summer folder itself, but that is a manual and tedious process.
Is there any command I can put in a PowerShell or batch file to delete the Summer folder even when it is locked by IIS?
I agree with @Lynn Crumbling and recommend iisreset.
Sysinternals has two tools that provide other options:
The ProcExp tool lets you find which processes have open handles to a given file and lets you close those handles. The downside of this tool is that it's not a command-line tool.
The MoveFile tool allows you to schedule a file to be removed after the next reboot.
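For example (path illustrative), passing an empty destination schedules the locked file for deletion at the next reboot:

movefile.exe C:\inetpub\wwwroot\Summer\locked.png ""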
You can use the IIS PowerShell cmdlets to start and stop app pools, websites, etc.:
Import-Module WebAdministration;
Stop-WebAppPool ${appPoolName}
Stop-WebSite ${webSiteName}
You can then start them again afterwards using the opposite commands:
Start-WebAppPool ${appPoolName}
Start-WebSite ${webSiteName}
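Putting it together, a delete-and-redeploy script might look like this (pool, site, and path names are illustrative):

Import-Module WebAdministration
Stop-WebAppPool 'SummerPool'
Stop-WebSite 'SummerSite'
Remove-Item 'C:\inetpub\wwwroot\Summer' -Recurse -Force
# ... recreate the folder and copy the new files here ...
Start-WebAppPool 'SummerPool'
Start-WebSite 'SummerSite'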
As put in a comment, fully stopping IIS using iisreset /stop would work.
Also, you may want to stop only the application from which you are trying to delete files. Look at the Administration Guide.
I want to clear out the working directory in a CruiseControl.NET build after the site has been deployed because space is an issue and there's no requirement to keep it.
The way things are set up at the moment, everything is on one machine (that's unlikely to change); it's acting as the Mercurial repository server, the testing web server, and the CruiseControl.NET build server.
So under C:\Repositories\ and C:\inetpub\wwwroot\ we have a folder per website. Also, in C:\CCNet\Projects we have a folder per website per type of build (Test and Live) - so that means we've got at least four copies of each website on the server, and at around 100 MB per site × 100 sites, that adds up to a lot of disk space.
What I'd like to do is simply delete the working directory on a successful build (it only takes 5-10 seconds to get a completely fresh copy - one small advantage of the build server being the same machine as the hg server) and keep only a handful of relatively active projects current. Of the 100 or so sites, we'll probably work on no more than 10 in a week (team of 5).
I have experimented with a task that runs cmd.exe to del /s /q the working directory folder. Sometimes this completes successfully; other times it fails with the message "The process cannot access the file because it is being used by another process". When it does complete OK, the build kicks off again, presumably because the working directory is not found and needs to be recreated, so I find myself in a never-ending loop.
Are there any ways I can reduce the amount of space required to run these builds or do I need to put together a business case for increasing hosting costs for our servers?
You need to create your own CCNet task and build the logic into it.
Create a new project called ccnet.[pluginname].plugin.
Use the artifact cleanup task source as a base to get going quickly.
Change the base directory from result.ArtifactDirectory to whatever you need it to be
Compile it and copy \bin\Debug\ccnet.[pluginname].plugin.dll to C:\Program Files\CruiseControl.NET\server (or wherever CCNet is installed).
Restart the service and you should be able to use your task in much the same way as the artifact cleanup task; a deployment sketch follows below.
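For the last two steps, something like this works (the plugin file name is illustrative; CCService is the default CruiseControl.NET service name - adjust if yours differs):

Copy-Item '.\bin\Debug\ccnet.mycleanup.plugin.dll' 'C:\Program Files\CruiseControl.NET\server'
Restart-Service CCService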