What I'm trying to do:
I want to deploy files to a .NET-based website. Any time the DLLs change, Windows recycles the web app. When I rsync files over, the app can recycle several times because of the transfer delay, instead of the preferred single recycle. This takes the site out of commission for a longer period of time.
How I tried to solve it:
I attempted to remedy this by using the --delay-updates option, which is supposed to stage all of the changed files in temporary files and only swap them into place at the end of the transfer. This appeared to be exactly what I wanted; however, passing the --delay-updates argument does not appear to behave as advertised. There is no discernible difference in the output (with -vv), and the end behavior is identical (the app recycles multiple times rather than once).
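For reference, the invocation I'm using looks roughly like this (the paths and module name are placeholders, not my real setup):

# --delay-updates holds each updated file in a temporary holding directory
# (.~tmp~ by default) and renames everything into place at the end of the run
$ rsync -avv --delay-updates /cygdrive/c/build/site/ user@webserver::site/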
I don't want to run Cygwin on all of the production machines, for stability reasons; otherwise I could rsync to a local staging directory and then perform a local rsync, which would be fast enough to be "atomic".
I'm running Cygwin 1.7.17, with rsync 3.0.9.
I came across atomic-rsync (http://www.opensource.apple.com/source/rsync/rsync-40/rsync/support/atomic-rsync), which accomplishes this by rsyncing to a staging directory, renaming the existing directory, and then renaming the staging directory into its place. Sadly this does not work in a Windows setting, because you cannot rename folders containing DLLs that are in use (permission denied).
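The approach it takes boils down to something like the following (a simplified sketch with placeholder paths, not the actual script):

# 1. sync into a staging copy that sits next to the live directory
$ rsync -a --delete /source/site/ /deploy/site-staging/
# 2. swap the two directories with a pair of quick renames
$ mv /deploy/site /deploy/site-old && mv /deploy/site-staging /deploy/site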
You can remove folders containing running binaries, but that results in recycling the app on every deployment, rather than only when the DLLs actually change, which is worse.
Does anyone know how to either
Verify that --delay-updates is actually working
Accomplish my goal of updating all the files atomically (or rather, very very quickly)?
Thanks for the help.
This is pretty ancient, but I eventually discovered that --delay-updates was actually working as intended. The app only appeared to be recycling multiple times due to other factors.
The Question:
Why can't the LocalSystem account (NT Authority\System) see files in the Recycle Bins or the Temporary Internet Files directory?
Background:
I created a scheduled task that runs under the System account. The purpose of the task is to execute the Disk Cleanup utility with predefined settings (for example: cleanmgr.exe /sagerun:1). When it executes, it seems to run with no errors. But when I check the resources it's supposed to clean (Temporary Internet Files, Recycle Bin, etc.), they're still there.
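For context, a task like this can be registered along the following lines (the task name, schedule, and preset number here are only illustrative, not my exact setup):

rem record the cleanup settings once under preset 1, then schedule the run as the System account
cleanmgr.exe /sageset:1
schtasks /create /tn "DiskCleanupTask" /tr "cleanmgr.exe /sagerun:1" /sc daily /st 02:00 /ru SYSTEM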
So I thought maybe cleaning up the two resources manually might work. I developed a console application in C# that clears the Recycle Bin and the Temporary Internet Files. I tested it and it works just fine. But again, when I run it as a scheduled task under the System account, I run into the same issue.
Going by the log, it looks like when the application runs under the System account, it sees no files in the Recycle Bin or the Temporary Internet Files directory.
Checking the Security tab for the Temporary Internet Files directory shows that System has full control over that directory.
I'm so puzzled by this issue. I may be missing something but I assumed the LocalSystem account has the highest privilege on a machine. Is that not the case?
I've installed trickle but can't seem to get it to throttle overGrive.
$ trickle -d 500 -u 100 overgrive
trickle: Could not reach trickled, working independently: No such file or directory
trickle: exec(): No such file or directory
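For what it's worth, trickle can also be run in standalone mode with -s, which does not require the trickled daemon; the second error suggests the overgrive executable itself was not found, so giving its full path may help (the path below is only a guess):

$ trickle -s -d 500 -u 100 /usr/bin/overgrive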
Is there another way to get overGrive to stop sucking up all my bandwidth while syncing?
I managed to solve the issue I had, and overGrive has worked fine for the last couple of weeks. It turned out that it had been synchronizing some public files created by other users, which had nothing to do with my Google Drive account. What these files had in common is that they belonged to some courses and had names like "MATH&141, Mod 01: Quiz 2 Toolkit". For some reason these files didn't have a .doc extension and had the symbols & and : in their names, which seems to have caused overGrive to get stuck on them forever.
Anyway, I performed the following steps and it fixed the issue:
Download and install the latest version of overGrive.
Clear all trash files from Google Drive online.
Delete all files from your local Google Drive folder if present, remove the following overGrive state files, and restart overGrive:
.overgrive.lastsync
.overgrive.cache
Turn off automatic sync, and start synchronization manually.
Wait until the full synchronization is finished.
You can check the log file called .overgrive.log in your home folder to see whether there were any errors during the synchronization. It can happen that overGrive blocks on some specific file and tries to synchronize it over and over again, causing heavy download/upload usage.
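A quick sketch of that cleanup and log check from the shell, assuming the state files and log live in your home folder:

# remove overGrive's local sync state before restarting it
$ rm -f ~/.overgrive.lastsync ~/.overgrive.cache
# look for errors, or for a file it keeps retrying
$ grep -i error ~/.overgrive.log
$ tail -f ~/.overgrive.log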
I want to be able to work across multiple workstations, jumping from one to the other, without having to worry about committing.
I have a personal Windows desktop, a work Windows desktop, and a Mac OS X laptop. At the moment, I point my project to a cloud directory and have the local install of Android Studio pointing to a Gradle offline cache in another cloud directory. This keeps failing, telling me that the path to Gradle is invalid, which I understand, because Gradle lives in different locations on the different machines (given the differing file systems of Mac OS X and Windows 7).
Edit: When I try to open the project, it brings up the "Import Project from Gradle" screen, where I have the option to select "Use local gradle distribution" and choose the Gradle home directory. I pointed it to the cache directory, and it tells me:
Cannot Save Settings
Gradle location is incorrect.
Location:C:/Users/Username/.gradle
All my research (including these answers here and here) suggests that VCS is the way to go. However, I don't see this as a solution to my problem. I'm not looking for version control, I'm looking to transition seamlessly across workstations. Of course I will still use a version control system to save working versions of my code and to share it with other developers, but there has to be a better way when I simply want to keep all workstations in sync.
I come from web development, and I synchronise my local AMPPS environment across multiple computers without any issue, which means I can move between my personal desktop, laptop, and work desktop instantly. It frustrates me to have to remember to commit every time I move around. If I have to do this 20 times a day, and it takes about a minute each time, that's 20 minutes that could have been spent writing a couple of functions. And what if I forget to commit before heading to work, or home? That would be a wasted day, because I wouldn't actually have the current, up-to-date code...
So the question remains: is there a way to instantly synchronise Android Studio projects? How do I keep my entire code base (including the Gradle setup) in sync?
OK, thanks to the comments above, which pointed me in the right direction.
Android Studio creates some local files that are specific to the machine you are on. Following this principle, to sync the "source" files (the files that are specific to your application only), you must ignore all of these local files, similar to what you would (and wouldn't) store on GitHub. I followed the answer to this question to apply the ignore rules.
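For reference, a typical set of ignore patterns for the machine-specific files looks something like the following; the exact list depends on your project, and the same patterns can be fed to whatever ignore mechanism your sync tool offers:

$ cat >> .gitignore <<'EOF'
.gradle/
.idea/
build/
local.properties
*.iml
EOF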
Having ignored all the "local files", when I create a new project, the source files are synchronised across all my workstations. In order to establish a local version, I need to "import" the project first. Once it has been imported, "local files" will be created for that particular machine. From then on, I can "open" the project locally.
To summarise:
Set your sync to ignore files as per .gitignore or refer to this question.
Create a project on one of your workstations and save it in the cloud.
When you are ready to work on the project for the first time on another workstation, "import" the project.
Once the project has been imported, all local files should have been created.
From then on, use the "open" option to continue working on the project.
I hope this helps somebody else, saving hours on googling.
How can I instruct an rsync server to keep a copy of the old versions of files that were updated?
Background info:
I have a simple rsync server running on Linux, which I use to back up a large file system (many TB). Let's call it the backup server.
On the source server, we run daily:
$ rsync -avzc /local/folder user@backup_server::remote_folder
In theory, no files should be changed on the source server; we should only receive new files. Nonetheless, it is possible that some updates are legitimate (very, very seldom). If rsync detects a change, it overwrites the old version of the file on the backup server with the new one. Now, here is the problem: if the change was a mistake, I lose the data and have no way to recover it.
Ideally, I'd like the rsync server to keep a backup of the replaced files. Is there a way to configure that?
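One approach, as a hedged sketch: rsync's --backup/--backup-dir options move the replaced version of a file aside instead of discarding it. The directory name and date format here are just an example; a relative --backup-dir is resolved against the destination:

# keep the overwritten versions in a dated directory on the backup server side
$ rsync -avzc --backup --backup-dir=old-versions-$(date +%F) /local/folder user@backup_server::remote_folder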
My backups are local to the same machine (but on a different drive, mounted at /backup/).
I use --backup-dir=/backup/backups-`date +%F`/ but then it starts nesting the backups, rather than leaving a flat set of backups-yyyy-mm-dd/ folders in /backup/.
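One guess at the cause, offered as an assumption rather than something verified: if the dated backup directories end up inside the tree being synced, later runs back them up again, producing the nesting. Keeping the live copy in its own subfolder, so the dated directories sit beside it rather than inside it, avoids that:

# live copy lives in /backup/current/, dated backup dirs are created next to it
$ rsync -av --backup --backup-dir=/backup/backups-$(date +%F)/ /source/folder/ /backup/current/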
If someone has a similar issue, there is an easy solution:
Execute a simple cron job that changes the access rights on the destination computer.
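As a rough sketch of that idea (the schedule, path, and exact permissions are placeholders; this is one interpretation, assuming the goal is to make already-received files read-only):

# crontab entry on the backup server: strip write permission from received files nightly at 01:00
0 1 * * * chmod -R a-w /backup/remote_folder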
I want to clear out the working directory in a CruiseControl.NET build after the site has been deployed because space is an issue and there's no requirement to keep it.
The way things are set up at the moment, everything is on one machine (and that's unlikely to change); it acts as the Mercurial repository server, the testing web server, and the CruiseControl.NET build server.
So in C:\Repositories\ and C:\inetpub\wwwroot\ we have a folder per website. Also, in C:\CCNet\Projects we have a folder per website per type of build (Test and Live), which means we've got at least 4 copies of each website on the server, and at around 100 MB per site across 100 sites that adds up to a lot of disk space.
What I'd like to do is simply delete the working directory on a successful build; it only takes 5-10 seconds to get a completely fresh copy (one small advantage of the build server being the same machine as the hg server), and I'd only keep a handful of relatively active projects current. Of the 100 or so sites, we'll probably work on no more than 10 in a week (team of 5).
I have experimented with a task that runs cmd.exe to del /s /q the working directory folder. Sometimes this completes successfully; other times it fails with the message "The process cannot access the file because it is being used by another process". When it does complete OK, the build kicks off again, presumably because the working directory is not found and needs to be recreated, so I end up in a never-ending loop.
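The task boils down to something like this (the project path is a placeholder for the per-project working directory):

rem delete the contents of the project's working directory after a successful build
cmd /c del /s /q "C:\CCNet\Projects\MySite\Test\WorkingDirectory"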
Are there any ways I can reduce the amount of space required to run these builds or do I need to put together a business case for increasing hosting costs for our servers?
You need to create your own ccnet task and build the logic into it.
Create a new project called ccnet.[pluginname].plugin.
Use the artifact cleanup task source as a base to get going quickly.
Change the base directory from result.ArtifactDirectory to whatever you need it to be.
Compile it and copy \bin\Debug\ccnet.[pluginname].plugin.dll to C:\Program Files\CruiseControl.NET\server (or wherever CCNet is installed).
Restart the service, and you should be able to use your task in much the same way as the artifact cleanup task; see the sketch below for the deploy step.
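A minimal sketch of that deploy step, assuming a default installation path and that the Windows service is registered under the usual CCService name:

rem copy the compiled plugin next to the server binaries and restart CruiseControl.NET
rem (service name assumed to be the default CCService; adjust if yours differs)
copy bin\Debug\ccnet.[pluginname].plugin.dll "C:\Program Files\CruiseControl.NET\server\"
net stop CCService
net start CCService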