ruTorrent creates files that shouldn't download

Even files whose priority is set to "Don't download" seem to be created and fully allocated when using ruTorrent 3.8 on a Debian box.
I believe this happens because a packet is 'accidentally' received and the file is then created so that the packet isn't wasted.
How do I correct this behavior? Is there any way of setting up a script/plugin to automatically delete these files as soon as they're created?
Thanks in advance

You have to edit the ~/.rtorrent.rc file and change or insert the line:
system.file_allocate.set = no
This will prevent rtorrent from fully allocating files during download.


autogenerating ".uuid" files in linux directory

OS - Debian Stable
I downloaded fonts from a website (one that seemed legitimate to me) and copied the contents into the /usr/share/fonts/ directory. A .uuid file is being generated in every directory, with a string like this as its sole content:
f25e9432-c6f1-4bbe-a33c-89289a8d17f1
This file regenerates right after I delete it. Is this a malicious program? Is this indexing by the OS itself or is it something like fc-cache running in the background? What could be the cause of this?
This has nothing to do with the fact that you downloaded your own fonts. It is simply fontconfig doing its job. It could well just be cache-related data being created; the UUID string is the unique ID fontconfig assigns to that directory.
So I would say no, I do not believe this is anything malicious, nor are these occurrences a result of downloading fonts with your web browser.
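If you want to confirm that fontconfig is the culprit, one quick check (the directory name below is just an example, and this applies to fontconfig versions that keep per-directory .uuid files) is to delete one of the files and rebuild the cache for that directory; it should come straight back:
sudo rm /usr/share/fonts/myfonts/.uuid
sudo fc-cache -f /usr/share/fonts/myfonts
cat /usr/share/fonts/myfonts/.uuid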

minidlna doesn't like hardlinks

I have video files in:
/home/private/movies/video1.mkv
/home/private/movies/video2.mkv
/home/private/movies/video3.mkv
I have hardlinks to those mkv files in:
/home/minidlna/videos/video1.mkv
/home/minidlna/videos/video2.mkv
/home/minidlna/videos/video3.mkv
My minidlna share is:
/home/minidlna
The video files show up on the minidlna client (my TV) after I do a full rescan of the minidlna share; however, they don't show up when I create new hardlinks, even with the inotify interval set really low.
The files do show up if they are not hardlinks.
My guess is that there is a problem with the way minidlna processes filesystem changes via inotify. Perhaps creating a hardlink isn't necessarily a 'change' that notifies minidlna.
My video library is rather large, and continually doing full rescans is very inefficient and takes a long time. I would appreciate it if someone could shed some light on this or suggest a workaround.
I'm running minidlna version 1.1.4
It appears it is indeed a problem with minidlna.
Depending on your use case, maybe you can create the new video file in the minidlna directory and make the one in your private movies directory the hardlink. The resulting filesystem will be the same, but now the first operation minidlna sees should be a full-fledged create, and therefore work.
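A rough sketch of that workaround, using the paths from the question (the incoming file name and its source location are made up for illustration): write the new file into the minidlna share first, so inotify sees a real create/close-write, then hardlink it back into the private library instead of the other way around:
cp /downloads/video4.mkv /home/minidlna/videos/video4.mkv
ln /home/minidlna/videos/video4.mkv /home/private/movies/video4.mkv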
Looks like there's no workaround for my exact problem, and unfortunately my setup doesn't allow reversing the minidlna share <> hardlink directories.
The only solution I found was to rebuild the minidlna RPM with IN_CREATE added in inotify.c (more details here - http://sourceforge.net/p/minidlna/bugs/227/).
Hopefully ReadyNAS makes that the default in future builds.

Is it possible to force delete VSAM file used by another Job/User?

We have a job which takes a backup of a VSAM file, followed by a standard Delete-Define-Repro of the same VSAM file. To handle the scenario of trying to delete a non-existent file, we follow the standard practice of setting MAXCC/LASTCC to 0 if DELETE returns a non-zero return code, and then continue as if there were no errors.
But sometimes we face a situation where the DELETE does not work because the file is open by some user or being read in some other job. In that case the job fails while defining the new VSAM file, because the file is still present (DELETE could not purge it).
Are there any workarounds for this situation? Or can we force-delete a file even if it is held by some other process/user?
Thanks for reading!
You should be able to work out that it would not be a good idea to delete a VSAM file (or any other) whilst it is being used by "something else".
Why don't you test for the specific value from the DELETE?
If you are doing a backup, then delete/define, it would be a really, really good idea to get exclusive control of the file, else something is going to get messed-up.
You could put a DD with DSN being the VSAM file in question with DISP=OLD, so that your job would only be selected when nothing is using the file.
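Both of those suggestions can be combined in a single IDCAMS step. A sketch only - the step, DD and dataset names are illustrative and the DEFINE parameters are elided. The IF/SET pair resets the condition code only when DELETE gets the "not found" return code (8), so a DELETE that fails because the file is in use still stops the job, and the DISP=OLD DD makes the job wait until nothing else holds the file:
//DELDEF   EXEC PGM=IDCAMS
//HOLDIT   DD  DSN=YOUR.VSAM.CLUSTER,DISP=OLD
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DELETE YOUR.VSAM.CLUSTER CLUSTER
  IF LASTCC = 8 THEN SET MAXCC = 0
  DEFINE CLUSTER (NAME(YOUR.VSAM.CLUSTER) ...)
/*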
How are you doing the backup? Why are other jobs accessing the file at the same time anyway? Is this in a "test" environment? What type of VSAM file is it? Why are you doing the REPRO, and do you feel that that is the best way to do it?
An actual answer is difficult without knowing all this, and more.

How to handle "System.IO.IOException: There is not enough space on the disk." in Windows Azure

I have a problem in Windows Azure. I'm storing temporary files in local storage. After a certain time I get a System.IO.IOException: There is not enough space on the disk.
I have read some articles about it, and Microsoft itself recommends catching the error and clearing out the files. So my question is: what is the best way to accomplish this?
At the moment I would try the following, but I don't know if this is the best approach:
public static void ClearTempFolder(string localStorageName)
{
    DirectoryInfo downloadedMessageInfo = new DirectoryInfo(
        RoleEnvironment.GetLocalResource(localStorageName).RootPath);

    foreach (FileInfo file in downloadedMessageInfo.GetFiles())
        file.Delete();

    foreach (DirectoryInfo dir in downloadedMessageInfo.GetDirectories())
        dir.Delete(true);
}
Thanks for your help.
If you're happy for all the files to go, then yes, that should work fine. You may want to trap the exceptions that will be thrown if a file is still open.
However, it may be better to examine your code to see whether you can remove the temporary file immediately when you've finished with it.
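For example, here is a sketch of the same method with each delete wrapped individually, so a file that is still open does not abort the whole cleanup (the names mirror the code in the question):
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public static void ClearTempFolder(string localStorageName)
{
    var root = new DirectoryInfo(RoleEnvironment.GetLocalResource(localStorageName).RootPath);

    foreach (FileInfo file in root.GetFiles())
    {
        try { file.Delete(); }
        catch (IOException) { /* still open or locked - skip it and retry on the next cleanup */ }
    }

    foreach (DirectoryInfo dir in root.GetDirectories())
    {
        try { dir.Delete(true); }
        catch (IOException) { /* directory in use - skip it */ }
    }
}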
Check out http://msdn.microsoft.com/en-us/library/windowsazure/hh134851.aspx
The default TEMP/TMP directory limit is... 100MB! Even if you have 200GB+ local storage.
Your solution should be twofold:
1) Clean up temporary files when you're done with them (if you write a file to the temp folder, delete it when you're finished with it)
2) Increase the local storage size (as above) so you can store files larger than 100MB on temporary disk storage
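For point 2, the size of a local storage resource is declared in ServiceDefinition.csdef. A sketch, with an illustrative resource name and size, which your code would then look up via RoleEnvironment.GetLocalResource("TempStorage"):
<LocalResources>
  <LocalStorage name="TempStorage" sizeInMB="4096" cleanOnRoleRecycle="true" />
</LocalResources>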

CruiseControl.Net Deleted Files

I'm using CC.Net against a SourceSafe database, and I have a problem: someone deleted some files from the database, but the deleted files weren't removed. I didn't see a config switch or anything I could set to make it clear the code directory prior to building.
Am I missing something?
As Alex says, there is a CleanCopy flag in the source control block. However, my situation was a little different: I use Subversion, and I found the CleanCopy flag was NOT doing what it said it would on the box.
To solve the problem I added a task which runs a batch file that clears out the build's working copy prior to checkout. It is a bit slower (about 1 minute for a 400 MB code base) but guarantees no old code.
Kindness,
Dan
All you need to do is set CleanCopy to true in your source control block. The documentation is very clear on this. The above answer is the wrong way.
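For reference, a sketch of what that looks like in ccnet.config for a VSS source control block (the project path is illustrative):
<sourcecontrol type="vss">
  <project>$/MyProject</project>
  <cleanCopy>true</cleanCopy>
</sourcecontrol>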
