Batch file deleting extra files

Our programming archives contain tons of PLC programs (thousands of files).
I was recreating our backup structure and wanted to filter out some of the junk. I made a batch file to delete all files containing BAK with the extension .acd, plus all .SEM and .WRK files, as these three kinds of files are extras created when a program is opened and are not needed. Some have been copied to the archives and duplicated many times.
I tested it on a copy of the folders, and I want to run it routinely before the structure gets duplicated to other backup systems, to prevent the backups from becoming cluttered again.
Here's the script I used:
del /q /s "Y:\Bays\*BAK*.acd"
del /q /s "Y:\Bays\*.Sem*"
del /q /s "Y:\Bays\*.Wrk*"
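When testing on the copy, the matches can be previewed before anything is deleted; dir accepts the same wildcards as del:
dir /b /s "Y:\Bays\*BAK*.acd"
dir /b /s "Y:\Bays\*.Sem*"
dir /b /s "Y:\Bays\*.Wrk*"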
It deleted thousands of files, but as I watched I noticed three that did not make sense to me.
(Screenshots omitted: the deletion output with the middle two files, and the "With Email" file.)
These three were deleted, yet none of them contains BAK in its name. I don't want to run this routinely if it risks removing copies of programs other than the automatically generated ones. I'm hoping someone can explain why these three, out of thousands of deleted files, were the only ones that did not follow the rule.

As far as I know, batch cannot delete files from directories whose names contain spaces; that's a long-standing bug that cannot be fixed, not intended behavior. So I think it's because of the unknown file extension; the same thing happened to me. To make the extension known to your device, type regedit in the search bar, go to HKEY_CLASSES_ROOT, add a new key named after the extension, close regedit, and try again.

A volume has a property that controls whether short (8.3) file names are generated.
This property affects how commands such as del and for match files.
On my volume D:, 8dot3 name creation is disabled. For a file named 1.abcd in a folder, the command for %i in (*.abc) do echo %i does not find any files.
On my volume C:, 8dot3 name creation is enabled. For a file with the same name 1.abcd, the command for %i in (*.abc) do echo %i does find the file, because wildcards are matched against the generated short name (something like 1~1.ABC) as well as the long name.
If you use long file names, you may need to disable short-name generation; you can do that with fsutil.
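For example, to check and change the setting for one volume with fsutil (run from an elevated prompt; C: is illustrative):
rem query whether 8.3 name creation is enabled on the volume
fsutil 8dot3name query C:
rem disable 8.3 name creation on the volume (1 = disabled)
fsutil 8dot3name set C: 1
Note that disabling the setting only affects newly created files; short names already on disk remain.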

Related

Robocopy "Documents" folder issue?

I have a custom script I use for backing up my hard drive to a temporary external drive. It's simply a number of robocopy lines (without /PURGE). I'm having trouble with the Windows Documents folder. If I have a command like "robocopy C:\users\me\documents D:\backups\somerandomdirectoryname ..", then every time it finishes, Windows decides that directory is a Documents directory and even renames "somerandomdirectoryname" to "Documents". It changes the icon, and then I cannot actually eject the USB drive because Windows will not let it go. What is causing Windows to do this to me? Is there something I have to exclude to make it "just a normal directory" on my external device?
Found a cure for this: use the option
/XA:SH
which stops copying system and hidden files; those appear to be how the special attributes of the Documents directory get copied. Worked for me, as I only wanted the data files.
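So a backup line like the one in the question becomes something like this (the /E for copying subdirectories stands in for whatever options the original ".." elided):
robocopy C:\users\me\documents D:\backups\somerandomdirectoryname /E /XA:SH
The folder customization (the icon and the localized "Documents" name) travels in a hidden and system desktop.ini file, which /XA:SH skips.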

TFS creates a $tf folder with gigabytes of .gz files. Can I safely delete it?

I am using Visual Studio 2012 with Microsoft TFS 2012.
In the workspace that is created on my C: drive, a hidden folder $tf is created; I suspect TFS of creating it. It is eating disk space: the current size is several gigabytes, about 25% of the total disk space needed for the complete workspace. So this hidden $tf folder is quite huge.
The structure is like this:
c:\workspace\$tf\0\{many files with guid in filename}.gz
c:\workspace\$tf\1\{many files with guid in filename}.gz
Does anyone know if I can delete this $tf folder safely, or is it absolutely necessary for keeping track of changes inside the workspace?
TFS keeps a hash and some additional information on all files in the workspace so that it can do change tracking for local workspaces and quickly detect changes in the files. It also contains the compressed baseline for your files. Binary files and already-compressed files will clog up quite a bit of space; simple .cs files should stay very small (depending on your FAT/NTFS cluster size).
If you want to get rid of these, set the workspace type to a server workspace, but you then lose the advantages of local workspaces.
Deleting these files helps only temporarily, since TFS will force their recreation as soon as you perform a Get operation.
You can reduce the size of this folder by doing a few things:
Create small, targeted workspaces (only grab the items you need to do the changes you need to make)
Cloak folders: exclude folders containing items you don't need, especially folders containing lots of large binary files
Put your dependencies in NuGet packages instead of checking them into source control.
Put your TFS workspace on a drive with a small NTFS/FAT cluster size (a cluster size of 64 KB will seriously enlarge the amount of disk space required if all you have are 1 KB files); a quick way to check the cluster size is sketched below
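A hedged way to check the cluster size of an NTFS volume, and to format a new volume with a smaller one (E: is illustrative, and format erases the volume):
rem look for "Bytes Per Cluster" in the output
fsutil fsinfo ntfsinfo C:
rem format a fresh volume with a 4 KB allocation unit
format E: /FS:NTFS /A:4096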
To set up a server workspace, change the setting hidden in the advanced workspace settings section (screenshot omitted).
The simple answer: I deleted the $tf files once. The net result was that newly added files showed up in my pending changes, but when I changed an existing file, the change did not show up in my pending changes. So I would not recommend deleting this folder.
To answer the original question: yes, you can. However, for TFS to track changes, the folder will need to be recreated, albeit with fewer subfolders and much less disk space. To do that:
First, delete all the $tf folders currently in your workspace folder.
Next, move all of the remaining contents of the original folder to another empty folder, preferably one on another drive.
Perform a "Get latest" into the original (now empty) workspace folder (this will cause a single $tf folder to be created in that original folder).
Now copy all of the contents you moved to the backup folder over the top of the "Get latest" results in the original workspace folder.
By performing these steps in that order, you will end up with the $tf entries TFS needs, but in a single folder and much more compact. Additionally, the deltas of any changes you made that had not been checked in will be preserved, and TFS will recognize them as pending changes, as it should.
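A rough cmd sketch of those steps (paths are illustrative, and tf.exe must be on the PATH, e.g. in a Developer Command Prompt):
rem 1. delete the hidden $tf folder in the workspace
rd /s /q "C:\workspace\$tf"
rem 2. move the remaining contents to a scratch folder on another drive
robocopy "C:\workspace" "D:\stash" /E /MOVE
rem 3. get latest into the now-empty workspace folder
cd /d C:\workspace
tf get /recursive
rem 4. copy the stashed contents back over the fresh results
robocopy "D:\stash" "C:\workspace" /E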
Our Certitude AMULETs C++ solution has 72 advanced projects in it, and we have to do this once a month to keep compiling and search speeds reasonable.
I deleted the $tf directory, and Get Latest behaved: it asked me if I wanted to keep the local files or replace them with the server versions. I could then check in as normal.
The mildly annoying part was that about 30 files I had locally, which I had told it to ignore, reappeared.

Dir cmd prompt omits files - why?

I am using the following command at the cmd prompt to get a list of the files and folders in a directory: v:\> dir /s > name.txt
The text file seems to be too small for my directory (3700 items), as it omits items lower in the directory tree. I initially thought the size of the text file was causing the problem, because of the last comment in this thread:
Is there a size limit on a text file?
I tried changing the command to v:\> dir /s > name.xls. This worked, but when I opened the Excel sheet, the list still omitted files lower in the directory. This is surprising because, according to Microsoft,
http://office.microsoft.com/en-ca/excel-help/excel-specifications-and-limits-HP005199291.aspx
an Excel sheet can hold up to 65,536 rows, and my newly created sheet only went to row 3561.
I could solve the problem by running the command at the subfolder level, but I would have to run it many, many times. If you have a solution, it would be much appreciated.
This will give you a list of all the files, hidden or system or not (specifying /a overrides dir's default of skipping hidden and system files, and the -d excludes directories):
dir /b /s /a-d >file.txt
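To sanity-check the result, count the lines in the output; find /c /v "" prints the number of lines it reads (V: as in the question):
dir /b /s /a-d V:\ > name.txt
find /c /v "" < name.txt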

Overwrite file when copying if the content is not the same

I have a lot of files in one place (A) and a lot of other files in another place (B).
I'm copying A to B; many of the files are the same, but their content could be different!
Usually I used mc (Midnight Commander) for this and selected "Overwrite if different size".
But there is a situation where the sizes are the same but the content is different. In this case mc keeps the file in B and does not overwrite it.
In the mc overwrite dialog there is a word "Update"; I don't know what it does, and there is no information about it in the help. Maybe this is the solution?
So I'm looking for a solution that can copy all files from A to B and overwrite files in B if they exist AND their content differs from A.
If a file in B exists (same name) and its content is different, it has to be overwritten by the file from A every time.
Do you know any solution?
I'd use rsync as this will not rely on the file date but actually check whether the content of the file has changed. For example:
#> rsync -cr <directory to copy FROM> <directory to copy TO>
Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).
-c, --checksum skip based on checksum, not mod-time & size
-r, --recursive recurse into directories
See man rsync for more options and details.
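To preview what would be transferred before committing to the copy (A/ and B/ are illustrative; the trailing slash makes rsync copy the directory's contents rather than the directory itself):
#> rsync -crn --itemize-changes A/ B/
-n, --dry-run perform a trial run with no changes made
-i, --itemize-changes output a change summary for all updates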
Have you tried the command line:
cp -ru A/* B/
This should recursively copy all changed files (those with a more recent timestamp) from directory A to directory B. Note that -u compares timestamps, not content, so it will not catch files whose content differs but whose destination timestamp looks up to date.
You can also use -a instead of -r in the command line, depending on what you want to do. See the cp man page.
You might want to keep some sort of 'index' file that holds the SHA-1 hash of each file, created when you write the files. You can then calculate the 'source' hash and compare it against the 'destination' hash from the index file. This will only work if this process is the only way files are written to the destination.
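A minimal sketch of that index idea with standard tools (sha1sum is from GNU coreutils; A and B stand for the two places):
# build the index while the source tree is in its final state
(cd A && find . -type f -exec sha1sum {} + > ~/index.sha1)
# later, verify the destination against the index; only mismatches are printed
(cd B && sha1sum -c --quiet ~/index.sha1)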
http://linux.math.tifr.res.in/manuals/man/mc.html
The replace dialog is shown when you attempt to copy or move a file on top of an existing file. The dialog shows the dates and sizes of both files. Press the Yes button to overwrite the file, the No button to skip the file, the alL button to overwrite all the files, the nonE button to never overwrite, and the Update button to overwrite if the source file is newer than the target file. You can abort the whole operation by pressing the Abort button.

Linux - Restoring a file

I've written a very basic shell script that moves a specified file into the dustbin directory. The script is as follows:
#!/bin/bash
# move the specified items to the dustbin directory
mv "$@" ~/dustbin/
echo "File moved to dustbin"
This works fine for me: any file I specify gets moved to the dustbin directory. However, what I would like to do is create a new script that moves a file in the dustbin directory back to its original directory. I know I could easily write a script that moves it back to a location specified by the user, but I would prefer one that moves it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal.
You will have to store where the original file came from, then. Maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns:
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway: what will you do when a user deletes two files in different directories but with the same name, such as /home/user/.wgetrc and /home/user/old/.wgetrc?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
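A minimal bash sketch of the logfile idea as a pair of scripts, ignoring the collision and versioning caveats above (the tab-separated log at ~/dustbin/.log is a name chosen here for illustration):
#!/bin/bash
# delete: move each file to the dustbin and log where it came from
for f in "$@"; do
    abs=$(cd "$(dirname "$f")" && pwd)/$(basename "$f")   # absolute original path
    printf '%s\t%s\n' "$(basename "$abs")" "$abs" >> ~/dustbin/.log
    mv "$abs" ~/dustbin/
done

#!/bin/bash
# restore: look up the most recent original path of the file named $1
orig=$(awk -F'\t' -v n="$1" '$1 == n { p = $2 } END { print p }' ~/dustbin/.log)
mv ~/dustbin/"$1" "$orig"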
You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/, I see that some, but not all, files have extended attributes, so I'm not sure how Apple does it.
You need to somehow encode the source directory in the file. I think the easiest would be to change the filename in the dustbin directory, so that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music.
When you copy it back, your script needs to process the file name and reconstruct the path from the part after the |.
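A rough sketch of that encoding; it assumes file names contain no | and directory names contain no _, since the scheme cannot tell them apart on the way back:
# delete: /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music
f=$1
src_dir=$(cd "$(dirname "$f")" && pwd)
mv "$f" ~/dustbin/"$(basename "$f")|$(printf '%s' "${src_dir#/}" | tr '/' '_')"

# restore: split the stored name at the | and turn _ back into /
stored=$1                 # e.g. 'song.mp3|home_user_music'
mv ~/dustbin/"$stored" "/$(printf '%s' "${stored##*|}" | tr '_' '/')/${stored%%|*}"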
Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd do find ~/dustbin -name "$file" to find it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the proposed files for user selection. You could use ~/dustbin/$deletiondate if you wanted to make it possible to roll back to earlier versions.
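A bash sketch of that layout; the absolute path is built by hand because readlink -f is not available on older OS X:
# delete: mirror the file's original path underneath ~/dustbin
abs=$(cd "$(dirname "$1")" && pwd)/$(basename "$1")
mkdir -p ~/dustbin"$(dirname "$abs")"
mv "$abs" ~/dustbin"$abs"

# restore: find by basename, then strip the dustbin prefix to get the destination
f=$(find ~/dustbin -type f -name "$1" | head -n 1)   # first match; list them all for user selection instead
mv "$f" "${f#$HOME/dustbin}"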
You could do a cron job that periodically removes old files and any directories that have become empty.
