Overwrite file when copying if the content is not the same - Linux

I have a lot of files in one place (A) and a lot of other files in another place (B).
I'm copying A to B; many of the files are the same, but their content can be different!
Usually I use mc (Midnight Commander) for this and select "Overwrite if different size".
But there are cases where the size is the same but the content is different. In this case mc keeps the file in B and does not overwrite it.
In the mc overwrite dialog there is an "Update" option, but I don't know what it does, and the help doesn't explain it. Maybe that is the solution?
So I'm looking for a solution that copies all files from A to B and overwrites a file in B if it exists AND its content differs from the one in A.
If a file with the same name exists in B and its content is different, it has to be overwritten by the file from A every time.
Does anyone know a solution?

I'd use rsync as this will not rely on the file date but actually check whether the content of the file has changed. For example:
#> rsync -cr <directory to copy FROM> <directory to copy TO>
Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).
-c, --checksum skip based on checksum, not mod-time & size
-r, --recursive recurse into directories
See man rsync for more options and details.
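For the A and B directories from the question, that might look like this (the trailing slash on A/ makes rsync copy the contents of A into B rather than creating a B/A subdirectory):
#> rsync -cr A/ B/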

Have you tried the command line:
cp -ru A/* B/
It should recursively copy all changed files (those with a more recent timestamp) from directory A to directory B.
You can also use -a instead of -r in the command line, depending on what you want to do. See the cp man page.

You might want to keep some sort of 'index' file that holds the SHA-1 hash of the files, which you create when you write them. You can then calculate the 'source' hash and compare it against the 'destination' hash from the index file. This will only work if this process is the only way files are written to the destination.
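A minimal sketch of that idea using sha1sum, assuming the index was generated over the destination tree beforehand (the index.sha1 name and the A/ and B/ paths are placeholders):
#!/bin/bash
# index.sha1 is assumed to have been created with:
#   (cd B && find . -type f -exec sha1sum {} +) > index.sha1
# Re-hash each file on the source side and copy it when the hashes differ.
# Files not yet listed in the index are not handled by this sketch,
# and after copying you would also update the index entry.
while read -r dst_hash name; do
    src_hash=$(sha1sum "A/$name" | awk '{print $1}')
    if [ "$src_hash" != "$dst_hash" ]; then
        cp "A/$name" "B/$name"
    fi
done < index.sha1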

http://linux.math.tifr.res.in/manuals/man/mc.html
The replace dialog is shown when you attempt to copy or move a file on top of an existing file. The dialog shows the dates and sizes of both files. Press the Yes button to overwrite the file, the No button to skip the file, the alL button to overwrite all the files, the nonE button to never overwrite and the Update button to overwrite if the source file is newer than the target file. You can abort the whole operation by pressing the Abort button.

Related

Copying a file, but appending index if file exists

I have several directories with filenames being the same, but their data inside is different.
My program identifies these files (among many others) and I would like to copy all the matches to the same directory.
I am using shutil.copy(src, dst), but I don't want to overwrite files that already exist in that directory (previous matches) if they have the same name. I'd like to append an integer if the name already exists, similar to the behavior in Windows 10 where you can "keep both versions" when copying.
So for example, if I have file.txt in several places, the first time it is copied into the dst directory it would be file.txt, the next time it would be file-1.txt (or something similar), and the time after that file-2.txt.
Are there any flags for shutil.copy or some other copy mechanism in Python that I could use to accomplish this?

How to rename a file test.c to Test.c in perforce?

I have a file named test.c in Perforce, but I want it to be Test.c (capital T).
I tried renaming, and also deleting and then re-adding, but neither method works! The file gets updated on my machine, but when someone else syncs, the file is still test.c, not Test.c!
What can I do in this case?
I also have many files with the same name inside the directory tree in Perforce, and I want to rename them all.
Ex:
dir1->test.c , dir2
dir2 ->test.c , dir3
dir3->test.c
This should become:
dir1->Test.c , dir2
dir2 ->Test.c , dir3
dir3->Test.c
If the file name appears correct when looking at the tree in P4V, but is the wrong case on the client machine, try removing the file from the workspace and then resyncing. Windows won't rename the file if it's already on disk because it's a case insensitive file system.
This is a longstanding bug with Perforce/Helix that they have consistently refused to fix for over 14 years.
The Helix knowledgebase workaround does NOT work, don't waste your time.
The closest I've found to a solution is the following (a command sketch follows after these notes):
Rename the file to an interim string that you will never use in the future and have never used before - e.g. add a UUID suffix
Commit this change
Copy up to the highest applicable parent stream
Merge down into ALL streams that will ever need the change and have the original erroneous name. You can filter the merge down to just the rename/move operation.
Get Latest on ALL Windows workspaces for the affected streams
Rename the file to the final case-sensitive string
Commit this change
Copy up as per step 3
Merge down as per step 4
Important: You MUST copy up and merge down while the file has the temporary name.
Perforce does not take interim changes into account when merging down or copying up - this is different to every other source control system that I'm aware of and caused much heartache.
Note: While file history across the rename is apparently preserved in the database, it appears that you can only see that the rename occurred and cannot diff or merge across the change.
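As a rough illustration of the two rename submits from the steps above, using the command-line client (test-renaming-3f2a9c.c is just an example interim name; the copy up / merge down steps in between depend on your stream setup and are not shown):
p4 edit test.c
p4 move test.c test-renaming-3f2a9c.c
p4 submit -d "rename to interim name"
# ... copy up and merge down the interim name to all affected streams ...
p4 edit test-renaming-3f2a9c.c
p4 move test-renaming-3f2a9c.c Test.c
p4 submit -d "rename to final case-sensitive name"
# ... copy up and merge down again ...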

Getting files names inside a rar/zip file without unzip

Does anyone know if it is possible to get the names of the files inside a rar/zip archive without having to unrar/unzip it? And if so, is there a way to block this or make it difficult?
Thanks
The file names in a zip file are visible even if the data is encrypted. If you want to hide the names, the easy solution is to zip the zip file encrypted.
Later versions of PKZip do have an option to encrypt the file names as well with -cd=encrypt. (cd means central directory.)
The -l flag to unzip(1) does just that:
-l
list archive files (short format). The names, uncompressed file sizes and modification dates and times of the specified files are printed, along with totals for all files specified.
unrar(1) has the l option:
l
List archive content.

How to create a copy of a directory on Linux with links

I have a series of directories on Linux and each directory contains lots of files and data. The data in those directories are automatically generated, but multiple users will need to perform more analysis on that data and generate more files, change the structure, etc.
Since these data directories are very large, I don't want several people making copies of the original data, so I'd like to make a copy of the directory that links to the original. However, I'd like any changes to be kept only in the new directory, and to leave the original read-only. I'd prefer not to link only specific files that I define, because the data in these directories is so varied.
So I'm wondering if there is a way to create a copy of a directory by linking to the original but keeping any changed files in the new directory only.
It turns out this is what I wanted:
cp -al <origdir> <newdir>
It will copy an entire directory and create hard links to the original files. If the original file is deleted, the copied file still exists, and vice-versa. This will work perfectly, but I found newdir must not already exist. As long as the original files are read-only, you'll be able to create an identical, safe copy of the original directory.
However, since you are looking for a way that people can write back changes, UnionFS is probably what you are looking for. It provides means to combine read-only and read-write locations into one.
Unionfs allows any mix of read-only and read-write branches, as well as insertion and deletion of branches anywhere in the fan-out.
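As an illustration of the same idea, a union-style mount with OverlayFS (a closely related union filesystem built into modern Linux kernels) looks roughly like this; all paths are placeholders, and workdir must be an empty directory on the same writable filesystem as upperdir:
mount -t overlay overlay -o lowerdir=/data/original,upperdir=/data/changes,workdir=/data/work /data/merged
# /data/original stays untouched; all writes made under /data/merged end up in /data/changes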
Originally I was going to recommend this (I use it a lot):
Assuming the permissions aren't an issue (e.g. only reading is required), I would suggest bind-mounting them into place.
mount -B <original> <new-location>
# or
mount --bind <original> <new-location>
<new-location> must exist as a folder.

Linux - Restoring a file

I've written a very basic shell script that moves a specified file into the dustbin directory. The script is as follows:
#!/bin/bash
#move items to dustbin directory
mv "$@" ~/dustbin/
echo "File moved to dustbin"
This works fine for me: any file I specify gets moved to the dustbin directory. However, what I would like to do is create a new script that moves a file from the dustbin directory back to its original directory. I know I could easily write a script that moves it back to a location specified by the user, but I would prefer one that moves it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal
You will have to store where the original file came from, then. Maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns (a rough sketch follows below):
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway - what will you do when a user deletes two files in different directories but with the same name, e.g. /home/user/.wgetrc and /home/user/old/.wgetrc?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
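A rough sketch of the logfile approach as a pair of scripts (the .locations filename and the ':' separator are arbitrary choices, and a real version would also need to handle duplicate names inside the dustbin itself):
#!/bin/bash
# del.sh - move a file into the dustbin and remember where it came from
log=~/dustbin/.locations
orig="$(cd "$(dirname "$1")" && pwd)/$(basename "$1")"   # absolute path of the file
mv "$orig" ~/dustbin/
echo "$(basename "$1"):$orig" >> "$log"

#!/bin/bash
# restore.sh - move a file from the dustbin back to where it was deleted from
log=~/dustbin/.locations
orig=$(grep "^$1:" "$log" | tail -n 1 | cut -d: -f2-)    # last recorded location wins
mv ~/dustbin/"$1" "$orig"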
You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/ I see some, but not all files have extended attributes, so I'm not sure how Apple does it.
You need to somehow encode the source directory in the file name. I think the easiest would be to change the filename in the dustbin directory, so that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music
When you move it back, your script needs to parse the file name and reconstruct the original path from the part after the |.
Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd do find ~/dustbin -name "$file" to find it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the proposed files for user selection. You could use ~/dustbin/$deletiondate if you wanted to make it possible to roll back to earlier versions.
You could do a cron job that would periodically remove old files and the directories (if empty).
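A minimal sketch of that variant (paths are placeholders, and it simply takes the first match instead of asking the user to choose):
# delete: mirror the original directory tree underneath the dustbin
src="$(cd "$(dirname "$1")" && pwd)/$(basename "$1")"   # absolute path of the file
mkdir -p ~/dustbin"$(dirname "$src")"
mv "$src" ~/dustbin"$src"

# restore: find the file by name and strip the dustbin prefix to get the destination
found=$(find ~/dustbin -name "$1" | head -n 1)
mv "$found" "${found#$HOME/dustbin}"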
