Copying a file, but appending index if file exists - python-3.x

I have several directories with filenames being the same, but their data inside is different.
My program identifies these files (among many others) and I would like to copy all the matches to the same directory.
I am using shutil.copy(src, dst), but I don't want to overwrite files that already exist in that directory (previous matches) if they have the same name. I'd like to be able to append an integer if the name is already taken, similar to the behavior in Windows 10, where the copy dialog lets you "keep both versions".
So for example, if I have file.txt in several places, the first time it is copied into the dst directory it would be file.txt, the next time file-1.txt (or something similar), and the next time file-2.txt.
Are there any flags for shutil.copy or some other copy mechanism in Python that I could use to accomplish this?
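shutil.copy has no flag for this, but a small wrapper can probe for a free name before copying. A minimal sketch (the helper name copy_with_index is made up for illustration; note the check-then-copy is not atomic, so it can race with concurrent writers):

import os
import shutil

def copy_with_index(src, dst_dir):
    """Copy src into dst_dir, appending -1, -2, ... if the name is taken."""
    name, ext = os.path.splitext(os.path.basename(src))
    candidate = os.path.join(dst_dir, name + ext)
    i = 1
    while os.path.exists(candidate):
        candidate = os.path.join(dst_dir, f'{name}-{i}{ext}')
        i += 1
    shutil.copy(src, candidate)
    return candidate

With this, the first match lands as file.txt, the second as file-1.txt, and so on.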

Related

Linux - copying files like on Windows (ignoring case in file names)

How can I copy files while ignoring their case?
For example, I want to copy the test.txt file to the target directory, which already has Test.txt (same name, different case). However, I don't want it to be copied as a new file, but to replace the existing one.
I want the same copy behavior as on Windows: Windows is case-insensitive, so files and directories with the same name but different case are overwritten.
I would like to improve the installation of mods for games installed with Wine/Proton. I have to use Windows applications like 'Total Commander' to get everything copied properly.
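One way to approximate the Windows behavior from a script, assuming a flat target directory: look for a case-insensitive name match in the destination and reuse the existing name, so the copy overwrites instead of creating a sibling. A rough Python sketch (the helper name is hypothetical):

import os
import shutil

def copy_case_insensitive(src, dst_dir):
    """Copy src into dst_dir, overwriting an existing file whose name
    differs only in case (the existing file's casing is kept)."""
    target = os.path.basename(src)
    for entry in os.listdir(dst_dir):
        if entry.lower() == target.lower():
            target = entry  # e.g. reuse Test.txt when copying test.txt
            break
    shutil.copy(src, os.path.join(dst_dir, target))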

Linux terminal script to create boilerplate files in current working directory with one varying word?

I have to create two boilerplate files, both of which always have the same content, with the EXCEPTION of a single word. I'm thinking of creating a command or something that I can run in the Linux terminal (Ubuntu), along with an argument that represents the one word which can vary in the files created. Perhaps a batch file will accomplish this, but I don't know what it will look like.
I will be able to run this command every time I create these boilerplate files, instead of pasting the boilerplate and changing the one word in the file that has to be changed.
These file paths relative to my current working directory are:
registration.php
etc/module.xml
A simple Python script that reads in the file as a string and replaces the occurrences would probably be the quickest. Something like:
with open('somefile.txt', 'r+') as inputFile:
    txt = inputFile.read().replace('someword', 'replacementword')
    inputFile.seek(0)
    inputFile.write(txt)
    inputFile.truncate()  # drop leftover bytes if the new text is shorter
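To turn that into the requested command with the varying word as an argument, one could wrap the same replace in a small script. A sketch, assuming template files sitting next to the targets and 'someword' as the placeholder (the script and template names here are made up):

import os
import sys

# Usage: python makeboilerplate.py <word>
word = sys.argv[1]
for template, target in [('registration.php.tpl', 'registration.php'),
                         ('module.xml.tpl', 'etc/module.xml')]:
    os.makedirs(os.path.dirname(target) or '.', exist_ok=True)
    with open(template) as f:
        content = f.read().replace('someword', word)
    with open(target, 'w') as f:
        f.write(content)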

How to create a copy of a directory on Linux with links

I have a series of directories on Linux and each directory contains lots of files and data. The data in those directories are automatically generated, but multiple users will need to perform more analysis on that data and generate more files, change the structure, etc.
Since these data directories are very large, I don't want several people to make a copy of the original data so I'd like to make a copy of the directory and link to the original from the new one. However, I'd like any changes to be kept only in the new directory, and leave the original read only. I'd prefer not to link only specific files that I define because the data in these directories is so varied.
So I'm wondering if there is a way to create a copy of a directory by linking to the original but keeping any changed files in the new directory only.
It turns out this is what I wanted:
cp -al <origdir> <newdir>
It will copy an entire directory tree and create hard links to the original files. If an original file is deleted, the copied file still exists, and vice versa. This works perfectly, but I found that newdir must not already exist. As long as the original files are read-only, you'll be able to create an identical, safe copy of the original directory.
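Incidentally, the same effect is available from Python if you ever need it there: shutil.copytree accepts a copy_function, and passing os.link recreates the tree with hard links instead of data copies (like cp -al, the destination must not already exist):

import os
import shutil

# Recreate the directory tree, hard-linking files instead of copying them.
shutil.copytree('origdir', 'newdir', copy_function=os.link)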
However, since you are looking for a way that people can write back changes, UnionFS is probably what you are looking for. It provides means to combine read-only and read-write locations into one.
Unionfs allows any mix of read-only and read-write branches, as well as insertion and deletion of branches anywhere in the fan-out.
Originally I was going to recommend this (I use it a lot):
Assuming permissions aren't an issue (e.g. only reading is required), I would suggest bind-mounting them into place.
mount -B <original> <new-location>
# or
mount --bind <original> <new-location>
<new-location> must exist as a folder.

Overwrite file when copying IF their content is not the same

I have a lot of files in one place (A) and a lot of other files in another place (B).
I'm copying A to B; many of the files are the same, but their content could be different!
Usually I used mc (Midnight Commander) to do it, and selected "Overwrite if different size".
But there are situations where the sizes are the same but the content is different. In this case mc keeps the file in B and does not overwrite it.
In the mc overwrite dialog there is an option "Update", but I don't know what it does. There is no such information in the help; maybe this is the solution?
So I'm searching for a solution that copies all files from A to B and overwrites files in B if they exist AND their content differs from A.
If a file with the same name exists in B and its content is different, it has to be overwritten by the file from A every time.
Do you know any solution?
I'd use rsync as this will not rely on the file date but actually check whether the content of the file has changed. For example:
#> rsync -cr <directory to copy FROM> <directory to copy TO>
Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).
-c, --checksum skip based on checksum, not mod-time & size
-r, --recursive recurse into directories
See man rsync for more options and details.
Have you tried the command line:
cp -ru A/* B/
It should recursively copy all changed files (those with a more recent timestamp) from directory A to directory B.
You can also use -a instead of -r in the command line, depending on what you want to do. See the cp man page.
You might want to keep some sort of 'index' file that holds the SHA-1 hash of the files, which you create when you write them. You can then calculate the 'source' hash and compare it against the 'destination' hash from the index file. This will only work if this process is the only way files are written to the destination.
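The hashing side of that is straightforward with hashlib; here is a sketch that recomputes the destination hash on the fly instead of reading it from an index file:

import hashlib
import os
import shutil

def sha1_of(path):
    """SHA-1 of a file, read in chunks so large files don't fill memory."""
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        while block := f.read(1 << 16):
            h.update(block)
    return h.hexdigest()

def copy_if_different(src, dst):
    # Copy only when dst is missing or its content differs from src.
    if not os.path.exists(dst) or sha1_of(src) != sha1_of(dst):
        shutil.copy(src, dst)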
http://linux.math.tifr.res.in/manuals/man/mc.html
The replace dialog is shown when you attempt to copy or move a file on top of an existing file. The dialog shows the dates and sizes of both files. Press the Yes button to overwrite the file, the No button to skip the file, the alL button to overwrite all the files, the nonE button to never overwrite, and the Update button to overwrite if the source file is newer than the target file. You can abort the whole operation by pressing the Abort button.

Linux - Restoring a file

I've written a very basic shell script that moves a specified file into the dustbin directory. The script is as follows:
#!/bin/bash
#move items to dustbin directory
mv "$#" ~/dustbin/
echo "File moved to dustbin"
This works fine for me, any file I specify gets moved to the dustbin directory. However, what I would like to do is create a new script that will move the file in the dustbin directory back to its original directory. I know I could easily write a script that would move it back to a location specified by the user, but I would prefer to have one that would move it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal
You will have to store where the original file came from, then. Maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns:
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway - what will you do when a user deletes 2 files in different directories, but with the same name? /home/user/.wgetrc and /home/user/old/.wgetrc?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
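A sketch of that logfile approach in Python, prefixing a timestamp so two deleted files with the same name don't collide (the names and layout here are assumptions, not a standard):

import os
import shutil
import time

DUSTBIN = os.path.expanduser('~/dustbin')
LOGFILE = os.path.join(DUSTBIN, '.log')

def to_dustbin(path):
    """Move path into the dustbin under a timestamped name, logging
    the stored name and the original absolute path (tab-separated)."""
    original = os.path.abspath(path)
    stored = f'{int(time.time())}-{os.path.basename(original)}'
    shutil.move(original, os.path.join(DUSTBIN, stored))
    with open(LOGFILE, 'a') as log:
        log.write(f'{stored}\t{original}\n')

def restore(stored):
    """Move a stored file back to its logged location.
    (The log entry is left in place for brevity.)"""
    with open(LOGFILE) as log:
        for line in log:
            name, original = line.rstrip('\n').split('\t')
            if name == stored:
                shutil.move(os.path.join(DUSTBIN, name), original)
                return original
    raise FileNotFoundError(stored)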
You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/ I see some, but not all files have extended attributes, so I'm not sure how Apple does it.
You need to somehow encode the source directory in the file. I think the easiest would be to change the filename in the dustbin directory. So that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music
And when you copy it back, your script needs to process the file name and reconstruct the original path from the part after the |.
Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd do find ~/dustbin -name "$file" to find it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the proposed files for user selection. You could use ~/dustbin/$deletiondate if you wanted to make it possible to roll back to earlier versions.
You could do a cron job that would periodically remove old files and the directories (if empty).
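The mirror-the-path variant is short in Python too, if that's easier than shell for you; restoring is just stripping the dustbin prefix (this sketch ignores the versioning and cron cleanup mentioned above):

import os
import shutil

DUSTBIN = os.path.expanduser('~/dustbin')

def to_dustbin(path):
    """Recreate the file's absolute path underneath ~/dustbin."""
    original = os.path.abspath(path)
    target = os.path.join(DUSTBIN, original.lstrip('/'))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    shutil.move(original, target)

def restore(stored):
    """Trim the dustbin prefix to recover the original location."""
    original = '/' + os.path.relpath(stored, DUSTBIN)
    shutil.move(stored, original)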
