File renaming in Linux

I have tried to rename several files on my Linux system. I used rename 's/foo/bar/g' * and all the files that I wish to change are in the current directory.
It does not change the names of the files, but I think it should. Any help would be appreciated.

An easy way would be to do:
mv file2rename newname

You mentioned that you want to rename multiple files at once with a rename expression. The * on its own does not change any names; it is a glob that only selects which files rename operates on. You can rename a common part of several filenames, but you cannot give two files in the same directory the same full name. For example:
admin#home:~/works$ ls test*.c
test_car.c test_dog.c test_van.c
You can rename part of these filenames rather than the whole name, because two files with the same name and extension cannot exist in the same directory:
admin#home:~/works$ rename 's/test/practice/' *.c
After executing this command, every "test" is replaced with "practice":
admin#home:~/works$ ls practice*.c
practice_car.c practice_dog.c practice_van.c
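If the Perl rename utility is not available, the same bulk rename can be sketched in plain shell with parameter expansion. This is a minimal sketch; the directory /tmp/rename_demo and the filenames are made up for the demo:

```shell
# Demo setup: create sample files (hypothetical names)
mkdir -p /tmp/rename_demo
cd /tmp/rename_demo
touch test_car.c test_dog.c test_van.c

# Replace the leading "test" with "practice" in every matching name
for f in test*.c; do
    [ -e "$f" ] || continue          # skip if the glob matched nothing
    mv "$f" "practice${f#test}"      # ${f#test} strips the leading "test"
done

ls
```

Here ${f#test} removes the shortest leading match of "test", so test_car.c becomes practice_car.c, exactly like the rename expression above.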

Renaming a file with mv
mv old_name new_name
The mv command changes the name of the file from old_name to new_name.

Another way to rename file extensions in the current directory, for instance renaming all .txt files to .csv:
for file in $(ls .); do
    mv $file ${file/.txt/.csv}
done
This will not affect files that don't have the .txt extension, though it will print an error for them (this should be developed further depending on your needs).
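A variant of the loop above that avoids parsing ls output: let the shell glob match only the .txt files directly. This is a sketch; /tmp/csv_demo and the sample filenames are invented for the demo:

```shell
# Demo setup (hypothetical files)
mkdir -p /tmp/csv_demo
cd /tmp/csv_demo
touch a.txt b.txt keep.log

for file in *.txt; do
    [ -e "$file" ] || continue           # nothing matched the glob
    mv -- "$file" "${file%.txt}.csv"     # strip the .txt suffix, append .csv
done

ls
```

Because the glob only matches *.txt, keep.log is never touched and no spurious errors are printed.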

Some posts point out the usage of for x in $(something); do ...
Please don't (ever, under any circumstances) use that! (see below)
Say you have a file (among other .txt files) named:
"my file with a very long file - name-.txt"
and you do
for f in $(ls *.txt); do echo $f; done
(or something like that), it will output
my
file
with
a
very
long
file
-
name-.txt
(or something similar), because the unquoted command substitution is split on whitespace.
Instead, try the following:
#! /bin/sh
if [ $# -ne 3 ]; then
    echo "example usage: bash $0 .txt .csv <DIR>"
    echo "(renames all files ending with .txt to .csv in DIR)"
    exit 1
fi
A="$1"      # OLD SUFFIX (e.g. .txt)
B="$2"      # NEW SUFFIX (e.g. .csv)
DIR="$3*$A" # DIR plus glob (e.g. ./*.txt)
# for each file f matching the glob $DIR:
for f in $DIR; do
    # "|| continue" means: skip this iteration
    # when the is-file check fails (e.g. the glob matched nothing)
    [ -e "$f" ] || continue
    # rename "$f", replacing its ending $A with $B (e.g. ".txt" to ".csv")
    mv "$f" "${f/$A/$B}"
done
### $ tree docs/
# docs/
# ├── a.txt
# ├── b.txt
# ├── c.txt
# └── d.txt
#
# 0 directories, 4 files
#
### $ bash try3.sh .txt .csv docs/
# $ tree docs/
# docs/
# ├── a.csv
# ├── b.csv
# ├── c.csv
# └── d.csv
#
# 0 directories, 4 files
##
#-------------------#
References:
Bash Pitfalls
DontReadLinesWithFor
Quotes
Bash FAQ
man pages ($ man <name>):
- bash
- mv
- ls
Note: I don't mean to be offensive, so please don't take this as offense (I actually got the main command idea from meniluca!
But since it used for x in $(ls ..), I decided to write a whole script rather than just edit).


How could I add the containing directory as a prefix to a copied file name?

The issue: I have a bunch of files split across multiple directories, all of which have the same name (input.txt).
What I am after: I want to first copy all of these to a new directory, while adding the containing directory as a prefix to avoid confusion between them and prevent overwriting. This is the basis of what I am trying to do:
cp -nr /foo/bar/*/input.txt /new/path/
Where do I go from here?
To respond to the comments below, if my file structure in /old/directory contains folders:
/old/directory/1/input.txt
/old/directory/2/input.txt
/old/directory/3/input.txt
This is an example of my desired output:
/new/directory/ should contain:
1input.txt
2input.txt
3input.txt
Thanks
This will do the trick and also handle any directories that might have spaces in their names (or any other odd characters).
#!/bin/bash
tld=./old/directory
newpath=./new/directory
while IFS= read -r -d $'\0' file; do
    tmp="${file#*${tld}/}"
    echo cp "$file" "$newpath/${tmp//\//}"
done < <(find "$tld" -type f -name "input.txt" -print0)
Proof of Concept
$ tree ./old/directory/
./old/directory/
├── 1
│   └── input.txt
├── 2
│   └── input.txt
└── 3
    ├── 3a
    │   └── input.txt
    └── input.txt
4 directories, 4 files
$ ./mvinput.sh
cp ./old/directory/3/input.txt ./new/directory/3input.txt
cp ./old/directory/3/3a/input.txt ./new/directory/33ainput.txt
cp ./old/directory/1/input.txt ./new/directory/1input.txt
cp ./old/directory/2/input.txt ./new/directory/2input.txt
Well, the tough news is that there's no obvious way of doing this in one line - at least not one that isn't nonsensically difficult to understand. There may be a way to do it with rsync, and I'm sure someone smarter than I could do it in awk, but in my opinion you're better off writing a script, or even a custom binary, that does this for you.
find . -name input.txt | while read line
do
    cp "$line" /new/path/`echo $line | cut -d '/' -f3- | sed 's/\//_/'`
done
Note that you'll probably have to change the -f3- part of the cut command to select the directory level at which you want your prefix to start.
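To see what a given -fN- selection does before running the copy, you can test the pipeline on a single sample path. This uses the example directory layout from the question; here -f4- keeps everything from the numbered directory onward:

```shell
line="./old/directory/3/input.txt"
# Fields split on "/": 1=".", 2="old", 3="directory", 4="3", 5="input.txt"
new=$(echo "$line" | cut -d '/' -f4- | sed 's/\//_/')
echo "$new"    # prints: 3_input.txt
```

Note the sed expression only replaces the first slash; for deeper trees you would use s/\//_/g instead.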
One approach is to use an array to save the files; also, since / is not allowed in file names, an alternative is to change it to something else, say an underscore.
#!/usr/bin/env bash

##: Just in case there are no files/directories, the * glob will not expand by itself.
shopt -s nullglob
files=(foo/bar/*/input.txt)

for file in "${files[@]}"; do
    new_file=${file//\//_}  ##: Replace all /'s with an _ via parameter expansion
    echo cp -v "$file" new/path/"$new_file"
done
As per the OP's request, here is the new answer.
#!/usr/bin/env bash

shopt -s nullglob
##: Although files=(/old/directory/{1..3}/input.txt)
##: could be a replacement, with no need for nullglob
files=(/old/directory/*/input.txt)

for file in "${files[@]}"; do
    tmp=${file%[0-9]*}
    new_file=${file#*$tmp}
    echo cp -v "$file" new/path/"${new_file//\//}"
done
Another option is to split the fields using / as the delimiter.
#!/usr/bin/env bash

##: Do not expand a literal glob * if there are no files/directories
shopt -s nullglob
##: Save the whole paths and file names in an array.
files=(/old/directory/*/input.txt)

for file in "${files[@]}"; do
    IFS='/' read -ra path <<< "$file"  ##: split on / and save in an array
    tmp=${path[@]:(-2)}                ##: keep only the last 2 fields, e.g. "1 input.txt"
    new_file=${tmp// }                 ##: remove the space, becomes 1input.txt
    echo cp -v "$file" new/path/"$new_file"
done
Remove the echo if you think the output is correct.
It is easy to tell which directory each file came from: just replace the underscores with a /.
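The parameter expansions used in the answers above can be checked in isolation. A quick demo with one of the example paths from the question:

```shell
file="/old/directory/2/input.txt"

# Replace every "/" with "_" (the first approach)
echo "${file//\//_}"          # prints: _old_directory_2_input.txt

# Strip everything up to the numbered directory, then drop the remaining "/"
tmp=${file%[0-9]*}            # shortest suffix starting with a digit removed -> /old/directory/
new_file=${file#*$tmp}        # -> 2/input.txt
echo "${new_file//\//}"       # prints: 2input.txt
```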

Finding and deleting files using python script [duplicate]

This question already has answers here:
Get a filtered list of files in a directory
(14 answers)
Closed 6 years ago.
I am writing a Python script to find and remove all .py files that have corresponding .pyc files.
How do I extract this file list and remove them?
For example, consider some files in /foo/bar:
file.py
file.pyc
file3.py
file2.py
file2.pyc...etc
I want to delete file.py and file2.py, but not file3.py, as it does not have a corresponding .pyc file.
And I want to do this in all folders under '/'.
Is there one-liner bash code for the same?
P.S : I am using CentOS 6.8, having python2.7
Here's my solution:
import os

ab = []
for roots, dirs, files in os.walk("/home/foo/bar/"):
    for file in files:
        if file.endswith(".py"):
            ab.append(os.path.join(roots, file))

bc = []
for i in range(len(ab)):
    bc.append(ab[i] + "c")

xy = []
for roots, dirs, files in os.walk("/home/foo/bar/"):
    for file in files:
        if file.endswith(".pyc"):
            xy.append(os.path.join(roots, file))

ex = [x[:-1] for x in bc if x in xy]
for i in ex:
    os.remove(i)
P.S.: I'm a newbie in Python scripting.
Bash solution:
#!/bin/bash
find /foo/bar -name "*.py" -exec ls {} \; > file1.txt
find /foo/bar/ -name "*.pyc" -exec ls {} \; > file2.txt
p=`wc -l file1.txt | cut -d' ' -f1`
for ((c=1; c<=$p; c++))
do
    grep `sed -n ${c}p file1.txt | sed s/$/c/g` file2.txt > /dev/null
    if [ $? -eq 0 ]
    then
        list=`sed -n ${c}p file1.txt`
        echo "exists: $list"
        rm -rf `sed -n ${c}p file1.txt`
    fi
done
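Since the question asks for a one-liner, here is a sketch of the same logic using a single find command. The demo directory /tmp/pyc_demo and its files are invented to mirror the /foo/bar example; test against a scratch directory before pointing this at real code:

```shell
# Demo setup mirroring the question's example
mkdir -p /tmp/pyc_demo
cd /tmp/pyc_demo
touch file.py file.pyc file2.py file2.pyc file3.py

# For every .py file, delete it only if the sibling .pyc exists
find . -name '*.py' -exec sh -c '[ -e "${1}c" ] && rm -- "$1"' _ {} \;

ls
```

The inner shell receives each path as $1, appends "c" to form the .pyc name, and only removes the .py when that file exists, so file3.py survives.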
This is a solution very close to the operating system.
You could make a shell script from the following commands and invoke it from Python using subprocess.call (see: How to call a shell script from python code?, Calling an external command in Python).
find . -name "*.pyc" > /tmp/pyc.txt
find . -name "*.py" > /tmp/py.txt
From the entries of these files, remove the path and the file ending using sed (or basename):
for f in $(cat /tmp/pyc.txt) ; do
    echo "$f" | sed 's/.*\///' | sed 's/\.[^.]*$//'   # remove path, then remove file ending
done
for f in $(cat /tmp/py.txt) ; do
    echo "$f" | sed 's/.*\///' | sed 's/\.[^.]*$//'   # remove path, then remove file ending
done
(https://unix.stackexchange.com/questions/44735/how-to-get-only-filename-using-sed)
awk 'FNR==NR{a[$1];next}($1 in a){print}' /tmp/pyc.txt /tmp/py.txt > /tmp/rm.txt
(https://unix.stackexchange.com/questions/125155/compare-two-files-for-matching-lines-and-store-positive-results)
for f in $(cat /tmp/rm.txt) ; do
    rm $f
done
(Unix: How to delete files listed in a file)
The following code will work for a single-layer directory. (Note: I wasn't sure how you wanted to handle multiple layers of folders - e.g. if you have A.py in one folder and A.pyc in another, does that count as having both present, or do they have to be in the same folder? In the latter case, it should be fairly simple to loop through the folders and call this code within each loop.)
import os

# Produces a sorted list of all files in a directory
dirList = os.listdir(folder_path)  # Use os.listdir() if you want the current directory
dirList.sort()

# Takes advantage of the fact that py and pyc files share the same base name and
# that pyc files appear immediately after their py counterparts in dirList
lastPyName = ""
for file in dirList:
    if file[-3:] == ".py":
        lastPyName = file[:-3]
    elif file[-4:] == ".pyc":
        if lastPyName == file[:-4]:
            os.remove(lastPyName + ".py")
            os.remove(lastPyName + ".pyc")  # In case you want to delete this too

Copy text from multiple files, same names to different path in bash (linux)

I need help copying content from various files to others (same name and format, different path).
For example, $HOME/initial/baby.desktop has text which I need to write into $HOME/scripts/baby.desktop. This is very simple for a single file, but I have 2500 files in $HOME/initial/ and the same number in $HOME/scripts/ with corresponding names. I want to append (copy) the content of each file in path A to the end of the file with the same name in path B, without erasing the content of the file in path B.
For example, content of $HOME/initial/*.desktop goes to the end of $HOME/scripts/*.desktop. I tried the following, but it doesn't work:
cd $HOME/initial/
for i in $( ls *.desktop ); do egrep "Icon" $i >> $HOME/scripts/$i; done
Firstly, I would backup $HOME/initial and $HOME/scripts, because there is lots of scope for people misunderstanding your question. Like this:
cd $HOME
tar -cvf initial.tar initial
tar -cvf scripts.tar scripts
That will put all the files in $HOME/initial into a single tarfile called initial.tar and all the files in $HOME/scripts into a single tarfile called scripts.tar.
Now for your question... in general, if you want to put the contents of FileB onto the end of FileA, the command is
cat FileB >> FileA
Note the DOUBLE ">>", which means "append", rather than the single ">", which means overwrite.
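A one-minute demonstration of the difference (the file names and contents are arbitrary):

```shell
printf 'first\n'  > /tmp/FileA    # ">" creates or overwrites FileA
printf 'second\n' > /tmp/FileB
cat /tmp/FileB >> /tmp/FileA      # ">>" appends FileB's content to FileA
cat /tmp/FileA                    # prints: first, then second
```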
So, I think you want to do this:
cd $HOME/initial/baby.desktop
cat SomeFile >> $HOME/scripts/baby.desktop/SomeFile
where SomeFile is the name of any file you choose to test with. I would test that has worked and then, if you are happy with that, go ahead and run the same command inside a loop:
cd $HOME/initial/baby.desktop
for SOURCE in *
do
    DESTINATION="$HOME/scripts/baby.desktop/$SOURCE"
    echo Appending "$SOURCE" to "$DESTINATION"
    #cat "$SOURCE" >> "$DESTINATION"
done
When the output looks correct, remove the "#" at the start of the penultimate line and run it again.
I solved it. If anyone wants to learn how, it is very simple:
Using sed
I needed only the matching (pattern) line "Icon=/usr/share/some_picture.png" from $HOME/initial/example.desktop copied to the file with the same name and format, $HOME/scripts/example.desktop, but I had a lot of .desktop files (2500 of them):
cd $HOME/initial
STRING_LINE=`grep -l -R "Icon=" *.desktop`
for i in $STRING_LINE; do sed -ne '/Icon=/ p' $i >> $HOME/scripts/$i ; done
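You can verify the sed matching step on a single file before looping over all 2500. The file content below is invented for the demo:

```shell
mkdir -p /tmp/desktop_demo
printf '[Desktop Entry]\nName=demo\nIcon=/usr/share/some_picture.png\n' \
    > /tmp/desktop_demo/example.desktop

# -n suppresses default output; "p" prints only lines matching /Icon=/
sed -ne '/Icon=/ p' /tmp/desktop_demo/example.desktop
# prints: Icon=/usr/share/some_picture.png
```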
_________
If you need to copy the whole file to another file with the same name and format:
using cat
cd $HOME/initial
STRING_LINE=`grep -l -R "Icon=" *.desktop`
for i in $STRING_LINE; do cat $i >> $HOME/scripts/$i ; done

Moving multiple files in directory that might have duplicate file names

Can anyone help me with this?
I am trying to copy images from my USB drive to an archive on my computer, and I have decided to write a bash script to make this job easier. I want to copy files (e.g. IMG_0101.JPG), and if there is already a file with that name in the archive (which there will be, as I wipe my camera every time I use it), the file should be named IMG_0101.JPG.JPG so that I don't lose the file.
#method, then
mv IMG_0101.JPG IMG_0101.JPG.JPG
else mv IMG_0101 path/to/destination
for file in "$source"/*; do
    newfile="$dest"/"${file##*/}"    # strip the source directory from the name
    while [ -e "$newfile" ]; do
        newfile=$newfile.JPG
    done
    cp "$file" "$newfile"
done
There is a race condition here (if another process creates a file with the same name between the existence check and the cp), but that's fairly theoretical.
It would not be hard to come up with a less primitive renaming policy; perhaps replace .JPG at the end with an increasing numeric suffix plus .JPG?
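A sketch of that numeric-suffix policy: instead of stacking ".JPG", insert a counter before the extension. The directories under /tmp/img_demo are hypothetical, set up just for the demo:

```shell
# Demo setup: a source image whose name already exists in the destination
mkdir -p /tmp/img_demo/src /tmp/img_demo/dst
touch /tmp/img_demo/src/IMG_0101.JPG /tmp/img_demo/dst/IMG_0101.JPG

src=/tmp/img_demo/src/IMG_0101.JPG
name=${src##*/}                      # IMG_0101.JPG
base=/tmp/img_demo/dst/${name%.JPG}  # /tmp/img_demo/dst/IMG_0101

newfile=$base.JPG
n=1
while [ -e "$newfile" ]; do          # bump the counter until the name is free
    newfile=$base.$n.JPG
    n=$((n + 1))
done
cp "$src" "$newfile"

ls /tmp/img_demo/dst
```

With the clash above, the copy lands as IMG_0101.1.JPG next to the original IMG_0101.JPG.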
Use the last-modified timestamp of the file to tag each filename, so that if it is the same file, it isn't copied over again.
Here's a bash specific script that you can use to move files from a "from" directory to a "to" directory:
#!/bin/bash
for f in from/*
do
    filename="${f##*/}"`stat -c %Y "$f"`
    if [ ! -f "to/$filename" ]
    then
        mv "$f" "to/$filename"
    fi
done
Here's some sample output (using the above code in a script called "movefiles"):
# ls from
# ls to
# touch from/a
# touch from/b
# touch from/c
# touch from/d
# ls from
a b c d
# ls to
# ./movefiles
# ls from
# ls to
a1385541573 b1385541574 c1385541576 d1385541577
# touch from/a
# touch from/b
# ./movefiles
# ls from
# ls to
a1385541573 a1385541599 b1385541574 b1385541601 c1385541576 d1385541577

Combining two files in different folders in Linux

I have two sets of folders containing files with the same filenames and structure. The folder structure is something like this:
\outputfolder\
|---\folder1\
| |---file1.txt
| |---file2.txt
|
|---\folder2\
| |---file1.txt
| |---file2.txt
So what I need to do is to combine (append) all the files with the same name in these folders (file1.txt with file1.txt etc.) into another file inside the outputfolder. After getting these combined files I also need to create a tar.gz file from all of these combined files.
How can I accomplish this in a Linux-based command-line environment? The folder names (folder1, folder2, etc.) are variable, so these need to be given, but the filenames do not; it should automatically combine all the files with the same name.
Also, these files have headers with column names, so I would need to remove those as well while appending.
Here's some code to get you started
topdir=outputfolder
dir1=folder1
dir2=folder2
for f in $topdir/$dir1/*.txt
do
    outf=$topdir/`basename $f .txt`-concat.txt
    cp $f $outf
    sed -e '1 d' $topdir/$dir2/`basename $f` >> $outf
done
tar czf foo.tar.gz $topdir/*-concat.txt
Edit: added the part removing the header of the 2nd file.
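The sed -e '1 d' step can be seen in isolation: with two small sample files (contents invented for the demo), only the second file's header is dropped before appending:

```shell
printf 'col1,col2\n1,2\n' > /tmp/part1.txt
printf 'col1,col2\n3,4\n' > /tmp/part2.txt

cp /tmp/part1.txt /tmp/combined.txt                 # keep the first file's header
sed -e '1 d' /tmp/part2.txt >> /tmp/combined.txt    # append the second without its header

cat /tmp/combined.txt
# prints:
# col1,col2
# 1,2
# 3,4
```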
find . -name 'file1.txt' | xargs cat >file1_concat.txt
This will work even if some files exist only in folder1 and some only in folder2:
concat_files() {
    for dir in "$@"; do
        for file in "$dir"/*; do
            this=$(basename "$file")
            { [[ -f "$this" ]] && sed 1d "$file" || cat "$file"; } >> "$this"
        done
    done
    tar zcvf allfiles.tar.gz *
}
concat_files folder1 folder2
It will also work if you have more than two folders for your concatenation job.
I assume you want to keep the header in the resulting file.
Have you tried the cat command (concatenation)?
cat file1 file2 >> outputfile
You might want to put this in a small bash script to loop through the directory. This should get you started.
Best of luck.
Leo
