Rename file names in Linux based on conditions

I have some files in Linux directory like below.
email_Tracking_export_2018_08_26.zip
email_Tracking_export_2018_08_27.zip
email_Tracking_export_2018_08_28.zip
email_Tracking_export_2018_08_29.zip
email_Tracking_export_2018_09_03.zip
email_Tracking_export_history_Novemeber.zip
email_Tracking_export_history_December.zip
email_Tracking_export_history_january.zip
email_Tracking_export_history_february.zip
email_Tracking_export_history_march.zip
email_Tracking_export_history_April.zip
Now I want to change the file names to be like below.
email_Tracking_export_2018_08_26.zip
email_Tracking_export_2018_08_27.zip
email_Tracking_export_2018_08_28.zip
email_Tracking_export_2018_08_29.zip
email_Tracking_export_2018_09_03.zip
email_Tracking_export_2017_11_01.zip
email_Tracking_export_2017_12_01.zip
email_Tracking_export_2018_01_01.zip
email_Tracking_export_2018_02_01.zip
email_Tracking_export_2018_03_01.zip
email_Tracking_export_2018_04_01.zip
Conditions:
If the file name already contains a date in yyyy_mm_dd format, leave it as is.
If the file name ends with a month name, convert it to the yyyy_mm_dd format.
If that month has already passed in the current year, keep the current year; if not, the year should be the previous year.
How can I achieve that in bash/Linux?

for f in email_Tracking_export_*.zip; do
    case "$f" in
        email_Tracking_export_????_??_??.zip)
            : ;;                                       # already in yyyy_mm_dd form, ignore
        *)
            date=$(stat -c %Y "$f")                    # modification time, seconds since epoch
            fmtdate=$(date --date="@$date" +%Y_%m_%d)  # format as yyyy_mm_dd (GNU date)
            mv "$f" "email_Tracking_export_$fmtdate.zip"
            ;;
    esac
done

Here are the steps in small parts, which you can look up individually and put together as a bash script (a rough sketch follows the list):
Make a key/value mapping from each month name to its numerical value (take care of lowercase/uppercase).
For each file, check its format.
Generate the new name for each file.
Then use the mv command to rename the file to the new name.
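For the month-name files, such a sketch might look like the following. This is illustrative only: it assumes bash 4+ (for the associative array), that the history files all match email_Tracking_export_history_<month>.zip, that "month has passed" means its number is not greater than the current month, and it hard-codes _01 as the day, as in the desired output.

#!/bin/bash
declare -A months=(
    [january]=01 [february]=02 [march]=03     [april]=04   [may]=05      [june]=06
    [july]=07    [august]=08   [september]=09 [october]=10 [november]=11 [december]=12
)
currentYear=$(date +%Y)
currentMonth=$(date +%m)

for f in email_Tracking_export_history_*.zip; do
    [ -e "$f" ] || continue                      # no history files at all
    month=${f#email_Tracking_export_history_}
    month=${month%.zip}
    month=${month,,}                             # lowercase, so "April" and "april" both match
    mm=${months[$month]}
    [ -n "$mm" ] || { echo "unrecognised month in $f" >&2; continue; }
    year=$currentYear
    if (( 10#$mm > 10#$currentMonth )); then     # month not reached yet this year -> previous year
        year=$(( currentYear - 1 ))
    fi
    echo mv "$f" "email_Tracking_export_${year}_${mm}_01.zip"   # drop the echo once the output looks right
done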

Related

Loop through directory and files with date string to find the file with highest suffix (e.g. "firsttable_20230113093000_12")

I am looking to adapt a shell script so that it cycles through files that have different table names, as well as different dates between files with the same table name, and returns the highest-suffix file for each.
An example of my files in a given directory:
firsttable_20230112093000_1
firsttable_20230112093000_2
firsttable_20230112093000_3
firsttable_20230112093000_4
firsttable_19990202090000_1
firsttable_19990202090000_2
secondtable_20220112090000_1
secondtable_20220112090000_2
secondtable_20220112090000_3
Desired Result:
firsttable_20230112093000_4
firsttable_19990202090000_2
secondtable_20220112090000_3
What's been done
Originally I only needed to find the highest suffix, as the dates would be the same for all tables, and what I had did work:
allTables=(
    'firsttable'
    'secondtable'
    'thirdtable'
    ...
)
for table in "${allTables[@]}"; do
    substring="_2"
    searchString="$table$substring"
    # Check if a file for the given table exists:
    if ls "$Path/$searchString"* 1> /dev/null 2>&1; then
        echo "$searchString* files exist. Proceeding..."
        lastFile=$(ls "$Path/$searchString"* | sort -rV | head -n1)
        echo "Highest suffix file: $lastFile"
    else
        echo "File $searchString not found: '$Path/$searchString'"
    fi
done
If I was to apply that to my new directory shown above, it would only be able to find:
Highest suffix file: firsttable_20230112093000_4
Highest suffix file: secondtable_20220112090000_3
I need to find a way to make the script also look at the dates, see if they are different, and if they are, treat them as separate groups. Would this require a regex to assess the filename? The filename format stays the same: "tablename_$$$$$$$$$$$$$$_nn" (an underscore after the table name and after the date, the suffix can go above single figures, and the date is always 14 characters).
Thanks in advance for any help!
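One possible direction, sketched only (not from the original thread; it assumes GNU sort -V, no whitespace in $Path or the file names, and the fixed tablename_<14 digits>_<suffix> layout): strip the numeric suffix to get a table-plus-date prefix, then version-sort within each prefix.

#!/bin/bash
# Group files by "tablename_date" (everything before the final _<suffix>),
# then report the highest suffix within each group.
cd "$Path" || exit 1
for prefix in $(printf '%s\n' *_*_* | sed -E 's/_[0-9]+$//' | sort -u); do
    lastFile=$(printf '%s\n' "${prefix}"_* | sort -V | tail -n1)
    echo "Highest suffix file: $lastFile"
done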

Comparing file created yesterday with file created today

I have files in a directory as shown in the below format.
$today and $yesterday are two variables holding today's date and yesterday's date; both will hold dates in the structure shown below.
today=$(date +"%Y-%m-%d")
yesterday=$(date -d "yesterday 13:00" '+%Y-%m-%d')
example-$today.txt
polar-$today.txt
example-$yesterday.txt
polar-$yesterday.txt
Example yesterday : example-2020-09-24.txt
Example today: example-2020-09-25.txt
Files are created on a daily basis by a cron job, so there will also be files in the structure below with tomorrow's date.
example-$tomorrow.txt
polar-$tomorrow.txt
I want to compare files starting with the same name on different dates, and if there is a difference, execute a Python script. The Python script takes today's file as its first argument if there is a difference.
if diff example-$today.txt example-$yesterday.txt
then
    echo "No difference"
else
    python script.py example-$today.txt
fi
If I had only 2 or 3 files I could write an if/else block for each file using diff as shown above, but the list will be populated with more unique names in the future, and writing an if statement for each is tedious.
Requirement :
Compare all the txt files in the directory that share the same name across yesterday and today, and if there is a difference, execute the Python script.
Seems like it's probably not too terrible to do:
for base in example polar; do
    if ! diff "${base}-$today.txt" "${base}-$yesterday.txt"; then
        python script.py "${base}-$today.txt"
    fi
done
That should be fairly maintainable, and you can write list='example polar ...' ... for base in $list, or list=$( cmd to dynamically generate names), or use an array. There's a lot of flexibility. For example, if you don't want to maintain the list of files, you could do:
for file in *-${today}.txt; do
    base="${file%-${today}.txt}"
    if ! diff "${base}-$today.txt" "${base}-$yesterday.txt"; then
        python script.py "${base}-$today.txt"
    fi
done
Note that I've removed the excess verbosity. Succeed quietly, fail loudly.
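For completeness, a minimal sketch of the array variant mentioned above (the names here are just placeholders):

bases=(example polar)                 # maintain the list in one place
for base in "${bases[@]}"; do
    if ! diff "${base}-$today.txt" "${base}-$yesterday.txt"; then
        python script.py "${base}-$today.txt"
    fi
done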

Change file's name using command line arguments Bash [duplicate]

This question already has answers here:
Change file's numbers Bash
(2 answers)
Closed 2 years ago.
I need to implement a script (duplq.sh) that would rename all the text files existing in the current directory using the command line arguments. So if the command duplq.sh pic 0 3 was executed, it would do the following transformation:
pic0.txt will have to be renamed pic3.txt
pic1.txt to pic4.txt
pic2.txt to pic5.txt
pic3.txt to pic6.txt
etc…
So the first argument is always the name of a file; the second and the third are always a positive digit.
I also need to make sure that when I execute my script, the first renaming (pic0.txt to pic3.txt) does not erase the existing pic3.txt file in the current directory.
Here's what I did so far:
#!/bin/bash
name="$1"
i="$2"
j="$3"
for file in $name*
do
    echo $file
    find /var/log -name 'name[$i]' | sed -e 's/$i/$j/g'
    i=$(($i+1))
    j=$(($j+1))
done
But the find command does not seem to work. Do you have other solutions ?
The problem you're trying to solve is actually somewhat tricky, and I don't think you've fully thought it through. For instance, what's the difference between duplq.sh pic 0 3 and duplq.sh pic 2 5 -- it looks like both should just add 3 to the number, or would the second skip "pic0.txt" and "pic1.txt"? What effect would either one have on files named "pic", "pic.txt", "picture.txt", "picture2.txt", "pic2-2.txt", or "pic999.txt"?
There are also a bunch of basic mistakes in the script you have so far:
You should (almost) always put variable references in double-quotes, to avoid unexpected word-splitting and wildcard expansion. So, for example, use echo "$file" instead of echo $file. In for file in $name*, you should put double-quotes around the variable but not the *, because you want that to be treated as a wildcard. Hence, the correct version is for file in "$name"*
Don't put variable references in single-quotes, they aren't expanded there. So in the find and sed commands, you aren't passing the variables' values, you're passing literal dollar signs followed by letters. Again, use double-quotes. Also, you don't have a "$" before "name", so it won't be treated as a variable even in double-quotes.
But the find and sed commands don't do what you want anyway. Consider find /var/log -name "name[1]" -- that looks for files named "name1", not "name1" + some extension. And it looks in /var/log and all of its subdirectories, which I'm pretty sure you don't want. And the "1" ("$i") may not be the number in the current filename. Suppose there are files named "pic0.jpg", "pic0.png", and "pic0.txt" -- on the first iteration, the loop might find all three with a pattern like "pic0*", then on the second and third iterations try to find "pic1*" and "pic2*", which don't exist. On the other hand, suppose there are files named "pic0.txt", "pic5.txt", and "pic8.txt" -- again, it might look for "pic0*" (ok), then "pic1*" (not found), and then "pic2*" (ditto).
Also, if you get to multi-digit numbers, the pattern "name[10]" will match "name0" and "name1", but not "name10". I don't know why you added the brackets there, but they don't do anything you'd want.
You already have the files being listed one at a time in the $file variable, searching again with different criteria just adds confusion.
Also, at no point in the script do you actually rename anything. The find | sed line will (if it works) print the new name for the file, but not actually rename it.
BTW, when you do use the mv command, use either mv -n or mv -i to keep it from silently and irretrievably overwriting files if/when a name conflict occurs.
To prevent overwriting when incrementing file numbers, you need to do the renames in reverse numeric order (i.e. rename "pic3.txt" to "pic6.txt" before renaming "pic0.txt" to "pic3.txt"). This is especially tricky because if you just sort filenames in reverse alphabetic order, you'll get "pic7.txt" before "pic10.txt". But you can't do a numeric sort without removing the "pic" and ".txt" parts first.
IMO this is actually the trickiest problem to be solved in order to get this script to work right. It might be simplest to specify the largest index number as one of the arguments, and have it start there and count down to 0 (looping over numbers rather than files), and then for each number iterate over matching files (e.g. "pic0.jpg", "pic0.png", and "pic0.txt").
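A minimal sketch of that count-down idea (assumptions: the caller passes the largest existing index as a fourth argument, the shift is non-negative, and every file has exactly one extension; the names are illustrative, not a finished implementation):

#!/bin/bash
# Hypothetical usage: duplq.sh pic 0 3 MAXINDEX
name=$1; from=$2; to=$3; max=$4
delta=$(( to - from ))
for (( i = max; i >= from; i-- )); do        # highest index first, so targets are never overwritten
    for f in "$name$i".*; do                 # e.g. pic3.txt, pic3.png
        [ -e "$f" ] || continue              # glob matched nothing for this index
        ext=${f##*.}
        mv -n "$f" "$name$(( i + delta )).$ext"
    done
done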
So I assume that 0 3 is just a measure of the difference between the old and the new number, and is equivalent to 1 4 or 100 103.
To avoid overwriting existing files, create a new temp dir, move all affected files there, and move all of them back in the end.
#!/bin/bash
#
# duplq.sh pic 0 3
base="$1"
delta=$(( $3 - $2 ))
# echo delta $delta
target=$(mktemp -d)
echo $target
# /tmp/tmp.7uXD2GzqAb

add () {
    f="$1"
    b="$2"
    d=$3
    num=${f#./${b}}
    # echo -e "file: $f \tnum: $num \tnum + d: $((num + d))"
    echo -e "$((num + d))"
}

for f in $(find -maxdepth 1 -type f -regex ".*/${base}[0-9]+")
do
    newnum=$(add "$f" "${base}" $delta)
    echo mv "$f" "$target/${base}$newnum"
done
# exit
echo mv $target/${base}* .
First I tried to use just bash syntax to check whether removing the prefix (pic) leaves only digits remaining. I also didn't handle the extension .txt - this is left as an exercise for the reader. From the question it is unclear - it is never explicitly stated that all files share the same extension, but all files in the example do.
With -regex ".*/${base}[0-9]+" in find, the values are guaranteed to be just digits.
num=${f#./${b}}
removes the base ("pic") from file f; delta d is then added.
Instead of really moving, I just echoed the mv command.
#TODO: Implement the file name extension conservation.
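A possible way to carry the extension along, inside the for loop above instead of the add call (a sketch only; it assumes exactly one dot before the extension, and the find regex would need to be widened to something like ".*/${base}[0-9]+\..*"):

ext=${f##*.}                      # e.g. txt
stem=${f%.*}                      # e.g. ./pic0
num=${stem#./${base}}             # e.g. 0
echo mv "$f" "$target/${base}$(( num + delta )).$ext"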
And 2 other pitfalls came to my mind: if you have 3 files pic0, pic00 and pic000, they will all be renamed to pic3. And pic08 will be cut into pic and 08; 08 will then be read as an octal number (likewise 09, 012129 and so on) and lead to an error.
One way to solve this issue is to prepend a "1" to the extracted number (001 or 018), then add 3, and remove the leading 1:
001 1001 1004 004
018 1018 1021 021
but this clever solution leads to new problems:
999 1999 2002 002?
So a leading 1 has to be cut off, a leading 2 has to be reduced by 1. But now, if the delta is bigger, let's say 300:
018 1018 1318 318
918 1918 2218 1218
Well - that seems to be working.
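Put into code, the trick might look like this (a sketch; shift_num is a hypothetical helper that follows the "cut a leading 1, reduce a leading 2 by 1" rule just described):

shift_num () {
    local num=$1 delta=$2
    local shifted=$(( 1$num + delta ))      # 018 -> 1018 -> 1021, no octal trouble
    case ${shifted:0:1} in
        1) echo "${shifted:1}" ;;                            # 1021 -> 021
        *) echo "$(( ${shifted:0:1} - 1 ))${shifted:1}" ;;   # 2002 -> 1002, 2218 -> 1218
    esac
}
shift_num 018 3      # -> 021
shift_num 999 3      # -> 1002
shift_num 918 300    # -> 1218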

How can I make a copy of a file with a new name that contains the timestamp of the original in the filename?

I am writing a bash script that will copy a file into a directory, where the new copy has the same name but with the timestamp appended to the filename (prior to the extension).
How can I achieve this?
To insert the timestamp of the file itself into the original file name, as well as preserving that timestamp in the target file, the following works in GNU environments:
file="/some/dir/path-to-file.xxx";
cp -p "$file" "${file%.*}-$(date -r"$file" '+%Y%m%d-%H%M%S').${file##*.}"
Adding proper use of the basename(1) command into the mix would allow you to copy the file into a different directory.
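For example (a sketch; the destination directory here is made up):

file="/some/dir/path-to-file.xxx"
dest="/backup/dir"                           # hypothetical target directory
base=$(basename "$file")                     # path-to-file.xxx
cp -p "$file" "$dest/${base%.*}-$(date -r "$file" '+%Y%m%d-%H%M%S').${base##*.}"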
It's more challenging to do this outside of GNU/Linux environments and you have to start visiting languages like awk, perl, python, even php, to replace the date -r command.
file="file_to_copy"
cp $file "/path/to/dest/$file"`stat --printf "%X" $file`
You can look at the manual page of stat (man 1 stat) to choose the appropriate timestamp for your needs (creation, last access etc.)
In this example, I chose %X which means time of last access, seconds since Epoch
Suppose
var="/path/to/filename.ext" #path is optional
Do
var1="${var##*/}"
cp "$var" "/path/to/new/directory/${var1%.*}$(date +%s).${var1##*.}"
For more on ${var%.*} & ${var##*.}, see [ shell parameter expansion ].
date manpage says :
%s seconds since 1970-01-01 00:00:00 UTC

How can I append files to one another in the order I want in Linux using pipes or redirects?

Let's say I have different files in a folder that contain the same day's data, such as:
ThisFile_2012-10-01.txt
ThatFile_2012-10-01.txt
AnotherSilly_2012-10-01.txt
InnovativeFilesEH_2012-10-01.txt
How do I append them to each other in my preferred order? Would the below be the exact way I need to type it in my shell script? The folder gets the same files every day but with different dates. Old dates disappear, so every day there are just these 4 files.
InnovativeFilesEH_*.txt >> ThatFile_*.txt
ThisFile_*.txt >> ThatFile_*.txt
AnotherSilly_*.txt >> ThatFile_*.txt
Finally, a use for "cat" as intended :-):
cat InnovativeFilesEH_*.txt ThisFile_*.txt AnotherSilly_*.txt >> ThatFile_*.txt
Assumption:
You want to preserve a specific ordering in which these files are appended.
Using the example you provided:
#!/bin/sh
# First find the actual files we want to operate on
# and save them into shell variables:
final_output_file="Desired_File_Name.txt"
that_file=$(find . -name 'ThatFile_*.txt')
inno_file=$(find . -name 'InnovativeFilesEH_*.txt')
this_file=$(find . -name 'ThisFile_*.txt')
another_silly_file=$(find . -name 'AnotherSilly_*.txt')
# Now append the 4 files to Desired_File_Name.txt in the specific order:
cat "$that_file" > "$final_output_file"
cat "$inno_file" >> "$final_output_file"
cat "$this_file" >> "$final_output_file"
cat "$another_silly_file" >> "$final_output_file"
Adjust the ordering in which you want the files to be appended by reordering or modifying the cat statements.
