How to replace date part in filename with current date - linux

How do I replace only the date part of every filename in a directory with the current date in Unix?
Folder path: C:/shan
Sample files:
CN_Apria_837p_20180924.txt
DN_Apria_837p_20150502.txt
GN_Apria_837p_20160502.txt
CH_Apria_837p_20170502.txt
CU_Apria_837p_20180502.txt
PN_Apria_837p_20140502.txt
CN_Apria_837p_20101502.txt
Desired result should be:
CN_Apria_837p_20190502.txt
DN_Apria_837p_20190502.txt
GN_Apria_837p_20190502.txt
CH_Apria_837p_20190502.txt
CU_Apria_837p_20190502.txt
PN_Apria_837p_20190502.txt
CN_Apria_837p_20190502.txt
Edit:
I'm completely new to Unix shell scripting. I tried the script below, but it's not working.
#!/bin/bash
for i in ls $1 | grep -E '[0-9]{4}-[0-9]{2}-[0-9]{2}'
do
x=echo $i | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}'
y=echo $i | sed "s/$x/$(date +%F)/g"
mv $1/$i $1/$y 2>/dev/null #incase if old date is same as current date
done

I would use regular expressions here. From the bash man-page:
An additional binary operator, =~, is available, with the same
precedence as == and !=. When it is used, the string to the right
of the operator is considered an extended regular expression and
matched accordingly (as in regex(3)). The return value is 0 if the
string matches the pattern, and 1 otherwise. .... Substrings
matched by parenthesized subexpressions within the regular
expression are saved in the array variable BASH_REMATCH. ...
The element of BASH_REMATCH with index n is the portion of the
string matching the nth parenthesized sub-expression.
Hence, assuming that the variable x holds the name of one of the files
in question, the code
if [[ $x =~ ^(.*_)[0-9]+([.]txt$) ]]
then
mv "$x" "$BASH_REMATCH[1]$(date +%Y%m%d)$BASH_REMATCH[2]"
fi
first tests roughly whether the file indeed follows the required naming scheme, and then modifies the name accordingly.
Of course in practice, you will tailor the regexp to match your application better. Only you can know what variations in the file name are permitted.
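For example, a minimal sketch that applies this test to every .txt file in a directory passed as the first argument (the directory-argument handling and the mv -i safeguard are my additions, not part of the answer itself):
#!/bin/bash
# Sketch: rename every *_<digits>.txt file in the given directory to use today's date.
dir=${1:-.}   # directory to process; defaults to the current directory
for x in "$dir"/*.txt; do
    if [[ $x =~ ^(.*_)[0-9]+([.]txt)$ ]]; then
        # -i prompts before overwriting, e.g. when two names collapse to the same date
        mv -i -- "$x" "${BASH_REMATCH[1]}$(date +%Y%m%d)${BASH_REMATCH[2]}"
    fi
done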

The below should do this
for f in $(find /path/to/files -name "*_*_*_*.txt")
do
newname=$(echo "$f" | sed -r "s/[12][0-9]{3}[01][0-9][0-3][0-9]/$(date '+%Y%m%d')/g")
mv "$f" "$newname"
done
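A note of caution: the $(find ...) loop word-splits its output, so it breaks on filenames containing spaces. A hedged alternative under the same assumptions (the files sit directly in /path/to/files, GNU sed for -r) is to let a glob do the matching:
for f in /path/to/files/*_*_*_*.txt; do
    [ -e "$f" ] || continue   # skip the unexpanded pattern if nothing matched
    newname=$(echo "$f" | sed -r "s/[12][0-9]{3}[01][0-9][0-3][0-9]/$(date '+%Y%m%d')/g")
    mv -- "$f" "$newname"
done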

Try this Shellcheck-clean code:
#! /bin/bash -p
readonly dir=$1
shopt -s nullglob # Make glob patterns that match nothing expand to nothing
readonly dateglob='20[0-9][0-9][0-9][0-9][0-9][0-9]'
currdate=$(date '+%Y%m%d')
# shellcheck disable=SC2231
for path in "$dir"/*_${dateglob}.* ; do
name=${path##*/}
newname=${name/_${dateglob}./_${currdate}.}
if [[ $newname != "$name" ]] ; then
newpath="$dir/$newname"
printf "%q -> %q\\n" "$path" "$newpath"
mv -i -- "$path" "$newpath"
fi
done
shopt -s nullglob stops the code trying to process a garbage path if nothing matches the glob pattern in for path in ....
The pattern assigned to dateglob assumes that you will not have to process dates before 2000 (or after 2099!). Change it if that assumption is not valid.
The # shellcheck ... line is to prevent Shellcheck warning about the use of ${dateglob} without quotes. The quotes would be wrong in this case because they would prevent the glob pattern being expanded.
The pattern used to match filenames (*_${dateglob}.*) will match many more forms of filename than the examples given (e.g. A_20180313.tar.gz). You might want to change it.
See Removing part of a string (BashFAQ/100 (How do I do string manipulation in bash?)) for information about the Bash string manipulation mechanisms used (${path##...}, ${name/...}).
I've added a printf to output details of what is being moved.
The -i option to mv prompts for confirmation if a file would be overwritten. This turns out to be an issue for the example files because both CN_Apria_837p_20180924.txt and CN_Apria_837p_20101502.txt are identical except for the date, so the code tries to rename them to the same thing.
If any of the files with dates in their names have names beginning with '.', the code will not process them. Add line shopt -s dotglob somewhere before the loop if that is an issue.
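As a quick illustration of the two parameter expansions used above (the path here is made up):
path=/some/dir/CN_Apria_837p_20180924.txt
name=${path##*/}                       # strip everything up to the last '/': CN_Apria_837p_20180924.txt
newname=${name/_20180924./_20190502.}  # replace the first match of the pattern: CN_Apria_837p_20190502.txt
echo "$name -> $newname"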

Related

How to delete numbers, dashes and underscores in the beginning of a file name

I have thousands of mp3 files but all with unusual file names such as 1-2songone.mp3, 2songtwo.mp3, 2_2_3_songthree.mp3. I want to remove all the numbers, dashes and underscores in the beginning of these files and get the result:
songone.mp3
songtwo.mp3
songthree.mp3
This can be done using extended globbing:
$ ls
1-2songone.mp3 2_2_3_songthree.mp3 2songtwo.mp3
$ shopt -s extglob
$ for fname in *.mp3; do mv -- "$fname" "${fname##*([-_[:digit:]])}"; done
$ ls
songone.mp3 songthree.mp3 songtwo.mp3
This uses parameter expansion: ${fname##pattern} removes the longest possible match from the beginning of fname. As the pattern, we use *([-_[:digit:]]), where *(pattern) stands for "zero or more matches of pattern", and the actual pattern is a bracket expression for hyphens, underscores and digits.
Remarks:
The -- after mv indicates the end of options for mv and makes sure that filenames starting with - aren't interpreted as options.
The *() expression requires the extglob shell option. As pointed out, if you don't want extended globs later, you have to unset it again with shopt -u extglob.
As per Gordon Davisson's comment: this will clobber files if you have, for example, something like 1file.mp3 and 2file.mp3. To avoid that, you can either use mv -i (or --interactive), which will prompt you before overwriting a file, or mv -n (or --noclobber), which will just not overwrite any files.
triplee points out that this needlessly moves files onto themselves if they don't start with a hyphen, underscore or digit. To avoid that, we can iterate only over matching files with
for fname in [-_[:digit:]]*.mp3; do mv -- "$fname" "${fname##*([-_[:digit:]])}"; done
which makes sure that there is something to rename.
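Combining both refinements (iterate only over names that actually start with a hyphen, underscore or digit, and refuse to overwrite existing files), a hedged sketch might look like this; note that mv -n is a GNU/BSD extension:
shopt -s extglob
for fname in [-_[:digit:]]*.mp3; do
    mv -n -- "$fname" "${fname##*([-_[:digit:]])}"
done
shopt -u extglob   # restore the default if you don't need extended globs afterwards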
Benjamin W.'s answer is helpful and efficient, but has two drawbacks:
It requires setting global shell option extglob, which should be restored to its previous value afterward (the alternative, at the cost of creating an extra process, is to use a subshell: (shopt -s extglob; for fname ...)).
The extglob syntax, an extension to regular glob syntax, is familiar to few people and still less powerful than true regular expressions.
Using Bash's regex-matching operator, =~:
for f in *.mp3; do [[ $f =~ ^[0-9_-]+(.+)$ ]] && echo mv "$f" "${BASH_REMATCH[1]}"; done
Remove the echo to perform actual renaming.
$f =~ ^[0-9_-]+(.+)$ matches the longest nonempty sequence of digits, hyphens, and underscores at the start of the filename, followed by any nonempty sequence of characters captured in a parenthesized subexpression (capture group).
If the match succeeds (&&), the mv command is invoked, with the captured subexpression - accessible via element 1 of the special BASH array variable BASH_REMATCH - forming the target filename.
You may do it this way too:
find . -type f -name "*.mp3" -print0 | while read -r -d '' line
do
mv "$line" "$( sed -E 's!(.*)/[^[:alpha:]]*([[:alpha:]].*mp3)$!\1/\2!' <<<"$line")" 2>/dev/null
done
Using sed gives you more control over the regex, I guess. Also, the 2>/dev/null is for ignoring the mv error for already converted/correct filenames.
Note:
This will recursively change the filenames across subfolders too.
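If you only want to rename files in the top folder, a hedged variant (assuming a find that supports -maxdepth, such as GNU or BSD find) is:
find . -maxdepth 1 -type f -name "*.mp3" -print0 | while read -r -d '' line
do
    mv "$line" "$( sed -E 's!(.*)/[^[:alpha:]]*([[:alpha:]].*mp3)$!\1/\2!' <<<"$line")" 2>/dev/null
done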

basename command confusion

Given the following command:
$(basename "/this-directory-does-not-exist/*.txt" ".txt")
it outputs not only txt files but other files as well. On the other hand if I change ".txt" to something like "gobble de gook" it returns:
*.txt
I'm confused with regard to why it returns the other extension types.
Your problem doesn't stem from basename, but from inadvertent use of the shell's pathname expansion (globbing) feature due to lack of quoting:
If you use the result of your command substitution ($(...)) unquoted:
$ echo $(basename "/this-directory-does-not-exist/*.txt" ".txt")
you effectively execute the following:
$ echo * # unquoted '*' expands to all files and folders in the current dir
because basename "/this-directory-does-not-exist/*.txt" ".txt" returns literal * (it strips the extension from filename *.txt;
the reason that the filename pattern *.txt didn't expand to an actual filename is that the shell leaves globbing patterns that don't match anything unmodified (by default).)
If you double-quote the command substitution, the problem goes away:
$ echo "$(basename "/this-directory-does-not-exist/*.txt" ".txt")" # -> *
However, even with this problem resolved, your basename command will only work correctly if the glob expands to one matching file, because the syntax form you're using only supports one filename argument.
GNU basename and BSD basename support the non-POSIX -s option, which allows for multiple file operands from which to strip the extension:
basename -s .txt "/some-dir/*.txt"
Assuming you use bash, you can put it all together robustly as follows:
#!/usr/bin/env bash
names=() # initialize result array
files=( *.txt ) # perform globbing and capture matching paths in an array
# Since the shell by default returns a pattern as-is if there are no matches,
# we test the first array item for existence; if it refers to an existing
# file or dir., we know that at least 1 match was found.
if [[ -e ${files[0]} ]]; then
# Apply the `basename` command with suffix-stripping to all matches
# and read the results robustly into an array.
# Note that just `names=( $(basename ...) )` would NOT work robustly.
readarray -t names < <(basename -s '.txt' "${files[@]}")
# Note: `readarray` requires Bash 4; in Bash 3.x, use the following:
# IFS=$'\n' read -r -d '' -a names < <(basename -s '.txt' "${files[@]}")
fi
# "${names[@]}" now contains an array of suffix-stripped basenames,
# or is empty, if no files matched.
printf '%s\n' "${names[@]}" # print names line by line
Note: The -e test comes with a tiny caveat: if there are matches and the first match is a broken symlink, the test will mistakenly conclude that there are no matches.
A more robust option is to use shopt -s nullglob to make the shell expand non-matching globs to the empty string, but note that this is a shell-global option, and it is good practice to return it to its previous value afterward, which makes that approach more cumbersome.
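For completeness, a minimal sketch of that nullglob approach, saving and restoring the option around the glob (same *.txt pattern assumed):
nullglob_state=$(shopt -p nullglob)  # remember the current nullglob setting
shopt -s nullglob
files=( *.txt )                      # expands to an empty array if nothing matches
$nullglob_state                      # restore the previous nullglob setting
if (( ${#files[@]} > 0 )); then
    readarray -t names < <(basename -s '.txt' "${files[@]}")
fi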
Try to put quotes around the whole thing. What you are seeing is globbing: your command becomes *, which is then expanded to all files in the current directory. This does not happen inside single or double quotes.

If multiple directories exist then move the directories - test if a globbing pattern matches anything

I want to know how I can use an if statement in a shell script to check the existence of multiple directories.
For example, if /tmp has subdirectories test1, test2, test3, I want to move them to another directory.
I am using if [ -d /tmp/test* ]; then mv test* /pathOfNewDir
but it does not work on the if statement part.
The -d test only accepts one argument, so you'll need to test each directory individually. I would also not recommend moving test* as it may match more than you intended.
Use the double-bracket test syntax (e.g. if [[ -d ... ]]), which is bash-specific but tends to be clearer and have fewer gotchas than the single-bracket syntax. If you just need to check a few directories, you can do it with a simple statement like if [[ -d /tmp/test1 && -d /tmp/test2 && -d /tmp/test3 ]]; then...
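Put together as a complete sketch (directory names and target path taken from the question):
#!/bin/bash
if [[ -d /tmp/test1 && -d /tmp/test2 && -d /tmp/test3 ]]; then
    mv /tmp/test1 /tmp/test2 /tmp/test3 /pathOfNewDir
fi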
Unfortunately, the shell's file-testing operators (such as -d and -f) operate on a single, literal path only:
A conditional such as [ -d /tmp/test* ] won't work, because if /tmp/test* expands to multiple matches, you'll get a syntax error (only 1 argument accepted).
The bash variant [[ -d /tmp/test* ]] doesn't work either, because no globbing (pathname expansion) is performed inside [[ ... ]].
To test whether a globbing pattern matches anything, the cleanest approach is to define an auxiliary function (this solution is POSIX-compliant):
exists() { [ -e "$1" ]; }
Invoke it with an [unquoted] pattern, e.g.:
exists foo* && echo 'HAVE MATCHES'
# or, in an `if` statement:
if exists foo*; then # ...
The only caveat is that if shopt -s failglob is in effect in bash, an error message will be printed to stderr if there's no match, and the rest of the command will not be executed.
See below for an explanation of the function.
Applied to your specific scenario, we get (using bash syntax):
# Define aux. function
exists() { [[ -e $1 ]]; }
exists /tmp/test*/ && mv /tmp/test*/ /path/to/new/dir
Note the trailing / in /tmp/test*/ to ensure that only directories match, if any.
&& ensures that the following command is only executed if the function's exit code indicates true.
mv /tmp/test*/ ... moves all matching directories at once to the new target directory.
Alternatively, capture the globbing results in a helper array variable:
if matches=(/tmp/test*/) && [[ -e ${matches[0]} ]]; then
mv "${matches[@]}" /path/to/new/dir
fi
Or, process matches individually:
for d in /tmp/test*/; do
[[ -e $d ]] || break # stop, if no actual match
# Process individual match.
mv "$d" /path/to/new/dir
done
Explanation of auxiliary function exists() { [ -e "$1" ]; }:
It takes advantage of several shell features:
If you invoke it with a[n unquoted] pattern such as exists foo*, the shell will expand foo* to all matching files/directories and pass their names as individual arguments to the function.
If there are no matches, the pattern will be passed as is to the function - this behavior is mandated by POSIX.
Caveat: bash has configuration items that allow changing this behavior (shell options failglob and nullglob) - though by default it acts as mandated by POSIX in this case. (zsh, sadly, by default fails if there's no match.)
Inside the function, it's sufficient to examine the 1st argument ($1) to determine whether any matches were found:
If the 1st argument, $1 refers to an actual, existing filesystem item (as indicated by the exit code of the -e file-test operator), the implication is that the pattern indeed matched something (at least one, possibly more items).
Otherwise, the implication is that the pattern was passed as is, implying that no matches were found.
Note that the exit code of the -e test - due to being the last command in the function - implicitly serves as the exit code of the function as a whole.
It looks like you may want to use find:
find /tmp -name "test*" -maxdepth 1 -type d -exec mv \{\} /target/directory \;
This finds all test* directories directly under /tmp without recursion and moves them to /target/directory.
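If you want to preview which directories would be moved before attaching the -exec action, the same find with -print is a harmless dry run:
find /tmp -maxdepth 1 -type d -name "test*" -print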
This approach uses ls and grep to create a list of matching directories or write an error in case no such directories are found:
IFS="
" # input is separated with newlines
if dirs=$( ls -1 -F | grep "^test.*/" | tr -d "/" )
then
# directories found - move them:
for d in $dirs
do
mv "$d" "$target_directory"/
done
else
# no directories found - send error
fi
While it would seem feasible to use find for such a task, find does not directly provide feedback on the number of matches as required by the OP according to the comments.
Note: Using ls for the task introduces a few limitations on filenames. This approach will not work with filenames containing newlines or wildcard characters.
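If you prefer find but still want feedback on the number of matches, a hedged alternative (assuming bash and a find with -print0, e.g. GNU find; target_directory as in the snippet above) is to collect the results into an array first:
dirs=()
while IFS= read -r -d '' d; do
    dirs+=( "$d" )
done < <(find /tmp -maxdepth 1 -type d -name 'test*' -print0)

if (( ${#dirs[@]} > 0 )); then
    echo "Found ${#dirs[@]} matching directories"
    mv -- "${dirs[@]}" "$target_directory"/
else
    echo "no directories found" >&2
fi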

How to remove the extension of a file?

I have a folder that is full of .bak files and some other files also. I need to remove the extension of all .bak files in that folder. How do I make a command which will accept a folder name and then remove the extension of all .bak files in that folder ?
Thanks.
To remove a string from the end of a BASH variable, use the ${var%ending} syntax. It's one of a number of string manipulations available to you in BASH.
Use it like this:
# Run in the same directory as the files
for FILENAME in *.bak; do mv "$FILENAME" "${FILENAME%.bak}"; done
That works nicely as a one-liner, but you could also wrap it as a script to work in an arbitrary directory:
# If we're passed a parameter, cd into that directory. Otherwise, do nothing.
if [ -n "$1" ]; then
cd "$1"
fi
for FILENAME in *.bak; do mv "$FILENAME" "${FILENAME%.bak}"; done
Note that while quoting your variables is almost always a good practice, the glob-based for FILENAME in *.bak loop is already safe for names containing spaces; the remaining risk is filenames that begin with a dash, which mv would treat as options (guard against that with mv --). Read David W.'s answer for a more-robust solution, and this document for alternative solutions.
There are several ways to remove file suffixes:
In BASH and Kornshell, you can use variable filtering (parameter expansion). Search for ${parameter%word} in the BASH manpage for complete information. Basically, # is a left filter and % is a right filter. You can remember this because # is to the left of %.
If you use a double filter (i.e. ## or %%), you are trying to filter on the biggest match. If you use a single filter (i.e. # or %), you are trying to filter on the smallest match.
What matches is filtered out and you get the rest of the string:
file="this/is/my/file/name.txt"
echo ${file#*/}  #Matches "this/" and will print out "is/my/file/name.txt"
echo ${file##*/} #Matches "this/is/my/file/" and will print out "name.txt"
echo ${file%/*}  #Matches "/name.txt" and will print out "this/is/my/file"
echo ${file%%/*} #Matches "/is/my/file/name.txt" and will print out "this"
Notice this is a glob match and not a regular expression match! If you want to remove a file suffix:
file_sans_ext=${file%.*}
The .* will match the period and all characters after it. Since it is a single %, it will match the smallest glob on the right side of the string. If the filter can't match anything, the result is the same as your original string.
You can verify a file suffix with something like this:
if [ "${file}" != "${file%.bak}" ]
then
echo "$file is a type '.bak' file"
else
echo "$file is not a type '.bak' file"
fi
Or you could do this:
file_suffix=${file##*.}
echo "My file is a file '.$file_suffix'"
Note that this removes the leading period from the file extension (which is why the echo above prepends a '.').
Next, we will loop:
find . -name "*.bak" -print0 | while read -d $'\0' file
do
echo "mv '$file' '${file%.bak}'"
done | tee find.out
The find command finds the files you specify. The -print0 separates the names of the files with a NUL character, which is one of the few characters not allowed in a file name. The -d $'\0' tells read that the input separator is a NUL character. See how nicely find -print0 and read -d $'\0' work together?
You should almost never use the for file in $(ls *.bak) method. This will fail if the files have any white space in the name.
Notice that this command doesn't actually move any files. Instead, it produces a find.out file with a list of all the file renames. You should always do something like this when you do commands that operate on massive amounts of files just to be sure everything is fine.
Once you've determined that all the commands in find.out are correct, you can run it like a shell script:
$ bash find.out
rename .bak '' *.bak
(rename is in the util-linux package)
Caveat: there is no error checking:
#!/bin/bash
cd "$1"
for i in *.bak ; do mv -f "$i" "${i%%.bak}" ; done
You can always use the find command to process the .bak files in subdirectories as well:
for FILENAME in `find . -name "*.bak"`; do mv --force "$FILENAME" "${FILENAME%.bak}"; done

Linux shell script to add leading zeros to file names

I have a folder with about 1,700 files. They are all named like 1.txt or 1497.txt, etc. I would like to rename all the files so that all the filenames are four digits long.
I.e., 23.txt becomes 0023.txt.
What is a shell script that will do this? Or a related question: How do I use grep to only match lines that contain \d.txt (i.e., one digit, then a period, then the letters txt)?
Here's what I have so far:
for a in [command i need help with]
do
mv $a 000$a
done
Basically, run that three times, with commands there to find one digit, two digits, and three digit filenames (with the number of initial zeros changed).
Try:
for a in [0-9]*.txt; do
mv $a `printf %04d.%s ${a%.*} ${a##*.}`
done
Change the filename pattern ([0-9]*.txt) as necessary.
A general-purpose enumerated rename that makes no assumptions about the initial set of filenames:
X=1;
for i in *.txt; do
mv $i $(printf %04d.%s ${X%.*} ${i##*.})
let X="$X+1"
done
On the same topic:
Bash script to pad file names
Extract filename and extension in bash
Using the rename (prename in some cases) script that is sometimes installed with Perl, you can use Perl expressions to do the renaming. The script skips renaming if there's a name collision.
The command below renames only files that have four or fewer digits followed by a ".txt" extension. It does not rename files that do not strictly conform to that pattern. It does not truncate names that consist of more than four digits.
rename 'unless (/0+[0-9]{4}.txt/) {s/^([0-9]{1,3}\.txt)$/000$1/g;s/0*([0-9]{4}\..*)/$1/}' *
A few examples:
Original Becomes
1.txt 0001.txt
02.txt 0002.txt
123.txt 0123.txt
00000.txt 00000.txt
1.23.txt 1.23.txt
Other answers given so far will attempt to rename files that don't conform to the pattern, produce errors for filenames that contain non-digit characters, perform renames that produce name collisions, try and fail to rename files that have spaces in their names and possibly other problems.
for a in *.txt; do
b=$(printf %04d.txt ${a%.txt})
if [ $a != $b ]; then
mv $a $b
fi
done
One-liner:
ls | awk '/^([0-9]+)\.txt$/ { printf("%s %04d.txt\n", $0, $1) }' | xargs -n2 mv
How do I use grep to only match lines that contain \d.txt (IE 1 digit, then a period, then the letters txt)?
grep -E '^[0-9]\.txt$'
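For example, with some sample names fed to grep:
$ printf '%s\n' 1.txt 23.txt 1497.txt | grep -E '^[0-9]\.txt$'
1.txt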
Let's assume you have files with the .dat extension in your folder. Just copy this code to a file named run.sh, make it executable by running chmod +x run.sh, and then execute it using ./run.sh:
#!/bin/bash
num=0
for i in *.dat
do
a=`printf "%05d" $num`
mv "$i" "filename_$a.dat"
let "num = $(($num + 1))"
done
This will convert all files in your folder to filename_00000.dat, filename_00001.dat, etc.
This version also supports handling strings before (and after) the number. Basically you can do any regex matching plus printf, as long as your awk supports it (the three-argument match() used here requires GNU awk). It also supports whitespace characters (except newlines) in filenames.
for f in *.txt ;do
mv "$f" "$(
awk -v f="$f" '{
if ( match(f, /^([a-zA-Z_-]*)([0-9]+)(\..+)/, a)) {
printf("%s%04d%s", a[1], a[2], a[3])
} else {
print(f)
}
}' <<<''
)"
done
To only match single-digit text files, anchor the pattern:
$ ls | grep '^[0-9]\.txt$'
One-liner hint for finding the next free numbered output filename (initialize a to 0 first if it might already hold a value):
while [ -f ./result/result`printf "%03d" $a`.txt ]; do a=$((a+1));done
RESULT=result/result`printf "%03d" $a`.txt
To provide a solution that's cautiously written to be correct even in the presence of filenames with spaces:
#!/usr/bin/env bash
pattern='%04d%s' # pad the number with four digits, then append the suffix: change the padding to taste
# enable extglob syntax: +([[:digit:]]) means "one or more digits"
# enable the nullglob flag: If no matches exist, a glob returns nothing (not itself).
shopt -s extglob nullglob
for f in [[:digit:]]*; do # iterate over filenames that start with digits
suffix=${f##+([[:digit:]])} # find the suffix (everything after the leading digits)
number=${f%"$suffix"} # find the number (everything before the suffix)
printf -v new "$pattern" "$number" "$suffix" # pad the number, then append the suffix
if [[ $f != "$new" ]]; then # if the result differs from the old name
mv -- "$f" "$new" # ...then rename the file.
fi
done
There is a rename.ul command from the util-linux package, installed by default (at least in Ubuntu).
Its usage is (see man rename.ul):
rename [options] expression replacement file...
The command will replace the first occurrence of expression with the given replacement for the provided files.
While forming the command you can use:
rename.ul -nv replace-me with-this in-all?-these-files*
to make no changes but see what changes the command would make. When you are sure, re-execute the command without the -v (verbose) and -n (no-act) options.
For your case the commands are:
rename.ul "" 000 ?.txt
rename.ul "" 00 ??.txt
rename.ul "" 0 ???.txt
