Change file numbers in Bash - Linux

I need to implement a script (duplq.sh) that renames all the text files in the current directory based on the command-line arguments. So if the command duplq.sh pic 0 3 were executed, it would perform the following transformation:
pic0.txt will have to be renamed pic3.txt
pic1.txt to pic4.txt
pic2.txt to pic5.txt
pic3.txt to pic6.txt
etc…
So the first argument is always a file-name prefix, and the second and third are always positive integers.
I also need to make sure that when I execute my script, the first renaming (pic0.txt to pic3.txt) does not erase the existing pic3.txt file in the current directory.
Here's what I did so far:
#!/bin/bash
name="$1"
i="$2"
j="$3"
for file in $name*
do
    echo $file
    find /var/log -name 'name[$i]' | sed -e 's/$i/$j/g'
    i=$(($i+1))
    j=$(($j+1))
done
But the find command does not seem to work. Do you have other solutions?

A possible solution...
#!/bin/sh
# Strip the prefix and extension to get the bare numbers, then sort them
# in reverse so the highest number is renamed first; that way no rename
# ever overwrites a file that has not been moved yet.
offset=$(($3 - $2))
NUMBERS=$(ls "$1"*.txt | sed -e "s/^$1//" -e 's/\.txt$//' | sort -n -r)
for N in $NUMBERS
do
    NEW=$((N + offset))
    echo "$1$N.txt -> $1$NEW.txt"
    mv "$1$N.txt" "$1$NEW.txt"
done
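For the example in the question, a run would look like this (assuming the script above is saved as duplq.sh and made executable):
$ ls
pic0.txt  pic1.txt  pic2.txt  pic3.txt
$ ./duplq.sh pic 0 3
pic3.txt -> pic6.txt
pic2.txt -> pic5.txt
pic1.txt -> pic4.txt
pic0.txt -> pic3.txt
Renaming from the highest number down is exactly what protects the existing pic3.txt: by the time pic0.txt claims that name, the old pic3.txt has already been moved to pic6.txt.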

Another approach: find the candidate files, sort the names by number, use sed to drop the entries numbered lower than $2, reverse the sort, and rename from the highest number down to the lowest so nothing gets overwritten:
#!/bin/sh
find . -maxdepth 1 -type f -name "$1"'[0-9]*' |
sort -g | sed -n "/^\.\/$1$2\./"',$p' | sort -gr |
while read x ; do
    p="${x%%[0-9]*.*}"        # prefix, e.g. ./pic
    s="${x##*[0-9]}"          # suffix, e.g. .txt
    i="${x%$s}"; i="${i#$p}"  # the number between them
    mv "$x" "$p$((i + $3 - $2))$s"
done
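To see what those parameter expansions extract, here is the breakdown for a hypothetical entry ./pic12.txt:
x="./pic12.txt"
p="${x%%[0-9]*.*}"         # "./pic" : everything before the first digit
s="${x##*[0-9]}"           # ".txt"  : everything after the last digit
i="${x%$s}"; i="${i#$p}"   # "12"    : strip the suffix, then the prefix
echo "$p / $i / $s"        # prints: ./pic / 12 / .txt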

Related

How to output difference of files from two folders and save the output with the same name on different folder

I have two folders which have the same file names, but different contents. I am trying to write a script that shows the differences, so I can see what has changed. I wrote the script below:
folder1="/opt/dir1"
folder2=`ls /opt/dir2`
find "$folder1/" /opt/dir2/ -printf '%P\n' | sort | uniq -d
for item in `ls $folder1`
do
    if [[ $item == $folder2 ]]; then
        diff -r $item $folder2 >> output.txt
    fi
done
I believe this script should work, but it is not producing any output in the output file.
The desired output should all end up in one file, e.g.:
cat output.txt
diff -r /opt/folder1/file1 /opt/folder2/file1
1387c1387
< ALL X'25' BY SPACE
---
> ALL X'0A' BY SPACE
diff -r /opt/folder1/file2 /opt/folder2/file2
2591c2591
< ALL X'25' BY SPACE
---
> ALL X'0A' BY SPACE
Any help is appreciated!
OK. So, twofold:
First, get the files in one folder. Never use ls in scripts; forget it exists. ls is for nice printing in your console. In scripts, use find.
Then, for each file, run some command: a simple while read loop.
So:
{
    # cd first so that find prints paths relative to the /opt/dir1 directory
    cd /opt/dir1 &&
    # use -printf '%P\n' so each path prints without the leading ./
    find . -mindepth 1 -type f -printf '%P\n'
} |
while IFS= read -r file; do
    diff /opt/dir1/"$file" /opt/dir2/"$file" >> output/"$file"
done
Notes:
always quote your variables
Why you shouldn't parse the output of ls(1)
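If you would rather have everything concatenated into a single output.txt, as the desired output in the question shows, a variant of the same loop works (a sketch; the echo reproduces the header line from the question, and both trees are assumed to contain the same relative paths):
{
    cd /opt/dir1 &&
    find . -mindepth 1 -type f -printf '%P\n'
} |
while IFS= read -r file; do
    echo "diff -r /opt/dir1/$file /opt/dir2/$file"
    diff /opt/dir1/"$file" /opt/dir2/"$file"
done >> output.txt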

Rm and Egrep -v combo

I want to remove all the logs except the current log and the log before that.
These log files are created every 20 minutes, so the file names look like:
abc_23_19_10_3341.log
abc_23_19_30_3342.log
abc_23_19_50_3241.log
abc_23_20_10_3421.log
where 23 is today's date (the list might include yesterday's date too),
19 is the hour (7 o'clock), and 10, 30, 50, 10 are the minutes.
In this case I want to keep abc_23_20_10_3421.log, which is the current log (the one currently being written), and abc_23_19_50_3241.log (the previous one),
and remove the rest.
I got it to work by creating a folder, putting the files to keep in that folder, removing the remaining files, and then undoing it all. But that's far too roundabout...
I also tried this
files_nodelete=`ls -t | head -n 2 | tr '\n' '|'`
rm *.txt | egrep -v "$files_nodelete"
but it didn't work. (If I put ls in place of rm, it does work.)
I am an amateur at Linux, so please suggest a simple idea or piece of logic. I tried xargs rm, but it didn't work either.
I also read about mtime, but that seems a bit complicated since I am new to Linux.
I am working on a Solaris system.
Try the logadm tool on Solaris; it might be the simplest way to rotate logs. If you just want to get things done, it will do it.
http://docs.oracle.com/cd/E23823_01/html/816-5166/logadm-1m.html
If you want a solution similar to your attempt (but working), try this:
ls abc*.log | sort | head -n-2 | xargs rm
ls abc*.log: lists all files matching the pattern abc*.log
sort: sorts this list lexicographically (by name), which here means from oldest to newest log file
head -n-2: returns all but the last two entries in the list (-n accepts a negative count)
xargs rm: composes and runs the rm command with the entries read from stdin
If there are two or fewer files in the directory, this command will return an error like
rm: missing operand
and will not delete any files.
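On GNU systems you can suppress that error with the -r (--no-run-if-empty) flag, which tells xargs not to run rm at all when its input is empty; the stock Solaris xargs may not support it, so treat this as a GNU-only variant:
ls abc*.log | sort | head -n-2 | xargs -r rm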
It is usually not a good idea to use ls to point to files. Some files may cause havoc (files which have a [Newline] or a weird character in their name are the usual examples...).
Using shell globs: here is an interesting approach: we count the files newer than the one we are about to remove!
pattern='abc*.log'
for i in $pattern ; do
    [ -f "$i" ] || break
    # determine whether this is one of the most recent files in the current
    # directory (-maxdepth 1 limits the find to that directory, no subdirs)
    if [ $(find . -maxdepth 1 -name "$pattern" -type f -newer "$i" -print0 | tr -cd '\000' | tr '\000' '+' | wc -c) -gt 1 ]
    then
        # at least 2 files matching the pattern are more recent than $i,
        # so we can delete $i
        echo rm "$i" # remove the echo only when you are 100% sure that you want to delete all those files!
    else
        echo "$i is one of the 2 most recent files matching '${pattern}', I keep it"
    fi
done
I only use the globbing mechanism to feed file names to find, and use the terminating NUL that -print0 appends to count the files it outputs (so I have no problem with special characters in the file names; I only need to know how many files were output).
tr -cd '\000' keeps only the NUL bytes, i.e. the terminators written by -print0. I then translate each NUL to a single + character and count them with wc -c. If I see 0, "$i" was the most recent file. If I see 1, "$i" was the second most recent (the find sees only the one file newer than it). And if I see more than 1, the 2 files (matching the pattern) that we want to keep are both newer than "$i", so we can delete "$i".
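The counting trick is easy to try in isolation; printf's \0 escape emits a NUL byte, so two NUL-terminated names count as 2:
printf 'file_a\0file_b\0' | tr -cd '\000' | tr '\000' '+' | wc -c   # prints 2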
I'm sure someone will step in with a better one, but the idea could be reused, I guess...
Thanks guys for all the answers.
I found my answer
files=`ls -t *.log | head -n 2 | tr '\n' '|' | rev | cut -c 2- | rev`
rm `ls -t | egrep -v "$files"`
Thank you for the help
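For the record, since the question mentions mtime: on a system with GNU find the same cleanup can be done without parsing ls at all, by sorting on modification time (a sketch; it assumes file names without spaces or newlines):
find . -maxdepth 1 -name 'abc*.log' -printf '%T@ %p\n' | sort -nr | tail -n +3 | cut -d' ' -f2- | xargs -r rm
Here %T@ prints each file's modification time as a sortable epoch timestamp, tail -n +3 skips the two newest entries, and cut strips the timestamp back off before the names reach rm.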

Operating on multiple results from find command in bash

Hi, I'm a novice Linux user. I'm trying to use the find command in bash to search through a given directory whose subdirectories each contain a file of the same name but with varying content, to find the maximum value within those files.
Initially I wasn't taking the directory as input, and I knew the file wouldn't be less than two directories deep, so I was using nested loops as follows:
prev_value=0
for i in <directory_name> ; do
    if [ -d "$i" ]; then
        cd $i
        for j in "$i"/* ; do
            if [ -d "$j" ]; then
                cd $j
                curr_value=`grep "<keyword>" <filename>.txt | cut -c32-33` # gets the value I'm comparing
                if [ $curr_value -lt $prev_value ]; then
                    curr_value=$prev_value
                else
                    prev_value=$curr_value
                fi
            fi
        done
    fi
done
echo $prev_value
Obviously that's not going to cut it now. I've looked into the -exec option of find but since find is producing a vast amount of results I'm just not sure how to handle the variable assignment and comparisons. Any help would be appreciated, thanks.
find "${DIRECTORY}" -name "${FILENAME}.txt" -print0 | xargs -0 -L 1 grep "${KEYWORD}" | cut -c32-33 | sort -nr | head -n1
We find the files named FILENAME.txt (FILENAME is a bash variable) that exist anywhere under DIRECTORY.
We print them all out, separated by nulls (this avoids any problems with certain characters in directory or file names).
Then we read them all back in with xargs, passing the null-separated (-0) values as arguments to grep and launching one grep per file (-L 1). Running a separate grep for each file keeps grep from printing the filenames, which would break cut.
Then we sort all the results numerically (-n), in descending order (-r).
Finally, we take the first line (head -n1) of the sorted numbers - which will be the maximum.
P.S. If you have 4 CPU cores you can try adding the -P 4 option to xargs to try to make the grep part of it run faster.
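As an aside: if your grep supports -h (GNU and BSD grep both do; it suppresses the filename prefix), the one-grep-per-file trick becomes unnecessary and find can batch the arguments itself:
find "${DIRECTORY}" -name "${FILENAME}.txt" -exec grep -h "${KEYWORD}" {} + | cut -c32-33 | sort -nr | head -n1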

Removing 10 Characters of Filename in Linux

I just downloaded about 600 files from my server and need to remove the last 11 characters from each filename (not counting the extension). I use Ubuntu and am searching for a command to achieve this.
Some examples are as follows:
aarondyne_kh2_13thstruggle_or_1250556383.mus should be renamed to aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 should be renamed to aarondyne_kh2_darknessofunknow.mp3
It seems that some duplicates might exist after I do this, but if the command fails to complete and tells me what the duplicates would be, I can always remove those manually.
Try using the rename command. It allows you to rename files based on a regular expression:
The following line should work out for you:
rename 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
The following changes will occur:
aarondyne_kh2_13thstruggle_or_1250556383.mus renamed as aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 renamed as aarondyne_kh2_darknessofunknow.mp3
You can preview the actions rename would take by specifying the -n flag, like this:
rename -n 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
For more information on how to use rename simply open the manpage via: man rename
Not the prettiest, but very simple:
echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!'
You'll still need to write the rest of the script, but it's pretty simple.
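The rest could be as small as this sketch (it assumes every file in the current directory should be renamed, and skips any name the pattern doesn't change or that would collide with an existing file):
for filename in *; do
    new=$(echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!')
    if [ "$new" != "$filename" ] && [ ! -e "$new" ]; then
        mv -- "$filename" "$new"
    fi
done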
find . -type f -exec sh -c 'mv "$1" "$(echo "$1" | sed -E -e "s/[^/]{11}(\.[^.]+)?$/\1/")"' _ {} ";"
One way to go:
get a list of your files, one per line (from ls, maybe), then:
ls ... | awk '{o=$0; sub(/_[^_.]*\./, ".", $0); print "mv "o" "$0}'
this will print the mv a b command
e.g.
kent$ echo "aarondyne_kh2_13thstruggle_or_1250556383.mus"|awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
mv aarondyne_kh2_13thstruggle_or_1250556383.mus aarondyne_kh2_13thstruggle_or.mus
to execute, just pipe it to |sh
I assume there are no spaces in your filenames.
This script assumes each file has just one extension. It would, for instance, rename "foo.something.mus" to "foo.mus". To keep all extensions, remove one hash mark (#) from the first line of the loop body. It also assumes that the base of each filename has at least 12 characters, so that removing 11 doesn't leave you with an empty name.
for f in *; do
    ext=${f##*.}                   # extension (after the last dot)
    base=${f%.$ext}                # filename without the extension
    new_f=${base%???????????}.$ext # drop the last 11 characters of the base
    if [ -f "$new_f" ]; then
        echo "Will not rename $f, $new_f already exists" >&2
    else
        mv "$f" "$new_f"
    fi
done

Script for renaming files with logic

Someone has very kindly helped get me started on a mass rename script for renaming PDF files.
As you can see, I need to add a bit of logic to stop the situation below from happening, something like adding a unique number to a duplicate file name.
rename 's/^(.{5}).*(\..*)$/$1$2/' *
rename -n 's/^(.{5}).*(\..*)$/$1$2/' *
Annexes 123114345234525.pdf renamed as Annex.pdf
Annexes 123114432452352.pdf renamed as Annex.pdf
Hope this makes sense?
Thanks
for i in *
do
    x=''              # counter
    j="${i:0:2}"      # new base name (first two characters)
    e="${i##*.}"      # extension
    while [ -e "$j$x.$e" ]  # keep counting until the name is free
    do
        ((x++))       # increment counter
    done
    mv "$i" "$j$x.$e" # rename
done
before
$ ls
he.pdf hejjj.pdf hello.pdf wo.pdf workd.pdf world.pdf
after
$ ls
he.pdf he1.pdf he2.pdf wo.pdf wo1.pdf wo2.pdf
This should check whether there will be any duplicates:
rename -n [...] | grep -o ' renamed as .*' | sort | uniq -d
If you get any output of the form renamed as [...], then you have a collision.
Of course, this won't work in a couple of corner cases: if your files contain newlines or the literal string "renamed as", for example.
As noted in my answer on your previous question:
for f in *.pdf; do
    tmp=`echo "$f" | sed -r 's/^(.{5}).*(\..*)$/\1\2/'`
    mv -b ./"$f" ./"$tmp"
done
That will make backups of deleted or overwritten files. A better alternative would be this script:
#!/bin/bash
for f in "$@"; do
    tar -rvf /tmp/backup.tar "$f"  # append the original to an uncompressed tar backup
    tmp=`echo "$f" | sed -r 's/^(.{5}).*(\..*)$/\1\2/'`
    i=1
    while [ -e "$tmp" ]; do        # name taken: splice a counter in before the extension
        tmp=`echo "$tmp" | sed "s/\./-$i./"`
        i=$((i+1))
    done
    mv -b ./"$f" ./"$tmp"
done
Run the script like this:
find . -exec thescript '{}' \;
The find command gives you lots of options for specifying which files to run on, works recursively, and passes each filename to the script. The script backs every file up with tar (uncompressed) and then renames it.
This isn't the best script, since it needs the manual loop to check for identical file names rather than something smarter.
