I have a series of image files with filename formats as follows.
9407-C-9406-B-038.jpg
9407-C-9406-B-118.jpg
9422-AC-012.jpg
9422-AC-112.jpg
9422-BD-043.jpg
9422-BD-Still-001.jpg
9405_M.jpg
9792A.jpg
9792B.jpg
The relevant portion for my purpose is the first 4 characters; the rest is irrelevant.
I'd like to rename the files so that the leading 4-character string and the extension are retained, and the rest is replaced by a counter that increments only when a duplicate prefix is encountered, e.g.:
9422-AC-012.jpg becomes 942200.jpg
9422-AC-112.jpg becomes 942201.jpg
9422-BD-043.jpg becomes 942202.jpg
9422-BD-Still-001.jpg becomes 942203.jpg
9405_M.jpg becomes 940500.jpg
9792A.jpg becomes 979200.jpg
9792B.jpg becomes 979201.jpg
Using 'rename' I can strip the string after the first 4 characters and append a counter, but the counter increments across all the files, so the incremental portion of the filename ends up in the thousands.
rename -n -N 0001 's/(?<=.{4}).*/$N.jpg/' *.jpg
Can anyone suggest a way to strip the filename after the first 4 chars, rename the file, and increment only when a duplicate is encountered?
awk to the rescue!
With the filenames listed in a file named file:
$ while read f t; do
    echo "mv $f $t"
done < <(awk -F\. '{k=substr($0,1,4); printf "%s\t%s%04d.%s\n", $0,k,a[k]++,$NF}' file)
mv 9407-C-9406-B-038.jpg 94070000.jpg
mv 9407-C-9406-B-118.jpg 94070001.jpg
mv 9422-AC-012.jpg 94220000.jpg
mv 9422-AC-112.jpg 94220001.jpg
mv 9422-BD-043.jpg 94220002.jpg
mv 9422-BD-Still-001.jpg 94220003.jpg
mv 9405_M.jpg 94050000.jpg
mv 9792A.jpg 97920000.jpg
mv 9792B.jpg 97920001.jpg
Remove the echo for the actual renaming. This assumes there is no whitespace in the file names, that each name is at least 4 characters long, and that each has an extension.
Your example shows a two-digit counter while you mentioned yours ended up in the thousands; here the width is controlled by the printf format %04d, so change the 4 to 2 if you don't expect more than 100 files per prefix. Either way the counter will create a unique name, but it may grow beyond the allocated number of digits (e.g. it will generate 00, 01, ..., 99, and the next one will be 100).
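If you'd rather avoid awk, a rough pure-bash equivalent is sketched below (it assumes bash 4+ for associative arrays; keep the echo until the output looks right):
declare -A seen
for f in *.jpg; do
    key=${f:0:4}                                   # first four characters
    printf -v new '%s%02d.jpg' "$key" "${seen[$key]:-0}"
    seen[$key]=$(( ${seen[$key]:-0} + 1 ))         # per-prefix counter
    echo mv -- "$f" "$new"
done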
I would like to clean up a folder with videos. I have a bunch of videos that were downloaded with different resolutions, so each file will start with the same name and then end with "_480p" or "_720p" etc.
I just want to keep the largest file of each such set.
So I am looking for a way to delete files based on
check if name before "_" is identical
if true, then delete all files except largest one
Thinking of a flexible and fast way to approach the problem: you can gather the list of files ending in "[[:digit:]]+p" and feed the names on stdin to awk. awk can then index an array with the file prefix (the path plus the part of the name before '_'), which is unique per set of files, and store the resolution number found for that prefix at that index.
Then it's a simple matter of comparing the stored resolution number against the current file's number and deleting the lesser of the two.
Your find command to locate all files in the directory below the current, recursively, could be:
find ./tmp -type f -regex "^.*[0-9]+p$"
What I would do is pipe the filename output to a short awk script where an array stores the last seen number for a given file prefix. If the current record's (line's) resolution number is bigger than the value stored in the array, a filename is built from the array number and that file is deleted with system() using rm filename. If the current line's resolution number is less than what is already stored in the array for the prefix, you simply delete the current file.
You can do that as:
#!/usr/bin/awk -f
BEGIN { FS = "/" }
{
    num = $NF                               # last field (the file name) holds the number up to 'p'
    prefix = $0                             # prefix is the name up to "_[[:digit:]]+p"
    sub (/^.*_/, "", num)                   # isolate number
    sub (/p$/, "", num)                     # remove 'p' at end
    sub (/_[[:digit:]]+p$/, "", prefix)     # isolate path and name prefix
    if (prefix in a) {                      # current prefix already in array a[] ?
        rmfile = $0                         # set file to remove to current
        if (num + 0 > a[prefix] + 0) {      # current number > array number
            rmfile = prefix "_" a[prefix] "p"   # build remove filename from array
            a[prefix] = num                 # update array with higher num
        }
        system ("rm " rmfile)               # delete the file
    }
    else
        a[prefix] = num                     # no num for prefix yet, store the first
}
(note: the field-separator splits the fields using the directory separator so you have all file components to work with.)
Example Use/Output
With a representative set of files in a tmp/ directory below the current, e.g.
$ ls -1 tmp
a_480p
a_720p
b_1080p
b_480p
c_1080p
c_720p
Running the find command piped to the awk script named awkparse.sh would be as follows (don't forget to make the awk script executable):
$ find ./tmp -type f -regex "^.*[0-9]+p$" | ./awkparse.sh
Looking at the directory after piping the results of find to the awk script, the tmp/ directory now only contains the highest resolution (largest) files for any given filename, e.g.
$ ls -1
a_720p
b_1080p
c_1080p
This would be highly efficient. It could also handle all files in a nested directory structure where multiple directory levels hold files you need to clean out. Look things over and let me know if you have questions.
This shell script might be what you want:
previous_prefix=
for file in *_[0-9]*[0-9]p*; do
    prefix=${file%_*}
    resolution=${file##*_}
    resolution=${resolution%%p*}
    if [ "$prefix" = "$previous_prefix" ]; then
        if [ "$resolution" -gt "$greater_resolution" ]; then
            file_to_be_removed=$greater_file
            greater_file=$file
            greater_resolution=$resolution
        else
            file_to_be_removed=$file
        fi
        echo rm -- "$file_to_be_removed"
    else
        greater_resolution=$resolution
        greater_file=$file
        previous_prefix=$prefix
    fi
done
Drop the echo if the output looks good.
I would try to:
list all non-smallest files (non-480p): *_720p* and *_1080p*
for each of them replace *_720p*/*_1080p* in the name with all possible smaller resolutions
and try to delete those files with rm -f, whether they exist or not
#!/bin/bash -e
shopt -s nullglob
for file in *_1080p*; do
    rm -f -- "${file//_1080p/_720p}"
    rm -f -- "${file//_1080p/_480p}"
done
for file in *_720p*; do
    rm -f -- "${file//_720p/_480p}"
done
And here is a Bash script using nested loops to automate the above:
#!/bin/bash -e
shopt -s nullglob
res=(_1080p _720p _480p _240p)
for r in "${res[@]}"; do
    res=("${res[@]:1}")          # remove the first element of the res array
    for file in *"$r"*; do
        for r2 in "${res[@]}"; do
            rm -f -- "${file//$r/$r2}"
        done
    done
done
I have a file filename with 2 lines:
2018-Feb-22 06:02:01.1234|AVC-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|
2018-Feb-22 07:02:01.1234|BCV-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|
I want to append the string
"Warning: Time elapsed:,3444, is smaller than Name2:44222"
at the end of the line that is equal to
Var1="2018-Feb-22 06:02:01.1234|AVC-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|"
or that matches the following pattern
Var2="2018-Feb-22 06:02:01.1234|AVC-00123HHGF|"
And then filename will contain
2018-Feb-22 06:02:01.1234|AVC-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|"Warning: Time elapsed:,3444, is smaller than Name2:44222"
2018-Feb-22 07:02:01.1234|BCV-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|
This is what I've tried:
Var3='2018-Feb-22 06:02:01.1234|AVC-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|"Warning: Time elapsed:,3444, is smaller than Name2:44222"'
sed -i 's/'"$Var1"'/'"$Var3"'/' filename
sed -i "s/$Var1/$Var3/" filename
Var4='"Warning: Time elapsed:,3444, is smaller than Name2:44222"'
sed -i "/$Var1/a $Var4" filename
But nothing happens. Not even an error.
Is there any other way to do this? I need to keep the same order of the lines within filename.
UPDATE: I've given up on using sed and tried a less optimal solution, but it works.
I have 2 files:
File_to_change
File_with_lines_to_add
while read -r line; do
    Prkey=       ## calculate pk
    N=0
    while read -r linetoadd; do
        Prmkey=  ## calculate pk
        if [ "$Prkey" == "$Prmkey" ]; then
            N=1
            echo "$line$linetoadd" >> outfile
        fi
    done < File_with_lines_to_add
    if [ "$N" == "0" ]; then
        echo "$line" >> outfile
    fi
done < File_to_change
suffix="Warning: Time elapsed:,3444, is smaller than Name2:44222"
pattern="AVC-"
sed -E "/$pattern/s/^(.*)$/\1$suffix/" filename
2018-Feb-22 06:02:01.1234|AVC-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|Warning: Time elapsed:,3444, is smaller than Name2:44222
2018-Feb-22 07:02:01.1234|BCV-00123HHGF|427654|Default|Name1 [1]|2334|2344444|(00:00:00.45567)|
sed -E : -E allows later usage of () for grouping, without masking
"..." : the command. Double quotes allow $x expressions to be expanded by the shell before sed reads them
/$pattern/ : look for this pattern and only act if the pattern is found
s/a/b/ : substitute expression a with b
/^(.*)$/ : our a-expression
^ : start of line
(.*) : any character, any number of times, captured as a group for later reference as \1, since it's the first group
$ : end of line
/\1$suffix/ : our b-expression
\1 : what the (.*) group matched above
$suffix : what the shell substituted in
filename : the file to operate on
Note that many keywords (better: key characters, since most of them are only one character long) change their meaning by context; quotation matters, as do flags like -E, -i, -r.
For example, $ can be interpreted by the shell, but if left untouched it can mean 'end of line', 'last line', or a literal dollar sign.
'+' can mean 'at least one', '.' can mean 'any character', and \ is used for masking in sed and to introduce back references like \1. It's a mess, but very useful to learn.
Use sed with care.
The vertical bar in "34|AVC-00123HHGF|42" will be interpreted by sed (with -E) as an alternation, so the pattern matches any line containing "34", or "AVC-00123HHGF", or "42", rather than the literal string with its pipes. How do you handle that? Well, masking:
"34\|AVC-00123HHGF\|42"
which other sed variants (in basic-regex mode) may in turn treat as an alternation, and you can guess where that leads.
"34.AVC-00123HHGF.42" would match the literal string, so make reasonably paranoid decisions, and test and verify. :)
Try this:
sed -i '' '/2018-Feb-22 06:02:01.1234|AVC-00123HHGF|/s/$/\"Warning: Time elapsed:,3444, is smaller than Name2:44222\"/' filename
If that doesn't work, retreat to something simpler, tell us what happens when you try this:
sed 's/2018/XXXX/' filename
I've looked everywhere and I'm out of luck.
I am trying to count the files in my current directory and all sub directories so that when I run the shell script count_files.sh it will produce a similar output to:
$
2 sh
4 html
1 css
2 noexts
(EDIT the above output should have each count and extension on a newline)
$
where noexts are either files without any period as an extension (ex: fileName ) or files with a period but no extension (ex: fileName. ).
this pipeline:
find * | awk -F . '{print $NF}'
gives me a comprehensive list of all the files, and I've figured out how to remove files without any period (ex: fileName ) using sed '/\//d'
MY ISSUE is that I cannot remove, from the output of the above pipeline, the files that end with a period but have nothing after it (ex: fileName. ), since the field after the final '.' delimiter is empty.
How can I use sed like above to remove a null character from a pipe input?
I understand this could be a quick fix, but I've been googling like a madman with no luck. Thanks in advance.
Chip
To filter filenames that end with ., since filenames are the whole input line in find's output, you could use
sed '/\.$/d'
Where \. matches a literal dot and $ matches the end of the line.
However, I think I'd do the whole thing in awk, since sorting does not appear to be necessary:
EDIT: Found a nicer way to do it with awk and find's -printf action.
find . -type f -printf '%f\n' | awk -F. '!/\./ || $NF == "" { ++count["noext"]; next } { ++count[$NF] } END { for(k in count) { print count[k] " " k } }'
Here we pass -printf '%f\n' to find to make it print only the file name without the preceding directory, which makes it much easier to work with for our purposes -- this way there's no need to worry about periods in directory names (such as /etc/somethingorother.d). The field separator is '.', the awk code is
!/\./ || $NF == "" {     # if the line (the filename) does not contain
                         # a period or there's nothing after the last .
    ++count["noext"]     # increment the "noext" counter
                         # note that this will be collated with files that
                         # have ".noext" as filename extension. see below.
    next                 # go to the next line
}
{                        # in all other lines
    ++count[$NF]         # increment the counter for the file extension
}
END {                    # in the very end:
    for(k in count) {    # print the counters.
        print count[k] " " k
    }
}
Note that this way, if there is a file "foo.noext", it will be counted among the files without a filename extension. If this is a worry, use a special counter for files without an extension -- either apart from the array or with a key that cannot be a filename extension (such as one that includes a . or the empty string).
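For instance, a small variant (just a sketch) keys the no-extension counter on the empty string, which a real extension can never be, and relabels it only when printing:
find . -type f -printf '%f\n' |
awk -F. '!/\./ || $NF == "" { ++count[""]; next }
         { ++count[$NF] }
         END { for (k in count) print count[k], (k == "" ? "noexts" : k) }'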
A lot of files I download have crap/spam in their filenames, e.g.
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
I've come up with two ways for dealing with them but they both seem pretty clunky:
with parameter expansion:
if [[ ${base_name} != ${base_name//\[+([^\]])\]} ]]
then
    mv -v "${dir_name}/${base_name}" "${dir_name}/${base_name//\[+([^\]])\]}" &&
        base_name="${base_name//\[+([^\]])\]}"
fi
if [[ ${base_name} != ${base_name//www.*.com - /} ]]
then
    mv -v "${dir_name}/${base_name}" "${dir_name}/${base_name//www.*.com - /}" &&
        base_name="${base_name//www.*.com - /}"
fi
# more of these type of statements; one for each type of frequently-encountered pattern
and then with echo/sed:
tmp=$(echo "${base_name}" | sed -e 's/\[[^][]*\]//g' -e 's/\s-\s//g')
mv "${base_name}" "${tmp}"
I feel like the parameter expansion is the worse of the two but I like it because I'm able to keep the same variable assigned to the file for further processing after the rename (the above code is used in a script that's called for each file after the file download is complete).
So anyway I was hoping there's a better/cleaner way to do the above that someone more knowledgeable than myself could show me, preferably in a way that would allow me to easily reassign the old/original variable to the new/renamed file.
Thanks
Two answers: using perl rename, or using pure bash.
As some people dislike perl, I also wrote a bash-only version.
Renaming files by using the rename command.
Introduction
Yes, this is a typical job for the rename command, which was designed precisely for this:
man rename | sed -ne '/example/,/^[^ ]/p'
For example, to rename all files matching "*.bak" to strip the
extension, you might say
rename 's/\.bak$//' *.bak
To translate uppercase names to lower, you'd use
rename 'y/A-Z/a-z/' *
Samples more oriented to the question
Simply drop all spaces and square brackets:
rename 's/[ \[\]]*//g;' *.ext
Rename all .jpg by numbering from 1:
rename 's/^.*$/sprintf "IMG_%05d.JPG",++$./e' *.jpg
Demo:
touch {a..e}.jpg
ls -ltr
total 0
-rw-r--r-- 1 user user 0 sep 6 16:35 e.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 d.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 c.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 b.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 a.jpg
rename 's/^.*$/sprintf "IMG_%05d.JPG",++$./e' *.jpg
ls -ltr
total 0
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00005.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00004.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00003.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00002.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00001.JPG
Full syntax for matching the question's patterns, in a safe way
There is a robust and safe way of doing this using the rename utility:
As this is a common perl tool, we have to use perl syntax:
rename 'my $o=$_;
    s/[ \[\]]+/-/g;
    s/-+/-/g;
    s/^-//g;
    s/-(\..*|)$/$1/g;
    s/(.*[^\d])(|-(\d+))(\.[a-z0-9]{2,6})$/
        my $i=$3;
        $i=0 unless $i;
        sprintf("%s-%d%s", $1, $i+1, $4)
    /eg while
        $o ne $_ &&
        -f $_;
' *
Testing rule:
touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext'
ls -1
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
rename 'my $o=$_; ...
...
...' *
ls -1
www.crap.com-file.name-1.ext
www.crap.com-file.name.ext
touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext'
ls -1
www.crap.com-file.name-1.ext
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
www.crap.com-file.name.ext
rename 'my $o=$_; ...
...
...' *
ls -1
www.crap.com-file.name-1.ext
www.crap.com-file.name-2.ext
www.crap.com-file.name-3.ext
www.crap.com-file.name.ext
... and so on...
... and it's safe as long as you don't use the -f flag with the rename command: files won't be overwritten and you will get an error message if something goes wrong.
Renaming files by using bash and so-called bashisms:
I prefer doing this with a dedicated utility, but it can even be done in pure bash (i.e. without any fork).
No binary other than bash is used (no sed, awk, tr or other):
#!/bin/bash
for file; do
    newname=${file//[ \]\[]/.}
    while [ "$newname" != "${newname#.}" ]; do
        newname=${newname#.}
    done
    while [ "$newname" != "${newname//[.-][.-]/.}" ]; do
        newname=${newname//[.-][.-]/-}
    done
    if [ "$file" != "$newname" ]; then
        if [ -f "$newname" ]; then
            ext=${newname##*.}
            basename=${newname%.$ext}
            partname=${basename%%-[0-9]}
            count=${basename#${partname}-}
            [ "$partname" = "$count" ] && count=0
            while printf -v newname "%s-%d.%s" "$partname" $((++count)) "$ext" &&
                  [ -f "$newname" ]; do
                :
            done
        fi
        mv "$file" "$newname"
    fi
done
To be run with files as arguments, for example:
/path/to/my/script.sh \[*
Replace spaces and square brackets with dots.
Replace sequences of .-, -., -- or .. with a single -.
If the filename doesn't differ, there is nothing to do.
Test whether a file with the new name already exists...
Split the filename, counter and extension to build an indexed new name.
Loop while a file with the new name exists.
Finally, rename the file.
Take advantage of the following classical pattern:
job_select /path/to/directory| job_strategy | job_process
where job_select is responsible for selecting the objects of your job, job_strategy prepares a processing plan for these objects and job_process eventually executes the plan.
This assumes that filenames do not contain a vertical bar | nor a newline character.
The job_select function
# job_select PATH
# Produce the list of files to process
job_select()
{
    find "$1" -name 'www.*.com - *' -o -name '[*] - *'
}
The find command can examine all properties of the file maintained by the file system, like creation time, access time, modification time. It is also possible to control how the filesystem is explored, for instance by telling find not to descend into mounted filesystems or how many recursion levels are allowed. It is common to append pipes to the find command to perform more complicated selections based on the filename.
Avoid the common pitfall of including the contents of hidden directories in the output of the job_select function. For instance, the directories CVS, .svn, .svk and .git are used by the corresponding source control management tools and it is almost always wrong to include their contents in the output of the job_select function. By inadvertently batch processing these files, one can easily make the affected working copy unusable.
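For instance, a job_select variant along these lines (a sketch, assuming POSIX find) prunes those directories and stays on one filesystem:
# job_select PATH
# Produce the list of files to process, skipping VCS directories
job_select()
{
    find "$1" -xdev \
        \( -name CVS -o -name .svn -o -name .svk -o -name .git \) -prune \
        -o \( -name 'www.*.com - *' -o -name '[*] - *' \) -print
}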
The job_strategy function
# job_strategy
# Prepare a plan for renaming files
job_strategy()
{
    sed -e '
        h
        s#/www\..*\.com - *#/#
        s#/\[[^]]*\] *#/#
        x
        G
        s/\n/|/
    '
}
This command reads the output of job_select and makes a plan for our renaming job. The plan is represented by text lines having two fields separated by the character |, the first field being the old name of the file and the second being the new computed name of the file. It looks like
[ www.crap.com ] file.name.1.ext|file.name.1.ext
www.crap.com - file.name.2.ext|file.name.2.ext
The particular program used to produce the plan is essentially irrelevant, but it is common to use sed (as in the example), awk or perl for this. Let us walk through the sed script used here:
h Replace the contents of the hold space with the contents of the pattern space.
… Edit the contents of the pattern space.
x Swap the contents of the pattern and hold spaces.
G Append a newline character followed by the contents of the hold space to the pattern space.
s/\n/|/ Replace the newline character in the pattern space by a vertical bar.
It can be easier to use several filters to prepare the plan. Another common case is the use of the stat command to add creation times to file names.
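For example, a job_strategy variant (just a sketch, assuming GNU coreutils stat; BSD stat uses different flags) that prefixes each new name with the file's modification date could be:
# job_strategy
# Prefix each file name with its modification date (YYYY-MM-DD)
job_strategy()
{
    while IFS= read -r oldname; do
        mtime=$(stat -c %y -- "$oldname" | cut -d' ' -f1)
        printf '%s|%s\n' "$oldname" "$(dirname -- "$oldname")/${mtime}_$(basename -- "$oldname")"
    done
}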
The job_process function
# job_process
# Rename files according to a plan
job_process()
{
    local oldname
    local newname
    while IFS='|' read oldname newname; do
        mv "$oldname" "$newname"
    done
}
The input field separator IFS is adjusted to let the function read the output of job_strategy. Declaring oldname and newname as local is useful in large programs but can be omitted in very simple scripts. The job_process function can be adjusted to avoid overwriting existing files and report the problematic items.
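A cautious variant (a sketch) could refuse to overwrite and report the conflict instead:
# job_process
# Rename files according to a plan, never overwriting existing files
job_process()
{
    local oldname
    local newname
    while IFS='|' read -r oldname newname; do
        if [ -e "$newname" ]; then
            printf 'skipping %s: %s already exists\n' "$oldname" "$newname" >&2
        else
            mv "$oldname" "$newname"
        fi
    done
}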
About data structures in shell programs
Note the use of pipes to transfer data from one stage to the other: apprentices often rely on variables to represent such information, but that turns out to be a clumsy choice. It is preferable to represent data as tabular files or as tabular data streams moving from one process to the other; in this form, data can easily be processed by powerful tools like sed, awk, join, paste and sort, to cite only the most common ones.
You can use rnm
rnm -rs '/\[crap\]|\[spam\]//g' *.ext
The above will remove [crap] or [spam] from the filename.
You can pass multiple regex patterns by terminating them with ; or by repeating the -rs option.
rnm -rs '/[\[\]]//g;/\s*\[crap\]//g' -rs '/crap2//' *.ext
The general format of this replace string is /search_part/replace_part/modifier
search_part : regex to search for.
replace_part : string to replace with
modifier : i (case insensitive), g (global replace)
uppercase/lowercase:
A replace string of the form /search_part/\c/modifier will make the part of the filename selected by the regex search_part lowercase, while \C (capital C) in the replace part will make it uppercase.
rnm -rs '/[abcd]/\C/g' *.ext
## this will capitalize all a,b,c,d in the filenames
If you have many regex patterns that need to be dealt with, then put those patterns in a file and pass the file with -rs/f option.
rnm -rs/f /path/to/regex/pattern/file *.ext
You can find some other examples here.
Note:
rnm uses PCRE2 (revised PCRE) regex.
You can undo an unwanted rename operation by running rnm -u
P.S: I am the author of this tool.
If you are using an Ubuntu/Debian OS, you can use the rename command to rename multiple files at a time.
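For the filenames in this question, that could look something like the following (a sketch, assuming the perl-based rename shipped by Debian/Ubuntu; -n only previews the changes):
rename -n 's/\[[^]]*\]\s*//; s/^www\.\S+\s*-\s*//' *.ext
Without -f, rename refuses to overwrite an existing file, so if two names collapse to the same result you get a warning instead of data loss.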
If you want to use something that doesn't depend on perl, you can use the following code (let's call it sanitizeNames.sh). It only covers a few cases, but it's easily extensible using string substitution, tr (and sed too).
#!/bin/bash
ls "$@" | while read f; do
    newfname=$(echo "$f" |
        tr -d '\[ ' |      # Removing opening square brackets and spaces
        tr ' \]' '-' |     # Translating closing square brackets to dashes
        tr -s '-' |        # Squeezing multiple dashes
        tr -s '.'          # Squeezing multiple dots
    )
    newfname=${newfname//-./.}
    if [ -f "$newfname" ]; then
        # Some string magic...
        extension=${newfname##*\.}
        basename=${newfname%\.*}
        basename=${basename%\-[1-9]*}
        lastNum=$(ls "$basename"* | wc -l)
        mv "$f" "$basename-$lastNum.$extension"
    else
        mv "$f" "$newfname"
    fi
done
And use it:
$ touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext' '[ www.crap.com ] - file.name.ext' '[www.crap.com ].file.anothername.ext2' '[www.crap.com ].file.name.ext'
$ ls -1 *crap*
[ www.crap.com ] - file.name.ext
[ www.crap.com ] file.name.ext
[www.crap.com ].file.anothername.ext2
[www.crap.com ].file.name.ext
www.crap.com - file.name.ext
$ ./sanitizeNames.sh *crap*
$ ls -1 *crap*
www.crap.com-file.anothername.ext2
www.crap.com-file.name-1.ext
www.crap.com-file.name-2.ext
www.crap.com-file.name-3.ext
www.crap.com-file.name.ext
I would like to rename multiple files, but not just appending my string to the end or the beginning of the file. I would like to place it in a specific location. This is the command I am working with right now, but it can only add things at the beginning and the end of the file name.
for f in `ls ~/tmp/*`; do FILE=`basename $f`; echo "Rename:"$f;echo $FILE; mv "$f" "/home/tmp/JC_"${FILE%.*}"_hg19."${FILE#*.}""; done
Let's say the file names are as follows: hell_1.txt (and let's say there are a ton of them, each with a different number, for simplicity). I would like to add an o into the file name so the resulting name would be hello_1.txt. It would be nice if you had a general solution, not just one for this example.
this should work:
for x in ~/tmp/*.txt; do mv $x `echo $x | sed -e 's#hell#hello#'`; done
If I understand correctly, you wish to change any "hell.*_NNN.txt" into "hell.*o_NNN.txt", keeping the .* part between "hell" and "_NNN.txt" (NNN being any number).
then:
for x in ~/tmp/*.txt; do
mv "$x" "$(echo "$x" | LC_COLLATE=C sed -e 's#\(hell.*\)\(_[0-9]*\.txt$\)#\1o\2#')"
done
I added the LC_COLLATE=C during sed invocation so you can rely on the "[0-9]" matching only digits '0' or '1' or ... or '9'
(If you wonder why I added LC_COLLATE: in some locales, [A-Z] can match all the letters A-Z plus a-y (everything except 'z'!), since in those locales the letters collate in this order: 'A' 'a' 'B' 'b' ... 'Z' 'z'. And with other locales, who knows?)
(Note: you could also replace "[0-9]" with the "[[:digit:]]" notation, but it could be less portable: an "old" version of sed won't know about this notation and will instead try to match any one of '[', ':', 'd', 'i', 'g', 't', followed by ']' zero or more times... That's why I don't like using those special [[:things:]] with sed, tr, etc.: I see them as less portable. Use perl instead if you prefer those?)
How about
rename 's/hell_/hello_/' /tmp/*.txt