How to rename files in bash to increase number in name? - linux

I have a few thousand files named as follows:
Cyprinus_carpio_600_nanopore_trim_reads.fasta
Cyprinus_carpio_700_nanopore_trim_reads.fasta
Cyprinus_carpio_800_nanopore_trim_reads.fasta
Cyprinus_carpio_900_nanopore_trim_reads.fasta
Vibrio_cholerae_3900_nanopore_trim_reads.fasta
for 80 variations of the first two words (80 different species). I would like to rename all of these files so that the number is increased by 100. For example:
Vibrio_cholerae_3900_nanopore_trim_reads.fasta
would become
Vibrio_cholerae_4000_nanopore_trim_reads.fasta
or
Cyprinus_carpio_300_nanopore_trim_reads.fasta
would become
Cyprinus_carpio_400_nanopore_trim_reads.fasta
Unfortunately I can't work out how to rename them. I've had some luck following the solutions on https://unix.stackexchange.com/questions/40523/rename-files-by-incrementing-a-number-within-the-filename
but I can't get it to work for a number inside the name. I'm running on Ubuntu 18.04, if that helps.

If you can get hold of the Perl-flavoured version of rename, it is as simple as this:
rename -n 's/(\d+)/$1 + 100/e' *fasta
Sample Output
'Ciprianus_maximus_11_fred.fasta' would be renamed to 'Ciprianus_maximus_111_fred.fasta'
'Ciprianus_maximus_300_fred.fasta' would be renamed to 'Ciprianus_maximus_400_fred.fasta'
'Ciprianus_maximus_3900_fred.fasta' would be renamed to 'Ciprianus_maximus_4000_fred.fasta'
If you can't read Perl, that says... "Do a single substitution as follows. Wherever you see a bunch of digits next to each other in a row (\d+), remember them (because I put that in parentheses), and then replace them with the evaluated expression of that bunch of digits ($1) plus 100.".
Remove the -n if the dry-run looks correct. The only "tricky part" is the use of e at the end of the substitution which means evaluate the expression in the substitution - or I call it a "calculated replacement".
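On Ubuntu 18.04 the rename package normally provides exactly this Perl-flavoured tool, so (assuming apt is available) the whole job might look like:
sudo apt install rename                  # Perl File::Rename wrapper
rename -n 's/(\d+)/$1 + 100/e' *fasta    # dry run: check the proposed names
rename 's/(\d+)/$1 + 100/e' *fasta       # do the renames for real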

If there is only one number in your string, the few lines of code below should help you resolve your issue:
filename="Vibrio_cholerae_3900_nanopore_trim_reads.fasta"
var=$(echo "$filename" | grep -oP '\d+')
echo "${filename/${var}/$((var+100))}"
Instead of echoing the changed file name, you can capture it in a variable and use the mv command to rename the file.
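Putting that together into a loop, a minimal sketch, assuming each name contains exactly one run of digits (mv -n refuses to clobber an existing target, so watch for skipped files where old and new numbers collide):
for filename in *.fasta; do
    var=$(echo "$filename" | grep -oP '\d+')
    mv -n "$filename" "${filename/${var}/$((var+100))}"
done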

Renaming in increasing order risks filename conflicts (e.g. renaming the 900 file to 1000 while a 1000 file still exists). I first thought of reversing the order, but conflicts are still possible there, because the alphabetical (standard) sort differs from the numerical sort.
So how about a two-step solution: in the 1st step, an escape character (or any character that does not appear in the filenames) is inserted into the filename, and it is removed in the 2nd step.
#!/bin/bash
esc=$'\033' # ESC character
# 1st pass: increase the number by 100 and insert an ESC before it
for f in *.fasta; do
    num=${f//[^0-9]/}
    num2=$((num + 100))
    f2=${f/$num/$esc$num2}
    mv "$f" "$f2"
done
# 2nd pass: remove the ESC from the filename
for f in *.fasta; do
    f2=${f/$esc/}
    mv "$f" "$f2"
done

Mark's perl-rename solution looks great, but you should apply it twice with a bump of 50 to avoid name conflicts. If you can't find that flavour of rename, you could try my rene.py (https://rene-file-renamer.sourceforge.io), for which the command would be (also applied twice) rene *_*_*_* *_*_?_* B/50. rene is a little easier because it automatically shows you the changes and asks whether you want to make them, and it has an undo if you change your mind.
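As I read that suggestion, the trick is that no original name contains a number ending in 50, so neither pass can collide. A sketch of the two passes with the Perl-flavoured rename (dry-run each pass with -n first):
rename 's/(\d+)/$1 + 50/e' *fasta    # first pass:  900 -> 950, 3900 -> 3950, ...
rename 's/(\d+)/$1 + 50/e' *fasta    # second pass: 950 -> 1000, 3950 -> 4000, ...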

Related

Using bash script to move folders based on their name

I have a file (list.txt) with a list of directories in it. I want to read each line and then move the folders to a specific destination folder based on a number range in the directories' names. It needs to work no matter how many directories are listed in the file, because this will change daily.
In list.txt each line lists the directories like so;
/20211601_113212_hello_LGD_text.csv/KLS_8938I1_02020721_1/
I'm interested only in the four-digit number immediately before the trailing _1/ (0721 in the example above). If that number is in a certain range, e.g. 1-500, I want to move it to 'folder1', 501-1000 to 'folder2', and so on. The number will always precede the _1/ and will always be 4 digits.
Using
grep -o '......$' /list.txt | cut -d_ -f1
I've managed to grep the number I want, but I haven't been able to do anything with that information. I'm thinking I could split out each line into separate files & then use a for loop to iterate through each file, but I don't know how that would exactly work. Any suggestions?
declare -i number
while IFS= read -r directory; do
    ((number = 10#"${directory: -7:-3}"))
    echo mv "${directory%/}" "folder$(((number + 499) / 500))/"
done < list.txt
The syntax highlighting above malfunctions; # is not the start of a comment in this case.
The while loop reads whole lines without mangling spaces and without interpreting backslash escapes. This is to make sure that the directory names can be (almost) arbitrary.
We extract the number by starting at the 7th character from the end and ending before the 3rd character from the end (both counted 1-based). Alternatively, the _1/ could be stripped first using ${directory%_1/}, as sketched below.
We treat the number as an integer (declare -i) and make sure it is not interpreted as an octal number (10#), because otherwise integer literals starting with 0 are octal (e.g. $((08)) yields an error).
We generate the mv command (remove the echo once satisfied with the output), where ${directory%/} strips the final slash (which you may want to tweak) and $(((number + 499) / 500)) generates the required 1-based "folder" number, so e.g. 499 yields 1, 500 yields (still) 1, 501 yields 2, etc. You will need to tweak the ranges to your liking.
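As mentioned above, a hypothetical variant of the extraction line that strips the _1/ suffix instead of counting characters from the end (stripped is an illustrative name, not from the original answer):
stripped=${directory%_1/}          # drop the trailing _1/
((number = 10#${stripped: -4}))    # the 4-digit number is now at the very end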

Using sed to obtain pattern range through multiple files in a directory

I was wondering if it was possible to use the sed command to find a range between 2 patterns (in this case, dates) and output these lines in the range to a new file.
Right now I am just looking at one file, getting the lines within my time range from FileMoverTransfer.log. However, after a certain time period these logs are moved into new log files with a suffix such as FileMoverTransfer.log-20180404-xxxxxx.gz. Here is my current code:
sed -n '/^'$start_date'/,/^'$end_date'/p;/^'$end_date'/q' FileMoverTransfer.log >> /public/FileMoverRoot/logs/intervalFMT.log
But the following doesn't work, as sed doesn't seem able to look through all of the files in the directory starting with FileMoverTransfer.log:
sed -n '/^'$start_date'/,/^'$end_date'/p;/^'$end_date'/q' FileMoverTransfer.log* >> /public/FileMoverRoot/logs/intervalFMT.log
Any help would be greatly appreciated. Thanks!
The range operator only operates within a single file, so you can't use it if the start is in one file and the end is in another file.
You can use cat to concatenate all the files, and pipe this to sed:
cat FileMoverTransfer.log* | sed -n "/^$start_date/,/^$end_date/p;/^$end_date/q" >> /public/FileMoverRoot/logs/intervalFMT.log
And instead of quoting and unquoting the sed command, you can use double quotes so that the variables will be expanded inside it. This will also prevent problems if the variables contain whitespace.
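Note that the rotated logs in the question are gzip-compressed, so cat alone will pass their compressed bytes straight through. A variant of the same pipeline, assuming zcat is available for the .gz files (the awk answer below uses the same trick):
( zcat FileMoverTransfer.log-*.gz; cat FileMoverTransfer.log ) \
    | sed -n "/^$start_date/,/^$end_date/p;/^$end_date/q" >> /public/FileMoverRoot/logs/intervalFMT.log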
awk solution
As the OP confirmed that an awk solution would be acceptable, I post it.
( gunzip -c FileMoverTransfer.log-*.gz; cat FileMoverTransfer.log ) \
    | awk -v st="$start_date" -v en="$end_date" '$1>=st && $1<=en {print; next} $1>en {exit}' \
    > /public/FileMoverRoot/logs/intervalFMT.log
This solution is functionally almost identical to Barmar’s sed solution, with the difference that his solution, like the OP’s, will print and quit at the first record matching the end date, while mine will print all lines matching the end date and quit at the first record past the end date, without printing it.
Some remarks:
The OP didn't specify the date format. I suppose it is a format compatible with ordinary string order, otherwise some conversion function should be used.
The files FileMoverTransfer.log-*.gz must be named in such a way that their alphabetical ordering corresponds to the chronological order (which is probably the case.)
I suppose that the dates are separated from the rest of the line by whitespace. If they aren’t, you have to supply the -F option to awk. E.g., if the dates are separated by -, you must write awk -F- ...
awk is much faster than sed in this case, because awk simply looks for the separator (whitespace or whatever was supplied with -F) while sed performs a regexp match.
There is no concept of range in my code, only date comparison. The only place where I suppose that the lines are ordered is when I say $1>en{exit}, that is exit when a line is newer than the end date. If you remove that final pattern and its action, the code will run through the whole input, but you could drop the requirement that the files be ordered.
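For instance, ISO-style timestamps sort correctly as plain strings, which is all the $1>=st comparison relies on. A quick check with hypothetical dates:
echo '2018-04-04 transfer ok' \
    | awk -v st="2018-04-01" -v en="2018-04-30" '$1>=st && $1<=en {print}'
Output:
2018-04-04 transfer ok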

concatenate two strings and one variable using bash

I need to generate a filename from three parts: two strings and one variable.
for f in `cat files.csv`; do echo fastq/$f\_1.fastq.gze; done
files.csv has the following lines:
Sample_11
Sample_12
I need to generate the following:
fastq/Sample_11_1.fastq.gze
fastq/Sample_12_1.fastq.gze
My problem is that I got the following files instead:
_1.fastq.gze_11
_1.fastq.gze_12
It looks as if the string after the variable deletes the string before it.
I appreciate any help
Regards
By the way, your idiom for f in `cat files.csv` should be avoided. Refer: Dangerous Backticks.
while IFS= read -r f
do
    echo "fastq/${f}_1.fastq.gze"
done < files.csv
You can make it a one-liner with xargs and printf.
xargs printf 'fastq/%s_1.fastq.gze\n' <files.csv
The function of printf is to apply the first argument (the format string) to each argument in turn.
xargs says to run this command with as many arguments as it can fit onto the command line (splitting the input into multiple invocations if it is too large to fit onto a single command line, subject to the ARG_MAX constant in your kernel).
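For example, fed the two sample lines from files.csv, a single printf invocation re-applies the format string to each argument in turn:
printf 'Sample_11\nSample_12\n' | xargs printf 'fastq/%s_1.fastq.gze\n'
Output:
fastq/Sample_11_1.fastq.gze
fastq/Sample_12_1.fastq.gze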
Your best bet, generally, is to wrap the variable name in braces. So, in this case:
echo fastq/${f}_1.fastq.gze
See this answer for some details about the general concept, as well.
Edit: An additional thought looking at the now-provided output makes me think that this isn't a coding problem at all, but rather a conflict between line-endings and the terminal/console program.
Specifically, if the CSV file ends its lines with just a carriage return (ASCII/Unicode 13), the end of Sample_11 might "rewind" the line to the start and overwrite.
In that case, based loosely on this article, I'd recommend replacing cat (if you understandably don't want to re-architect the actual script with something like while) with something that will strip the carriage returns, such as:
for f in $(tr -cd '\011\012\040-\176' < files.csv)
do
    echo fastq/${f}_1.fastq.gze
done
As the cited article explains, Octal 11 is a tab, 12 a line feed, and 40-176 are typeable characters (Unicode will require more thinking). If there aren't any line feeds in the file, for some reason, you probably want to replace that with tr '\015' '\012', which will convert the carriage returns to line feeds.
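Before rewriting the loop, a quick way to confirm that carriage returns are the culprit is cat -v, which renders a carriage return visibly as ^M:
cat -v files.csv
# a CR-terminated file shows lines like: Sample_11^M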
Of course, at that point, better is to find whatever produces the file and ask them to put reasonable line-endings into their file...

How to remove part of file names between periods?

I would like to rename many files in this format
abc.123.fits
abcd.1234.fits
efg.12.fits
to this format
abc.fits
abcd.fits
efg.fits
I tried the rename function, but since the part I'm trying to replace is not the same in all files, it did not work. I am using Linux.
for f in *; do mv "$f" "${f%%.*}.${f##*.}"; done
${f%%.*} removes everything after the first period, including the period. ${f##*.} removes everything before the last period, including the period (i.e. it gets the file extension). Concatenating these two, with a period between them, gives you the desired result.
You can change the * to a more restrictive pattern such as *.fits if you don't want to rename all files in the current directory. The quotes around the parameters to mv are necessary if any filenames contain whitespace.
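A quick sanity check of the two expansions on one of the sample names:
f=abcd.1234.fits
echo "${f%%.*}"              # abcd       (everything up to the first period)
echo "${f##*.}"              # fits       (everything after the last period)
echo "${f%%.*}.${f##*.}"     # abcd.fits  (the desired result)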
Many other variable substitution expressions are available in bash; see a reference such as TLDP's Bash Parameter Substitution for more information.

Copy a section within two keywords into a target file

I have thousands of files in a directory, and each file contains a number of variable definitions starting with the keyword DEFINE and ending with a semicolon (;). I want to copy all occurrences of the data between these keywords (inclusive) into a target file.
Example: Below is the content of the text file:
/* This code is for lookup */
DEFINE variable as a1 expr= extract (n123f1 using brach, code);
END.
Now, from the above content, I just want to copy the section starting with DEFINE and ending with ; into a target file, i.e. the output should be:
DEFINE variable as a1 expr= extract (n123f1 using brach, code);
This needs to be done for thousands of scripts with multiple occurrences each. Please help out.
Thanks a lot, the provided code works, but only to a limited extent: it handles the case where the whole statement is on a single line. The data is not guaranteed to be on one line; it can be spread over multiple lines like below:
/* This code is for lookup */
DEFINE variable as a1 expr= if branchno > 55
then
extract (n123f1 using brach, code)
else
branchno = null
;
END.
The code is also laid out in the above fashion. I need to capture all the data between DEFINE and the semicolon (;); after every DEFINE there will be a terminating semicolon. This is the pattern.
It sounds like you want grep(1):
grep '^DEFINE.*;$' input > output
Try using grep. Let's say you have files with the extension .txt in the present directory:
grep -ho 'DEFINE.*;' *.txt > outfile
Output:
DEFINE variable as a1 expr= extract (n123f1 using brach, code);
Short Description
-o prints only the matching string rather than the whole line, in case the line also contains something else that you want to omit.
-h suppresses the file names in front of the matching results.
Read the man page of grep by typing man grep in your terminal.
EDIT
If you need matches that span multiple lines, you can use pcregrep with the -M option:
pcregrep -M 'DEFINE.*?(\n|.)*?;' *.txt > outfile
Works fine on my system. Check man pcregrep for more details
One can make a simple solution using sed with a loop:
sed -n -e '/^DEFINE/{:a p;/;$/!{n;ba}}' your-file
Option -n prevents sed from printing every line; then each time a line begins with DEFINE, print the line (command p) then enter a loop: until you find a line ending with ;, grab the next line and loop to the print command. When exiting the loop, you do nothing.
It looks a bit dirty; some versions of sed have a shorter (and more straightforward) way to achieve this in one line:
sed -n -e '/^DEFINE/,/;$/p' your-file
Indeed, only on some versions of sed are both patterns handled this way; on other versions, like mine under Cygwin, the range patterns must be on separate lines to work properly.
One last thing to remember: it does not treat nested patterned ranges, i.e. it stops printing after the first encountered end-pattern even if multiple start patterns have been matched. Prefer something with awk if that is a feature you are looking for, as sketched below.
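A minimal awk sketch of that idea, assuming each DEFINE block ends at the first line whose last character is a semicolon:
awk '/^DEFINE/ {flag=1} flag {print} /;$/ {flag=0}' *.txt > outfile
Unlike a sed range, every new DEFINE re-arms the printing here, even if an earlier block never closed.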
