Rename large number of files in bash [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have a large list of files that I want to rename.
This is what my files look like:
something.pcap1
something.pcap10
something.pcap11
something.pcap12
...
something.pcap111
something.pcap1111
Essentially I want to rename all of the files so that the numbers are zero-padded to 5 digits, like this:
something.pcap00001

A simple for loop should do the trick (it can also go in a script file):
for file in something.pcap*; do
    # capture the trailing number and rebuild the name with 5-digit zero padding
    [[ $file =~ ^something\.pcap([[:digit:]]+)$ ]] || continue
    newfile=$(printf 'something.pcap%05d' "${BASH_REMATCH[1]}")
    mv -- "$file" "$newfile"
done
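If you want to see the mapping before anything is renamed (my own suggestion, not part of the answer above), the same loop can print the commands instead of running them:
for file in something.pcap*; do
    [[ $file =~ ^something\.pcap([[:digit:]]+)$ ]] || continue
    # echo instead of executing; remove the echo once the output looks right
    echo mv -- "$file" "$(printf 'something.pcap%05d' "${BASH_REMATCH[1]}")"
done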

Something like this?
rename 's/\d+$/sprintf("%05d",$&)/e' something.pcap*
Note: this works with the Perl-based rename as found in Debian and its derivatives.
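If I remember correctly, that rename also accepts -n for a dry run, printing the planned renames without performing them:
rename -n 's/\d+$/sprintf("%05d",$&)/e' something.pcap*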

What about something like this?
#!/bin/bash
for i in something.pcap*; do
    # pad the number with zeros, then strip surplus leading zeros so at least 5 digits remain
    q=$(echo "$i" | sed -e 's/pcap/pcap0000/;s/pcap0*\([0-9]\{5,\}\)$/pcap\1/')
    mv -- "$i" "$q"
done
I hope this will help
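To check what the substitution produces before touching any files, you can feed a sample name through the same sed expression:
echo something.pcap1111 | sed -e 's/pcap/pcap0000/;s/pcap0*\([0-9]\{5,\}\)$/pcap\1/'
# prints: something.pcap01111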

Related

How do I copy grep output to another text file in different directory? [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 days ago.
I am trying to copy specific words from a text file in one directory to another using grep. I can already retrieve the words I want from the text file; now I am just wondering how to write them to another text file, say in my home directory.
Here is the grep command.
grep -E '^.[*ing]{5}$' words
and here is what I have tried
grep -E '^.[*ing]{5}$' words > words.txt $home
grep -E '^.[*ing]{5}$' words > words.txt /home/
Any help is appreciated!
grep -E '^.[*ing]{5}$' words > /home/words.txt
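If you want the home-directory variable the attempts were reaching for, note that in bash it is $HOME (uppercase), so for example:
grep -E '^.[*ing]{5}$' words > "$HOME/words.txt"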

Get a subset of several alphabetically ordered files in Bash [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 4 years ago.
Let's assume I have a group of files "aab.md", "aac.md", "aad.md" ... "csdw.md". Content/filenames are actually in (non-latin) utf-8. They can be sorted alphabetically.
How can I get, in Bash, the subset of those files from e.g. "aad.md" upwards?
declare -a files
while IFS= read -r -d $'\0' file; do
    filename=${file##*/}
    # keep files whose basename sorts at or after "aad.md"
    if [[ ! "$filename" < "aad.md" ]]
    then
        files=("${files[@]}" "$file")
    fi
done < <(find . -name "*.md" -print0)
The array "${files[#]}" should now contain paths to files whose basename is greater than aad.md.
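For example (my addition), you can list what ended up in the array, or hand it all to another command:
printf '%s\n' "${files[@]}"            # one selected path per line
cp -- "${files[@]}" /path/to/target/   # hypothetical target directory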
This uses a number of less well-known techniques in bash: arrays, prefix substitution, zero-terminated records (and their reading), and process substitution; so don't hesitate to ask if something is unclear.
Note that the bash [[...]] construct doesn't have a >= string operator, so we need to improvise with ! ... < ....
This is almost pure bash, no external commands except find. If you accept external commands, $(basename $file) is more obvious than ${file##*/}, but at that point you might as well use awk... and if you can use awk, why not Ruby?
ruby -e "puts Dir['**/*.md'].select{|x| File.basename(x) >= 'aad.md'}"
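As an aside, the ${file##*/} prefix removal used above does the same job as basename; a quick illustration with a made-up path:
file=./notes/aad.md
echo "${file##*/}"    # aad.md  -- strip everything up to and including the last /
basename "$file"      # aad.md  -- same result via an external command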

How to loop over a list containing two different pattern files in linux (bash)? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Want to improve this question? Add details and clarify the problem by editing this post.
Closed 6 years ago.
I have a list containing filenames that match the following two patterns:
one is like XXX_01.fastq
the other is like XXX_01_001.fastq
I want to write a loop (in bash) over all the filenames and determine which ones match the patterns above. Any help with this?
Contents of list.txt:
$ cat list.txt
AAA_01.fastq
AA_01_001.fastq
BBB_01_002.fastq
BBB_02.fastq
Example using bash regex matching, reading the list line by line:
while read -r file; do
    if [[ $file =~ ^[A-Z]{3}_[0-9]{2}\.fastq$ || $file =~ ^[A-Z]{3}_[0-9]{2}_[0-9]{3}\.fastq$ ]]; then
        echo "MATCH: $file"
    fi
done < list.txt
Output:
MATCH: AAA_01.fastq
MATCH: BBB_01_002.fastq
MATCH: BBB_02.fastq
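If the names are actual files in the current directory rather than lines in list.txt, a glob-based variant (a sketch, assuming bash with extglob available) avoids reading a list at all:
shopt -s extglob nullglob   # extglob enables ?(...); nullglob makes a non-match expand to nothing
for file in [A-Z][A-Z][A-Z]_[0-9][0-9]?(_[0-9][0-9][0-9]).fastq; do
    echo "MATCH: $file"
done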

Trimming linux log files [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
It seems like a trivial issue, but I did not find a solution.
I have a number of log files in a PHP installation on Debian/Linux that tend to grow quite a bit, and I would like to trim them nightly to the last 500 lines or so.
How do I do it, ideally in shell, applying a command to *log?
For this, I would suggest using logrotate with a configuration to your liking instead of writing your own script.
There might be a more elegant way to do this programmatically, but it is possible to use tail and a for-loop for this:
for file in *.log; do
    tail -n 500 "$file" > "$file.tmp"
    mv -- "$file.tmp" "$file"
done
If you want to save history of older files, you should check out logrotate.
Otherwise, this can be done trivially with the command line:
LOGS="/var/log"
MAX_LINES=500
find "$LOGS" -type f -name '*.log' -print0 | while read -d '' file; do
tmp=$(mktemp)
tail -n $MAX_LINES $file > $tmp
mv $tmp $file
done
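Since the trimming should happen nightly, either loop can go into a small script scheduled with cron; a minimal sketch, assuming the loop above is saved as the hypothetical /usr/local/bin/trim-logs.sh:
# crontab entry (crontab -e): run the trim script every night at 02:30
30 2 * * * /usr/local/bin/trim-logs.sh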

Shell command to extract string within bracket (String) in a variable like status(running) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I am building an AIX bash shell utility in which I get a dynamic variable with a value like status(running).
I just need the string within the brackets, that is, running.
Right now I am able to get the whole word, status along with the brackets, using awk print.
Can anyone suggest how to extract just the running out of it? Thanks.
Let's say:
s='(running)'
Using pure BASH:
echo "${s//[()]/}"
running
Using sed:
echo "$s" | sed 's/[()]//g'
running
Using tr:
tr -d '()' <<< "$s"
running
UPDATE: As per comments by OP:
s='status(running)'
Using sed:
echo "$s" | sed 's/^.*(\(.*\)).*$/\1/g'
running
Using pure BASH:
t="${s#*\(}"
echo "${t%)*}"
running
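Another pure-Bash option (my addition, not from the original answer) is the =~ regex operator, which captures whatever sits between the parentheses into BASH_REMATCH:
s='status(running)'
if [[ $s =~ \(([^)]*)\) ]]; then
    echo "${BASH_REMATCH[1]}"   # running
fi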
