Processing file with xargs for concurrency - linux

There is an input like:
folder1
folder2
folder3
...
foldern
I would like to iterate over it, taking multiple lines at once, and process each line: remove the first / (and more later, but for now this is enough) and echo the result. Iterating over it in bash with a single thread can be slow. The alternative would be to split the input file into N pieces, run the same script N times with different input and output files, and merge the results at the end.
I was wondering if this is possible with xargs.
Update 1:
Input:
/a/b/c
/d/f/e
/h/i/j
Output:
mkdir a/b/c
mkdir d/f/e
mkdir h/i/j
Script:
for i in $(<test); do
    echo mkdir $(echo $i | sed 's/\///')
done
Doing it with xargs does not work as I would expect:
xargs -a test -I line --max-procs=2 echo mkdir $(echo $line | sed 's/\///')
Obviously I need a way to execute the sed on the input for each line, but using $() does not work.

You probably want:
--max-procs=max-procs, -P max-procs
Run up to max-procs processes at a time; the default is 1. If
max-procs is 0, xargs will run as many processes as possible at
a time. Use the -n option with -P; otherwise chances are that
only one exec will be done.
http://unixhelp.ed.ac.uk/CGI/man-cgi?xargs
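Putting -P together with -n for the question's example, a minimal sketch (assuming GNU xargs and the input file test from the update) does the sed rewriting before xargs, so no per-line command substitution is needed:
# strip the leading slash up front, then fan the lines out over up to 2 processes,
# one line per invocation; -d '\n' keeps whole lines together even if they contain spaces
sed 's|^/||' test | xargs -d '\n' -n 1 -P 2 echo mkdir
Replace echo mkdir with mkdir -p to actually create the directories; note that with -P the order of the output lines is not guaranteed.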

With GNU Parallel you can do:
cat file | perl -pe s:/:: | parallel mkdir -p
or:
cat file | parallel mkdir -p {= s:/:: =}

Related

Concatenate (using bash) all file names in subdirectories with option

I have a directory work_dir, and there are some subdirectories inside. Inside the subdirectories there are zip archives. I can see all the zip archives in the terminal:
find . -name *.zip
The output:
./folder2/sub/dir/test2.zip
./folder3/test3.zip
./folder1/sub/dir/new/test1.zip
Now I want to concatenate all these file names into a single row with some option. For example, I want a single row:
my_command -f ./folder2/sub/dir/test2.zip -f ./folder3/test3.zip -f ./folder1/sub/dir/new/test1.zip -u user1 -p pswd1
In this example:
my_command is some command
-f the option
-u user1 another option with value
-p pswd1 another option with value
Can you please help me: how can I do this in Linux bash?
One way is (updated per @M. Nejat Aydin's comments):
find . -name "*.zip" -print0 | xargs -0 -n1 printf -- '-f\0%s\0' | xargs -0 -n100000 my_command -u user1 -p pswd1
Note that the -n100000 parameter forces all of the arguments produced by the previous xargs onto the same command line, on the assumption that the number of matches will be fewer than 100000.
I used null terminated versions (notice: -0 flag, -print0) because file names can contain spaces.
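To check what the final command line will look like before running the real my_command, a harmless dry run is to put echo in front of it (everything else kept exactly as above):
find . -name "*.zip" -print0 | xargs -0 -n1 printf -- '-f\0%s\0' | xargs -0 -n100000 echo my_command -u user1 -p pswd1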
This is a bash script that should do what you wanted.
#!/usr/bin/env bash
user=user1
passwd=pswd1
while IFS= read -rd '' files; do
    args+=(-f "$files")
done < <(find . -name '*.zip' -print0)
args=("${args[@]}" -u "$user" -p "$passwd")
##: Just for the human eye to see the output,
##: change this line of code according to the comment below.
printf 'mycommand %s\n' "${args[*]}"
The output should be on one line, like what you wanted, but do change the last line from
printf 'mycommand %s\n' "${args[*]}"
into
mycommand "${args[@]}"
if you actually want to execute mycommand with the arguments.
Change the values of user and passwd too.
A while + read loop was used with IFS.
See How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Why the last line should be changed:
see Arguments.
Improper shell quoting is a basic but common mistake when dealing with spaces in file/path names.
See How can I find and safely handle file names containing
Also see the find command/utility.
The construct "${args[@]}" expands all elements of the array.
See Array1 Array2 Array3
You can do this by making a bash script.
Make a new file called whatever.sh
Type chmod +x ./whatever.sh so it becomes executable on the terminal
Add the bash scripting as shown below.
#!/bin/bash
# Get all the zip files from your FolderName
files="$(find ./FolderName -name '*.zip')"
# Loop through the files and build your args
arg=""
for file in $files; do
    arg="$arg -f $file"
done
# Run your command
mycommand $arg -u user1 -p pswd1

Shell - iterate over content of file but do something only the first x lines

So guys,
I need your help identifying the fastest and most fault-tolerant solution to my problem.
I have a shell script which executes some functions, based on a txt file, in which I have a list of files.
The list can contain from 1 file to X files.
What I would like to do is iterate over the content of the file and execute my scripts for only 4 items out of the file.
Once the functions have been executed for these 4 files, go over to the next 4 .... and keep on doing so until all the files from the list have been "processed".
My code so far is as follows.
#!/bin/bash
number_of_files_in_folder=$(cat list.txt | wc -l)
max_number_of_files_to_process=4
Translated_files=/home/german_translated_files/
while IFS= read -r files
do
    while [[ $number_of_files_in_folder -gt 0 ]]; do
        i=1
        while [[ $i -le $max_number_of_files_to_process ]]; do
            my_first_function "$files" &  # I execute my translation function for each file, as it can only perform 1 file per execution
            find /home/german_translator/ -name '*.logs' -exec mv {} $Translated_files \;  # As there will be several files generated, I have them copied to another folder
            sed -i "/$files/d" list.txt  # We remove the processed file from within our list.txt file.
            my_second_function  # Without parameters as it will process all the files copied at step 2.
        done
        # here, I want to have all the files processed and don't stop after the first iteration
    done
done < list.txt
Unfortunately, as I am not very good at shell scripting, I do not know how to structure it so that it won't waste resources and, most importantly, so that it processes everything from that file.
Do you have any advice on how to achieve what I am trying to achieve?
only 4 items out of the file. Once the functions have been executed for these 4 files, go over to the next 4
Seems to be quite easy with xargs.
your_function() {
    echo "Do something with $1 $2 $3 $4"
}
export -f your_function
xargs -d '\n' -n 4 bash -c 'your_function "$@"' _ < list.txt
xargs -d '\n' - split the input on newlines, so each line is one argument
-n 4 - take four arguments at a time
bash .... - run this command with the 4 arguments
_ - the syntax is bash -c <script> $0 $1 $2 etc..., see man bash.
"$@" - forward the arguments
export -f your_function - export your function to the environment so the child bash can pick it up.
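For illustration, if list.txt contained six hypothetical lines a through f, the snippet above would call the function twice; the last batch simply receives fewer arguments, with $3 and $4 empty:
Do something with a b c d
Do something with e f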
I execute my translation function for each file
So you execute your translation function for each file, not for each group of 4 files. If the "translation function" really is per-file, with no state shared between files, consider instead running 4 processes in parallel over the same code with just xargs -P 4.
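A minimal sketch of that per-file parallel variant, assuming my_first_function is defined in the same script and list.txt holds one file name per line:
export -f my_first_function
# one file name per invocation (-n 1), up to 4 invocations running at once (-P 4)
xargs -d '\n' -n 1 -P 4 bash -c 'my_first_function "$1"' _ < list.txt
# my_second_function could then be run once afterwards, over whatever the first pass produced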
If you have GNU Parallel it looks something like this:
doit() {
    my_first_function "$1"
    my_first_function "$2"
    my_first_function "$3"
    my_first_function "$4"
    my_second_function "$1" "$2" "$3" "$4"
}
export -f doit
cat list.txt | parallel -n4 doit

GNU Nano sort out integers in files

I have a problem with a script I am editing in GNU nano. This is my task:
Generate 100 files, each containing one number (shuf -i1-1000 -n1). Then scan the files and write the numbers in ascending order to a file named "output.txt".
My code:
#!/bin/bash
mkdir files
find /etc/ -name "*.txt" | xargs du -h > output.txt
for x in {1..100}
do
    shuf -i 1-1000 -n 1 > files/$x.txt
done
for x in {1..100}
do
    input=$(cat files/$x.txt)
done
I wanted to ask: how do I sort the numbers that are in the files and write them all to the output.txt file?
Thanks
Use sort to sort the numbers.
#! /bin/bash
mkdir files
shuf -i1-1000 -n100 | for i in {1..100} ; do
    read n
    echo $n > files/$i.txt
done
sort -n files/*.txt > files/output.txt
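For comparison, a sketch that keeps the question's per-file shuf -i 1-1000 -n 1 generation (so repeated numbers stay possible, unlike shuf -n100) and only adds the sorting step; paths are the ones from the question:
#!/bin/bash
mkdir files
for x in {1..100}; do
    shuf -i 1-1000 -n 1 > "files/$x.txt"
done
# each file holds a single number, so a numeric sort of their concatenation is enough
sort -n files/*.txt > output.txt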

ssh tail with nested ls and head cannot access

I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in over ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening, and how do I fix it? I am trying to do this in one line as I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes no sense: calling ls -t on a single file gets you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
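As an aside, if /var/log/alert_ARCDB.log really is a directory, one way around the missing-prefix problem (not part of the answer above, and with the usual caveats about parsing ls output) is to glob inside the directory so ls prints full paths:
tail -F -n 1 "$(ls -t /var/log/alert_ARCDB.log/* | head -n 1)"
This would still need to be wrapped in single quotes if it is sent through ssh, as explained above.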
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that will echo the commands you want executed :
remote_commands()
{
    echo 'cd /var/log/alert_ARCDB.log'
    echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).
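An equivalent sketch without the helper function, using a quoted here-document so that nothing expands locally (same host and directory assumptions as above):
ssh root@10.10.10.50 bash <<'EOF'
cd /var/log/alert_ARCDB.log
tail -F -n 1 "$(ls -t | head -n1)"
EOF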

Bash grep command finding the same file 5 times

I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and I can't seem to find out how to prevent this duplication, or even why I'm getting it. Any thoughts on how to get around this?
As a separate but related question: could I launch Auto.sh inside each directory so that the output of Auto.sh is dumped into that directory, without having to place Auto.sh in each folder? That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] | awk '{print $3}' > out.txt
while read LINE; do
    echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
    javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
    index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump in the top folder, without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening was that, with -o, grep prints every individual match of the pattern, and the letter w occurred 5 times in Auto.sh; with -l, each matching file is listed only once.
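A quick, hypothetical way to see the difference between the two modes from the same directory (the counts will of course depend on the file contents):
grep -R -o --include=Auto.sh '\w' | wc -l   # one line per individual match, so a file is counted many times
grep -R -l --include=Auto.sh '\w' | wc -l   # one line per matching file, so each file is counted once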
However, the overall fix that doesn't require having to place Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd $MAIN_DIR
ls -d */ > DirectoryList.txt
while read LINE; do
    cd $LINE
    mkdir ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd $MAIN_DIR
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
    echo 'Path '$LINE > 'ProjectOutputs/ClassOut'$index'.txt'
    javap -c $LINE >> 'ProjectOutputs/ClassOut'$index'.txt'
    index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!
