Running a script inside a loop - echo file names - linux

I have a basic script that runs inside another script. I call mass_split.sh, which then invokes split_file.sh. split_file.sh takes two arguments: -s file_name.txt and -c 1 (how many slices to cut the file into). However, I am trying to run a loop that finds all text file names in directory ./ and feeds the results to split_file.sh. I am getting no results back, and the text files are not being split.
mass_split.sh
#!/bin/bash
for f in ./*.txt
do
sudo bash split_file.sh -s echo "file '$f'"; -c 10
done

This has to do with that errant semicolon after the string literal: it terminates the split_file.sh command, so -c 10 is then parsed as a separate command named -c (which fails unless you actually have an executable called -c that you intend to run). On top of that, echo and the quoted string are passed to -s as literal arguments rather than expanding to the file name.
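A corrected mass_split.sh just passes "$f" itself as the -s argument. Here is a runnable sketch; the stub split_file.sh only records its arguments and stands in for the real one (remove the setup block in real use), and sudo is dropped for the demo:

```shell
#!/bin/bash
# Demo setup: a stub split_file.sh that just records its arguments.
cat > split_file.sh <<'EOF'
#!/bin/bash
echo "called with: $*" >> split_log.txt
EOF

printf 'sample\n' > a.txt
printf 'sample\n' > b.txt

# Corrected loop: no echo, no semicolon -- just pass "$f" to -s.
for f in ./*.txt
do
    bash split_file.sh -s "$f" -c 10
done
```

Quoting "$f" also keeps file names with spaces intact as a single argument.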

How to share an argument with system calls on bash?

Actually I don't know much about bash programming. I've read that pipes allow us to use the output of one program as the input of another. So I expected an expression like the one below to work:
echo "newdirectory" | (mkdir && cd)
Where mkdir receives the string output by echo as its first argument, and then cd does too. The other point is that pipes do not execute synchronously from the left process to the right (is that right?).
Is there a way to reuse an argument across commands in bash?
Especially in this case of creating a new directory and changing into it.
You can use variables for this, and pass command line arguments to the two commands mkdir and cd, instead of trying to pipe data to them.
MYDIR="newdirectory"
mkdir "$MYDIR" && cd "$MYDIR"
With this,
echo "newdirectory" | (mkdir && cd)
you connect the same standard input to both mkdir and cd. A program/command needs to know whether it should read data from stdin, and what to do with it. Neither mkdir nor cd does this; they expect you to give them command line arguments.
Even if the commands could read data from standard input, in this case mkdir would consume the input and leave nothing for cd. In other cases where you connect the same pipe to several commands/processes, you cannot determine which one of them will read the data.
Moreover, the parentheses in (mkdir && cd) mean that the commands run in a subshell. But cd affects only the current shell, so you would not be able to observe any effect of the cd command.
mkdir `echo NewDirectorName`
also uses the output of a program as an argument to another program.
Another way to accomplish this is with the xargs command.
echo NewDirectoryName | xargs mkdir
@nos's answer is the most correct for your situation, though.
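The variable-based pattern is often wrapped in a small helper function; mkcd is a hypothetical name, not a standard command. Running it in a subshell below also demonstrates the point above: the directory gets created, but the calling shell never moves.

```shell
#!/bin/bash
# Hypothetical helper: create a directory and change into it in one step.
mkcd() {
    mkdir -p -- "$1" && cd -- "$1"
}

# Run it in a subshell: the directory is created,
# but the caller's working directory is unchanged.
( mkcd "newdirectory" && pwd )
[ -d newdirectory ] && echo "created"
```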

Running multiple scripts from bash on parallel without printing output to console

Suppose I have multiple file paths to run from the terminal.
I want them to run in parallel in the background without printing their output to the console. (Their output should be saved to some other log path, which is defined in the python file itself.)
The paths are in this format:
/home/Dan/workers/1/run.py
/home/Dan/workers/2/run.py etc.
When I run one worker in the background, it seems to work.
For example: cd /home/Dan/workers/1/
and python run.py > /dev/null 2>&1 &
ps -ef | grep python indeed shows the script running in the background, printing nothing to the console but writing to its predefined log path.
However, when I try to launch them all via a bash script, no python scripts are running after the following code:
#!/bin/bash
for path in /home/Dan/workers/*
do
if [-f path/run.py ]
then
python run.py > /dev/null 2>&1 &
fi
done
Any idea what the difference is?
In the bash script I try to launch the scripts one after another, just as I did for a single script.
#!/bin/bash
for path in /home/Dan/workers/*
do # note: ${} added below
if [ -f "${path}/run.py" ]
then
python "${path}/run.py" > /dev/null 2>&1 &
# or (cd $path; python run.py > /dev/null 2>&1 &) like Terje said
fi
done
wait
Use ${path} instead of just path. path is the name of the variable, but what you want when testing the file is the value stored in path. To get that, prefix it with $. Note that $path will also work in most situations, but ${path} is clearer about exactly which variable you mean. Especially when learning bash, I recommend sticking with the ${...} form.
Edit: Put the whole name in double-quotes in case ${path} contains any spaces.
#!/bin/bash
There is nothing bash-specific about this script; write #! /bin/sh instead. (Don't write bash-specific scripts, ever; if a bash-specific feature appears to be the easiest way to solve a problem, that is your cue to rewrite the entire thing in a better programming language instead.)
for path in /home/Dan/workers/*
do
This bit is correct.
if [-f path/run.py ]
... but this is wrong. Shell variables are not like variables in Python. To use (the jargon term is "expand") a shell variable you have to put a $ in front of it. Also, you need to put double quotation marks around the entire "word" containing the shell variable to be expanded, or "word splitting" will happen, which you don't want. (There are cases where you want word splitting, and then you leave the double quotes out, but only do that when you know you want word splitting.) Also also, there needs to be a space between [ and -f. Putting it all together, this line should read
if [ -f "$path/run.py" ]
then
python run.py > /dev/null 2>&1 &
fi
The run.py on this line should also read "$path/run.py". It is possible, depending on what each python script does, that you instead want the entire line to read
( cd "$path" && exec python run.py > /dev/null 2>&1 ) &
I can't say for sure without knowing what the scripts do.
done
There should probably be another line after this reading just
wait
so that the outer script does not terminate until all the workers are done.
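Putting the corrections together, here is a self-contained sketch. It uses stub workers under ./workers so it can run anywhere (the /home/Dan/workers layout is the asker's), assumes python3 on PATH, and redirects each worker into a per-worker log instead of /dev/null so there is something to inspect:

```shell
#!/bin/bash
# Stub workers standing in for the real run.py scripts.
mkdir -p workers/1 workers/2
printf 'print("worker done")\n' > workers/1/run.py
printf 'print("worker done")\n' > workers/2/run.py

for path in workers/*
do
    if [ -f "$path/run.py" ]
    then
        # cd into the worker's directory, then launch it in the background.
        ( cd "$path" && exec python3 run.py > out.log 2>&1 ) &
    fi
done
wait   # do not exit until every background worker has finished
```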
One important difference is that your bash script does not cd into the subdirectories before calling run.py
The second-to-last line should be
python "${path}/run.py" > /dev/null 2>&1 &
or
(cd "$path"; python run.py > /dev/null 2>&1 &)
Another option is not to invoke them via the loop at all. Your paths suggest you have multiple scripts.
Add
#!/bin/python at the top of each python script (or whichever path your python is installed at), make the scripts executable,
and then launch each one in the background:
/home/Dan/workers/1/run.py > /dev/null 2>&1 &
/home/Dan/workers/2/run.py > /dev/null 2>&1 &
Note that chaining them with && instead would run them sequentially: the second would only start once the first finishes, which is not what you want here.

Using at to run a .txt file or command with flags

I need to run (in bash) a .txt file containing a bunch of commands written to it by another program, at a specific time, using at. Normally I would run it with bash myfile.txt, but if I try at bash myfile.txt midnight it doesn't like it, saying
syntax error. Last token seen: b
Garbled time
How can I sort this out?
Try this instead:
echo 'bash myfile.txt' | at midnight
at reads commands from standard input or from a specified file (at -f myfile.txt midnight); not from the command line.

Read and execute lines with spaces from file in bash

I'm facing a problem in a bash shell script when I try to read some lines from a file and execute them one by one. The problem occurs when a line has an argument containing spaces. Code:
while read i
do
$i
done < /usr/bin/tasks
tasks file:
mkdir Hello\ World
mkdir "Test Directory"
Both of the above instructions work perfectly when executed directly from the terminal, creating just two directories, "Hello World" and "Test Directory". The same doesn't happen when the instructions are read and executed from the script: four directories are created instead.
Having said that, I would like to keep my code as simple as possible and, if possible, I'd prefer not to use the cat command. Thanks in advance for any help.
As simple as possible? You are re-implementing the . (or source, as bash allows you to spell it) command:
. /usr/bin/tasks
or
source /usr/bin/tasks
To execute one line at a time, use eval.
while IFS= read -r i; do
eval "$i"
done
This assumes that each line of the file contains one or more complete commands that can be executed individually.
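A self-contained check of the eval loop (using a local tasks file instead of /usr/bin/tasks):

```shell
#!/bin/bash
# Two commands whose arguments contain spaces, one per line.
printf '%s\n' 'mkdir "Test Directory"' 'mkdir Hello\ World' > tasks

# Plain $i would treat the quotes and backslashes in the line literally,
# splitting each argument into two; eval re-parses the line first.
while IFS= read -r i
do
    eval "$i"
done < tasks
```

After this runs, exactly two directories exist: "Test Directory" and "Hello World".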

What does this shell script line of code mean

I need some help understanding following shell script line,
apphome = "`cd \`dirname $0\` && pwd && cd - >/dev/null`"
All I understand is, this is creating a variable called apphome.
This is not valid shell code.
The shell doesn't allow spaces around =.
As for the rest: while it looks broken, it tries to cd into the directory of the script itself, print the current directory with pwd, and finally cd - back to the previous directory, redirecting that last command's standard output to /dev/null. (cd - prints the directory it returns to on stdout; that is what the redirect discards.)
If you want to do this in a proper a simple way :
apphome="$(dirname "$0")"
That's all you need.
NOTE
The backquote
`
is used in the old-style command substitution, e.g.
foo=`command`
The
foo=$(command)
syntax is recommended instead. Backslash handling inside $() is less surprising, and $() is easier to nest. See http://mywiki.wooledge.org/BashFAQ/082
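For the original line's apparent intent, an absolute path to the script's directory, a common $() idiom is the following sketch (quoting throughout protects paths containing spaces):

```shell
#!/bin/bash
# Resolve the absolute directory containing this script:
# cd into its directory in a command substitution subshell, then pwd.
apphome="$(cd "$(dirname "$0")" && pwd)"
echo "$apphome"
```

Because the cd happens inside the command substitution's subshell, the calling script's working directory is untouched, so no cd - is needed.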
It assigns the output of a command to the "apphome" variable (the backquotes run the command right away, at assignment time).
dirname returns the directory portion of a file name. $0 is the name of the script that contains this line (if I am not mistaken).
Now, executing dirname <name> returns a directory, and cd uses that value.
So, what it does is execute three commands in a row, assuming that each one of them succeeds. The commands are:
cd `dirname [name of the script]`
pwd
cd -
The first command changes to the directory containing your script; the second prints the current directory; the third takes you back to the original directory, with its output suppressed.
In summary, apphome ends up containing the name of the directory that holds the script with the line in question.
At least, this is how I understand it.
