How to run multiple ncverilog runs in parallel? - linux

I would like to run multiple ncverilog runs in parallel.
Normally, we run ncverilog with a run script like this.
Run.scr -
ncverilog blar~blar~
But this runs only one at a time. That means if I want to run 100 scripts, I have to start a new one only after the previous script ends. Instead, I want to run all 100 scripts simultaneously.
How do I run the scripts simultaneously?

Use GNU Parallel
parallel ncverilog bla bla
Normally, that will run as many ncverilogs in parallel as you have CPU cores, but you can add -j 25, for example, if you specifically want 25 to run in parallel.
If you need to supply lots of different parameters, just make a loop or script that generates the parameters and feeds a list of jobs into GNU Parallel like this:
for i in {0..99}; do
echo ncverilog $i <and some other parameters>
done | parallel -j 25
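If each of the 100 runs already has its own run script, another option (a sketch; the run_*.scr file names are only an assumption) is to hand the scripts themselves to GNU Parallel:
parallel -j 25 sh {} ::: run_*.scr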

Related

Parallel processing or threading in Shell script

I am new to parallel processing. I have some Python scripts which I must not change, for certain reasons. Each of these Python scripts uses only one CPU core and does some processing on an input image. I run these Python scripts from a shell script, one after another. Can I do parallel processing in the shell script, without touching the Python scripts, so that multiple CPU cores are used and the processing of the images gets faster?
Yes, start them with GNU Parallel.
So, if you want to run your script 10 times, with parameters 0..9:
parallel python yourScript.py {} ::: {0..9}
If you want to see what would run without actually running anything:
parallel --dry-run ...
If you want a progress meter:
parallel --progress ...
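Applied to the image case, a sketch (it assumes each script takes the image path as its only argument and that the images live in a directory called images, both of which are guesses):
parallel --progress python yourScript.py {} ::: images/*.png
Note that this speeds things up by processing several images at once; it does not make a single run of the script use more than one core.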

How to schedule jobs, pass arguments, and run them in parallel in Linux

I want to run a script at a specific time in the background. The job receives an input argument. In order to schedule a job, I found out that I should use the at command and run it like this:
at -f ./myjob now
and it works. However, when I want to run it with an argument, like this:
at -f ./myjob 1 now
it gives me the Garbled time error message. Does anyone have any idea how to solve the problem?
Update:
I want to run the job with different parameters in parallel, like this:
at -f ./myjob 1 now
at -f ./myjob 2 now
at -f ./myjob 3 now
The at command has the -f file option which reads commands from a file rather than standard input. Therefore put your commands in a file, for example cmds, which would contain the following:
./myjob 1
To run multiple jobs in parallel use the ampersand operator to fork each job:
./myjob 1 &
./myjob 2 &
./myjob 3
Then run:
at -f ./cmds now
More information can be found by reading the at man page, via man at.
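Putting the two steps together, a minimal sketch (the cmds file name comes from the answer above; the three jobs are the ones from the update):
# write the job list once, then hand the whole file to at
cat > cmds <<'EOF'
./myjob 1 &
./myjob 2 &
./myjob 3 &
wait
EOF
at -f ./cmds now
Replacing now with a time specification such as 23:00 or now + 1 hour schedules the whole parallel batch for later; the wait keeps the at job alive until all three runs have finished.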

Run two shell scripts in parallel and capture their output

I want to have a shell script which configures several things and then calls two other shell scripts. I want these two scripts to run in parallel, and I want to be able to get and print their live output.
Here is my first script, which calls the other two:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh
$path/instance1_commands.sh
These two processes try to deploy two different applications, and each of them takes around 5 minutes, so I want to run them in parallel and also see their live output so I know where they are in the deployment tasks. Is this possible?
Running both scripts in parallel can look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >instance2.out 2>&1 &
$path/instance1_commands.sh >instance1.out 2>&1 &
wait
Notes:
wait pauses until the children, instance1 and instance2, finish
2>&1 on each line redirects error messages to the relevant output file
& at the end of a line causes the main script to continue running after forking, thereby producing a child that is executing that line of the script concurrently with the rest of the main script
each script should send its output to a separate file. Sending both to the same file will be visually messy and impossible to sort out when the instances generate similar output messages.
you may read the output files while the scripts are running with any reader, e.g. less instance1.out; however, the output may be stuck in a buffer and not up to date. To fix that, the programs would have to open stdout in line-buffered or unbuffered mode. It is also up to you to use -f or > to refresh the display; one option is sketched below.
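For example, from another terminal (assuming the instance1.out and instance2.out names used above):
tail -n +1 -f instance1.out instance2.out
GNU tail prints a ==> file <== header whenever the output switches from one file to the other, so the two streams stay distinguishable.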
Example D from an article on Apache Spark and parallel processing on my blog provides a similar shell script for calculating sums of a series for Pi on all cores, given a C program for calculating the sum on one core. This is a bit beyond the scope of the question, but I mention it in case you'd like to see a deeper example.
It is very possible; change your script to look like this:
#!/bin/bash
#CONFIGURE SOME STUFF
$path/instance2_commands.sh >> script.log &
$path/instance1_commands.sh >> script.log &
They will both output to the same file and you can watch that file by running:
tail -f script.log
If you like, you can output to two different files instead. Just change each line to redirect (>>) to a different file name.
This is how I ended up writing it, using Paul's instructions.
source $path/instance2_commands.sh >instance2.out 2>&1 &
source $path/instance1_commands.sh >instance1.out 2>&1 &
tail -q -f instance1.out -f instance2.out --pid $!
wait
sudo rm instance1.out
sudo rm instance2.out
The logs from my two processes were different, so I didn't care whether they were mixed together; that is why I put them all into one combined view.
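If you want the live tail to stay up until both scripts have finished (the --pid $! above only tracks the last script that was started), a variation along these lines should work; it is a sketch using the same file names:
source $path/instance1_commands.sh >instance1.out 2>&1 &
pid1=$!
source $path/instance2_commands.sh >instance2.out 2>&1 &
pid2=$!
# follow both logs while the deployments run
tail -q -f instance1.out instance2.out &
tail_pid=$!
# wait for both deployments, then stop the tail and clean up
wait "$pid1" "$pid2"
kill "$tail_pid"
rm instance1.out instance2.out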

Can multithreading be used to run 100 perl scripts in parallel?

Here is my problem:
I have 100 perl scripts which were created over time; each script takes its own time--from 5 minutes to 5 hours.
Today I am running all these scripts from command prompt in a sequential manner as a suite, and it takes close to 1.5 days to run all of them.
I am wondering if 100 command prompts can be opened simultaneously, and if I can run one perl script on each command prompt in parallel...so all my scripts can complete in 5 hours (the maximum time a single script takes).
Is this possible by any tool?
Can we use multithreading to achieve the above?
Please suggest the best way to approach this.
Instead of:
perl script1
perl script2
...
perl script100
you can do
perl script1 &
perl script2 &
...
perl script100 # no & here!
This is not exactly multithreading, though.
If you have all scripts, and only those scripts in a dedicated directory (say parscripts), you can do the following:
for s in parscripts/*.pl; do perl "$s" & done
wait
echo "All scripts completed."
But this, of course, presupposes that the scripts are independent! See also Klas Lindbäck's answer.
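If you would rather not have all 100 scripts competing for the machine at once, GNU Parallel can cap how many run at a time; a sketch, reusing the same hypothetical parscripts directory:
parallel -j 8 perl ::: parscripts/*.pl
xargs would work too, e.g. ls parscripts/*.pl | xargs -P 8 -n 1 perl, at the cost of less tidy handling of the interleaved output.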
Starting scripts in parallel is easy.
In Linux/Unix just add an ampersand at the end of each command to start it in the background.
Example:
myscript &
You need to be aware of 2 things:
Some scripts may have dependencies to each other, so that they should not be started until some other script has completed.
The total time may be longer than 5 hours because of bottlenecks when several scripts run in parallel.
The first problem is solved by grouping dependent scripts into script files, so your start script may look something like this:
#!/bin/sh
perl script1 &
perl script2 &
script_group1 &
script_group2 &
...
Where a script group would look something like:
#!/bin/sh
# Note that there is no '&' at the end of these lines,
# because they need to run consecutively:
perl dependentscript1
perl dependentscript2
perl dependentscript3
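Putting the pieces together, a sketch of the full start script; the wait and the final echo are additions so the parent script only reports completion once every group is done:
#!/bin/sh
perl script1 &
perl script2 &
script_group1 &
script_group2 &
wait
echo "All scripts and script groups have finished."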
If these scripts need to be run regularly, you should consider writing a shell script that calls them, or a makefile.
A makefile should be used when there are dependencies between the various scripts and you need to express “foo needs to be run before bar”. The make program will then automatically find an order that satisfies these dependencies. You can also specify how many parallel jobs make should start: make -j 4 for four parallel jobs.
A makefile consists of rules, each with a target, its dependencies, and a recipe. In the recipe, each line is taken to be a shell command. The command will be printed to the terminal and then executed. To suppress the printing, prefix the command with @. Example:
foo: bar something_else
<tab >@echo "I am about to execute the foo command:"
<tab >perl /some/path/foo.pl
bar:
<tab >@echo "I am about to execute the bar:"
<tab >perl /some/path/bar.pl
something_else:
<tab >perl /some/path/something.pl | perl /some/path/else.pl >/some/path/output.txt
The <tab > must be changed to a literal tab character. Indentation with spaces doesn't work.
The disadvantage of this solution is that the makefile is three times as long as a simple shell script. The advantage is that you can directly specify how many parallel jobs you want (this gives even load without too much idling), and you don't have to manually order the scripts like Klas Lindbäck proposed in his answer. With make you'd just have to specify the actual dependencies.
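For the 100-script case specifically, the makefile itself can be generated; here is a rough sketch (the parallel.mk file name and the parscripts directory are assumptions carried over from the earlier answer, and it presumes the scripts are all independent):
# write one phony target per script; \t becomes the literal tab make requires
{
  printf 'SCRIPTS := $(wildcard parscripts/*.pl)\n'
  printf '.PHONY: all $(SCRIPTS)\n'
  printf 'all: $(SCRIPTS)\n'
  printf '$(SCRIPTS):\n\tperl $@\n'
} > parallel.mk
make -j 10 -f parallel.mk
Marking the script paths as .PHONY forces make to run perl on every one of them even though the files already exist, and -j 10 keeps ten of them going at a time.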

Robot Framework parallel command execution

I have a test case containing multiple Execute Command calls (SSHLibrary) which run different commands in a Linux environment. The main thing I would like to do is run some of them in parallel. By default, Robot performs one command and, after it finishes, performs the next one.
For me this is not the desired behavior; I would like to have my command executed during the execution of the previous one. For example:
Execute Command ./script.sh
Execute Command ./script_parallel.sh
What I would like Robot to do:
Execute script.sh
During execution perform script_parallel.sh (which will finish before script.sh finishes)
Finish script.sh
Will it be possible to use GNU Parallel?
Execute Command parallel ::: ./script.sh ./script_parallel.sh
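If you also want to tell the two outputs apart, GNU Parallel's --tag option prefixes every output line with the command that produced it (a sketch along the same lines as the line above):
Execute Command    parallel --tag ::: ./script.sh ./script_parallel.sh
Adding --line-buffer as well would interleave the lines as they are produced instead of printing each script's output only when it finishes.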
Have you tried the Start Command keyword? It starts the command in the background and returns immediately. To verify successful execution of the commands you need Read Command Output.
