How do I make a Perl script stop when running Matlab code fails? - linux

I would like to make the Perl script run some Matlab code, then wait, then run another Matlab code in Linux. If the Matlab code fails, it should give an error message. The Perl script below runs all the way through even when Matlab code 1 or 2 has an error. How do I make the Perl script stop and give an error message when the Matlab code fails?
print("run Matlab code 1!\n");
`matlab -nodisplay -r myfile1`;
print("run Matlab code 2!\n");
`matlab -nodisplay -r myfile2`;
print("End!\n");

First, store the return code of the command you are running:
my $returnCode = system("matlab -nodisplay -r myfile1");
Then, before you move to the next step, make sure the return code is 0 (or whatever indicates success in your case):
if ($returnCode != 0) {
die "Command did not finish successfully.";
}
Just determine what is a valid return code and tell the script to die in any other case.
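If you would rather drive the two runs from a shell script than from Perl, the same exit-status check applies. A minimal bash sketch, assuming myfile1 and myfile2 signal failure through a non-zero exit status (for example by calling exit(1) in MATLAB on error):
#!/bin/sh
# Stop at the first MATLAB run that reports failure.
echo "run Matlab code 1!"
matlab -nodisplay -r myfile1 || { echo "Matlab code 1 failed" >&2; exit 1; }
echo "run Matlab code 2!"
matlab -nodisplay -r myfile2 || { echo "Matlab code 2 failed" >&2; exit 1; }
echo "End!"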

Related

How to make a .m file read an input csv file passed as a parameter?

I am new to Matlab and am having difficulty making a .m file read the input csv file that I pass as an argument from the command prompt. I understand that a function has to be written to read the input file as a parameter. Here is the code I wrote inside the .m file to accept the input file:
function data=input(filename);
addpath(genpath('./matlab_and_R_scripts'));
tic
D=csvread(filename,1,1);
I want the filename passed as an argument to be read by the function "csvread" and saved in D. I am using the following command to execute the script:
matlab -nodisplay -nosplash -nodesktop -r "input 'exp2_1_DMatrix.csv';run('matlab_filename.m');exit;"
I am able to execute the script without any errors, but it is not reading the input file: the downstream analysis should have saved a new file if it had been able to read the file and run some functions on it.
Can anyone please suggest how to read the input file in my Matlab script, and the proper command to pass it?
Try using this function. Take care not to use reserved names (input is a built-in Matlab function):
readdat.m
function data=readdat(filename);
addpath(genpath('./matlab_and_R_scripts'));
tic;
data=csvread(filename,1,1);
toc;
For testing, you can run this function directly in the Command Window and edit it in the Matlab Editor, which is where most functions are tested until you are fully satisfied with your processing.
>> data=readdat(filename);
By inspecting data you can tell whether or not the file was read as it should be.
>> data(:,1)
You can keep running other scripts, such as matlab_filename.m. But the best choice is to have one function that does everything:
processdat.m
function data=processdat(filename)
% Original Function
addpath(genpath('./matlab_and_R_scripts'));
tic;
data=csvread(filename,1,1);
toc;
% Paste in here all the matlab_filename.m code, or do a call:
matlab_filename;
% Do not uncomment this exit here, since you want to keep working until the function does what you need
% exit;
Later, if you want some serious automation and need your code to run repeatedly on a schedule, you can of course arrange a Windows bat file, and in that case you can uncomment the exit; terminator at the end of processdat.m.
processdat.bat
matlab -nodisplay -nosplash -nodesktop -r processdat
Remember to ensure the OS can access matlab, and that Matlab can access processdat. If in doubt, place the proper paths:
processdat.bat
c:\programs\bin\matlab -nodisplay -nosplash -nodesktop -r c:\files\processdat.m
I solved the problem by gaining some insight from @Brethlosze's answer. If the function does not need to return anything, it should not name an output variable but declare an empty [] instead. Here is what I did to pass an input argument in my myScript.m script:
function [] = myScript(input_file, output_file)
addpath(genpath('../matlab_and_R_scripts'));
tic
D=csvread(input_file,1,1);
% Some code operations
save(output_file,'save_what_you_want')
toc
end
And I executed the script from command line using the following command:
matlab -nodisplay -nosplash -nodesktop -r "myScript 'example.csv' 'example.mat'"
The input_file is 'example.csv' and output_file is 'example.mat'.
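If you prefer MATLAB's function-call syntax over the command syntax shown above, an equivalent invocation (a sketch using the same placeholder file names) would be:
matlab -nodisplay -nosplash -nodesktop -r "myScript('example.csv','example.mat'); exit"
Here the arguments are passed as quoted character vectors explicitly, which is what the command-syntax form above does implicitly.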

cron script needs conditional to run

I have two python scripts that I'm trying to run on my server.
I currently have process_one running through a cron job every five minutes. I want to add the second script to the cron job.
I was told by the freelancer that both programs can run automatically by writing a shell script. If process_one generates data in its output_folder (i.e.
process_two's input_folder) then it will return system status "0" (OK) to the operating system; otherwise it returns an
ERROR signal - even in the case of "no errors, yet nothing new produced".
I'm at a loss; I've never written shell scripts before. I'm looking on here and elsewhere but I don't know how to write this. Any help would be appreciated.
/path/to/process1/process_one && /path/to/process2/process_two
What you have for the cron job is correct: process_one will run, and if it succeeds, process_two will run. The only thing that is missing is for process_two to check for new data in the process_one output directory before it runs.
I would suggest that you use a few lines at the top of process_two python script to check for recent data and exit if not found. Either you can modify the python script itself to do this, or write a bash wrapper as process_two that simply calls the python script after checking for recent data.
py
import os.path, sys, time
# filePath should point at process_one's newest output file
fileCreation = os.path.getctime(filePath)
now = time.time()
fivemin_ago = now - 300
if fileCreation < fivemin_ago:
    sys.exit("data was older than five minutes")
bash
NOW=$(date +%s)
MODIFIED=$(date -d "$(stat FILEPATH | grep Modify | awk '{print $2" "$3}')" +%s)
if [ $NOW -gt $((MODIFIED+300)) ]; then
    echo "data was older than five minutes"
    exit 1
else
    process_two.py
fi
The best way is to make a third file (a wrapper script) in which you call the first script, check its result, and depending on that result run the second script or not. It will be something like:
#!/usr/bin/env bash
# run the first command and capture its output
variable=$(download "something")
if [ -f "../path/your/file.pdf" ]; then
    # the file exists, so run your second command here
    /path/to/process2/process_two
fi
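Putting the pieces together, here is a minimal sketch of such a wrapper for this cron job. The script paths and the output folder are placeholders; adjust them to your layout:
#!/usr/bin/env bash
# Hypothetical wrapper, called from cron every five minutes.
OUTPUT_DIR="/path/to/process1/output_folder"   # placeholder: process_one's output folder

# Run the first script; stop here if it reports failure.
/path/to/process1/process_one || exit 1

# Only run the second script if something in the output folder
# was modified in the last five minutes.
if find "$OUTPUT_DIR" -type f -mmin -5 | grep -q .; then
    /path/to/process2/process_two
else
    echo "no new data, skipping process_two"
fi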

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want, it's not a major issue, but annoying.
1.) I have a third party software I run that produces some output as stderr. Some of it is useful, some of it is regularly stuff I don't care about and I don't want this dumped to screen, however I do want the useful parts of the stderr dumped to screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I have implemented dumps out my errors at the right time, but then returns a bash prompt; I want to summarise the status of the errors at the end of the function, but echoing there prints the text after the prompt, meaning that I have to press enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user#user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user#user-desktop:~/path$
However what I really get is:
user#user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user#user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is being printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimum working example, and it's contrived. While other solutions to my error stream problem are welcome I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
Stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, before the function has processed everything it still has to do.
I suggest you wrap the command in a script so you can control how long to wait before your prompt comes back (I suggest one second more than the time the function needs to process the remaining lines).
I managed to do this as follows:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user#host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user#host:~/path$
Note: the process substitution ">(ProcessErrors)" is a subprocess of the script "./TestErrorStream.sh". So when the script ends, the subprocess is no longer tied to it, nor to the wrapper. That's why we need that final "sleep 6".
#!/bin/bash
function ProcessErrors {
while read data; do
echo Line was:"$data"
done
sleep 5
echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.
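Another option, if you do not want to guess at sleep durations, is to run the command and the function as one foreground pipeline, so the shell itself waits for ProcessErrors to finish before printing the prompt. A sketch, assuming bash:
#!/bin/bash
source ./Functions.sh
# Send only stderr into ProcessErrors; stdout keeps going to the terminal via fd 3.
# Because this is a foreground pipeline, bash waits for ProcessErrors to finish,
# so "Completed" is printed before the prompt comes back.
{ ./TestErrorStream.sh 2>&1 1>&3 | ProcessErrors; } 3>&1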

Missing something in the linux terminal after launching matlab from the command line

I'm seeing some weird behaviour when launching Matlab from the command line in Linux.
I have a bash script in Linux that executes a function in Matlab from the command line and does other operations with custom functions written in C++, as follows:
#!/bin/bash
# prepare input data just to be sure it has not been written by other test!
matlab2011a -nodesktop -nosplash -r "prepare_data_matlab( 'A' ); quit"
# launch C++ program
...
# prepare more data
matlab2011a -nodesktop -nosplash -r "prepare_data_matlab( 'B' ); quit"
When the script has finished, I cannot see what I am typing in the terminal, although the commands take effect. I need to reset the terminal.
The fact is that everything works fine if I only launch Matlab with prepare_data_matlab( 'A' ), but the problem appears when I execute the function with option prepare_data_matlab( 'B' ).
I have commented the code out line by line and found that the problem is with option B, which calls the function
dlmwrite(file_name, B, ' ');
which is not used in prepare_data_matlab( 'A' ).
So, how should I launch Matlab from the command line to avoid this behaviour? Is there a known bug in the dlmwrite() function?
I'm using Ubuntu 12.04 64-bit, GNU bash version 4.2.24(1)-release (x86_64-pc-linux-gnu) and matlab2011a.
EDITED: The terminal output generated for prepare_data_matlab( 'A' ) and for prepare_data_matlab( 'B' ) is not reproduced here.
EDITED: file_name is created as strcat(path_to_data,f); where path_to_data = /tmp/ and f = data_out.txt. Matrix B is not displayed before or after.
The only output to the terminal before or after the MATLAB script is generated from the bash script as follows:
echo "#### SELECT DATA FROM WORKSPACE ####"
matlab2011a -nodesktop -nosplash -r "prepare_data_matlab( 'B' ); quit";
echo "#### Process Data as input in a C++ programs ####"
The MATLAB function selects data from the workspace and saves it to disk as follows:
function [ ] = prepare_data_matlab( type )
if strcmp(type,'A')
% load data from workspace
load ('workspace_with_my_arrays.mat', 'A');
% save data as a standalone variable
save('/tmp/A.mat', 'A');
elseif strcmp(type,'B')
% load data from workspace
load ('workspace_with_my_arrays.mat', 'B');
path_to_data = '/tmp/';
f = 'data_out.txt';
file_name = strcat(path_to_data,f);
% save data as a txt file
dlmwrite(file_name, B, ' ');
end
end
EDITED: whos -file workspace_with_my_arrays.mat
Name Size Bytes Class Attributes
A 610x340x103 170897600 double
B 610x340x103 170897600 double
P 610x340 1659200 double
t1 38855x100 31084000 double
t2 3921x2x100 6273600 double
There are more arrays in the workspace, but those are the ones I load.
The prepare_data_matlab function is the same as posted above but with argument error checking, as follows:
%% Load data from file
% Data is saved in a MATLAB variable or in TXT
if nargin ~= 1
error('Use: prepare_data_matlab( [ A | B ] )')
end
and the following command:
cd /data/matlab;
which is executed after the argument error check in both cases (option A and option B), that is, before the if statement.
The problem is not with dlmwrite. This seems to be a bug in some versions of MATLAB, as reported in this link.
The proposed solution (if you have a buggy version of MATLAB) is to use nohup:
nohup matlab -nodesktop -nosplash -r ...........
UPDATE:
Per @Amro's suggestion, @pQB reported the problem to MathWorks Support. Their response was:
The problem is a known issue in versions prior to R2012a. Run MATLAB under a different shell. For example, neither tcsh nor zsh has this issue.
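If switching your interactive shell is not an option, you can run just the MATLAB step under another shell from inside your bash script. A sketch, assuming zsh is installed:
# Run only the MATLAB invocation under zsh, per the MathWorks suggestion above.
zsh -c "matlab2011a -nodesktop -nosplash -r \"prepare_data_matlab( 'B' ); quit\""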
OLD answer:
The problem is not with dlmwrite, but with the content of your matrix. Furthermore, unless file_name points to stdout (e.g., file_name='/dev/stdout';), the dlmwrite function will not write anything to the screen and will not mess up your terminal. Either file_name points to stdout or you are displaying the matrix B right before (or after) the dlmwrite call.
In any case, the problem is with the contents of your matrix B (see the strange characters in your output). You need to fix the problem with your matrix B. Perhaps the method you are using to read its input data is faulty.
If you want to ignore output from MATLAB (like the banner printed at the beginning), launch the process and redirect both the standard output and standard error to the /dev/null device:
#!/bin/sh
echo '### running MATLAB ###'
matlab -nodesktop -nosplash -r "..." > /dev/null 2>&1
echo '### done ###'
./other_script.sh
matlab -nodesktop -nosplash -r "..." > /dev/null 2>&1
Note that you should be careful, since the MATLAB process may return immediately, possibly before it has finished running, which could cause problems if your next program depends on files produced by MATLAB. See here for a possible solution.
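Separately, if a run does leave the terminal in a broken state (the original symptom), a common workaround is to restore the terminal settings right after the MATLAB call. A sketch:
matlab2011a -nodesktop -nosplash -r "prepare_data_matlab( 'B' ); quit"
stty sane   # or: reset; restores echo and other tty settings the MATLAB run may have clobbered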

Any way to exit bash script, but not quitting the terminal

When I use the exit command in a shell script, the script terminates the terminal (the prompt). Is there any way to terminate the script but stay in the terminal?
My script run.sh is expected to be executed by being sourced directly, or sourced from another script.
EDIT:
To be more specific, there are two scripts run2.sh as
...
. run.sh
echo "place A"
...
and run.sh as
...
exit
...
When I run it with . run2.sh and it hits the exit line in run.sh, I want it to stop and drop back to the terminal, staying there. But using exit, the whole terminal gets closed.
PS: I have tried using return, but the echo line still gets executed....
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
Yes; you can use return instead of exit. Its main purpose is to return from a shell function, but if you use it within a source-d script, it returns from that script.
As §4.1 "Bourne Shell Builtins" of the Bash Reference Manual puts it:
return [n]
Cause a shell function to exit with the return value n.
If n is not supplied, the return value is the exit status of the
last command executed in the function.
This may also be used to terminate execution of a script being executed
with the . (or source) builtin, returning either n or
the exit status of the last command executed within the script as the exit
status of the script.
Any command associated with the RETURN trap is executed
before execution resumes after the function or script.
The return status is non-zero if return is used outside a function
and not during the execution of a script by . or source.
You can add an extra exit command after the return statement/command so that it works for both cases: executing the script from the command line and sourcing it from the terminal.
Example exit code in the script:
if [ $# -lt 2 ]; then
echo "Needs at least two arguments"
return 1 2>/dev/null
exit 1
fi
The line with the exit command will not be reached when you source the script, because the return command stops it first.
When you execute the script, the return command gives an error, so we suppress the error message by redirecting it to /dev/null.
Instead of running the script using . run2.sh, you can run it using sh run2.sh or bash run2.sh.
A new sub-shell will be started to run the script; it will be closed at the end of the script, leaving the original shell open.
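As a quick illustration (run2.sh is the script from the question):
# Executed in a sub-shell: exit inside run.sh ends only that sub-shell.
bash run2.sh
echo "back in the original shell (exit status was $?)"   # still reached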
Actually, I think you might be confused about how you should run a script.
If you use sh to run a script, say sh ./run2.sh, then even if the embedded script ends with exit, your terminal window will remain open.
However, if you use . or source, your terminal window will exit/close as well when the subscript ends.
For more detail, please refer to What is the difference between using sh and source?
This is just like putting a run function inside your script run2.sh.
You use exit inside run while sourcing your run2.sh file in the bash tty.
If you give the run function the power to exit your script, and give run2.sh
the power to exit the terminal,
then of course the run function ends up with the power to exit your terminal.
#! /bin/sh
# use . run2.sh
run()
{
echo "this is run"
#return 0
exit 0
}
echo "this is begin"
run
echo "this is end"
Anyway, I agree with Kaz that it's a design problem.
I had the same problem; from the answers above and from what I understood, what ultimately worked for me was:
Have a shebang line that invokes the intended interpreter, for example,
#!/bin/bash uses bash to execute the script.
I have scripts with both kinds of shebangs. Because of this, using sh or . was not reliable, as it led to mis-execution (like when the script bails out having run incompletely).
The answer, therefore, was:
Make sure the script has a shebang, so that there is no doubt about its intended handler.
chmod the .sh file so that it can be executed. (chmod +x file.sh)
Invoke it directly without any sh or .
(./myscript.sh)
Hope this helps someone with similar question or problem.
To write a script that is safe to be run either as a shell script or sourced as an rc file, the script can compare $0 and $BASH_SOURCE to determine whether exit can safely be used.
Here is a short code snippet for that
name_src=$(basename "$BASH_SOURCE")
[ "X$(basename $0)" = "X$name_src" ] && \
echo "***** executing $name_src as a shell script *****" || \
echo "..... sourcing $name_src ....."
I think that this happens because you are running it in source mode,
with the dot:
. myscript.sh
You should run that in a subshell:
/full/path/to/script/myscript.sh
'source' http://ss64.com/bash/source.html
It's correct that sourced vs. executed scripts use return vs. exit to keep the same session open, as others have noted.
Here's a related tip, if you ever want a script that should keep the session open, regardless of whether or not it's sourced.
The following example can be run directly like foo.sh or sourced like . foo.sh/source foo.sh. Either way it will keep the session open after "exiting". The "$@" arguments are passed so that the function has access to the outer script's arguments.
#!/bin/sh
foo(){
read -p "Would you like to XYZ? (Y/N): " response;
[ "$response" != 'y' ] && return 1;
echo "XYZ complete (args $#).";
return 0;
echo "This line will never execute.";
}
foo "$@";
Terminal result:
$ foo.sh
$ Would you like to XYZ? (Y/N): n
$ . foo.sh
$ Would you like to XYZ? (Y/N): n
$ |
(terminal window stays open and accepts additional input)
This can be useful for quickly testing script changes in a single terminal while keeping a bunch of scrap code underneath the main exit/return while you work. It could also make code more portable in a sense (if you have tons of scripts that may or may not be called in different ways), though it's much less clunky to just use return and exit where appropriate.
Also make sure to return with the expected return value. Otherwise, if you use exit, then when the exit is encountered it will exit your base shell, since source does not create another process (instance).
An improved version of Tzunghsing's answer, with clearer results and error redirection, for silent usage:
#!/usr/bin/env bash
echo -e "Testing..."
if [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ]; then
echo "***** You are Executing $0 in a sub-shell."
exit 0
else
echo "..... You are Sourcing $BASH_SOURCE in this terminal shell."
return 0
fi
echo "This should never be seen!"
Or if you want to put this into a silent function:
function sExit() {
# Safe Exit from script, not closing shell.
[ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ] && exit 0 || return 0
}
...
# ..it has to be called with an error check, like this:
sExit && return 0
echo "This should never be seen!"
Please note that:
if you have enabled errexit in your script (set -e) and you return N with N != 0, your entire script will exit instantly. To see all your shell settings, use set -o.
when used in a function, the 1st return 0 exits the function, and the 2nd return 0 exits the script.
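To illustrate the first point, a small sketch (the function name is made up) of what errexit does to a non-zero return:
#!/usr/bin/env bash
set -e                      # errexit enabled

check_something() {
    return 1                # non-zero return value
}

check_something             # with set -e, the whole script stops right here
echo "never printed"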
if your terminal emulator doesn't have -hold you can sanitize a sourced script and hold the terminal with:
#!/bin/sh
sed "s/exit/return/g" script >/tmp/script
. /tmp/script
read
otherwise you can use $TERM -hold -e script
If a command succeeds, its return value will be 0. We can check the return value afterwards.
Is there a “goto” statement in bash?
Here is a dirty workaround using trap, which only jumps backwards.
#!/bin/bash
set -eu
trap 'echo "E: failed with exitcode $?" 1>&2' ERR
my_function () {
if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
echo "this is run"
return 0
else
echo "fatal: not a git repository (or any of the parent directories): .git"
goto trap 2> /dev/null
fi
}
my_function
echo "Command succeeded" # If my_function failed this line is not printed
Related:
https://stackoverflow.com/a/19091823/2402577
How to use $? and test to check function?
I couldn't find a solution, so for those who want to leave a nested script without leaving the terminal window:
# this is just script which goes to directory if path satisfies regex
wpr(){
leave=false
pwd=$(pwd)
if [[ "$pwd" =~ ddev.*web ]]; then
# echo "you're in a wordpress installation"
wpDir=$(echo "$pwd" | grep -o '.*\/web')
cd "$wpDir"
return
fi
echo 'please be in wordpress directory'
# to leave from outside the scope
leave=true
return
}
wpt(){
# nested function which returns $leave variable
wpr
# interupts the script if $leave is true
if $leave; then
return;
fi
echo 'here is the rest of the script, which executes if leave is not true'
}
I have no idea whether this is useful for you or not, but in zsh, you can exit a script, but only to the prompt if there is one, by using parameter expansion on a variable that does not exist, as follows.
${missing_variable_ejector:?}
Though this does create an error message in your script, you can prevent it with something like the following.
{ ${missing_variable_ejector:?} } 2>/dev/null
1) exit 0 will exit the script, indicating success.
2) exit 1 will exit the script, indicating failure.
You can use either of the above two based on your requirement.
