Copy data from production DB to Archival DB - Linux

Please suggest how to fix the error below.
My .sh file location: /archiveDB/service_arc.sh (x.x.x.85 server)
service_arc.sh script:
export PGPASSWORD=puser
/appdb/edb/install/9.5AS/bin/psql -h x.x.x.85 -p 5444 -U puser -d pdb -c "COPY (SELECT * FROM service_arc_old ) TO STDOUT;" | /appdb/edb/9.5as/bin/psql -h x.x.x.92 -p 2442 -U puser -d pdb -c "COPY service_arc FROM STDIN;"
nohup ./service_arc.sh > service_arc.log 2>&1 &
tail -f service_arc.log
Error:
nohup: ignoring input
./service_arc.sh: line 2: /appdb/edb/9.5as/bin/psql: No such file or directory
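The two psql paths in the pipeline differ: the source side uses /appdb/edb/install/9.5AS/bin/psql, but the destination side uses /appdb/edb/9.5as/bin/psql, which is the path the error says does not exist. Both halves of the pipe run locally on the .85 server, so both need a binary that exists there. Assuming the install tree used by the first command is the correct one (its half of the pipe evidently starts), a minimal fix is to point the second psql at the same path:

export PGPASSWORD=puser
/appdb/edb/install/9.5AS/bin/psql -h x.x.x.85 -p 5444 -U puser -d pdb \
  -c "COPY (SELECT * FROM service_arc_old) TO STDOUT;" \
  | /appdb/edb/install/9.5AS/bin/psql -h x.x.x.92 -p 2442 -U puser -d pdb \
    -c "COPY service_arc FROM STDIN;"

Run ls /appdb/edb/install/9.5AS/bin/psql first to confirm the binary is where the working half of the pipeline says it is.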


How to log output of multiple shell scripts

I am not familiar with this platform, so if this is in the frozen section my apologies :P
I am working on upgrading a Raspberry Pi script project. It decodes NOAA APT satellite images, and it runs from the at scheduler (I think) and shell scripts. The scripts are used to start recordings and do automatic processing.
I have been having some problems and am trying to get a log of what is processed by the scripts; there are 3. I have tried adding something like ...) >> log.txt to the files, but they are always empty.
I can't call them as sh -x script.sh >> log.txt because they are scheduled to trigger at different times and it would be a pain to replace all the calls.
Ideally I would like something I could add at the end of each script to log all the things they process and stick them in their own log files (script1.log, script2.log, script3.log).
Thanks!!
Jake
Edit: I was advised to post the scripts. These are not "mine": I got them off an Instructable and made some changes to fit my needs, and I would rather not screw them up more than I have. Ideally I would like something I could put after the #!/bin/bash line that would log all of the commands processed by the script.
Thanks!
Script 1, the main scheduling script. Some lines have been commented out because I don't use NOAA 15 or Meteor M2.
#!/bin/bash
# Update Satellite Information
wget -qr https://www.celestrak.com/NORAD/elements/weather.txt -O /home/pi/weather/predict/weather.txt
#grep "NOAA 15" /home/pi/weather/predict/weather.txt -A 2 > /home/pi/weather/predict/weather.tle
grep "NOAA 18" /home/pi/weather/predict/weather.txt -A 2 >> /home/pi/weather/predict/weather.tle
grep "NOAA 19" /home/pi/weather/predict/weather.txt -A 2 >> /home/pi/weather/predict/weather.tle
#grep "METEOR-M 2" /home/pi/weather/predict/weather.txt -A 2 >> /home/pi/weather/predict/weather.tle
#Remove all AT jobs
for i in `atq | awk '{print $1}'`;do atrm $i;done
#Schedule Satellite Passes:
/home/pi/weather/predict/schedule_satellite.sh "NOAA 19" 137.1000
/home/pi/weather/predict/schedule_satellite.sh "NOAA 18" 137.9125
#/home/pi/weather/predict/schedule_satellite.sh "NOAA 15" 137.6200
Script 2, the individual satellite scheduler. It uses information from the first script to find the times the satellite is passing overhead.
#!/bin/bash
PREDICTION_START=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" | head -1`
PREDICTION_END=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" | tail -1`
var2=`echo $PREDICTION_END | cut -d " " -f 1`
MAXELEV=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" | awk -v max=0 '{if($5>max){max=$5}}END{print max}'`
while [ `date --date="TZ=\"UTC\" @${var2}" +%D` == `date +%D` ]; do
    START_TIME=`echo $PREDICTION_START | cut -d " " -f 3-4`
    var1=`echo $PREDICTION_START | cut -d " " -f 1`
    var3=`echo $START_TIME | cut -d " " -f 2 | cut -d ":" -f 3`
    TIMER=`expr $var2 - $var1 + $var3`
    OUTDATE=`date --date="TZ=\"UTC\" $START_TIME" +%Y%m%d-%H%M%S`
    if [ $MAXELEV -gt 28 ]
    then
        echo ${1//" "}${OUTDATE} $MAXELEV
        echo "/home/pi/weather/predict/receive_and_process_satellite.sh \"${1}\" $2 /home/pi/weather/${1//" "}${OUTDATE} /home/pi/weather/predict/weather.tle $var1 $TIMER" | at `date --date="TZ=\"UTC\" $START_TIME" +"%H:%M %D"`
    fi
    nextpredict=`expr $var2 + 60`
    PREDICTION_START=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" $nextpredict | head -1`
    PREDICTION_END=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" $nextpredict | tail -1`
    MAXELEV=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" $nextpredict | awk -v max=0 '{if($5>max){max=$5}}END{print max}'`
    var2=`echo $PREDICTION_END | cut -d " " -f 1`
done
The final script takes care of recording the audio from the satellite at the specified frequency, adjusted for Doppler shift, auto-decodes/processes it, and posts it to my archive and web server.
#!/bin/bash
# $1 = Satellite Name
# $2 = Frequency
# $3 = FileName base
# $4 = TLE File
# $5 = EPOC start time
# $6 = Time to capture
sudo timeout $6 rtl_fm -f ${2}M -s 60k -g 45 -p 55 -E wav -E deemp -F 9 - | sox -t wav - $3.wav rate 11025
#pass start 150 was 90
PassStart=`expr $5 + 150`
if [ -e $3.wav ]
then
    /usr/local/bin/wxmap -T "${1}" -H $4 -p 0 -l 0 -o $PassStart ${3}-map.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e ZA $3.wav ${3}.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e NO $3.wav ${3}.NO.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e MSA $3.wav ${3}.MSA.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e MCIR $3.wav ${3}.MCIR.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e MSA-PRECIP $3.wav ${3}.MSA-PRECIP.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e EC $3.wav ${3}.EC.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e HVCT $3.wav ${3}.HVCT.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e CC $3.wav ${3}.CC.png
    /usr/local/bin/wxtoimg -m ${3}-map.png -e SEA $3.wav ${3}.SEA.png
fi
NOW=$(date +%m-%d-%Y_%H-%M)
mkdir /home/pi/weather/Pictures/${NOW}
sudo cp /home/pi/weather/*.png /home/pi/weather/Pictures/${NOW}/ #copy pictures to date folder in pi/Pictures
sudo mv /var/www/html/APT_Pictures/PREVIOUS/* /var/www/html/APT_Pictures/ARCHIVE #move previous to archive
sudo mv /var/www/html/APT_Pictures/LATEST/* /var/www/html/APT_Pictures/PREVIOUS #move latest pictures to previous folder
sudo cp /home/pi/weather/Pictures/${NOW} /var/www/html/APT_Pictures/LATEST -r #copies date folder to latest
sudo cp /home/pi/weather/*-map.png /home/pi/weather/Pictures/${NOW}/ #copies map to archive folder
##sudo mv /home/pi/weather/Pictures/${NOW}/*-map.png /home/pi/weather/maps #moves map from /pi/pictures date to maps folder
sudo rm /home/pi/weather/*.png #removes pictures from weather folder
sudo mv /home/pi/weather/*.wav /home/pi/weather/audio #moves audio to audio folder
Perhaps the scripts are outputting status messages to stderr instead of stdout (which is what your ...) >> log.txt method would have captured)?
Here's how I'd capture stdout and stderr for debugging purposes:
$ /bin/bash script1.sh 1>>script1_stdout.log 2>>script1_stderr.log
$ /bin/bash script2.sh 1>>script2_stdout.log 2>>script2_stderr.log
$ /bin/bash script3.sh 1>>script3_stdout.log 2>>script3_stderr.log
Or combine the two streams into a single log file:
$ /bin/bash script1.sh 1>>script1.log 2>&1
$ /bin/bash script2.sh 1>>script2.log 2>&1
$ /bin/bash script3.sh 1>>script3.log 2>&1
The "1" in 1>> refers to stdout and the "2" in 2>> refers to stderr.
Edit: If you want to keep seeing the stdout/stderr messages on the terminal and still write them to a file, use tee: it prints the stdin it receives and also writes a copy to the file path provided.
$ /bin/bash script1.sh 2>&1 | tee script1.log
$ /bin/bash script2.sh 2>&1 | tee script2.log
$ /bin/bash script3.sh 2>&1 | tee script3.log
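Since you mentioned wanting something you could put right after the #!/bin/bash line: bash's exec builtin can redirect the rest of a script's output from inside the script itself, so the scheduled at jobs don't need to change. A minimal sketch, assuming a logs directory you have created beforehand (the path is just an example):

#!/bin/bash
# From here on, everything this script writes to stdout or stderr
# is appended to its own log file, even when launched by at/cron.
exec >> /home/pi/weather/logs/script1.log 2>&1

Repeat in the other two scripts with script2.log and script3.log to get one log file per script.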

Unable to have AWS userdata echo into /etc/crontab

Long story short, I am trying to have my AWS userdata create a cron job on launch, but I am unable to echo into /etc/crontab. Here is a snippet of my code:
echo '# description of cronjob being added'
echo '0 16 * * 2,4,6 root some commands' | sudo tee -a /etc/crontab > /dev/null
echo ' ' | sudo tee -a /etc/crontab > /dev/null
Here is the answer, in case someone else needs to know:
"sudo sh -c 'echo \"# description of cronjob being addeded\" >> /etc/crontab' \n",
"sudo sh -c 'echo \"0 16 * * 2,4,6 root some commands\" >> /etc/crontab' \n",
"sudo sh -c 'echo \" \" >> /etc/crontab' \n",

Passing double-quotes through Weka machine learning

I am using the Weka CLI (specifically, the Primer), and I have tried many different combinations of passing several arguments, with no success. When I pass something like this:
weka_options=("weka.classifiers.functions.SMOreg -C 1.0 -N 0")
the program runs with no issue, but passing something like this:
weka_options=("weka.classifiers.functions.SMOreg -C 1.0 -N 0 -I \"weka.classifiers.functions.supportVector.RegSMOImproved -L 1.0e-3 -W 1 -P 1.0e-12 -T 0.001 -V\" -K \"weka.classifiers.functions.supportVector.NormalizedPolyKernel -C 250007 -E 8.0\"")
with or without escape characters, and even single-quoted, it brings me an error in my bash scripts:
bash ./weka.sh "$sub_working_dir" $train_percentage "$weka_options" $files_string > $predictions
where weka.sh contains:
java -Xmx1024m -classpath ".:$WEKAPATH" $weka_options -t "$train_set" -T "$test_set" -c 53 -p 53
Here is what I get:
---Registering Weka Editors---
Trying to add database driver (JDBC): jdbc.idbDriver - Error, not in CLASSPATH?
Weka exception: Can't open file No suitable converter found for '0.001'!.
Can anyone pinpoint the issue?
Updated question: here is the code:
# Usage:
#
# ./aca2_explore.sh working-dir datasets/*
# e.g.
# ./aca2_explore.sh "aca2-explore-working-dir/" datasets/*
#
# Place this script in the same folder as aca2.sh and the folder containing the datasets.
#
#
# Please note that:
# - All the notes contained in aca2.sh apply
# - This script will erase the contents of working-dir
# to properly sort negative floating point numbers, independently of local language options
export LC_ALL=C
# parameters parsing
output_directory=$1
first_file_index=2
files=${@:$first_file_index}
# global constants
datasets=$(($# - 1))
output_row=$(($datasets + 3))
output_columns_range="2-7"
learned_model_mae_column=4
results_learned_model_mae_column=4
# parameters
working_dir="$output_directory"
if [ -d "$working_dir" ];
then
rm -r "$working_dir"
fi
mkdir "$working_dir"
sub_working_dir="$working_dir""aca2-explore-sub-working-dir/"
path_to_results_file="$sub_working_dir""results.csv"
train_percentage=25
logfile="$working_dir""aca2_explore_log.csv"
echo "" > "$logfile"
reduced_log_header="Options,average_test_set_speedup,null_model_mae,learned_model_mae,learned_model_rmse,mae_ratio,R^2"
reduced_logfile="$working_dir""aca2_explore_reduced_log.csv"
echo "$reduced_log_header" > "$reduced_logfile"
sorted_reduced_logfile="$working_dir""aca2_explore_sorted_reduced_log.csv"
weka_options_list=(
"weka.classifiers.functions.LinearRegression -S 0 -R 1.0E-8"
"weka.classifiers.functions.MultilayerPerceptron -L 0.3 -M 0.2 -N 100 -V 0 -S 0 -E 20 -H a"
"weka.classifiers.meta.AdditiveRegression -S 1.0 -I 10 -W weka.classifiers.trees.DecisionStump"
"weka.classifiers.meta.Bagging -P 100 -S 1 -num-slots 1 -I 10 -W weka.classifiers.trees.REPTree -- -M 2 -V 0.001 -N 3 -S 1 -L -1 -I 0.0"
"weka.classifiers.meta.CVParameterSelection -X 10 -S 1 -W weka.classifiers.rules.ZeroR"
"weka.classifiers.meta.MultiScheme -X 0 -S 1 -B \"weka.classifiers.rules.ZeroR \""
"weka.classifiers.meta.RandomCommittee -S 1 -num-slots 1 -I 10 -W weka.classifiers.trees.RandomTree -- -K 0 -M 1.0 -V 0.001 -S 1"
"weka.classifiers.meta.RandomizableFilteredClassifier -S 1 -F \"weka.filters.unsupervised.attribute.RandomProjection -N 10 -R 42 -D Sparse1\" -W weka.classifiers.lazy.IBk -- -K 1 -W 0 -A \"weka.core.neighboursearch.LinearNNSearch -A \"weka.core.EuclideanDistance -R first-last\"\""
"weka.classifiers.meta.RandomSubSpace -P 0.5 -S 1 -num-slots 1 -I 10 -W weka.classifiers.trees.REPTree -- -M 2 -V 0.001 -N 3 -S 1 -L -1 -I 0.0"
"weka.classifiers.meta.RegressionByDiscretization -B 10 -K weka.estimators.UnivariateEqualFrequencyHistogramEstimator -W weka.classifiers.trees.J48 -- -C 0.25 -M 2"
"weka.classifiers.meta.Stacking -X 10 -M \"weka.classifiers.rules.ZeroR \" -S 1 -num-slots 1 -B \"weka.classifiers.rules.ZeroR \""
"weka.classifiers.meta.Vote -S 1 -B \"weka.classifiers.rules.ZeroR \" -R AVG"
"weka.classifiers.rules.DecisionTable -X 1 -S \"weka.attributeSelection.BestFirst -D 1 -N 5\""
"weka.classifiers.rules.M5Rules -M 4.0"
"weka.classifiers.rules.ZeroR"
"weka.classifiers.trees.DecisionStump"
"weka.classifiers.trees.M5P -M 4.0"
"weka.classifiers.trees.RandomForest -I 100 -K 0 -S 1 -num-slots 1"
"weka.classifiers.trees.RandomTree -K 0 -M 1.0 -V 0.001 -S 1"
"weka.classifiers.trees.REPTree -M 2 -V 0.001 -N 3 -S 1 -L -1 -I 0.0")
files_string=""
for file in ${files[@]}
do
    files_string="$files_string""$file"" "
done
#echo $files_string
for weka_options in "${weka_options_list[@]}"
do
    echo "$weka_options"
    echo "$weka_options" >> "$logfile"
    bash ./aca2.sh "$sub_working_dir" $train_percentage "$weka_options" $files_string
    cat "$path_to_results_file" >> "$logfile"
    result_columns=$(tail -n +"$output_row" "$path_to_results_file" | head -1 | cut -d, -f"$output_columns_range")
    echo "$weka_options"",""$result_columns" >> "$reduced_logfile"
    echo "" >> "$logfile"
done
tail -n +2 "$reduced_logfile" > "$sorted_reduced_logfile"
sort --field-separator=',' --key="$results_learned_model_mae_column" "$sorted_reduced_logfile" -o "$sorted_reduced_logfile"".tmp"
echo "$reduced_log_header" > "$sorted_reduced_logfile"
cat "$sorted_reduced_logfile"".tmp" >> "$sorted_reduced_logfile"
rm "$sorted_reduced_logfile"".tmp"
where the file aca2.sh is:
#!/bin/bash
# Run this script as ./script.sh working-directory train-set-filter-percentage "weka_options" datasets/*
#
# e.g.
# Place this script in a folder together with a directory containing your datasets. Call then the script as
# ./aca2.sh "aca2-working-dir/" 25 "weka.classifiers.functions.LinearRegression -S 0 -R 1.0E-8" datasets_folder/*
#
# NOTE: the script will erase the content of working-directory
# for correct behaviour $WEKAHOME environment variable must be set to the folder containing weka.jar, otherwise modify the call to the weka classifier below
#
# To define the error measures used in this script, I made use of some of the notions found in this article:
# http://scott.fortmann-roe.com/docs/MeasuringError.html
# parameters parsing
output_directory=$1
train_set_percentage=$2
if [ $train_set_percentage -lt 1 ] || [ $train_set_percentage -gt 100 ];
then
    echo "Invalid train set percentage: "$train_set_percentage
    exit 1
fi
weka_options=$3
first_file_index=4
files=${@:$first_file_index}
# global constants
predictions_characters_range_value="23-28"
predictions_characters_range_error="34-39"
tmp_dir="$output_directory"
if [ -d "$tmp_dir" ];
then
rm -r "$tmp_dir"
fi
mkdir "$tmp_dir"
results_header="testfile,average_test_set_speedup,null_model_mae,learned_model_mae,learned_model_rmse,mae_ratio,R^2"
results_file=$tmp_dir"results.csv"
echo "$results_header" > "$results_file"
arff_header="% ARFF conversion of CSV dataset
#RELATION program
#ATTRIBUTE ...
#DATA"
# global constants
datasets_per_program=5
entries_per_dataset=128
train_set_instances_to_select=$((datasets_per_program*entries_per_dataset*train_set_percentage/100))
all_prediction="$tmp_dir""all_predictions.txt"
count=0
prediction_efficiency_ideal_avg=0
arff_header_file="$tmp_dir""arff_header.txt"
echo "$arff_header" > "$arff_header_file"
count=0
for filename in ${files[@]}
do
    echo "Test set: $filename"
    echo "$filename" >> "$all_prediction"
    cur_dir="$tmp_dir$filename.dir/"
    mkdir -p $cur_dir
    testfile=$filename
    train_set="$cur_dir""train_set.arff"
    echo "$arff_header" > $train_set
    selected_train_subset="$cur_dir""selected_train_subset.csv"
    for trainfile in ${files[@]}
    do
        if [ "$trainfile" != "$testfile" ]; then
            # filter train set to feed only top 25% for model generation
            sort --field-separator=',' --key=53 "$trainfile" -o "$selected_train_subset"
            head -$train_set_instances_to_select "$selected_train_subset" >> $train_set
        fi
    done
    test_set="$cur_dir""test_set.arff"
    #echo "$arff_header" > $test_set
    cp "$testfile" "$test_set"
    # This file will contain the full configuration space dataset relative to the test program
    complete_test_set="$cur_dir""complete_test_set.csv"
    cp "$test_set" "$complete_test_set"
    sort --field-separator=',' --key=53 "$test_set" -o "$test_set"
    head -8 "$test_set" > "$test_set"".tmp"
    mv "$test_set"".tmp" "$test_set"
    cur_prediction="$cur_dir""cur_prediction.tmp"
    # generate basis for predicted test set file by copying the actual test set, removing speedups
    predicted_test_set="$cur_dir""predicted_test_set.csv"
    cp "$test_set" "$predicted_test_set"
    cut -d, -f53 --complement "$predicted_test_set" > "$predicted_test_set"".tmp"
    mv "$predicted_test_set"".tmp" "$predicted_test_set"
    cat "$arff_header_file" "$test_set" > "$test_set"".tmp"
    mv "$test_set"".tmp" "$test_set"
    java -Xmx1024m -classpath ".:$WEKAHOME/weka.jar:$WEKAJARS/*" $weka_options -t "$train_set" -T "$test_set" -c 53 -p 53 | tail -n +6 | head -8 > "$cur_prediction"
    predictions_file="$cur_dir""predictions.csv"
    cut -c"$predictions_characters_range_value" "$cur_prediction" | tr -d " " > "$predictions_file"
    paste -d',' "$actual_speedups" "$predictions_file" > "$predictions_file"".tmp"
    mv "$predictions_file"".tmp" "$predictions_file"
done
You almost have this right; it looks like you were trying to do the right thing (or at least getting accidentally close).
You cannot use a string for arbitrarily quoted arguments (this is Bash FAQ 050).
You need to use an array instead, with a separate element for each argument, not just one element holding the whole command line.
weka_options=(weka.classifiers.functions.SMOreg -C 1.0 -N 0)
or
weka_options=(weka.classifiers.functions.SMOreg -C 1.0 -N 0 -I "weka.classifiers.functions.supportVector.RegSMOImproved -L 1.0e-3 -W 1 -P 1.0e-12 -T 0.001 -V" -K "weka.classifiers.functions.supportVector.NormalizedPolyKernel -C 250007 -E 8.0")
(I assume the string weka.classifiers.functions.supportVector.RegSMOImproved -L 1.0e-3 -W 1 -P 1.0e-12 -T 0.001 -V is the argument to the -I flag and that the string weka.classifiers.functions.supportVector.NormalizedPolyKernel -C 250007 -E 8.0 is the argument to the -K flag. If that's not the case then those quotes likely want to get removed also.)
And then when you use the array you need to use "${weka_options[@]}" to get the elements of the array as individual quoted words.
java -Xmx1024m -classpath ".:$WEKAPATH" "${weka_options[#]}" -t "$train_set" -T "$test_set" -c 53 -p 53
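To see why the string form fails, compare how the shell splits each form into words; a quick hypothetical demo with a shortened option string (not from the original scripts):

opts_string='-N 0 -I "sub -T 0.001 -V"'
opts_array=(-N 0 -I "sub -T 0.001 -V")
printf '<%s>\n' $opts_string        # quotes stay literal: <-N> <0> <-I> <"sub> <-T> <0.001> <-V">
printf '<%s>\n' "${opts_array[@]}"  # one word per element: <-N> <0> <-I> <sub -T 0.001 -V>

With the string, Weka receives the sub-option specification chopped into separate, quote-contaminated tokens, which is consistent with the "No suitable converter found for '0.001'" error; with the array, -I receives its whole sub-classifier specification as a single argument.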

Retrieving data from PostgreSQL with a shell script and sending it by mail

I want to retrieve data from my PostgreSQL database daily and send the results by mail. What is the best way to do it, please?
I am thinking about a shell script that executes my SELECT and then sends the result with the mail function, but I do not know how to do the SELECT part.
#!/bin/sh
todayDate=$(date)
nbUsers=mySelect
nbPosts=mySelect
echo "We are $todayDate and there are $nbUsers users and $nbPosts posts" | mail -s "[DAILY]" test@test.com
EDIT (Updated code) :
#!/bin/sh
todayDate=$(date +%y%m%d%H%M%S)
nbUsers=$(su postgres -c "psql -d database -c 'SELECT COUNT(*) FROM table1'")
nbPosts=$(su postgres -c "psql -d database -c 'SELECT COUNT(*) FROM table2'")
echo "We are $todayDate. There are $nbUsers users and $nbPosts posts." | mail -s "[DAILY] Databse" test#test.com
EDIT2 (Updated code)
#!/bin/sh
su - postgres
todayDate=$(date +"%d/%m/%y")
todayTime=$(date +"%T")
nbUsers=$(psql -A -t -d database -c "SELECT COUNT(id) FROM table1;")
nbPosts=$(psql -A -t -d database -c "SELECT COUNT(id) FROM table2;")
echo "We are $todayDate. There are $nbUsers users and $nbPosts posts." | mail -s "[DAILY] Databse" test#test.com
You can retrieve data from a PostgreSQL database using the psql command:
nbUsers=`su postgres -c "psql -d databasename -c 'SELECT * FROM tableName'"`
After that, you can send the output of this command to mail.
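Two notes on the edits above. The su - postgres at the top of EDIT2 does not do what it looks like: it starts a new interactive shell, and the rest of the script runs only after that shell exits, still as the original user, so keep the per-command su postgres -c form, or run the whole script as the postgres user. Putting the pieces together, a minimal sketch of the daily script, using the -A -t flags from EDIT2 so psql returns just the bare count (database, table, and address are the placeholders from the question):

#!/bin/sh
todayDate=$(date +"%d/%m/%y")
# -t suppresses headers/footers, -A removes alignment padding,
# so each command substitution captures only the count itself
nbUsers=$(su postgres -c "psql -A -t -d database -c 'SELECT COUNT(id) FROM table1;'")
nbPosts=$(su postgres -c "psql -A -t -d database -c 'SELECT COUNT(id) FROM table2;'")
echo "We are $todayDate. There are $nbUsers users and $nbPosts posts." | mail -s "[DAILY] Database" test@test.com

For the daily part, schedule the script with cron, for example a root crontab entry like 0 6 * * * /path/to/daily_report.sh (the path is a placeholder).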

How to make a bash script

How can I make a bash script for this? My command is:
mysql -h(sample) -u(sample) -p -e "insert into databasename.tablename select count( * )from information_schema.processlist;"
Truly yours,
Joey
-bash-4.2$ cat > my_bash_script << "EOF"
> #!/bin/bash
>
> mysql -h(sample) -u(sample) -p -e "insert into databasename.tablename select count( * )from information_schema.processlist;"
> EOF
-bash-4.2$ chmod 755 my_bash_script
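Two notes on why that works: quoting the heredoc delimiter (<< "EOF") stops the shell from expanding anything inside it, so the mysql line lands in the file exactly as typed, and chmod 755 makes the file executable so you can run it directly:

./my_bash_script

Also, -p with no password attached makes mysql prompt interactively; if the script is meant to run unattended, supply the credentials another way, for example via a ~/.my.cnf file.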
