Need help - Getting an error: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated) - linux

Checking if anybody else has had a similar issue.
Code in the shell script:
## Convert file into Unix format first.
## THIS is IMPORTANT.
#####################
dos2unix "${file}" "${file}";
#####################
## Actual DB Change
db_change_run_op="$(ssh -qn ${db_ssh_user}@${dbserver} "sqlplus $dbuser/${pswd}@${dbname} <<ENDSQL
@${file}
ENDSQL
")";
Summary:
1. From a shell script (on a SunOS source server) I'm running a sqlplus session via ssh on a target machine to run a .sql script.
2. Output of this target ssh session (running sqlplus) is getting stored in a variable within the shell script. Variable name: db_change_run_op (as shown above in the code snippet).
3. For most of the .sql scripts (whose paths the variable "${file}" holds), the shell script runs fine and returns me the output of the .sql file (run on the target server via ssh from the source server), provided the .sql file contains statements that don't take much time to complete or that generate a reasonable number of output log lines.
For example: let's assume the .sql file I want to run does the following; then it runs fine.
select * from database123;
update table....
alter table..
insert ....
...some procedure .... which doesn't take much time to create....
...some more sql commands which complete..within few minutes to an hour....
4. Now, the issue I'm facing is:
Let's assume I have a .sql file where a single select command against a table returns a couple of hundred thousand up to 1-5 million lines, i.e.
select * from database321;
Assume the above produces the situation described in bullet 4.
In this case, I'm getting the following error message thrown by the shell script (running on the source server).
Error:
*./db_change_load.sh: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)*
My questions:
1. Did the .sql script complete? I assume yes. But how can I get the output LOG file of the .sql run generated directly on the target server? If that can be done, I won't need the variable to hold the output of the whole ssh session's sqlplus command and then create a log file on the source server the [ echo "${db_change_run_op}" > sql.${file}.log ] way.
2. I assume the error is coming because the output, i.e. the number of lines generated by the ssh session's sqlplus run, is so big that it can't fit within a Unix/Linux Bash variable's limit, hence the xrealloc error.
Please advise on the above 2 questions if you have any experience with them, or how I can solve this.
I think I'll try using " | tee /path/on.target.ssh.server/sql.${file}.log" right after <<ENDSQL or after the final closing ENDSQL (heredoc keyword); I'm wondering whether that would work or not.

OK, got it working. No more storing the output in a variable and then echoing $var to a file.
Luckily, I had the same mount point on both the source and target server, i.e. if I go to /scm on the source and on the target, df -kvh . shows the same Share/NAS mount:
Filesystem                 size  used  avail  capacity  Mounted on
ServerNAS02:/vol/vol1/scm  700G  560G  140G   81%       /scm
Now, instead of using the variable to store the whole output of the ssh session calling the sqlplus session, all I did was create a file on the remote server using the following code.
## Actual DB Change
#db_change_run_op="$(ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
#set echo off
#set echo on
#set timing on
#set time on
#set serveroutput on size unlimited
#@${file}
#ENDSQL
#")";
ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
It seems like unlimited doesn't work in 11g, so I had to use the 1000000 value (these small set commands help to show each command with its output, show the clock time for each output line, etc.).
But basically, in the above code, I'm calling the ssh command directly without the variable="$(.....)" approach, and right after the <<ENDSQL heredoc the sqlplus output is piped through tee.
Even if I didn't have the same mount, I could have tee'd the output to a file on a remote-server path (one not reachable from the source server); at least I can now see how far the .sql commands have run or how much output they have generated, since the output goes directly to a file on the remote server, and Unix/Linux doesn't care much about file size as long as there is space left.
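As a side note on question 1 above: sqlplus itself can write the run log directly on the target server with its spool command, which avoids relying on the shell-side tee at all. A minimal hedged sketch, reusing the variables from the code above (the remote log path is an assumption):
ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL
spool /path/on/target/sql.run.log
@${file}
spool off
ENDSQL
"
Everything between spool and spool off lands in the named file on the machine where sqlplus runs, regardless of what happens to the ssh session's stdout.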

Unable to output script results with column/table formatting

Answered - previously titled 'Cron job for shell script not running'
I recently downloaded Speedtest onto my Raspberry Pi, and wrote a script to output the results, in CSV format, to a file.
I'm trying to do this regularly via a cron job, but for some reason, it won't execute the shell script as intended.
Here's the script below. I've commented/cut out a lot to try and find the issue.
#!/bin/bash
# Commented-out if statement detects the presence of the data file and creates one if it doesn't exist. I was going to adjust this later to include variables/input options if I wanted to use the script on alternate systems, but commented it out while working on the main issue.
file='/home/User/Documents/speedtestdata.csv'
# have tried this with and without quotes, does not seem to make a difference either way
#HEADERS='/usr/bin/speedtest-cli --csv-header'
SPEEDTEST='/usr/bin/speedtest-cli --csv'
# Used absolute path for the executable
#LOG=/home/User/scripts/testreclog.txt
#DATE=$( date )
# Was using the above to log steps of script running successfully with timestamp, commented out
#if [ ! -f $file ]
#then
# echo "Creating results file">>$LOG
# touch $file
# $HEADERS > $file
#fi
#echo "Running speedtest">>$LOG
$SPEEDTEST >> $file
#echo "Formatting results">>$LOG
#column -s, -t < $file
# this step was used to format the log file neatly
#echo "Time completed ",$DATE>>$LOG
And here's how the crontab currently looks:
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
*/5 * * * * /bin/bash /home/User/scripts/testandrec.sh
# 2> /home/User/scripts/testrecerror.txt
# Was attempting to log errors to this file (on its own new line); nothing was seen, so I commented it out.
#* * * * * /home/User/scripts/testscript.sh test to verify cron works (it does)
I've added my scripts folder to the end of my path, but for some reason it only shows up when I'm using the Pi directly; when I ssh in, the scripts folder is missing from the end of the path.
However, given that I've used absolute paths for everything, I'm not sure why this would be an issue.
First I tested whether a simple cron job would work, so I created testscript.sh, which simply writes 'Test' and a timestamp to a specific file; it used the same shebang and absolute paths, and it functioned as intended.
I have checked systemctl for cron, restarted cron with sudo service cron restart, and made sure a new line is in place in the crontab.
I have tried with and without /bin/bash in the crontab entry; it seemingly hasn't made a difference.
I tried cd /home/User/scripts && ./testandrec.sh but no luck.
I changed the run time to every 5 then every 10 minutes, which has not worked.
I have noticed that when I run the script manually with column -s, -t < $file left in, catting the results file shows it formatted as intended.
However, the next time the cron job should run, the file reverts to CSV with a comma as the delimiter, so clearly something is running.
To confuse matters further, I think the script may be firing once after restarting cron, and then not working when it should run subsequently. When I leave the column line in, this appears to just revert the formatting, but if I comment it out it appears to run a speed test and append the results, but only once. However, I may be wrong about this and haven't managed to reproduce it reliably.
If I instead try 0 * * * * /usr/bin/speedtest-cli --csv >> /home/User/Documents/speedtestdata.csv && column -s, -t < /home/User/Documents/speedtestdata.csv, it appears to perform/append the speed test but does not action the column command.
However, I would much rather tie the process up neatly in a shell script than use the above, which isn't very DRY.
I've looked extensively, but none of the solutions I've found on this site or others have fixed the issue.
Any troubleshooting suggestions/help would be greatly appreciated.
Here you go - the solution is simple:
#!/bin/bash
# Commented-out if statement detects the presence of the data file and creates one if it doesn't exist. I was going to adjust this later to include variables/input options if I wanted to use the script on alternate systems, but commented it out while working on the main issue.
file='/home/User/Documents/speedtestdata.csv'
# have tried this with and without quotes, does not seem to make a difference either way
#HEADERS='/usr/bin/speedtest-cli --csv-header'
SPEEDTEST='/usr/bin/speedtest-cli --csv'
# Used absolute path for the executable
#LOG=/home/User/scripts/testreclog.txt
#DATE=$( date )
# Was using the above to log steps of script running successfully with timestamp, commented out
#if [ ! -f $file ]
#then
# echo "Creating results file">>$LOG
# touch $file
# $HEADERS > $file
#fi
#echo "Running speedtest">>$LOG
$SPEEDTEST | column -s, -t >> $file
Just check the last line ;)
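One hedged aside, prompted by the commented-out redirection in the question's crontab: a redirection only takes effect when it sits on the same line as the command it belongs to, so error logging would look like this (paths taken from the question):
*/5 * * * * /bin/bash /home/User/scripts/testandrec.sh 2>> /home/User/scripts/testrecerror.txt
A redirection left on its own commented line does nothing, which would explain why the error file stayed empty.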

Retrieve underlying file of tee command

References
Full code of what will be discussed here:
https://github.com/djon2003/com.cyberinternauts.linux.backup
The activateLogs question that solved how to log to file and screen: https://stackoverflow.com/a/70792273/214898
Limitation
Just a small reminder from the last question: this script is executed in a limited environment, on a QNAP NAS.
Background
I have a function that activates logging, which now has three modes: SCREEN, DISK, and BOTH. With some help (from the question linked above), I managed to make the BOTH option work. DISK and BOTH use a file descriptor numbered 3: the first points it at a file, the second at stdout.
On exit of my script (using trap), it detects whether any errors were logged and sends them via email.
Code
function sendErrorMailOnExit()
{
    ## If errors happened, then send email
    local isFileDescriptor3Exist=$(command 2>/dev/null >&3 && echo "Y")
    if [ "$isFileDescriptor3Exist" = "Y" ]; then
        local logFile=$(readlink /proc/self/fd/3 | sed s/.log$/.err/)
        local logFileSize=$(stat -c %s "$logFile")
        if [ $logFileSize -gt 0 ]; then
            addLog "N" "Sending error email"
            local logFileName=$(basename "$logFile")
            local logFileContent=$(cat "$logFile")
            sendMail "Y" "QNAP - Backup error" "Error happened on backup. See log file $logFileName\n\nLog error file content:\n$logFileContent"
        fi
    fi
}
trap sendErrorMailOnExit EXIT
trap sendErrorMailOnExit EXIT
Problem
As you can see, this works well when file descriptor #3 points to a file. But with the BOTH option, file descriptor #3 points to stdout and the file is written via tee. Hence my question: how can I get the location of the file tee is writing to?
Why not simply use a variable set by my activateLogs function, you might ask? Because that function relaunches the script in order to capture all the logs emitted before it is called, which is why I need another way to retrieve the error file location.
Possible solutions, but not the best (I hope)
One way would be to pass the file location through a script parameter, but I would prefer not to do that if it can be avoided.
Another would be to create a "fake" file descriptor #4 (probably my best solution up to now) that would always point to the file.
Does anyone have an idea?
I finally opted for the creation of a "fake" file descriptor #4 that does nothing except point to the current log file.
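A minimal sketch of that idea, with names assumed from the snippets above ($logFile holding the log path inside activateLogs; this is not the actual repository code): open fd 4 directly on the file when logging is activated, then recover its path through /proc even while fd 3 goes through tee.
# In activateLogs, right after the log file path is known (sketch):
exec 4>>"$logFile"   # fd 4 does nothing but point straight at the log file
# Later, e.g. in sendErrorMailOnExit:
local logFile=$(readlink /proc/self/fd/4 | sed s/.log$/.err/)
Since fd 4 is never swapped to stdout or routed through tee, readlink /proc/self/fd/4 always resolves to the real log file.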

How to use one environment variable when calling a bash script repeatedly

I have a task to monitor the system against a quota: if the monitored result is over the quota, send a warning email. The monitor program is called once every half hour; once a warning email has been sent, if the next run finds the monitored state unchanged, the same warning email should not be sent again.
To do this, I would like to use an environment variable to store the state of the last monitored result, so that the next run can check it and avoid sending a duplicate email. One solution would be to add or update an export line in .bashrc, but to activate the updated export I would have to start a new bash, which seems unnecessary.
So I would like to ask: is there any way to update the environment variable so that every time the monitor Bash script is called, it gets the freshly updated value?
This is a self-contained solution using a heredoc. At first glance it may seem elaborate and imperfect, but it has its uses: it's resilient, it works well when deploying across more than one machine, it requires no special monitoring of or permissions on external files, and most importantly, there are no unwanted surprises with the environment.
This example uses bash, but it will work with sh if the $thisfile variable is set manually, or in another way.
This example assumes that 20 is already in the script file as mymonitorval, and uses argument $1 as a proof of concept. You would obviously change newvalue="$1" to whatever calculates the quota:
Example usage:
#bash $>./script 10
Value from previous run was 20
Value from this test was 10
Updating value ...
#bash $>./script 10
not doing anything ... result is the same as last time
#bash $>
Script:
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"; thisfile="${DIR}/$(basename "$0")"
read -d '' currentvalue <<'EOF'
mymonitorval=20
EOF
eval "$currentvalue"
function changeval () {
    # Rewrite the mymonitorval= line inside this very script file
    sed -E -i "s/(^mymonitorval=)(.*)/\1$1/g" "$thisfile"
}
newvalue="$1"
if [[ "$newvalue" != "$mymonitorval" ]]; then
    echo "Value from previous run was $mymonitorval"
    echo "Value from this test was $1"
    echo "Updating value ..."
    changeval "$newvalue"
else
    echo "not doing anything ... result is the same as last time"
fi
Explanation:
thisfile= can be set manually to the script location. This example uses the automated solution from here: https://stackoverflow.com/a/246128
read -d '' ... EOF is the heredoc, which is saved into the variable $currentvalue
eval "$currentvalue" in this case is the equivalent of typing mymonitorval=20 into a terminal
function changeval () {...} updates the contents of the heredoc in place (it changes the physical .sh script file)
newvalue="$1" is for the purpose of testing; $newvalue would be determined by whatever part of your script calculates the quota
The if.... block performs one of two alternate sets of actions depending on whether $newvalue is the same as it was last time or not.
Alternatively, store the variable in a different .file and then source <.file> on each run.
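A minimal sketch of that alternative (the state-file path and the helpers compute_quota and send_warning_email are hypothetical, not from the original):
#!/bin/bash
statefile=/var/tmp/monitor.state
[ -f "$statefile" ] && . "$statefile"    # load lastval from the previous run, if any
newval=$(compute_quota)                  # hypothetical: whatever measures the monitored state
if [ "$newval" != "${lastval:-}" ]; then
    send_warning_email "$newval"         # hypothetical mail helper; fires only when the state changes
fi
printf 'lastval=%s\n' "$newval" > "$statefile"    # persist state for the next run
Unlike an export in .bashrc, the sourced file is re-read on every invocation, so each run sees exactly the value the previous run left behind.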

Rollover shell script

Assume a shell script (commands.sh) with a few commands.
I need to write a script which sends the output of the commands executed by commands.sh to a file f1.csv.
If the file size exceeds 1 MB, the output should flow to file f2.csv;
if that file's size exceeds 1 MB in turn, the output should flow to file f3.csv.
If f3.csv exceeds 1 MB, the old f1 should be deleted, a new f1 created,
and the output written to f1 again. This process should go on.
I can write the crontab file; it's just the shell script that is a bit tricky.
I have been experimenting:
#!/usr/bin/env bash
PREFIX="f"
# Maximum size after which you want a new file, in bytes
MAX_SIZE=1048576
LAST_FILE=$(ls "$PREFIX"*.csv 2>/dev/null | tail -1)
# Check if a file exists and, if it does not, create it.
if [[ -z "$LAST_FILE" ]]
then
    LAST_FILE="${PREFIX}1.csv"
    touch "$LAST_FILE"
fi
LAST_FILE_NO=$(echo "$LAST_FILE" | sed s/$PREFIX// | sed s/.csv//)
LAST_FILE_SIZE=$(stat -c %s "$LAST_FILE")
if [ "$LAST_FILE_SIZE" -lt "$MAX_SIZE" ]
then
    /bin/sh ./sam.sh >> "$LAST_FILE"    # sam.sh stands in for commands.sh here
else
    UPCOMING_FILE_NO=$((LAST_FILE_NO+1))
    /bin/sh ./sam.sh >> "$PREFIX$UPCOMING_FILE_NO.csv"
fi
help is appreciated guys.
EDIT: I have got the secondary shell script to work too...
Now, if anyone could help me with resetting after 3 files are done and starting again from f1.
thanks
It sounds like you'd be better off using logrotate, depending on how your script is run. If you are running commands.sh on a cron, you can have logrotate rotate the logs. There is a good guide on logrotate here:
http://linuxers.org/howto/howto-use-logrotate-manage-log-files
If your commands.sh isn't going to be on a cron, meaning it isn't triggered at a regular time interval, you could manually set up log rotation at the beginning of your script. I once had to do something similar, and I found this guide really useful:
http://wazem.blogspot.com/2013/11/simple-bash-log-rotate-function.html
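For the cron case, a hedged sketch of what the logrotate config could look like (the file path and the config location are assumptions); logrotate takes over the size check and the wrap-around that the hand-rolled script reimplements:
# /etc/logrotate.d/commands-output (assumed location)
/path/to/f1.csv {
    size 1M      # rotate once the file passes 1 MB
    rotate 3     # keep at most 3 rotated files, then discard the oldest
    missingok    # don't complain if the file isn't there yet
    notifempty   # skip rotation when the file is empty
}
Note that logrotate uses its own numbering (f1.csv.1, f1.csv.2, ...) rather than the f1/f2/f3 naming, but the keep-three-then-drop-the-oldest behaviour matches what the question describes.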

stdout all at once instead of line by line

I wrote a script that gets load and memory information for a list of servers by ssh'ing to each server. However, since there are around 20 servers, it's not very efficient to wait for the script to finish. That's why I thought it might be interesting to set up a crontab that writes the output of the script to a file, so all I need to do is cat this file whenever I need load and memory information for the 20 servers. However, when I cat this file during execution of the cron job, I get incomplete information, because the output of my script is written to the file line by line instead of all at once at termination. I wonder what needs to be done to make this work...
My crontab:
* * * * * (date;~/bin/RUP_ssh) &> ~/bin/RUP.out
My bash script (RUP_ssh):
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca
done
Thanks,
niefpaarschoenen
You can buffer the output in a temporary file and then output it all at once, like this:
outputbuffer=`mktemp` # Create a new temporary file, usually in /tmp/
trap "rm '$outputbuffer'" EXIT # Remove the temporary file if we exit early.
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca >> "$outputbuffer" # gather info to buffer file
done
cat "$outputbuffer" # print buffer to stdout
# rm "$outputbuffer" # delete temporary file, not necessary when using trap
Assuming there is a string that identifies which host the mem/load data came from, you can update your text file as each result comes in. Assuming each data block is one line long, you could use:
for comp in `cat ~/bin/servers`; do
    output=$( ssh $comp ~/bin/ca )
    # remove old mem/load data for $comp from RUP.out
    sed -i '/'"$comp"'/d' RUP.out   # this assumes that the string "$comp" is
                                    # integrated into the output from ca, and
                                    # not elsewhere
    echo "$output" >> RUP.out
done
This can be adapted depending on the output of ca. There is lots of help on sed across the net.
