I am trying to understand whether, and how, I can avoid a subshell here.
Is this the only way the code can be written, or is there another way?
I tried to use braces { ... }, but then it won't pass ShellCheck and won't run.
is_running_interactively ()
# test if file descriptor 0 = standard input is connected to the terminal
{
[ -t 0 ]
}
is_tput_available ()
# check if tput coloring is available
{
command -v tput > /dev/null 2>&1 &&
tput bold > /dev/null 2>&1 &&
tput setaf 1 > /dev/null 2>&1
}
some_other_function ()
# so far unfinished function
{
# is this a subshell? if so, can I avoid it somehow?
( is_running_interactively && is_tput_available ) || # <-- HERE
{
printf '%b' "${2}"
return
}
...
}
It is a compound-list, and yes those commands are run in a subshell. To avoid it, use curly braces instead of parentheses:
{ is_running_interactively && is_tput_available; } || ...
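Note the ; (or a newline) required before the closing brace; leaving it out is the usual reason the brace-group version fails ShellCheck and won't parse. A minimal sketch of the function with the brace group, where everything after the early return is my own assumption:
some_other_function ()
# print ${2} and bail out if not running interactively or tput is unusable,
# otherwise print ${1} in bold red (this part is assumed, not from the question)
{
{ is_running_interactively && is_tput_available; } || {
printf '%b' "${2}"
return
}
tput bold
tput setaf 1
printf '%b' "${1}"
tput sgr0
}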
When I pass the value of a variable declared in a Jenkins Groovy script to a for loop that runs on a remote server, the value is not retained inside the loop. The strange thing is that I am able to access the same value outside the for loop.
Here is the sample code I am trying to use:
#!/usr/bin/env groovy
def config
def COMMANDS_TO_CHECK='curl grep hello awk tr mkdir bc'
pipeline {
agent {
label "master"
}
stages {
stage ('Validation of commands') {
steps {
script {
sh """
#!/bin/bash
/usr/bin/sshpass -p passwrd ssh user@host << EOF
hostname
echo $COMMANDS_TO_CHECK ---> This is printed
for CURRENT_COMMAND in \$COMMANDS_TO_CHECK
do
echo ${CURRENT_COMMAND} ---> Why This is not printed?
echo \${CURRENT_COMMAND} ----> Why This is not printed?
done
hostname
EOF
exit
"""
}
}
}
}
}
Output
[workspace#3] Running shell script
+ /usr/bin/sshpass -p passwrd ssh user@host
Pseudo-terminal will not be allocated because stdin is not a terminal.
illinsduck01
curl grep hello awk tr mkdir bc
illinsduck01
+ exit
You can define the variable inside the heredoc and wrap the script in sh """ ... """ as below:
#!/usr/bin/env groovy
def config
pipeline {
agent {
label "master"
}
stages {
stage ('Validation of commands') {
steps {
script {
sh """#!/bin/sh
/usr/bin/sshpass -p password ssh username@hostname << EOF
COMMANDS_TO_CHECK="curl grep hello awk tr mkdir bc"
hostname
echo \$COMMANDS_TO_CHECK
for CURRENT_COMMAND in \$COMMANDS_TO_CHECK
do
echo \$CURRENT_COMMAND
which \$CURRENT_COMMAND
status=\$?
if [ \${status} -eq 0 ]
then
echo \${CURRENT_COMMAND} command is OK
else
echo "Failed to find the \${CURRENT_COMMAND} command"
fi
done
hostname
EOF
exit
"""
}
}
}
}
}
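Two levels of expansion are involved here: Groovy interpolates an unescaped $VAR inside a """ ... """ string before the shell ever runs (which is why \$ is needed to get a literal $ through to the shell), and a heredoc with an unquoted delimiter (<< EOF) is expanded once more by the local shell before the text reaches the remote host. A minimal plain-shell sketch of that second level, with a hypothetical host:
#!/bin/sh
# LOCAL_VAR is expanded by the local shell while the heredoc is read,
# because the EOF delimiter is unquoted; \$REMOTE_VAR survives as $REMOTE_VAR
# and is expanded on the remote side instead.
LOCAL_VAR="expanded locally"
ssh user@example.com << EOF
echo "$LOCAL_VAR"
REMOTE_VAR="expanded remotely"
echo "\$REMOTE_VAR"
EOF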
I am trying to reference custom module shortcuts (i.e. use the TypeScript paths mapping feature) for my TypeScript app, with the following config.
Project structure
dist/
src/
lyrics/
... ts files
app/
... ts files
The full project structure is here: github.com/adadgio/npm-lyrics-ts (the dist folder is not committed, of course).
tsconfig.json
{
"compilerOptions": {
"outDir": "dist",
"module": "commonjs",
"target": "es6",
"sourceMap": true,
"moduleResolution": "node",
"experimentalDecorators": true,
"emitDecoratorMetadata": true,
"removeComments": true,
"noImplicitAny": false,
"baseUrl": ".",
"paths": {
"*": ["src/lyrics/*"], // to much here !! but none work
"zutils/*": ["./src/lyrics/*", "src/lyrics/utils/*", "dist/lyrics/utils/*"]
},
"rootDir": "."
},
"exclude": [
"dist",
"node_modules"
],
"include": [
"src/**/*.ts"
]
}
When I run my npm start/compile or watch script, I get no TypeScript errors. The following works (Atom is my IDE):
// string-utils.ts does exist, no IDE error, TypeScript DOES compile
import { StringUtils } from 'zutils/string-utils';
But I then get the following Node.js error:
Error: Cannot find module 'zutils/string-utils'
at Function.Module._resolveFilename (module.js:470:15)
at Function.Module._load (module.js:418:25)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/home/adadgio/WebServer/projects/adadgio/npm-lyrics-ts/src/index.ts:7:1)
at Module._compile (module.js:571:32)
at Module.m._compile (/home/adadgio/WebServer/projects/adadgio/npm-lyrics-ts/node_modules/ts-node/src/index.ts:413:23)
at Module._extensions..js (module.js:580:10)
at Object.require.extensions.(anonymous function) [as .ts] (/home/adadgio/WebServer/projects/adadgio/npm-lyrics-ts/node_modules/ts-node/src/index.ts:416:12)
at Module.load (module.js:488:32)
It looks like the module is being resolved from the node_modules folder. I've read the docs about TypeScript path mapping but I cannot get it to work.
I did a lot of research on this.
I am using Atom, TypeScript and Node.js.
The thing is, when you compile TypeScript, the compiler does resolve the paths (the path to a .ts file to include). However, the final compiled .js files do not get the paths substituted.
The solution:
Compile the .ts files, using paths in tsconfig
Use a script to substitute the path tokens in the final .js files
Run the Node application
So essentially, part of tsconfig will look like this
"baseUrl": "./app",
"paths" : {
"#GameInstance" : ["model/game/GameInstance"],
"#Map" : ["model/game/map/Map"],
"#MapCreator" : ["model/game/map/creator/MapCreator"],
"#GameLoop" : ["model/game/GameLoop"],
"#Point" : ["model/other/math/geometry/Point"],
"#Rectangle" : ["model/other/math/geometry/Rectangle"],
"#Functions" : ["model/other/Functions"]
}
And consider Rectangle.ts file
import { Point } from '#Point';
import { Vector } from '#Vector';
/**
* Represents a rectangle.
* You can query it for collisions or whether rectangles are touching
*/
export class Rectangle {
//more code
Where Rectangle.ts is in
./src/app/model/other/math/geometry/Rectangle/Rectangle.ts
We run
tsc
which will compile all the .ts files we set up. The paths are resolved during compilation; if you get an error, run
tsc --traceResolution > tmp && gedit tmp
and search for the file where the unresolved path is imported. You will be able to see the resolution log.
Then we are left with Rectangle.js (built):
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
var _Point_1 = require("#Point");
//more code
As you can see, '#Point' will not exist at runtime and we cannot run the Node application like this.
For that I created a script that recursively searches the .js files and replaces the tokens, going up to the root and then down to the target path.
There is a requirePaths.data file where you define each token and its path:
'#GameLoop' : "./app/model/game/GameLoop"
'#GameInstance' : "./app/model/game/GameInstance"
"#Map" : "./app/model/game/map/Map"
"#MapCreator" : "./app/model/game/map/creator/MapCreator"
"#Point" : "./app/model/other/math/geometry/Point"
"#Rectangle" : "./app/model/other/math/geometry/Point"
Now, this is NOT universal, it is just my ad-hoc script. Please note that the structure is
src
|-app
| |-model
-build
|-src
|-app
|-model
|-test
Technically, the app/model... structure in src is just copied into build/src.
tsc takes the source from ./src/app and compiles it. The compiled result ends up in ./build/src.
Then there is the substitutePathsInJS.sh script.
It scans the build path, and whenever it finds a token such as #Rectangle, it replaces it (more explanation below). Code:
#!/bin/bash
function testreqLevel()
{
local srcPath="$1"
local replacingIn="$2"
local expectedLevel=$3
getPathLevel "$replacingIn"
local res=$?
if [ ! $res -eq $expectedLevel ]; then
echo "[-] test $srcPath , $replacingIn FAILED. ($res != $expectedLevel)"
fi
}
function assertreqPath()
{
local input="$1"
local expected="$2"
if [ ! "$input" = "$expected" ]; then
echo "[-] test $expected FAILED"
echo "computed: $input"
echo "expected: $expected"
fi
}
function testGetPathLevel()
{
testreqLevel "./build/src" "./build/src/app/model/game/GameObject.js" 4
testreqLevel "./build/src" "./build/src/file.js" 1
testreqLevel "./build/src" "./build/src/app/model/game/GameObject.js" 4
}
function testGetPathToRoot()
{
local path=$(getPathToRoot "./build/src" "./build/src/app/model/game/GameObject.js")
assertreqPath "$path" "../../../../../"
path=$(getPathToRoot "./" "./server.js")
assertreqPath "$path" "./"
path=$(getPathToRoot "./" "./app/model/game/GameInstance.js")
assertreqPath "$path" "../../../"
}
function err()
{
echo "[-] $1"
}
function getPathLevel()
{
#get rid of starting ./
local input=$(echo "$1" | sed "s/^\.\///")
local contains=$(echo "$input" | grep '/')
if [ -z "$contains" ]; then
return 0
fi
#echo "$input"
local slashInput=$(echo "$input" | awk -F/ '{print NF-1}')
return $(($slashInput - 1))
}
#given ROOT, and PATH, returns a path P such as being in directory PATH, from there using P we get to root
#example:
#ROOT=./src
#PATH=./src/model/game/objects/a.js
#returns ../../
function getPathToRoot()
{
local root="$1"
local input="$2"
getPathLevel "$input"
local level=$?
if [ $level -eq 0 ]; then
echo "./"
return 0
fi
for ((i=1; i <= level + 1; i++)); do
echo -n '../'
done
#echo "$root" | sed 's/^\.\///'
}
function parseData()
{
echo "**************"
echo "**************"
local data="$1"
let lineNum=1
while read -r line; do
parseLine "$line" $lineNum
if [ $? -eq 1 ]; then
return 1
fi
let lineNum++
done <<< "$data"
echo 'Parsing ok'
echo "**************"
echo "**************"
return 0
}
function parseLine()
{
if [[ "$1" =~ ^\#.*$ ]] || [[ "$1" =~ ^\ *$ ]]; then
#comment line
return 0
fi
local line=$(echo "$1" | sed "s/\"/'/g")
let lineNum=$2
local QUOTE=\'
local WORD_IN_QUOTES=$QUOTE[^:$QUOTE]*$QUOTE
if [[ "$line" =~ ^\ *$WORD_IN_QUOTES\ *:\ *$WORD_IN_QUOTES\ *$ ]]; then
# valid key : value pair
local key=$(echo "$line" | awk -F: '{print $1}' | sed 's/^ *//g' \
| sed 's/ *$//g' | sed 's/\//\\\//g' | sed "s/'//g" | sed "s/\./\\\./g")
local val=$(echo "$line" | awk -F: '{print $2}' | sed 's/^ *//g' \
| sed 's/ *$//g' | sed "s/'//g")
echo "[+] Found substitution from '$key' : '$val'"
if [ -z "$REPLACEMENT_KEY_VAL" ]; then
REPLACEMENT_KEY_VAL="$key|$val"
else
REPLACEMENT_KEY_VAL="$REPLACEMENT_KEY_VAL;$key|$val"
fi
else
err "Parse error on line $lineNum"
echo "Expecting lines 'token' : 'value'"
return 1
fi
return 0
}
function replaceInFiles()
{
cd "$WHERE_SUBSTITUTE"
echo "substitution root $WHERE_SUBSTITUTE"
local fileList=`find . -type f -name "*.js" | grep -v "$EXCLUDE"`
echo "$fileList"| while read fname; do
export IFS=";"
echo "Replacing in file '$WHERE_SUBSTITUTE$fname'"
for line in $REPLACEMENT_KEY_VAL; do
local key=`echo "$line" | awk -F\| '{print $1}'`
local val=`echo "$line" | awk -F\| '{print $2}'`
local finalPath=$(getPathToRoot "./" "$fname")"$val"
if [ $VERBOSE -eq 1 ]; then
echo -e "\tsubstitute '$key' => '$val'"
#echo -e "\t$finalPath"
echo -e "\treplacing $key -> $finalPath"
fi
#escape final path for sed
#slashes, dots
finalPath=$(echo "$finalPath" | sed 's/\//\\\//g'| sed 's/\./\\\./g')
if [ $VERBOSE -eq 1 ]; then
echo -e '\t\tsed -i.bak '"s/require(\(.\)$key/require(\1$finalPath/g"\ "$fname"
fi
sed -i.bak "s/require(\(.\)$key\(.\))/require(\1$finalPath\2)/g" "$fname"
done
done
return 0
}
function quit()
{
echo "*************************************"
echo "*****SUBSTITUTING PATHS EXITING******"
echo "*************************************"
echo
exit $1
}
#######################################
CURRENTDIR=`dirname "$(realpath $0)"`
WHERE_SUBSTITUTE='./build/src'
REPLACEMENT_KEY_VAL="";
VERBOSE=0
FILE="$CURRENTDIR/requirePaths.data"
EXCLUDE='./app/view'
if [ "$1" = "-t" ]; then
testGetPathLevel
testGetPathToRoot
echo "[+] tests done"
exit 0
fi
if [ "$1" = "-v" ]; then
VERBOSE=1
fi
echo "*************************************"
echo "********SUBSTITUTING PATHS***********"
echo "*************************************"
if [ ! -f "$FILE" ]; then
err "File $FILE does not exist"
quit 1
fi
DATA=`cat "$FILE"`
parseData "$DATA"
if [ $? -eq 1 ]; then
quit 1
fi
replaceInFiles
quit $?
This may seem confusing, but consider an example.
We have the file Rectangle.js.
The script loads a bunch of input tokens from the requirePaths.data file; in this case, let's focus on the line
"#Point" : "./app/model/other/math/geometry/Point"
The script runs from ./src and is given the root directory ./src/build/src.
The script does cd ./src/build/src.
It executes find, which will return
./app/model/other/math/geometry/Rectangle/Rectangle.js
The absolute path of that is
./src/build/src/app/model/other/math/geometry/Rectangle/Rectangle.js
But we do not care about the absolute path now.
The script calculates a path that leads from that file's directory back up to the root.
This will result in something like
./../../../../
With that it gets from the directory
/src/build/app/model/other/math/geometry/Rectangle
to directory
/src/build/app
Then we append the path provided in the data file behind that string:
./../../../.././app/model/other/math/geometry/Point
So the final substitution for the file Rectangle.js (somewhere in the build folder) is:
before
require("#Point")
after
require("./../../../.././app/model/other/math/geometry/Point")
Which is ugly, but we do not care what ends up in the .js files anyway. The main thing is that it works.
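To see the substitution in isolation, here is a minimal stand-alone sketch of the same sed rewrite the script performs, on a hypothetical one-line file and with a hard-coded relative prefix instead of the computed one (I use | as the sed delimiter to avoid escaping the slashes):
#!/bin/sh
# demo.js stands in for a compiled file such as Rectangle.js
printf 'var _Point_1 = require("#Point");\n' > demo.js
KEY='#Point'
FINAL_PATH='./../../../.././app/model/other/math/geometry/Point'
# same capture groups as in substitutePathsInJS.sh: keep the surrounding
# quote characters, replace only the token between them
sed -i.bak "s|require(\(.\)$KEY\(.\))|require(\1$FINAL_PATH\2)|g" demo.js
cat demo.js
# -> var _Point_1 = require("./../../../.././app/model/other/math/geometry/Point");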
Drawbacks
You can NOT combine it with a code monitor. Monitoring the sources, recompiling with tsc on every change, then automatically running the shell path substitution and finally starting Node.js on the resulting .js files is possible, BUT for some reason, when the shell script substitutes the paths, the monitoring software treats that as a code change (no idea why; the build folder is excluded from monitoring) and compiles again. Therefore, you get an infinite loop.
You must compile manually, step by step, or JUST use the monitor for the tsc compilation. When you have written some code, go and run the substitution and test the Node.js functionality.
When adding a new class Food, you must define a token for it (#Food) and the path to the file in two places: tsconfig and the input file for the shell script.
You make the entire compilation process longer. To be honest, tsc takes most of the time anyway, and the bash script is surprisingly not that time consuming...
When implementing tests with mocha, you must again compile step by step and, when finished, run mocha on the final .js files. But you can write scripts for that...
Some people substitute only something like #app or a few directories. The problem with that is that whenever you move a source file around, you have to make a lot of changes...
The good sides
When moving a file around, you change one string (in two places...)
No more relative paths that make a big project impossible to maintain
It is interesting, but it really works, and I did not encounter major problems (when used properly)
I have a runit service that I use to run a Rails app under Unicorn.
Its restart command uses a signal (USR2) to handle a zero-downtime restart. Basically, it waits until the new process is ready before the old ones die.
This causes a very long (40 second) restart, during which service myservice restart doesn't return until the end.
While I can give runit a longer timeout (which I already do), I want to make this restart a fire-and-forget kind of action so it returns instantly (or right after the USR2 signal is fired, but without waiting for the restart to complete).
The entire logic is taken from multiple blog posts about zero-downtime rails deployments with unicorn restarts:
https://gist.github.com/czarneckid/4639793
https://gist.github.com/JeanMertz/8996796
https://nulogy.com/who-we-are/company-blog/articles/zero-downtime-deployments-with-chef-nginx-and-unicorn/
This is the runit script (generated by chef):
#!/bin/bash
#
# This file is managed by Chef, using the <%= node.name %> cookbook.
# Editing this file by hand is highly discouraged!
#
exec 2>&1
#
# Since unicorn creates a new pid on restart/reload, it needs a little extra
# love to manage with runit. Instead of managing unicorn directly, we simply
# trap signal calls to the service and redirect them to unicorn directly.
#
RUNIT_PID=$$
APPLICATION_NAME=<%= @options[:application_name] %>
APPLICATION_PATH=<%= File.join(@options[:path], 'current') %>
BUNDLE_CMD="<%= @options[:bundle_command] ? "#{@options[:bundle_command]} exec" : '' %>"
UNICORN_CMD=<%= @options[:unicorn_command] ? @options[:unicorn_command] : 'unicorn' %>
UNICORN_CONF=<%= @options[:unicorn_config_path] ? @options[:unicorn_config_path] : File.join(@options[:path], 'current', 'config', 'unicorn.rb') %>
RAILS_ENV=<%= @options[:rails_env] %>
CUR_PID_FILE=<%= @options['pid'] ? @options['pid'] : File.join(@options[:path], 'current', 'shared', 'pids', "#{@options[:application_name]}.pid") %>
ENV_PATH=<%= @options[:env_dir] %>
OLD_PID_FILE=$CUR_PID_FILE.oldbin
echo "Runit service restarted (PID: $RUNIT_PID)"
function is_unicorn_alive {
set +e
if [ -n $1 ] && kill -0 $1 >/dev/null 2>&1; then
echo "yes"
fi
set -e
}
if [ -e $OLD_PID_FILE ]; then
OLD_PID=$(cat $OLD_PID_FILE)
echo "Old master detected (PID: $OLD_PID), waiting for it to quit"
while [ -n "$(is_unicorn_alive $OLD_PID)" ]; do
sleep 5
done
fi
if [ -e $CUR_PID_FILE ]; then
CUR_PID=$(cat $CUR_PID_FILE)
if [ -n "$(is_unicorn_alive $CUR_PID)" ]; then
echo "Detected running Unicorn instance (PID: $CUR_PID)"
RUNNING=true
fi
fi
function start {
unset ACTION
if [ $RUNNING ]; then
restart
else
echo 'Starting new unicorn instance'
cd $APPLICATION_PATH
exec chpst -e $ENV_PATH $BUNDLE_CMD $UNICORN_CMD -c $UNICORN_CONF -E $RAILS_ENV
sleep 3
CUR_PID=$(cat $CUR_PID_FILE)
fi
}
function stop {
unset ACTION
echo 'Initializing graceful shutdown'
kill -QUIT $CUR_PID
while [ -n "$(is_unicorn_alive $CUR_PID)" ]; do
echo '.'
sleep 2
done
echo 'Unicorn stopped, exiting Runit process'
kill -9 $RUNIT_PID
}
function restart {
unset ACTION
echo "Restart request captured, swapping old master (PID: $CUR_PID) for new master with USR2"
kill -USR2 $CUR_PID
sleep 2
echo 'Restarting Runit service to capture new master PID'
exit
}
function alarm {
unset ACTION
echo 'Unicorn process interrupted'
}
trap 'ACTION=stop' STOP TERM KILL
trap 'ACTION=restart' QUIT USR2 INT
trap 'ACTION=alarm' ALRM
[ $RUNNING ] || ACTION=start
if [ $ACTION ]; then
echo "Performing \"$ACTION\" action and going into sleep mode until new signal captured"
elif [ $RUNNING ]; then
echo "Going into sleep mode until new signal captured"
fi
if [ $ACTION ] || [ $RUNNING ]; then
while true; do
[ "$ACTION" == 'start' ] && start
[ "$ACTION" == 'stop' ] && stop
[ "$ACTION" == 'restart' ] && restart
[ "$ACTION" == 'alarm' ] && alarm
sleep 2
done
fi
This is a super weird way to use runit. Move your reload logic into the service's control/h script and use sv hup (or, since it doesn't seem to be anything more than sending USR2, sv 2). The main run script shouldn't be involved.
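A minimal sketch of such a control script (the pid file path is a placeholder; runsv runs an executable control/h instead of sending HUP itself, and if it exits 0 no signal is sent to the run process at all):
#!/bin/sh
# <service>/control/h -- executed by runsv on "sv hup myservice"
# adjust the pid file path to whatever your unicorn.rb writes; chmod +x this file
CUR_PID_FILE=/path/to/app/shared/pids/myservice.pid
# fire the zero-downtime swap and return immediately
kill -USR2 "$(cat "$CUR_PID_FILE")"
exit 0
sv hup myservice then returns right away, while the actual master swap continues in the background.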
I am using Nagios to monitor Gearman and I am getting the error "CRITICAL - function 'xxx' is not registered in the server".
The script that Nagios executes to check Gearman is:
#!/usr/bin/env perl
# taken from: gearmand-0.24/libgearman-server/server.c:974
# function->function_name, function->job_total,
# function->job_running, function->worker_count);
#
# this code give following result with gearadmin --status
#
# FunctionName job_total job_running worker_count
# AdsUpdateCountersFunction 0 0 4
use strict;
use warnings;
use Nagios::Plugin;
my $VERSION="0.2.1";
my $np;
$np = Nagios::Plugin->new(usage => "Usage: %s -f|--flist <func1[:threshold1],..,funcN[:thresholdN]> [--host|-H <host>] [--port|-p <port>] [ -c|--critworkers=<threshold> ] [ -w|--warnworkers=<threshold>] [-?|--usage] [-V|--version] [-h|--help] [-v|--verbose] [-t|--timeout=<timeout>]",
version => $VERSION,
blurb => 'This plugin checks a gearman job server, expecting that every function in function-list arg is registered by at least one worker, and expecting that job_total is not too much high.',
license => "Brought to you AS IS, WITHOUT WARRANTY, under GPL. (C) Remi Paulmier <remi.paulmier\@gmail.com>",
shortname => "CHECK_GEARMAN",
);
$np->add_arg(spec => 'flist|f=s',
help => q(Check for the functions listed in STRING, separated by comma. If optional threshold is given (separated by :), check that waiting jobs for this particular function are not exceeding that value),
required => 1,
);
$np->add_arg(spec => 'host|H=s',
help => q(Check the host indicated in STRING),
required => 0,
default => 'localhost',
);
$np->add_arg(spec => 'port|p=i',
help => q(Use the TCP port indicated in INTEGER),
required => 0,
default => 4730,
);
$np->add_arg(spec => 'critworkers|c=i',
help => q(Exit with CRITICAL status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 1,
);
$np->add_arg(spec => 'warnworkers|w=i',
help => q(Exit with WARNING status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 4,
);
$np->getopts;
my $ng = $np->opts;
# manage timeout
alarm $ng->timeout;
my $runtime = {'status' => OK,
'message' => "Everything OK",
};
# host & port
my $host = $ng->get('host');
my $port = $ng->get('port');
# verbosity
my $verbose = $ng->get('verbose');
# look for gearadmin, use nc if not found
my @paths = grep { -x "$_/gearadmin" } split /:/, $ENV{PATH};
my $cmd = "gearadmin --status -h $host -p $port";
if (@paths == 0) {
print STDERR "gearadmin not found, using nc\n" if ($verbose != 0);
# $cmd = "echo status | nc -w 1 $host $port";
$cmd = "echo status | nc -i 1 -w 1 $host $port";
}
foreach (`$cmd 2>/dev/null | grep -v '^\\.'`) {
chomp;
my ($fname, $job_total, $job_running, $worker_count) =
split /[[:space:]]+/;
$runtime->{'funcs'}{"$fname"} = {job_total => $job_total,
job_running => $job_running,
worker_count => $worker_count };
# print "$fname : $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
}
# get function list
my @flist = split /,/, $ng->get('flist');
foreach (@flist) {
my ($fname, $fthreshold);
if (/\:/) {
($fname, $fthreshold) = split /:/;
} else {
($fname, $fthreshold) = ($_, -1);
}
# print "defined for $fname: $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
# if (defined($runtime->{'funcs'}{"$fname"})) {
# print "$fname is defined\n";
# } else {
# print "$fname is NOT defined\n";
# }
if (!defined($runtime->{'funcs'}{"$fname"}) &&
$runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL, "function '$fname' is not registered in the server");
} else {
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('critworkers') && $runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL,
"less than " .$ng->get('critworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('warnworkers') && $runtime->{'status'} <= WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
"less than " .$ng->get('warnworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'job_total'} > $fthreshold
&& $fthreshold != -1 && $runtime->{'status'}<=WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
$runtime->{'funcs'}{"$fname"}{'job_total'}.
" jobs for $fname exceeds threshold $fthreshold");
}
}
}
$np->nagios_exit($runtime->{'status'}, $runtime->{'message'});
When the script is executed directly from the command line, it says "everything OK".
But in Nagios it shows the error "CRITICAL - function 'xxx' is not registered in the server".
Thanks in advance
After spending a long time on this, I finally got the answer. All you have to do is:
yum install nc
nc is what was missing from the system.
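To verify the fix, a quick sketch using the plugin's default host and port (the nc fallback is only used when gearadmin is not on the PATH):
#!/bin/sh
# check that nc is installed and that gearmand answers the admin "status"
# command -- the same query the plugin falls back to when gearadmin is missing
command -v nc >/dev/null 2>&1 || { echo "nc is not installed"; exit 1; }
echo status | nc -w 1 localhost 4730
# expected: one "<function> <job_total> <job_running> <worker_count>" line per
# registered function, terminated by a line containing a single "."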
With Regards,
Bankat Vikhe
It's not easy to say, but it could be related to your script not working when executed as embedded Perl.
Try with # nagios: -epn at the beginning of the script.
#!/usr/bin/env perl
# nagios: -epn
use strict;
use warnings;
Be sure to check all the hints in the Perl Plugins section of the Nagios Plugin Development Guidelines
I'm writing unit tests for a bash script and want to check whether the script quits on error (I use exit with an error code).
Is there any way to catch the exit (I know about trap) without interrupting the command flow (something like exception catching)?
My test script:
do_smth1 && echo OK || echo Fail
do_smth2 && echo OK || echo Fail
do_smth3 && echo OK || echo Fail
my main script:
do_smth1(){
...
...
[ $? -eq 0 ] && success || error_exit
}
and so on.
I'd like to execute all the tests one after another. Right now the flow is interrupted after the first command.
You can write:
success=0
error_code=1
do_smth1(){
...
...
[ $? -eq 0 ] && return $success || return $error_code
}
do_smth1 && echo OK || echo Fail
do_smth2 && echo OK || echo Fail
do_smth3 && echo OK || echo Fail
Refer to the man pages for exit and return to know when to use which.
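If you cannot change the functions to return instead of exit, another option (my own sketch, not part of the answer above) is to invoke each one in a subshell from the test script; an exit then only terminates the subshell, so the remaining tests still run:
#!/bin/bash
# hypothetical function that would otherwise kill the whole test shell
do_smth1() {
false || exit 1
}
# the parentheses run do_smth1 in a subshell, so its exit only ends the subshell
( do_smth1 ); status=$?
[ "$status" -eq 0 ] && echo OK || echo "Fail (exit code $status)"
echo "test script is still running"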