Access rejected by local host in freeradius - linux

I am not able to execute the radtest command successfully and I can't figure out what the issue is. I keep getting this error:
rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=82, length=20
Here is the execution:
rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=75, length=20
root@localhost:/etc/freeradius# radtest testing password 127.0.0.1 0 testing123
Sending Access-Request of id 82 to 127.0.0.1 port 1812
User-Name = "testing"
User-Password = "password"
NAS-IP-Address = 127.0.0.1
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=82, length=20
Here is the debug output:
+group authorize {
++[preprocess] = ok
++policy rewrite_calling_station_id {
+++? if (Calling-Station-Id =~ /([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})/i)
(Attribute Calling-Station-Id was not found)
? Evaluating (Calling-Station-Id =~ /([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})/i) -> FALSE
+++? if (Calling-Station-Id =~ /([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})/i) -> FALSE
+++else else {
++++[noop] = noop
+++} # else else = noop
++} # policy rewrite_calling_station_id = noop
[authorized_macs] expand: %{Calling-Station-ID} ->
++[authorized_macs] = noop
++? if (!ok)
? Evaluating !(ok) -> TRUE
++? if (!ok) -> TRUE
++if (!ok) {
+++[reject] = reject
++} # if (!ok) = reject
+} # group authorize = reject
Using Post-Auth-Type Reject
# Executing group from file /etc/freeradius/sites-enabled/default
+group REJECT {
[eap] Request didn't contain an EAP-Message, not inserting EAP-Failure
++[eap] = noop
[attr_filter.access_reject] expand: %{User-Name} -> testing
attr_filter: Matched entry DEFAULT at line 11
++[attr_filter.access_reject] = updated
+} # group REJECT = updated
Delaying reject of request 2 for 1 seconds
Going to the next request
Waking up in 0.9 seconds.
Sending delayed reject for request 2
Sending Access-Reject of id 82 to 127.0.0.1 port 55664
Waking up in 4.9 seconds.
Cleaning up request 2 ID 82 with timestamp +269
Ready to process requests.
This is my users file entry:
testing Cleartext-Password := "password"
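For reference, the trace above is the kind of output the server prints when run in debug mode; a minimal sketch of reproducing it, assuming the Debian/Ubuntu freeradius package layout implied by the /etc/freeradius path:
service freeradius stop
freeradius -X
# in a second terminal, send the same test request:
radtest testing password 127.0.0.1 0 testing123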

Related

issue connecting from sql developer to OracleVM virtualbox over port 1521

Hello, I have used https://oracle-base.com/articles/12c/oracle-db-12cr2-installation-on-oracle-linux-6-and-7 to create an Oracle DB in Oracle VirtualBox, and I am trying to connect to it using SQL Developer on the same desktop. I have already disabled the firewall. Any help would be greatly appreciated.
Here are my tnsnames.ora and listener.ora files:
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/19.3.0.1/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
LISTENER_CDB1 =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-122.localdomain)(PORT = 1521))
CDB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-122.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cdb1)
)
)
[oracle@ol7-122 bin]$ cat /u01/app/oracle/product/19.3.0.1/db_1/network/admin/listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/19.3.0.1/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ol7-122.localdomain)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)
Listener Status
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 20-APR-2021 11:15:43
Copyright (c) 1991, 2016, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-122.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 19-APR-2021 13:17:15
Uptime 0 days 21 hr. 58 min. 27 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/19.3.0.1/db_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/ol7-122/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-122.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ol7-122.localdomain)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/cdb1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "bfcef1c73c025a39e0536a38a8c0e70d" has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "cdb1" has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "cdb1XDB" has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
The command completed successfully
netstat of port 1521 in the Oracle VM:
tcp 0 0 192.168.56.106:27356 192.168.56.106:1521 ESTABLISHED
tcp6 0 0 :::1521 :::* LISTEN
tcp6 0 0 192.168.56.106:1521 192.168.56.106:27356 ESTABLISHED
unix 2 [ ACC ] STREAM LISTENING 51182 /var/tmp/.oracle/sEXTPROC1521
telnet from Windows to the Oracle VM over port 1521:
Connecting To 192.168.56.106...Could not open connection to the host, on port 1521: Connect failed
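As a quick sanity check from inside the VM, an EZConnect-style connection can confirm whether the listener itself accepts TCP connections; a sketch, using the IP from the netstat output and the service names from the listener status (the account and password placeholder are assumptions):
sqlplus system/<password>@//192.168.56.106:1521/pdb1
sqlplus system/<password>@//192.168.56.106:1521/cdb1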

SPNEGO uses wrong KRBTGT principal name

I am trying to enable Kerberos authentication for our website. The idea is to have users logged into a Windows AD domain get automatic login (and initial account creation).
Before I tackle the Windows side of things, I wanted to get it working locally.
So I made a test KDC/KADMIN container using git@github.com:ist-dsi/docker-kerberos.git
The webserver is in a local Docker container with nginx and the spnego module compiled in.
The KDC/KADMIN container is at 172.17.0.2 and accessible from my webserver container.
Here is my local krb.conf:
default_realm = SERVER.LOCAL
[realms]
SERVER.LOCAL = {
kdc_ports = 88,750
kadmind_port = 749
kdc = 172.17.0.2:88
admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
and the krb.conf on the webserver container
[libdefaults]
default_realm = SERVER.LOCAL
default_keytab_name = FILE:/etc/krb5.keytab
ticket_lifetime = 24h
kdc_timesync = 1
ccache_type = 4
forwardable = false
proxiable = false
[realms]
LOCALHOST.LOCAL = {
kdc_ports = 88,750
kadmind_port = 749
kdc = 172.17.0.2:88
admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
Here are the principals and the keytab setup (the keytab is copied to the web container under /etc/krb5.keytab):
rep ~/project * rep_krb_test $ kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2
Authenticating as principal kadmin/admin@SERVER.LOCAL with password.
kadmin: list_principals
K/M@SERVER.LOCAL
kadmin/99caf4af9dc5@SERVER.LOCAL
kadmin/admin@SERVER.LOCAL
kadmin/changepw@SERVER.LOCAL
krbtgt/SERVER.LOCAL@SERVER.LOCAL
noPermissions@SERVER.LOCAL
rep_movsd@SERVER.LOCAL
kadmin: q
rep ~/project * rep_krb_test $ ktutil
ktutil: addent -password -p rep_movsd@SERVER.LOCAL -k 1 -f
Password for rep_movsd@SERVER.LOCAL:
ktutil: wkt krb5.keytab
ktutil: q
rep ~/project * rep_krb_test $ kinit -C -p rep_movsd@SERVER.LOCAL
Password for rep_movsd@SERVER.LOCAL:
rep ~/project * rep_krb_test $ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: rep_movsd@SERVER.LOCAL
Valid starting Expires Service principal
02/07/20 04:27:44 03/07/20 04:27:38 krbtgt/SERVER.LOCAL@SERVER.LOCAL
The relevant nginx config:
server {
location / {
uwsgi_pass django;
include /usr/lib/proj/lib/wsgi/uwsgi_params;
auth_gss on;
auth_gss_realm SERVER.LOCAL;
auth_gss_service_name HTTP;
}
}
Finally, /etc/hosts has:
# use alternate local IP address
127.0.0.2 server.local server
Now I try to access this with curl:
* Trying 127.0.0.2:80...
* Connected to server.local (127.0.0.2) port 80 (#0)
* gss_init_sec_context() failed: Server krbtgt/LOCAL@SERVER.LOCAL not found in Kerberos database.
* Server auth using Negotiate with user ''
> GET / HTTP/1.1
> Host: server.local
> User-Agent: curl/7.71.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
....
As you can see it is trying to use the SPN "krbtgt/LOCAL@SERVER.LOCAL" whereas kinit has "krbtgt/SERVER.LOCAL@SERVER.LOCAL" as the SPN.
How do I get this to work?
Thanks in advance..
So it turns out that I needed
auth_gss_service_name HTTP/server.local;
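In context, the location block from the question then becomes:
location / {
    uwsgi_pass django;
    include /usr/lib/proj/lib/wsgi/uwsgi_params;
    auth_gss on;
    auth_gss_realm SERVER.LOCAL;
    auth_gss_service_name HTTP/server.local;
}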
Some other tips for issues encountered:
Make sure the keytab file is readable by the web server process (user www-data, or whatever user it runs as)
Make sure the keytab principals are in the correct order
Use export KRB5_TRACE=/dev/stderr and curl to test; Kerberos gives a very detailed log of what it's doing and why it fails
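For completeness, a rough sketch of creating the HTTP service principal and keytab with the MIT Kerberos tools and then testing with a trace (the realm, admin credentials and keytab path are taken from the question; the exact kadmin invocation is an assumption about your setup):
kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2 -q "addprinc -randkey HTTP/server.local@SERVER.LOCAL"
kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2 -q "ktadd -k /etc/krb5.keytab HTTP/server.local@SERVER.LOCAL"
# verify SPNEGO end to end with a detailed client-side trace:
KRB5_TRACE=/dev/stderr curl --negotiate -u : http://server.local/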

Error while submitting a spark job using spark-jobserver

I occasionally face the following error while submitting a job. The error goes away if I remove the rootdir of filedao, datadao and sqldao, which means I have to restart the job server and re-upload my jar.
{
"status": "ERROR",
"result": {
"message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/1995aeba-com.spmsoftware.distributed.job.TestJob#-1370794810]] after [10000 ms]. Sender[null] sent message of type \"spark.jobserver.JobManagerActor$StartJob\".",
"errorClass": "akka.pattern.AskTimeoutException",
"stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)", "akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)", "scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)", "scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:331)", "akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:282)", "akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:286)", "akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:238)", "java.lang.Thread.run(Thread.java:745)"]
}
}
My config file is as follows:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
# Spark Cluster / Job Server configuration
spark {
# spark.master will be passed to each job's JobContext
master = <spark_master>
# Default # of CPUs for jobs to use for Spark standalone cluster
job-number-cpus = 4
jobserver {
port = 8090
context-per-jvm = false
context-creation-timeout = 100 s
# Note: JobFileDAO is deprecated from v0.7.0 because of issues in
# production and will be removed in future, now defaults to H2 file.
jobdao = spark.jobserver.io.JobSqlDAO
filedao {
rootdir = /tmp/spark-jobserver/filedao/data
}
datadao {
rootdir = /tmp/spark-jobserver/upload
}
sqldao {
slick-driver = slick.driver.H2Driver
jdbc-driver = org.h2.Driver
rootdir = /tmp/spark-jobserver/sqldao/data
jdbc {
url = "jdbc:h2:file:/tmp/spark-jobserver/sqldao/data/h2-db"
user = ""
password = ""
}
dbcp {
enabled = false
maxactive = 20
maxidle = 10
initialsize = 10
}
}
result-chunk-size = 1m
short-timeout = 60 s
}
context-settings {
num-cpu-cores = 2 # Number of cores to allocate. Required.
memory-per-node = 512m # Executor memory per node, -Xmx style eg 512m, #1G, etc.
}
}
akka {
remote.netty.tcp {
# This controls the maximum message size, including job results, that can be sent
# maximum-frame-size = 200 MiB
}
}
# check the reference.conf in spray-can/src/main/resources for all defined settings
spray.can.server.parsing.max-content-length = 250m
I am using spark-2.0-preview version.
I have faced the same error before and it was related to the timeout. For a synchronous request (sync=true) you must also provide the timeout (in seconds), a value that should reflect how long your request takes to process.
This is an example of how the request should look:
curl -k --basic -d '' 'http://localhost:5050/jobs?appName=app&classPath=Main&context=test-context&sync=true&timeout=40'
If your request needs more than 40 seconds, you may also need to modify the application.conf located at
spark-jobserver-master/job-server/src/main/resources/application.conf
and in the spray.can.server section modify:
idle-timeout = 210 s
request-timeout = 200 s

Expect ignoring pattern matching and not exiting

I'm new to using Expect and it is puzzling me big time. It works perfectly with one pattern, but when the second case comes up it just ignores the exit completely. First, this is my code:
#!/usr/bin/expect
#Usage: migration_test.xpct <ssh_password> <vmname> <num_migrations>
set timest [ timestamp -format %Y-%m-%d_%H-%M ]
set vmname [lindex $argv 1]
log_file migtest_${vmname}_${timest}.log ;
set password [lindex $argv 0]
set num [lindex $argv 2]
set failureMsg "Status: Failure\n\r"
set timeout 60
spawn ssh admin@localhost -p 10000
expect "yes/no" {
send "yes\r"
expect "*?assword" { send "$password\r" }
} "*?assword" { send "$password\r" }
for {set i 0} {$i < $num} {incr i 1} {
expect "OVM> " {
send "show Vm name=$vmname\r"
expect {
$failureMsg { }
-re "Status = Running\n\r" {
exp_continue
}
-re "Server = .*? \\\[(.*?)(1|2)?\\\]\n\r" {
set destserver $expect_out(2,string);
if { $destserver == 1 } {
send_user "\n\nMIGRATION [ expr $i+1 ] of $num\n\n"
send "migrate Vm name=$vmname destServer=serv_prod02\r"
expect {
-re "JobId: (.*?)\n\r" {
set jobid $expect_out(1,string);
send "show Job id=$jobid\r";
expect {
-re "Command:(.*?)\n\r" { send_user "\n\nWaiting 30secs before next migration\n\n";
sleep 30; }
}
}
-re "Status: Failure\n\r" { send_user "\n\nExiting\n"; exit 1 }
}
} else {
send_user "\n\nMIGRATION [expr $i+1] of $num\n\n"
send "migrate Vm name=$vmname destServer=serv_prod01\r"
expect {
-re "JobId: (.*?)\n\r" {
set jobid $expect_out(1,string);
send "show Job id=$jobid\r";
expect {
-re "Command:(.*?)\n\r" { send_user "\n\nWaiting 30secs before next migration\n\n";
sleep 30; }
}
}
-re "Status: Failure\n\r" { send_user "\n\nExiting\n"; exit 1 }
}
}
}
}
}
}
send "exit\r"
expect eof
The problem comes when it reaches the "migrate Vm" section. That's a job I'm sending to a CLI (the Oracle OVM CLI to be precise) and the job can either fail or succeed. I want to print the job details when it succeeds, but finish the entire execution if the job fails (since the failure already shows the reason and I don't have to expand the job details).
Here is how the output of a successful job looks:
MIGRATION 5 of 12
migrate Vm name=slestest_temp_share_vm destServer=serv_prod01
Command: migrate Vm name=slestest_temp_share_vm
destServer=serv_prod01
Status: Success
Time: 2016-04-13 10:45:24,174
JobId: 12345678978
OVM> show Job id=12345678978
Command: show Job id=12345678978
Status: Success Time: 2016-04-13 10:45:24,188
Data:
Run State = Success
Summary State = Success
Done = Yes
Summary Done = Yes
Job Group = No
Username = admin
Creation Time = Apr 13, 2016 10:44:45 am
Start Time = Apr 13, 201 10:44:45 am
End Time = Apr 13, 2016 10:45:23 am
Duration = 37s
Id = 12345678978 [Migrate Vm: slestest_temp_share_vm to Server: serv_prod01]
Name = Migrate Vm: slestest_temp_share_vm to Server:serv_prod01
Description = Migrate Vm: slestest_temp_share_vm to
Server: serv_prod01 Locked = false
OVM>
Waiting 30secs before next migration
And here is how a failed job looks:
MIGRATION 4 of 12
migrate Vm name=slestest_temp_share_vm destServer=serv_prod01
Command: migrate Vm name=slestest_temp_share_vm destServer=serv_prod01
Status: Failure
Time: 2016-04-13 11:31:08,819
JobId: 1460564963372
Error Msg: Job failed on Core: OVMAPI_5001E Job: 1460564963372/Migrate Vm: slestest_temp_share_vm to Server: serv_prod01/Migrate Vm: slestest_temp_share_vm serv_prod01, failed. Job Failure Event: 1460565064570/Server Async Command Failed/OVMEVT_00C014D_001 Async command failed serv_prod02. Object: slestest_temp_share_vm, PID: 1724,
Server error: Command: ['xm', 'migrate', '--live', '0004fb00000600009f354416bab38df6', '8.8.8.1'] failed (1): stderr: Error: ti
stdout: Usage: xm migrate
Migrate a domain to another machine.
Options:
-h, --help Print this help.
-l, --live Use live migration.
-p=portnum, --port=portnum
Use specified port for migration.
-n=nodenum, --node=nodenum
Use specified NUMA node on target.
-s, --ssl Use ssl connection for migration.
-c, --change_home_server
Change home server for managed domains.
, on server: serv_prod02, associated with object: 0004fb00000600009f354416bab38df6 [Wed Apr 13 11:31:04 2016]
Why is the Status: Failure ignored? Also, when that happens it seems to skip an iteration of the loop: if it was on the 5th, it then shows "Migration 7 of 12", for example.
Thanks everyone
I can suggest two things. First, you can rewrite the code to avoid duplication. Second, I think you are matching both \n and \r at the end of the pattern. Try \n alone, or use \n?\r?, which will match zero, one, or both line endings.
-re "Server = .*? \\\[(.*?)(1|2)?\\\]\n" {
set destserver $expect_out(2,string);
send_user "\n\nMIGRATION [ expr $i+1 ] of $num\n\n"
if { $destserver == 1 } {
send "migrate Vm name=$vmname destServer=serv_prod02\r"
} else {
send "migrate Vm name=$vmname destServer=serv_prod01\r"
}
expect {
-re "JobId: (.*?)\n" {
set jobid $expect_out(1,string);
send "show Job id=$jobid\r";
expect {
-re "Command:(.*?)$" {
send_user "\n\nWaiting 30secs before next migration\n\n";
sleep 30;
}
}
}
-re "Status: Failure\n" { send_user "\n\nExiting\n"; exit 1 }
}
}
Well, after some tests I found the problem. It seems I didn't understand how the timeout worked in Expect. Every time a failed migration was performed it exceeded the timeout.
This wasn't evident to me because, although the timeout was exceeded, the script still kept waiting for the answer and printed it anyway; it just no longer checked any of the patterns I was expecting.
The solution was either to use the "timeout" keyword or to set the timeout higher. I did the latter and everything is running fine now.
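For illustration, a minimal sketch of the two options (the timeout value is arbitrary; the patterns mirror the ones in the script above):
set timeout 300
expect {
    timeout { send_user "\n\nTimed out waiting for job status\n"; exit 1 }
    -re "Status: Failure\n" { send_user "\n\nExiting\n"; exit 1 }
    -re "JobId: (.*?)\n" { set jobid $expect_out(1,string) }
}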

CHECK_GEARMAN CRITICAL - function 'BulkEmail' is not registered in the server

I am using Nagios to monitor Gearman and getting the error "CRITICAL - function 'xxx' is not registered in the server".
The script that Nagios executes to check Gearman is as follows:
#!/usr/bin/env perl
# taken from: gearmand-0.24/libgearman-server/server.c:974
# function->function_name, function->job_total,
# function->job_running, function->worker_count);
#
# this code give following result with gearadmin --status
#
# FunctionName job_total job_running worker_count
# AdsUpdateCountersFunction 0 0 4
use strict;
use warnings;
use Nagios::Plugin;
my $VERSION="0.2.1";
my $np;
$np = Nagios::Plugin->new(usage => "Usage: %s -f|--flist <func1[:threshold1],..,funcN[:thresholdN]> [--host|-H <host>] [--port|-p <port>] [ -c|--critworkers=<threshold> ] [ -w|--warnworkers=<threshold>] [-?|--usage] [-V|--version] [-h|--help] [-v|--verbose] [-t|--timeout=<timeout>]",
version => $VERSION,
blurb => 'This plugin checks a gearman job server, expecting that every function in function-list arg is registered by at least one worker, and expecting that job_total is not too much high.',
license => "Brought to you AS IS, WITHOUT WARRANTY, under GPL. (C) Remi Paulmier <remi.paulmier\@gmail.com>",
shortname => "CHECK_GEARMAN",
);
$np->add_arg(spec => 'flist|f=s',
help => q(Check for the functions listed in STRING, separated by comma. If optional threshold is given (separated by :), check that waiting jobs for this particular function are not exceeding that value),
required => 1,
);
$np->add_arg(spec => 'host|H=s',
help => q(Check the host indicated in STRING),
required => 0,
default => 'localhost',
);
$np->add_arg(spec => 'port|p=i',
help => q(Use the TCP port indicated in INTEGER),
required => 0,
default => 4730,
);
$np->add_arg(spec => 'critworkers|c=i',
help => q(Exit with CRITICAL status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 1,
);
$np->add_arg(spec => 'warnworkers|w=i',
help => q(Exit with WARNING status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 4,
);
$np->getopts;
my $ng = $np->opts;
# manage timeout
alarm $ng->timeout;
my $runtime = {'status' => OK,
'message' => "Everything OK",
};
# host & port
my $host = $ng->get('host');
my $port = $ng->get('port');
# verbosity
my $verbose = $ng->get('verbose');
# look for gearadmin, use nc if not found
my @paths = grep { -x "$_/gearadmin" } split /:/, $ENV{PATH};
my $cmd = "gearadmin --status -h $host -p $port";
if (@paths == 0) {
print STDERR "gearadmin not found, using nc\n" if ($verbose != 0);
# $cmd = "echo status | nc -w 1 $host $port";
$cmd = "echo status | nc -i 1 -w 1 $host $port";
}
foreach (`$cmd 2>/dev/null | grep -v '^\\.'`) {
chomp;
my ($fname, $job_total, $job_running, $worker_count) =
split /[[:space:]]+/;
$runtime->{'funcs'}{"$fname"} = {job_total => $job_total,
job_running => $job_running,
worker_count => $worker_count };
# print "$fname : $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
}
# get function list
my @flist = split /,/, $ng->get('flist');
foreach (@flist) {
my ($fname, $fthreshold);
if (/\:/) {
($fname, $fthreshold) = split /:/;
} else {
($fname, $fthreshold) = ($_, -1);
}
# print "defined for $fname: $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
# if (defined($runtime->{'funcs'}{"$fname"})) {
# print "$fname is defined\n";
# } else {
# print "$fname is NOT defined\n";
# }
if (!defined($runtime->{'funcs'}{"$fname"}) &&
$runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL, "function '$fname' is not registered in the server");
} else {
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('critworkers') && $runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL,
"less than " .$ng->get('critworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('warnworkers') && $runtime->{'status'} <= WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
"less than " .$ng->get('warnworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'job_total'} > $fthreshold
&& $fthreshold != -1 && $runtime->{'status'}<=WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
$runtime->{'funcs'}{"$fname"}{'job_total'}.
" jobs for $fname exceeds threshold $fthreshold");
}
}
}
$np->nagios_exit($runtime->{'status'}, $runtime->{'message'});
When the script is executed directly from the command line it says "everything ok".
But in Nagios it shows the error "CRITICAL - function 'xxx' is not registered in the server".
Thanks in advance
After spending a long time on this, I finally got the answer. All I had to do was:
yum install nc
nc was what was missing from the system.
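For example, the check falls back to exactly this raw admin-protocol query when gearadmin is not in PATH, so you can verify nc and the Gearman status by hand (localhost and 4730 are the plugin defaults):
echo status | nc -w 1 localhost 4730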
With Regards,
Bankat Vikhe
It's not easy to say, but it could be related to your script not being executable as embedded Perl.
Try with # nagios: -epn at the beginning of the script.
#!/usr/bin/env perl
# nagios: -epn
use strict;
use warnings;
Be sure to check all the hints in the Perl Plugins section of the Nagios Plugin Development Guidelines
