Perl for SNMP V3 Not Working, but works with SNMP V1/2 (Redhat Linux) - linux

I have a Perl script that registers SNMP OIDs. With SNMP v1/2c it is able to register all OIDs successfully; with SNMP v3, however, it only partially works.
As you can see below, with SNMP v3 it registers "$root_OID.0.0.0" successfully, but it times out when it tries to invoke the Java code for "$root_OID.0.0.1".
Does anyone know why I'm able to make a successful Java call with SNMP v1/2c but not with SNMP v3?
Many Thanks
Here is my Perl script:
#!/usr/bin/perl
use NetSNMP::OID (':all');
use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER);
use NetSNMP::agent (':all');
sub myhandler {
    my ($handler, $registration_info, $request_info, $requests) = @_;
    my $request;

    my $root_OID     = ".1.3.6.1.4.1.8072.9999.9999.0";
    my $CLASSPATH    = "/opt/BPL/JBoss/BPL_JBossJMX.jar:/opt/jboss-5.1/client/*";
    my $CLASSNAME    = "com.XXXXX.XXXXX.XXXXX.jmx.BPLJbossJMX_For_SNMP";
    my $ENV          = "localhost";
    my $PORT         = "8099";
    my $LOG4JFILELOC = "/opt/BPL/JBoss/JBoss-BPL-Log4j.xml";

    for ($request = $requests; $request; $request = $request->next()) {
        my $oid = $request->getOID();
        if ($request_info->getMode() == MODE_GETNEXT) {
            if ($oid < new NetSNMP::OID("$root_OID.0.0.0")) {
                my $INPUTSTRNAME = "HeapMemoryUsageZZZZZ";
                $request->setOID("$root_OID.0.0.0");
                $request->setValue(ASN_OCTET_STR, $INPUTSTRNAME);
            } elsif ($oid < new NetSNMP::OID("$root_OID.0.0.1")) {
                my $INPUTSTRNAME = "HeapMemoryUsage";
                my $OUTPUT = `java -cp $CLASSPATH $CLASSNAME $ENV $PORT $INPUTSTRNAME $LOG4JFILELOC`;
                chomp($OUTPUT);
                $request->setOID("$root_OID.0.0.1");
                $request->setValue(ASN_INTEGER, $OUTPUT);
            }
        }
    }
}
my $rootOID = ".1.3.6.1.4.1.8072.9999.9999.0";
my $regoid = new NetSNMP::OID($rootOID);
$agent->register("BPL-JBoss", $regoid, \&myhandler);
Here is my /etc/snmp/snmpd.conf file (SNMP V1/2c disabled):
###############################################################################
# snmpd.conf:
###############################################################################
#com2sec notConfigUser default public
# groupName securityModel securityName
#group notConfigGroup v1 notConfigUser
#group notConfigGroup v2c notConfigUser
view systemview included .1.3.6.1.4.1.8072.1.3.2
view systemview included .1.3.6.1.2.1
view systemview included .1.3.6.1.2.1.25.1.1
view systemview included .1.3.6.1.4.1.2021
view systemview included .1.3.6.1.4.1.8072.9999.9999
#access notConfigGroup "" any noauth exact systemview none none
###############################################################################
syslocation Unknown (edit /etc/snmp/snmpd.conf)
syscontact Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
###############################################################################
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
###############################################################################
perl do "/home/XXXXXXX/JBoss_hello_world.pl"
rouser TEST_USERNAME priv
Here are the results of my snmpwalk when using SNMP v3.
-$snmpwalk -v 3 -l authPriv -a sha -A TEST_PASSWORD -x AES -X TEST_PASSWORD -u TEST_USERNAME localhost .1.3.6.1.4.1.8072.9999.9999
NET-SNMP-MIB::netSnmpPlaypen.0.0.0.0 = STRING: "HeapMemoryUsageZZZZZ"
Timeout: No Response from localhost
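One thing worth checking is whether the Java call is simply slower than the client-side SNMP timeout: the same walk can be repeated with a longer timeout and no retries (the standard Net-SNMP -t and -r options), for example:
snmpwalk -v 3 -l authPriv -a sha -A TEST_PASSWORD -x AES -X TEST_PASSWORD -u TEST_USERNAME -t 30 -r 0 localhost .1.3.6.1.4.1.8072.9999.9999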

Related

KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host. I'm using Terraform to create some virtual servers using KVM provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server is created successfully, and I can ping it from the host on which I ran the Terraform script. However, I cannot log in through SSH, despite the fact that I pass my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reason why I cannot log in? Any way to debug? I cannot log in to the server to debug it. I tried setting the password in the cloud-config file, but that does not work either.
*** Additional information
1) the rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
I also faced the same problem, because I was missing the first line
#cloud-config
in the cloudinit.cfg file.
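For reference, a user-data file that cloud-init will actually process has to start with that header; a rough sketch (the key below is a placeholder) looks like this:
#cloud-config
users:
  - name: ubuntu
    ssh-authorized-keys:
      - ssh-rsa AAAA... homelab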
You need to add a libvirt_cloudinit_disk resource to add the ssh key to the VM.
Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  count = length(var.hostname)
  name  = "${var.hostname[count.index]}-commoninit.iso"
  #name = "${var.hostname}-commoninit.iso"
  # pool = "default"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config.rendered
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without the double quotes!
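Applied to the resource from the question, that change would look roughly like this:
resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "default"
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
}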

Deploying a single bash script with nixops

I'm just starting to learn Nix / NixOS / NixOps. I need to install a simple bash script on a remote host with NixOps, and I cannot figure out how to do it. I have two files:
just-deploy-bash-script.nix
{
  resources.sshKeyPairs.ssh-key = {};

  test-host = { config, lib, pkgs, ... }: {
    deployment.targetEnv = "digitalOcean";
    deployment.digitalOcean.region = "sgp1";
    deployment.digitalOcean.size = "s-2vcpu-4gb";

    environment.systemPackages =
      let
        my-package = pkgs.callPackage ./my-package.nix { inherit pkgs; };
      in [
        pkgs.tmux
        my-package
      ];
  };
}
my-package.nix
{ pkgs ? import <nixpkgs> {}, ... }:
let
  pname = "my-package";
  version = "1.0.0";
  stdenv = pkgs.stdenv;
in
stdenv.mkDerivation {
  inherit pname version;
  src = ./.;

  installPhase =
    let
      script = pkgs.writeShellScriptBin "my-test" ''
        echo This is my test script
      '';
    in
    ''
      mkdir $out;
      cp -r ${script} $out/
    '';
}
I deploy as follows. I go to the directory in which these two files are located and then sequentially execute two commands:
nixops create -d test just-deploy-bash-script.nix
nixops deploy -d test
Deployment passes without errors and completes successfully. But when I log in to the newly created remote host, I find that the tmux package from the standard set is present on the system, and my-package is absent:
nixops ssh -d test test-host
[root@test-host:~]# which tmux
/run/current-system/sw/bin/tmux
[root@test-host:~]# find /nix/store/ -iname tmux
/nix/store/hd1sgvb4pcllxj69gy3qa9qsns68arda-nixpkgs-20.03pre206749.5a3c1eda46e/nixpkgs/pkgs/tools/misc/tmux
/nix/store/609zdpfi5kpz2c7mbjcqjmpb4sd2y3j4-ncurses-6.0-20170902/share/terminfo/t/tmux
/nix/store/4cxkil2r3dzcf5x2phgwzbxwyvlk6i9k-system-path/share/bash-completion/completions/tmux
/nix/store/4cxkil2r3dzcf5x2phgwzbxwyvlk6i9k-system-path/bin/tmux
/nix/store/606ni2d9614sxkhnnnhr71zqphdam6jc-system-path/share/bash-completion/completions/tmux
/nix/store/606ni2d9614sxkhnnnhr71zqphdam6jc-system-path/bin/tmux
/nix/store/ddlx3x8xhaaj78xr0zasxhiy2m564m2s-nixos-17.09.3269.14f9ee66e63/nixos/pkgs/tools/misc/tmux
/nix/store/kvia4rwy9y4wis4v2kb9y758gj071p5v-ncurses-6.1-20190112/share/terminfo/t/tmux
/nix/store/c3m8qvmn2yxkgpfajjxbcnsgfrcinppl-tmux-2.9a/share/bash-completion/completions/tmux
/nix/store/c3m8qvmn2yxkgpfajjxbcnsgfrcinppl-tmux-2.9a/bin/tmux
[root@test-host:~]# which my-test
which: no my-test in (/root/bin:/run/wrappers/bin:/root/.nix-profile/bin:/etc/profiles/per-user/root/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin)
[root@test-host:~]# find /nix/store/ -iname *my-test*
[root@test-host:~]#
Help me figure out what's wrong with my scripts. Any links to documentation or examples of the implementation of such a task are welcome.
The shell cannot find your script because it is copied into the wrong directory.
This becomes apparent after building my-package.nix:
$ nix-build my-package.nix
$ ls result/
zh5bxljvpmda4mi4x0fviyavsa3r12cx-my-test
Here you can see the basename of one store path inside another store path. This is caused by the line:
cp -r ${script} $out/
Changing it to something like this should fix that problem:
cp -r ${script}/* $out/
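With that change, the installPhase from the question would read roughly as follows (the rest of my-package.nix is unchanged):
installPhase =
  let
    script = pkgs.writeShellScriptBin "my-test" ''
      echo This is my test script
    '';
  in
  ''
    mkdir $out
    cp -r ${script}/* $out/
  '';
writeShellScriptBin places the script at bin/my-test inside its own store path, so copying that path's contents yields $out/bin/my-test, which is what environment.systemPackages expects to find on PATH.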

Given an array of hostnames, how can I generate a set of files based on those hostnames in puppet

I am not sure there is even a way to do this in Puppet, but here is what I am trying to do.
Given this skeletal puppet class definition ...
class make_files (
$rabbit_servers = ['rabbit-1','rabbit-2'],
$mongo_servers = ['mongo-1','mongo-2'],
) {
...
}
... generate the files ...
# pwd
/root/conf/hosts
# more rabbit-*
::::::::::::::
rabbit-1.cfg
::::::::::::::
define host {
use linux-server
host_name rabbit-1
alias Rabbit MQ host
hostgroups rabbit_hosts
address 10.29.103.33
}
::::::::::::::
rabbit-2.cfg
::::::::::::::
define host {
use linux-server
host_name rabbit-2
alias Rabbit MQ host
hostgroups rabbit_hosts
address 10.29.103.34
}
# more mongo-*
::::::::::::::
mongo-1.cfg
::::::::::::::
define host {
use linux-server
host_name mongo-1
alias Mongo DB host
hostgroups mongo_hosts
address 10.29.103.31
}
::::::::::::::
mongo-2.cfg
::::::::::::::
define host {
use linux-server
host_name mongo-2
alias Mongo DB host
hostgroups mongo_hosts
address 10.29.103.32
}
Where the IP addresses are the IP address of the corresponding host.
Any help is much appreciated.
If the names belong to nodes that are managed by Puppet, you can fetch the addresses from PuppetDB directly using the puppetdbquery module
$ipaddress = query_facts("hostname=$host_name", ['ipaddress'])['ipaddress']
The code is untested, I'm not sure whether the invocation is correct.
I have found a solution ...
class make_files (
  $rabbit_servers = ['rabbit-1:10.29.103.33','rabbit-2:10.29.103.34'],
  $mongo_servers  = ['ost-mongo-el7-001:10.29.103.31'],
) {
  define my_file ($content, $hostgroup) {
    $tuple     = split($name, ':')
    $host_name = $tuple[0]
    $file_name = "/tmp/$host_name.cfg"
    $ipaddress = $tuple[1]
    $config    = "define host {
use linux-server
host_name $host_name
alias $host_name
hostgroups $hostgroup
address $ipaddress
}"
    file { $file_name:
      ensure  => file,
      content => $config,
    }
  }

  $rabbit_content = join($rabbit_servers, ',')
  my_file { $rabbit_servers: content => $rabbit_content, hostgroup => 'rabbit_hosts' }

  $mongo_content = join($mongo_servers, ',')
  my_file { $mongo_servers: content => $mongo_content, hostgroup => 'mongo_hosts' }
}
... and I found that I needed to change the content of the files slightly.
This is probably not the best answer but it seems to work. I am open to suggested improvements.
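For completeness, a declaration of the class with the host:IP pairs as parameters (the same values as the defaults above) would look like this:
class { 'make_files':
  rabbit_servers => ['rabbit-1:10.29.103.33', 'rabbit-2:10.29.103.34'],
  mongo_servers  => ['ost-mongo-el7-001:10.29.103.31'],
}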
Thanks

CHECK_GEARMAN CRITICAL - function 'BulkEmail' is not registered in the server

I am using Nagios to monitor Gearman and am getting the error "CRITICAL - function 'xxx' is not registered in the server".
The script that Nagios executes to check Gearman is:
#!/usr/bin/env perl
# taken from: gearmand-0.24/libgearman-server/server.c:974
# function->function_name, function->job_total,
# function->job_running, function->worker_count);
#
# this code give following result with gearadmin --status
#
# FunctionName job_total job_running worker_count
# AdsUpdateCountersFunction 0 0 4
use strict;
use warnings;
use Nagios::Plugin;
my $VERSION="0.2.1";
my $np;
$np = Nagios::Plugin->new(usage => "Usage: %s -f|--flist <func1[:threshold1],..,funcN[:thresholdN]> [--host|-H <host>] [--port|-p <port>] [ -c|--critworkers=<threshold> ] [ -w|--warnworkers=<threshold>] [-?|--usage] [-V|--version] [-h|--help] [-v|--verbose] [-t|--timeout=<timeout>]",
version => $VERSION,
blurb => 'This plugin checks a gearman job server, expecting that every function in function-list arg is registered by at least one worker, and expecting that job_total is not too much high.',
license => "Brought to you AS IS, WITHOUT WARRANTY, under GPL. (C) Remi Paulmier <remi.paulmier\#gmail.com>",
shortname => "CHECK_GEARMAN",
);
$np->add_arg(spec => 'flist|f=s',
help => q(Check for the functions listed in STRING, separated by comma. If optional threshold is given (separated by :), check that waiting jobs for this particular function are not exceeding that value),
required => 1,
);
$np->add_arg(spec => 'host|H=s',
help => q(Check the host indicated in STRING),
required => 0,
default => 'localhost',
);
$np->add_arg(spec => 'port|p=i',
help => q(Use the TCP port indicated in INTEGER),
required => 0,
default => 4730,
);
$np->add_arg(spec => 'critworkers|c=i',
help => q(Exit with CRITICAL status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 1,
);
$np->add_arg(spec => 'warnworkers|w=i',
help => q(Exit with WARNING status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 4,
);
$np->getopts;
my $ng = $np->opts;
# manage timeout
alarm $ng->timeout;
my $runtime = {'status' => OK,
'message' => "Everything OK",
};
# host & port
my $host = $ng->get('host');
my $port = $ng->get('port');
# verbosity
my $verbose = $ng->get('verbose');

# look for gearadmin, use nc if not found
my @paths = grep { -x "$_/gearadmin" } split /:/, $ENV{PATH};
my $cmd = "gearadmin --status -h $host -p $port";
if (@paths == 0) {
print STDERR "gearadmin not found, using nc\n" if ($verbose != 0);
# $cmd = "echo status | nc -w 1 $host $port";
$cmd = "echo status | nc -i 1 -w 1 $host $port";
}
foreach (`$cmd 2>/dev/null | grep -v '^\\.'`) {
chomp;
my ($fname, $job_total, $job_running, $worker_count) =
split /[[:space:]]+/;
$runtime->{'funcs'}{"$fname"} = {job_total => $job_total,
job_running => $job_running,
worker_count => $worker_count };
# print "$fname : $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
}
# get function list
my @flist = split /,/, $ng->get('flist');
foreach (@flist) {
my ($fname, $fthreshold);
if (/\:/) {
($fname, $fthreshold) = split /:/;
} else {
($fname, $fthreshold) = ($_, -1);
}
# print "defined for $fname: $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
# if (defined($runtime->{'funcs'}{"$fname"})) {
# print "$fname is defined\n";
# } else {
# print "$fname is NOT defined\n";
# }
if (!defined($runtime->{'funcs'}{"$fname"}) &&
$runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL, "function '$fname' is not registered in the server");
} else {
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('critworkers') && $runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL,
"less than " .$ng->get('critworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('warnworkers') && $runtime->{'status'} <= WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
"less than " .$ng->get('warnworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'job_total'} > $fthreshold
&& $fthreshold != -1 && $runtime->{'status'}<=WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
$runtime->{'funcs'}{"$fname"}{'job_total'}.
" jobs for $fname exceeds threshold $fthreshold");
}
}
}
$np->nagios_exit($runtime->{'status'}, $runtime->{'message'});
When the script is executed directly from the command line it says "everything OK", but in Nagios it shows the error "CRITICAL - function 'xxx' is not registered in the server".
Thanks in advance
After spending a long time on this, I finally got the answer. All you have to do is:
yum install nc
nc is what was missing from the system.
With regards,
Bankat Vikhe
Not easy to say, but it could be related to your script not being executable as embedded Perl.
Try adding # nagios: -epn at the beginning of the script:
#!/usr/bin/env perl
# nagios: -epn
use strict;
use warnings;
Be sure to check all the hints in the Perl Plugins section of the Nagios Plugin Development Guidelines

using external redis server for testing tcl scripts

I am running Ubuntu 11.10.
I am trying to run the Tcl test scripts against an external Redis server, using the following:
sb@sb-laptop:~/Redis/redis$ tclsh tests/test_helper.tcl --host 192.168.1.130 --port 6379
Getting the following error :
Testing unit/type/list
[exception]: Executing test client: couldn't open socket: connection refused.
couldn't open socket: connection refused
while executing
"socket $server $port"
(procedure "redis" line 2)
invoked from within
"redis $::host $::port"
(procedure "start_server" line 9)
invoked from within
"start_server {tags {"protocol"}} {
test "Handle an empty query" {
reconnect
r write "\r\n"
r flush
assert_equal "P..."
(file "tests/unit/protocol.tcl" line 1)
invoked from within
"source $path"
(procedure "execute_tests" line 4)
invoked from within
"execute_tests $data"
(procedure "test_client_main" line 9)
invoked from within
"test_client_main $::test_server_port "
The redis.conf is set to the default binding, but it is commented out.
If this is possible, what am I doing wrong?
Additional Information:
Below is the tcl code that is responsible for starting the server
proc start_server {options {code undefined}} {
# If we are running against an external server, we just push the
# host/port pair in the stack the first time
if {$::external} {
if {[llength $::servers] == 0} {
set srv {}
dict set srv "host" $::host
dict set srv "port" $::port
set client [redis $::host $::port]
dict set srv "client" $client
$client select 9
# append the server to the stack
lappend ::servers $srv
}
uplevel 1 $code
return
}
# setup defaults
set baseconfig "default.conf"
set overrides {}
set tags {}
# parse options
foreach {option value} $options {
switch $option {
"config" {
set baseconfig $value }
"overrides" {
set overrides $value }
"tags" {
set tags $value
set ::tags [concat $::tags $value] }
default {
error "Unknown option $option" }
}
}
set data [split [exec cat "tests/assets/$baseconfig"] "\n"]
set config {}
foreach line $data {
if {[string length $line] > 0 && [string index $line 0] ne "#"} {
set elements [split $line " "]
set directive [lrange $elements 0 0]
set arguments [lrange $elements 1 end]
dict set config $directive $arguments
}
}
# use a different directory every time a server is started
dict set config dir [tmpdir server]
# start every server on a different port
set ::port [find_available_port [expr {$::port+1}]]
dict set config port $::port
# apply overrides from global space and arguments
foreach {directive arguments} [concat $::global_overrides $overrides] {
dict set config $directive $arguments
}
# write new configuration to temporary file
set config_file [tmpfile redis.conf]
set fp [open $config_file w+]
foreach directive [dict keys $config] {
puts -nonewline $fp "$directive "
puts $fp [dict get $config $directive]
}
close $fp
set stdout [format "%s/%s" [dict get $config "dir"] "stdout"]
set stderr [format "%s/%s" [dict get $config "dir"] "stderr"]
if {$::valgrind} {
exec valgrind --suppressions=src/valgrind.sup src/redis-server $config_file > $stdout 2> $stderr &
} else {
exec src/redis-server $config_file > $stdout 2> $stderr &
}
# check that the server actually started
# ugly but tries to be as fast as possible...
set retrynum 100
set serverisup 0
if {$::verbose} {
puts -nonewline "=== ($tags) Starting server ${::host}:${::port} "
}
after 10
if {$code ne "undefined"} {
while {[incr retrynum -1]} {
catch {
if {[ping_server $::host $::port]} {
set serverisup 1
}
}
if {$serverisup} break
after 50
}
} else {
set serverisup 1
}
if {$::verbose} {
puts ""
}
if {!$serverisup} {
error_and_quit $config_file [exec cat $stderr]
}
# find out the pid
while {![info exists pid]} {
regexp {\[(\d+)\]} [exec cat $stdout] _ pid
after 100
}
# setup properties to be able to initialize a client object
set host $::host
set port $::port
if {[dict exists $config bind]} { set host [dict get $config bind] }
if {[dict exists $config port]} { set port [dict get $config port] }
# setup config dict
dict set srv "config_file" $config_file
dict set srv "config" $config
dict set srv "pid" $pid
dict set srv "host" $host
dict set srv "port" $port
dict set srv "stdout" $stdout
dict set srv "stderr" $stderr
# if a block of code is supplied, we wait for the server to become
# available, create a client object and kill the server afterwards
if {$code ne "undefined"} {
set line [exec head -n1 $stdout]
if {[string match {*already in use*} $line]} {
error_and_quit $config_file $line
}
while 1 {
# check that the server actually started and is ready for connections
if {[exec cat $stdout | grep "ready to accept" | wc -l] > 0} {
break
}
after 10
}
# append the server to the stack
lappend ::servers $srv
# connect client (after server dict is put on the stack)
reconnect
# execute provided block
set num_tests $::num_tests
if {[catch { uplevel 1 $code } error]} {
set backtrace $::errorInfo
# Kill the server without checking for leaks
dict set srv "skipleaks" 1
kill_server $srv
# Print warnings from log
puts [format "\nLogged warnings (pid %d):" [dict get $srv "pid"]]
set warnings [warnings_from_file [dict get $srv "stdout"]]
if {[string length $warnings] > 0} {
puts "$warnings"
} else {
puts "(none)"
}
puts ""
error $error $backtrace
}
# Don't do the leak check when no tests were run
if {$num_tests == $::num_tests} {
dict set srv "skipleaks" 1
}
# pop the server object
set ::servers [lrange $::servers 0 end-1]
set ::tags [lrange $::tags 0 end-[llength $tags]]
kill_server $srv
} else {
set ::tags [lrange $::tags 0 end-[llength $tags]]
set _ $srv
}
}
Either there's nothing listening on host 192.168.1.130, port 6379 (well, at a guess) or your firewall configuration is blocking the connection. Impossible to say which, since all the code really sees is “the connection didn't work; something said ‘no’…”.
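Two quick checks that usually distinguish the two cases (assuming redis-cli and netstat are available on the respective machines):
# from the machine running the tests: does anything answer on that host/port?
redis-cli -h 192.168.1.130 -p 6379 ping
# on 192.168.1.130: is redis listening on all interfaces or only on 127.0.0.1?
netstat -ltn | grep 6379
If it only answers locally, check the bind directive in redis.conf and any firewall rules on the server.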
