Setting up Varnish on the same server as the webserver - varnish

Our company recently decided to start working with the Varnish HTTP accelerator. The main reason we chose this solution is that we specialize in building web shops (Magento Enterprise), and Magento has a commercial plugin that works together with Varnish.
The Varnish configuration is already present on our testing environment, which consists of one (software) load balancer running a Varnish instance, two Apache web servers, and one storage plus one MySQL server.
However, now the time has come to add Varnish to our development environment (a VirtualBox VM with 1 GB of RAM running Debian, with the database, web server and files all on the same machine).
Could anyone post a default.vcl configuration file for this setup?
Apache2 runs on port 80.
Thanks in advance,
Kenny
EDIT: I found and posted the solution below.

This link has an excellent discussion of using Varnish on big production web sites. In particular, look at the /etc/default/varnish or /etc/sysconfig/varnish DAEMON_OPTS that put the cache 'file' into memory instead of on disk:
http://www.lullabot.com/articles/varnish-multiple-web-servers-drupal
The snippet I'm talking about:
DAEMON_OPTS="-a :80,:443 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u varnish -g varnish \
-S /etc/varnish/secret \
-p thread_pool_add_delay=2 \
-p thread_pools=2 \
-p thread_pool_min=400 \
-p thread_pool_max=4000 \
-p session_linger=50 \
-p sess_workspace=262144 \
-s malloc,3G"

I found the solution after more searching. Basically, we need to make sure that Varnish is listening on port 80 and Apache on port 8080 (or any other port).
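On Debian that usually means lowering Apache to port 8080 before pointing Varnish at port 80. A rough sketch of the change (the file paths below are the stock Debian locations and the vhost file name is an assumption, so adjust to your own layout; on Apache 2.4 the NameVirtualHost line is no longer needed):
# /etc/apache2/ports.conf
NameVirtualHost *:8080
Listen 8080

# and the matching vhost, e.g. /etc/apache2/sites-available/default:
<VirtualHost *:8080>
    ...
</VirtualHost>
Then restart Apache (service apache2 restart) before starting Varnish on port 80.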
Here is my default.vcl file (located in /etc/varnish/default.vcl):
# default backend definition. Set this to point to your content server.
backend apache1 {
.host = "127.0.0.1";
.port = "8080";
}
director lb round-robin {
{.backend=apache1;}
}
# add your Magento server IP to allow purges from the backend
acl purge {
"localhost";
"127.0.0.1";
}
# needed for TTL handling
C{
#include <errno.h>
#include <limits.h>
}C
sub vcl_recv {
set req.backend=lb;
if (req.request != "GET" &&
req.request != "HEAD" &&
req.request != "PUT" &&
req.request != "POST" &&
req.request != "TRACE" &&
req.request != "OPTIONS" &&
req.request != "DELETE" &&
req.request != "PURGE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# purge request
if (req.request == "PURGE") {
if (!client.ip ~ purge) {
error 405 "Not allowed.";
}
purge("obj.http.X-Purge-Host ~ " req.http.X-Purge-Host " && obj.http.X-Purge-URL ~ " req.http.X-Purge-Regex " && obj.http.Content-Type ~ " req.http.X-Purge-Content-Type);
error 200 "Purged.";
}
# we only deal with GET and HEAD by default
if (req.request != "GET" && req.request != "HEAD") {
return (pass);
}
# static files are always cacheable. remove SSL flag and cookie
if (req.url ~ "^/(media|js|skin)/.*\.(png|jpg|jpeg|gif|css|js|swf|ico)$") {
unset req.http.Https;
unset req.http.Cookie;
}
# not cacheable by default
if (req.http.Authorization || req.http.Https) {
return (pass);
}
# do not cache any page from
# - index files
# - ...
if (req.url ~ "^/(index)") {
return (pass);
}
# as soon as we have a NO_CACHE or admin cookie pass request
if (req.http.cookie ~ "(NO_CACHE|adminhtml)=") {
return (pass);
}
# normalize Accept-Encoding header
# http://varnish.projects.linpro.no/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
remove req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
remove req.http.Accept-Encoding;
}
}
# remove Google gclid parameters
set req.url = regsuball(req.url,"\?gclid=[^&]+$",""); # strips when QS = "?gclid=AAA"
set req.url = regsuball(req.url,"\?gclid=[^&]+&","?"); # strips when QS = "?gclid=AAA&foo=bar"
set req.url = regsuball(req.url,"&gclid=[^&]+",""); # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"
# decided to cache. remove cookie
#unset req.http.Cookie;
return (lookup);
}
Here's the content of the varnish file (/etc/default/varnish):
# Configuration file for varnish
#
# /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK
# to be set from this shell script fragment.
#
# Should we start varnishd at boot? Set to "yes" to enable.
START=yes
# Maximum number of open files (for ulimit -n)
NFILES=131072
# Maximum locked memory size (for ulimit -l)
# Used for locking the shared memory log in memory. If you increase log size,
# you need to increase this number as well
MEMLOCK=82000
# Default varnish instance name is the local nodename. Can be overridden with
# the -n switch, to have more instances on a single server.
INSTANCE=$(uname -n)
# This file contains 4 alternatives, please use only one.
## Alternative 1, Minimal configuration, no VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# content server on localhost:8080. Use a 1GB fixed-size cache file.
#
# DAEMON_OPTS="-a :6081 \
# -T localhost:6082 \
# -b localhost:8080 \
# -u varnish -g varnish \
# -S /etc/varnish/secret \
# -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request. Use a 1GB
# fixed-size cache file.
#
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
## Alternative 3, Advanced configuration
#
# See varnishd(1) for more information.
#
# # Main configuration file. You probably want to change it :)
# VARNISH_VCL_CONF=/etc/varnish/default.vcl
#
# # Default address and port to bind to
# # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
# # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
# VARNISH_LISTEN_ADDRESS=
# VARNISH_LISTEN_PORT=6081
#
# # Telnet admin interface listen address and port
# VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
# VARNISH_ADMIN_LISTEN_PORT=6082
#
# # The minimum number of worker threads to start
# VARNISH_MIN_THREADS=1
#
# # The Maximum number of worker threads to start
# VARNISH_MAX_THREADS=1000
#
# # Idle timeout for worker threads
# VARNISH_THREAD_TIMEOUT=120
#
# # Cache file location
# VARNISH_STORAGE_FILE=/var/lib/varnish/$INSTANCE/varnish_storage.bin
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
# VARNISH_STORAGE_SIZE=1G
#
# # File containing administration secret
# VARNISH_SECRET_FILE=/etc/varnish/secret
#
# # Backend storage specification
# VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
#
# # Default TTL used when the backend does not specify one
# VARNISH_TTL=120
#
# # DAEMON_OPTS is used by the init script. If you add or remove options, make
# # sure you update this section, too.
# DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
# -f ${VARNISH_VCL_CONF} \
# -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
# -t ${VARNISH_TTL} \
# -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
# -S ${VARNISH_SECRET_FILE} \
# -s ${VARNISH_STORAGE}"
#
## Alternative 4, Do It Yourself
#
# DAEMON_OPTS=""
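Once both files are in place, you can quickly check that the VCL compiles and then restart Varnish (the first command just compiles the VCL and prints the generated C code; the init script name below is the one shipped with the standard Debian package):
varnishd -C -f /etc/varnish/default.vcl
/etc/init.d/varnish restart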
After that you can monitor how Varnish serves the content (and from which source) by running:
varnishlog | grep URL
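You can also spot-check a single page from the shell: on a cache hit the X-Varnish response header normally carries two transaction IDs and the Age header is greater than zero (a rough check with a placeholder URL; it assumes the default vcl_deliver, which the VCL above does not override):
curl -s -D - -o /dev/null http://localhost/ | grep -iE '^(x-varnish|age|via)'
Run it twice; the second response should show the extra ID and a non-zero Age.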

Apache can be used to terminate SSL (decrypt the traffic); see http://noosfero.org/Development/Varnish#SSL
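A minimal sketch of such an SSL-terminating vhost in front of Varnish (the hostname, certificate paths and port layout are assumptions, not taken from the linked page; it needs mod_ssl, mod_proxy, mod_proxy_http and mod_headers enabled):
<VirtualHost *:443>
    ServerName shop.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/shop.example.com.pem
    SSLCertificateKeyFile /etc/ssl/private/shop.example.com.key

    # hand the decrypted traffic to Varnish on port 80 and tell the backend it was HTTPS
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:80/
    ProxyPassReverse / http://127.0.0.1:80/
</VirtualHost>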

Related

Varnish is unable to take hold of port 80 on an EC2 machine

I am trying to run Varnish in a Docker container on an EC2 instance.
I tried the same thing on my local machine and it worked fine, but it keeps giving this error:
Error: Could not get socket :80: Permission denied
My vcl looks like:
vcl 4.0;
backend default {
.host = "x.y.z.y";
.port = "8090";
}
sub vcl_recv {
if (req.method == "BAN") {
ban("obj.http.x-host == " + req.http.host + " && obj.http.x-url ~ " + req.url);
return(synth(200, "Banned added"));
}
}
sub vcl_backend_response {
# Store URL and HOST in the cached response.
set beresp.http.x-url = bereq.url;
set beresp.http.x-host = bereq.http.host;
}
sub vcl_deliver {
# Prevent the client from seeing these additional headers.
unset resp.http.x-url;
unset resp.http.x-host;
}
and there is no other process running on port 80.
Binding to port 80 requires root permission; try running the docker command as root, or add your user to the docker group.
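For example (a sketch only; the image name and VCL path are placeholders, not taken from the question):
# run the container via sudo/root and publish port 80; the docker daemon does the privileged bind on the host
sudo docker run -d -p 80:80 -v /path/to/default.vcl:/etc/varnish/default.vcl varnish

# or allow your user to talk to the docker daemon without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER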

Install Magento 2 and Varnish Cache on different servers

I have two servers: one has Magento 2 installed (IP 129.89.188.244, port 80) and the other runs Varnish (IP 129.89.188.245, port 80).
My Varnish Configuration:
File /etc/default/varnish:
DAEMON_OPTS="-a :80 \
-T 127.0.0.1:6082 \
-b 129.89.188.244:80 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
netstat -tulpn:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1288/sshd
tcp 0 0 127.0.0.1:6082 0.0.0.0:* LISTEN 11115/varnishd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11115/varnishd
tcp6 0 0 :::22 :::* LISTEN 1288/sshd
tcp6 0 0 :::80 :::* LISTEN 11115/varnishd
/etc/varnish/default.vcl:
# VCL version 5.0 is not supported, so this must stay at 4.0 even though the Varnish version actually used is 5
vcl 4.0;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
.host = "129.89.188.244";
.port = "80";
.first_byte_timeout = 600s;
.probe = {
.url = "/pub/health_check.php";
.timeout = 2s;
.interval = 5s;
.window = 10;
.threshold = 5;
}
}
acl purge {
"129.89.188.245";
"127.0.0.1";
"localhost";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during its deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass shopping cart, checkout and search requests
if (req.url ~ "/checkout" || req.url ~ "/catalogsearch") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "/pub/health_check.php") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove Google gclid parameters to minimize the cache objects
set req.url = regsuball(req.url,"\?gclid=[^&]+$",""); # strips when QS = "?gclid=AAA"
set req.url = regsuball(req.url,"\?gclid=[^&]+&","?"); # strips when QS = "?gclid=AAA&foo=bar"
set req.url = regsuball(req.url,"&gclid=[^&]+",""); # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
# Static files should not be cached by default
return (pass);
# But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
#unset req.http.Https;
#unset req.http.X-Forwarded-Proto;
#unset req.http.Cookie;
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# For multi site configurations to not cache each other's content
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successful responses and 404s
if (beresp.status != 200 && beresp.status != 404) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 86400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
# images, css and js are cacheable by default so we have to remove cookie also
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} else {
unset resp.http.Age;
}
# Don't let the browser cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 300s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (miss);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
Now the issue is: when I open the URL 129.89.188.244, Magento opens but it's not getting cached in Varnish. And when I call the Varnish URL 129.89.188.245, it redirects to my Magento URL 129.89.188.244. My Varnish log shows that the page is already cached, but Magento is not being served from that Varnish cache.
In order for the cache to work, your requests always have to go through the proxy/cache (Varnish).
Varnish evaluates each request and decides whether it is already cached (and then returns it from the cache); if not, it forwards the request to Magento and caches the response before returning it to the client.
That's how most caches work. You can read in detail how the Varnish cache mechanism works here.
If you want to hit your cache, you always have to go through Varnish (129.89.188.245:80).
Consider the request-flow diagram from the official documentation.
This is the expected behaviour.
Varnish sits in front of your Magento server. If you hit the Magento server directly, bypassing Varnish by opening 129.89.188.244, then Magento sends the response without involving Varnish; if you hit Varnish instead, Varnish calls Magento and caches the response. The other way around is not possible and makes no sense.
This solution worked for me to configure Varnish and Magento on different servers.
Varnish server: xxx.xxx.xxx.xxx port 80
Magento server: yyy.yyy.yyy.yyy port 80
Changes that need to be made on the Varnish server:
1. Log in to the Varnish server.
2. Open the file /etc/varnish/default.vcl.
3. Under "backend default", update:
.host = "yyy.yyy.yyy.yyy"; // the Magento web server IP
.port = "80"; // the Magento web server port
4. Restart Varnish (systemctl restart varnish).
Note: use the default VCL that is generated during the Varnish installation; don't replace it with the Magento-generated VCL for Varnish (available from the Magento admin).
Changes that need to be made on the Magento server:
1. Log in to the Magento server.
2. Open the env.php file located in the app/etc directory.
3. Update the values in:
'http_cache_hosts' => [
    [
        'host' => 'xxx.xxx.xxx.xxx', // the Varnish server public IP
        'port' => '80'               // the Varnish server port
    ]
]
Now update your base URLs in the core_config_data table to your Varnish public IP (http://xxx.xxx.xxx.xxx/) and flush the Magento caches (bin/magento ca:fl).
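To verify that pages are really served by Varnish after these changes, one option is to request a page twice through the Varnish IP and watch the hit/miss counters on the Varnish server (the exact counter names vary a little between Varnish versions):
curl -s -o /dev/null http://xxx.xxx.xxx.xxx/
curl -s -o /dev/null http://xxx.xxx.xxx.xxx/
varnishstat -1 | grep -Ei 'cache_hit|cache_miss'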

Sending mail with Symfony (Swiftmailer & Gmail)

I'm trying to use Swiftmailer with Symfony 2.4.
Here is my config.yml:
# This file is auto-generated during the composer install
parameters:
    mailer_transport: gmail
    mailer_host: smtp.gmail.com
    mailer_user: jules.truong.pro@gmail.com
    mailer_password: XXXXXX
    mailer_port: 465
    locale: fr
    secret: XXXX
And this is parameters.yml:
# Swiftmailer Configuration
swiftmailer:
    transport: %mailer_transport%
    username: %mailer_user%
    password: %mailer_password%
My code is pretty basic:
$request = $this->get('request');
$dataSubject = $request->query->get('lbSubject');
$dataEmail = $request->query->get('lbEmail');
$dataMessage = $request->query->get('lbMessage');

// Get the mailer service
$mailer = $this->get('mailer');

// Create the e-mail: the mailer service uses SwiftMailer, so we create a Swift_Message instance
$message = \Swift_Message::newInstance()
    ->setSubject($dataSubject)
    ->setFrom($dataEmail)
    ->setTo('julestruonglolilol@email.com')
    ->setBody($dataMessage);

try
{
    if (!$mailer->send($message, $failures))
    {
        return new Response('Erreur' . $failures, 400);
    }
    return new Response('OK', 200);
}
catch (Exception $e)
{
    return new Response('Erreur' . $failures, 400);
}
In the end, it returns an error:
Connection could not be established with host smtp.gmail.com
This is pretty frustrating because I know my password is correct.
After a few minutes, I receive an email telling me that someone tried to hack my account, etc.
Oh, and I'm running this with WAMP, so locally.
Is the problem in my code, or maybe on Google's side?
Thanks
Try adding the following to your swiftmailer configuration, as Gmail requires an encrypted (SSL) connection:
encryption: ssl
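For reference, the whole swiftmailer section would then look roughly like this (a sketch based on the configuration above; port 465 and auth_mode: login are the usual values for Gmail over SSL, not something taken from the question):
swiftmailer:
    transport: %mailer_transport%
    host: %mailer_host%
    username: %mailer_user%
    password: %mailer_password%
    port: 465
    encryption: ssl
    auth_mode: login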

Perl for SNMP V3 Not Working, but works with SNMP V1/2 (Redhat Linux)

I have a Perl script which registers SNMP OIDs. With SNMP v1/2c, it is able to successfully register all OIDs. However, with SNMP v3 it only partially works.
As you can see below, with SNMP v3 it is able to register "$root_OID.0.0.0" successfully. However, it times out when trying to invoke the Java code for "$root_OID.0.0.1".
Does anyone know why I'm able to make a successful Java call with SNMP v1/2c, but not with SNMP v3?
Many thanks
Here is my Perl script:
#!/usr/bin/perl
use NetSNMP::OID (':all');
use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER);
use NetSNMP::agent (':all');
sub myhandler {
my ($handler, $registration_info, $request_info, $requests) = @_;
my $request;
my $root_OID = ".1.3.6.1.4.1.8072.9999.9999.0";
my $CLASSPATH = "/opt/BPL/JBoss/BPL_JBossJMX.jar:/opt/jboss-5.1/client/*";
my $CLASSNAME = "com.XXXXX.XXXXX.XXXXX.jmx.BPLJbossJMX_For_SNMP";
my $ENV = "localhost";
my $PORT = "8099";
my $LOG4JFILELOC = "/opt/BPL/JBoss/JBoss-BPL-Log4j.xml";
for($request = $requests; $request; $request = $request->next()) {
my $oid = $request->getOID();
if ($request_info->getMode() == MODE_GETNEXT) {
if ($oid < new NetSNMP::OID("$root_OID.0.0.0")) {
my $INPUTSTRNAME = "HeapMemoryUsageZZZZZ";
$request->setOID("$root_OID.0.0.0");
$request->setValue(ASN_OCTET_STR, $INPUTSTRNAME);
} elsif ($oid < new NetSNMP::OID("$root_OID.0.0.1")) {
my $INPUTSTRNAME = "HeapMemoryUsage";
my $OUTPUT= `java -cp $CLASSPATH $CLASSNAME $ENV $PORT $INPUTSTRNAME $LOG4JFILELOC`;
chomp($OUTPUT);
$request->setOID("$root_OID.0.0.1");
$request->setValue(ASN_INTEGER, $OUTPUT);
}
}
}
}
my $rootOID = ".1.3.6.1.4.1.8072.9999.9999.0";
my $regoid = new NetSNMP::OID($rootOID);
$agent->register("BPL-JBoss", $regoid, \&myhandler);
Here is my /etc/snmp/snmpd.conf file (SNMP V1/2c disabled):
###############################################################################
# snmpd.conf:
###############################################################################
#com2sec notConfigUser default public
# groupName securityModel securityName
#group notConfigGroup v1 notConfigUser
#group notConfigGroup v2c notConfigUser
view systemview included .1.3.6.1.4.1.8072.1.3.2
view systemview included .1.3.6.1.2.1
view systemview included .1.3.6.1.2.1.25.1.1
view systemview included .1.3.6.1.4.1.2021
view systemview included .1.3.6.1.4.1.8072.9999.9999
#access notConfigGroup "" any noauth exact systemview none none
###############################################################################
syslocation Unknown (edit /etc/snmp/snmpd.conf)
syscontact Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
###############################################################################
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
###############################################################################
perl do "/home/XXXXXXX/JBoss_hello_world.pl"
rouser TEST_USERNAME priv
Here are the results of my snmpwalk when using SNMP v3:
-$snmpwalk -v 3 -l authPriv -a sha -A TEST_PASSWORD -x AES -X TEST_PASSWORD -u TEST_USERNAME localhost .1.3.6.1.4.1.8072.9999.9999
NET-SNMP-MIB::netSnmpPlaypen.0.0.0.0 = STRING: "HeapMemoryUsageZZZZZ"
Timeout: No Response from localhost

Using an external Redis server for testing Tcl scripts

I am running Ubuntu 11.10.
I am trying to run the Tcl test scripts against an external Redis server, using the following:
sb@sb-laptop:~/Redis/redis$ tclsh tests/test_helper.tcl --host 192.168.1.130 --port 6379
I am getting the following error:
Testing unit/type/list
[exception]: Executing test client: couldn't open socket: connection refused.
couldn't open socket: connection refused
while executing
"socket $server $port"
(procedure "redis" line 2)
invoked from within
"redis $::host $::port"
(procedure "start_server" line 9)
invoked from within
"start_server {tags {"protocol"}} {
test "Handle an empty query" {
reconnect
r write "\r\n"
r flush
assert_equal "P..."
(file "tests/unit/protocol.tcl" line 1)
invoked from within
"source $path"
(procedure "execute_tests" line 4)
invoked from within
"execute_tests $data"
(procedure "test_client_main" line 9)
invoked from within
"test_client_main $::test_server_port "
The redis.conf is set to the default binding, but it is commented out.
If this is possible, what am I doing wrong?
Additional information:
Below is the Tcl code that is responsible for starting the server:
proc start_server {options {code undefined}} {
# If we are running against an external server, we just push the
# host/port pair in the stack the first time
if {$::external} {
if {[llength $::servers] == 0} {
set srv {}
dict set srv "host" $::host
dict set srv "port" $::port
set client [redis $::host $::port]
dict set srv "client" $client
$client select 9
# append the server to the stack
lappend ::servers $srv
}
uplevel 1 $code
return
}
# setup defaults
set baseconfig "default.conf"
set overrides {}
set tags {}
# parse options
foreach {option value} $options {
switch $option {
"config" {
set baseconfig $value }
"overrides" {
set overrides $value }
"tags" {
set tags $value
set ::tags [concat $::tags $value] }
default {
error "Unknown option $option" }
}
}
set data [split [exec cat "tests/assets/$baseconfig"] "\n"]
set config {}
foreach line $data {
if {[string length $line] > 0 && [string index $line 0] ne "#"} {
set elements [split $line " "]
set directive [lrange $elements 0 0]
set arguments [lrange $elements 1 end]
dict set config $directive $arguments
}
}
# use a different directory every time a server is started
dict set config dir [tmpdir server]
# start every server on a different port
set ::port [find_available_port [expr {$::port+1}]]
dict set config port $::port
# apply overrides from global space and arguments
foreach {directive arguments} [concat $::global_overrides $overrides] {
dict set config $directive $arguments
}
# write new configuration to temporary file
set config_file [tmpfile redis.conf]
set fp [open $config_file w+]
foreach directive [dict keys $config] {
puts -nonewline $fp "$directive "
puts $fp [dict get $config $directive]
}
close $fp
set stdout [format "%s/%s" [dict get $config "dir"] "stdout"]
set stderr [format "%s/%s" [dict get $config "dir"] "stderr"]
if {$::valgrind} {
exec valgrind --suppressions=src/valgrind.sup src/redis-server $config_file > $stdout 2> $stderr &
} else {
exec src/redis-server $config_file > $stdout 2> $stderr &
}
# check that the server actually started
# ugly but tries to be as fast as possible...
set retrynum 100
set serverisup 0
if {$::verbose} {
puts -nonewline "=== ($tags) Starting server ${::host}:${::port} "
}
after 10
if {$code ne "undefined"} {
while {[incr retrynum -1]} {
catch {
if {[ping_server $::host $::port]} {
set serverisup 1
}
}
if {$serverisup} break
after 50
}
} else {
set serverisup 1
}
if {$::verbose} {
puts ""
}
if {!$serverisup} {
error_and_quit $config_file [exec cat $stderr]
}
# find out the pid
while {![info exists pid]} {
regexp {\[(\d+)\]} [exec cat $stdout] _ pid
after 100
}
# setup properties to be able to initialize a client object
set host $::host
set port $::port
if {[dict exists $config bind]} { set host [dict get $config bind] }
if {[dict exists $config port]} { set port [dict get $config port] }
# setup config dict
dict set srv "config_file" $config_file
dict set srv "config" $config
dict set srv "pid" $pid
dict set srv "host" $host
dict set srv "port" $port
dict set srv "stdout" $stdout
dict set srv "stderr" $stderr
# if a block of code is supplied, we wait for the server to become
# available, create a client object and kill the server afterwards
if {$code ne "undefined"} {
set line [exec head -n1 $stdout]
if {[string match {*already in use*} $line]} {
error_and_quit $config_file $line
}
while 1 {
# check that the server actually started and is ready for connections
if {[exec cat $stdout | grep "ready to accept" | wc -l] > 0} {
break
}
after 10
}
# append the server to the stack
lappend ::servers $srv
# connect client (after server dict is put on the stack)
reconnect
# execute provided block
set num_tests $::num_tests
if {[catch { uplevel 1 $code } error]} {
set backtrace $::errorInfo
# Kill the server without checking for leaks
dict set srv "skipleaks" 1
kill_server $srv
# Print warnings from log
puts [format "\nLogged warnings (pid %d):" [dict get $srv "pid"]]
set warnings [warnings_from_file [dict get $srv "stdout"]]
if {[string length $warnings] > 0} {
puts "$warnings"
} else {
puts "(none)"
}
puts ""
error $error $backtrace
}
# Don't do the leak check when no tests were run
if {$num_tests == $::num_tests} {
dict set srv "skipleaks" 1
}
# pop the server object
set ::servers [lrange $::servers 0 end-1]
set ::tags [lrange $::tags 0 end-[llength $tags]]
kill_server $srv
} else {
set ::tags [lrange $::tags 0 end-[llength $tags]]
set _ $srv
}
}
Either there's nothing listening on host 192.168.1.130, port 6379 (well, at a guess) or your firewall configuration is blocking the connection. It's impossible to say which, since all the code really sees is “the connection didn't work; something said ‘no’…”.
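A few quick checks you could run (a sketch, assuming the stock Ubuntu package paths):
# from the test machine: does anything answer on that host/port at all?
redis-cli -h 192.168.1.130 -p 6379 ping      # should print PONG

# on the Redis server: is redis bound to an address reachable from outside?
grep -n bind /etc/redis/redis.conf

# and is a firewall in the way? (ufw is the usual front end on Ubuntu)
sudo ufw status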
