Error when running varnishtop with some parameters - varnish

I'm trying to debug my varnish configuration, but when I execute
varnishtop -i TxUrl
I get this error:
-i: "RxUrl" matches zero tags
Any idea why?

After a long search, we found this:
https://www.varnish-cache.org/lists/pipermail/varnish-misc/2014-September/024019.html
TxURL is named BereqURL in Varnish 4.
If you use Varnish 4, the command should be:
varnishtop -i BereqURL
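For completeness, the client-side tag was renamed as well. Assuming Varnish 4 or later, both of these should work (the Varnish 3 names are noted in the comments):
varnishtop -i ReqURL     # client request URLs (RxURL in Varnish 3)
varnishtop -i BereqURL   # backend request URLs (TxURL in Varnish 3)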

Related

Varnish: Is it possible to log all GET requests for further processing?

Is it possible to use Varnish for the following task?
Imagine a URL (e.g. /vote?poll-id=1&answer-id=2) that is requested via direct links and for which we display poll results for the chosen poll-id.
I would like to save/pull/process all those requested URLs (in near real time) to generate the poll results.
Is it possible to get those URLs as some sort of stream for further processing?
Varnish is used because I would like to reduce the load on a slower upstream backend service, and because some delay in showing the actual results is OK.
Varnish has built-in shared memory logs. These can be consulted using various tools.
The main ones that could be useful for you are:
varnishlog: in-depth logging about every aspect of the request, response, and internal processing
varnishncsa: an Apache/NCSA style logging tool
You can also leverage the VCL programming language and log requests from within VCL to the operating system's syslog mechanism.
varnishlog
The following command will display all logging information for URLs that start with /vote:
varnishlog -g request -q "ReqUrl ~ '^/vote'"
You can filter for just the fields you need:
varnishlog -i requrl -i reqheader -g request -q "ReqUrl ~ '^/vote'"
This one will only display the request URL and all request headers.
You can also write the output to a file:
varnishlog -A -a -w /var/log/varnish/vsl_vote.log -i requrl -i reqheader -g request -q "ReqUrl ~ '^/vote'"
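Note that -A makes varnishlog write plain text; if you drop it, the file is written in Varnish's binary VSL format, which you can replay and re-filter later. A small sketch (the .bin path is just an example):
varnishlog -a -w /var/log/varnish/vsl_vote.bin -g request -q "ReqUrl ~ '^/vote'"
varnishlog -r /var/log/varnish/vsl_vote.bin -i requrl -i reqheader    # replay the capture later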
See http://varnish-cache.org/docs/6.5/reference/varnishlog.html to learn more about varnishlog and http://varnish-cache.org/docs/6.5/reference/vsl-query.html to learn more about the vsl-query language.
varnishncsa
If you want Apache-style logging, you can use the following command:
varnishncsa -g request -q "ReqUrl ~ '^/vote'"
You can also write these logs to a logfile:
varnishncsa -a -w /var/log/varnish/vote_access.log -g request -q "ReqUrl ~ '^/vote'"
Both the varnishncsa and varnishlog binaries can be daemonized using the -D parameter.
See http://varnish-cache.org/docs/6.5/reference/varnishncsa.html to learn more about varnishncsa. There is also a section in the docs about including custom fields into your varnishncsa output.
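As a hedged sketch of such a custom format (the Apache-style specifiers below are documented in the varnishncsa manual; adjust the fields to what you need):
varnishncsa -g request -q "ReqUrl ~ '^/vote'" -F '%h %t "%r" %s %{User-Agent}i'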
syslog from VCL
If you use the following snippet, you can log vote requests to syslog:
vcl 4.1;
import std;

sub vcl_recv {
    if (req.url ~ "^/vote") {
        std.syslog(6, "Vote request captured: " + req.url);
    }
}
This is boilerplate VCL that cannot simply be copy/pasted as-is: make sure import std; is present in your VCL file, and use std.syslog() to log to your local syslog facility.
See http://varnish-cache.org/docs/6.5/reference/vmod_std.html#void-syslog-int-priority-string-s to learn more about std.syslog().
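The messages then end up in your local syslog; where exactly depends on your distribution and syslog configuration. A rough sketch of how to watch for them (the log path and the varnishd tag are assumptions, verify them on your system):
tail -f /var/log/syslog | grep "Vote request captured"   # Debian/Ubuntu style syslog file
journalctl -f -t varnishd                                 # systemd-journald based setups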

Getting different local-port value in tcpdump when using --local-port option in curl

I'm using the curl command to send traffic to the server. In the curl command, I'm using the --local-port option. Below is the command:
curl -v --interface 10.1.1.3 -b --local-port 10000 http://30.1.1.101/myfile.txt
I'm taking a tcpdump capture to confirm whether the specified local port is used (the tcpdump screenshot is not reproduced here).
After checking the tcpdump output, I observed that the local-port value is different. It is supposed to be something like 10.1.1.1:10000, not 10.1.1.1:random_val.
So my questions are:
Is it possible to force curl to use the same local-port that I have mentioned in the command?
What's the reason for getting a different local-port value?
Any help would be appreciated.

How to configure https_check URL in nagios

I have installed Nagios (Nagios® Core™ version 4.2.2) on a Linux server. I have written a JIRA URL check using check_http for an HTTPS URL.
It should get a 200 response, but it gives HTTP code 302.
[demuc1dv48:/pkg/vdcrz/Nagios/libexec][orarz]# ./check_http -I xx.xx.xx -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT
SSL Version: TLSv1
HTTP OK: HTTP/1.1 302 Found - 296 bytes in 0.134 second response time |time=0.134254s;;;0.000000 size=296B;;;
So I configured the same in the Nagios configuration file.
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
        }
Now my JIRA server is down, but this is not reflected in the Nagios check. The Nagios response still shows HTTP code 302 only.
How to fix this issue?
You did not specify, but I assume you defined your command in the Nagios central server's commands.cfg configuration file. You also need to define a service in services.cfg, as services use commands to run scripts.
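A rough sketch of such a service definition (host_name, service_description and the generic-service template are placeholders/assumptions; adapt them to your setup):
define service{
        use                     generic-service
        host_name               jira-prod
        service_description     JIRA HTTPS Dashboard
        check_command           check_https_jira_prod
        }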
If you are running your check_http check from a different server, you also need to define it in the nrpe.cfg configuration file on that remote machine and then restart nrpe.
As a side note, from the output you've shared, I believe you're not using the check_http Nagios plugin's flags correctly.
From your post:
check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
From ./check_http -h:
-I, --IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
You are using a hostname instead (xxx.xxx.xxx.com).
-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).
You specified CONNECT.
You can't get code 200 unless you set the follow parameter in the check_http script.
I suggest you use something like this:
./check_http -I jira-ex.telefonica.de -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
The -f follow option is mandatory for your use case.
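Translated back into the Nagios command definition, that would look roughly like this (hostnames kept as the placeholders from your question):
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
        }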

How to see all request URLs the server is making (final URLs)

How can I list, from the command line, the URL requests that are made from the server (a *nix machine) to another machine?
For instance, I am on the command line of server ALPHA_RE.
I do a ping to google.co.uk and another ping to bbc.co.uk
I would like to see, from the prompt:
google.co.uk
bbc.co.uk
so, not the IP address of the machine I am pinging, and NOT a URL from servers that pass my request on to google.co.uk or bbc.co.uk, but the actual final URLs.
Note that only packages available in the normal Ubuntu repositories can be used - and it has to work from the command line.
Edit
The ultimate goal is to see what API URLs a PHP script (run by a cronjob) requests, and what API URLs the server requests 'live'.
These do mainly GET and POST requests to several URLs, and I am interested in knowing the params:
Does it make a request to:
foobar.com/api/whatisthere?and=what&is=there&too=yeah
or to :
foobar.com/api/whatisthathere?is=it&foo=bar&green=yeah
And do the cron jobs or the server make any other GET or POST requests?
And that, regardless of what response (if any) these APIs give.
Also, the API list is unknown - so you cannot grep for one particular URL.
Edit:
(The OLD ticket specified: Note that I cannot install anything on that server (no extra packages; I can only use the "normal" commands, like tcpdump, sed, grep, ...) // but as getting this information with tcpdump is pretty hard, I have since made installing packages possible.)
You can use tcpdump and grep to get info about the network traffic from the host; the following command line should get you all lines containing Host:
tcpdump -i any -A -vv -s 0 | grep -e "Host:"
If I run the above in one shell and start a Links session to stackoverflow.com, I see:
Host: www.stackoverflow.com
Host: stackoverflow.com
If you want to know more about the actual HTTP request, you can also add patterns to the grep for GET, PUT or POST requests (e.g. -e "GET"); this gives you the relative URL, which should be combined with the host determined earlier to get the full URL.
EDIT:
Based on your edited question, I have tried to make some modifications:
First, a tcpdump approach:
[root@localhost ~]# tcpdump -i any -A -vv -s 0 | egrep -e "GET" -e "POST" -e "Host:"
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
E..v.[#.#.......h.$....P....Ga .P.9.=...GET / HTTP/1.1
Host: stackoverflow.com
E....x#.#..7....h.$....P....Ga.mP...>;..GET /search?q=tcpdump HTTP/1.1
Host: stackoverflow.com
And an ngrep one:
[root@localhost ~]# ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST"
GET //meta.stackoverflow.com HTTP/1.1..Host: stackoverflow.com..User-Agent:
GET //search?q=tcpdump HTTP/1.1..Host: stackoverflow.com..User-Agent: Links
My test case was running links stackoverflow.com, putting tcpdump in the search field and hitting enter.
This gets you all URL info on one line. A nicer alternative might be to simply run a reverse proxy (e.g. nginx) on your own server, modify the hosts file (as shown in Adam's answer), and have the reverse proxy redirect all queries to the actual host, using the reverse proxy's logging features to get the URLs from there; those logs would probably be a bit easier to read.
EDIT 2:
If you use a command line such as:
ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST" --line-buffered | perl -lne 'print $3.$2 if /(GET|POST) (.+?) HTTP\/1\.1\.\.Host: (.+?)\.\./'
you should see the actual URLs.
A simple solution is to modify your /etc/hosts file to intercept the API calls and redirect them to your own web server:
127.0.0.1 api.foobar.com
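Combined with the reverse-proxy idea mentioned above, a minimal nginx sketch could then log every call and pass it on to the real API host (203.0.113.10 is a placeholder for the real api.foobar.com address, which you have to hard-code because the hosts entry now points the name at localhost):
server {
    listen 80;
    server_name api.foobar.com;
    access_log /var/log/nginx/api_calls.log;    # every request line, including query params, lands here
    location / {
        proxy_set_header Host api.foobar.com;   # keep the original Host header for the upstream
        proxy_pass http://203.0.113.10;         # real upstream IP, resolved manually
    }
}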

curl usage to get header

Why does this not work:
curl -X HEAD http://www.google.com
But these both work just fine:
curl -I http://www.google.com
curl -X GET http://www.google.com
You need to add the -i flag to the first command to include the HTTP headers in the output; this is required to print the headers.
curl -X HEAD -i http://www.google.com
More here: https://serverfault.com/questions/140149/difference-between-curl-i-and-curl-x-head
curl --head https://www.example.net
I was pointed to this by curl itself; when I issued the command with -X HEAD, it printed:
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
google.com is not responding to HTTP HEAD requests, which is why you are seeing a hang for the first command.
It does respond to GET requests, which is why the third command works.
As for the second, curl just prints the headers from a standard request.
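You can see the difference yourself by capping the wait time; --max-time is a standard curl option:
curl -sI http://www.google.com                        # a real HEAD request, headers come back immediately
curl -si -X HEAD --max-time 5 http://www.google.com   # forced HEAD, may sit there until the 5-second timeout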
