I'm using Varnish 4. I want to see the web traffic for awstats, but all the samples I found on the Internet look like varnishncsa -m "RxHeader:^Host: domain.com$" -a -w blabla. The problem is that the latest version of varnishncsa no longer has an "m" option:
$ varnishncsa -V
varnishncsa (varnish-4.0.3 revision b8c4a34)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2014 Varnish Software AS
$ varnishncsa -m "RxHeader:^Host: domain.com$" -a -w /web/htdocs/olivier/stats/awstats-7.4/test.tmp
varnishncsa: invalid option -- 'm'
So how should I split the log files based on my vhosts?
It's the "-q" option, which in Varnish 4 takes a VSL query expression in place of the old -m matches.
Here's a working sample:
varnishncsa -a -D \
-q 'ReqHeader:Host ~ "mywebsite"' \
-w /web/logs/mywebsite/access.varnish.log \
-P /var/run/varnishncsa.mywebsite.pid
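To split logs across several vhosts, you can run one daemonized varnishncsa instance per site. A minimal sketch, assuming hypothetical site names and log/pid paths:

# one varnishncsa per vhost (site names and paths are examples)
for site in mywebsite othersite; do
    varnishncsa -a -D \
        -q "ReqHeader:Host ~ \"$site\"" \
        -w "/web/logs/$site/access.varnish.log" \
        -P "/var/run/varnishncsa.$site.pid"
done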
Can I use wget to get, let's say, status 200 OK and save that status somewhere? If not, how can I do that using Ubuntu Linux?
Thanks!
With curl you can
curl -L -o /dev/null -s -w "%{http_code}\n" http://google.com >> status.txt
You use --save-headers to add the headers to the output, send the output to the console using -O -, discard the error stream using 2>/dev/null, and keep only the status line using grep HTTP/.
You can then output that into a file using >status_file
$ wget --save-headers -O - http://google.com/ 2>/dev/null | grep HTTP/ > status_file
The question asks for the output of the wget command to be stored somewhere. As another alternative, the following example shows how to capture the HTTP status code of a wget run in a shell variable (wget_status); the status is then displayed in the console using the echo command.
$ wget_status=$(wget --server-response ${URL} 2>&1 | awk '/^  HTTP/{print $2}')
$ echo $wget_status
200
After the wget command finishes, the execution status can be manipulated through the value of the wget_status variable.
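For example, a minimal sketch of branching on the captured status (variable name from the example above):

if [ "$wget_status" -eq 200 ]; then
    echo "request succeeded"
else
    echo "request failed with status $wget_status"
fi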
For more information consult the following link as a reference:
https://www.unix.com/shell-programming-and-scripting/148595-capture-http-response-code-wget.html
The tests were executed using Cloud Shell on a Linux system:
Linux cs-335831867014-default 5.10.90+ #1 SMP Wed Mar 23 09:10:07 UTC 2022 x86_64 GNU/Linux
The logs are formatted as below; there are also brackets around each **.
03/16-08:30:05.350000 ** [1:491:8] INFO FTP Bad login ** [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 192.168.21.101:21 -> 192.168.202.102:4061
What I am trying to do is:
extract all destination IPs
extract source IP/destination IP/port pairs
view all events associated with source IP and get a count of all destinations from the identified source IP
So far I have just extracted all the IPs and ports:
grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\:[0-9]\{3,10\}' alert.pcap | sort | uniq
The listing you showed at the top appears to be from a snort or suricata alert file, and using grep on a pcap file is unlikely to give you what you want.
The right tool for extracting the things you want is probably tshark (the command line version of wireshark).
Extract the destination IPs:
tshark -r file.pcap -T fields -e ip.dst
Extract TCP source, destination IPs and ports:
tshark -r file.pcap -T fields -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport -Y tcp
Same for UDP:
tshark -r file.pcap -T fields -e ip.src -e udp.srcport -e ip.dst -e udp.dstport -Y udp
Pipe any of the above to sort -u to get unique addresses/tuples.
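For the third item (a count of distinct destinations per source IP), one possible pipeline, assuming tshark's default tab field separator:

# unique (src, dst) pairs, then count distinct destinations per source
tshark -r file.pcap -T fields -e ip.src -e ip.dst -Y ip | sort -u | cut -f1 | uniq -c | sort -rn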
Another option is to produce an all-inclusive listing in a format suitable for loading into a spreadsheet, then use spreadsheet functions to slice and dice as you see fit. One such command line produces comma-separated values:
tshark -r file.pcap -E separator=, -T fields -e ip.proto -e ip.src -e tcp.srcport -e udp.srcport -e ip.dst -e tcp.dstport -e udp.dstport -Y "udp or tcp"
The ip.proto column here is 6 for TCP or 17 for UDP. Only the respective src/dst port columns will be populated for each IP sub-protocol.
I have two files. One is v3in.txt, which contains the list of commands to be executed. In the second file, v3out.txt, I wish to store the output of each of these commands.
Can someone help me use the standard input/output streams to achieve this?
My v3in.txt file contents:
sudo snmpget -v 3 -l authPriv -a SHA -A "NetMan2019" -x DES -X "ITP201820" -u John 198.51.100.5 .1.3.6.1.2.1.1.4.0
sudo snmpget -v 3 -l authPriv -a SHA -A "Net20192020" -x DES -X "TCP201820" -u John 198.51.100.5 .1.3.6.1.2.1.1.5.0
Noob here.
I have ccminer-cryptonight compiled and running on my Ubuntu 16.04 ThinkPad.
However, since my nvidia GPU is old and not supported by the latest CUDA, I have to use the integrated Intel GPU for X and run ccminer on the nvidia GPU through bumblebee so my screen won't freeze.
Here is the command I use:
optirun ccminer -a cryptonight -o stratum+tcp://miningpooladdress.com:5000 -u username -p "password" -P -R 15
ccminer gives a lot of output. I only want to monitor the hashrate, since there is a bug where the hashrate goes insanely high, which means ccminer has stopped mining, so I have to kill and restart it.
This is the awk command I use to parse the hashrate:
optirun ccminer -a cryptonight -o stratum+tcp://miningpooladdress.com:5000 -u username -p "password" -P -R 15 2>&1 | awk '/5400M,/ {print $7}'
which parses the hashrate readout line (my card is an NVS 5400M). The output looks like:
43.43
54.23
32.67
44.89
xx.xx
xx.xx
Now I want to write this output to a log file. I tried:
optirun ccminer -a cryptonight -o stratum+tcp://miningpooladdress.com:5000 -u username -p "password" -P -R 15 2>&1 | awk '/5400M,/ {print $7 >> "logfile"}'
and
optirun ccminer -a cryptonight -o stratum+tcp://miningpooladdress.com:5000 -u username -p "password" -P -R 15 2>&1 | awk '/5400M,/ {print $7}' >> "logfile"
Neither of these works: the "logfile" is created but remains empty. What am I doing wrong? Why can I see the output on screen but can't write it to the file?
Thanks for the help.
Update regarding ccminer-cryptonight: a simple solution to the lazy miner behaviour - run as root ;P
awk is buffering its output, so change your awk command to:
awk '/5400M,/ {print $7; fflush()}'
For other buffering issues, google stdbuf.
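For instance, a sketch using stdbuf from GNU coreutils to force line buffering on the producing command as well, reusing the command from the question:

stdbuf -oL -eL optirun ccminer -a cryptonight -o stratum+tcp://miningpooladdress.com:5000 \
    -u username -p "password" -P -R 15 2>&1 | awk '/5400M,/ {print $7; fflush()}' >> "logfile"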
I need help with the following Bash v4.1.2 script.
#!/bin/bash
IP=$1
IPTABLES=/sbin/iptables
$IPTABLES -I INPUT -s $IP -j DROP
echo $IPTABLES -I INPUT -s $IP -j DROP |wall
The variables, IP and IPTABLES, get populated in the echo, but the line above it does not appear to execute. The echo outputs...
/sbin/iptables -I INPUT -s 1.2.3.4 -j DROP
...which is syntactically correct and works if executed manually.
I don't know Bash, so I'm struggling to debug this elementary script. I see some scenarios where commands are left bare, as I have mine, and some that are wrapped in $() (with and without quotes). I've also tried using backticks and quoting various parts of the command. The echo piped through wall exists only for debugging.
I found a basically identical post at Bash script commands not working in cron. My script is not running from cron though.
=== EDIT ===
Added for @Barmar
[root@server tmp]# bash -x /bin/netfilter-drop.sh
+ IP=1.2.3.4
+ IPTABLES=/sbin/iptables
+ /sbin/iptables -I INPUT -s 1.2.3.4 -j DROP
+ wall
+ echo /sbin/iptables -I INPUT -s 1.2.3.4 -j DROP
[root@server tmp]#
Broadcast message from root@server (Thu Dec 29 12:46:44 2016):
/sbin/iptables -I INPUT -s 1.2.3.4 -j DROP
^C
[root@server tmp]#
I had initially only given sudo access to run the posted Bash script. The problem was not the script; rather, it was permissions. I needed to grant additional sudo access to run iptables in my sudoers file. Fixed.
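For reference, a minimal sketch of the kind of sudoers entry that grants this; the username is hypothetical, and the file should be edited with visudo:

# /etc/sudoers.d/netfilter-drop (user "someuser" is an example)
someuser ALL=(root) NOPASSWD: /bin/netfilter-drop.sh, /sbin/iptables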