Using varnish-cache, I am running varnishtop -c -i RxURL to show the number of client requests hitting the cache. The output looks something like this:
list length 40
121.76 RxURL /some/path/to/file
105.17 RxURL /some/other/file
42.91 RxURL /and/another
14.61 RxURL /yet/another
14.59 RxURL /etc
13.63 RxURL /etc/etc
What do the numbers 121.76, 105.17 etc. stand for?
The numbers increase when varnishtop is first started, but then they tend to stabilize, so I believe they represent the number of hits per specific timeframe. Is that so, and what is the timeframe?
This is not explained in the man page. Thank you for any assistance!
Edit: varnish version is 2.1
The varnishtop command shows the rolling aggregate count over 60 seconds. That means even if all traffic stops, it will take 60 seconds to average down on the display.
list length 40
The total number of items in the list, since the screen can only show so many at a time.
121.76 RxURL /some/path/to/file
~121 requests received in the last 60 seconds for /some/path/to/file.
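Divided by the 60-second window, that works out to roughly two requests per second:
121.76 / 60 ≈ 2.0 requests per second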
Some other interesting monitoring stats:
# most frequent cookies
varnishtop -i RxHeader -I Cookie
# continually updated list of frequent URLs
varnishtop -i RxURL
# most frequent UA strings
varnishtop -i RxHeader -C -I ^User-Agent
# frequent charset (Accept-Charset can be replaced with any other HTTP header)
varnishtop -i RxHeader -C -I '^Accept-Charset'
# Requests resulting in 404's
varnishlog -b -m "RxStatus:404"
It's the average number of requests per 60 seconds. The manual does say it, but under the parameter explanation rather than in the general description of the tool:
-p period Specifies the number of seconds to measure over, the default is 60 seconds. The first number in the list is the average number of requests seen over this time period.
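So, for example, to average over five minutes instead of the default 60 seconds (a sketch; check varnishtop(1) on your Varnish 2.1 install for the exact option spelling):
varnishtop -c -i RxURL -p 300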
I am trying to run a tcpdump command with filesize 4096, but it returns an error:
tcpdump: invalid filesize
Command: tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 4096 port 53
After some trial and error I found that it fails for file sizes 4096 (i.e. 2^12), 8192 (i.e. 2^13), and so on.
So for any file size above 2^11 it gives me the invalid filesize error.
Can anybody tell me under what conditions tcpdump returns invalid filesize?
Also, when I ran it with file size 100000:
tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 100000 port 53
a .pcap file of at most 1.3 GB was created.
I also tried looking in the tcpdump source code, but couldn't find much.
I am trying to run a tcpdump command with filesize 4096
To quote a recent version of the tcpdump man page:
-C file_size
Before writing a raw packet to a savefile, check whether the
file is currently larger than file_size and, if so, close the
current savefile and open a new one. Savefiles after the first
savefile will have the name specified with the -w flag, with a
number after it, starting at 1 and continuing upward. The units
of file_size are millions of bytes (1,000,000 bytes, not
1,048,576 bytes).
So -C 4096 means a file size of 4096000000 bytes. That's a large file size and, in older versions of tcpdump, a file size that large (one bigger than 2147483647) isn't supported for the -C flag.
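That also explains the cutoff you observed at 2^11: with units of millions of bytes, 2048 still fits in a signed 32-bit integer while 4096 does not:
2048 * 1,000,000 = 2,048,000,000 < 2,147,483,647 (accepted)
4096 * 1,000,000 = 4,096,000,000 > 2,147,483,647 (rejected as invalid filesize by older versions)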
If you mean you want it to write out files that are 4K bytes in size, unfortunately tcpdump doesn't support that. It is long past due to fix tcpdump issue 884 by merging tcpdump pull request 916; I'll do that now, but it won't help you immediately.
Also when I was running with Filesize :- 100000
That's a file size of 100,000,000,000 bytes, i.e. 100 gigabytes. If you instead meant a file size of 100000 bytes (100 kilobytes), that's not supported either: the current minimum file size for -C is 1 megabyte.
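If the goal is just to keep each capture file to a manageable size, pick a value well inside the supported range; for example (a sketch reusing your original options, where -C 100 means roughly 100 MB per file):
tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 100 port 53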
I'm experiencing some problems with my internet connection, so my provider told me to keep a log file for an evening (at least 3 hours) to see when the connection drops out and what is causing the problem.
When I lose the connection, I still remain in the network but my internet throughput is simply 0 B/s. Is there a way to make a log over a certain time interval that constantly checks the internet connection (and ideally the download/upload speed)? I'm a beginner in the Linux world, so it would be very helpful if the answer is well explained and every step is described.
Thanks in advance.
To check every 10 seconds that your connection is available, you could use
ping 8.8.8.8 -D -i 10 2>&1 | tee my.log
where 8.8.8.8 is a DNS server run by Google.
File my.log will receive entries like:
[1583495940.797787] 64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=17.9 ms
[1583495950.809658] 64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=18.7 ms
ping: sendmsg: Network is unreachable
The number in square brackets is the time in seconds since 1970-01-01T00:00:00Z. For our example:
1583495950 = 2020-03-06T11:59:10Z
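You can convert such a timestamp with GNU date, for example:
date -u -d @1583495950 +%Y-%m-%dT%H:%M:%SZ
# prints 2020-03-06T11:59:10Z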
If you want to really transfer data, you could use a script like:
#!/bin/sh
URL=https://example.com
while true
do
    # keep only wget's "saved" summary line and append it to the log
    wget "$URL" -O /dev/null 2>&1 | grep 'saved' | tee -a my.log
    sleep 10
done
But mind the traffic cost on both sides.
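If you also want a rough download-speed figure in the log, here is a minimal sketch using curl's --write-out option (https://example.com is a placeholder; point it at a small file you are allowed to fetch repeatedly):
#!/bin/sh
URL=https://example.com
while true
do
    # log a timestamp and the average download speed in bytes per second
    echo "$(date --iso-8601=seconds) $(curl -s -o /dev/null -w '%{speed_download}' "$URL")" >> speed.log
    sleep 10
done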
Server: Intel® Core™ i7-3930K 6 core, 64 GB DDR3 RAM, 2 x 3 TB 6 Gb/s HDD SATA3
OS (uname -a): Linux *** 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
Redis server 2.8.19
The server runs a Redis server whose task is to serve requests from two PHP servers.
Problem: the server cannot cope with peak loads and stops processing incoming requests, or processes them very slowly.
The optimizations I have tried so far:
cat /etc/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -SHn 100032
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
cat /etc/sysctl.conf
vm.overcommit_memory=1
net.ipv4.tcp_max_syn_backlog=65536
net.core.somaxconn=32768
fs.file-max=200000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
cat /etc/redis/redis.conf
tcp-backlog 32768
maxclients 100000
These are the settings I found on redis.io and in various blogs.
Tests
redis-benchmark -c 1000 -q -n 10 -t get,set
SET: 714.29 requests per second
GET: 714.29 requests per second
redis-benchmark -c 3000 -q -n 10 -t get,set
SET: 294.12 requests per second
GET: 285.71 requests per second
redis-benchmark -c 6000 -q -n 10 -t get,set
SET: 175.44 requests per second
GET: 192.31 requests per second
As the number of clients increases, query performance drops, and worst of all, the Redis server stops processing incoming requests and the PHP servers throw dozens of exceptions like:
Uncaught exception 'RedisException' with message 'Connection closed' in [no active file]:0\n
Stack trace:\n
#0 {main}\n thrown in [no active file] on line 0
What should I do? What else can I optimize? How many clients should a machine like this be able to handle?
Thank you!
I have some scripts retrieving resources (image files etc) using system calls to curl. Occasionally, these will fail to finish, and will show as pipe_w in process listings.
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
0 S root 4378 4086 0 82 2 - 16002 pipe_w Jan10 ? 00:00:00 curl -JO --max-time 60 --connect-timeout 60 https://address/path/to/resource?identifier=tag
If I understand correctly, I can use --connect-timeout to set the number of seconds to try to make the connection, and --max-time to limit the amount of time to wait for a response from the remote machine.
curl -JO --max-time 60 --connect-timeout 60 https://address/path/to/resource?identifier=tag
Any suggestions as to how I can force curl to continue past this? Or pointers on what might cause this?
This is using curl 7.21.0, on a stock ubuntu 10.10.
I would like to log CPU usage at a frequency of 1 second.
One possible way to do it is via vmstat 1 command.
The problem is that the time between each output is not always exactly one second, especially on a busy server. I would like to be able to output the timestamp along with the CPU usage every second. What would be a simple way to accomplish this, without installing special tools?
There are many ways to do that. Besides top, another option is the sar utility. Something like
sar -u 1 10
will give you the CPU utilization 10 times, once per second. At the end it prints the averages for each of sys, user, iowait and idle.
Another utility is mpstat, which gives you similar information to sar.
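For example (both tools come with the sysstat package on most distributions):
# per-CPU utilization, printed 10 times at 1-second intervals
mpstat -P ALL 1 10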
Use the well-known UNIX tool top that is normally available on Linux systems:
top -b -d 1 > /tmp/top.log
The first line of each output block from top contains a timestamp.
I see no command line option to limit the number of rows that top displays.
Sections 5a. SYSTEM Configuration File and 5b. PERSONAL Configuration File of the top man page describe pressing W while running top in interactive mode to create a $HOME/.toprc configuration file.
I did this, then edited my .toprc file and changed all maxtasks values so that they are maxtasks=4. Then top only displays 4 rows of output.
For completeness, the alternative way to do this using pipes is:
top -b -d 1 | awk '/load average/ {n=10} {if (n-- > 0) {print}}' > /tmp/top.log
You might want to try htop and atop. htop is beautifully interactive while atop gathers information and can report CPU usage even for terminated processes.
I found a neat way to get the timestamp information to be displayed along with the output of vmstat.
Sample command:
vmstat -n 1 3 | while read line; do echo "$(date --iso-8601=seconds) $line"; done
Output:
2013-09-13T14:01:31-0700 procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
2013-09-13T14:01:31-0700 r b swpd free buff cache si so bi bo in cs us sy id wa
2013-09-13T14:01:31-0700 1 1 4197640 29952 124584 12477708 12 5 449 147 2 0 7 4 82 7
2013-09-13T14:01:32-0700 3 0 4197780 28232 124504 12480324 392 180 15984 180 1792 1301 31 15 38 16
2013-09-13T14:01:33-0700 0 1 4197656 30464 124504 12477492 344 0 2008 0 1892 1929 32 14 43 10
To monitor the disk usage, CPU and load, I created a small bash script that writes the values to a log file every 10 seconds.
This log file is processed by Logstash, Kibana and Riemann.
#!/usr/bin/env bash

LOGPATH="/var/log/systemstatus.log"

# Define a timestamp function
timestamp() {
    date +"%Y-%m-%dT%T.%N"
}

while ( sleep 10 ) ; do
    # server load
    echo -n "$(timestamp) linux::systemstatus::load " >> "$LOGPATH"
    cat /proc/loadavg >> "$LOGPATH"

    # cpu usage
    echo -n "$(timestamp) linux::systemstatus::cpu " >> "$LOGPATH"
    top -bn 1 | sed -n 3p >> "$LOGPATH"

    # disk usage
    echo -n "$(timestamp) linux::systemstatus::storage " >> "$LOGPATH"
    df --total | grep total | sed "s/total//g" | sed 's/^ *//' >> "$LOGPATH"
done
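To keep it running after you log out, you could start it in the background, for example (the script path here is just an example):
nohup /usr/local/bin/systemstatus.sh >/dev/null 2>&1 &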