I am trying to run a tcpdump command with filesize 4096, but it returns an error:
tcpdump: invalid filesize
Command: tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 4096 port 53
After some trial and error I found that it fails for file sizes of 4096 (i.e. 2^12), 8192 (i.e. 2^13), and so on.
So, for any file size above 2^11 it gives me the invalid filesize error.
Can anybody tell me under which conditions tcpdump returns invalid filesize?
Also, when I ran it with file size 100000:
tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 100000 port 53
a .pcap file with a maximum size of about 1.3 GB was created.
I also tried looking at the tcpdump source code, but couldn't find much.
I am trying to run a tcpdump command with filesize 4096
To quote a recent version of the tcpdump man page:
-C file_size
Before writing a raw packet to a savefile, check whether the
file is currently larger than file_size and, if so, close the
current savefile and open a new one. Savefiles after the first
savefile will have the name specified with the -w flag, with a
number after it, starting at 1 and continuing upward. The units
of file_size are millions of bytes (1,000,000 bytes, not
1,048,576 bytes).
So -C 4096 means a file size of 4,096,000,000 bytes. That's a large file size, and in older versions of tcpdump a file size that large (one bigger than 2147483647) isn't supported for the -C flag.
If you mean you want it to write out files that are 4 KB in size, unfortunately tcpdump doesn't support that. This means it's past due to fix tcpdump issue 884 by merging tcpdump pull request 916 - I'll do that now, but it won't help you in the meantime.
Also, when I ran it with file size 100000
That's a file size of 100,000,000,000 bytes, which is 100 gigabytes. Unfortunately, if what you actually want is a file size of 100000 bytes (100 kilobytes), that isn't supported either; the current minimum file size for -C is 1 megabyte.
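As a rough illustration of what the -C units mean in practice, here is the original command with sizes expressed in tcpdump's million-byte units (the shorter output filename capture.pcap is just a placeholder for this sketch):
# -C counts in units of 1,000,000 bytes, so the smallest rotation size is about 1 MB:
tcpdump -i any -nn -tttt -s0 -w capture.pcap -G 60 -C 1 port 53
# and -C 4 rotates at roughly 4 MB, not 4 KB:
tcpdump -i any -nn -tttt -s0 -w capture.pcap -G 60 -C 4 port 53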
Related
I have the following command:
sudo tcpdump -ni enp0s3 -W 1 -C 1 -w file.cap
With this command I am saying: "listen on the network interface enp0s3 and capture all packets into a file whose maximum size must be 1 MB". It works; however, the problem is that when the file reaches 1 MB it is reset and the capture starts all over again from 0 KB, deleting all the captured packets.
I want that, once the file reaches 1 MB, only the oldest packets are deleted and the new ones take their place. I don't want all packets to be deleted and the capture to restart at 0 KB. In other words, I want the file to always stay around 1 MB, with new incoming packets replacing the oldest ones.
You can use -U -W 2 together with the -C size limit. tcpdump will then alternate between two files, and you can concatenate them (or work on the older one).
An alternative would be to write to a stream or pipe instead of to files at all.
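A minimal sketch of that setup, reusing the interface and 1 MB limit from the question (the rotated filenames file.cap0 and file.cap1 are assumed from tcpdump's usual -W naming, and mergecap is the merge tool from the Wireshark suite):
# alternate between two ~1 MB files; -U flushes each packet to the file as it is captured
sudo tcpdump -ni enp0s3 -U -W 2 -C 1 -w file.cap
# the two most recent ~1 MB chunks of traffic then live in file.cap0 and file.cap1;
# merge them when you need a single capture file, e.g.:
mergecap -w merged.cap file.cap0 file.cap1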
Description
Recently I've run into a problem. I am not able to run yarn start in the element-web directory; I get these errors. Originally I thought it had something to do with element-web itself, so I created an issue. Some time after that I tried to run wintersmith preview in the bibviz directory and got the same errors. This was weird, so I tried to create an Angular project and run ng serve, and got errors again. I headed to the issue to close it, as it wasn't an element-web issue. I found that there was another issue created with the same problem. It had already been closed by turt2live saying it looks like you've run out of memory on your system. Based on this I tried to turn off most programs running in the background, and then all the commands worked.
I am sure that ng serve used to work in the past.
My PC has 16 GB of RAM and the commands already fail when I am at 7/16 GB. I can't see any memory spikes when running the commands. Running the commands with sudo also completely eliminates the problem. This doesn't make any sense to me.
Research led me to ulimits, but they seem to have no effect. I have also installed watchman, with no effect.
Can someone tell me what I am missing?
Thank you in advance!
Info
I am on Debian 11 Bullseye. This is the output of a few commands that could be useful.
As a regular user:
> uname -a
Linux Simon-s-PC 5.8.0-3-amd64 #1 SMP Debian 5.8.14-1 (2020-10-10) x86_64 GNU/Linux
> sudo sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 46482
-n: file descriptors 8192
-l: locked-in-memory size (kbytes) unlimited
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 95
-N 15: unlimited
> yarn --version
1.22.5
With sudo su:
> sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 63664
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 2043392
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
I think I've found a solution:
Set limits in /etc/sysctl.conf by adding:
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
Open a new terminal or reload sysctl.conf variables with
sudo sysctl --system
Run yarn start
Everything should work fine now, hopefully. If it doesn't, try setting the limits higher.
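To confirm the new values are actually in effect after the reload (just a quick check, not part of the original fix), query the two keys; sysctl should print the raised limits:
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
# expected:
# fs.inotify.max_user_watches = 524288
# fs.inotify.max_user_instances = 512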
tcpdump -W 5 -C 10 -w capfile
I know what this command does: it keeps a rotating buffer of 5 files (-W 5), and tcpdump switches to another file once the current file reaches 10,000,000 bytes, about 10 MB (-C works in units of 1,000,000 bytes, so -C 10 = 10,000,000 bytes). The prefix of the files will be capfile (-w capfile), and a one-digit integer will be appended to each: how to save a new file when tcpdump file size reaches 10Mb
My question is what happens if I set -W to 1:
tcpdump -W 1 -C 10 -w capfile
Is this going to keep only 1 file, with a maximum size of 10 MB, containing the latest capture?
How to determine ulimits (Linux)?
I'm using Ubuntu 16.04,
kernel version 4.4.0-21-generic
I set nofile to the maximum for root (in /etc/security/limits.conf);
the line is: * hard nofile NUMBER
According to the file /proc/sys/fs/file-max, the value is 32854728.
When I run the command ulimit -a, I see that the limit is 1024.
I tested it and found that the highest value I can set for max open files is 1048575.
If I set it to a higher value, the limit falls back to 1024.
How do I determine the ulimit for open files? Why can't I set it to a limit higher than 1048575?
To determine the maximum number of file handles for the entire system, run:
cat /proc/sys/fs/file-max
To determine the current usage of file handles, run:
$ cat /proc/sys/fs/file-nr
1154 133 8192
The three numbers are, from left to right:
1154  - total allocated file descriptors (the number of file descriptors allocated since boot)
133   - total free allocated file descriptors
8192  - maximum open file descriptors
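For the per-process side (which is what ulimit -a reports), a quick check of the current soft and hard limits for open files looks like this; the fs.nr_open note is an assumption about why the observed ceiling sits just under 2^20, not something from the original answer:
ulimit -Sn   # soft limit on open file descriptors for the current shell
ulimit -Hn   # hard limit the soft limit can be raised to
cat /proc/sys/fs/nr_open   # kernel cap on any per-process nofile value (1048576 by default, which would explain the ~1048575 ceiling seen above)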
I am trying to measure the performance of an SD card mounted on my board, and I am using the iozone tool to do that, but I am getting strange results:
command:
# mount /dev/mmcblk2p2 /mnt/SD
# cd /mnt/SD
# iozone -a -s 10M -r 5K -w -e
results:
                                                          random    random      bkwd    record    stride
    KB  reclen     write   rewrite      read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
 10240       5      4283      4136     68681    378738    337652      3871    133905     96074    216912      4122      5013    364024    376181
The results are in Kbytes/s; does that mean the random read speed is over 300 MB/s?
My card is Class 4; normally the write speed is about 4 MB/s, and the read speed should not be very different from that value.
iozone -a -s 10M -r 5K -w -e
                                                          random    random      bkwd    record    stride
    KB  reclen     write   rewrite      read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
 10240       5      4283      4136     68681    378738    337652      3871    133905     96074    216912      4122      5013    364024    376181
Yes, your results are in kilobytes/s (KB/s; unless you suppress iozone's header output it will say "Output is in kBytes/sec"), and yes, that is 380 MB/s for the "reread" speed (and 200 MB/s for a read after reread?). But reread may not be the speed of your block device (SD card/HDD/SSD) if your test set (10 MB) is smaller than your RAM, which it is.
Most OSes (Linux included) keep a software cache in RAM for filesystems and block devices. When you access a block for the first time (since boot), it is read from the device and stored in the OS page cache. The next access (read) of that block is served directly from RAM, not from the device itself (unless the O_DIRECT flag was used for the I/O operation, which is the -I option of iozone).
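A rough way to see this caching effect outside iozone (a sketch only; /mnt/SD/testfile stands in for any existing file on the card):
# drop the page cache, then read a file from the card: this reflects the device speed
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/SD/testfile of=/dev/null bs=1M
# read the same file again: the blocks now come from RAM, so the reported speed jumps
dd if=/mnt/SD/testfile of=/dev/null bs=1M
# or bypass the cache with O_DIRECT to always measure the device itself
dd if=/mnt/SD/testfile of=/dev/null bs=1M iflag=direct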
So, your test run is incorrect. Read the iozone man page before use: http://linux.die.net/man/1/iozone and try a bigger test set (gigabytes), or use -I to bypass the page cache.
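For example (a sketch of the suggested rerun; the 2g test set is only an illustrative value larger than the board's RAM):
# larger-than-RAM test set
iozone -a -s 2g -r 5k -w -e
# or keep the small test set but bypass the page cache with direct I/O
iozone -a -s 10M -r 5K -w -e -I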
Here are the results when I use the -I option:
                                                          random    random      bkwd    record    stride
    KB  reclen     write   rewrite      read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
 10240    1024      2356      2950     19693     20865     20833      2095     20111      1734     14375      2875      3566    386809    389443
sequential write: 2.3 MB/s
sequential read: 19.2 MB/s
random write: 2 MB/s
random read: 20 MB/s
backward read: 20 MB/s
Why is the read speed still so high?