I have the following command:
sudo tcpdump -ni enp0s3 -W 1 -C 1 -w file.cap
With this command I'm saying: "listen on the network interface enp0s3 and capture all packets into a file whose maximum size must be 1 MB". It works; however, the problem is that when the file reaches 1 MB it is reset and the capture starts all over again from 0 KB, deleting all the packets.
I want that, when the file reaches 1 MB, only the oldest packets are deleted and the new ones are added in their place. I don't want all packets to be deleted and the acquisition to restart at 0 KB. In other words, I want the file to always stay around 1 MB, with new incoming packets replacing the oldest ones.
You can use -U -W 2 with the -C size limit. It will then alternate between two files and you can concatenate them (or work on the older one).
An alternative would be to write to a stream or a pipe rather than to files at all.
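For example (the interface name and file prefix are placeholders, and the exact file naming can vary slightly between tcpdump versions):
sudo tcpdump -ni enp0s3 -U -W 2 -C 1 -w ring.cap
# tcpdump alternates between ring.cap0 and ring.cap1, each capped at about 1,000,000 bytes,
# so together they always hold roughly the most recent 1-2 MB of traffic
mergecap -w latest.pcap ring.cap0 ring.cap1   # mergecap ships with Wireshark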
I'm trying to create a swap partition/file on my board where a core-image-minimal has been installed.
The fdisk -l command doesn't show any partitions, so I'm not able to figure out which block device I need to use to create a new partition.
Secondly, launching swapon on a swapfile correctly initialized with mkswap raises an "invalid argument" error saying that the file contains holes, even though I created it using dd.
At this point I'm not sure whether I can do something like this at all, since the free output looks like:
total used free shared buff/cache available
Mem: 503304 32108 101108 216 370088 465180
Swap: 0 0 0
To add any partition to your image, you need to modify the wks file that is used for your build.
To get the current wks file, run:
bitbake -e | grep ^WKS_FILE=
Then, look for that file in your layers sources.
In that file you can add (example 1GB swap):
part swap --ondisk mmcblk0 --label swap --fstype=swap --size=1024M --overhead-factor 1
For a real example, you can see the raspberry-pi machine swap support commit here.
You can use a custom wks file and set it to your custom machine conf file:
WKS_FILE ?= "custom-image.wks"
For detailed info, check the Yocto reference about wks.
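As a rough sketch, a custom-image.wks with a root partition and a 1 GB swap partition might look like the following; the disk name (mmcblk0), sizes and labels are assumptions that have to match your board:
# custom-image.wks -- hypothetical layout: rootfs plus swap
part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root --align 4096
part swap --ondisk mmcblk0 --label swap --fstype=swap --size=1024M
bootloader --ptable msdos
After rebuilding the image with wic, the swap partition should show up in fdisk -l and can be enabled with swapon, which avoids the swapfile/holes problem entirely.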
I am trying to run a tcpdump command with filesize 4096, but it returns an error:
tcpdump: invalid filesize
Command :- tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 4096 port 53
After some trial and error I found that it fails for filesize 4096 (i.e. 2^12), 8192 (i.e. 2^13), and so on.
So, for any filesize above 2^11 it gives me the "invalid filesize" error.
Can anybody tell me under which conditions tcpdump returns "invalid filesize"?
Also, when I was running with filesize 100000:
tcpdump -i any -nn -tttt -s0 -w %d-%m-%Y_%H:%M:%S:%s_hostname_ipv6.pcap -G 60 -C 100000 port 53
a .pcap file with a maximum size of 1.3 GB was being created.
I also tried looking at the tcpdump source code, but couldn't find much.
I am trying to run a tcpdump command with filesize 4096
To quote a recent version of the tcpdump man page:
-C file_size
Before writing a raw packet to a savefile, check whether the
file is currently larger than file_size and, if so, close the
current savefile and open a new one. Savefiles after the first
savefile will have the name specified with the -w flag, with a
number after it, starting at 1 and continuing upward. The units
of file_size are millions of bytes (1,000,000 bytes, not
1,048,576 bytes).
So -C 4096 means a file size of 4096000000 bytes. That's a large file size and, in older versions of tcpdump, a file size that large (one bigger than 2147483647) isn't supported for the -C flag.
If you mean you want it to write out files that are 4K bytes in size, unfortunately tcpdump doesn't support that. This means it's past due to fix tcpdump issue 884 by merging tcpdump pull request 916 - I'll do that now, but that won't help you now.
Also when I was running with Filesize :- 100000
That's a file size of 100000000000, which is 100 gigabytes. Unfortunately, if you want a file size of 100000 bytes (100 kilobytes), again, the current minimum file size is 1 megabyte, so that's not supported.
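So, as a sketch, if rotating at a few megabytes per file is acceptable (the interface, filter and file prefix below are placeholders, and the exact numbering may vary by version):
tcpdump -i any -nn -s0 -w dns.pcap -C 5 -W 10 port 53
# -C 5 rotates at roughly 5,000,000 bytes per file; -W 10 keeps at most ten files
# (dns.pcap0 ... dns.pcap9); the smallest value -C accepts is 1, i.e. about 1 MB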
I have the famous SocketException "too many open files" bug.
I am running an Apache HTTP server, a Tomcat server and a MySQL database on my server.
I checked the limit of open files with ulimit -n, which gave me 1024.
If I check how many files are opened with lsof -u tomcat, it gives me 5; the same goes for mysql. I'm not sure what the problem is, but I also get a "readlink: permission denied".
I want to monitor my socket connections and opened files on my server. I thought about using the Linux commands mentioned above in a shell script (roughly like the sketch below) and sending the output to myself by mail.
The other option I'm thinking of is using netstat and maybe counting the connections, but it loads very slowly and gives me "getnameinfo failed".
What would be the better command to monitor the bug I have?
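Something like this rough sketch is what I have in mind (the mail address is a placeholder, and the user names assume Tomcat and MySQL run as tomcat and mysql):
#!/bin/sh
# rough sketch: count open files per service user and summarize sockets, then mail the report
TOMCAT_FILES=$(lsof -u tomcat 2>/dev/null | wc -l)
MYSQL_FILES=$(lsof -u mysql 2>/dev/null | wc -l)
SOCKETS=$(ss -s | head -n 1)   # ss is usually much faster than netstat
{
  echo "open files (tomcat): $TOMCAT_FILES"
  echo "open files (mysql):  $MYSQL_FILES"
  echo "sockets: $SOCKETS"
} | mail -s "fd/socket report from $(hostname)" admin@example.com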
EDIT:
SHOW GLOBAL STATUS LIKE '%open%';
Variable_name Value
Com_ha_open 0
Com_show_open_tables 0
Open_files 8
Open_streams 0
Open_table_definitions 87
Open_tables 64
Opened_files 673
Opened_table_definitions 87
Opened_tables 628
Slave_open_temp_tables 0
SHOW GLOBAL VARIABLES LIKE '%open%';
Variable_name Value
have_openssl DISABLED
innodb_open_files 300
open_files_limit 2000
table_open_cache 64
SHOW GLOBAL VARIABLES LIKE '%connect%';
character_set_connection latin1
collation_connection latin1_swedish_ci
connect_timeout 10
init_connect
max_connect_errors 10
max_connections 400
max_user_connections 0
SHOW GLOBAL STATUS LIKE '%connect%';
Variable_name Value
Aborted_connects 1
Connections 35954
Max_used_connections 102
Ssl_client_connects 0
Ssl_connect_renegotiates 0
Ssl_finished_connects 0
Threads_connected 11
You may check the ulimit values with ulimit -a to determine the Open Files capacity.
From the OS command prompt, run ulimit -n 8192 and press enter to enable more open files dynamically.
To make this change persist across an OS restart, the following URL can be your guide:
https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
Where their example uses a capacity of 500000, use 8192 for your system.
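As a sketch, assuming the limit should apply to the tomcat and mysql service users, the persistent entries in /etc/security/limits.conf would look roughly like this (log in again or restart the services afterwards, then verify with ulimit -n):
tomcat soft nofile 8192
tomcat hard nofile 8192
mysql  soft nofile 8192
mysql  hard nofile 8192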
Suggestions to consider for your my.cnf [mysqld] section,
thread_cache_size=100 # to support your max_used_connections of 102
max_user_connections=400 # from 0 to match max_connections requested
table_open_cache=800 # from 64 to reduce Opened_tables count
innodb_open_files=800 # from 300 to match table_open_cache requested
Implementing these details should avoid the "too many open files" message. For additional assistance, see the profile and network profile for contact information and free downloadable utility scripts to assist with performance tuning.
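Most of these values can also be tried at runtime before editing my.cnf, assuming a reasonably recent MySQL where they are dynamic (innodb_open_files is not dynamic and needs a server restart):
mysql -e "SET GLOBAL table_open_cache = 800;"
mysql -e "SET GLOBAL thread_cache_size = 100;"
mysql -e "SHOW GLOBAL STATUS LIKE 'Opened_tables';"   # should now grow much more slowly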
tcpdump -W 5 -C 10 -w capfile
I know what this command does: it keeps a rotating buffer of 5 files (-W 5), and tcpdump switches to another file once the current file reaches 10,000,000 bytes, about 10 MB (-C works in units of 1,000,000 bytes, so -C 10 = 10,000,000 bytes). The prefix of the files will be capfile (-w capfile), and a one-digit integer will be appended to each (see: how to save a new file when tcpdump file size reaches 10 MB).
My question is what happens if I set -W to 1:
tcpdump -W 1 -C 10 -w capfile
Is this going to keep only one file with a maximum size of 10 MB, containing the latest capture?
I am running the following command on my server (GNU/Linux 2.6.27.5):
tcpdump -i eth0 -w test.pcap
After stopping the packet capture manually, it shows the number of packets captured and received by the filter, but the file test.pcap does not contain anything, i.e. its size is 0.
The tcpdump version used is tcpdump-3.9.8.