How can I limit pppd record file size?

My mother tongue is not English; sorry for my English.
I use pppd with a GPRS module.
I run a command like pppd record record.pcap call tdscdma to access the Internet, and use pppdump record.pcap or Wireshark to view record.pcap.
While pppd runs, record.pcap saves all the data, so the file keeps getting bigger and bigger.
I only want to keep the last (newest) 1 MB (for example, or some other quantity) of messages. How can I limit the file size?
I am mostly concerned about recent network conditions, so FIFO behaviour is not required; if the file grows beyond 1 MB, truncating it to zero is OK too.
[root@AT91SAM9-RT9x5 logs]# pppd -v
pppd: unrecognized option '-v'
pppd version 2.4.5
[root@AT91SAM9-RT9x5 logs]# uname -a
Linux AT91SAM9-RT9x5 2.6.39 #34 Wed Jun 4 16:12:41 CST 2014 armv5tejl GNU/Linux
Opened in Wireshark, the capture looks like this: (screenshot omitted)

Can you use the tcpdump program to capture the traffic of the ppp0 interface?
It has -C and -W options for limiting the size and number of output files.
Example:
tcpdump -i ppp0 -C 1 -W 2 -w file.pcap
See the man page, tcpdump(8), for more.
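The -C/-W pair gives you a ring buffer: the capture rotates through at most -W numbered files of at most -C million bytes each, overwriting the oldest, so total disk use stays bounded while the newest data is always kept. If tcpdump is not available on the target board, the rotation idea itself is small; here is a sketch of it in Python (RingWriter is my own name, and this shows only the rotation logic, not a pcap-aware writer, which would also need to re-emit the pcap file header after each rotation):

```python
import os

class RingWriter:
    """Write a byte stream into at most `nfiles` files of at most
    `maxbytes` each, overwriting the oldest -- the idea behind
    tcpdump's -C/-W ring buffer. Output files get numeric suffixes
    (file.pcap0, file.pcap1, ...)."""

    def __init__(self, base, maxbytes=1_000_000, nfiles=2):
        self.base = base
        self.maxbytes = maxbytes
        self.nfiles = nfiles
        self.index = 0
        self.f = open(self._name(), "wb")

    def _name(self):
        return f"{self.base}{self.index}"

    def write(self, data):
        if self.f.tell() + len(data) > self.maxbytes:
            self.f.close()
            self.index = (self.index + 1) % self.nfiles
            self.f = open(self._name(), "wb")  # "wb" truncates the oldest file
        self.f.write(data)

    def close(self):
        self.f.close()
```

With maxbytes=1_000_000 and nfiles=2, storage never exceeds about 2 MB while the most recent data is always present, which matches the "newest 1 MB" requirement.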


Ubuntu: Pipe raw unbuffered data to TCP port

My overall goal:
I have a hardware device that streams sensor data to an Ubuntu laptop running a Python script. Data comes in chunks of 240 samples (one per line, newline-terminated) every 2 seconds and is printed to stdout. I start the Python script on the Ubuntu laptop and pipe its output to a TCP port using netcat. I then connect to that TCP port from any other device on the network and get the live data stream, without first loading all previous samples.
My Setup:
Two laptops.
1: Ubuntu, collects readings from a sensor and pipes them to TCP port 1234. (This is working.) $ py read_sensors.py | nc -lk 1234
2: Windows 10, has WSL, Python, and existing scripts for processing data streamed from the first laptop. (This is working in WSL.) $ nc 10.10.10.01 1234
My Problem:
I begin streaming sensor data on the Ubuntu laptop.
10 minutes later I connect to that stream from my Windows laptop...
I expect to receive the most recent sample at the time the connection was established, and all subsequent samples in (pseudo) real-time.
Instead, as soon as I connect I am flooded with all samples collected since I began the streaming pipeline on the Ubuntu laptop, and once it catches up, I start seeing real-time data.
I have tried: searching led me to stdbuf. A lack of results led me to try various combinations, such as $ stdbuf -oL py read_sensors.py | nc -lk 1234 and $ py read_sensors.py | stdbuf -oL nc -lk 1234, but every time I wait a little and then connect to the port from my Windows laptop, it loads all samples from the time I started streaming on the Ubuntu laptop.
I assume: this is a buffering issue and it will have to be fixed on the Ubuntu machine, but the various combinations of stdbuf have had no effect on the behavior of the system. So, I turn to the SO gods for insight and grace :)
-Nick
Something like this might meet the overall goal. Based on https://docs.python.org/3/library/socketserver.html#socketserver-tcpserver-example
import socketserver

class MyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = read_sensor_data()  # call the code that reads the sensors
        self.request.sendall(data)

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 1234), MyHandler) as server:
        server.serve_forever()
Disable netcat buffering:
Force netcat to send messages immediately (without buffering)
Alternately, I believe if you use bash's built-in TCP connections, bypassing netcat, it will work, e.g. py read_sensors.py > /dev/tcp/10.10.10.1/1234
EDIT: Added sample code that shows how to send and receive.
Example code:
To send:
#!/bin/bash
while true
do
    date > /dev/tcp/localhost/1234 || true  # replace the date command with read_sensors.py
    sleep 1
done
To receive:
ubuntu@ubuntu:~$ nc -lk 1234
Output:
Tue Mar 2 20:40:24 UTC 2021
Tue Mar 2 20:40:25 UTC 2021
Tue Mar 2 20:40:26 UTC 2021
Tue Mar 2 20:40:27 UTC 2021
^C
ubuntu@ubuntu:~$ nc -lk 1234
Tue Mar 2 20:40:40 UTC 2021
Tue Mar 2 20:40:41 UTC 2021
Notice the 13-second gap while I hit Ctrl-C: no data was sent or buffered up.
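Whichever transport you settle on, the behavior the question asks for reduces to one rule: a client's buffer must be created at connect time, so it can never contain the backlog produced before the connection. A transport-free sketch of that rule (LiveBroadcaster and its methods are hypothetical names of mine, not a library API):

```python
import queue

class LiveBroadcaster:
    """Fan samples out to subscribers. Each subscriber's queue is
    created at subscribe() time, so a late joiner never sees the
    backlog produced before it connected."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, sample):
        for q in self.subscribers:
            q.put(sample)

broadcaster = LiveBroadcaster()
broadcaster.publish("old sample")   # produced before any client connects
client = broadcaster.subscribe()    # a client connecting 10 minutes in
broadcaster.publish("new sample")
print(client.get_nowait())          # only the post-connect sample arrives
```

The pipe-to-netcat setup fails precisely because the pipe is one shared buffer that exists before any client connects; a per-connection buffer, as in the socketserver approach, is what removes the replay.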

How to detect a connection without logging in on the console (TTY0)

I was wondering what possibilities are available to detect a connection on a tty. My goal is to create an alert in case someone tries to watch what I am doing through the console.
First I thought about who, which shows whether someone is connected and on which tty. But let's say this user isn't logged in: is there still a way of detecting that a tty is opened? Maybe with /dev/tty? Or is it possible to know how many file descriptors point to /dev/console, and which processes are using the hardware/IO? Or maybe hardware detection with vcs? I actually have no idea how to use or test those.
let's say this user isn't logged in
Each process belongs to some user. The only way to run code without being logged in is to run that code in the kernel.
I was wondering what possibilities are available to detect a connection on a tty
Try this:
$ lsof /dev/tty0
lsof is a tool for observing open files.
For example, the serial console for my development board is /dev/ttyUSB0. So I opened a minicom session and also ran cat on the TTY file:
$ minicom -D /dev/ttyUSB0
$ cat /dev/ttyUSB0
and checked this:
$ lsof /dev/ttyUSB0
which gave me:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
minicom 13299 joe 3u CHR 188,0 0t0 135832 /dev/ttyUSB0
cat 13310 joe 3r CHR 188,0 0t0 135832 /dev/ttyUSB0
From this output you can figure out who (USER) connected to your TTY and which program they used (COMMAND, PID).
After closing the minicom session and the cat, lsof doesn't print anything.
It can also be done using the w command:
$ w | grep ttyUSB0
Output:
joe pts/2 :0 17:57 2:47 0.12s 0.00s minicom -D /dev/ttyUSB0
joe pts/3 :0 17:58 2:39 0.08s 0.00s cat /dev/ttyUSB0
UPDATE
If you don't want to be watched in the way described above (i.e. via an open descriptor), you can do the following.
Let's say you are using /dev/ttyUSB0. To connect to your console via this file, it must be opened first (no matter how, e.g. using cat or minicom). Once it is open, your session can be seen by other users (e.g. by root) simply by checking for open file descriptors on that file (lsof /dev/ttyUSB0).
Now, TTY devices are just character devices, and you can create your own files (nodes) for those devices. Let's look closely at /dev/ttyUSB0:
$ ls -l /dev/ttyUSB0
Output:
crw-rw---- 1 root dialout 188, 0 Mar 4 16:07 /dev/ttyUSB0
Here c indicates that this is a character device, and we can see that it has major number 188 and minor number 0. Let's create our own node (file) for this device:
$ cd /tmp
$ sudo mknod some_tricky_name c 188 0
$ sudo chown root:dialout some_tricky_name
$ sudo chmod 644 some_tricky_name
Now you can connect through this file instead of /dev/ttyUSB0, and nobody will see you:
$ minicom -D /tmp/some_tricky_name
Still, if you are going to run a shell on this TTY, a good administrator can catch you by looking at /var/log/auth.log. But I believe this technique prevents you from being caught with the w or lsof commands.
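As an aside, the major/minor pair that mknod needs can also be read programmatically instead of parsed out of ls -l. A small sketch using /dev/null (character device 1,3 on Linux) as a stand-in, since /dev/ttyUSB0 won't exist on every machine:

```python
import os
import stat

def device_numbers(path):
    """Return the (major, minor) pair for a device node -- the same
    numbers that ls -l prints and that mknod takes as arguments."""
    st = os.stat(path)
    if not (stat.S_ISCHR(st.st_mode) or stat.S_ISBLK(st.st_mode)):
        raise ValueError(f"{path} is not a device node")
    return os.major(st.st_rdev), os.minor(st.st_rdev)

print(device_numbers("/dev/null"))  # (1, 3) on Linux
```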

rsyslog - Property-based filtering not working

I almost hate to submit a topic for this, but I haven't been able to figure it out on my own. I'm running a Fedora 17 server, and I'm attempting to log dropped packets from iptables to a separate log file via rsyslog, but it keeps sending them to /var/log/messages instead.
Snippet from my firewall script:
#!/bin/bash
iptables -F
# My accepted rules would be here
iptables -A INPUT -j LOG --log-prefix "iptables: "
iptables -A FORWARD -j LOG --log-prefix "iptables: "
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables-save > /etc/sysconfig/iptables
service iptables restart
iptables -L -v
The config file that SHOULD be catching the messages from iptables:
[root@fc17 ]# cat /etc/rsyslog.d/iptables.conf
:msg, startswith, "iptables: " /var/log/iptables.log
& ~
Snippet from my rsyslog.conf file:
#### GLOBAL DIRECTIVES ####
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
#### RULES ####
# I put this in here too to see if it would work; it doesn't
:msg, startswith, "iptables: " /var/log/iptables.log
& ~
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
I've restarted both iptables and rsyslog multiple times since making the changes, and no matter what, it will only log dropped packets from iptables to /var/log/messages.
I heard running rsyslog in compatibility mode can cause various problems. Could this be the case here? Here are its run-options on my system:
[root@fc17 ]# ps -ef | grep rsyslog
root 3571 1 0 00:59 ? 00:00:00 /sbin/rsyslogd -n -c 5
The startswith comparison operator didn't work, because msg didn't begin with iptables: when I checked my logs.
[root@localhost ~]# cat /etc/rsyslog.d/test.conf
:msg, startswith, "iptables:" /var/log/iptables.log
but the contains comparison operator worked on my FC18:
[root@localhost ~]# cat /etc/rsyslog.d/test.conf
:msg, contains, "iptables:" /var/log/iptables.log
Ref: Rsyslog site
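The reason contains matches while startswith doesn't is visible in the raw message: assuming, as the answers in this thread suggest, that rsyslog's msg property for a kernel log line keeps a leading space and the kernel timestamp, the iptables: prefix is simply not at the start of the string. A quick check of that reasoning (the sample string below is my reconstruction from the log line quoted in this thread):

```python
# What the msg property plausibly contains for the kernel log line
# "... kernel: [ 6448.546951] iptables: IN=ppp0 ..." -- a leading
# space and the kernel timestamp come before the prefix:
msg = " [ 6448.546951] iptables: IN=ppp0 OUT= MAC= SRC="

assert not msg.startswith("iptables:")  # why the startswith filter misses
assert "iptables:" in msg               # why the contains filter matches
print("contains matches; startswith does not")
```

Stripping the kernel timestamp (as the directives below do) is what makes a startswith filter viable again.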
You should add the following two lines to the directives part of your /etc/rsyslog.conf:
$klogParseKernelTimestamp on
$klogKeepKernelTimestamp off
This will remove the kernel timestamp that appears at the beginning of every kernel message, like "[ 6448.546951]" in the following log line:
Mar 31 14:36:14 localhost kernel: [ 6448.546951] iptables: IN=ppp0 OUT= MAC= SRC=
A 2019 solution, tested with rsyslogd 8.32.0 on Ubuntu 18.04.
You can still use startswith,
[root@localhost ~]# cat /etc/rsyslog.d/test.conf
:msg, startswith, " iptables:" /var/log/iptables.log
by changing the module-load line in /etc/rsyslog.conf to:
module(load="imklog" ParseKernelTimestamp="on" KeepKernelTimestamp="off")
I'm using rsyslogd 5.8.10 on CentOS 6, and my log report shows this:
Aug 12 11:50:41 node2 kernel: [10256396.525411] IPTables-Dropped: IN=eth0 OUT= MAC=00:25:90:c3:05:40:00:24:13:10:8c:00:08:00 SRC=212.237.40.56 DST=37.153.1.29 LEN=45 TOS=0x00 PREC=0x00 TTL=244 ID=54321 PROTO=UDP SPT=45661 DPT=53413 LEN=25
I tried to disable the timestamp with:
$klogParseKernelTimestamp on
$klogKeepKernelTimestamp off
But it shows:
Aug 12 11:50:22 node2 rsyslogd-3003: invalid or yet-unknown config file command - have you forgotten to load a module? [try http://www.rsyslog.com/e/3003 ]
In modules have this:
#### MODULES ####
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog # provides kernel logging support (previously done by rklogd)
#$ModLoad immark # provides --MARK-- message capability
Thanks in advance.

How do I log data from my serial ports consistently?

I need to deal with two pieces of custom hardware which both send debugging data over two serial connections. Those serial connections go through two serial-to-USB converters. The serial-to-USB devices have the same vendor numbers, device numbers, and, apparently, the same serial numbers.
Here's the issue: I want to log the two serial ports separately. The custom hardware needs to be rebooted constantly, and which /dev/ttyUSB* each device attaches to is completely random. How can I make them pick the same device path every time? I could make it depend on which port each is plugged into, but that seems kind of hacky.
So, I ran a diff against the output of udevadm, like so:
$ udevadm info -a -p `udevadm info -q path -n /dev/ttyUSB1` > usb1
$ udevadm info -a -p `udevadm info -q path -n /dev/ttyUSB2` > usb2
$ diff usb1 usb2
The output of the diff is long; you can see it here
Grepping for serial (same for both):
$ udevadm info -a -p `udevadm info -q path -n /dev/ttyUSB2` | grep serial
SUBSYSTEMS=="usb-serial"
ATTRS{serial}=="0001"
ATTRS{serial}=="0000:00:1d.7"
Other info:
I'm using PuTTY to read from the serial ports.
OS:
$ uname -a
Linux xxxxxxxx.localdomain 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Please check whether the USB-serial converter is based on an FTDI chip.
(You can check the driver file names.)
If so, you have a chance to change the serial number, or even the manufacturer info.
http://www.ftdichip.com/Support/Utilities.htm
Check the MProg and FT_PROG utility tools.
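If reprogramming the EEPROM isn't an option, the "hacky" idea from the question (keying on the physical USB port) is in fact the usual udev answer for identical converters. A sketch of such a rule; the KERNELS port paths below are examples, and the real values for your ports can be found by running udevadm info -a -n /dev/ttyUSB0 and looking at the parent devices:

```
# /etc/udev/rules.d/99-usb-serial.rules  (KERNELS values are examples)
SUBSYSTEM=="tty", KERNELS=="1-1.2", SYMLINK+="ttySENSOR_A"
SUBSYSTEM=="tty", KERNELS=="1-1.3", SYMLINK+="ttySENSOR_B"
```

After reloading the rules (udevadm control --reload-rules) and replugging, each board appears under a stable /dev/ttySENSOR_* symlink regardless of enumeration order, as long as it stays in the same physical port.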

Identify other end of a unix domain socket connection

I'm trying to figure out what process is holding the other end of a unix domain socket. In some strace output I've identified a given file descriptor which is involved in the problem I'm currently debugging, and I'd like to know which process is on the other end of that. As there are multiple connections to that socket, simply going by path name won't work.
lsof provides me with the following information:
dbus-daem 4175 mvg 10u unix 0xffff8803e256d9c0 0t0 12828 #/tmp/dbus-LyGToFzlcG
So I know some address (“kernel address”?), I know some socket number, and I know the path. I can find that same information in other places:
$ netstat -n | grep 12828
unix 3 [ ] STREAM CONNECTED 12828 #/tmp/dbus-LyGToFzlcG
$ grep -E '12828|ffff8803e256d9c0' /proc/net/unix
ffff8803e256d9c0: 00000003 00000000 00000000 0001 03 12828 #/tmp/dbus-LyGToFzlcG
$ ls -l /proc/*/fd/* 2>/dev/null | grep 12828
lrwx------ 1 mvg users 64 10. Aug 09:08 /proc/4175/fd/10 -> socket:[12828]
However, none of this tells me what the other end of my socket connection is. How can I tell which process is holding the other end?
Similar questions have been asked on Server Fault and Unix & Linux. The accepted answer is that this information is not reliably available to the user space on Linux.
A common suggestion is to look at adjacent socket numbers, but ls -l /proc/*/fd/* 2>/dev/null | grep 1282[79] gave no results here. Perhaps adjacent lines in the netstat output could be used; there seemed to be a pattern of connections with and without an associated socket name. But I'd like some kind of certainty, not just guesswork.
One answer suggests a tool which appears to be able to address this by digging through kernel structures. Using that option requires debug information for the kernel, as generated by the CONFIG_DEBUG_INFO option and provided as a separate package by some distributions. Based on that answer, using the address provided by lsof, the following solution worked for me:
# gdb /usr/src/linux/vmlinux /proc/kcore
(gdb) p ((struct unix_sock*)0xffff8803e256d9c0)->peer
This will print the address of the other end of the connection. Grepping lsof -U for that number will provide details like the process id and the file descriptor number.
If debug information is not available, it might be possible to access the required information by knowing the offset of the peer member into the unix_sock structure. In my case, on Linux 3.5.0 for x86_64, the following code can be used to compute the same address without relying on debugging symbols:
(gdb) p ((void**)0xffff8803e256d9c0)[0x52]
I won't make any guarantees about how portable that solution is.
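To see why an index like 0x52 is plausible, note that casting the address to void** makes the subscript a pointer-sized slot index rather than a byte offset; on x86_64, where pointers are 8 bytes:

```python
slot = 0x52        # index used in the gdb expression above
ptr_size = 8       # sizeof(void *) on x86_64
offset = slot * ptr_size
print(hex(offset)) # byte offset of `peer` in struct unix_sock on that kernel
```

The printed value, 0x290, is specific to the asker's 3.5.0 x86_64 build; any other kernel configuration can move the member, which is exactly why this trick is not portable.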
Update: It's been possible to do this using actual interfaces for a while now. Starting with Linux 3.3, the UNIX_DIAG feature provides a netlink-based API for this information, and lsof 4.89 and later support it. See https://unix.stackexchange.com/a/190606/1820 for more information.
