Sending to a serial device via socat only shows up if I'm actively reading -- I need the data to be cached

I have socat creating two serial ports, ttyclient and ttyserver. ttyclient will be opened by an application, and I have a script listening on ttyserver. The sequence of events I need is:
1) My socat creates the two via /usr/bin/socat -d -d PTY,link=./ttyserver,raw,echo=0 PTY,link=./ttyclient,raw,echo=0
2) My script writes a few bytes to ttyserver (to be read by ttyclient).
3) At some later point in time the application runs, reads the bytes from ttyclient, and goes to work.
The problem I am having is that the bytes are only readable if something is already reading during step 2. If I run minicom -D ./ttyclient before step 2, I see the bytes transmitted as expected, but if I start reading afterwards, nothing shows up. The data appears to be discarded.
Is this expected behavior? I'm unsure how to keep the data around so that whatever reads later is presented with those bytes. Alternatively, I'd be happy with some way to know that ttyclient has been opened, so I could send the data on that event.

The issue is that I was using Minicom to read ttyclient. For whatever reason, Minicom won't display cached data like that... If I just use cat, I get my desired behavior.
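For reference, a minimal reproduction of that working behavior (same ttyserver/ttyclient links as above; the written string is just an example):

# Terminal 1: create the linked PTY pair.
/usr/bin/socat -d -d PTY,link=./ttyserver,raw,echo=0 PTY,link=./ttyclient,raw,echo=0
# Terminal 2: write while nothing is reading yet; the bytes sit in the PTY buffer.
printf 'some bytes\n' > ./ttyserver
# Terminal 3, any time later: a plain read drains the buffered bytes.
cat ./ttyclient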

Related

How to isolate pty buffers created with socat and tee from tty?

I'm trying to clone a tty that is connected to a serial device, so that I can have two programs reading the data simultaneously (one program occasionally writes to the serial device as well).
I'm using socat to clone the tty, which works fine:
socat /dev/ttyACM0 SYSTEM:'tee >(socat - "PTY,rawer,link=/tmp/myPTY1") | socat - "PTY,rawer,link=/tmp/myPTY2"'
However, these PTYs are not isolated. When myPTY1 isn't being read, after some time myPTY2 stops receiving new data. Only once I read from myPTY1 (clearing all the buffered data) does myPTY2 receive new data again. I suspect this is about how socat buffers data; it seems like both PTYs are fed from the same buffer.
I would like both PTYs to be completely isolated, so that when one PTY doesn't get read (its buffer fills up), the other isn't stuck but continues receiving new data. Any idea how to do this?
I tried some socat options like nonblock, wait-slave, ctty but none have the desired effect.
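One way to check the shared-buffer suspicion (paths as above; pv just makes the data rate visible):

# Read only myPTY2 and watch its rate; it stalls once myPTY1's buffer fills
# and the tee pipeline blocks.
cat /tmp/myPTY2 | pv > /dev/null &
sleep 60
# Now drain myPTY1; if myPTY2's rate picks back up, both PTYs were indeed
# stuck behind the same blocked pipeline.
cat /tmp/myPTY1 > /dev/null &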

Virtual Serial Port with Socat (Linux) and allow buffer overflow

When writing from a laptop to an external serial device that I am using, it is possible for the laptop to write faster than the program running on the device can clear its serial read buffer, leading to data being lost due to the buffer overflowing.
Is there a way to simulate this behavior using a virtual serial port with socat? For my testing so far, I have used the following command:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
(For the sake of the example, let's say it creates /dev/pts/2 and /dev/pts/3, referred to as A and B respectively.)
Which creates two virtual serial ports that I can send data across. However, if I start writing a large amount of data to A without reading any from B, the writing to A will eventually block until I read from B and make room in its read buffer.
It is unclear to me whether there is some internal feedback from the read buffer of B, or whether the blocking actually happens in the write buffer of A because something inside socat stops the writing when there is no room left in B's read buffer.
Is there any way to avoid this blocking and have it overflow the read buffer of B?
Hopefully my question makes sense. Thank you for the help.
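For what it's worth, the blocking itself is easy to observe (the pts numbers are the example ones from above):

# Flood A; pv's displayed rate drops to zero once the buffer toward B fills
# and the write blocks.
yes | pv > /dev/pts/2
# In a second terminal, draining B lets the writer resume:
cat /dev/pts/3 > /dev/null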

Linux: Read data from serial port with one process and write to it with another

I've encountered a problem using a serial GPS/GNSS device on a Raspberry Pi. The device in question is a u-blox GNSS receiver symlinked to /dev/gps.
I am trying to log the output data from this device while simultaneously sending correction data to it.
To be more specific, I use RTKLIB's (http://www.rtklib.com/) str2str tool to send NTRIP/RTCM correction data to the GNSS receiver in order to get better position estimates using DGNSS/RTK.
The receiver's output data is logged by a Python script based on the GPS daemon (gpsd).
However, I guess the main issue is related to the serial port control.
When I run the writing process (str2str) first and afterwards any reading process (my Python script, gpsd frontends such as cgps, or cat), the reading process outputs data for a few seconds and then freezes. It doesn't matter which tool I use for reading the data.
I found this question: https://superuser.com/questions/488908/sharing-a-serial-port-between-two-processes. Therefore I made sure that the processes have rw access to the device and even tried running them as superuser. Furthermore, I stumbled upon socat and virtual serial ports, but didn't see how to apply them (Virtual Serial Port for Linux).
Is there any way to read data from a serial port with one process and write to it with another? The only solution I know of right now would be to rewrite the reading and writing processes in Python using pySerial, so that only one process accesses the serial device, but that would mean plenty of work.
Finally, I found a solution using a construction somewhat similar to this: https://serverfault.com/questions/453032/socat-to-share-a-serial-link-between-multiple-processes
A first socat instance (A) gets GNSS correction data from a TCP connection, which is piped to socat B. Socat B manages the connection to the serial device and pipes the output data to another socat instance C, which lets other processes such as gpsd connect and get the receiver's output from a TCP port.
In total, this looks like:
socat -d -d -d -u -lpA TCP4:127.0.0.1:10030 - 2>>log.txt |   # A: pull correction data from TCP
socat -d -d -d -t3 -lpB - /dev/gps,raw 2>>log.txt |          # B: sole owner of the serial device
socat -d -d -d -u -lpC - TCP4-LISTEN:10031,forever,reuseaddr,fork 2>>log.txt   # C: serve receiver output over TCP
With only one process managing the serial connection, it doesn't block anymore.
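For completeness, a hedged sketch of the two endpoints around this chain (caster address, mountpoint, and credentials are placeholders; exact stream URLs depend on your RTKLIB and gpsd versions):

# Serve RTCM correction data on TCP port 10030 for socat A to pull from.
str2str -in ntrip://user:pass@caster.example.com:2101/MOUNT -out tcpsvr://:10030 &
# Point gpsd at socat C's listening port instead of the raw device.
gpsd -N tcp://127.0.0.1:10031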

Dropping packets with netcat using a UDP transfer?

I'm working on sending large data files between two Linux computers via a 10 Gigabit Ethernet cable and netcat with a UDP transfer, but seem to be having issues.
After running several tests, I've come to the conclusion that netcat is the issue. I've also tested the UDP transfer using UDT, Tsunami-UDP, and a Python UDT implementation, and none of those had any packet-loss issues.
On the server side, we've been doing:
cat "bigfile.txt" | pv | nc -u IP PORT
then on the client side, we've been doing:
nc -u -l PORT > "outputFile.txt"
A few things that we've noticed:
1) On one of the computers, regardless of whether it's the client or the server, the process just "hangs". That is to say, even once the transfer is complete, Linux doesn't kill the process and move to the next line in the terminal.
2) If we run pipe view on the receiving side as well, the incoming data rate is significantly lower than what the sending side thinks it's sending.
3) Running Wireshark doesn't show any packet loss.
4) Running the system performance monitor in Linux shows that the incoming data rate (for the receiving side) is the same as the outgoing data rate from the sending side. This is in contrast to what pipe view reports (see #2).
We're not sure where the issue is with netcat, and if there is a way around it. Any help/insights would be greatly appreciated.
Also, for what it's worth, using netcat with a TCP transfer works fine. And, I do understand that UDP isn't known for reliability, and that packet loss should be expected, but it's the protocol we must use.
Thanks
It could well be that the sending instance is sending the data too fast for the receiving instance. Note that this can occur even if you see no drops on the receiving NIC (as you seem to be saying), because the loss can occur at the OS level instead: the OS's UDP receive buffers could be overflowing. Run this command:
watch -d "cat /proc/net/snmp | grep -w Udp"
and see whether the RcvbufErrors field is non-zero and/or growing while your file transfer is going on; that counter increments every time a datagram is dropped because a socket's receive buffer was full.
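If that counter is growing, one mitigation sketch is to enlarge the kernel's socket receive buffers (the values below are illustrative, not tuned recommendations):

# Raise the maximum and default receive buffer sizes (about 25 MB here).
sysctl -w net.core.rmem_max=26214400
sysctl -w net.core.rmem_default=26214400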
This answer (How to send only one UDP packet with netcat?) says that nc sends one packet per line. Assuming that's true, this could lead to a significantly higher number of packets than your other transfer mechanisms produce. Presumably, as @Smeeheey suggested, you're running out of receive buffers on the receiving end.
To cause your sending end to exit, you can add -q 1 to the command line (exit 1 second after seeing end of file).
But there's no way for the receiving-end nc to know when the transfer is complete. This is why the other mechanisms are "protocols": they have machinery built in to communicate the bounds of a file. Raw UDP has no concept of end of file.
Tuning the Linux networking stack is a bit complicated, as there are many components involved and you have to figure out where data is being dropped.
If possible, I'd recommend that you start by monitoring packet drops throughout the entire network stack. Once you've done that, you can determine where exactly packets are being dropped and adjust tuning parameters as needed. There are a lot of different files to measure, each with lots of different fields. I wrote a detailed blog post about monitoring and tuning each component of the Linux networking stack from top to bottom. It's difficult to summarize all the information there, but take a look; I think it can help guide you.

Shell Script: How to calculate the number of bytes transmitted on a network from a single network intense application?

Shell Scripting: How to calculate the number of bytes uploaded to a remote node?
1) I have used iptraf; it can capture all the data, but when I try to run the script from a local system using vxargs (Python), it doesn't work.
2) I have also tried iftop, but it doesn't save its output to a file.
3) /proc/net/dev doesn't work either, because it counts some extra packets.
Does tcpdump capture only the bytes/packets of the transmitted payload, or does the measured data include protocol headers?
I would like to count only the data packets from my running network-intensive application.
I don't think this can be done trivially. See http://mutrics.iitis.pl/tracedump for a program that seems to accomplish this; however, it has not been updated since 2012 and only works on 32-bit architectures.
A crude solution could be to run the application through strace and then sum the return values of the relevant syscalls, such as write and sendto.
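A hedged sketch of that approach (./myapp is a placeholder; note that write also counts file I/O, not just network writes):

# Trace the byte-moving syscalls and log each call's return value (bytes).
strace -f -e trace=write,sendto,sendmsg -o app.trace ./myapp
# Sum the byte counts from successful calls; failed calls end in "= -1 ..." and won't match.
awk '/= [0-9]+$/ { sum += $NF } END { print sum, "bytes" }' app.trace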
Maybe not what you want, but if you run the process as a specific user, let's say appuser, you can add a firewall rule that will count the traffic for you.
example:
iptables -I OUTPUT -m owner --uid-owner appuser -j ACCEPT
Check how many packets and bytes were sent out by that user:
iptables -vL OUTPUT | sed -n '2p;/appuser/p'
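Usage sketch (myapp is a placeholder): zero the counters first so the readout reflects a single run.

# Reset the OUTPUT chain's packet/byte counters.
iptables -Z OUTPUT
# Run the application as the counted user.
sudo -u appuser ./myapp
# Read back that user's totals.
iptables -vL OUTPUT | sed -n '2p;/appuser/p'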
