Virtual serial port with socat (Linux) that allows buffer overflow

When writing from a laptop to an external serial device, it is possible for the laptop to write faster than the program running on the device can clear its serial read buffer, leading to data being lost due to the read buffer overflowing.
Is there a way to simulate this behavior using a virtual serial port with socat? For my testing so far, I have used the following command:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
(For the sake of the example, let's say it creates /dev/pts/2 and /dev/pts/3, referred to as A and B respectively.)
This creates two virtual serial ports that I can send data across. However, if I start writing a large amount of data to A without reading any from B, the write to A will eventually block until I read from B and make room in its read buffer.
It is unclear to me whether the blocking originates in the read buffer of B or in the write buffer of A, with some internal feedback in the socat setup stalling the write because there is no room in B's read buffer.
Is there any way to avoid this blocking and have it overflow the read buffer of B?
Hopefully my question makes sense. Thank you for the help.
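As far as I can tell, socat itself does not offer a lossy mode for pty pairs; the blocking appears to come from the kernel's pty buffering rather than from socat. One workaround, sketched below under the assumption that you control the writing side, is to open side A non-blocking and discard whatever the kernel refuses to accept, which makes the writer behave like a device whose read buffer silently overflows:

import os

# Hypothetical path: whichever /dev/pts/N socat reported for side A.
PORT_A = "/dev/pts/2"

fd = os.open(PORT_A, os.O_WRONLY | os.O_NONBLOCK | os.O_NOCTTY)
dropped = 0
for _ in range(100_000):
    chunk = b"x" * 64
    try:
        os.write(fd, chunk)
    except BlockingIOError:
        # The buffer toward B is full; a real overflowing device
        # would lose these bytes, so we drop them here instead.
        dropped += len(chunk)
print("bytes dropped:", dropped)
os.close(fd)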

Related

Sending to a serial device via socat only shows up if I'm actively reading -- I need the data to be cached

I have socat creating two serial ports, ttyserver and ttyclient: ttyclient will be opened by an application, and I have a script listening on ttyserver. The sequence of events I need is:
1. My socat creates the two via /usr/bin/socat -d -d PTY,link=./ttyserver,raw,echo=0 PTY,link=./ttyclient,raw,echo=0
2. My script writes a few bytes to ttyserver (to be read by ttyclient).
3. At some later point in time, the application runs, reads the bytes from ttyclient, and goes to work.
The problem I am having is that the bytes are only readable if something is reading during step 2. If I run minicom -D ./ttyclient before step 2, I see the bytes being transmitted as expected, but running it afterwards shows nothing. The data appears to be discarded.
Is this expected behavior? I'm unsure of how to keep the data around so that whatever reads it is presented with those bytes. Alternatively, I'd be happy if I had some way to know ttyclient has been opened so I could send data on that event.
The issue is that I was using Minicom to read ttyclient. For whatever reason Minicom won't display cached data like that... If I just use cat I get my desired behavior.
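For reference, a minimal stand-in for cat (a sketch using the paths from the question) shows the buffered bytes being delivered on a later open:

import os

# ./ttyclient is the symlink created by the socat command above.
fd = os.open("./ttyclient", os.O_RDONLY | os.O_NOCTTY)
# This read returns the bytes the script wrote to ttyserver earlier,
# even though nothing was listening at the time they were sent.
print(os.read(fd, 4096))
os.close(fd)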

Linux: Read data from serial port with one process and write to it with another

I've encountered a problem using a serial GPS/GNSS device on a Raspberry Pi. The device in question is a u-blox GNSS receiver symlinked to /dev/gps.
I am trying to log the output data from this device while simultaneously sending correction data to it.
To be more specific, I use RTKLIB's (http://www.rtklib.com/) str2str tool for sending NTRIP/RTCM correction data to the GNSS receiver in order to get better position estimates using DGNSS/RTK.
The receiver's output data is logged by a Python script based on the GPS daemon (gpsd).
However, I guess the main issue is related to the serial port control.
When I run the writing process (str2str) first and afterwards any reading process (my Python script, gpsd frontends such as cgps, or cat), the reading process will output data for a few seconds and then freeze. It doesn't matter which tool I use for reading the data.
I found this question: https://superuser.com/questions/488908/sharing-a-serial-port-between-two-processes. Therefore I made sure that the processes got rw access to the device and even tried running them as superuser. Furthermore I stumbled upon socat and virtual serial ports, but didn't find any use for it. (Virtual Serial Port for Linux)
Is there any way to read data from a serial port with one process and write to it with another? The only solution I knew of was to rewrite the read and write processes in Python using pySerial, so that only one process accesses the serial device, but that would mean plenty of work.
Finally I found a solution using a construction somewhat similar to this: https://serverfault.com/questions/453032/socat-to-share-a-serial-link-between-multiple-processes
A first socat instance (A) gets GNSS correction data from a TCP connection, which is piped to socat B. Socat B manages the connection to the serial device and pipes output data to another socat instance C, which allows other processes such as gpsd to connect and get the receiver's output from TCP port.
In total, this looks like:
socat -d -d -d -u -lpA TCP4:127.0.0.1:10030 - 2>>log.txt |
socat -d -d -d -t3 -lpB - /dev/gps,raw 2>>log.txt |
socat -d -d -d -u -lpC - TCP4-LISTEN:10031,forever,reuseaddr,fork 2>>log.txt
With only one process managing the serial connection, it doesn't block anymore.
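To make the data flow concrete, here is a hedged sketch of how processes would attach to the two TCP ends of that chain (the ports are the ones from the commands above; the payload is a placeholder). Note that socat A connects out to 10030, so the correction source must already be listening there, while socat C listens on 10031 for any number of consumers:

import socket

# Correction source: socat A connects to 127.0.0.1:10030, so listen there.
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 10030))
srv.listen(1)
conn, _ = srv.accept()
conn.sendall(b"<RTCM correction bytes>")  # placeholder payload

# Consumer: connect to socat C on 10031 and read the receiver output.
out = socket.create_connection(("127.0.0.1", 10031))
print(out.recv(4096))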

How do I pipeline two programs that share data through a serial port in Ubuntu Linux?

I have two programs: prog1 writes data to a serial port, and prog2 reads data from a serial port.
How do I run these programs simultaneously, pipeline-style, on the same PC (loopback)?
As far as I know, prog1 | prog2 will connect the two programs, but through stdin and stdout.
I want the output that prog1 sends to the serial port to be received as input by prog2.
Your explanation:
prog1 generates binary data and sends it to the RS232C (USB-serial) port.
prog2 receives the binary data from the RS232C (USB-serial) port.
It sounds to me that the output of prog1 is already connected to the input of prog2, without using a pipe...
By the way, a pipe is used for inter-process communication through system memory. The pipe() system call returns two file descriptors for a single channel: one for the read end and one for the write end. You can then communicate data between two processes by using these descriptors. Please refer to the pipe(2) man page for details.
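As a two-line illustration, here is Python's binding of the same call:

import os

r, w = os.pipe()        # one channel: a read end and a write end
os.write(w, b"ping")
print(os.read(r, 4))    # prints b'ping'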
To do loopback over a serial port, I think you cannot use a pipe; you have to use a descriptor obtained by opening the serial port device file (/dev/ttyXX) instead of one generated by pipe().
There are various ways to achieve loopback:
1. Physically connect the RS232C TX and RX signals. In this case, just open the serial port device file (/dev/ttyXX) and use read() and write() to exchange the data.
2. Use a hardware-dependent loopback function. Some hardware supports a loopback mode for test purposes; you might be able to enable it via ioctl(TIOCMBIS) with the TIOCM_LOOP flag set. Refer to tty_ioctl(4).
3. Use the echo function of stty. Refer to the stty man page.
Hope this helps.
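If no hardware is at hand, a pseudo-terminal pair gives you a software loopback to develop against; here is a minimal sketch (independent of the original prog1/prog2):

import os
import pty

# A pty pair behaves like TX wired to RX: bytes written to the
# master appear on the slave.
master, slave = pty.openpty()
print("slave device:", os.ttyname(slave))  # e.g. /dev/pts/N

os.write(master, b"hello over the loop\n")  # what prog1 would send
print(os.read(slave, 64))                   # what prog2 would receive

For two separate programs, the socat invocation from the first question serves the same purpose and conveniently gives you two named devices.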

Dropping packets with netcat using a UDP transfer?

I'm working on sending large data files between two Linux computers via a 10 Gigabit Ethernet cable and netcat with a UDP transfer, but seem to be having issues.
After running several tests, I've come to the conclusion that netcat is the issue. I've tested the UDP transfer using UDT, Tsunami-UDP, and a Python UDT transfer as well, none of which had any packet loss issues.
On the server side, we've been doing:
cat "bigfile.txt" | pv | nc -u IP PORT
then on the client side, we've been doing:
nc -u -l PORT > "outputFile.txt"
A few things that we've noticed:
1. On one of the computers, regardless of whether it's the client or the server, it just "hangs". That is to say, even once the transfer is complete, Linux doesn't kill the process and move to the next line in the terminal.
2. If we run pipe view on the receiving side as well, the incoming data rate is significantly lower than what the sending side thinks it's sending.
3. Running Wireshark doesn't show any packet loss.
4. Running the system performance monitor in Linux shows that the incoming data rate (for the receiving side) is the same as the outgoing data rate from the sending side. This is in contrast to what pipe view thinks (see #2).
We're not sure where the issue is with netcat, and if there is a way around it. Any help/insights would be greatly appreciated.
Also, for what it's worth, using netcat with a TCP transfer works fine. And, I do understand that UDP isn't known for reliability, and that packet loss should be expected, but it's the protocol we must use.
Thanks
It could well be that the sending instance is sending the data too fast for the receiving instance. Note that this can occur even if you see no drops on the receiving NIC (as you seem to be saying), because the loss can occur at OS level instead. Your OS could have its UDP buffers overflowing. Run this command:
watch -d "cat /proc/net/snmp | grep -w Udp"
to see if your RcvbufErrors field is non-zero and/or growing while your file transfer is going on.
This answer (How to send only one UDP packet with netcat?) says that nc sends one packet per line. Assuming that's true, this could lead to a significantly higher number of packets than your other transfer mechanisms. Presumably, as Smeeheey suggested, you're running out of receive buffers on the receiving end.
To cause your sending end to exit, you can add -q 1 to the command line (exit 1 second after seeing end of file).
But there's no way for the receiving-end nc to know when the transfer is complete. This is why the other mechanisms are "protocols": they have mechanisms built into them to communicate the bounds of a file. Raw UDP has no concept of end of file.
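Both problems can be worked around on the receiving side if you can replace nc there. The sketch below (the port number and file name are just examples) enlarges the socket receive buffer, which is still capped by the net.core.rmem_max sysctl, and treats an empty datagram as an end-of-transfer marker, a convention the sender would have to follow since raw UDP provides none:

import socket

PORT = 5000  # example; match whatever port you pass to nc

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a large receive buffer so bursts don't overflow it; the
# kernel silently caps the value at net.core.rmem_max.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
s.bind(("0.0.0.0", PORT))

with open("outputFile.txt", "wb") as f:
    while True:
        data, _ = s.recvfrom(65535)
        if not data:  # empty datagram: sender's end-of-transfer marker
            break
        f.write(data)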
Tuning the Linux networking stack is a bit complicated, as there are many components to tune to figure out where data is being dropped.
If possible/feasible, I'd recommend that you start by monitoring packet drops throughout the entire network stack. Once you've done that, you can determine where exactly packets are being dropped and then adjust tuning parameters as needed. There are a lot of different files to measure with lots of different fields. I wrote a detailed blog post about monitoring and tuning each component of the Linux networking stack from top to bottom. It's a bit difficult to summarize all the information there, but take a look, I think it can help guide you.

How can I monitor data on a serial port in Linux?

I'm debugging communications with a serial device, and I need to see all the data flowing both directions.
It seems like this should be easy on Linux, where the serial port is represented by a file. Is there some way that I can do a sort of "bi-directional tee", where I tell my program to connect to a pipe that copies the data to a file and also shuffles it to/from the actual serial port device?
I think I might even know how to write such a beast, but it seems non-trivial, especially to get all of the ioctls passed through for port configuration, etc.
Has anyone already built such a thing? It seems too useful (for people debugging serial device drivers) not to exist already.
strace is very useful for this. It gives you a view of all ioctl calls, with the corresponding structures decoded. The following options seem particularly useful in your case:
-e read=set
Perform a full hexadecimal and ASCII dump of all the data read from file descriptors listed in the specified set. For example, to see all input activity on file descriptors 3 and 5, use -e read=3,5. Note that this is independent from the normal tracing of the read(2) system call, which is controlled by the option -e trace=read.
-e write=set
Perform a full hexadecimal and ASCII dump of all the data written to file descriptors listed in the specified set. For example, to see all output activity on file descriptors 3 and 5, use -e write=3,5. Note that this is independent from the normal tracing of the write(2) system call, which is controlled by the option -e trace=write.
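Putting it together: if your program talks to the serial port on file descriptor 3 (check with ls -l /proc/PID/fd), a capture session might look like this, with the PID and fd numbers as examples:
strace -p 1234 -e trace=read,write -e read=3 -e write=3 -o serial.dump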
I have found pyserial to be quite usable, so if you're into Python it shouldn't be too hard to write such a thing.
A simple method would be to write an application which opens the master side of a pty and the tty under test. You would then pass your tty application the slave side of the pty as the 'tty device'. You would have to monitor the pty attributes with tcgetattr() on the pty master and call tcsetattr() on the real tty if the attributes changed. The rest would be a simple select() on both fds, copying data bi-directionally and copying it to a log.
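A rough sketch of that approach in Python follows; the device path and log name are placeholders, the tcgetattr()/tcsetattr() attribute forwarding described above is omitted for brevity, and a real tool would also put the pty into raw mode:

import os
import pty
import select

REAL_TTY = "/dev/ttyUSB0"  # placeholder for the real serial device
master, slave = pty.openpty()
print("point your application at:", os.ttyname(slave))

real = os.open(REAL_TTY, os.O_RDWR | os.O_NOCTTY)
log = open("serial.log", "wb")

# Copy bytes in both directions, logging each chunk with a
# direction marker.
while True:
    ready, _, _ = select.select([master, real], [], [])
    if master in ready:
        data = os.read(master, 4096)   # application -> device
        os.write(real, data)
        log.write(b"> " + data)
    if real in ready:
        data = os.read(real, 4096)     # device -> application
        os.write(master, data)
        log.write(b"< " + data)
    log.flush()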
I looked at a lot of serial sniffers. All of them are based on the idea of making a virtual serial port and sniffing data from that port. However, any baud/parity/flow-control change will break the connection.
So, I wrote my own sniffer :). Most serial ports now are just USB-to-serial converters. My sniffer collects data from USB through debugfs, parses it, and outputs it to the console. Any baud rate changes, flow control, line events, and serial errors are also recorded. The project is in an early stage of development and for now only FTDI is supported.
http://code.google.com/p/uscmon/
Much like MBR above, I was looking into serial sniffers, but the ptys broke the parity check. His sniffer didn't help me either, as I'm using a CP2102 and not an FT232. So I wrote my own sniffer by following this, and now I have one that can record file I/O on arbitrary files: I called it tracie.
