open command sets RTS and DTR by default - tty

I am connecting to an STM32 through an FTDI adapter from my own application, written on Ubuntu 14.04.1.
I ran into one problem when simply opening the tty like this:
/* Open File Descriptor */
int fd = open( "/dev/ttyUSB0", O_RDWR| O_NONBLOCK | O_NDELAY );
The problem here is that open() asserts DTR and RTS by default.
That means the STM32 resets every time I execute open(), even though I only want to read from the port.
The following code won't do it either:
int fd = open( "/dev/ttyUSB0", O_RDONLY| O_NONBLOCK | O_NDELAY );
Is there any way to set RTS and DTR in advance?
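One workaround that is often suggested (not part of the original question, so treat it as a sketch): clear the modem-control lines with the TIOCMBIC ioctl immediately after open(). It cannot stop open() itself from briefly asserting the lines, but it shows how DTR/RTS can be manipulated from user space. The device path is taken from the question:
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    /* Device path from the question; O_NOCTTY keeps the port from becoming our controlling terminal. */
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Clear DTR and RTS, which open() has just asserted. */
    int lines = TIOCM_DTR | TIOCM_RTS;
    if (ioctl(fd, TIOCMBIC, &lines) < 0)
        perror("ioctl(TIOCMBIC)");

    /* ... configure termios and read from the port here ... */

    close(fd);
    return 0;
}
Whether the brief pulse from open() still resets the board depends on the FTDI driver and the board's reset circuitry.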

Related

Understanding file descriptor duplication in bash

I'm having a hard time understanding something about redirections in bash.
I'll start with what I know:
Each process has file descriptors opened which it can write to/read from. These file descriptors may represent files on disk, terminals, devices, etc.
When we start a terminal running bash, the file descriptors stdin (0), stdout (1), and stderr (2) are open and point to the terminal. Whenever we run a command (a new process), that process inherits the file descriptors of its parent (bash), so by default it prints its stdout and stderr messages to the terminal and reads from the terminal as well.
When we redirect, for example:
$ ls 1>filelist
We're actually changing file descriptor 1 of the ls process to point to the filelist file instead of the terminal. So when ls calls write(1, ...), the data goes to the file.
To sum it up, a redirection changes the file that a given file descriptor refers to, and therefore where the program's reads and writes on that descriptor end up.
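For concreteness (a sketch added here, not part of the original post), this is roughly what the shell's child process does for ls 1>filelist before it executes ls; the fork() step and the shell's error handling are omitted:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Roughly what the shell's child does for: ls 1>filelist */
    int fd = open("filelist", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    dup2(fd, 1);                       /* FD 1 now refers to "filelist"      */
    close(fd);                         /* the extra descriptor is not needed */
    execlp("ls", "ls", (char *)NULL);  /* ls inherits the redirected FD 1    */
    perror("execlp");                  /* reached only if the exec fails     */
    return 1;
}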
Now, let's say I have the following C program:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = 0;

    /* O_CREAT requires a third (mode) argument. */
    fd = open("info.log", O_CREAT | O_RDWR, 0644);
    printf("%d", fd);
    write(fd, "INFO::", 6);
    return 0;
}
This program opens a file info.log, which is referred to by a file descriptor (usually 3).
Indeed, if I now compile this program and run it:
$ ./app
3
It creates the file info.log which contains the "INFO::" text in it.
But here's what I don't get: according to the logic described above, if I now redirect FD 3 to another file:
$ ./app 3> another_file
The text should be written to this other file, but for some reason, it doesn't.
Can someone explain?
Hint: when you run ./app 3> another_file, it'll print "4" instead of "3".
More detailed explanation: when you run ./app 3> another_file in the shell, a series of things happens:
The shell fork()s a subprocess that'll run ./app. The subprocess is basically a clone of its parent process, so at this point it's still running the shell program.
In that subprocess, the shell opens "another_file" on file descriptor #3 for writing.
Then it uses one of the execl() family of calls to execute the ./app binary (with "another_file" still open on FD#3).
The program runs open("info.log", O_CREAT | O_RDWR), which creates "info.log" and opens it on the next available file descriptor. Since FD#3 is already in use, that's FD#4.
The program writes "INFO::" to FD#4, which is "info.log".
Since open() uses a new FD, it's not really affected by any active redirects. And actually, if the program did open something on FD#3, that'd replace the connection to "another_file" with whatever it had opened instead, essentially overriding the redirect.
If the program wanted to use the redirect, it'd have to write to FD#3 without first opening anything on it. This is what's normally done with FD#1 and 2 (standard output and error), and that's why redirecting those works.
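To illustrate that last point (a sketch added here, not part of the original answer): a program that writes to FD 3 without opening anything on it does pick up the redirect. Run as, say, ./app2 3> another_file it writes to another_file; without the redirect the write fails with EBADF. The name app2 is only for illustration:
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Use FD 3 as-is; the shell must have opened it with a 3> redirect. */
    if (write(3, "INFO::", 6) < 0)
        fprintf(stderr, "write to fd 3 failed: %s\n", strerror(errno));
    return 0;
}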

How closing STDOUT affects printf

If someone closes a process's STDOUT in its file descriptor table, like so:
close(STDOUT_FILENO);
and then opens a file to read/write:
int fd = open("myFile", O_RDWR);
and after that uses printf:
printf("hello");
I know it won't go to the screen, but will it be printed in the file? Or does he have to use fprintf or the write system call?
From the man page of open:
The file descriptor returned by a successful call will be the lowest-numbered file descriptor not currently open for the process.
When you close the file descriptor for STDOUT, the subsequent open() call will assign stdout's descriptor number (1) to the new file. printf() simply writes to file descriptor 1; it doesn't matter whether that descriptor still refers to the terminal or not. So a printf in this scenario will send its output to "myFile" (once the stdio buffer is flushed, e.g. when the program exits).
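A minimal sketch of that scenario (added here, not from the original answer); the 0644 mode is my addition so O_CREAT is well defined, and the explicit fflush() makes the buffered output reach the file immediately:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    close(STDOUT_FILENO);               /* free descriptor 1 */

    /* open() returns the lowest free descriptor, which is now 1. */
    int fd = open("myFile", O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return 1;

    printf("hello");                    /* stdio still writes to descriptor 1    */
    fflush(stdout);                     /* push the buffered bytes into "myFile" */
    return 0;
}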

Why is File::FcntlLock's l_type always "F_UNLCK" even if the file is locked?

The Perl subroutine below uses File::FcntlLock to check if a file is locked.
Why does it return 0 and print /tmp/test.pid is unlocked. even if the file is locked?
sub getPidOwningLock {
    my $filename = shift;

    my $fs = new File::FcntlLock;
    $fs->l_type( F_WRLCK );
    $fs->l_whence( SEEK_SET );
    $fs->l_start( 0 );
    $fs->l_len( 0 );

    my $fd;
    if (!open($fd, '+<', $filename)) {
        print "Could not open $filename\n";
        return -1;
    }
    if (!$fs->lock($fd, F_GETLK)) {
        # A method call does not interpolate inside a double-quoted string.
        print "Could not get lock information on $filename, error: " . $fs->error . "\n";
        close($fd);
        return -1;
    }
    close($fd);
    if ($fs->l_type() == F_UNLCK) {
        print "$filename is unlocked.\n";
        return 0;
    }
    return $fs->l_pid();
}
The file is locked as follows (lock.sh):
#!/bin/sh
(
    flock -n 200
    while true; do sleep 1; done
) 200>/tmp/test.pid
The file is indeed locked:
~$ ./lock.sh &
[2] 16803
~$ lsof /tmp/test.pid
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 26002 admin 200w REG 8,5 0 584649 test.pid
sleep 26432 admin 200w REG 8,5 0 584649 test.pid
fcntl and flock locks are invisible to each other.
This is a big problem for your use case because the flock utility that you're using in your shell script depends on flock semantics: the shell script runs a flock child process, which locks an inherited file descriptor and then exits. The shell keeps that file descriptor open (because the redirection is on a whole sequence of commands) until it wants to release the lock.
That plan can't work with fcntl because fcntl locks are not shared among processes. If there were a utility identical to flock but using fcntl, the lock would be released too early (as soon as the child process exits).
For coordination of a file lock between a perl process and a shell script, some options you can consider are:
- port the shell script to zsh and use the zsystem flock builtin from the zsh/system module (note: in the documentation it claims to use fcntl in spite of its name being flock)
- rewrite the shell script in perl
- just use flock in the perl script (give up byte range locking and the "get locker PID" feature - but you can emulate that on Linux by reading /proc/locks)
- write your own fcntl utility in C for use in the shell script (the usage pattern will be different - the shell script will have to background it and then kill it later to unlock - and it will need some way to tell the parent process when it has obtained or failed to obtain the lock, which will be hard because it's happening asynchronously now... maybe use the coprocess feature that some shells have).
- run a small perl script from the shell script to do the locking (it will need the same background treatment that a dedicated fcntl utility would need)
For more information on features of the different kinds of locks, see What is the difference between locking with fcntl and flock.
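The invisibility of the two lock families to each other can be demonstrated with a few lines of C (a sketch added here, not part of the original answer). While lock.sh above is holding its flock lock on /tmp/test.pid, F_GETLK still reports F_UNLCK, whereas a non-blocking flock() attempt fails with EWOULDBLOCK:
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* Path taken from the shell script above. */
    int fd = open("/tmp/test.pid", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Ask whether a conflicting fcntl (POSIX) lock exists on the whole file. */
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    if (fcntl(fd, F_GETLK, &fl) == 0)
        printf("fcntl F_GETLK sees: %s\n",
               fl.l_type == F_UNLCK ? "F_UNLCK (no POSIX lock)" : "a POSIX lock");

    /* A non-blocking flock() attempt, by contrast, fails while lock.sh holds the lock. */
    if (flock(fd, LOCK_EX | LOCK_NB) < 0)
        perror("flock");

    close(fd);
    return 0;
}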

Linux terminal input: reading user input from terminal truncating lines at 4095 character limit

In a bash script, I try to read lines from standard input, using the built-in read command after setting IFS=$'\n'. The lines are truncated at a 4095-character limit when I paste input into the read. This limitation seems to come from reading from the terminal, because the following worked perfectly fine:
fill=
for i in $(seq 1 94); do fill="${fill}x"; done
for i in $(seq 1 100); do printf "%04d00$fill" $i; done | (read line; echo $line)
I experience the same behavior with a Python script (it did not accept input longer than 4095 characters from the terminal, but did from a pipe):
#!/usr/bin/python
from sys import stdin
line = stdin.readline()
print('%s' % line)
Even a C program behaves the same, using read(2):
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    char buf[32768];
    ssize_t sz = read(0, buf, sizeof(buf) - 1);  /* one read from the terminal */

    if (sz < 0)                                  /* guard against a failed read */
        return 1;
    buf[sz] = '\0';
    printf("READ LINE: [%s]\n", buf);
    return 0;
}
In all cases, I cannot enter longer than about 4095 characters. The input prompt stops accepting characters.
Question-1: Is there a way to interactively read from terminal longer than 4095 characters in Linux systems (at least Ubuntu 10.04 and 13.04)?
Question-2: Where does this limitation come from?
Systems affected: I noticed this limitation on Ubuntu 10.04/x86 and 13.04/x86, but Cygwin (a recent version at least) does not truncate even at over 10000 characters (I did not test further, since I need this script to work on Ubuntu). Terminals used: Virtual Console and KDE konsole (Ubuntu 13.04) and gnome-terminal (Ubuntu 10.04).
Please refer to termios(3) manual page, under section "Canonical and noncanonical mode".
Typically, the terminal (standard input) is in canonical mode; in this mode the kernel buffers the input line before handing it to the application. The hard-coded limit for Linux (N_TTY_BUF_SIZE, defined in ${linux_source_path}/include/linux/tty.h) is set to 4096, allowing input of 4095 characters not counting the terminating newline. You can also have a look at the file ${linux_source_path}/drivers/tty/n_tty.c, function n_tty_receive_buf_common(), and the comment above it.
In noncanonical mode there is by default no buffering by the kernel, and the read(2) system call returns as soon as a single character of input is available (a key is pressed). You can manipulate the terminal settings to read a specified number of characters or to set a time-out for noncanonical mode, but even then the hard-coded limit is 4095 per the termios(3) manual page (and the comment above the above-mentioned n_tty_receive_buf_common()).
Bash's read builtin still works in noncanonical mode, as can be demonstrated by the following:
IFS=$'\n' # Allow spaces and other white spaces.
stty -icanon # Disable canonical mode.
read line # Now we can read without inhibitions set by terminal.
stty icanon # Re-enable canonical mode (assuming it was enabled to begin with).
After adding stty -icanon, you can paste a string longer than 4096 characters and read it successfully using the bash built-in read command (I successfully tried more than 10000 characters).
If you put this in a file, i.e. make it a script, you can run it under strace to see the system calls; you will see read(2) called multiple times, each call returning a single character as you type input.
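The same change can be made directly from C with the termios API (a sketch added here, not part of the original answer): clear ICANON, read the pasted input in chunks so the kernel's tty buffer never overflows, then restore the saved settings:
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, raw;
    char buf[32768];
    size_t total = 0;

    tcgetattr(STDIN_FILENO, &saved);        /* remember the current settings  */
    raw = saved;
    raw.c_lflag &= ~ICANON;                 /* leave canonical mode           */
    raw.c_cc[VMIN]  = 1;                    /* read() returns after >= 1 byte */
    raw.c_cc[VTIME] = 0;                    /* no inter-byte timeout          */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    /* Drain the input in chunks until a newline, so nothing is discarded. */
    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf + total, sizeof(buf) - 1 - total);
        if (n <= 0)
            break;
        total += (size_t)n;
        if (buf[total - 1] == '\n' || total >= sizeof(buf) - 1)
            break;
    }
    buf[total] = '\0';
    printf("read %zu characters\n", total);

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);  /* restore canonical mode */
    return 0;
}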
I do not have a workaround for you, but I can answer question 2.
On Linux, PIPE_BUF is set to 4096 (in limits.h); it is the largest write that is guaranteed to be handled atomically by a pipe.
From /usr/include/linux/limits.h:
#ifndef _LINUX_LIMITS_H
#define _LINUX_LIMITS_H
#define NR_OPEN 1024
#define NGROUPS_MAX 65536 /* supplemental group IDs are available */
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
#define LINK_MAX 127 /* # links a file may have */
#define MAX_CANON 255 /* size of the canonical input queue */
#define MAX_INPUT 255 /* size of the type-ahead buffer */
#define NAME_MAX 255 /* # chars in a file name */
#define PATH_MAX 4096 /* # chars in a path name including nul */
#define PIPE_BUF 4096 /* # bytes in atomic write to a pipe */
#define XATTR_NAME_MAX 255 /* # chars in an extended attribute name */
#define XATTR_SIZE_MAX 65536 /* size of an extended attribute value (64k) */
#define XATTR_LIST_MAX 65536 /* size of extended attribute namelist (64k) */
#define RTSIG_MAX 32
#endif
The problem is definitely not read() itself, as it can read up to any valid integer count; the limit has to come from the pipe size or from available memory, as those are the only other possible limiting factors.

Logging RS232 on Linux without waiting for newline

I was trying to log data from RS232 into a file with cat:
cat /dev/ttyS0 > rs232.log
The result was that I had everything in my file except for the last line.
By printing to stdout, I was able to discover that cat only writes the output once it gets a newline character ('\n'). I observed the same with:
dd bs=1 if=/dev/ttyS0 of=rs232.log
After reading How can I print text immediately without waiting for a newline in Perl?, I started to wonder whether this could be a buffering problem in either the Linux kernel or the coreutils package.
Following TJD's comment, I wrote my own program in C but still had the same problem:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[])
{
    int c;                      /* fgetc() returns int so that EOF can be detected */
    FILE* serial;

    if (argc < 2)
        return EXIT_FAILURE;
    serial = fopen(argv[1], "r");
    if (serial == NULL)
        return EXIT_FAILURE;

    while ((c = fgetc(serial)) != EOF)
        printf("%c", c);
    return 0;
}
Judging by the results of my own C code, this seems to be a Linux-kernel-related issue.
You're opening a TTY. When that TTY is in cooked (aka canonical) mode, it performs line processing (e.g. backspace removes the previous character from the buffer). You'll want to put the TTY into raw mode in order to get every single byte when it arrives instead of waiting for the end of line.
From the man page:
Canonical and noncanonical mode
The setting of the ICANON canon flag in c_lflag determines whether the terminal is operating in canonical mode (ICANON set) or noncanonical mode (ICANON unset). By default, ICANON is set.
In canonical mode:
- Input is made available line by line. An input line is available when one of the line delimiters is typed (NL, EOL, EOL2; or EOF at the start of line). Except in the case of EOF, the line delimiter is included in the buffer returned by read(2).
- Line editing is enabled (ERASE, KILL; and if the IEXTEN flag is set: WERASE, REPRINT, LNEXT). A read(2) returns at most one line of input; if the read(2) requested fewer bytes than are available in the current line of input, then only as many bytes as requested are read, and the remaining characters will be available for a future read(2).
In noncanonical mode input is available immediately (without the user having to type a line-delimiter character), and line editing is disabled.
The simplest thing to do is just to call cfmakeraw.
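A sketch of that approach (added here, not part of the original answer), assuming the port from the question, /dev/ttyS0, and leaving baud-rate setup aside:
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    /* Port path from the question; baud rate and parity are left at their current values. */
    int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                    /* raw mode: no line buffering, no echo, no signals */
    tcsetattr(fd, TCSANOW, &tio);

    /* Bytes are now delivered as they arrive; no newline is needed. */
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}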
Does this work?
perl -e 'open(IN, "/dev/ttyS0") || die; while (sysread(IN, $c, 1)) { print "$c" }'
This DOES work:
$ echo -n ccc|perl -e 'while (sysread(STDIN, $c, 1)) { print "$c" } '
ccc$
