How does xterm's -S option (pass pseudo terminal name and descriptor) work in Linux?

Greetings,
while porting old Solaris 2.4 code to CentOS 5.3 I came across an invocation like
/usr/bin/xterm -S%s%d ...
where %s is a two-character digit sequence XX like 00 or 01 and %d is a numeric file descriptor. This was apparently a way to tell xterm to use /dev/ttypXX (a pseudo-terminal slave), but the code does not seem to bother opening the corresponding master; it calls pipe(2) instead and passes the write fd as the %d substitution above. On Solaris, writing to this fd from the spawner causes output to appear in the xterm child. In an strace(1), by the way, I saw no attempt to open anything under /dev.

According to the Solaris manpage, the pipe system call creates a bidirectional pipe, so on Solaris you can use both fds for reading and writing:
The files associated with fildes[0] and fildes[1] are streams and are both
opened for reading and writing.
However, according to the pipe(2) manpage on Linux:
pipe() creates a pipe, a unidirectional data channel that can be used
for interprocess communication.
Note also the following from pipe(7):
On some systems (but not Linux), pipes are bidirectional: data can be
transmitted in both directions between the pipe ends. According to
POSIX.1-2001, pipes only need to be unidirectional. Portable applications
should avoid reliance on bidirectional pipe semantics.
So, on Linux you can't pass pipefd[1], the write end, to xterm, since xterm expects an fd for bidirectional communication. To make it work, you'd have to use openpty() and pass the slave fd down to xterm.
AFAIK, openpty() is not available on Solaris; that seems to be the reason your code doesn't use it.
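A minimal sketch of that openpty() route on Linux. Treat it as an illustration, not a drop-in fix: the exact -S argument syntax differs across xterm versions (old xterms want -SXXn with two pty-name characters plus an fd, newer ones accept a "-S<slave-name>/<fd>" form), so verify against your xterm(1) manpage.

/* Sketch: create a pty pair with openpty() and hand the slave side to
 * xterm via -S, keeping the master for ourselves.  Compile with -lutil
 * for openpty(). */
#include <pty.h>        /* openpty() */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int master, slave;
    char name[64];      /* slave name, e.g. /dev/pts/3 */

    if (openpty(&master, &slave, name, NULL, NULL) < 0) {
        perror("openpty");
        return 1;
    }

    if (fork() == 0) {  /* child: exec xterm with the slave name and fd */
        char opt[80];
        snprintf(opt, sizeof opt, "-S%s/%d", name, slave);
        execlp("xterm", "xterm", opt, (char *) NULL);
        perror("execlp xterm");
        _exit(1);
    }

    /* Parent: whatever we write to the master shows up in the xterm. */
    close(slave);
    const char *msg = "hello from the spawner\r\n";
    write(master, msg, strlen(msg));
    pause();            /* keep the master open while xterm runs */
    return 0;
}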

Related

Unexpected periodic, non-continuous output for OCaml program

Someone reports that, given a stream of strings on the serial port piped into the OCaml program below, the output of the program is not continuous but instead appears in chunks (of a few tens of lines), as if buffered.
What can be the cause of the non-continuous output?
(The output buffer should be flushed after each new line due to the use of '%!'. So this shouldn't be the cause, right?)
let tp = ref 0

let get_next_entry ic =
  try
    let (ts, pred, v) = Scanf.fscanf ic " #%d %s#(%d)\n" (fun x y z -> (x, y, z)) in
    Printf.printf "at timepoint %d (timestamp %d): %s(%d)\n%!" !tp ts pred v;
    incr tp;
    true
  with End_of_file ->
    false

let _ =
  while get_next_entry stdin do
    ()
  done
The OCaml version used is 4.05.
This is a threefold problem. Let's go from the least likely cause to the most likely.
The glitching output
It is all in the eye of the beholder: how the program output looks depends on the environment in which it is run, i.e., on the program that runs your program and renders its output on a visual device. In other words, it involves a lot of variables that are beyond the scope of this program.
With that said, let me explain what flush means for the printf function. The printf facility relies on buffered channels. Each channel is roughly a pair of a buffer and a system-specific file descriptor. When someone (including printf) outputs to a channel, the information first goes into the buffer and remains there until the next portion of information overflows the buffer (i.e., there is no more space in it) or until the flush function is called explicitly. Then the buffer is flushed, which means that the information in it is transferred to the operating system (e.g., using the write system call or library function).
What happens afterward is system dependent. If the file descriptor was associated with a regular file, then you might expect the information to be passed to it entirely (though the file system has its own hierarchy of caches, so there are caveats there too). If the descriptor was associated with a Unix-style shell process through a pipe, then it will go into the pipe's buffer, be extracted from it by the shell, and be printed using a terminal interface, usually implemented by some terminal emulator. By default shells are line-buffered, so the line should be printed as a whole, unless the user of the shell has somehow changed its parameters.
Basically, I hope you get the idea: it is not your program that is actually manipulating the terminal and lighting up pixels on your monitor. Your program just outputs data, and some other program receives this data and draws it on the screen. And it is this other program (a terminal, or terminal emulator, e.g., minicom) that makes the output glitchy, not yours. Your program is doing its best to be printed correctly: full line or nothing.
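To make the buffering point concrete, here is a small C illustration (C rather than OCaml, since the mechanism is the same; OCaml's %! plays the role that fflush plays here). Run it on a terminal and then through a pipe and compare when the lines arrive:

/* Demonstrates why output can appear in chunks: when stdout is not a
 * terminal it is typically fully buffered, so nothing reaches the
 * reader until the buffer fills or an explicit fflush().  Compare:
 *   ./demo            (tty: one line per second)
 *   ./demo | cat      (pipe: everything at the end, unless we flush)
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 5; i++) {
        printf("line %d\n", i);
        /* Uncomment the next line to get per-line output even into a
         * pipe -- this is what OCaml's "%!" does for its channel. */
        /* fflush(stdout); */
        sleep(1);
    }
    return 0;
}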
Your program is glitching
And it is. The in_channel is also buffered, so it will accumulate a few bytes before your scanf ever sees them. Therefore, you can't just read from the buffered channel and expect a real-time response. The most reliable way would be to use the Unix module and process the input with your own buffering.
The glitching input
Finally, the input program can also give you the information in chunks. This is especially true for serial interfaces, so make sure that you have set up your terminal interface correctly using the Unix.tcsetattr function. In particular, when your program is blocked on input, the operating system may decide not to wake it up on each arriving character or line. This behavior is controlled by the terminal interface (see the sections on canonical and non-canonical modes; if your input doesn't have newlines, you should use the non-canonical mode).
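For reference, here is what that setup looks like in C, at the termios level that OCaml's Unix.tcsetattr wraps. The device path is a placeholder and error checking is minimal:

/* Put a serial fd into non-canonical mode so read() returns as soon
 * as a single byte arrives instead of waiting for a full line.
 * /dev/ttyS0 is a placeholder -- use your actual device. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    tio.c_lflag &= ~(ICANON | ECHO);  /* non-canonical, no echo */
    tio.c_cc[VMIN]  = 1;              /* wake up on every byte */
    tio.c_cc[VTIME] = 0;              /* no inter-byte timeout */
    tcsetattr(fd, TCSANOW, &tio);

    char c;
    while (read(fd, &c, 1) == 1)      /* one byte at a time, no line wait */
        putchar(c);
    close(fd);
    return 0;
}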
And the device itself could be acting jittery; if you have an oscilloscope nearby, you can observe the signals it is sending. Also make sure that you have configured your serial port as prescribed in the user manual of your device.
One possibility is that fscanf is waiting until it sees everything it's looking for.

Program based on pipe in Linux

Write a program for this:
One program will open a pipe and write a number to the pipe.
The other program will open the same pipe, read the number, and print it.
Close both pipes.
How can I write a program based on this? If anyone knows it, please help me!
I think what you're looking for is:
echo <number you want to use> (or output from program) | <program you want to pipe into>
For instance:
echo 5 | more
which will simply display:
5
The "|" is your pipe; it redirects output from the left to the right connecting their standard streams, which usually does not include stderr.
Hope that helps.
A pipe is probably the simplest IPC solution under Linux, so in talking about a pipe we are talking about a specific interprocess communication mechanism.
The pipe lives in kernel space and is managed by the kernel itself; it works in one direction only, between a writer and a reader: it is unidirectional.
For more, you should read a good article about pipes and IPC under Linux; you will find a gazillion of them on the internet. For a short example, see the sketch below.
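Here is a minimal sketch of what the question seems to ask for, using the pipe(2) system call with fork(). If the two programs have to be separate executables, a named pipe created with mkfifo(3) and opened by path would take the place of pipe() plus fork():

/* Minimal pipe(2) example: the parent writes a number into the pipe,
 * the forked child reads it back and prints it, and each side closes
 * the ends it no longer needs. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                  /* child: the reader */
        close(fds[1]);               /* not writing */
        int n;
        if (read(fds[0], &n, sizeof n) == sizeof n)
            printf("child read: %d\n", n);
        close(fds[0]);
        _exit(0);
    }

    /* parent: the writer */
    close(fds[0]);                   /* not reading */
    int n = 42;
    write(fds[1], &n, sizeof n);
    close(fds[1]);
    wait(NULL);                      /* reap the child */
    return 0;
}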

Confused about stdin, stdout and stderr?

I am rather confused about the purpose of these three files. If my understanding is correct, stdin is the file into which a program writes its requests to run a task in the process, stdout is the file into which the kernel writes its output and from which the requesting process accesses the information, and stderr is the file into which all exceptions are entered. On opening these files to check whether these actually do occur, I found nothing that seems to suggest so!
What I would want to know is what exactly the purpose of these files is; an absolutely dumbed-down answer with very little tech jargon, please!
Standard input - this is the file handle that your process reads to get information from you.
Standard output - your process writes conventional output to this file handle.
Standard error - your process writes diagnostic output to this file handle.
That's about as dumbed-down as I can make it :-)
Of course, that's mostly by convention. There's nothing stopping you from writing your diagnostic information to standard output if you wish. You can even close the three file handles totally and open your own files for I/O.
When your process starts, it should already have these handles open and it can just read from and/or write to them.
By default, they're probably connected to your terminal device (e.g., /dev/tty) but shells will allow you to set up connections between these handles and specific files and/or devices (or even pipelines to other processes) before your process starts (some of the manipulations possible are rather clever).
An example being:
my_prog <inputfile 2>errorfile | grep XYZ
which will:
create a process for my_prog.
open inputfile as your standard input (file handle 0).
open errorfile as your standard error (file handle 2).
create another process for grep.
attach the standard output of my_prog to the standard input of grep.
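In C terms, the shell arranges all of this with open() and dup2() before exec. A rough sketch of the inputfile/errorfile part (the pipe to grep is left out; my_prog is just the program from the example above):

/* Rough sketch of what a shell does for:  my_prog <inputfile 2>errorfile
 * It opens the files, copies them onto fds 0 and 2 with dup2(), and
 * only then execs the program, which simply inherits the handles. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int in  = open("inputfile", O_RDONLY);
    int err = open("errorfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || err < 0) { perror("open"); return 1; }

    dup2(in, 0);    /* fd 0 (stdin)  now reads from inputfile  */
    dup2(err, 2);   /* fd 2 (stderr) now writes into errorfile */
    close(in);
    close(err);

    execlp("my_prog", "my_prog", (char *) NULL);  /* inherits fds 0,1,2 */
    perror("execlp");
    return 1;
}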
Re your comment:
When I open these files in /dev folder, how come I never get to see the output of a process running?
It's because they're not normal files. While UNIX presents everything as a file in a file system somewhere, that doesn't make it so at the lowest levels. Most files in the /dev hierarchy are either character or block devices, effectively device drivers. They don't have a size, but they do have major and minor device numbers.
When you open them, you're connected to the device driver rather than a physical file, and the device driver is smart enough to know that separate processes should be handled separately.
The same is true for the Linux /proc filesystem. Those aren't real files, just tightly controlled gateways to kernel information.
It would be more correct to say that stdin, stdout, and stderr are "I/O streams" rather than files. As you've noticed, these entities do not live in the filesystem. But the Unix philosophy, as far as I/O is concerned, is "everything is a file". In practice, that really means that you can use the same library functions and interfaces (printf, scanf, read, write, select, etc.) without worrying about whether the I/O stream is connected to a keyboard, a disk file, a socket, a pipe, or some other I/O abstraction.
Most programs need to read input, write output, and log errors, so stdin, stdout, and stderr are predefined for you, as a programming convenience. This is only a convention, and is not enforced by the operating system.
As a complement to the answers above, here is a summary about redirections:
[redirection cheat-sheet graphic]
EDIT: This graphic is not entirely correct.
The first example does not use stdin at all; it passes "hello" as an argument to the echo command.
The graphic also says that 2>&1 has the same effect as &>; however,
ls Documents ABC > dirlist 2>&1
# does not give the same output as
ls Documents ABC > dirlist &>
This is because &> requires a file to redirect to, while 2>&1 simply sends stderr into stdout.
I'm afraid your understanding is completely backwards. :)
Think of "standard in", "standard out", and "standard error" from the program's perspective, not from the kernel's perspective.
When a program needs to print output, it normally prints to "standard out". A program typically prints output to standard out with printf, which prints ONLY to standard out.
When a program needs to print error information (not necessarily exceptions, those are a programming-language construct, imposed at a much higher level), it normally prints to "standard error". It normally does so with fprintf, which accepts a file stream to use when printing. The file stream could be any file opened for writing: standard out, standard error, or any other file that has been opened with fopen or fdopen.
"standard in" is used when the file needs to read input, using fread or fgets, or getchar.
Any of these files can be easily redirected from the shell, like this:
cat /etc/passwd > /tmp/out # redirect cat's standard out to /tmp/out
cat /nonexistant 2> /tmp/err # redirect cat's standard error to /tmp/err
cat < /etc/passwd # redirect cat's standard input to /etc/passwd
Or, the whole enchilada:
cat < /etc/passwd > /tmp/out 2> /tmp/err
There are two important caveats: First, "standard in", "standard out", and "standard error" are just a convention. They are a very strong convention, but it's all just an agreement that it is very nice to be able to run programs like this: grep echo /etc/services | awk '{print $2;}' | sort and have the standard outputs of each program hooked into the standard input of the next program in the pipeline.
Second, I've given the standard ISO C functions for working with file streams (FILE * objects) -- at the kernel level, it is all file descriptors (int indexes into the file table) and much lower-level operations like read and write, which do not do the happy buffering of the ISO C functions. I figured I'd keep it simple and use the easier functions, but I thought you should know the alternatives all the same. :)
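To make that stream-versus-descriptor distinction concrete, here is a short sketch doing the same output both ways (note the ordering caveat in the comment):

/* The same output twice: via the buffered FILE* streams, then via the
 * underlying file descriptors with write(2).  When stdout is a pipe or
 * file (fully buffered), the raw write() line can appear BEFORE the
 * printf() line, because printf's bytes sit in the buffer until exit. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Stream level: buffered, formatted. */
    printf("normal output via stdout\n");
    fprintf(stderr, "diagnostic via stderr\n");

    /* Descriptor level: unbuffered, unformatted. */
    const char *out = "raw bytes straight to fd 1\n";
    write(STDOUT_FILENO, out, strlen(out));
    const char *err = "raw bytes straight to fd 2\n";
    write(STDERR_FILENO, err, strlen(err));
    return 0;
}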
I think people saying stderr should be used only for error messages are misleading.
It should also be used for informative messages that are meant for the user running the command and not for any potential downstream consumers of the data (i.e., if you run a shell pipeline chaining several commands, you do not want informative messages like "getting item 30 of 42424" to appear on stdout, as they will confuse the consumer, but you might still want the user to see them).
See this for historical rationale:
"All programs placed diagnostics on the standard output. This had
always caused trouble when the output was redirected into a file, but
became intolerable when the output was sent to an unsuspecting
process. Nevertheless, unwilling to violate the simplicity of the
standard-input-standard-output model, people tolerated this state of
affairs through v6. Shortly thereafter Dennis Ritchie cut the Gordian
knot by introducing the standard error file. That was not quite enough.
With pipelines diagnostics could come from any of several programs
running simultaneously. Diagnostics needed to identify themselves."
stdin
Reads input through the console (e.g. Keyboard input).
Used in C with scanf
scanf(<formatstring>,<pointer to storage> ...);
stdout
Produces output to the console.
Used in C with printf
printf(<string>, <values to print> ...);
stderr
Produces 'error' output to the console.
Used in C with fprintf
fprintf(stderr, <string>, <values to print> ...);
Redirection
The source for stdin can be redirected. For example, instead of coming from keyboard input, it can come from a file (cat < file.txt) or from another program (ps | grep <userid>).
The destinations for stdout and stderr can also be redirected. For example, stdout can be redirected to a file: ls . > ls-output.txt; in this case the output is written to the file ls-output.txt. Stderr can be redirected with 2>.
Using ps aux reveals current processes, all of which are listed in /proc/ as /proc/(pid)/. The standard streams of each process show up there as file descriptor entries:
/proc/(pid)/fd/0 - standard input
/proc/(pid)/fd/1 - standard output
/proc/(pid)/fd/2 - standard error
But this only worked well for /bin/bash; other processes generally had nothing in 0, but many had errors written in 2.
For authoritative information about these files, check out the man pages by running this command in your terminal:
$ man stdout
But for a simple answer, each file is for:
stdout for stream output
stdin for stream input
stderr for printing errors or log messages.
Each Unix program has each one of those streams.
stderr does not do I/O cache buffering, so if your application needs to print critical message info (errors, exceptions) to the console or to a file, use stderr. stdout, on the other hand, does use I/O cache buffering, so there is a chance that the application closes before your messages are written out, which leaves debugging complex; use stdout for general log info.
A file with associated buffering is called a stream and is declared to be a pointer to a defined type FILE. The fopen() function creates certain descriptive data for a stream and returns a pointer to designate the stream in all further transactions. Normally there are three open streams with constant pointers declared in the header and associated with the standard open files.
At program startup three streams are predefined and need not be opened explicitly: standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). When opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
https://www.mkssoftware.com/docs/man5/stdio.5.asp
Here is a lengthy article on stdin, stdout and stderr:
What Are stdin, stdout, and stderr on Linux?
To summarize:
Streams Are Handled Like Files
Streams in Linux—like almost everything else—are treated as though
they were files. You can read text from a file, and you can write text
into a file. Both of these actions involve a stream of data. So the
concept of handling a stream of data as a file isn’t that much of a
stretch.
Each file associated with a process is allocated a unique number to
identify it. This is known as the file descriptor. Whenever an action
is required to be performed on a file, the file descriptor is used to
identify the file.
These values are always used for stdin, stdout, and stderr:
0: stdin
1: stdout
2: stderr
Ironically, I found this question on Stack Overflow and the article above because I was searching for information on abnormal / non-standard streams. So my search continues.

Interrupt-driven driver using TTY?

I am a newbie to developing drivers for Linux ... I am developing an SMS driver (AT commands over a serial port to a modem), using TTY for accessing the serial port. The driver is written in C.
In the design messages from modem to driver can be triggered by two events:
1) Status as respond to AT commands issued by driver (i.e. expected messages)
2) Indication of new SMS (i.e. unexpected messages)
I am planning on two threads - one for writing to TTY and one for reading from TTY. Is it possible to configure TTY so that my read-thread wakes-up on incoming chars (i.e. read-thread is event-triggered and not based on polling)?
Best Regards,
Witek
I don't think you really want two threads. Typical program flow (write AT command, check response, etc.) will be easier to write and debug in a single-threaded program.
Waiting for chars can be done with the select() call. The tty layer is mostly configured through the tcsetattr, tcgetattr and friends system calls. With these you can configure, for example, whether you want to be woken up on each new line or on each char. See man termios for the manpage. The big choice is whether you want the special characters like EOF, EOL, Ctrl-C etc. to be treated as data (raw mode) or interpreted by the tty layer (canonical mode).
See the part on select in the serial programming guide, or the select manpage, for more info.
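A condensed sketch of that single-threaded select() loop, with stdin standing in for the command source and /dev/ttyS0 as a placeholder device (the termios setup described above is assumed to have been done already):

/* Block in select() until either the modem fd or stdin has data, then
 * read whichever is ready.  This handles both expected AT responses
 * and unsolicited "new SMS" indications without polling or threads. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    int modem = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (modem < 0) { perror("open"); return 1; }

    int nfds = (modem > STDIN_FILENO ? modem : STDIN_FILENO) + 1;
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(modem, &rfds);        /* responses + unsolicited messages */
        FD_SET(STDIN_FILENO, &rfds); /* stand-in for the command source */

        if (select(nfds, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }

        char buf[256];
        ssize_t n;
        if (FD_ISSET(modem, &rfds)) {           /* modem -> us */
            if ((n = read(modem, buf, sizeof buf)) <= 0) break;
            write(STDOUT_FILENO, buf, n);       /* parse AT responses here */
        }
        if (FD_ISSET(STDIN_FILENO, &rfds)) {    /* us -> modem */
            if ((n = read(STDIN_FILENO, buf, sizeof buf)) <= 0) break;
            write(modem, buf, n);               /* e.g. "AT+CMGL\r" */
        }
    }
    close(modem);
    return 0;
}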

How can I monitor data on a serial port in Linux?

I'm debugging communications with a serial device, and I need to see all the data flowing both directions.
It seems like this should be easy on Linux, where the serial port is represented by a file. Is there some way that I can do a sort of "bi-directional tee", where I tell my program to connect to a pipe that copies the data to a file and also shuffles it to/from the actual serial port device?
I think I might even know how to write such a beast, but it seems non-trivial, especially to get all of the ioctls passed through for port configuration, etc.
Has anyone already built such a thing? It seems too useful (for people debugging serial device drivers) not to exist already.
strace is very useful for this. It gives you a view of all ioctl calls, with the corresponding structures decoded. The following options seem particularly useful in your case:
-e read=set
Perform a full hexadecimal and ASCII dump of all the data read from file descriptors listed in the specified set. For example, to see all input activity on file descriptors 3 and 5, use -e read=3,5. Note that this is independent from the normal tracing of the read(2) system call, which is controlled by the option -e trace=read.
-e write=set
Perform a full hexadecimal and ASCII dump of all the data written to file descriptors listed in the specified set. For example, to see all output activity on file descriptors 3 and 5, use -e write=3,5. Note that this is independent from the normal tracing of the write(2) system call, which is controlled by the option -e trace=write.
I have found pyserial to be quite usable, so if you're into Python it shouldn't be too hard to write such a thing.
A simple method would be to write an application which opens the master side of a pty and the tty under test. You would then pass your tty application the slave side of the pty as the 'tty device'. You would have to monitor the pty attributes with tcgetattr() on the pty master and call tcsetattr() on the real tty if the attributes changed. The rest would be a simple select() on both fds, copying data bidirectionally and copying it to a log.
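A skeleton of that monitor in C, with the attribute-mirroring part left out (a real version would poll tcgetattr() as described); the device path and log file name are placeholders:

/* Bidirectional serial tee: create a pty, point the application under
 * test at the slave side, and relay bytes between the pty master and
 * the real serial port, logging everything.  Compile with -lutil. */
#include <fcntl.h>
#include <pty.h>            /* openpty() */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Copy one chunk from 'from' to 'to' and append it to the log. */
static int relay(int from, int to, FILE *log, const char *tag)
{
    char buf[512];
    ssize_t n = read(from, buf, sizeof buf);
    if (n <= 0)
        return -1;          /* EOF or error: stop relaying */
    write(to, buf, n);
    fprintf(log, "%s %zd bytes:\n", tag, n);
    fwrite(buf, 1, n, log);
    fflush(log);
    return 0;
}

int main(void)
{
    int master, slave;
    char name[64];          /* slave name, e.g. /dev/pts/4 */

    int real = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (real < 0 || openpty(&master, &slave, name, NULL, NULL) < 0) {
        perror("setup");
        return 1;
    }
    FILE *log = fopen("serial.log", "w");
    if (!log) { perror("fopen"); return 1; }

    /* Keeping our own slave fd open means the master never sees EOF
     * while we wait for the application to attach. */
    printf("point the application under test at %s\n", name);

    int nfds = (real > master ? real : master) + 1;
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(real, &rfds);
        FD_SET(master, &rfds);
        if (select(nfds, &rfds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(real, &rfds) && relay(real, master, log, "<- device"))
            break;
        if (FD_ISSET(master, &rfds) && relay(master, real, log, "-> device"))
            break;
    }
    fclose(log);
    return 0;
}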
I looked at a lot of serial sniffers. All of them are based on the idea of making a virtual serial port and sniffing data from that port. However, any baud/parity/flow change will break the connection.
So, I wrote my own sniffer :). Most serial ports now are just USB-to-serial converters. My sniffer collects data from USB through debugfs, parses it, and outputs it to the console. Baudrate changes, flow control, line events, and serial errors are recorded as well. The project is in an early stage of development and for now only FTDI is supported.
http://code.google.com/p/uscmon/
Much like @MBR, I was looking into serial sniffers, but the ptys broke the parity check. His sniffer was not helping me either, as I'm using a CP2102 and not an FT232. So I wrote my own sniffer, by following this, and now I have one that can record file I/O on arbitrary files: I called it tracie.
