Returning values from a Linux binary

First, let me say that I am very new to writing drivers so I apologize if my terminology is completely off.
I am attempting to write a driver to control an LED and read back some registers on my FPGA development board.
As nicely pointed out here, I understand that I can supply input values to my binary (a C program) by using, in my C program:
int main(int argc, char *argv[]) { /* ... */ }
So, as an example, in my Linux shell if I run:
root@socfpga:~# ./LED_BLINK 182
I can send an input integer of 182 to my binary that is being executed from the Linux shell. In my case this blinks my LED 182 times (this part works). So I know how to give my program inputs, but how do I properly extract outputs from my program?
My main questions are as follows:
1) In general, suppose my binary has variable(s) that I would like to return as outputs; how can I do this? For example, if I want my program to output the value at register 5, I would enter in the shell:
root@socfpga:~# ./LED_BLINK 1 5
The LED will blink once and then my shell will return the value at register 5 (this register is just a variable in the binary program).
Another way to describe what I am after: suppose I have a single register (let's say a 16-bit value) and I would like my program to output that value; how can I properly achieve this? Can I pass a variable to my binary (such as a pointer) and have my program write to that value? Can you provide an example of this or a link to a tutorial?
2) Similar to the above, what if I have a list of registers (say 20 or more 16-bit values)? How could my binary properly output these values (a pointer array, maybe)?
3) I am sure that this process has a technical name. What is this 'technically' called (so that I can make smarter Google search entries)?
Thank you for your help,
James

The most direct way to pass data between programs in a Unix setting is to use the standard input and output streams (stdin and stdout).
By default, the C function printf writes to stdout; you will see the output on the terminal, or you can pass it to another program as input using the pipe operator "|", or redirect it to a file using the ">" operator.
This is explained in more detail on this post: What does it mean to write to stdout in C?
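As a concrete sketch for the question above (the read_register helper below is a hypothetical placeholder, not real FPGA access code), the program can simply print the requested value to stdout:
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical placeholder for however the real driver reads a register
   (e.g. through a mmap'ed FPGA bridge); here it just returns a dummy value. */
static unsigned int read_register(int index)
{
    return 0xABCDu + (unsigned int)index;
}

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <blink_count> <register>\n", argv[0]);
        return 1;
    }
    /* ... blink the LED atoi(argv[1]) times ... */
    int reg = atoi(argv[2]);
    printf("%u\n", read_register(reg));   /* the output value goes to stdout */
    return 0;
}
In the shell the printed value can then be captured with command substitution, e.g. REG5=$(./LED_BLINK 1 5), piped to another program with "|", or redirected to a file with ">". For a list of 20 registers you would print one value per line in a loop and parse the lines on the receiving end. The usual search terms are "standard streams" and "inter-process communication" (IPC); a program's return value from main (its exit status) is limited to the range 0-255, which is why stdout is the better channel for real data.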

Related

Unexpected periodic, non-continuous output for OCaml program

Someone reports that, given a stream of strings on the serial port which is piped to the OCaml program below, the output of the program is not continuous; instead it appears in chunks (of a few tens of lines), as if buffered.
What can be the cause of the non-continuous output?
(The output buffer should be flushed after each new line due to the use of '%!'. So this shouldn't be the cause, right?)
let tp = ref 0
let get_next_entry ic =
  try
    let (ts, pred, v) = Scanf.fscanf ic " #%d %s#(%d)\n" (fun x y z -> (x, y, z)) in
    Printf.printf "at timepoint %d (timestamp %d): %s(%d)\n%!" !tp ts pred v;
    incr tp;
    true
  with End_of_file -> false
let _ =
  while get_next_entry stdin do
    ()
  done
The OCaml version used is 4.05.
It is a threefold problem. From the least likely cause to the most likely:
The glitching output
It is all in the eye of the beholder: how the program output looks depends on the environment in which it is run, i.e., on the program that runs your program and renders its output on a visual device. In other words, it involves a lot of variables that are beyond the context of this program.
With that said, let me explain what flush means for the printf function. The printf facility relies on buffered channels. Each channel is roughly a pair of a buffer and a system-specific file descriptor. When someone (including printf) outputs to a channel, the information first goes into the buffer and remains there until the next portion of information overflows the buffer (i.e., there is no more space in it) or until the flush function is called explicitly. Then the buffer is flushed, which means that the information in the buffer is transferred to the operating system (e.g., using the write system call or library function).
What happens afterward is system dependent. If the file descriptor is associated with a regular file, then you might expect that the information will be passed to it entirely (though the file system has its own hierarchy of caches, so there are caveats there too). If the descriptor is associated with a Unix-style shell process through a pipe, then it will go into the pipe's buffer, be extracted from it by the shell, and be printed using a terminal interface, usually implemented by some terminal emulator. By default shells are line-buffered, so the line should be printed as a whole unless the user of the shell has changed its parameters somehow.
Basically, I hope you get the idea: it is not your program that is actually manipulating the terminal and lighting up pixels on your monitor. Your program just outputs data, and some other program receives this data and draws it on the screen. It is this other program (a terminal, or terminal emulator, e.g., minicom) that is making the output glitchy, not yours. Your program is doing its best to be printed correctly: a full line or nothing.
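For illustration only (this is C, not OCaml, and not part of the original answer), the same layering can be observed with a tiny stdio program: when stdout is a pipe, e.g. ./a.out | cat, removing the fflush call makes all the lines arrive in one chunk at exit, because a piped stdout is fully buffered rather than line-buffered.
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 5; i++) {
        printf("tick %d\n", i);
        fflush(stdout);   /* plays the same role as OCaml's %! */
        sleep(1);
    }
    return 0;
}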
Your program is glitching
And it is. The in_channel is also buffered, so a few bytes will accumulate in it before Scanf ever sees them. Therefore, you cannot just read from the buffered channel and expect a realtime response to it. The most reliable way for you would be to use the Unix module and process the input using your own buffering.
The glitching input
Finally, the program (or device) that feeds your input can also deliver it in chunks. This is especially true for serial interfaces, so make sure that you have correctly set up your terminal interface using the Unix.tcsetattr function. In particular, when your program is blocked on input, the operating system may decide not to wake it up for each arriving character or line. This behavior is controlled by the terminal interface (see canonical and non-canonical modes; if your input doesn't have newlines, you should use the non-canonical mode).
Also, the device itself could be jittery; if you have an oscilloscope nearby you can observe the signals it is sending. And make sure that you have configured your serial port as prescribed in the user manual of your device.
One possibility is that fscanf is waiting until it sees everything it's looking for.

In Turtle, how do I take stdout from a program, process it, and then supply something to stdin?

I am currently playing with format string attacks in C. I have a toy program that prints (to stdout) the address of a variable that I want to access, then accepts a line from stdin and printfs it.
Using Turtle, I'd like to be able to:
execute the program
parse the first few lines of stdout to retrieve the address
using the address, craft a format string for printf (I know how to do this bit)
write the attack string to stdin
However, I can't see how to do this. Using a function like inshell :: Text -> Shell Line -> Shell Line, I can supply some lines to stdin and get back a stream from stdout, but I don't know how to inject new lines into stdin after having read a couple of lines from stdout.
If your goal is to test a program that performs I/O, you can use shelltestrunner (a project written in Haskell); it lets you test I/O scenarios for any project (not necessarily written in Haskell).
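Independently of Turtle, the underlying mechanism is a pair of pipes: one connected to the child's stdin and one to its stdout, so the parent can read some output before deciding what to write. A minimal C sketch of that plumbing (./victim and its output format are hypothetical stand-ins for the toy program in the question):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) == -1 || pipe(from_child) == -1)
        return 1;

    if (fork() == 0) {                      /* child: becomes the target program */
        dup2(to_child[0], 0);               /* its stdin reads from to_child */
        dup2(from_child[1], 1);             /* its stdout writes to from_child */
        close(to_child[1]);
        close(from_child[0]);
        execlp("./victim", "./victim", (char *)NULL);
        _exit(127);
    }
    close(to_child[0]);
    close(from_child[1]);

    char line[256];
    FILE *out = fdopen(from_child[0], "r");
    if (fgets(line, sizeof line, out))      /* read the address line the target prints */
        dprintf(to_child[1], "crafted format string based on: %s", line);
    close(to_child[1]);                     /* EOF on the child's stdin */
    return 0;
}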

What happens when a parent process and a child process read the same file and write to the same output file?

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int fdrd, fdwt;
char c;

void rdwrt(void);

int main(int argc, char *argv[])
{
    if (argc != 3)
        exit(1);
    if ((fdrd = open(argv[1], O_RDONLY)) == -1)
        exit(1);
    if ((fdwt = creat(argv[2], 0666)) == -1)
        exit(1);
    fork();          /* both parent and child continue into rdwrt() */
    rdwrt();
    exit(0);
}

void rdwrt(void)     /* copy the input to the output one byte at a time */
{
    for (;;) {
        if (read(fdrd, &c, 1) != 1)
            return;
        write(fdwt, &c, 1);
    }
}
This program forks a child process; then the parent process and the child process both try to read the same input file and write to the same output file.
Execute the program like this:
[root@localhost] ./a.out input output
where content of input file is:
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
I thought the output file should have the same number of characters as the input file, though the character order would probably differ because of the competition between the two processes.
It turns out that the output file is:
abcdefghijklmnonqbcdefghijklwxyczdefjklpqrstuvwxyz
abcefgklmvwxefgklmnopqrstuvw
qrstuyz
abcdhijxyz
Actually, these two files have different character counts:
[root@localhost] wc -m input output
162 input
98 output
Now I wonder why?
The contents of the output file will be difficult to predict because your program contains a race condition. Specifically, it depends on process scheduling.
Requested update:
This question is actually more interesting than it looked at first glance.
I'm going to make some predictions (tested successfully...)
On Unix-like systems[1] ... then, yes, the number of characters will always be the same, but the order will be difficult to predict.
You tagged your question linux unix, and on those systems, all of which[1] properly implement the fork model, the two processes will share a single file position for both (forked) instances of fdrd, and they will share a second file position for both instances of fdwt.
If you could slow down time and watch the program run, at any point there are things you know and things you don't.
You don't know which child will win the race to do the next read, but you do know which character the winner will read, because they are always at the same file position. After the winner gets that next character, you still don't know who will read the following one, because the race is still on.
In fact, it is possible that the same process will win the race again, and again, because the scheduler probably won't want to run it for a very small time slice.
At any moment you also know that the next character will be written at EOF because, again, shared write position.
Now, you might ask: well then, if both processes are always at the same input and output file positions, how does the file get scrambled?
Well, there is more than one race: one to the read and a second to the write (or one, kinda complicated, race). One process may have read its character but not yet written it when it gets time-sliced. So now it starts losing the race to the write statement, and then probably to several iterations of read/write. So a character can get hung up in one process.
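A minimal C sketch (not the OP's program) of the shared file position: the descriptor is opened before fork, the child reads one byte, and the parent, reading afterwards, gets the second byte because both processes share one offset.
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = open("input", O_RDONLY);      /* assumes a file named "input" exists */
    char c;

    if (fd == -1)
        return 1;
    if (fork() == 0) {                     /* child: read one byte and exit */
        read(fd, &c, 1);
        _exit(0);
    }
    wait(NULL);                            /* make the ordering deterministic */
    read(fd, &c, 1);                       /* parent now gets the *second* byte */
    printf("parent read '%c', offset now %ld\n", c, (long)lseek(fd, 0, SEEK_CUR));
    return 0;
}
With the question's input file this should print parent read 'b', offset now 2; if the two processes had independent offsets it would print 'a' and 1 instead.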
And finally, on merely-API-compatible C environments running over other operating systems, anything could happen. The OP's system appears to be one of these, or perhaps the test was flawed. My OSX system behaves as predicted.
1. "Real" UNIX, *BSD, OSX, or Linux.

Write unformatted (binary) data to stdout

I want to write unformatted (binary) data to STDOUT in a Fortran 90 program. I am using AIX Unix and unfortunately it won't let me open unit 6 as "unformatted". I thought I would try and open /dev/stdout instead under a different unit number, but /dev/stdout does not exist in AIX (although this method worked under Linux).
Basically, I want to pipe my programs output directly into another program, thus avoiding having an intermediate file, a bit like gzip -c does. Is there some other way I can achieve this, considering the two problems I have encountered above?
I would try to convert the data with TRANSFER() to a long character and print it with non-advancing I/O. The problem will be your processor's limit on the record length; if it is too short, you will end up with an unexpected end-of-record marker somewhere. Also, your processor may not write the unprintable characters the way you would like.
i.e., something like
character(len=max_length) :: buffer
buffer = transfer(data,buffer)
write(*,'(a)',advance='no') trim(buffer)
The largest problem I see is the unprintable characters. See also A suprise with non-advancing I/O
---EDIT---
Another possibility: try to use the file /proc/self/fd/1 or /dev/fd/1.
test:
open(11,file='/proc/self/fd/1',access='stream',action='write')
write(11) 11
write(11) 1.1
close(11)
end
This is more of a comment/addition to @VladimirF's answer than a new answer, but I can't add comments yet. You can first inquire about the location of the preconnected I/O units and then open the unformatted connection:
character(1024) :: stdout
inquire(6, name = stdout)
open(11, file = stdout, access = 'stream', action = 'write')
This is probably the most convenient way, but it uses stream access, a Fortran 2003 feature. Without this, you can only use sequential access (which adds header data to each record) or direct access (which does not add headers but requires a fixed record length).

gdb break when program opens specific file

Back story: While running a program under strace I notice that '/dev/urandom' is being open'ed. I would like to know where this call is coming from (it is not part of the program itself, it is part of the system).
So, using gdb, I am trying to break (using catch syscall open) program execution when the open call is issued, so I can see a backtrace. The problem is that open is being called a lot, like several hundred times, so I can't narrow down the specific call that is opening /dev/urandom. How should I go about narrowing down the specific call? Is there a way to filter by arguments, and if so, how do I do it for a syscall?
Any advice would be helpful -- maybe I am going about this all wrong.
GDB is a pretty powerful tool, but has a bit of a learning curve.
Basically, you want to set up a conditional breakpoint.
First use the -i flag to strace, or objdump -d, to find the address of the open function, or more realistically something in the chain of getting there, such as an entry in the PLT.
Set a breakpoint at that address (if you have debug symbols, you can use those instead, omitting the *, but I'm assuming you don't, though you may well have them for library functions if nothing else):
break * 0x080482c8
Next you need to make it conditional
(Ideally you could compare a string argument to a desired string. I wasn't getting this to work within the first few minutes of trying.)
Let's hope we can assume the string is a constant somewhere in the program or one of the libraries it loads. You could look in /proc/pid/maps to get an idea of what is loaded and where, then use grep to verify the string is actually in a file, objdump -s to find its address, and gdb to verify that you've actually found it in memory by combining the high part of the address from maps with the low part from the file. (EDIT: it's probably easier to use ldd on the executable than to look in /proc/pid/maps.)
Next you will need to know something about the ABI of the platform you are working on, specifically how arguments are passed. I've been working on ARM lately, and that's very nice, as the first few arguments just go in registers r0, r1, r2, etc. x86 is a bit less convenient: the arguments go on the stack, i.e., *($esp+4), *($esp+8), *($esp+12).
So let's assume we are on an x86, and we want to check that the first argument in esp+4 equals the address we found for the constant we are trying to catch it passing. Only, esp+4 is a pointer to a char pointer. So we need to dereference it for comparison.
cond 1 *(char **)($esp+4)==0x8048514
Then you can type run and hope for the best
If you catch your breakpoint condition, and looking around with info registers and the x command to examine memory seems right, then you can use the return command to percolate back up the call stack until you find something you recognize.
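If you want a harmless target to practice those steps on, a toy C program like the one below (purely illustrative, not from the question) opens several files, only one of which is /dev/urandom. Building it with gcc -g and setting a condition like the strcmp one quoted at the end of this thread (on x86_64) should stop only on the second call, and bt then shows the caller.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Open a path read-only and close it again, just to generate open calls. */
static void open_one(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd != -1)
        close(fd);
}

int main(void)
{
    open_one("/etc/hostname");
    open_one("/dev/urandom");   /* the call we want the breakpoint to catch */
    open_one("/etc/hosts");
    puts("done");
    return 0;
}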
(Adapted from a question edit)
Following Chris's answer, here is the process that eventually got me what I was looking for:
(I am trying to find what functions are calling the open syscall on "/dev/urandom")
use ldd on executable to find loaded libraries
grep through each lib (shell command) looking for 'urandom'
open library file in hex editor and find address of string
find out how parameters are passed for syscalls (for open, the file is the first parameter; on x86_64 it is passed in rdi -- your mileage may vary)
now we can set the conditional breakpoint: break open if $rdi == _addr_
run program and wait for break to hit
run bt to see backtrace
After all this I find that glib's g_random_int() and g_rand_new() use urandom. Gtk+ and ORBit were calling these functions -- if anybody was curious.
Like Andre Puel said:
break open if strcmp($rdi,"/dev/urandom") == 0
Might do the job.
