Execve problems when reading input from pipe - linux

I wrote a simple C program to execute another program using execve.
exec.c:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    char path[128];
    scanf("%s", path);
    char* args[] = {path, NULL};
    char* env[] = {NULL};
    execve(path, args, env);
    printf("error\n");
    return 0;
}
I compiled it:
gcc exec.c -o exec
After running it and typing "/bin/sh", it successfully ran the shell and displayed the $ prompt like a normal shell, as can be seen in the picture.
Then I did the following: I created a server using nc -l 12345 and ran nc localhost 12345 | ./exec. It worked, but for a reason I can't understand, the $ sign was not displayed this time. I couldn't figure out why. (demonstration images attached)
Now, here is the weirdest thing.
When I try to pass the program path AND more input through the pipe at once, the executed process seems to just ignore the input and exit.
For example:
But if I run the following, it works exactly the same way it did when I piped the nc output:
So, to conclude my questions:
I don't understand why the executed shell doesn't print the $ prompt when it reads input from a pipe instead of from the terminal.
Why won't the executed program read input from the pipe when the input is already there, rather than still arriving? It seems to work only in cases where the pipe remains open after the command is executed.

As AlexP already mentioned, the prompt is only displayed when input comes from a terminal.
The second question was trickier: when you call the libc function scanf, its implementation will not only consume /bin/sh from the pipe, but also read ahead and store the next input (ls) in its internal buffers. Those internal buffers are destroyed by execve, so the shell gets nothing.
Here is your program without scanf, to verify this:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    char path[128];
    read(0, path, 8);   /* consume exactly "/bin/sh\n" (8 bytes), nothing more */
    path[7] = '\0';     /* replace the newline with a terminator */
    char* args[] = {path, NULL};
    char* env[] = {NULL};
    execve(path, args, env);
    printf("error\n");
    return 0;
}
Why did the example with cat work in the first place?
That's (probably) because of buffering also. Try:
(echo /bin/sh; echo ls) | stdbuf -i0 ./exec
I recommend this nice article about buffering for further reading.
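Another way to avoid the read-ahead problem entirely (a minimal sketch, not from the original answer) is to read the path one byte at a time up to the newline, so nothing beyond the first line is consumed before execve and the rest of the piped input stays in the pipe for the new program:
#include <unistd.h>
#include <stdio.h>

int main(void) {
    char path[128];
    size_t i = 0;
    char c;

    /* read the first line byte by byte; stop at the newline so that any
       further piped input is left unread for the program we exec */
    while (i < sizeof(path) - 1 && read(0, &c, 1) == 1 && c != '\n')
        path[i++] = c;
    path[i] = '\0';

    char *args[] = { path, NULL };
    char *env[]  = { NULL };
    execve(path, args, env);
    perror("execve");   /* only reached if execve fails */
    return 1;
}
With this version, (echo /bin/sh; echo ls) | ./exec should reach the shell with ls still waiting in the pipe.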

Related

Problem using 2-level pipe when the first program doesn't exit in BASH

Consider this :
xinput --test 11 | grep "button press 1"
*11 is my optical mouse's device index (it could be anything else) and "button press 1" means a left click.
When I click somewhere on the screen, it shows me this:
button press 1
No problem so far. But when I wanted to use that output as the input to my C program, I noticed that the program's stdin after another level of pipe is always empty:
xinput --test 11 | grep "button press 1" | ./poll_test
Here's my poll_test code:
/* This is just a program to test polling functionality on stdin.
   I've used other programs instead of this as well.
   None of them were able to read stdin. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/poll.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char buf[1024];
    int bytes_read;
    struct pollfd pfds[1];

    while (1) {
        pfds[0].fd = 0;              /* watch stdin */
        pfds[0].events = POLLIN;
        poll(pfds, 1, -1);           /* block until data or EOF */
        if (pfds[0].revents & POLLIN) {
            bytes_read = read(0, buf, 1024);
            if (bytes_read <= 0) {   /* 0 = EOF, -1 = error */
                printf("stdin closed\n");
                return 0;
            }
            write(1, buf, bytes_read);   /* echo what was read */
        }
    }
}
It prints nothing despite the clicks.
That is confusing. This is not normal behavior. When I run this, for example:
ls | grep a | grep b
It shows me the results successfully. The only difference is that ls here exits after printing to stdout, but that's not the case in the xinput version.
I spent a lot of time writing a script to play a beep on a mouse click, but it didn't work. So I wanted to use a C program, because there's no polling functionality in bash.
As far as I know, pipes in bash work something like this:
The second program (the right one in the pipe statement) runs until it wants to READ from its stdin, and then it stops until there's something to read from stdin.
With that in mind, the third program in the command I posted should be able to read the output, as is the case when the first program exits.
The next step would be to use libxcb directly instead of the xinput command if the pipe approach doesn't work.
I'm totally confused. Any help would be much appreciated.
EDIT: I also tried using an intermediate file descriptor:
exec 3<&1
xinput --test 11 | grep -i "button press 1" >&3 | ./poll_test 3>&1
but that didn't help. And forcibly flushing stdout doesn't work either:
xinput --test 11 | grep -i "button press 1" ; stdbuf -oL | ./poll_test
It seems grep changes its buffering behaviour depending on whether its output is a terminal or not. I don't know exactly why this happens, but --line-buffered forces it to use line buffering (flushing each matching line as soon as it ends):
xinput --test 11 | grep "button press 1" --line-buffered | ./poll_test
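The underlying mechanism is stdio's default buffering: stdout is line-buffered when it is a terminal but fully buffered when it is a pipe, so grep's matches can sit in its buffer instead of reaching poll_test. A minimal sketch that makes the effect visible (the message texts, the sleep, and the program name ./demo are just for illustration):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* stdio line-buffers stdout on a terminal, but block-buffers it when
     * stdout is a pipe or a file, so this line may not appear downstream
     * until the buffer fills or the program exits */
    printf("stdout is %sa terminal\n", isatty(STDOUT_FILENO) ? "" : "not ");
    sleep(5);      /* while sleeping, a piped reader still sees nothing */
    return 0;      /* the buffer is flushed here, at exit */
}
Run directly, the message appears immediately; run as ./demo | cat, it only shows up after the sleep, when the process exits and flushes its buffer.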

How to use linux wc command with char array?

Hi~ I'm making a sample program that implements a pipe command.
In this program, I'm trying to implement the command "cat somefile.txt | wc".
So I called fork() twice. I used the first child process to send the results of "cat somefile.txt" to fd[1].
After that, the second child process reads the result from fd[0] into the text array. (I confirmed that it successfully reads and stores the data in the text array.)
Lastly, what I have to do is call execl to run the wc command with the text array as its argument. But as you know, wc expects a filename, so of course the final output is not what I wanted. So I'm in trouble now.
I searched for execl and wc, but I couldn't find anything saying the wc command can be used with a char array.
Do you have any ideas to solve this?
Here's code..
#include <unistd.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char text[80];

void read_to_nl(char *buf);  /* defined but not posted here */

int main(int argc, char *argv[]) {
    int fd[2];
    if (pipe(fd) == -1) {
        perror(argv[0]);
        exit(1);
    }
    if (fork() == 0) {  /* execute cat somefile.txt */
        dup2(fd[1], 1);
        close(fd[0]); close(fd[1]);
        execl("/bin/cat", "cat", "somefile.txt", (char *)0);
        exit(127);
    }
    if (fork() == 0) {  /* execute wc and get data from cat somefile.txt */
        dup2(fd[0], 0);
        close(fd[0]); close(fd[1]);
        read_to_nl(text);  /* I confirmed this successfully reads from fd[0] into the text array */
        execl("/usr/bin/wc", "wc", text, (char *)0);  /* how to set the arguments to complete "cat somefile.txt | wc"? */
        exit(127);
    }
    close(fd[0]); close(fd[1]);
    while (wait((int *)0) != -1)
        ;
    return 0;
}
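For reference, here is a minimal sketch of the usual wiring (an illustration, not code from the question): wc counts its standard input when it is given no file operand, so the second child can leave its stdin connected to the read end of the pipe and exec wc without any filename, instead of reading the data into the text array first.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                 /* child 1: cat somefile.txt */
        dup2(fd[1], 1);                /* stdout -> write end of the pipe */
        close(fd[0]); close(fd[1]);
        execl("/bin/cat", "cat", "somefile.txt", (char *)0);
        exit(127);
    }
    if (fork() == 0) {                 /* child 2: wc */
        dup2(fd[0], 0);                /* stdin <- read end of the pipe */
        close(fd[0]); close(fd[1]);
        execl("/usr/bin/wc", "wc", (char *)0);   /* no filename: wc reads stdin */
        exit(127);
    }
    close(fd[0]); close(fd[1]);
    while (wait((int *)0) != -1)
        ;
    return 0;
}
This produces the same output as running cat somefile.txt | wc in the shell.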

Get the content of the command line with an external program

I would like to write a small program which will analyze my current input on the command line and generate some suggestions, like search engines do.
The problem is: how can an external program get the content of the command line? For example:
# an external program is started and gets passed the PID of the shell below.
# the user types something in the shell like this...
<PROMPT> $ echo "grab this command"
# the external program now gets 'echo "grab this command"'
# and ideally this could be done in real time.
Moreover, can I just modify the content of the current command line?
EDIT
bash uses libreadline to manage the command line, but I still cannot imagine how to make use of this.
You could write your own shell wrapper in C. Open bash in a process using popen, and use fgetc and fputc to pass data to the process and to the output file.
A quick and dirty hack could look like this (bash isn't started in interactive mode, so there is no prompt, but otherwise it should work fine):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

pid_t pid;

void kill_ch(int sig) {
    kill(pid, SIGKILL);
}

int main(int argc, char** argv) {
    int b;
    FILE *cmd = NULL;
    FILE *log = NULL;

    signal(SIGALRM, (void (*)(int))kill_ch);

    /* note: a bidirectional "r+" popen is not portable; many C libraries only accept "r" or "w" */
    cmd = popen("/bin/bash -s", "r+");
    if (cmd == NULL) {
        fprintf(stderr, "Error: Failed to open process");
        return EXIT_FAILURE;
    }
    setvbuf(cmd, NULL, _IOLBF, 0);

    log = fopen("out.txt", "a");
    if (log == NULL) {
        fprintf(stderr, "Error: Failed to open logfile");
        return EXIT_FAILURE;
    }
    setvbuf(log, NULL, _IONBF, 0);

    pid = fork();
    if (pid != 0)
        goto EXEC_WRITE;
    else
        goto EXEC_READ;

EXEC_READ:
    while (1) {
        b = fgetc(stdin);
        if (b != EOF) {
            fputc((char) b, cmd);   /* forward keystrokes to bash */
            fputc((char) b, log);   /* and log them */
        }
    }

EXEC_WRITE:
    while (1) {
        b = fgetc(cmd);
        if (b == EOF) {
            return EXIT_SUCCESS;
        }
        fputc(b, stdout);           /* echo bash's output */
        fputc(b, log);              /* and log it */
    }
    return EXIT_SUCCESS;
}
I might not fully understand your question but I think you'd basically have two options.
The first option would be to explicitly call your "magic" program by prefixing your call with it like so
<PROMPT> $ magic echo "grab this command"
(magic analyzes $* and says...)
Your input would print "grab this command" to stdout
<PROMPT> $
In this case the arguments to "magic" would be handled as positional parameters ($*, $1 ...)
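An illustrative sketch of such a prefix command (the name magic is taken from the example above; the "analysis" here just prints the arguments before handing control to the real command):
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: magic command [args...]\n");
        return 1;
    }
    /* "analysis" step: here we just show what would be examined */
    fprintf(stderr, "magic saw:");
    for (int i = 1; i < argc; i++)
        fprintf(stderr, " %s", argv[i]);
    fprintf(stderr, "\n");

    execvp(argv[1], &argv[1]);     /* hand control to the real command */
    perror("execvp");
    return 127;
}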
The second option would be to wrap an interpreter-like something around your typing. E.g. the Python interpreter does so if called without arguments. You start the interpreter, which will basically read anything you type (stdin) in an endless loop, interpret it, and produce some output (typically on stdout).
<PROMPT> $ magic
<MAGIC_PROMPT> $ echo "grab this command"
(your magic interpreter processes the input and says...)
Your input would print "grab this command" to stdout
<MAGIC_PROMPT> $
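A rough sketch of such an interpreter-style wrapper (purely illustrative: the "analysis" step just echoes the line, and each line is then handed to a shell via system()):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *line = NULL;
    size_t cap = 0;

    for (;;) {
        fputs("<MAGIC_PROMPT> $ ", stdout);
        fflush(stdout);
        if (getline(&line, &cap, stdin) == -1)
            break;                              /* EOF ends the loop */
        line[strcspn(line, "\n")] = '\0';       /* strip the trailing newline */

        /* "analysis" step: real suggestion logic would go here */
        printf("(magic saw: %s)\n", line);

        if (system(line) == -1)                 /* then run the line as a command */
            perror("system");
    }
    free(line);
    return 0;
}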

how to define script interpreter with shebang

It is clear that one can use the
#!/usr/bin/perl
shebang notation in the very first line of a script to define the interpreter. However, this presupposes an interpreter that treats lines starting with a hash mark as comments. How can one use an interpreter that does not have this feature?
With a wrapper that removes the first line and calls the real interpreter with the remainder of the file. It could look like this:
#!/bin/sh
# set your "real" interpreter here, or use cat for debugging
REALINTERP="cat"
tail -n +2 $1 | $REALINTERP
Other than that: In some cases ignoring the error message about that first line could be an option.
Last resort: code support for the comment char of your interpreter into the kernel.
The first line is interpreted by the operating system: the interpreter is started, and the name of the script is handed to it as its first parameter.
The following script, 'first.myint', calls the interpreter 'myinterpreter', which is the executable built from the C program below.
#!/usr/local/bin/myinterpreter
% 1 #########
2 xxxxxxxxxxx
333
444
% the last comment
A sketch of the personal interpreter:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUFFERSIZE 256  /* input buffer size */

int main(int argc, char *argv[])
{
    char comment_leader = '%';  /* define the comment leader */
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    // char buffer[BUFFERSIZE];

    // argv[0] : the name of this executable
    // argv[1] : the name of the script calling this executable via shebang
    FILE *input;                        /* input-file pointer */
    char *input_file_name = argv[1];    /* the script name */

    input = fopen(input_file_name, "r");
    if (input == NULL) {
        fprintf(stderr, "couldn't open file '%s'; %s\n",
                input_file_name, strerror(errno));
        exit(EXIT_FAILURE);
    }

    while ((read = getline(&line, &len, input)) != -1) {
        if (line[0] != comment_leader) {
            printf("%s", line);         /* print the line as a test */
        } else {
            printf("Skipped a comment!\n");
        }
    }
    free(line);

    if (fclose(input) == EOF) {         /* close input file */
        fprintf(stderr, "couldn't close file '%s'; %s\n",
                input_file_name, strerror(errno));
        exit(EXIT_FAILURE);
    }
    return EXIT_SUCCESS;
}   /* ---------- end of function main ---------- */
Now call the script (made executable before) and see the output:
...~> ./first.myint
#!/usr/local/bin/myinterpreter
Skipped a comment!
2 xxxxxxxxxxx
333
444
Skipped a comment!
I made it work. I especially thank holgero for his tail option trick:
tail -n +2 $1 | $REALINTERP
That, and finding this answer on Stack Overflow, made it possible:
How to compile a linux shell script to be a standalone executable *binary* (i.e. not just e.g. chmod 755)?
"The solution that fully meets my needs would be SHC - a free tool"
SHC is a shell to C translator, see here:
http://www.datsi.fi.upm.es/~frosal/
So I wrote polyscript.sh:
$ cat polyscript.sh
#!/bin/bash
tail -n +2 $1 | poly
I compiled this with shc and in turn with gcc:
$ shc-3.8.9/shc -f polyscript.sh
$ gcc -Wall polyscript.sh.x.c -o polyscript
Now, I was able to create a first script written in ML:
$ cat smlscript
#!/home/gergoe/projects/shebang/polyscript $0
print "Hello World!"
and, I was able to run it:
$ chmod u+x smlscript
$ ./smlscript
Poly/ML 5.4.1 Release
> > # Hello World!val it = (): unit
Poly does not have an option to suppress compiler output, but that's not an issue here. It might be interesting to write polyscript directly in C as fgm suggested, but probably that wouldn't make it faster.
So, this is how simple it is. I welcome any comments.
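For completeness, the polyscript wrapper written directly in C, as mentioned above, could look roughly like this (an illustrative sketch; it assumes the interpreter can be started simply as poly and will read the stripped script on its stdin):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s script\n", argv[0]);
        return 1;
    }

    FILE *script = fopen(argv[1], "r");
    if (script == NULL) {
        perror(argv[1]);
        return 1;
    }

    int fd[2];
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {              /* child: becomes the real interpreter */
        dup2(fd[0], 0);             /* it reads the stripped script on stdin */
        close(fd[0]);
        close(fd[1]);
        execlp("poly", "poly", (char *)0);
        perror("execlp");
        _exit(127);
    }

    close(fd[0]);
    FILE *out = fdopen(fd[1], "w");

    char *line = NULL;
    size_t len = 0;
    if (getline(&line, &len, script) == -1) {   /* discard the shebang line */
        fclose(out);
        return 1;
    }
    ssize_t n;
    while ((n = getline(&line, &len, script)) != -1)
        fwrite(line, 1, (size_t)n, out);        /* forward the rest of the script */

    free(line);
    fclose(out);                    /* EOF tells the interpreter we're done */
    wait(NULL);
    return 0;
}
It does the same as tail -n +2 $1 | poly: drop the shebang line and pipe the remainder to the interpreter.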

Reading with cat: Stop when not receiving data

Is there any way to tell the cat command to stop reading when it is not receiving any data? Maybe with some "timeout" that specifies how long no data has been incoming.
Any ideas?
There is a timeout(1) command. Example:
timeout 5s cat /dev/random
Depending on your circumstances, e.g. if you run bash with -e and normally care about the exit code:
timeout 5s cat /dev/random || true
cat itself, no. It reads the input stream until told it's the end of the file, blocking for input if necessary.
There's nothing to stop you writing your own cat equivalent which will use select on standard input to timeout if nothing is forthcoming fast enough, and exit under those conditions.
In fact, I once wrote a snail program (because a snail is slower than a cat) which took an extra argument of characters per second to slowly output a file (a).
So snail 10 myprog.c would output myprog.c at ten characters per second. For the life of me, I can't remember why I did this - I suspect I was just mucking about, waiting for some real work to show up.
Since you're having trouble with it, here's a version of dog.c (based on my aforementioned snail program) that will do what you want:
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <sys/select.h>

static int dofile (FILE *fin) {
    int ch = ~EOF, rc;
    fd_set fds;
    struct timeval tv;

    while (ch != EOF) {
        // Set up for fin file, 5 second timeout.
        FD_ZERO (&fds);
        FD_SET (fileno (fin), &fds);
        tv.tv_sec = 5; tv.tv_usec = 0;

        rc = select (fileno (fin) + 1, &fds, NULL, NULL, &tv);
        if (rc < 0) {
            fprintf (stderr, "*** Error on select (%d)\n", errno);
            return 1;
        }
        if (rc == 0) {
            fprintf (stderr, "*** Timeout on select\n");
            break;
        }

        // Data available, so it will not block.
        if ((ch = fgetc (fin)) != EOF) putchar (ch);
    }
    return 0;
}

int main (int argc, char *argv[]) {
    int argp, rc;
    FILE *fin;

    if (argc == 1)
        rc = dofile (stdin);
    else {
        argp = 1;
        while (argp < argc) {
            if ((fin = fopen (argv[argp], "rb")) == NULL) {
                fprintf (stderr, "*** Cannot open input file [%s] (%d)\n",
                    argv[argp], errno);
                return 1;
            }
            rc = dofile (fin);
            fclose (fin);
            if (rc != 0)
                break;
            argp++;
        }
    }
    return rc;
}
Then, you can simply run dog without arguments (so it will use standard input) and, after five seconds with no activity, it will output:
*** Timeout on select
(a) Actually, it was called slowcat but snail is much nicer and I'm not above a bit of minor revisionism if it makes the story sound better :-)
mbuffer, with its -W option, works for me.
I needed to sink stdin to a file, but with an idle timeout:
I did not need to actually concatenate multiple sources (but perhaps there are ways to use mbuffer for this.)
I did not need any of cat's possible output-formatting options.
I did not mind the progress bar that mbuffer brings to the table.
I did need to add -A /bin/false to suppress a warning, based on a suggestion in the linked man page. My invocation for copying stdin to a file with a 10 second idle timeout ended up looking like:
mbuffer -A /bin/false -W 10 -o ./the-output-file
Here is the code for timeout-cat:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

void timeout(int sig) {
    exit(EXIT_FAILURE);
}

int main(int argc, char* argv[]) {
    int sec = 0;  /* seconds to timeout (0 = no timeout) */
    int c;

    if (argc > 1) {
        sec = atoi(argv[1]);
        signal(SIGALRM, timeout);
        alarm(sec);
    }

    while ((c = getchar()) != EOF) {
        alarm(0);      /* cancel the pending alarm while handling the character */
        putchar(c);
        alarm(sec);    /* re-arm the timeout */
    }
    return EXIT_SUCCESS;
}
It does basically the same as paxdiablo's dog.
Without an argument it works like cat, reading stdin. Provide the timeout in seconds as the first argument.
One limitation (which applies to dog as well): input from a terminal arrives a line at a time, so you have n seconds to provide a whole line (not just any character) to reset the timeout alarm. This is due to the terminal's canonical (line-buffered) input mode rather than readline.
usage:
instead of potentially endless:
cat < some_input > some_output
you can compile the code above to timeout_cat and run:
./timeout_cat 5 < some_input > some_output
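Regarding the line-buffering limitation mentioned above: if a per-character reset of the alarm is wanted when reading from a terminal, the terminal can be switched out of canonical mode before the getchar() loop. A hedged sketch of that setup (the helper name and the demo loop are made up):
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Hypothetical helper: put a terminal fd into non-canonical mode so that
 * getchar()/read() return after every keystroke instead of after a full
 * line. Restore the saved settings with tcsetattr(fd, TCSANOW, saved). */
int set_noncanonical(int fd, struct termios *saved) {
    struct termios t;
    if (tcgetattr(fd, &t) == -1)   /* fails if fd is not a terminal */
        return -1;
    *saved = t;
    t.c_lflag &= ~ICANON;          /* deliver input byte by byte */
    t.c_cc[VMIN] = 1;              /* block until at least one byte arrives */
    t.c_cc[VTIME] = 0;
    return tcsetattr(fd, TCSANOW, &t);
}

int main(void) {
    struct termios saved;
    int c;

    if (set_noncanonical(STDIN_FILENO, &saved) == -1)
        return 1;                  /* stdin is not a terminal */

    while ((c = getchar()) != EOF && c != 'q')
        printf("got '%c'\n", c);   /* fires once per keystroke, not per line */

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);   /* restore the terminal */
    return 0;
}
Applied to file descriptor 0 in timeout_cat, every keystroke would reach getchar() immediately and re-arm the alarm.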
Consider using tail -f --pid.
I am assuming that you are reading from some file, and when the producer is finished (gone?) you stop.
An example that will process /var/log/messages until watcher.sh finishes:
./watcher.sh&
tail -f /var/log/messages --pid $! | ... do something with the output
I faced the same issue of the cat command blocking while reading a tty port via adb shell, but did not find any solution (the timeout command was not working either). Below is the final command I used in my Python script (running on Ubuntu) to make it non-blocking. Hope this helps someone.
bash_command = "adb shell \"echo -en 'ATI0\\r\\n' > /dev/ttyUSB0 && cat /dev/ttyUSB0\" & sleep 1; kill $!"
response = subprocess.check_output(['bash', '-c', bash_command])
Simply cat, then kill the cat after 5 seconds:
cat xyz & sleep 5; kill $!
Get the cat output as a reply after 5 seconds:
reply="`cat xyz & sleep 5; kill $!`"
echo "reply=$reply"
