I happened to run into a problem calling the C printf function from SBCL via CFFI. When I call printf, I can't find the output text; only the return value of printf shows up on the REPL. But when I quit SBCL, the output text magically appears on the terminal.
$ sbcl
* (ql:quickload :cffi)
* (cffi:foreign-funcall "printf" :string "hello" :int)
;;=> 5
* (quit)
hello$
The last line, "hello$", means that when I quit SBCL, the text "hello" appears on the terminal, followed by the shell prompt "$". So where does printf print the text "hello"?
I tried `finish-output' and `force-output' on *standard-output*, but that does not work.
The problem is that C's stdio library has its own buffering that has nothing to do with Lisp's. Flushing the output requires you to have a pointer to C's FILE *stdout variable. You can get this pointer like this:
(cffi:defcvar ("stdout" stdout) :pointer)
Then, after using printf:
(cffi:foreign-funcall "fflush" :pointer stdout :int)
Write this in flush.c:
#include <stdio.h>
void flush() {
    fflush(stdout);
}
Then:
gcc -fpic -shared flush.c -o flush.so
Then in SLIME:
(cffi:load-foreign-library "./flush.so")
(cffi:foreign-funcall "puts" :string "Hello World" :int)
(cffi:foreign-funcall "flush")
But this only prints in *inferior-lisp*, even with (with-output-to-string (*standard-output*) ...).
Consider this:
xinput --test 11 | grep "button press 1"
(11 is my optical mouse's device id; it could be anything else, and "button press 1" means a left click.)
When I click somewhere on the screen, it shows me this:
button press 1
No problem so far. But when I wanted to use that output as the input to my C program, I noticed that the stdin of the program after another level of piping is always empty:
xinput --test 11 | grep "button press 1" | ./poll_test
Here's my poll_test code:
/*This is just a program to test polling functionality on stdin.
I've used other programs instead of this as well.
None of them were able to read the stdin */
#include <fcntl.h>
#include <stdio.h>
#include <sys/poll.h>
#include <sys/time.h>
#include <unistd.h>
int main(int argc, char **argv) {
    char buf[1024];
    int bytes_read;
    struct pollfd pfds[1];
    while (1) {
        pfds[0].fd = 0;            /* watch stdin */
        pfds[0].events = POLLIN;
        poll(pfds, 1, -1);         /* block until stdin is readable */
        if (pfds[0].revents & POLLIN) {
            bytes_read = read(0, buf, sizeof buf - 1);
            if (bytes_read <= 0) {
                printf("stdin closed\n");
                return 0;
            }
            buf[bytes_read] = '\0';    /* NUL-terminate before printing */
            printf("%s\n", buf);
            write(1, buf, bytes_read);
        }
    }
}
It prints nothing despite the clicks.
That is confusing. This is not normal behavior. When I run this, for example:
ls | grep a | grep b
It shows me the results successfully. The only difference is that ls exits after it prints to stdout, but that's not the case in the xinput version.
I spent a lot of time trying to write a script that plays a beep on a mouse click, but it didn't work. So I wanted to use a C program, because there's no polling functionality in bash.
As far as I know, pipes in bash work something like this: the second program (the right one in the pipeline) executes until it wants to READ from its stdin, and then it blocks until there is something to read from stdin.
With that in mind, the third program in the command I posted should be able to read the output, as is the case when the first program exits.
The next step would be to use libxcb directly instead of the xinput command, if the pipe problem can't be solved.
I'm totally confused. Any help would be much appreciated.
EDIT: I also tried using an intermediate file descriptor:
exec 3<&1
xinput --test 11 | grep -i "button press 1" >&3 | ./poll_test 3>&1
but that didn't help. Forcibly flushing stdout doesn't work either:
xinput --test 11 | grep -i "button press 1" ; stdbuf -oL | ./poll_test
It seems grep changes its buffering behaviour depending on whether the output is a terminal or not. I don't know exactly why this happens, but --line-buffered forces it to use line buffering (emitting each matching line as soon as it ends):
xinput --test 11 | grep "button press 1" --line-buffered | ./poll_test
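This mirrors stdio's default policy, which many filters inherit: output to a terminal is line-buffered, while output to a pipe or file is fully buffered. A filter written in C can force line buffering regardless of destination, which is effectively what --line-buffered achieves; a minimal sketch:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* When stdout is not a terminal, stdio defaults to full (block)
       buffering, so lines pile up until the buffer fills. Forcing
       line buffering makes each line visible to the next pipe stage
       as soon as the newline is written. */
    if (!isatty(STDOUT_FILENO))
        setvbuf(stdout, NULL, _IOLBF, 0);

    printf("button press 1\n");   /* flushed at the newline */
    return 0;
}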
I want to run the following bash command, but inside gdb (so I can debug the program):
myProgram "`echo -en '\x41\x41\x41\x41'`"
I'm trying to do this (in gdb):
(gdb) run "`echo -en "\x41\x41\x41\x41"`"
I DON'T mean the stdin redirect:
echo -en "\x41\x41\x41\x41" > command_output.txt
gdb myProgram
(gdb) run < command_output.txt
How do I insert hex values as an argument to a program in gdb?
You may try:
(gdb) run $(echo -en "\x41\x41\x41\x41")
The gdb set args command can set the arguments. Using eval, which runs its arguments through printf, we can insert almost arbitrary characters using hex escapes. (We're limited by the fact that gdb will typically invoke the target using a shell, so it helps to add single quotes around each argument).
(gdb) eval "set args '%s'", "\x41\x41\x20\x41"
You can't do that without a process to debug.
That's another gdb limitation. An alternative that doesn't make gdb want to allocate memory in the target is to give eval only %c conversions and integer arguments, like this:
(gdb) eval "set args '%c%c%c%c'", 0x41, 0x41, 0x20, 0x41
But having to put the exact number of %c conversions into that format string is tedious, so let's stick with the single %s. We need to start the process, even though we're going to restart it right after we set the args using eval.
(gdb) start
Starting program: /home/mp/argprint
Temporary breakpoint 1, main (argc=1, argv=0x7ffffffee2b8) at argprint.c:4
4 for(int i=0; i < argc; i++) {
(gdb) eval "set args '%s'", "\x41\x41\x20\x41"
(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /home/mp/argprint 'AA A'
arg 0 is <</home/mp/argprint>>
arg 1 is <<AA A>>
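For reference, the argprint.c test program implied by the transcript is just a loop over argv; a sketch of what it might look like (reconstructed from the output above, not shown in the original answer):
/* argprint.c - print each command-line argument in the
   "arg N is <<...>>" format seen in the transcript. */
#include <stdio.h>

int main(int argc, char *argv[]) {
    for (int i = 0; i < argc; i++) {
        printf("arg %d is <<%s>>\n", i, argv[i]);
    }
    return 0;
}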
If you have an old gdb that doesn't have the eval command but does have Python, you can do this:
(gdb) python gdb.execute("set args '\x41\x41\x20\x41'")
It is clear that one can use the
#!/usr/bin/perl
shebang notation in the very first line of a script to define the interpreter. However, this presupposes an interpreter that ignores hashmark-starting lines as comments. How can one use an interpreter that does not have this feature?
With a wrapper that removes the first line and calls the real interpreter with the remainder of the file. It could look like this:
#!/bin/sh
# set your "real" interpreter here, or use cat for debugging
REALINTERP="cat"
tail -n +2 $1 | $REALINTERP
Other than that: In some cases ignoring the error message about that first line could be an option.
Last resort: code support for the comment char of your interpreter into the kernel.
I think the first line is interpreted by the operating system: the interpreter is started, and the name of the script is handed to it as its first parameter.
The following script 'first.myint' calls the interpreter 'myinterpreter', which is the executable built from the C program below.
#!/usr/local/bin/myinterpreter
% 1 #########
2 xxxxxxxxxxx
333
444
% the last comment
A sketch of the personal interpreter:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define BUFFERSIZE 256 /* input buffer size */
int main(int argc, char *argv[])
{
    char comment_leader = '%';   /* define the comment leader */
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    // char buffer[BUFFERSIZE];
    // argv[0] : the name of this executable
    // argv[1] : the name of the script calling this executable via shebang
    FILE *input;                            /* input-file pointer */
    char *input_file_name = argv[1];        /* the script name */

    input = fopen(input_file_name, "r");
    if (input == NULL) {
        fprintf(stderr, "couldn't open file '%s'; %s\n",
                input_file_name, strerror(errno));
        exit(EXIT_FAILURE);
    }

    while ((read = getline(&line, &len, input)) != -1) {
        if (line[0] != comment_leader) {
            printf("%s", line);             /* print the line as a test */
        } else {
            printf("Skipped a comment!\n");
        }
    }
    free(line);

    if (fclose(input) == EOF) {             /* close input file */
        fprintf(stderr, "couldn't close file '%s'; %s\n",
                input_file_name, strerror(errno));
        exit(EXIT_FAILURE);
    }
    return EXIT_SUCCESS;
}   /* ---------- end of function main ---------- */
Now call the script (made executable before) and see the output:
...~> ./first.myint
#!/usr/local/bin/myinterpreter
Skipped a comment!
2 xxxxxxxxxxx
333
444
Skipped a comment!
I made it work. I especially thank holgero for his tail option trick:
tail -n +2 $1 | $REALINTERP
That, and finding this answer on Stack Overflow, made it possible:
How to compile a linux shell script to be a standalone executable *binary* (i.e. not just e.g. chmod 755)?
"The solution that fully meets my needs would be SHC - a free tool"
SHC is a shell to C translator, see here:
http://www.datsi.fi.upm.es/~frosal/
So I wrote polyscript.sh:
$ cat polyscript.sh
#!/bin/bash
tail -n +2 $1 | poly
I compiled this with shc and in turn with gcc:
$ shc-3.8.9/shc -f polyscript.sh
$ gcc -Wall polyscript.sh.x.c -o polyscript
Now, I was able to create a first script written in ML:
$ cat smlscript
#!/home/gergoe/projects/shebang/polyscript $0
print "Hello World!"
and I was able to run it:
$ chmod u+x smlscript
$ ./smlscript
Poly/ML 5.4.1 Release
> > # Hello World!val it = (): unit
Poly does not have an option to suppress compiler output, but that's not an issue here. It might be interesting to write polyscript directly in C as fgm suggested, but probably that wouldn't make it faster.
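For what it's worth, a C version of the wrapper might look like the sketch below. It assumes a plain #!/path/to/polyscript shebang (so the kernel passes the script path in argv[1]) and an interpreter named poly that reads the program from stdin; this is untested and only illustrative:
/* polyscript.c - skip the shebang line of the script in argv[1]
   and pipe the remainder into the real interpreter's stdin. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s script\n", argv[0]);
        return EXIT_FAILURE;
    }
    FILE *script = fopen(argv[1], "r");
    if (script == NULL) { perror(argv[1]); return EXIT_FAILURE; }

    int c;
    while ((c = fgetc(script)) != EOF && c != '\n')
        ;                                  /* discard the shebang line */

    FILE *interp = popen("poly", "w");     /* feed the real interpreter */
    if (interp == NULL) { perror("popen"); return EXIT_FAILURE; }

    while ((c = fgetc(script)) != EOF)
        fputc(c, interp);                  /* forward the rest verbatim */

    fclose(script);
    return pclose(interp) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}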
So, this is how simple it is. I welcome any comments.
Can you help me understand the following code?
void errorexit(char *pchar) {
    // display an error to the standard err.
    fprintf(stderr, pchar);
    fprintf(stderr, "\n");
    exit(1);
}
Calling errorexit("Error Message") will print "Error Message" to the standard error stream (often in a terminal) and exit the program. Any programs (such as the shell) that called your program will know that there was an error, since your program exited with a non-zero status.
It is printing out the string pointed to by pchar to the standard error output via fprintf and then forcing the application to exit with a return code of 1. This would be used for critical errors when the application can't continue running.
That function prints the provided string and a newline to stderr and then terminates the current running program, providing 1 as the return value.
fprintf is like printf in that it outputs characters, but fprintf is a little different in that it takes a file handle as an argument. In this case stderr is the file handle for standard error. This handle is already defined for you by stdio.h, and corresponds to the error output stream. stdout is what printf outputs to, so fprintf(stdout, "hello") is equivalent to printf("hello").
exit is a function that terminates the execution of the current process and returns whatever value was its argument as the return code to the parent process (usually the shell). A non-zero return code usually indicates failure, the specific value indicating the type of failure.
If you ran this program from the shell:
#include <stdio.h>
#include "errorexit.h"
int main(int argc, char* argv[])
{
    printf("Hello world!\n");
    errorexit("Goodbye :(");
    printf("Just kidding!\n");
    return 0;
}
You'd see this output:
Hello world!
Goodbye :(
And your shell would show "1" as the return value (in bash, you can view the last return code with echo $?).
Note that "Just kidding!" would not be printed, as errorexit calls exit, ending the program before main finishes.
I feel like I'm missing something pretty obvious here, but I can't seem to figure out what's going on. I have a Perl script that I'm calling from C code. The script plus arguments is something like this:
my_script "/some/file/path" "arg" "arg with spaces" "arg" "/some/other/file"
When I run it in Windows, Perl correctly identifies it as 5 arguments, whereas when I run it on the SunOS Unix machine, it identifies 8, splitting the argument with spaces into separate arguments.
Not sure if it makes any difference, but in Windows I'm running it like:
perl my_script <args>
while in Unix I'm just running it as an executable, as shown above.
Any idea why Unix is not managing that argument properly?
Edit:
Here's the code for calling the perl script:
char cmd[1000];
char *script = "my_script";
char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
sprintf( cmd, "%s %s > /dev/null 2>&1", script, arguments );
system( cmd );
That's not exactly it, as I build the argument string a little more dynamically, but that's the gist.
Also, here's my code for reading the arguments in:
($arg1, $arg2, $arg3, $arg4, $arg5) = @ARGV;
I know it's ridiculously naive, but this script will only be run from the C application, so nothing more complex should be required.
Presumably
system("my_script \"/some/file/path\" \"arg\" \"arg with spaces\" \"arg\" \"/some/other/file\");
causes everything to go through bash (because of the need to interpret the shebang line, eating up the quotes you pass). Again, presumably, the problem could be avoided by invoking perl directly rather than relying on the shell to find it (although this might be a problem if the perl on the path is different than the one provided on the shebang line).
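One way to sidestep shell quoting entirely is to skip system() and pass the argument vector yourself with fork and exec; a sketch, assuming perl is on the PATH and the script is named my_script:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Each argument is its own argv entry, so no shell ever
       re-splits "arg3 with spaces". */
    char *args[] = { "perl", "my_script",
                     "arg1", "arg2", "arg3 with spaces", "arg4", "arg5",
                     NULL };
    pid_t pid = fork();
    if (pid == 0) {
        execvp("perl", args);       /* only returns on failure */
        perror("execvp");
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);       /* wait for the script to finish */
    return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
}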
Update:
Given your:
char *argument = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
you might want to try:
char *argument = "\\\"arg1\\\" \\\"arg2\\\" \\\"arg3 with spaces\\\" \\\"arg4\\\" \\\"arg5\\\"";
Another update:
Thank you for accepting my answer; however, my whole theory might be wrong.
I tried the double-backwhacked version of argument above in GNU bash, version 4.0.28(2)-release (i686-pc-linux-gnu) and it ended up passing
[sinan#kas src]$ ./t
'"arg1"'
'"arg2"'
'"arg3'
'with'
'spaces"'
'"arg4"'
'"arg5"'
whereas the original argument worked like a charm. I am a little puzzled by this. Maybe the shell on the Sun isn't bash, or maybe there is something else going on.
Did you forget the quotes on SunOS?
If you do
perl script arg1 arg2 "arg3 with spaces" arg4 arg5
you should be good. Otherwise, try switching shells.
Now that the question was updated, I still can't reproduce your results:
~% cat test.c
#include <stdio.h>
#include <stdlib.h>
int main(void){
    char cmd[1000];
    char *script = "./my_script";
    char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
    sprintf( cmd, "%s %s", script, arguments);
    system( cmd );
    return 0;
}
~% cat my_script
#!/usr/bin/perl
($arg1, $arg2, $arg3, $arg4, $arg5) = @ARGV;
print "arg1 = $arg1\n";
print "arg2 = $arg2\n";
print "arg3 = $arg3\n";
print "arg4 = $arg4\n";
print "arg5 = $arg5\n";
~% gcc test.c
~% ./a.out
arg1 = arg1
arg2 = arg2
arg3 = arg3 with spaces
arg4 = arg4
arg5 = arg5
~%
There is something else going on with your configuration.
Previous answer:
Unix shells interpret quoted arguments as a single argument. You can do a quick test:
for i in a b "c d e" f; do echo $i; done
The result is what you expect it to be: "c d e" is treated like a single argument.
I think you have a problem in your script, in the argument-handling logic.
Is it possible that the SunOS kernel's handling of shebang interpreters is ludicrously bad? Try running it as "/path/to/perl script args" instead of "script args" and see if anything changes.