How to pass "*" in command line arguments.? - linux

I have the code below:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i = 0;
    for (i = 1; i < argc - 1; i++)
        printf(" %s \n", argv[i]);
    return 0;
}
It compiles and runs as follows:
gcc test.c
./a.out 1 * 2
But the output is not what I expected. The output is:
1
a.out
Desktop
Documents
Downloads
ipmsg.log
linux-fusion-3.2.6
Music
Pictures
Public
Templates
test.c

Use single quotes around the asterisk:
./a.out 1 '*' 2
This should prevent your shell from interpreting it as a special character.

You could invoke your test using
./a.out 1 \* 2
if you want to pass * as an argument. You can also use single quotes '*' (as suggested by Esa) or double quotes "*".
Note also that your loop currently ignores the last argument. Use i<argc as your exit condition if this isn't deliberate.
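For reference, a corrected version of the program with i < argc as the exit condition, so the last argument is printed as well:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i;
    /* i < argc includes the last argument; argv[0] is the program name */
    for (i = 1; i < argc; i++)
        printf(" %s \n", argv[i]);
    return 0;
}
With the quoting in place, ./a.out 1 '*' 2 then prints all three arguments, including the literal *.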

Related

How to read stdout from a sub process in bash in real time

I have a simple C++ program that counts from 0 to 9, incrementing every second. Each time the value is incremented, it is written to stdout. This program intentionally uses printf rather than std::cout.
I want to call this program from a bash script, and perform some function (eg echo) on the value when it is written to stdout.
However, my script waits for the program to terminate, and then processes all the values at once.
C++ prog:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int ctr = 0;
    for (int i = 0; i < 10; ++i)
    {
        printf("%i\n", ctr++);
        sleep(1);
    }
    return 0;
}
Bash script:
#!/bin/bash
for c in $(./script-test)
do
    echo $c
done
Is there another way to read the output of my program that will access it in real time, rather than waiting for the process to terminate?
Note: the C++ program is a demo sample - the actual program I am using also uses printf, but I am not able to make changes to this code, hence the solution needs to be in the bash script.
Many thanks,
Stuart
As you correctly observed, $(command) waits for the entire output of command, splits that output, and only after that, the for loop starts.
To read output as soon as is available, use while read:
./script-test | while IFS= read -r line; do
    echo "do stuff with $line"
done
or, if you need to access variables from inside the loop afterwards, and your system supports <():
while IFS= read -r line; do
    echo "do stuff with $line"
done < <(./script-test)
# do more stuff that depends on variables set inside the loop
You might have more luck using a pipe:
#!/bin/bash
./script-test | while IFS= read -r c; do
    echo "$c"
done
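A related caveat: stdio is typically line-buffered only when stdout is a terminal; writing into a pipe usually switches it to full buffering, so the child program may not flush anything until it exits. If your system has coreutils' stdbuf, forcing line buffering on the child keeps the loop real-time:
#!/bin/bash
# stdbuf -oL makes the child's stdout line-buffered even into a pipe
stdbuf -oL ./script-test | while IFS= read -r line; do
    echo "do stuff with $line"
done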

C program stuck on terminal when redirecting its output to another file from bash script

I am trying to run a C program from a bash file in Linux and then write its output to another file (which is in another directory). The command I am using is:
gcc myfile.c -o test
./test > /home/"$user"/Documents/"$name"/"$file"
Whenever I try to run this command, the program doesn't run; instead it appears stuck. Even if I redirect to a single file name (in the same directory as the program), the program does not run until I remove the whole redirection and use the plain ./test command. I don't know why this is happening.
This is the C Program:
#include <stdio.h>

int main()
{
    int array[100], n, c, d, swap;

    printf("Enter number of elements\n");
    scanf("%d", &n);
    printf("Enter %d integers\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    for (c = 0; c < n - 1; c++)
    {
        for (d = 0; d < n - c - 1; d++)
        {
            if (array[d] > array[d+1])
            {
                swap = array[d];
                array[d] = array[d+1];
                array[d+1] = swap;
            }
        }
    }

    printf("Sorted list in ascending order:\n");
    for (c = 0; c < n; c++)
        printf("%d\n", array[c]);
    return 0;
}
Even if I'm writing it like this:
./test | tee text.txt
It is not printing anything.
You can use the tee command to capture the printf output, like ./test | tee file.
Or you can try adding 2>&1 to the redirection to capture stderr as well.
My problem was solved by using the script command in Linux. This command stores both the input and the output produced during the run of the C program in the desired text file. The cat and tee commands don't work when there is a scanf in the C program.
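As a sketch of that approach, assuming the util-linux script command (its -c option runs a single command instead of an interactive shell):
# the typescript file records both the prompts/output and the typed input
script -c ./test /home/"$user"/Documents/"$name"/"$file"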

Execve problems when reading input from pipe

I wrote a simple C program to execute another program using execve.
exec.c:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    char path[128];
    scanf("%s", path);
    char* args[] = {path, NULL};
    char* env[] = {NULL};
    execve(path, args, env);
    printf("error\n");
    return 0;
}
I compiled it:
gcc exec.c -o exec
and after running it and typing "/bin/sh", it successfully ran the shell and displayed the $ prompt like a normal shell.
Then I did the following: I created a server using nc -l 12345 and ran nc localhost 12345 | ./exec. It worked, but for a reason I can't understand, the $ prompt was not displayed this time.
Now, here is the weirdest thing.
When I try to pass the program path AND more input via the pipe at once, it seems like the executed process just ignores the input and closes.
For example:
(echo /bin/sh; echo ls) | ./exec
just starts and exits the shell without running ls. But if I pipe through cat and type the same lines interactively, it works exactly the same way it worked when I piped the nc output:
cat | ./exec
So, to conclude my questions:
I don't understand why the executed shell doesn't print the $ prompt when its input comes from a pipe instead of a terminal.
Why won't the executed program read input from the pipe when the input is already there? It seems to work only in the cases where the pipe remains open after the command execution.
Like AlexP already mentioned, the prompt sign is only displayed when input comes from a terminal.
The second question is trickier: when you call the libc function scanf, its implementation will not only consume /bin/sh from the pipe, but also read ahead and store the following input (the ls line) in its internal buffers. Those internal buffers are lost across execve, so the shell gets nothing.
Here is your program without scanf, to verify this:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    char path[128];
    read(0, path, 8); /* consume "/bin/sh\n" without reading ahead */
    path[7] = '\0';
    char* args[] = {path, NULL};
    char* env[] = {NULL};
    execve(path, args, env);
    printf("error\n");
    return 0;
}
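With this version, (echo /bin/sh; echo ls) | ./exec should actually run ls: the raw read consumes only the first 8 bytes, so the unread "ls\n" is still sitting in the pipe when the shell starts.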
Why did the example with cat work in the first place?
That's (probably) because of buffering also. Try:
(echo /bin/sh; echo ls) | stdbuf -i0 ./exec
I recommend this nice article about buffering for further reading.

how to define script interpreter with shebang

It is clear that one can use the
#!/usr/bin/perl
shebang notation in the very first line of a script to define the interpreter. However, this presupposes an interpreter that treats lines starting with a hash mark as comments. How can one use an interpreter that does not have this feature?
With a wrapper that removes the first line and calls the real interpreter with the remainder of the file. It could look like this:
#!/bin/sh
# set your "real" interpreter here, or use cat for debugging
REALINTERP="cat"
tail -n +2 $1 | $REALINTERP
Other than that: In some cases ignoring the error message about that first line could be an option.
Last resort: code support for the comment char of your interpreter into the kernel.
The first line is interpreted by the operating system: the interpreter named there is started, and the name of the script is handed to it as its first parameter.
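A quick way to see this mechanism, using /bin/echo as a stand-in interpreter (a hypothetical tiny demo):
$ printf '#!/bin/echo\n' > show-my-name
$ chmod +x show-my-name
$ ./show-my-name
./show-my-name
The kernel runs /bin/echo ./show-my-name, so the script's own path is echoed back.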
The following script 'first.myint' calls the interpreter 'myinterpreter', which is the executable built from the C program below.
#!/usr/local/bin/myinterpreter
% 1 #########
2 xxxxxxxxxxx
333
444
% the last comment
A sketch of the personal interpreter:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char comment_leader = '%';  /* define the comment leader */
    char *line = NULL;
    size_t len = 0;
    ssize_t read;

    /* argv[0] : the name of this executable                       */
    /* argv[1] : the name of the script that called this           */
    /*           executable via its shebang line                   */
    FILE *input;                     /* input-file pointer */
    char *input_file_name = argv[1]; /* the script name    */

    input = fopen(input_file_name, "r");
    if (input == NULL) {
        fprintf(stderr, "couldn't open file '%s'; %s\n",
                input_file_name, strerror(errno));
        exit(EXIT_FAILURE);
    }

    while ((read = getline(&line, &len, input)) != -1) {
        if (line[0] != comment_leader) {
            printf("%s", line);      /* print the line as a test */
        } else {
            printf("Skipped a comment!\n");
        }
    }
    free(line);

    if (fclose(input) == EOF) {      /* close input file */
        fprintf(stderr, "couldn't close file '%s'; %s\n",
                input_file_name, strerror(errno));
        exit(EXIT_FAILURE);
    }
    return EXIT_SUCCESS;
} /* ---------- end of function main ---------- */
Now call the script (made executable before) and see the output:
...~> ./first.myint
#!/usr/local/bin/myinterpreter
Skipped a comment!
2 xxxxxxxxxxx
333
444
Skipped a comment!
I made it work. I especially thank holgero for his tail option trick:
tail -n +2 $1 | $REALINTERP
That, and finding this answer on Stack Overflow, made it possible:
How to compile a linux shell script to be a standalone executable *binary* (i.e. not just e.g. chmod 755)?
"The solution that fully meets my needs would be SHC - a free tool"
SHC is a shell to C translator, see here:
http://www.datsi.fi.upm.es/~frosal/
So I wrote polyscript.sh:
$ cat polyscript.sh
#!/bin/bash
tail -n +2 $1 | poly
I compiled this with shc and in turn with gcc:
$ shc-3.8.9/shc -f polyscript.sh
$ gcc -Wall polyscript.sh.x.c -o polyscript
Now, I was able to create a first script written in ML:
$ cat smlscript
#!/home/gergoe/projects/shebang/polyscript $0
print "Hello World!"
and, I was able to run it:
$ chmod u+x smlscript
$ ./smlscript
Poly/ML 5.4.1 Release
> > # Hello World!val it = (): unit
Poly does not have an option to suppress compiler output, but that's not an issue here. It might be interesting to write polyscript directly in C as fgm suggested, but probably that wouldn't make it faster.
So, this is how simple it is. I welcome any comments.

Perl script that has command line arguments with spaces

I feel like I'm missing something pretty obvious here, but I can't seem to figure out what's going on. I have a Perl script that I'm calling from C code. The script plus its arguments looks something like this:
my_script "/some/file/path" "arg" "arg with spaces" "arg" "/some/other/file"
When I run it on Windows, Perl correctly identifies it as 5 arguments, whereas when I run it on the SunOS Unix machine, it identifies 8, splitting the arg with spaces into separate args.
Not sure if it makes any difference, but on Windows I'm running it like:
perl my_script <args>
while on Unix I'm just running it as an executable, as shown above.
Any idea why Unix is not handling that argument properly?
Edit:
Here's the code for calling the perl script:
char cmd[1000];
char *script = "my_script";
char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
sprintf( cmd, "%s %s > /dev/null 2>&1", script, arguments );
system( cmd );
That's not exactly it, as I build the argument string a little more dynamically, but that's the gist.
Also, here's my code for reading the arguments in:
($arg1, $arg2, $arg3, $arg4, $arg5) = @ARGV;
I know it's ridiculously naive, but this script will only be run from the C application, so nothing more complex should be required.
Presumably
system("my_script \"/some/file/path\" \"arg\" \"arg with spaces\" \"arg\" \"/some/other/file\");
causes everything to go through bash (because of the need to interpret the shebang line, eating up the quotes you pass). Again, presumably, the problem could be avoided by invoking perl directly rather than relying on the shell to find it (although this might be a problem if the perl on the path is different than the one provided on the shebang line).
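As a sketch of that suggestion, reusing the question's own snippet (hypothetical, untested on SunOS): prefix the command with perl so the shell never has to interpret the script's shebang line:
char cmd[1000];
char *script = "my_script";
char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
/* run perl explicitly instead of relying on the shebang line */
sprintf(cmd, "perl %s %s", script, arguments);
system(cmd);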
Update:
Given your:
char *argument = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
you might want to try:
char *argument = "\\\"arg1\\\" \\\"arg2\\\" \\\"arg3 with spaces\\\" \\\"arg4\\\" \\\"arg5\\\"";
Another update:
Thank you for accepting my answer, however, my whole theory might be wrong.
I tried the double-backwhacked version of argument above in GNU bash, version 4.0.28(2)-release (i686-pc-linux-gnu) and it ended up passing
[sinan#kas src]$ ./t
'"arg1"'
'"arg2"'
'"arg3'
'with'
'spaces"'
'"arg4"'
'"arg5"'
whereas the original argument worked like a charm. I am a little puzzled by this. Maybe the shell on the SUN isn't bash or maybe there is something else going on.
Did you forget the quotes on SunOS?
If you do
perl script arg1 arg2 "arg3 with spaces" arg4 arg5
you should be good. Otherwise, try switching shells.
Now that the question was updated, I still can't reproduce your results:
~% cat test.c
#include <stdio.h>
#include <stdlib.h>
int main(void){
    char cmd[1000];
    char *script = "./my_script";
    char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
    sprintf( cmd, "%s %s", script, arguments);
    system( cmd );
    return 0;
}
~% cat my_script
#!/usr/bin/perl
($arg1, $arg2, $arg3, $arg4, $arg5) = @ARGV;
print "arg1 = $arg1\n";
print "arg2 = $arg2\n";
print "arg3 = $arg3\n";
print "arg4 = $arg4\n";
print "arg5 = $arg5\n";
~% gcc test.c
~% ./a.out
arg1 = arg1
arg2 = arg2
arg3 = arg3 with spaces
arg4 = arg4
arg5 = arg5
~%
There is something else with your configuration.
Previous answer:
Unix shells interpret quoted arguments as a single argument. You can do a quick test:
for i in a b "c d e" f; do echo $i; done
The result is what you expect it to be: "c d e" is treated like a single argument.
I think you have a problem in your script, in the argument handling logic.
Is it possible that the SunOS kernel's handling of shebang interpreters is ludicrously bad? Try running it as "/path/to/perl script args" instead of "script args" and see if anything changes.
