Is there a way in NMAKE to escape backslashes in the macro definition option in CL?

If I have the following C file:
#include <stdio.h>
int main()
{
    printf(MACRO);
}
And Makefile:
MYPATH=$(HOMEDRIVE)$(HOMEPATH)\foobar
main.exe: main.c
    CL /DMACRO=\"$(MYPATH)\" main.c
It fails because the program expands to:
int main()
{
printf("C:\Users\me\foobar");
}
And the backslashes are taken as escape chars by the C compiler.
Is there any way for NMAKE to run:
CL /DMACRO=\"C:\\Users\\me\\foobar\\\" main.c

You can double the backslashes in NMAKE via macro substitution:
MYPATH=$(HOMEDRIVE)$(HOMEPATH)\foobar
MYPATH2=$(MYPATH:\=\\) # double any back-slashes
main.exe: main.c
    CL /DMACRO=\"$(MYPATH2)\" main.c
See e.g. "Substitution Within Macros" (page 36) in https://www.scribd.com/document/19344397/Managing-Projects-With-NMAKE.
(You may also want to add a trailing back-slash to foobar.)
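With the doubled backslashes the compiler sees a well-formed string literal. For illustration (a sketch, assuming $(HOMEDRIVE)$(HOMEPATH) resolves to C:\Users\me as in the question), the program now effectively compiles as:
#include <stdio.h>
int main()
{
    printf("C:\\Users\\me\\foobar"); /* each \\ in the source is one backslash at run time */
}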

Related

Execve problems when reading input from pipe

I wrote a simple C program to execute another program using execve.
exec.c:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    char path[128];
    scanf("%s", path);
    char* args[] = {path, NULL};
    char* env[] = {NULL};
    execve(path, args, env);
    printf("error\n");
    return 0;
}
I compiled it:
gcc exec.c -o exec
and after running it and typing "/bin/sh", it successfully ran the shell and displayed the $ sign like a normal shell.
Then I did the following: I created a server using nc -l 12345 and ran nc localhost 12345 | ./exec. It worked, but for a reason I can't understand, the $ sign was not displayed this time.
Now, here is the weirdest thing.
When I try to pass the program path AND more input via the pipe at once, it seems like the executed process just ignores the input and exits. For example:
(echo /bin/sh; echo ls) | ./exec
But if I run a variant that keeps the pipe open afterwards (the cat version referred to in the answer below), it works exactly the same way it worked when I piped the nc output.
So, to conclude, my questions are:
I don't understand why the executed shell doesn't print the $ prompt when it reads input from a pipe instead of a terminal.
Why won't the executed program read input from the pipe when the input is already there and not waiting? It seems to work only in the cases where the pipe remains open after the command execution.
As AlexP already mentioned, the prompt is only displayed when input comes from a terminal.
The second question is trickier: when you call the libc function scanf, its implementation will not only consume /bin/sh from the pipe, but also read ahead and store the next input (ls) in its internal buffers. Those internal buffers are destroyed by execve, so the shell gets nothing.
Here is your program without scanf to verify this:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    char path[128];
    read(0, path, 8); // consume "/bin/sh" plus the trailing newline (8 bytes)
    path[7] = '\0';   // overwrite the newline with a string terminator
    char* args[] = {path, NULL};
    char* env[] = {NULL};
    execve(path, args, env);
    printf("error\n");
    return 0;
}
Why did the example with cat work in the first place?
That's (probably) also because of buffering. Try:
(echo /bin/sh; echo ls) | stdbuf -i0 ./exec
I recommend this nice article about buffering for further reading.
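To see the read-ahead for yourself, here is a small hypothetical demo (not from the original answer) that checks how much data scanf's first call already pulled off the pipe:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char word[128], rest[256];
    scanf("%127s", word);                   /* stdio reads a whole block, not just one word */
    ssize_t n = read(0, rest, sizeof rest); /* what is left at the file-descriptor level */
    fprintf(stderr, "after scanf, fd 0 had %zd bytes left\n", n);
    return 0;
}
Running printf '/bin/sh\nls\n' | ./demo typically reports 0 bytes left: the ls line is already sitting in stdio's internal buffer, exactly where execve would lose it.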

Where is the text printed by C printf?

I happened to run into a problem calling the C printf function from SBCL via CFFI. When I call printf, I can't find the output text; only the return value of printf shows up in the REPL. But when I quit SBCL, the output text magically appears on the terminal.
$ sbcl
* (ql:quickload :cffi)
* (cffi:foreign-funcall "printf" :string "hello" :int)
;;=> 5
* (quit)
hello$
The last line, "hello$", means that when I quit SBCL, the text "hello" appears on the terminal, followed by the shell prompt "$". So where does printf print the text "hello"?
I tried finish-output and force-output on *standard-output*, but that does not work.
The problem is that C's stdio library has its own buffering, which has nothing to do with Lisp's. Flushing the output requires a pointer to C's FILE *stdout variable. You can get this pointer like this:
(cffi:defcvar ("stdout" stdout) :pointer)
Then, after using printf:
(cffi:foreign-funcall "fflush" :pointer stdout :int)
Alternatively, write this in flush.c:
#include <stdio.h>
void flush() {
    fflush(stdout);
}
Then:
gcc -fpic -shared flush.c -o flush.so
Then in SLIME:
(cffi:load-foreign-library "./flush.so")
(cffi:foreign-funcall "puts" :string "Hello World" :int)
(cffi:foreign-funcall "flush")
But this only prints in *inferior-lisp*, even with (with-output-to-string (*standard-output*) ...).

How to define a script interpreter with a shebang

It is clear that one can use the
#!/usr/bin/perl
shebang notation in the very first line of a script to define the interpreter. However, this presupposes an interpreter that ignores lines starting with a hash mark as comments. How can one use an interpreter that does not have this feature?
With a wrapper that removes the first line and calls the real interpreter with the remainder of the file. It could look like this:
#!/bin/sh
# set your "real" interpreter here, or use cat for debugging
REALINTERP="cat"
tail -n +2 "$1" | $REALINTERP
Other than that: In some cases ignoring the error message about that first line could be an option.
Last resort: code support for the comment char of your interpreter into the kernel.
The first line is interpreted by the operating system: the interpreter is started, and the name of the script is handed to it as its first parameter.
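In other words, running the example script below is roughly equivalent to the kernel making this execve call (a sketch of the mechanism, using the interpreter path from the example):
#include <unistd.h>

int main(void) {
    /* What executing ./first.myint becomes once the kernel has read the
       shebang: the interpreter, then the script path as its first argument. */
    char *args[] = { "/usr/local/bin/myinterpreter", "./first.myint", NULL };
    char *env[]  = { NULL };
    execve(args[0], args, env);
    return 1;  /* reached only if execve fails */
}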
The following script 'first.myint' calls the interpreter 'myinterpreter' which is the executable from the C program below.
#!/usr/local/bin/myinterpreter
% 1 #########
2 xxxxxxxxxxx
333
444
% the last comment
The sketch of the personal interpreter:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main ( int argc, char *argv[] )
{
    char comment_leader = '%';   /* define the comment leader */
    char *line = NULL;
    size_t len = 0;
    ssize_t nread;

    // argv[0] : the name of this executable
    // argv[1] : the name of the script calling this executable via shebang
    FILE *input;                         /* input-file pointer */
    char *input_file_name = argv[1];     /* the script name */

    input = fopen( input_file_name, "r" );
    if ( input == NULL ) {
        fprintf ( stderr, "couldn't open file '%s'; %s\n",
                  input_file_name, strerror(errno) );
        exit (EXIT_FAILURE);
    }

    while ((nread = getline(&line, &len, input)) != -1) {
        if ( line[0] != comment_leader ) {
            printf( "%s", line );        /* print the line as a test */
        }
        else {
            printf ( "Skipped a comment!\n" );
        }
    }
    free(line);

    if( fclose(input) == EOF ) {         /* close the input file */
        fprintf ( stderr, "couldn't close file '%s'; %s\n",
                  input_file_name, strerror(errno) );
        exit (EXIT_FAILURE);
    }
    return EXIT_SUCCESS;
}   /* ---------- end of function main ---------- */
Now call the script (made executable before) and see the output:
...~> ./first.myint
#!/usr/local/bin/myinterpreter
Skipped a comment!
2 xxxxxxxxxxx
333
444
Skipped a comment!
I made it work. I especially thank holgero for his tail option trick:
tail -n +2 "$1" | $REALINTERP
That, and finding this answer on Stack Overflow, made it possible:
How to compile a linux shell script to be a standalone executable *binary* (i.e. not just e.g. chmod 755)?
"The solution that fully meets my needs would be SHC - a free tool"
SHC is a shell to C translator, see here:
http://www.datsi.fi.upm.es/~frosal/
So I wrote polyscript.sh:
$ cat polyscript.sh
#!/bin/bash
tail -n +2 "$1" | poly
I compiled this with shc and in turn with gcc:
$ shc-3.8.9/shc -f polyscript.sh
$ gcc -Wall polyscript.sh.x.c -o polyscript
Now, I was able to create a first script written in ML:
$ cat smlscript
#!/home/gergoe/projects/shebang/polyscript $0
print "Hello World!"
and, I was able to run it:
$ chmod u+x smlscript
$ ./smlscript
Poly/ML 5.4.1 Release
> > # Hello World!val it = (): unit
Poly does not have an option to suppress compiler output, but that's not an issue here. It might be interesting to write polyscript directly in C, as fgm suggested, but it probably wouldn't make it faster.
So, this is how simple it is. I welcome any comments.

How to pass "*" in command line arguments.?

See, I have the code below:
#include <stdio.h>

int main ( int argc, char *argv[] )
{
    int i=0;
    for(i=1;i<argc-1;i++)
        printf(" %s \n",argv[i]);
    return 0;
}
It compiles and runs as follows:
gcc test.c
./a.out 1 * 2
and now its output is garbled! The output is:
1
a.out
Desktop
Documents
Downloads
ipmsg.log
linux-fusion-3.2.6
Music
Pictures
Public
Templates
test.c
Use single quotes around the asterisk:
./a.out 1 '*' 2
This should prevent your shell from interpreting it as a special character.
You could invoke your test using
./a.out 1 \* 2
if you want to pass * as an argument. You can also use single quotes '*' (as suggested by Esa) or double quotes "*".
Note also that your loop currently ignores the last argument. Use i<argc as your exit condition if this isn't deliberate.
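For completeness, a corrected version of the loop (a minimal sketch) that also prints the last argument:
#include <stdio.h>

int main ( int argc, char *argv[] )
{
    int i;
    for (i = 1; i < argc; i++)  /* i < argc visits the last argument too */
        printf(" %s \n", argv[i]);
    return 0;
}
Invoked as ./a.out 1 '*' 2, this prints 1, *, and 2.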

Perl script that has command line arguments with spaces

I feel like I'm missing something pretty obvious here, but I can't seem to figure out what's going on. I have a Perl script that I'm calling from C code. The script plus arguments looks something like this:
my_script "/some/file/path" "arg" "arg with spaces" "arg" "/some/other/file"
When I run it in Windows, Perl correctly identifies it as 5 arguments, whereas when I run it on the SunOS Unix machine, it identifies 8, splitting the argument with spaces into separate arguments.
Not sure if it makes any difference, but in Windows I'm running it like:
perl my_script <args>
While in Unix I'm just running it as an executable, as shown above.
Any idea why Unix is not managing that argument properly?
Edit:
Here's the code for calling the perl script:
char cmd[1000];
char *script = "my_script";
char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
sprintf( cmd, "%s %s > /dev/null 2>&1", script, arguments);
system( cmd );
That's not exactly it, as I build the argument string a little more dynamically, but that's the gist.
Also, here's my code for reading the arguments in:
($arg1, $arg2, $arg3, $arg4, $arg5) = @ARGV;
I know it's ridiculously naive, but this script will only be run from the C application, so nothing more complex should be required.
Presumably
system("my_script \"/some/file/path\" \"arg\" \"arg with spaces\" \"arg\" \"/some/other/file\"");
causes everything to go through the shell (because of the need to interpret the shebang line), eating up the quotes you pass. Again, presumably, the problem could be avoided by invoking perl directly rather than relying on the shell to find it (although this might be a problem if the perl on the path is different from the one given on the shebang line).
Update:
Given your:
char *argument = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
you might want to try:
char *argument = "\\\"arg1\\\" \\\"arg2\\\" \\\"arg3 with spaces\\\" \\\"arg4\\\" \\\"arg5\\\"";
Another update:
Thank you for accepting my answer; however, my whole theory might be wrong.
I tried the double-backwhacked version of argument above in GNU bash, version 4.0.28(2)-release (i686-pc-linux-gnu), and it ended up passing
[sinan#kas src]$ ./t
'"arg1"'
'"arg2"'
'"arg3'
'with'
'spaces"'
'"arg4"'
'"arg5"'
whereas the original argument worked like a charm. I am a little puzzled by this. Maybe the shell on the Sun machine isn't bash, or maybe there is something else going on.
Did you forget the quotes on SunOS?
If you do
perl script arg1 arg2 "arg3 with spaces" arg4 arg5
you should be good. Otherwise, try switching shells.
Now that the question was updated, I still can't reproduce your results:
~% cat test.c
#include <stdio.h>
#include <stdlib.h>
int main(void){
char cmd[1000];
char *script = "./my_script";
char *arguments = "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"";
sprintf( cmd, "%s %s", script, arguments);
system( cmd );
return 0;
}
~% cat my_script
#!/usr/bin/perl
($arg1, $arg2, $arg3, $arg4, $arg5) = @ARGV;
print "arg1 = $arg1\n";
print "arg2 = $arg2\n";
print "arg3 = $arg3\n";
print "arg4 = $arg4\n";
print "arg5 = $arg5\n";
~% gcc test.c
~% ./a.out
arg1 = arg1
arg2 = arg2
arg3 = arg3 with spaces
arg4 = arg4
arg5 = arg5
~%
There is something else going on with your configuration.
Previous answer:
Unix shells interpret quoted arguments as a single argument. You can do a quick test:
for i in a b "c d e" f; do echo $i; done
The result is what you expect it to be: "c d e" is treated like a single argument.
I think you have a problem in your script, in the argument handling logic.
Is it possible that the SunOS kernel's handling of shebang interpreters is ludicrously bad? Try running it as "/path/to/perl script args" instead of "script args" and see if anything changes.
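A hedged sketch of that suggestion, adapted to the asker's C harness (the /usr/bin/perl path is taken from the script's shebang line; adjust as needed):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char cmd[1000];
    /* Invoke perl explicitly so the shell never has to locate and parse
       the interpreter via the shebang line; the quoting is unchanged. */
    sprintf( cmd, "/usr/bin/perl my_script %s",
             "\"arg1\" \"arg2\" \"arg3 with spaces\" \"arg4\" \"arg5\"" );
    return system( cmd );
}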
