Using the gcc compiler on Linux, I have a C program that takes a single command-line argument, like
./myprog 0
I want to write a makefile that uses conditional compilation so that if I compile like
make SPECIAL=1
then the command-line argument is used.
If I compile without SPECIAL, like
make
then the command-line argument is ignored even if one is entered.
How can I make this possible?
I am using the following compilation command:
gcc -o myprog myprog.c prog2.c prog3.c
The makefile can be trivial but the real work has to happen in the C code. Something like this, as a minimal example:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i;
#ifndef SPECIAL
    /* Without SPECIAL defined, pretend no arguments were given. */
    argc = 1;
    argv[1] = NULL;
#endif
    for (i = 1; i < argc; ++i) {
        printf(">>%s<<\n", argv[i]);
    }
}
Now, you don't even really need a Makefile for this simple program.
bash$ make CFLAGS=-DSPECIAL=1 -f /dev/null myprog
cc -DSPECIAL=1 myprog.c -o myprog
Having said that, making your build nondeterministic by introducing variations which depend on ephemeral build-time whims is a recipe for insanity. Have two separate targets which create two separate binaries, one with the regular behavior, and the other with the foot-gun semantics.
DEPS := myprog.c prog2.c prog3.c  # or whatever your dependencies are

myprog: $(DEPS)

myprog-footgun: CFLAGS += -DSPECIAL=1
myprog-footgun: $(DEPS)  # same dependencies, different output file
	$(CC) $(CFLAGS) -o $@ $^
See Target-specific Variable Values in the GNU Make documentation for details of this syntax.
(Notice that Stack Overflow renders tabs in the Markdown source as spaces, so you will not be able to copy/paste this verbatim.)
I would perhaps in fact reverse the meaning of SPECIAL so that it enables the foot-gun version, rather than the other way around (the original version of this answer had this design, just because I had read your question that way originally).
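With the two targets above, a session might look like this (a sketch, assuming the minimal single-file example; the exact compiler lines depend on your make version):

$ make myprog             # regular behavior: arguments are ignored
$ make myprog-footgun     # built with -DSPECIAL=1: arguments are honored
$ ./myprog 0
$ ./myprog-footgun 0
>>0<<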
Related
I have several projects in which I use many custom macros. For example,
//prog.c
#include <stdio.h>

#ifdef DBG_LEVEL_1
#define P(d) printf("%s:%d\n", #d, d);
#else
#define P(...)
#endif

#ifdef INLINE
#define INL static inline
#else
#define INL
#endif

INL void func_Sqr(int a)
{
    printf("sqr(a):%d\n", a*a);
}

int main()
{
    int a = 3;
    P(a);
    func_Sqr(a);
    return 0;
}
I compile this in several ways,
gcc prog.c
gcc -DINLINE prog.c
gcc -DDBG_LEVEL_1 prog.c
gcc -DDBG_LEVEL_1 -DINLINE prog.c
Is there a way to set these macros as enabled by default via an environment variable?
We could solve this by creating a makefile where these macros are set, but I want to know if there is any Linux-environment-based solution.
Is there a way to set these macros as enabled by default via an environment variable?
Generally, no, there is not. GCC's arguments are not affected by environment variables (except for a few special cases such as DEPENDENCIES_OUTPUT).
I advise writing a small wrapper around the command, for example a bash function:
wgcc() {
    gcc "${GCCARGS[@]}" "$@"
}
and then doing:
GCCARGS+=(-DA=1 -DB=2)
wgcc something.c
is trivial to do, easy to understand, maintain, and communicate to other team members, and easy to implement and share. Surprising heisenbugs will be easy to track down, because wgcc is a unique name, distinct from gcc. You could still override the original command with gcc() { command gcc "${GCCARGS[@]}" "$@"; } or by creating a /usr/local/bin/gcc file, but that makes things a bit more confusing.
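Applied to the prog.c from the question, usage might look like this (a sketch; the output assumes both macros take effect as shown above):

GCCARGS+=(-DDBG_LEVEL_1 -DINLINE)
wgcc prog.c -o prog
./prog
a:3
sqr(a):9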
But! There is another way, and I would strongly advise against it, because it will be confusing to others and hard to maintain: you can use the COMPILER_PATH environment variable to override the compiler's subprograms and inject custom options. In steps:
Create a temporary directory
In that directory, symlink all of gcc's subprograms from their normal prefix, except the tool whose behavior you want to modify, like cc1.
Then create cc1 as a script that calls the original cc1 but uses some environment variable to pass extra arguments.
Then export the path as COMPILER_PATH so gcc will pick it up.
On my Arch Linux with gcc 10.1.0 I did:
mkdir /tmp/temp
cd /tmp/temp
for i in /usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0/{cc1plus,collect2,lto-wrapper,lto1}; do ln -vs "$i"; done
printf "%s\n" '#!/bin/sh' '/usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0/cc1 $GCCARGS "$@"' > cc1
chmod +x cc1
export COMPILER_PATH=/tmp/temp
After that you can:
export GCCARGS=-DA=1
gcc -xc - <<<'int main() { printf("A=%d\n", A); }'
./a.out
> A=1
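To double-check that gcc is really resolving cc1 through COMPILER_PATH, you can ask the driver which subprogram it would run; -print-prog-name is a standard gcc option, and with the setup above it should point at the wrapper:

$ gcc -print-prog-name=cc1
/tmp/temp/cc1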
OK, I'm going to assume this is an easy question. I have a .c file and a Makefile. I'm using Ubuntu 12.10, if that matters. I am trying to understand what to type in the terminal so that make builds an executable, an assembly source file, and an object file in the directory that contains these two files. I have nasm installed, but I'm not sure if there is something else I need. I understand how to do this on Windows, but I can't seem to get it to work on Linux. I have changed the Makefile to accommodate Linux.
I know this is probably super easy, but I'm pretty new to Linux and don't really understand some of the things I think I should be able to figure out, so I apologize if this seems too easy.
$ make firstlab.c firstlab
is what I am typing in the terminal once I am in the right directory. The feedback I get is:
make: Nothing to be done for `homework1.c'.
gcc homework1.c -o homework1
homework1.c: In function ‘main’:
homework1.c:20:5: warning: incompatible implicit declaration of built-in function ‘printf’ [enabled by default]
homework1.c:21:5: warning: incompatible implicit declaration of built-in function ‘scanf’ [enabled by default]
"
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int firstNumber = 0;
    int secondNumber = 0;
    int result = 0;

    printf("Enter first value: ");
    scanf("%d", &firstNumber);
    printf("Enter second value: ");
    scanf("%d", &secondNumber);

    if (firstNumber >= secondNumber)
        result = firstNumber - secondNumber;
    else if (secondNumber > firstNumber)
        result = secondNumber + firstNumber;

    printf("Result: %d\n", result);
    system("pause");
    return result;
}
Makefile:
##################
PROJECT = Homework1
##################

CC = gcc

# win 32
#RM = del
# linux
RM = rm -f

BIN = $(PROJECT).exe
OBJ = $(PROJECT).o

all: $(BIN)

clean:
	${RM} $(OBJ) $(BIN) $(PROJECT).s

$(BIN): $(OBJ)
	$(CC) $(OBJ) -o $(PROJECT).exe

$(OBJ): $(PROJECT).s
	$(CC) -c $(PROJECT).s -o $(PROJECT).o

$(PROJECT).s: $(PROJECT).c
	$(CC) -c $(PROJECT).c -S -masm=intel
Any help is appreciated.
Are you sure you have a makefile? The output you show doesn't seem to line up with that assumption.
make firstlab.c firstlab is a bit weird. You could just replace it with make firstlab and it would have the same result. If you want an object file, type make firstlab.o.
All of that behaviour depends on make's implicit rules. You probably should write a makefile for your project to control the behaviour better. To support creating the assembly file (firstlab.s) you'll have to do that anyway. A rule something like:
%.s : %.c
	$(CC) $(CFLAGS) -S -o $@ $<
should do it. You can make similar rules for the executable and the object files; a sketch follows below. I strongly recommend a quick glance at the GNU Make Manual to get started.
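Putting it all together, a minimal complete Makefile for this assignment might look like the following (a sketch: it assumes GNU Make, gcc, and a single source file named firstlab.c; recipe lines must start with a tab):

CC     = gcc
CFLAGS = -Wall

# Listing firstlab.s under 'all' also stops make from deleting it
# as an intermediate file of the .c -> .s -> .o chain.
all: firstlab firstlab.s

firstlab: firstlab.o
	$(CC) $(CFLAGS) -o $@ $^

%.o: %.s
	$(CC) $(CFLAGS) -c -o $@ $<

%.s: %.c
	$(CC) $(CFLAGS) -S -o $@ $<

clean:
	rm -f firstlab firstlab.o firstlab.s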
To fix the printf and scanf warnings, add #include <stdio.h> at the top of your program.
I have a C program that tries to modify a const string literal; I have since learned that this is not allowed.
When I compile the code with clang test.c the compiler gives no warning. But when I compile it with clang++ test.c it gives a warning:
test.c:6:15: warning: conversion from string literal to 'char *' is deprecated
[-Wdeprecated-writable-strings]
char *s = "hello world";
^
The problem is that, as it turns out, clang++ is just a symbolic link to clang:
ll `which clang++`
lrwxr-xr-x 1 root admin 5 Jan 1 12:34 /usr/bin/clang++@ -> clang
So my question is: how can clang++ behave differently from clang, given that it's a symbolic link to clang?
Clang is looking at its argv[0] and altering its behavior depending on what it sees. This is a discouraged but not rare trick, going at least as far back as 4.2BSD's ex and vi, which were the same executable, and probably farther.
In this case, clang is compiling your .c file as C, and clang++ is compiling it as C++. This is a historical wart which you should not rely on; use the appropriate compiler command and make sure that your file extension reflects the true contents of the file.
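If you want to choose the language explicitly instead of relying on the name of the binary, both clang and gcc accept the -x option:

$ clang -x c++ -c test.c   # treated as C++ despite the .c suffix
$ clang -x c -c test.c     # treated as C, explicitly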
By convention, the name by which a command is invoked is passed as argv[0]; it is not especially unusual for programs to change their behavior based on this. (Historically, ln, cp, and mv were hardlinks to the same executable on Research Unix and used argv[0] to decide which action to do. Also, most shells look for a leading - in argv[0] to decide if they should be a login shell.) Often there is also some other way to get the same effect (options, environment variables, etc.); you should in general use this instead of playing argv[0] games.
There are reasons to do this, but in most cases it's not a good idea to rely on it or to design programs around it.
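As an illustration of the technique itself, here is a minimal sketch (the name frobnicate is made up; this is not how clang implements its dispatch):

#include <stdio.h>
#include <string.h>

/* Strip any leading path from argv[0], since the program may be
   invoked as /usr/local/bin/frobnicate rather than plain frobnicate. */
static const char *progname(const char *argv0)
{
    const char *slash = strrchr(argv0, '/');
    return slash ? slash + 1 : argv0;
}

int main(int argc, char **argv)
{
    if (argc > 0 && strcmp(progname(argv[0]), "frobnicate") == 0)
        puts("acting as frobnicate");
    else
        puts("acting as the default tool");
    return 0;
}

Hard-linking or symlinking the same binary under a second name then selects the other behavior at run time.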
This question is related to this one as well as its answer.
I just discovered some ugliness in a build I'm working on. The situation looks somewhat like the following (written in gmake format); note, this specifically applies to a 32-bit memory model on sparc and x86 hardware:
OBJ_SET1 := some objects
OBJ_SET2 := some objects

# note: OBJ_SET2 doesn't get this flag
${OBJ_SET1} : CCFLAGS += -PIC

${OBJ_SET1} ${OBJ_SET2} : %.o : %.cc
	${CCC} ${CCFLAGS} -m32 -o ${@} -c ${<}

obj1.o : ${OBJ_SET1}
obj2.o : ${OBJ_SET2}
sharedlib.so : obj1.o obj2.o

obj1.o obj2.o sharedlib.so :
	${LINK} ${LDFLAGS} -m32 -PIC -o ${@} ${^}
Clearly it can work to mix objects compiled with and without PIC in a shared object (this has been in use for years). I don't know enough about PIC to know whether it's a good idea/smart, and my guess is in this case it's not needed but rather it's happening because someone didn't care enough to find out the right way to do it when tacking on new stuff to the build.
My question is:
Is this safe?
Is it a good idea?
What potential problems can occur as a result?
If I switch everything to PIC, are there any non-obvious gotchas that I might want to watch out for?
Forgot I even wrote this question.
Some explanations are in order first:
Non-PIC code may be loaded by the OS into any position in memory in [most?] modern OSs. After everything is loaded, it goes through a phase that fixes up the text segment (where the executable stuff ends up) so it correctly addresses global variables; to pull this off, the text segment must be writable.
PIC executable data can be loaded once by the OS and shared across multiple users/processes. For the OS to do this, however, the text segment must be read-only -- which means no fix-ups. The code is compiled to use a Global Offset Table (GOT) so it can address globals relative to the GOT, alleviating the need for fix-ups.
A shared object can be built without PIC; although PIC is strongly encouraged, it doesn't appear to be strictly necessary. If the OS must fix up the text segment, it's forced to load it into memory that's marked read-write ... which prevents sharing across processes/users.
If an executable binary is built /with/ PIC, I don't know what goes wrong under the hood but I've witnessed a few tools become unstable (mysterious crashes & the like).
The answers:
Mixing PIC and non-PIC, or using PIC in executables, can cause instabilities that are hard to predict and track down. I don't have a technical explanation for why.
... to include segfaults, bus errors, stack corruption, and probably more besides.
Non-PIC in shared objects is probably not going to cause any serious problems, though it can result in more RAM used if the library is used many times across processes and/or users.
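For what it's worth, one way to detect non-PIC code in an existing shared object is to look for the TEXTREL dynamic tag, which the link-editor records whenever the text segment will need load-time fix-ups (a sketch using GNU binutils; on Solaris, elfdump -d does the same job):

$ readelf -d sharedlib.so | grep TEXTREL

Any output here means the library contains text relocations, so its text segment cannot be shared read-only.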
update (4/17)
I've since discovered the cause of some of the crashes I had seen previously. To illustrate:
/*header.h*/
#include <map>
#include <string>  // also needed, for std::string

typedef std::map<std::string,std::string> StringMap;
StringMap asdf;

/*file1.cc*/
#include "header.h"

/*file2.cc*/
#include "header.h"

int main( int argc, char** argv ) {
    for( int ii = 0; ii < argc; ++ii ) {
        asdf[argv[ii]] = argv[ii];
    }
    return 0;
}
... then:
$ g++ file1.cc -shared -fPIC -o libblah1.so
$ g++ file1.cc -shared -fPIC -o libblah2.so
$ g++ file1.cc -shared -fPIC -o libblah3.so
$ g++ file1.cc -shared -fPIC -o libblah4.so
$ g++ file1.cc -shared -fPIC -o libblah5.so
$ g++ -zmuldefs file2.cc -Wl,-{L,R}$(pwd) -lblah{1..5} -o fdsa
# ^^^^^^^^^
# This is the evil that made it possible
$ args=(this is the song that never ends);
$ eval ./fdsa $(for i in {1..100}; do echo -n ${args[*]}; done)
That particular example may not end up crashing, but it's basically the situation that had existed in that group's code. If it does crash it'll likely be in the destructor, usually a double-free error.
Many years earlier they had added -zmuldefs to their build to get rid of multiply-defined symbol errors. The compiler emits code for running constructors/destructors on global objects. -zmuldefs forces the duplicates to live at the same location in memory, but it still runs the constructors/destructors once for the exe and once for each library that included the offending header -- hence the double free.
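You can see the duplication directly with nm: every library built from the offending header carries its own BSS-resident copy of asdf, each with its own constructor/destructor registration (a sketch; the exact symbol-type letter may vary by toolchain):

$ for i in 1 2 3 4 5; do nm -C libblah$i.so | grep ' asdf$'; done

Each line printed is a separate definition of the same map -- and each one is destroyed once at exit, which is the double free in miniature.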
I know that by default undefined symbols are ignored at compile time. However, I would also like them to be ignored at run time. I need to distribute a .so that will run with and without MPI. I will know ahead of time whether it is an MPI job, and if it is not, I won't make any MPI_* calls. If it's not an MPI run, I need the application not to care that it cannot resolve the MPI_* symbols.
Is this possible? I could have sworn I've done this before, but I can't get it working. Every time I run, I immediately get the following, even though the logic in my code will never allow that symbol to be referenced:
undefined symbol: hpmp_comm_world
For what it's worth I am using the Intel Fortran Compiler to build the .so file.
EDIT
I found the linker flag -z lazy, which is supposed to resolve references to functions when the function is first called, which is what I want. That doesn't fix my problem, though: hpmp_comm_world is a variable, not a function. Should that make a difference?
You can define a symbol to be a weak reference to its definition. Then, the symbol's value will be zero if the definition is not present.
For example, suppose the following is ref.c, which references a function and variable that may or may not be present; we'll use it to build libref.so (corresponding to your library, in your question):
#include <stdio.h>

void global_func(void);
void global_func(void) __attribute__((weak));
extern int global_variable __attribute__((weak));

void ref_func(void) {
    printf("global_func = %p\n", global_func);
    if (&global_variable)
        global_variable++;
    if (global_func)
        global_func();
}
Here, global_func and global_variable are the weak references to the possibly-available function and variable. This code prints the function's address, increments the variable if it is present, and calls the function if it is present. (Note that the function's and variable's addresses are zero when they are not defined, so it is &global_variable that you must compare with zero.)
And suppose this is def.c, which defines global_func and global_variable; we'll use it to build libdef.so (corresponding to MPI, in your question):
#include <stdio.h>

int global_variable;

void global_func(void) {
    printf("Hi, from global_func! global_variable = %d\n", global_variable);
}
And finally, suppose we have a main program, main.c, which calls ref_func from libref.so:
#include <stdio.h>

extern void ref_func(void);

int main(int argc, char **argv) {
    printf("%s: ", argv[0]);
    ref_func();
    return 0;
}
Here's the Makefile that builds libref.so and libdef.so, and then builds two executables, both of which link against libref.so, but only one of which links against libdef.so:
all: ref-absent ref-present

ref-absent: main.o libref.so
	$(CC) $(CFLAGS) $(LDFLAGS) $^ -o $@

ref-present: main.o libref.so libdef.so
	$(CC) $(CFLAGS) $(LDFLAGS) $^ -o $@

lib%.so: %.o
	$(CC) $(CFLAGS) $(LDFLAGS) -shared $^ -o $@

ref.o def.o: CFLAGS += -fpic

clean:
	rm -f *.o *.so ref-absent ref-present
Do the build:
$ make
cc -c -o main.o main.c
cc -fpic -c -o ref.o ref.c
cc -shared ref.o -o libref.so
cc main.o libref.so -o ref-absent
cc -fpic -c -o def.o def.c
cc -shared def.o -o libdef.so
cc main.o libref.so libdef.so -o ref-present
$
Note that both ref-absent and ref-present linked without problems, even though there is no definition for global_func in ref-absent.
Now we can run the programs, and see that ref-absent skips the function call, while ref-present uses it. (We have to set LD_LIBRARY_PATH to allow the dynamic linker to find our shared libraries in the current directory.)
$ LD_LIBRARY_PATH=. ./ref-absent
./ref-absent: global_func = (nil)
$ LD_LIBRARY_PATH=. ./ref-present
./ref-present: global_func = 0x15d4ac
Hi, from global_func! global_variable = 1
$
The trick for you will be getting the ((weak)) attribute attached to every declaration of every MPI function your library references. However, as ref.c shows, there can be multiple declarations, and as long as one of them mentions the weak attribute, you're done. So you'll probably have to say something like this (I don't really know MPI):
#include <mpi.h>
mpi_fake_type_t mpi_function_foo(mpi_arg_type_t) __attribute__((weak));
mpi_fake_type_t mpi_function_bar(mpi_other_arg_type_t) __attribute__((weak));
Every reference to an MPI function needs to be in the scope of a ((weak)) declaration for that function; that's how the compiler decides what sort of symbol reference to put in the object file. You'll want to have automated tests to verify that you haven't accidentally generated any non-weak references.
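nm can serve as the basis for such a test: undefined weak references are listed with a lowercase 'w' (functions) or 'v' (objects), while a non-weak undefined reference shows up as 'U'. So a check along these lines could catch a stray strong reference (a sketch; adjust the pattern to match your MPI implementation's symbol prefixes):

$ nm -D libref.so | grep ' U ' | grep -i mpi && echo 'ERROR: non-weak MPI reference found'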