Linux - can you compile AND run a program in one terminal line?

For example, a program named program.c
g++ program.c -o programName
./programName
Is there any way to consolidate these two lines?

Yes, you could write...
g++ program.c -o programName && ./programName
Which will only attempt to run your program if compilation succeeded.
For a more general approach, you could write a bash script...
#!/bin/sh
g++ "$1" -o "$2" && ./"$2"
Then you could do (provided it's on your PATH, it's executable and it's called mycompile)...
mycompile program.c programName
To make this script available on your PATH, you can put it in your bin directory or any other directory listed in your PATH (check with echo $PATH). If you don't want to do that, open your ~/.bashrc file and add the script's parent directory to your PATH with PATH="$PATH:your/new/dir" (keep in mind that every script in that folder will then be reachable).
Ensure it's executable (check with ls -l mycompile); if not, you can add that permission with chmod +x mycompile.
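If you'd rather not type the output name at all, here is a minimal sketch of a variant that derives it from the source file name (just an illustration, assuming a plain .c input):
#!/bin/sh
# Sketch: derive the output name from the source file (program.c -> program)
src="$1"
out="$(basename "$src" .c)"
g++ "$src" -o "$out" && ./"$out"
With that, mycompile program.c builds program and runs it if compilation succeeded.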

Like this:
g++ program.c -o programName && ./programName
Notice that the commands run sequentially, one after the other, and that ./programName is only executed if the compilation succeeds.
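For example, compare how && and a plain semicolon behave when the first command fails:
false && echo "won't print"   # with &&, the second command is skipped when the first fails
false ;  echo "does print"    # with ;, the second command runs regardless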

Related

How can I run executable in a different folder with a bash script?

I have a program, a.out, that will be set up in some sequence of folders, each folder gets an a.out and will produce some results in each folder. I am trying to execute these same programs in parallel. If I am in the folder, I just do ./a.out and it would run. I must execute a.out in its folder because a.out looks for a file inside the current directory. So if I am not in its folder, it won't find that file.
Running these programs is part of another job that is based in the rootDir. I am using MATLAB so I am trying to avoid using cd inside MATLAB since that would recompile the MATLAB code every time I use cd and greatly slow down the code.
I use the MATLAB code to write a CallParallel.sh, in it I have this line:
for i in ${JobsOnThisNode[@]};do echo $i;done | xargs -n1 -P ${SLURM_NTASKS_PER_NODE} sh -c '"$1"' sh;
$1 basically gets this command for each batch of parallel jobs incremented by jname and cname:
cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/$jname/$cname/ && ./a.out
I have tested this code from the rootDir and it successfully runs this program in the other folder. However, when I execute it in the bash script, I get the following errors:
sh: /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1/: Is a directory
sh: &&: command not found
sh: ./a.out: No such file or directory
If I understand it correctly, it somehow does not recognize &&, and cd only seems to check whether the argument is a directory instead of actually changing to it, so as a result there is no a.out to be found in the rootDir.
When I try this:
sh '"cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1"'
I get:
sh: "cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1": No such file or directory
sh '"cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1"'
means: run the file named "cd /mnt/home/thrust2/zf1005/Matlab/GAFit/RunningFolder/1/1" with sh, and no file with that name exists.
To simplify things, you can create a runner script that takes the directory as an argument:
#! /bin/bash
cd "$1" && /path/to/a.out
and then invoke runner from xargs.
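For example (a sketch, not the exact code from the question: it assumes the script is saved as runner somewhere on your PATH, and that a hypothetical array JobDirsOnThisNode holds just the directory paths rather than full shell commands):
for d in "${JobDirsOnThisNode[@]}"; do echo "$d"; done | xargs -n1 -P "${SLURM_NTASKS_PER_NODE}" runner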
BTW, you need only 1 a.out not 1 per dir.

How can I run an executable file without the "./" using a MakeFile?

I want to run my program with an executable without the "./"
For example lets say I have the makefile:
all: RUN
RUN: main.o
	gcc -o RUN main.o
main.o: main.c
	gcc -c main.c
So in order to run the program normally I would say in the terminal "make" then put "./RUN" to invoke the program.
But I would just like to say in the terminal "make" then "RUN" to invoke the program.
So to conclude I would just like to say >RUN instead of >./RUN inside the terminal. Is there any command I can use to do this inside the Makefile?
When I just put "RUN" in the terminal it just says command not found.
It is a matter of $PATH, which is imported by make from your environment.
You might set it in your Makefile, perhaps with
export PATH=$(PATH):.
or
export PATH:=$(shell echo $$PATH:.)
but I don't recommend doing that (it could be a security hole).
I recommend, on the contrary, explicitly using ./RUN in your Makefile, which is much more readable and less error-prone (what would happen if you had another RUN program somewhere else in your PATH?).
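If you're unsure whether such a clash exists, bash can show you every RUN it would find:
type -a RUN    # lists every RUN on the PATH (plus aliases and functions), in the order the shell would try them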
BTW, you'd better read more about make; run make -p once to understand the builtin rules known to make, and have
CC= gcc
CFLAGS+= -Wall -g
(because you really want all warnings & debug info)
and simply
main.o: main.c
(without recipes in that rule) in your Makefile
Change your makefile to
all: RUN
RUN: main.o
	gcc -o RUN main.o && ./RUN
main.o: main.c
	gcc -c main.c
just put ./filename in your makefile

Executing bash cd command in Makefile [duplicate]

For example, I have something like this in my makefile:
all:
	cd some_directory
But when I typed make I saw only 'cd some_directory', like in the echo command.
It is actually executing the command, changing the directory to some_directory; however, this is performed in a sub-process shell, and affects neither make nor the shell you're working from.
If you're looking to perform more tasks within some_directory, you need to add a semi-colon and append the other commands as well. Note that you cannot use new lines as they are interpreted by make as the end of the rule, so any new lines you use for clarity need to be escaped by a backslash.
For example:
all:
	cd some_dir; echo "I'm in some_dir"; \
	gcc -Wall -o myTest myTest.c
Note also that the semicolon is necessary between every command even though you add a backslash and a newline. This is due to the fact that the entire string is parsed as a single line by the shell. As noted in the comments, you should use '&&' to join commands, which means they only get executed if the preceding command was successful.
all:
	cd some_dir && echo "I'm in some_dir" && \
	gcc -Wall -o myTest myTest.c
This is especially crucial when doing destructive work, such as clean-up, as you'll otherwise destroy the wrong stuff, should the cd fail for whatever reason.
A common usage, though, is to call make in the subdirectory, which you might want to look into. There's a command-line option for this, so you don't have to call cd yourself, so your rule would look like this
all:
	$(MAKE) -C some_dir all
which will change into some_dir and execute the Makefile in that directory, with the target "all". As a best practice, use $(MAKE) instead of calling make directly, as it'll take care to call the right make instance (if you, for example, use a special make version for your build environment), as well as provide slightly different behavior when running using certain switches, such as -t.
For the record, make always echos the command it executes (unless explicitly suppressed), even if it has no output, which is what you're seeing.
Starting from GNU make 3.82 (July 2010), you can use the .ONESHELL special target to run all recipes in a single instantiation of the shell (bold emphasis mine):
New special target: .ONESHELL instructs make to invoke a single instance of the shell and provide it with the entire recipe, regardless of how many lines it contains.
.ONESHELL: # Applies to every target in the file!
all:
	cd ~/some_dir
	pwd # Prints ~/some_dir if cd succeeded
another_rule:
	cd ~/some_dir
	pwd # Prints ~/some_dir if cd succeeded
Note that this will be equivalent to manually running
$(SHELL) $(.SHELLFLAGS) "cd ~/some_dir; pwd"
# Which gets replaced to this, most of the time:
/bin/sh -c "cd ~/some_dir; pwd"
Commands are not linked with && so if you want to stop at the first one that fails, you should also add the -e flag to your .SHELLFLAGS:
.SHELLFLAGS += -e
Also the -o pipefail flag might be of interest:
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
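As a quick illustration of what -e and pipefail change, independent of make:
bash -c 'cd /nonexistent; echo "still runs"'            # without -e, the echo still runs after the failed cd
bash -ec 'cd /nonexistent; echo "never runs"'           # with -e, the shell stops at the failed cd
bash -c 'false | true; echo "exit=$?"'                  # prints exit=0: only the last command's status counts
bash -o pipefail -c 'false | true; echo "exit=$?"'      # prints exit=1: pipefail reports the failing command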
Here's a cute trick to deal with directories and make. Instead of using multiline strings, or "cd ;" on each command, define a simple chdir function as so:
CHDIR_SHELL := $(SHELL)
define chdir
$(eval _D=$(firstword $(1) $(@D)))
$(info $(MAKE): cd $(_D)) $(eval SHELL = cd $(_D); $(CHDIR_SHELL))
endef
Then all you have to do is call it in your rule as so:
all:
	$(call chdir,some_dir)
	echo "I'm now always in some_dir"
	gcc -Wall -o myTest myTest.c
You can even do the following:
some_dir/myTest:
	$(call chdir)
	echo "I'm now always in some_dir"
	gcc -Wall -o myTest myTest.c
What do you want it to do once it gets there? Each command is executed in a subshell, so the subshell changes directory, but the end result is that the next command is still in the current directory.
With GNU make, you can do something like:
BIN=/bin
foo:
	$(shell cd $(BIN); ls)
Here is the pattern I've used:
.PHONY: test_py_utils
PY_UTILS_DIR = py_utils
test_py_utils:
	cd $(PY_UTILS_DIR) && black .
	cd $(PY_UTILS_DIR) && isort .
	cd $(PY_UTILS_DIR) && mypy .
	cd $(PY_UTILS_DIR) && pytest -sl .
	cd $(PY_UTILS_DIR) && flake8 .
My motivations for this pattern are:
The above solution is simple and readable (albeit verbose)
I read the classic paper "Recursive Make Considered Harmful", which discouraged me from using $(MAKE) -C some_dir all
I didn't want to use just one line of code (punctuated by semicolons or &&) because it is less readable, and I fear that I will make a typo when editing the make recipe.
I didn't want to use the .ONESHELL special target because:
that is a global option that affects all recipes in the makefile
using .ONESHELL causes all lines of the recipe to be executed even if one of the earlier lines has failed with a nonzero exit status. Workarounds like calling set -e are possible, but such workarounds would have to be implemented for every recipe in the makefile.
To change dir
foo:
	$(MAKE) -C mydir
multi:
	$(MAKE) -C / -C my-custom-dir ## Equivalent to /my-custom-dir
PYTHON = python3
test:
	cd src/mainscripts; ${PYTHON} -m pytest
This keeps the makefile in the root directory and runs the tests from the source root; it worked for me.
Like this:
target:
	$(shell cd ....); \
	# ... commands executed in this directory
	# ... no need to go back (using "cd -" or so)
	# ... the next target will automatically be in the previous dir
Good luck!

How to configure GCC to show all warnings by default?

I think it would be good, and not much bad, if the -Wall flag were switched on by default. How do I configure GCC like this?
Are there any drawbacks to this, other than the fact that a lot of warnings will flood your terminal when you are compiling some large program from source?
Add this line to your ~/.bashrc if you use bash as your shell.
alias gcc='gcc -Wall'
Update:
you can refer to this question on https://superuser.com/questions/519692/alias-gcc-gcc-fpermissive-or-modifying-configure-script
If you use make, you need to override make's variables CC and CXX from within the .bashrc:
export CC="gcc -Wall"
export CXX="g++ -Wall"
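A quick way to see the effect (a sketch with a hypothetical demo.c; make's builtin rule picks up $(CC) from the environment):
export CC="gcc -Wall"
printf 'int main(){int x; return 0;}\n' > demo.c   # contains an unused variable
make demo                                          # runs: gcc -Wall demo.c -o demo, and warns about the unused x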
juzzlin suggested that a good method would be to write a wrapper for gcc. Marc Glisse also suggested that writing one is the best way to achieve what I want. So that's just what I did.
I made a bash script that calls gcc for me:
#!/bin/bash
echo -n "Compiling $1..."
gcc -Wall -Werror -o "$(basename "$1" .c).out" "$1"
a=$?
if [[ "$a" -ne 0 ]]; then
    echo "Failed!"
else
    echo "Done."
    echo "Executing:"
    ./"$(basename "$1" .c).out"
fi
Then I copied the script to /usr/bin and made it executable:
sudo cp car /usr/bin
sudo chmod +x /usr/bin/car
(The name of the script is car which stands for "Compile And Run")
So whenever I want to compile a source file and run it, I will type:
car mysourcefile.c
As discussed in the comments (although it's not a direct answer to your question), using a Makefile has many benefits. It provides a place to put your build commands that will always stay up to date, as long as you only build with make. It also makes it easy to run tests at each build.
Writing tests is a good habit, even when you're just working on a small and insignificant piece of code for homework. It allows you to spot some dumb mistakes that you would otherwise miss, and to be sure you don't break your existing code by modifying it (especially with last-minute modifications).
An example of such a Makefile (here I have nothing to build apart from the test because it's a header only component):
all:
	g++ -O2 -Wall -Werror -std=c++11 test_polynomial.cc -o test_polynomial -lgmp
	g++ -O2 -Wall -Werror -std=c++11 test_g2polynomial.cc -o test_g2polynomial
	./test_polynomial --log_level=test_suite
	./test_g2polynomial --log_level=test_suite
clean:
	rm -f test_polynomial test_g2polynomial
Note: the example is not a very good one, as I don't even factor the build options out into CFLAGS. If I want to add a flag, I have to add it to both commands!
Another benefit is that you always run make to build, whatever the language, the dependencies or even the build system (when working on a project using scons or another build system, I still write a Makefile doing all the commands I do when building and testing!).
This allows my personal addition on it (but here we're completely off-topic): I have a build script named autobuild looping on make each time I write a file in vim. I code in screen and run autobuild in a small window at the bottom of my screen. This way, each change is built and tested as soon as I write the file.

/bin/sh: pushd: not found

I am doing the following inside a makefile
pushd %dir_name%
and I get the following error
/bin/sh: pushd: not found
Can someone please tell me why this error is showing up?
I checked my $PATH variable and it contains /bin so I don't think that is causing a problem.
pushd is a bash enhancement to the POSIX-specified Bourne Shell. pushd cannot be easily implemented as a command, because the current working directory is a feature of a process that cannot be changed by child processes. (A hypothetical pushd command might do the chdir(2) call and then start a new shell, but ... it wouldn't be very usable.) pushd is a shell builtin, just like cd.
So, either change your script to start with #!/bin/bash or store the current working directory in a variable, do your work, then change back. Depends if you want a shell script that works on very reduced systems (say, a Debian build server) or if you're fine always requiring bash.
add
SHELL := /bin/bash
at the top of your makefile
I found it in another question: How can I use Bash syntax in Makefile targets?
A workaround for this would be to have a variable get the current working directory. Then you can cd out of it to do whatever, then when you need it, you can cd back in.
i.e.
oldpath=`pwd`
#do whatever your script does
...
...
...
# go back to the dir you wanted to pushd
cd $oldpath
sudo dpkg-reconfigure dash
Then select no.
This is because pushd is a builtin function in bash, so it is not related to the PATH variable, and it is not supported by /bin/sh (which is used by make by default). You can change that by setting SHELL, although it will not work directly (test1).
You can instead run all the commands through bash -c "...". That will make the commands, including pushd/popd, run in a bash environment (test2).
SHELL = /bin/bash
test1:
	@echo before
	@pwd
	@pushd /tmp
	@echo in /tmp
	@pwd
	@popd
	@echo after
	@pwd
test2:
	@/bin/bash -c "echo before;\
	pwd; \
	pushd /tmp; \
	echo in /tmp; \
	pwd; \
	popd; \
	echo after; \
	pwd;"
When running make test1 and make test2 it gives the following:
prompt>make test1
before
/download/2011/03_mar
make: pushd: Command not found
make: *** [test1] Error 127
prompt>make test2
before
/download/2011/03_mar
/tmp /download/2011/03_mar
in /tmp
/tmp
/download/2011/03_mar
after
/download/2011/03_mar
prompt>
For test1, even though bash is used as a shell, each command/line in the rule is run by itself, so the pushd command is run in a different shell than the popd.
This ought to do the trick:
( cd dirname ; pwd ); pwd
The parentheses start a new child shell, so the cd changes the directory within that child only, and any command after it within the parentheses will run in that folder. Once you exit the parentheses, you are back wherever you were before.
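For example (starting from a hypothetical /home/me, just to show the effect):
pwd                  # /home/me
( cd /tmp && pwd )   # /tmp - the cd only affects the subshell
pwd                  # /home/me again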
Here is a method to point sh -> bash. Run this command in a terminal:
sudo dpkg-reconfigure dash
After this, you should see
ls -l /bin/sh
point to /bin/bash (and not to /bin/dash).
Your shell (/bin/sh) is trying to find 'pushd', but it can't find it, because 'pushd', 'popd' and other commands like that are built into bash.
Launch your script using bash (/bin/bash) instead of sh as you are doing now, and it will work.
Synthesizing from the other responses: pushd is bash-specific and your make is using another POSIX shell. There is a simple workaround: use a subshell for the part that needs a different directory, so just try changing it to:
test -z gen || mkdir -p gen \
&& ( cd $(CURRENT_DIRECTORY)/genscript > /dev/null \
&& perl genmakefile.pl \
&& mv Makefile ../gen/ ) \
&& echo "" > $(CURRENT_DIRECTORY)/gen/SvcGenLog
(I substituted the long path with a variable expansion. There probably is one in the makefile already, and it clearly expands to the current directory.)
Since you are running it from make, I would probably replace the test with a make rule, too. Just
gen/SvcGenLog :
	mkdir -p gen
	cd genscript > /dev/null \
	&& perl genmakefile.pl \
	&& mv Makefile ../gen/
	echo "" > gen/SvcGenLog
(dropped the current directory prefix; you were using relative path at some points anyway)
And then just make the rule depend on gen/SvcGenLog. It would be a bit more readable, and you can make it depend on genscript/genmakefile.pl too, so the Makefile in gen will be regenerated if you modify the script. Of course, if anything else affects the content of the Makefile, you can make the rule depend on that too.
Note that each line executed by a makefile is run in its own shell anyway. If you change directory, it won't affect subsequent lines. So you probably have little use for pushd and popd; your problem is more the opposite: getting the directory to stay changed for as long as you need it!
Run "sudo apt install bash".
It will install everything you need, and the command will work.
