I have a test, call it sometest, registered in SCons which can be invoked by doing:
scons sometest.test
But sometest also expects command-line arguments; namely, it looks for the flag -beHappy to run in another mode.
When I attempt to invoke the test using:
scons sometest.test -beHappy
I get a few warnings from SCons:
Warning: ignoring -e option
Warning: the -W option is not yet implemented
Then it proceeds to run the test without passing the parameter.
I tried
scons sometest.test -beHappy
scons 'sometest.test -beHappy'
to no avail.
I don't use scons, but it may be worth trying the old double hyphen, --. Many shells use this to disable option processing, passing the remaining options on to the underlying script to process.
scons sometest.test -- -beHappy
From bash's man page:
A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments. An argument of - is equivalent to --.
Check out bash's man page and search for -- for more information. If you use another shell be sure to check out that shell's man page as well.
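As a concrete illustration of the convention, using rm (which follows the same getopt-style rule; whether scons itself honors -- is a separate question):

```shell
# Work with a file whose name starts with a dash. Without --,
# rm tries to parse the name as a bundle of single-letter options.
touch ./-beHappy                                  # creates a file literally named -beHappy
rm -beHappy 2>/dev/null || echo "rm parsed -beHappy as options"
rm -- -beHappy && echo "removed after --"
```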
Hope this helps.
Try...
scons sometest.test --beHappy
SCons options that are longer than one character are prefixed with two dashes (--).
It's helpful to know that SCons uses optparse under the hood.
When you entered -beHappy, SCons read it as the equivalent of -b -e -H -a -p -p -y.
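The same bundling happens with POSIX getopts in the shell. A small sketch (the flag letters in the optstring are chosen so the demo spells out the example):

```shell
# getopts splits the cluster -beHappy into the single-letter
# options b, e, H, a, p, p, y, just as optparse does.
parse() {
  OPTIND=1          # reset between calls
  seen=""
  while getopts "abeHpy" opt "$@"; do
    seen="$seen$opt"
  done
  echo "$seen"
}
parse -beHappy       # prints: beHappy
```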
I have a reasonably large project (4272 .o files) and I can't get it to link with GNU Make. I run into make: /bin/sh: Argument list too long. This is a Qt 5 project that uses qmake to generate the makefile.
I know there are lots of questions about this, but I don't know how to apply any of the solutions to my problem. I'm also not totally sure why I'm running into this at the linking step. The error I get is:
make: /bin/sh: Argument list too long
The makefile entry for linking my project looks like this:
build/debug/my_target/my_target: $(OBJECTS)
@test -d build/debug/my_target/ || mkdir -p build/debug/my_target/
$(LINK) $(LFLAGS) -o $(TARGET) $(OBJECTS) $(OBJCOMP) $(LIBS)
which expands to something like:
@echo linking build/debug/my_target/my_target && clang++ -ccc-gcc-name g++ -lc++ -L/path/to/licensing/lib -Wl,-rpath,/path/to/qt/lib -Wl,-rpath-link,/path/to/qt/lib -o build/debug/my_target/my_target build/debug/my_target/obj/object1.o build/debug/my_target/obj/object2.o ... build/debug/my_target/obj/object4272.o ... [ a bunch of moc_X.o ] ... [ a bunch of libs ] -lGL -lpthread -no-pie
This is pretty long. But here's where it gets weird: when I put the expanded command (everything after the @echo linking build/debug/my_target/my_target &&) into a shell script, it runs. The shell script is 202,420 characters (including the #!/bin/sh line). Also, if I get rid of the @echo ... && part of the command, I can run make and linking works.
Another workaround: if I manually edit my makefile so that the linking command contains build/debug/my_target/*.o instead of $(OBJECTS) it works:
build/debug/my_target/my_target: $(OBJECTS)
@test -d build/debug/my_target/ || mkdir -p build/debug/my_target/
$(LINK) $(LFLAGS) -o $(TARGET) build/debug/my_target/*.o $(OBJCOMP) $(LIBS)
I don't think I can get qmake to do this, though, so I'm stuck manually editing my makefile unless I can find another solution.
Answers to similar problems seem to focus on line breaks and how they're handled in makefiles. My shell script only has two lines (one after #!/bin/sh and one after the actual command). Also, one solution that people have come up with (for example this one) uses a for loop to iteratively run a command on each argument. I'm not sure how I could apply this here, since (I think) I need all those object files in my linker command.
How does @echo cause the max argument length to be exceeded?
Questions I originally asked that aren't really relevant:
(Note: as originally posted this question missed the @echo at the beginning of the linking command. That seems to be the answer to "why is this happening", and as such I don't really need to know the answer to the second question, which is answered in the first comment in any case).
Why is this happening? How is it that make is running into this error with a command that I can apparently run in a shell script?
How can I get around this if there's no way to run my command as an iterative series of shorter commands?
Various details about my system that might be relevant:
I'm running a fairly up-to-date Arch Linux system, kernel 5.8.10
ARG_MAX value is 2097152, the output from xargs --show-limits is:
Your environment variables take up 2343 bytes
POSIX upper limit on argument length (this system): 2092761
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2090418
Size of command buffer we are actually using: 131072
Maximum parallelism (--max-procs must be no greater): 2147483647
ulimit -s output: 8192 (I've tried setting this to much larger values, e.g. ulimit -s 65536 without success, which maybe isn't surprising since ARG_MAX appears to be much larger than the linker command).
GNU Make version is 4.3
clang/clang++ version is 10.0.1
Qt version is 5.15.1 (I'm fairly certain this isn't relevant, we've just switched our project over from 5.9.6 and I had the same problem then as well).
Just FYI, the reason removing the echo fixes the problem (this is what I was going to suggest as well) is that when you remove the special shell operator && and just have a simple command invocation with no shell features (multiple commands, special quoting, globbing, etc.), make uses the "fast path" to invoke your command.
That is, if make can determine that the shell would do nothing special with your command, other than run it, make will skip invoking the shell and instead run your command directly.
In that case you will not run up against the single-argument limit because it doesn't use the /bin/sh -c '...' form.
Of course, this can be a little magical and inflexible since you have to be careful to ensure no special shell operations are ever included in your link line. But if you can ensure this then it should solve your problem.
Why is this happening? How is it that make is running into this error with a command that I can apparently run in a shell script?
Because make runs the shell commands (recipes) by passing them as a single argument to /bin/sh -c, and that runs not only into the OS's limit on command-line arguments plus environment variables, but also into the much lower limit that Linux imposes on a single string from the command line or environment, which is usually 128k bytes.
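A quick sketch of that per-string limit on Linux (the 131072-byte figure is MAX_ARG_STRLEN; other systems differ):

```shell
# Build one ~200 KB string and try to pass it as a single argument
# to /bin/sh -c. On Linux this exceeds MAX_ARG_STRLEN, so the
# execve fails with "Argument list too long" before sh even starts.
big=$(head -c 200000 /dev/zero | tr '\0' x)
if ! /bin/sh -c ": $big" 2>/dev/null; then
  echo "single oversized argument rejected"
fi
```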
How can I get around this if there's no way to run my command as an iterative series of shorter commands?
As suggested by @ephemient, you can use the @arglist argument of gcc or ld (which directs it to take its arguments from the file arglist), and use the file function of GNU make to create that arglist file, which, being written by make internally, will not run into that OS limit.
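A minimal, hypothetical sketch of the idea (file and target names are invented; requires GNU make 4.0+ for $(file) and a linker that understands @file):

```shell
# make's $(file ...) writes the list from inside make itself, so no
# shell command line is involved; the linker would then read it back
# via @objs.rsp instead of taking thousands of .o arguments.
printf 'OBJECTS := a.o b.o c.o\nobjs.rsp:\n\t$(file > objs.rsp,$(OBJECTS))\n' > Makefile.demo
make -f Makefile.demo objs.rsp
cat objs.rsp    # a.o b.o c.o
# The real link line would become: $(LINK) $(LFLAGS) -o $(TARGET) @objs.rsp $(OBJCOMP) $(LIBS)
```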
Is it possible to add an option to an existing Bash command?
For example I would like to run a shell script when I pass -foo to a specific command (cp, mkdir, rm...).
You can make an alias for e.g. cp which calls a special script that checks for your special arguments:
$ alias cp='my-command-script cp'
(An alias cannot take parameters like $*; the shell simply appends whatever follows the alias to the expanded text, so the arguments reach the script by themselves.)
And the script can look like
#!/bin/sh
# Get the actual command to be called
command="$1"
shift
# To save the real arguments
arguments=""
# Check for "-foo"
for arg in $*
do
case $arg in
-foo)
# TODO: Call your "foo" script
;;
*)
arguments="$arguments $arg"
;;
esac
done
# Now call the actual command
$command $arguments
Some programmer dude's code may look cool and attractive... but you should use it very carefully for most commands: https://unix.stackexchange.com/questions/41571/what-is-the-difference-between-and
About usage of $* and $@:
You shouldn't use either of these, because they can break unexpectedly
as soon as you have arguments containing spaces or wildcards.
I used this myself for months until I realized it was the reason why my bash code sometimes didn't work.
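A space-safe rewrite of the wrapper above can use quoted "$@" and set -- to rebuild the argument list. Here is a sketch (the -foo action is just a placeholder echo):

```shell
# Space-safe variant: quoted "$@" keeps arguments containing spaces
# intact, and set -- rebuilds the list minus the -foo flag.
wrap() {
  cmd="$1"; shift
  n=$#
  i=0
  while [ "$i" -lt "$n" ]; do
    arg="$1"; shift
    case $arg in
      -foo) echo "foo hook would run here" ;;   # placeholder for your script
      *) set -- "$@" "$arg" ;;                  # keep this argument, quoting preserved
    esac
    i=$((i+1))
  done
  "$cmd" "$@"
}

wrap echo -foo "hello world" bar    # prints the hook line, then: hello world bar
```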
Consider a more reliable, though less easy and less portable, option. As pointed out in the comments, recompile the original command with your changes. That is:
Download the C/C++ source code from some respected developers' repositories:
https://github.com/torvalds/linux
http://git.savannah.gnu.org/cgit/coreutils.git/tree/src
https://github.com/coreutils/coreutils/tree/master/src
https://github.com/bluerise/openbsd-src/tree/master/bin
https://git.busybox.net/busybox/tree/coreutils
Add your code in C/C++ and compile with gcc/g++.
Also, I guess, you could edit bash itself so that it checks whether a string passed to it as a command matches some pattern, and if so, executes some different command or bash script instead:
https://tiswww.case.edu/php/chet/bash/bashtop.html#Availability
If you really are into this idea of customizing and adding functionality to your shell, maybe check out some other fashionable shells like zsh or fish; they may already have something like this, I don't know.
I am looking at a tcsh script that has the following shebang line:
#!/bin/tcsh -fb
# then executes some commands
What does the -b do?
From the man page:
-b  Forces a ''break'' from option processing, causing any further shell arguments to be treated as non-option arguments. The remaining arguments will not be interpreted as shell options. This may be used to pass options to a shell script without confusion or possible subterfuge. The shell will not run a set-user ID script without this option.
But I don't really understand what it means...
An example would be great.
Thanks.
Say, for example, you have a script that is named --help and you want to execute it using tcsh:
tcsh --help
This will obviously not work. The -b forces tcsh to stop looking for arguments and treat the rest of the command line as file names or arguments to scripts. So, to run the above weirdly named script, you could do
tcsh -b --help
I wrote a program for an assignment which is supposed to print its output to stdout. The assignment spec requires the creation of a Makefile which when invoked as make run > outputFile should run the program and write the output to a file, which has a SHA1 fingerprint identical to the one given in the spec.
My problem is that my makefile:
...
run:
java myprogram
also prints the command which runs my program (e.g. java myprogram) to the output file, so my file includes this extra line, causing the fingerprint to be wrong.
Is there any way to execute a command without the command invocation echoing to the command line?
Add @ to the beginning of the command to tell gmake not to print the command being executed. Like this:
run:
@java myprogram
As Oli suggested, this is a feature of Make and not of Bash.
On the other hand, Bash will never echo the commands being executed unless you tell it to do so explicitly (i.e. with the -x option).
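A quick demonstration of the difference (Makefile.at is a made-up file name):

```shell
# Two identical recipes; the @ on the second suppresses make's echo
# of the command line itself.
printf 'noisy:\n\techo hello\nquiet:\n\t@echo hello\n' > Makefile.at
make -f Makefile.at noisy    # prints "echo hello" and then "hello"
make -f Makefile.at quiet    # prints only "hello"
```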
Even simpler, use make -s (silent mode)!
You can also use .SILENT
.SILENT: run
hi:
echo "Hola!"
run:
java myprogram
In this case, make hi will echo the command, but make run will not.
The effect of prefixing a command with @ can be extended to a whole section by continuing the command with a trailing backslash on each line. If the output of a .PHONY target should be suppressed, one can begin the section with:
@printf "..."
I have just started using Linux and I am curious how shell built-in commands such as cd are defined.
Also, I'd appreciate if someone could explain how they are implemented and executed.
If you want to see how bash builtins are defined then you just need to look at Section 4 of The Bash Man Page.
If, however, you want to know how bash builtins are implemented, you'll need to look at the Bash source code because these commands are compiled into the bash executable.
One fast and easy way to see whether or not a command is a bash builtin is to use the help command. For example, help cd will show you how the cd builtin is defined. Similarly for help echo.
The actual set of built-ins varies from shell to shell. There are:
Special built-in utilities, which must be built-in, because they have some special properties
Regular built-in utilities, which are almost always built in, for performance or other considerations
Any standard utility can also be built in if a shell implementer wishes.
You can find out whether the utility is built in using the type command, which is supported by most shells (although its output is not standardized). An example from dash:
$ type ls
ls is /bin/ls
$ type cd
cd is a shell builtin
$ type exit
exit is a special shell builtin
Regarding the cd utility: theoretically there's nothing preventing a shell implementer from implementing it as an external command. cd cannot change the shell's current directory directly, but, for instance, cd could communicate the new directory to the shell process via a socket. But nobody does so because there's no point. The exception is some very old shells (which had no notion of built-ins), where cd used a dirty system hack to do its job.
How is cd implemented inside the shell? The basic algorithm is described here. It may also do some extra work to support the shell's additional features.
Manjari,
Check the source code of bash shell from ftp://ftp.gnu.org/gnu/bash/bash-2.05b.tar.gz
You will find that the definition of the shell built-in commands is not in a separate binary executable but within the shell binary itself (the name shell built-in clearly suggests this).
Every Unix shell has at least some builtin commands. These builtin commands are part of the shell, and are implemented as part of the shell's source code. The shell recognizes that the command that it was asked to execute was one of its builtins, and it performs that action on its own, without calling out to a separate executable. Different shells have different builtins, though there will be a whole lot of overlap in the basic set.
Sometimes, builtins are builtin for performance reasons. In this case, there's often also a version of that command in $PATH (possibly with a different feature set, different set of recognized command line arguments, etc), but the shell decided to implement the command as a builtin as well so that it could save the work of spawning off a short-lived process to do some work that it could do itself. That's the case for bash and printf, for example:
$ type printf
printf is a shell builtin
$ which printf
/usr/bin/printf
$ printf
printf: usage: printf [-v var] format [arguments]
$ /usr/bin/printf
/usr/bin/printf: missing operand
Try `/usr/bin/printf --help' for more information.
Note that in the above example, printf is both a shell builtin (implemented as part of bash itself), as well as an external command (located at /usr/bin/printf). Note that they behave differently as well - when called with no arguments, the builtin version and the command version print different error messages. Note also the -v var option (store the results of this printf into a shell variable named var) can only be done as part of the shell - subprocesses like /usr/bin/printf have no access to the variables of the shell that executed them.
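The -v difference can be seen directly (bash-specific, since -v is a bash extension):

```shell
# The builtin printf can write into a shell variable with -v;
# an external process could never reach back into the shell's memory.
bash -c 'printf -v greeting "hello %s" world; echo "$greeting"'   # hello world
```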
And that brings us to the second part of the story: some commands are builtin because they need to be. Some commands, like chmod, are thin wrappers around system calls. When you run /bin/chmod 777 foo, the shell forks, execs /bin/chmod (passing "777" and "foo" as arguments), the new chmod process runs the C code chmod("foo", 0777); and then returns control to the shell. This wouldn't work for the cd command, though. Even though cd looks like the same case as chmod, it has to behave differently: if the shell spawned another process to execute the chdir system call, it would change the directory only for that newly spawned process, not for the shell. Then, when the process returned, the shell would be left sitting in the same directory it had been in all along - therefore cd needs to be implemented as a shell builtin.
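The cd case can be demonstrated in one line: a subshell is a child process, and its directory change never reaches the parent:

```shell
# cd in a subshell (child process) does not affect the parent shell,
# which is exactly why cd has to be a builtin.
before=$(pwd)
( cd / )
[ "$(pwd)" = "$before" ] && echo "parent directory unchanged"
```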
A Shell builtin -- http://linux.about.com/library/cmd/blcmdl1_builtin.htm
for eg. -
which cd
/usr/bin/which: no cd in (/usr/bin:/usr/local/bin......
cd is a shell builtin, so which finds no binary for it.
which ls
/bin/ls
ls is not a shell builtin but a binary.
http://ss64.com/bash/ will help you, and here is a shell scripting guide: http://www.freeos.com/guides/lsst/