Color termcaps in Konsole?

I've got a problem with ANSI escape codes in my terminal on openSUSE 13.2.
My Makefile used to display pretty colors on OS X at work, but at home when I use it I get the literal escape codes, such as \033[1;30m ... \033[0m.
I know close to nothing about termcaps; I just found these escape characters and they seemed to be working fine! The strangest part is that both my OS X and Linux terminals are configured with TERM=xterm-256color, so I really don't know where to look for the correct setting I'm currently missing on Linux.
TL;DR: How do I get escape codes such as \033[1;30m working in Konsole with xterm-256color?
Edit: Here's a snippet of the Makefile I am talking about:
# Display settings
RED_L = \033[1;31m
GREEN_L = \033[1;32m
GREEN = \033[0;32m
BLUE = \033[0;34m
RED = \033[0;31m

all: $(OBJ_DIR) $(NAME)

$(OBJ_DIR):
	@mkdir -p $(OBJ_DIR)

$(NAME): $(OBJ)
	@echo "$(BLUE)Linking binary $(RED)$(NAME)$(BLUE).\n"
	@$(CC) -o $@ $^ $(LFLAGS)
	@echo "\t✻ $(GRAY)$(CC) -o $(RED)$(NAME)$(GRAY) object files:$(GREEN) OK! √\n$(NC)"

The example which you gave does not rely upon the setting of TERM (unless it is going someplace other than the terminal, e.g., via some program which interprets it, such as the ls program, which has its own notion about colors). It would help if you quoted the section of the makefile which uses the escape sequences. Without that, we can offer only generic advice, e.g., by assuming you have an echo command in the makefile.
The place to start looking is the shell which your makefile uses. One would expect bash to be the default shell on OpenSUSE. But suppose you are actually using some other shell which happens not to recognize the syntax you are using, and are trying to do something like
echo '\033[1;34mhello\033[m'
To help ensure that you are using the expected shell, you can put an assignment in your makefile, e.g.,
SHELL = /bin/sh
This assumes that /bin/sh itself is going to work as intended. However, on Linux that is commonly a symbolic link to the real shell. If so, one possible solution would be to use OpenSUSE's update-alternatives feature to change the real shell to bash (or zsh).
For additional information, see the discussion of SHELL in the GNU make manual.
Reflecting comments on the version of make -- GNU make 4.0 is known to have incompatible changes versus 3.81, as noted in the thread GNU Make 4.0 released on LWN.net. In particular, there are several comments relating to your problem, starting here.
However, checking a recent Fedora, it seems that the problem really is that the default behavior for echo has changed. As noted in other discussions (such as Why doesn't echo support “\e” (escape) when using the -e argument in MacOSX), this was done to improve POSIX compatibility. You can get your colors back by adding a -e option to the echo commands.
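For instance, under that assumption, the link rule from the snippet above would become something like this (only the -e is new; recipe lines start with a tab, and the shell make invokes must be one whose echo understands -e, e.g. bash):
$(NAME): $(OBJ)
	@echo -e "$(BLUE)Linking binary $(RED)$(NAME)$(BLUE).\n"
	@$(CC) -o $@ $^ $(LFLAGS)
	@echo -e "\t✻ $(GRAY)$(CC) -o $(RED)$(NAME)$(GRAY) object files:$(GREEN) OK! √\n$(NC)"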

I finally found the solution:
the problem was that I used echo instead of echo -e; the -e behaviour seems to be the default on Mac OS X.
Thanks for your help though, it led me to some good reading :)

Related

Argument list too long when linking with GNU Make

I have a reasonably large project (4272 .o files) and I can't get it to link with GNU Make. I run into make: /bin/sh: Argument list too long. This is a Qt 5 project that uses qmake to generate the makefile.
I know there are lots of questions about this, but I don't know how to apply any of the solutions to my problem. I'm also not totally sure why I'm running into this at the linking step. The error I get is:
make: /bin/sh: Argument list too long
The makefile entry for linking my project looks like this:
build/debug/my_target/my_target: $(OBJECTS)
	@test -d build/debug/my_target/ || mkdir -p build/debug/my_target/
	$(LINK) $(LFLAGS) -o $(TARGET) $(OBJECTS) $(OBJCOMP) $(LIBS)
which expands to something like:
@echo linking /build/debug/my_target/my_target && clang++ -ccc-gcc-name g++ -lc++ -L/path/to/licensing/lib -Wl,-rpath,/path/to/qt/lib -Wl,-rpath-link,/path/to/qt/lib -o build/debug/my_target/my_target build/debug/my_target/obj/object1.o build/debug/my_target/obj/object2.o ... build/debug/my_target/obj/object4272.o ... [ a bunch of moc_X.o ] ... [ a bunch of libs ] -lGL -lpthread -no-pie
This is pretty long. But here's where it gets weird: when I put the expanded command after the @echo linking build/debug/my_target/my_target && into a shell script, it runs. The shell script is 202,420 characters (including the #!/bin/sh line). Also, if I get rid of the @echo ... && part of the command, I can run make and linking works.
Another workaround: if I manually edit my makefile so that the linking command contains build/debug/my_target/*.o instead of $(OBJECTS) it works:
build/debug/my_target/my_target: $(OBJECTS)
	@test -d build/debug/my_target/ || mkdir -p build/debug/my_target/
	$(LINK) $(LFLAGS) -o $(TARGET) build/debug/my_target/*.o $(OBJCOMP) $(LIBS)
I don't think I can get qmake to do this, though, so I'm stuck manually editing my makefile unless I can find another solution.
Answers to similar problems seem to focus on line breaks and how they're handled in makefiles. My shell script only has two lines (one after #!/bin/sh and one after the actual command). Also, one solution that people have come up with (for example this one) uses a for loop to iteratively run a command on each argument. I'm not sure how I could apply this here, since (I think) I need all those object files in my linker command.
How does @echo cause the max argument length to be exceeded?
Questions I originally asked that aren't really relevant:
(Note: as originally posted this question missed the @echo at the beginning of the linking command. That seems to be the answer to "why is this happening", and as such I don't really need to know the answer to the second question, which is answered in the first comment in any case).
Why is this happening? How is it that make is running into this error with a command that I can apparently run in a shell script?
How can I get around this if there's no way to run my command as an iterative series of shorter commands?
Various details about my system that might be relevant:
I'm running a fairly up-to-date Arch Linux system, kernel 5.8.10
ARG_MAX value is 2097152; the output from xargs --show-limits is:
Your environment variables take up 2343 bytes
POSIX upper limit on argument length (this system): 2092761
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2090418
Size of command buffer we are actually using: 131072
Maximum parallelism (--max-procs must be no greater): 2147483647
ulimit -s output: 8192 (I've tried setting this to much larger values, e.g. ulimit -s 65536 without success, which maybe isn't surprising since ARG_MAX appears to be much larger than the linker command).
GNU Make version is 4.3
clang/clang++ version is 10.0.1
Qt version is 5.15.1 (I'm fairly certain this isn't relevant, we've just switched our project over from 5.9.6 and I had the same problem then as well).
Just FYI, the reason removing the echo fixes the problem (this is what I was going to suggest as well) is that when you remove the special shell operator && and just have a simple command invocation with no shell features like multiple commands, special quoting, globbing, etc., make uses the "fast path" to invoke your command.
That is, if make can determine that the shell would do nothing special with your command, other than run it, make will skip invoking the shell and instead run your command directly.
In that case you will not run up against the single-argument limit because it doesn't use the /bin/sh -c '...' form.
Of course, this can be a little magical and inflexible since you have to be careful to ensure no special shell operations are ever included in your link line. But if you can ensure this then it should solve your problem.
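As a rough sketch (two alternative versions of the same link rule, reusing the variable names from the question; they would not both appear in one makefile), the first recipe qualifies for the fast path, while the second is handed to /bin/sh -c as one long string because of the &&:
# Simple invocation, no shell operators: make can exec the linker directly.
build/debug/my_target/my_target: $(OBJECTS)
	$(LINK) $(LFLAGS) -o $(TARGET) $(OBJECTS) $(OBJCOMP) $(LIBS)

# The && makes this a shell command line, so it all goes through /bin/sh -c.
build/debug/my_target/my_target: $(OBJECTS)
	@echo linking $(TARGET) && $(LINK) $(LFLAGS) -o $(TARGET) $(OBJECTS) $(OBJCOMP) $(LIBS)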
Why is this happening? How is it that make is running into this error with a command that I can apparently run in a shell script?
Because make runs the shell commands (recipes) by passing them as a single argument to /bin/sh -c, and that runs not only into the OS's limit on command-line arguments plus environment variables, but also into the much lower limit that Linux imposes on a single string in the argument list or environment, which is usually 128 KiB.
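A quick way to see that per-string limit on its own (a sketch; the exact byte counts and error wording may differ on your system):
big=$(printf 'x%.0s' $(seq 1 140000))   # one word of about 140,000 characters
/bin/sh -c ": $big"                     # the -c string is ONE argv entry, well over 128 KiB
# => fails with "Argument list too long", even though the total is far below ARG_MAX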
How can I get around this if there's no way to run my command as an iterative series of shorter commands?
As suggested by @ephemient, you can use the @arglist argument of gcc or ld (which directs it to take its arguments from a file), and use the file function of GNU make to create that arglist file, which, being internal to make, will not run into that OS limit.
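A minimal sketch of that approach (the response-file name objects.rsp is made up for illustration; $(file ...) requires GNU make 4.0 or newer, and recipe lines start with a tab):
build/debug/my_target/my_target: $(OBJECTS)
	@test -d build/debug/my_target/ || mkdir -p build/debug/my_target/
	$(file >objects.rsp,$(OBJECTS))
	$(LINK) $(LFLAGS) -o $(TARGET) @objects.rsp $(OBJCOMP) $(LIBS)
The object list is written by make itself, so it never appears on any command line; the linker then reads it back via @objects.rsp.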

Broken tab completion on make under linux

I have no idea how tab completion works, but all of a sudden mine is broken. I don't even know what info to provide other than the use case.
There is a target clean in the makefile.
$ make c<tab> results in
$ make c23:set: command not found
lean
EDIT:
I believe I somehow ruined the set bash builtin, since man set says No manual entry for set and which set doesn't report anything. Invoking set in the terminal, however, does produce output.
I'm using: GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) and GNU Make 3.81
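(Aside: set is a shell builtin rather than an external program, so which finds no file for it and there may be no dedicated man page for it; a quick check that the builtin itself is intact is, for example:
type set
which should report that set is a shell builtin.)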
Thanks to Etan's comment and Aaron's indication of where the completion files are, I managed to debug this.
I ran set -x so I could track what was happening when doing the tab completion. The output of make c<tab> consists mostly of commands from the bash completion file for make, located at /usr/share/bash-completion/completions/make.
However, I noticed an inconsistency between the output and the file. Towards the end, the output said:
+ local mode=--
+ (( COMP_TYPE != 9 ))
++ set +o
++ grep --colour=auto -n -F posix
+ local 'reset=23:set +o posix'
+ set +o posix
Which I identified as corresponding to these lines from the file:
if (( COMP_TYPE != 9 )); then
    mode=-d # display-only mode
fi
local reset=$( set +o | grep -F posix ); set +o posix # for <(...)
So the output ran grep --colour=auto -n instead of just grep. Indeed, I had set up an alias that makes grep use those options.
Make worked as soon as I removed the alias.
I hope this helps others debug their problems.
EDIT: I have submitted a bug report here: https://alioth.debian.org/tracker/index.php?func=detail&aid=315108&group_id=100114&atid=413095
Look into /etc/bash_completion, /etc/bash_completion.d and/or /usr/share/bash-completion/completions. You should find a file make which contains the script that is called when you press Tab.
Use the packaging system of your Linux distro to validate the file (or maybe revert to an older version).
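For example (the package name bash-completion is the one commonly used for these scripts; adjust for your distribution):
dpkg -V bash-completion    # Debian/Ubuntu: list files that differ from the packaged version
rpm -V bash-completion     # openSUSE/Fedora/RHEL: same idea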
Another cause of this could be something in the Makefile which throws the parser in the BASH completion script off the track.
Not trying to get any credit here, but the best solution is actually a bit hidden in the comments...
Please vote that comment up instead of my answer!
Easy steps to fix this:
sudo vi /usr/share/bash-completion/completions/make
find the line that has the grep instruction. It should look like this:
local reset=$( set +o | grep -F posix ); set +o posix # for <(...)
add a "\" before the "grep" instruction:
local reset=$( set +o | \grep -F posix ); set +o posix # for <(...)

A sh line that scares me, is it portable?

I'm currently working on pm2, a process manager for NodeJS.
As it's targeted at JavaScript, a new standard is coming: ES6.
To enable it on NodeJS I have to add the option --harmony.
Now for the bash part: I have to let the user pass this option to the interpreter that executes the file. By crawling the web (and Stack Overflow) I found this:
#!/bin/sh
':' //; exec "`command -v nodejs || command -v node`" $PM2_NODE_OPTIONS "$0" "$@"
It looks like a nice hack, but is it portable enough? On CentOS, FreeBSD...
It's kind of critical, so I want to be sure.
Thank you
Let's break down the line of interest.
: is a do-nothing (no-op) command in shells.
; is a command separator.
exec will replace the current process with the process of the command that it is executing.
Notice that the exec command passes "$0" and "$@" as parameters to the command?
This allows the new process to read the script named by $0 and use it as its script input, and to read the original parameters from $@ as well.
The new process reads the input script from the beginning, ignoring comments like #!/bin/sh, and it also ignores the : line.
Here's the trick: most interpreters, including perl, use syntax that is ignored by the shell, or vice versa, so that on re-reading the input file the interpreter will not exec itself again.
In this case, the new process ignores everything on the line starting at :. Why is the rest of the line ignored? In C-like interpreters, // starts a comment.
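Putting it together, the two-line header reads like this (the same code as above; the annotation is mine, not part of pm2):
#!/bin/sh
':' //; exec "`command -v nodejs || command -v node`" $PM2_NODE_OPTIONS "$0" "$@"
sh runs ':' as a no-op (with // as a harmless argument) and then exec replaces the shell with node, which re-reads this same file; node parses ':' as a bare string expression and treats everything after // as a comment, so it never sees the exec and falls through to the JavaScript below.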
I forgot to answer your question: yes, it seems portable. There may be corner cases, but I can't think of any right now.
To enable it on NodeJS I have to add the option --harmony.
Not necessarily. You can use a normal "#!/usr/bin/env node" shebang but set the harmony flags at runtime using the setflags module.
I'm not sure it's a better solution, but it's worth mentioning.

Determine interpreter from inside script

I have a script; it needs to use bash's associative arrays (trust me on that one).
It needs to run on normal machines, as well as on a certain additional machine that has /bin/bash 3.2.
It works fine if I declare the interpreter to be /opt/userwriteablefolder/bin/bash4, the location of the bash 4.2 that I put there, but then it only works on that machine.
I would like to have a test at the beginning of my script that checks what the interpreting shell is and, if it's bash 3.2, calls bash4 $0 $@. The problem is that I can't figure out any way to determine what the interpreting shell is. I would really rather not make a $HOSTNAME-based decision, but that will work if necessary (it's also awkward, because it needs to pass a "we've done this already" flag).
For a couple reasons, "Just have two scripts" is not a good solution.
You can check which interpreter is used by looking at $SHELL, which contains the full path to the shell executable (e.g. /bin/bash).
Then, if it is Bash, you can check the Bash version in various ways:
${BASH_VERSINFO[*]} -- an array of version components, e.g. (4 1 5 1 release x86_64-pc-linux-gnu)
${BASH_VERSION} -- a string version, e.g. 4.1.5(1)-release
And of course, "$0" --version
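For example, a minimal sketch of the re-exec idea from the question (the bash4 path is the one given there; the version guard itself is only an illustration):
#!/bin/bash
# If this bash is older than 4.x, re-exec the script under the newer bash.
if [ "${BASH_VERSINFO[0]:-0}" -lt 4 ]; then
    exec /opt/userwriteablefolder/bin/bash4 "$0" "$@"
fi

declare -A my_map   # associative arrays are available from here on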
This could be an option, depending on how you launch the script:
1. Install bash 4.2 as /opt/userwriteablefolder/bin/bash.
2. Use '#!/usr/bin/env bash' as the shebang in your script.
3. Add '/opt/userwriteablefolder/bin' to the front of PATH in the environment from which your script is called, so that the bash there will be used if present; otherwise the regular bash will be used.
The benefit would be to avoid having to detect the version of bash at runtime, but I realize your setup may not make step 3 desirable.
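For instance, a caller could do something like this (the script name is hypothetical):
PATH=/opt/userwriteablefolder/bin:$PATH ./myscript.sh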

REDUX: How to overcome an incompatibility between the ksh on Linux vs. that installed on AIX/Solaris/HPUX?

I have uncovered another problem in the effort that we are making to port several hundreds of ksh scripts from AIX, Solaris and HPUX to Linux. See here for the previous problem.
This code:
#!/bin/ksh
if [ -a k* ]; then
    echo "Oh yeah!"
else
    echo "No way!"
fi
exit 0
(when run in a directory with several files whose names start with k) produces "Oh yeah!" when called with the AT&T ksh variants (ksh88 and ksh93). On the other hand, it produces an error message followed by "No way!" on the other ksh variants (pdksh, MKS ksh and bash).
Again, my questions are:
Is there an environment variable that will cause pdksh to behave like ksh93? Failing that:
Is there an option on pdksh to get the required behavior?
I wouldn't use pdksh on Linux anymore.
Since AT&T ksh has become OpenSource there are packages available from the various Linux distributions. E.g. RedHat Enterprise Linux and CentOS include ksh93 as the "ksh" RPM package.
pdksh is still mentioned in many installation requirement documentations from software vendors. We replaced pdksh on all our Linux systems with ksh93 with no problems so far.
Well, after one year there seems to be no solution to my problem.
I am adding this answer to say that I will have to live with it...
In Bash, the test -a operation takes a single file.
I'm guessing that in ksh88 the test -a operation also takes a single file, but it doesn't complain because the extra words produced by the glob expansion are treated as an unspecified condition by its -a test.
You want something like
for K in /etc/rc2.d/K* ; do test -a "$K" && echo heck-yea ; done
I can say that ksh93 works just like bash in this regard.
Regrettably, I think the code was written poorly (my opinion, and likely a harsh one), since the root cause of the problem is that the ksh88 built-in test allows such sloppy code.
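If the intent of the original test is simply "does at least one file matching k* exist", a portable rewrite could look like this (a sketch using the POSIX -e test, which modern ksh, pdksh and bash all support; the exists helper name is made up):
exists() {
    for f in "$@"; do
        [ -e "$f" ] && return 0
    done
    return 1
}

if exists k*; then
    echo "Oh yeah!"
else
    echo "No way!"
fi
If nothing matches, the unexpanded pattern k* is passed through, the -e test fails, and the else branch runs.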
You do realize that [ is an alias (often a link, symbolic or hard) for /usr/bin/test, right? So perhaps the actual problem is different versions of /usr/bin/test?
OTOH, ksh overrides it with a builtin. Maybe there's a way to get it not to do that? Or maybe you can explicitly alias [ to /usr/bin/test, if /usr/bin/test on all platforms is compatible?

Resources