Getting the Linux error code from make in Vim

I'm trying to get the "0 on success, nonzero on error" return code from make in Vim. Specifically, I am on Ubuntu and using v:shell_error does not work.
After digging around and looking at this question, it seems to be because of my shellpipe setting, which is
shellpipe=2>&1| tee
The tee pipes the make output back into vim. The shell is apparently returning the error code from tee to vim and not from make. How do I get make's error code instead?

You can try to make a custom function for that. E.g. run make with :call system("make > make.out"), redirecting its output into a file, then load the error file using :cf make.out. Never tried that myself, though.
In the end, the result of make might also simply be checked by testing whether what it was supposed to build is there in the file system:
:make | if !filereadable("whatever-make-was-supposed-to-create") | throw "Make failed!!!" | endif
(Here the '|' symbol is vim's command separator.) Assigning that to a keyboard shortcut would remove the need for typing.
P.S. I usually try to make my programs produce no warnings, so I never really came across this issue. Which, BTW, leads to another possible solution: simply remove warnings (or other undesired output lines) with e.g. grep -v tabooword from the make output by overriding 'makeprg'. This is actually described in the help: :h 'makeprg'.
P.P.S. I got started on the Vim side of this... Provided that you also use bash as your shell: did you try adding exit ${PIPESTATUS[0]} to the shellpipe? E.g.:
:set shellpipe=2>&1\ \|\ tee\ %s;exit\ \${PIPESTATUS[0]}
Just tested that on Debian and it worked for me. :h 'shellpipe' for more.
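To see why PIPESTATUS is needed, you can reproduce the problem in a plain bash shell, with false standing in for a failing make:
false | tee /dev/null
echo $?                  # prints 0: the status of tee, the last command in the pipeline
false | tee /dev/null    # run again, since PIPESTATUS is reset by every command
echo "${PIPESTATUS[0]}"  # prints 1: the real exit status of the first command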

The only thing I can currently think of is creating two wrapper scripts for make and tee. I'm sure there's an easier way, but for now you might try this:
Create a make wrapper script:
#!/bin/bash
make "$@"                    # forward all arguments to make
echo $? > ~/exit_code_cache  # save make's exit code for the tee wrapper
Create a tee wrapper script:
#!/bin/bash
tee "$@"                         # forward all arguments to tee
exit "$(cat ~/exit_code_cache)"  # exit with make's saved code (or do something else with it)
Use the new make wrapper with :set makeprg=mymake and set up your own shellpipe that uses the tee wrapper (shellpipe=2>&1\ \|\ mytee).
It's not tested, but the idea should be clear. Hope it helps.
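The two wrapper scripts could also be collapsed into one, reusing the PIPESTATUS trick from the first answer (a sketch; mymake and the log path are arbitrary names):
#!/bin/bash
# All-in-one wrapper: run make, tee the output, and exit with make's status.
make "$@" 2>&1 | tee ~/make.out
exit "${PIPESTATUS[0]}"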

Related

CLI colors disappear when piping into a text file [duplicate]

This question already has answers here: How to trick an application into thinking its stdout is a terminal, not a pipe.
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
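For example (the file names are arbitrary):
ls --color=auto /etc > plain.txt   # auto: output is not a terminal, so no color codes
ls --color /etc > colored.txt      # forced: escape codes are written into the file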
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode; this displays color codes. edit: Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode because less can tell which things are control codes and which are actually characters going to the screen.
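Continuing the example above:
less -R colored.txt   # interprets the saved color codes as colors
less -r colored.txt   # raw mode: passes every control code through untouched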
Inspired by the other answers, I started using script. I had to use -c to get it working, though. None of the other answers, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting a shell command during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output arrives in big chunks and cannot be observed live
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command to script did not work for me (no colors)
--return makes script propagate the exit code of my command, so I know whether my command failed
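A quick check that --return does its job (make here is just an example command):
script --flush --quiet --return /tmp/build.txt --command "make"
echo $?   # make's exit code, propagated by --return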
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is messed up too) because less is interactive. script seems to really mess up input coming from stdin even after it exits.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they realize the output is not a TTY (i.e. when you redirect them into another program). You can tell some of those to force color output, and tell the pager to turn on colorization, for example with less -R.
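grep is one such program; for instance:
grep --color=always -rn "pattern" . | less -R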
This question over on superuser helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's running from a shell.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
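The effect is easy to demonstrate with ls, whose --color=auto drops colors when writing to a pipe:
ls --color=auto | cat            # no colors: ls sees a pipe, not a terminal
unbuffer ls --color=auto | cat   # colors survive: unbuffer supplies a pseudo-terminal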
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time) then just send the output of tee to /dev/null:
command | tee filename > /dev/null

Can I react to an entered command in bash?

I would like to configure my bash so that I can react to the event of the user entering a command. The moment they press Enter, I would like my bash to first run a script I installed (analogous to PROMPT_COMMAND, which is run each time a prompt is printed). This script should be able to
see what was entered,
maybe change it,
maybe even make the shell ignore it (i.e. make it not execute the line),
decide on whether the text shall be inserted in the history or not,
and maybe similar things.
I have not found a proper way to do this. My current implementations are all flawed: they use things like debug traps to intervene before a command executes, or (HISTTIMEFORMAT='%s '; history 1) to ask the history, after execution has finished, when the command was started, etc. (but that is only hindsight, which is not really what I want).
I'd expect something like a COMMAND_INTERCEPTION variable which would work similar to PROMPT_COMMAND but I'm not able to find anything like it.
I also considered using command line completion to achieve my goal, but I wasn't able to find anything there about reacting to the submission of a finished command; maybe I just didn't find it.
Any help appreciated :)
You can use the DEBUG trap and the extdebug feature, and peek into BASH_COMMAND from the trap handler to see the running command. (Though as noted in comments, the debug trap is sprung on every simple command, not every command line. Also subshells elude it.)
The debug handler can prevent the command from running, but it can't change it directly. Though of course you could run any command inside the handler, possibly built from BASH_COMMAND with eval, and then tell the shell to ignore the original command (see the sketch after the example below).
This would prevent running anything starting with ls:
$ preventls() { case "$BASH_COMMAND" in ls*) echo "no!"; return 1 ;; esac; }
$ shopt -s extdebug
$ trap preventls DEBUG
$ ls -l
no!
Use trap - DEBUG to remove the trap. Tested on Bash 4.3.30.
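A sketch of the "run a changed command instead" idea (rewritels is a made-up name; with extdebug set, the non-zero return suppresses the typed command while the eval'd variant runs):
rewritels() {
  case "$BASH_COMMAND" in
    ls*) eval "${BASH_COMMAND} -la"   # run a modified command instead...
         return 1 ;;                  # ...and tell bash to skip the original
  esac
}
shopt -s extdebug
trap rewritels DEBUG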

A sh line that scares me, is it portable?

I'm currently working on pm2, a process manager for NodeJS.
As it's targeted at JavaScript, a new standard is coming: ES6.
To enable it on NodeJS I have to add the option --harmony.
Now for the bash part: I have to let the user pass this option to the interpreter that executes the file. By crawling the web (and Stack Overflow) I found this:
#!/bin/sh
':' //; exec "`command -v nodejs || command -v node`" $PM2_NODE_OPTIONS "$0" "$@"
Looks like a nice hack, but is it portable enough? On CentOS, FreeBSD...
It's kind of critical so I want to be sure.
Thank you
Let's break down the line of interest.
: is a do-nothing (no-op) command in shells.
; is a command separator.
exec will replace the current process with the process of the command that it is executing.
Notice that in the exec command it passes "$0" and "$@" as parameters to the command?
This allows the new process to read the script denoted by "$0", use it as its script input, and receive the original parameters "$@" as well.
The new process will read the input script from the beginning, ignoring comments like #!/bin/sh, and it will also ignore the : line.
Here's the trick: most interpreters, including perl, use syntax that is ignored by the shell, or vice versa, so that on re-reading the input file the interpreter will not exec itself again.
In this case, the new process ignores the whole line starting at :. Why is the rest of the line ignored? In C-like interpreters, // starts a comment.
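Annotated, the line reads like this (the comments are mine, not part of the original hack):
#!/bin/sh
# To sh:   ':' is a no-op whose argument '//' is ignored; 'exec' then replaces
#          the shell with node, re-running this same file ("$0") with the
#          original arguments ("$@").
# To node: ':' is a string literal and '//' comments out the rest of the line,
#          so execution falls through to the JavaScript further down the file.
':' //; exec "`command -v nodejs || command -v node`" $PM2_NODE_OPTIONS "$0" "$@"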
I forgot to answer your question. Yes it seems portable. There may be corner cases but I can't think of any right now.
To enable it on NodeJS I have to add the option --harmony.
Not necessarily. You can use the normal "#!/usr/bin/env node" shebang and set harmony flags at runtime using the setflags module.
I'm not sure it's a better solution, but it's worth mentioning.

Is there a way to configure Vim grepprg option to avoid waiting until the external tool has finished searching?

I am a long-time Vimmer. However, I keep switching to the shell to do searches. This keeps me from using the quickfix functionality.
The main reason for switching to shell is that when I use grep from inside Vim (with :grep), I cannot follow progress.
Because the code base I search is usually large, I really appreciate immediate feedback.
It gives me a chance to find out that my search expression is wrong before the full results have been displayed.
This allows me to cancel the search, refine the expression, and then relaunch the search.
Any hint how to reproduce this pattern inside Vim would be appreciated.
I don't see the same vim behaviour as you. When I run :grep, I still see the results in vim (not in the quickfix) before the search completes (but I cannot do anything until the search is done).
I even tried using no vim settings or plugins:
gvim -u NONE -U NONE
If that's not your behaviour, check your grepprg. Mine is the default:
:verbose set grepprg
grepprg=grep -n $* /dev/null
When I run :grep -e "score" -R /etc I see this output in vim:
:!grep -n -e "score" -R /etc /dev/null 2>&1| tee /tmp/voLcaNS/232
It's possible that your system is missing tee or your vim doesn't use it (I'm using Vim 7.2 on Ubuntu 10.10). tee takes the text passed to it and writes it to a file and to stdout.
If you're looking for a way to have the quickfix get updated with your search results and have vim not block while you're searching, then you could write a script that:
searches with grep as a background process and redirects to a file
every second until grep completes, have vim load the file in quickfix (cgetfile) (you can tell vim to do something from another process with --remote-expr)
You can try my AsyncCommand plugin to get your code started. It does the above, except that it only loads the file when the search is complete.
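A rough shell sketch of those two steps (assumptions: Vim is running as a server started with vim --servername VIM, and --remote-send is used here instead of --remote-expr for simplicity):
#!/bin/bash
# Search in the background and have the running Vim reload the results
# into its quickfix list every second until grep finishes.
grep -Rn "$1" . > /tmp/search.out &
pid=$!
while kill -0 "$pid" 2>/dev/null; do
  vim --servername VIM --remote-send ':cgetfile /tmp/search.out<CR>'
  sleep 1
done
vim --servername VIM --remote-send ':cgetfile /tmp/search.out<CR>'   # final load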
Are you familiar with ack.vim at all? It doesn't use the quickfix window, but uses a separate buffer in a split. However, it's rather fast, and its results come right back to the vim frame.
This may be due to buffering between grep and tee, not vim itself. To test this theory, run grep from the command-line and pipe the output through tee (i.e. grep <pattern> <files> | tee temp.out). If it behaves the same as you observe within vim, then buffering is occurring.
To work around it, install expect (sudo apt-get install expect-dev on Ubuntu 10.10) and set grepprg to unbuffer grep -n $* /dev/null. (See Turn off buffering in pipe).
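You can see the buffering difference from the shell first (pattern and path are arbitrary):
grep -rn "pattern" /usr/include | tee plain.out            # output arrives in large chunks
unbuffer grep -rn "pattern" /usr/include | tee live.out    # output arrives line by line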
Take a look at :vimgrep in the online documentation. It displays the file name being searched and updates as it goes.
There are three ways to search an entire project.
System command grep (fast, but does not play well with the quickfix list):
$ grep -n Example *
Vim's internal grep (slow, but with strong pattern support):
:vim[grep] /{pattern}/[g][j] {file} ...
System plugin ack (perfect):
1. Install ack:
brew install ack
2. Add the configs below to your .vimrc:
:set grepprg=ack\ --nongroup\ --column\ $*
:set grepformat=%f:%l:%c:%m
3. Then you can use :grep to call ack in vim, like:
:grep "object\." app/**/*.rb

How do I capture all of my compiler's output to a file?

I'm building an open-source project from source (C++) on Linux. This is the order:
$ CFLAGS="-g -Wall" CXXFLAGS="-g -Wall" ../trunk/configure --prefix=/somepath/ --host=i386-pc --target=i386-pc
$ make
While compiling I'm getting a lot of compiler warnings. I want to start fixing them. My question is: how do I capture all of the compiler output in a file?
$ make > file is not doing the job; it just saves the compiler commands, like g++ -someoptions /asdf/xyz.cpp. I want the output of those command executions.
The compiler warnings happen on stderr, not stdout, which is why you don't see them when you just redirect make somewhere else. Instead, try this if you're using Bash:
$ make &> results.txt
The &> means "redirect stdout and stderr to this location". Other shells often have similar constructs.
In a Bourne shell:
make > my.log 2>&1
I.e. > redirects stdout, and 2>&1 redirects stderr to the same place as stdout.
Lots of good answers so far. Here's a frill:
$ make 2>&1 | tee filetokeepitin.txt
will let you watch the output scroll past.
The output went to stderr. Use 2> to capture that.
$make 2> file
Assuming you want to highlight warnings and errors in the build output:
make |& grep -E "warning|error"
(|& is bash shorthand for 2>&1 |, so stderr is piped along with stdout.)
Based on an earlier reply by @dmckee:
make 2>&1 | tee makelog.txt
This gives you real-time scrolling output while compiling, and simultaneously writes to the makelog.txt file. (The 2>&1 matters: without it the warnings, which go to stderr, would never reach tee.)
Try make 2> file. Compiler warnings come out on the standard error stream, not the standard output stream. If my suggestion doesn't work, check your shell manual for how to divert standard error.
From http://www.oreillynet.com/linux/cmd/cmd.csp?path=g/gcc
The > character does not redirect the standard error. It's useful when you want to save legitimate output without mucking up a file with error messages. But what if the error messages are what you want to save? This is quite common during troubleshooting. The solution is to use a greater-than sign followed by an ampersand. (This construct works in almost every modern UNIX shell.) It redirects both the standard output and the standard error. For instance:
$ gcc invinitjig.c >& error-msg
Have a look there, if this helps: another forum.
In C shell, the ampersand goes after the greater-than symbol:
make >& filename
It is typically not what you want to do. You want to run your compilation in an editor that has support for reading the compiler's output and jumping to the file/line that has the problem. This works in all editors worth considering. Here is the emacs setup:
https://www.gnu.org/software/emacs/manual/html_node/emacs/Compilation.html
