How to accept the 'Did you mean?' terminal/git suggestion - linux

This is a simple question.
Sometimes when you make a small mistake in a terminal, the command asks "Did you mean ...?". Is there a way to quickly accept the suggestion?
For example:
$ git add . -all
error: did you mean `--all` (with two dashes ?)
Is there a command that repeats the last line, but with the two dashes?
If you forget to write sudo, you can just do sudo !! and it will solve your problem. I want to know if there is something similar, but for the "did you mean" error case.

In the case of...
$ git add . -all
error: did you mean `--all` (with two dashes ?)
...the message is written by git directly to the terminal. This means that bash has no way of knowing what message was written; it would be literally impossible to implement anything in the shell that could automate putting that correction in place without making programs run under the shell considerably less efficient (by routing their output through the shell rather than directly to the terminal) and changing their behavior (if they ever call isatty() on their stdout or stderr).
That said, you can certainly run
^-all^--all
...if you haven't turned history expansion off, as with set +H (if off, it can be reenabled with set -H). I typically do turn this functionality off, myself; it's often more trouble than it's worth (making commands which would work perfectly well in scripts break in interactive shells when they use characters that history expansion is sensitive to, particularly !).
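For example, a minimal interactive sketch of that quick substitution (assuming history expansion is still enabled; bash prints the rewritten command before running it):
$ git add . -all
error: did you mean `--all` (with two dashes ?)
$ ^-all^--all
git add . --all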

Related

tmux pin to avoid scrolling

Often when I run a command that writes to stdout, and that command fails, I have to scroll up (using uncomfortable key-bindings) looking for the place where I pressed Enter, to see what the first error was (out of hundreds of others, across many screens of text). This is both annoying and time-consuming. I wish there were a feature which allowed me to pin my current terminal to the place where I am now, then start the command, see only the first lines of the output (as many as fit below my cursor) and let the rest of the output be written but not displayed. In other words, I would like a feature that automatically scrolls up to the place where I gave the command, so I can see the first lines of the output (where the origin of the failure is usually displayed).
I searched for it but I didn't find it. Do you know if such feature exists? Or have an idea how to implement it with some tricks or workarounds?
If you have a unique shell prompt you could bind a key to jump between shell prompts; for example, something like this will make C-b S jump to the previous shell prompt, and pressing S again in copy mode jump to earlier ones:
bind S copy-mode \; send -X search-backward 'nicholas#myhost:'
bind -Tcopy-mode S send -X search-backward 'nicholas#myhost:'
Or similarly you could search for error strings if they have a recognisable prefix. If you install the tmux 3.1 release candidate, you can search for regular expressions.
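For instance, a sketch of the same binding pattern keyed to a hypothetical 'error:' prefix (adjust the key and string to whatever your tools actually print):
bind E copy-mode \; send -X search-backward 'error:'
bind -Tcopy-mode E send -X search-backward 'error:'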
Alternatively, you could use capture-pane to load the entire history into an editor with key bindings you prefer, for example:
$ tmux capturep -S- -E- -p|vim -
Or pipe to grep or whatever. Note you will need to use a temporary file for this to work with emacs.
Or try to get into the habit of teeing commands with lots of output to a file to start with.
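For example (a minimal sketch; the log file name is arbitrary):
$ make 2>&1 | tee build.log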

Bash (or other shell): wrap all commands with function/script

Edit: This question was originally bash specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell then that would be useful to know as well!
Okay, top-level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and allow it to be handled by another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them work well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory data analysis like procedures after the fact. With richer information like this, it would be possible to generate decent reporting on what happened, squashing multiple invocations of one command into one where the first few gave non-zero exits, asking where files came from by searching for everything that touched the file, etc etc.
Run this bash script:
#!/bin/bash
while read -e line
do
wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting to have timings, so maybe instead use time eval "$line". You wanted to capture the exit status, so that should be followed by the line save=$?. And, you wanted to capture the first few lines of stdout, so some redirecting is in order. And so on.
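Put together, a rough sketch of such a wrapper might look like this (file locations are illustrative, and because output is redirected, interactive programs will not behave normally):
wrapper() {
    local line=$1 start end save
    start=$(date +%s)
    # redirections apply to eval in the current shell, so things like cd still take effect
    eval "$line" > /tmp/last-output 2>&1
    save=$?
    end=$(date +%s)
    cat /tmp/last-output                         # replay the captured output for the user
    {
        printf 'cmd=%s exit=%s duration=%ss\n' "$line" "$save" "$((end - start))"
        head -n 5 /tmp/last-output               # first few lines of stdout/stderr
    } >> "$HOME/.wrapper.log"
    return "$save"
}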
MORE: Jo So suggests that handling for multiple-line bash commands be included. In its simplest form, if eval returns with "syntax error: unexpected end of file", then you want to prompt for another line of input before proceeding. Better yet, to check for proper bash commands, run bash -n <<<"$line" before you do the eval. If bash -n reports the end-of-file error, then prompt for more input to append to "$line". And so on.
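Sketched out, the reading loop above might become something like this (assuming the wrapper function described earlier; the prompt string is arbitrary):
nl=$'\n'                                         # separator between continued lines
line=""
while IFS= read -r -e -p '> ' more; do
    if [[ -n $line ]]; then line+="$nl$more"; else line=$more; fi
    if err=$(bash -n <<<"$line" 2>&1); then
        wrapper "$line"                          # syntactically complete, run it
        line=""
    elif [[ $err != *"unexpected end of file"* ]]; then
        printf '%s\n' "$err" >&2                 # a real syntax error, not an incomplete command
        line=""
    fi
done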
Binfmt_misc comes to mind. The Linux kernel has a capability to allow arbitrary executable file formats to be recognized and passed to a user-space application.
You could use this capability to register your wrapper, but instead of handling one particular executable format, it would handle all executables.
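For reference, registration uses the :name:type:offset:magic:mask:interpreter:flags format documented in the kernel's binfmt-misc documentation. A sketch, with a purely hypothetical handler name, extension and wrapper path (matching every executable, e.g. by ELF magic, follows the same pattern but needs care so the wrapper does not match its own rule):
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc    # if not already mounted
echo ':mywrap:E::wrap::/usr/local/bin/wrapper:' > /proc/sys/fs/binfmt_misc/register    # handle files ending in .wrap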

Prevent gVim from returning control to command line (when called from Stata)

When I call gVim from Stata with shell (or equivalently with !) Stata doesn't wait for the command to finish and continues on with the .do file. I usually specify a short sleep and everything works great (discussed on the Stata mailing list here).
But sometimes the gVim call is lengthy and the length is unknown a priori. For example, I use gVim's argdo to remove headers from a folder of text files.
!gvim -c "argdo 1,3d | update" *sheet*.txt
Is there any way that I can force gVim to not return control to Stata? Or are my best options to complete this step outside the .do file or with a pause/lengthy sleep? Thanks!
Oh, I'm on Win 8 (64 bit) with gVim 7.3.
I think you would need to make this call a Stata command or the equivalent thereof.
That is, try running this separately from a do-file editor window or as wrapped up in a separate do-file.
I realise that is not an attractive solution, but in principle it seems the only one.
(I dislike sleep solutions as fudges, but I guess no one likes them in principle.)

Is this batch file injection?

C:\>batinjection OFF ^& DEL c.c
batinjection.bat has contents of ECHO %*
I've heard of SQL injection, though I've never actually done it, but is this injection? Are there different types of injection, and is this one of them?
Or is there another technical term for this? Or a more specific term?
Note: a prior edit had C:\>batinjection OFF & DEL c.c (i.e. without the ^) and ECHO %1 (i.e. without %*), which wasn't quite right. I have corrected it. It doesn't affect the answers.
Your example presents three interesting issues that are easier to understand
when separated.
First, Windows allows multiple statements to be executed on one line by
separating with "&". This could potentially be used in an injection attack.
Second, ECHO parses and interprets messages passed to it. If the message is
"OFF" or "/?" or even blank, then ECHO will provide a different expected
behavior than just copying the message to stdout.
Third, you know that it's possible to inject code into a number of
scriptable languages, including batch files, and want to explore ways
to recognize it so you can better defend against it in your code.
It would be easier to recognize the order in which things are happening
in your script if you add an echo statement before and after the one
you're trying to inject. Call it foo.bat.
@echo off
echo before
echo %1
echo after
Now, you can more easily tell whether your injection attempt executed at
the command line (not injection) or was executed as a result of parameter
expansion that broke out of the echo statement and executed a new statement
(injection).
foo dir
Results in:
before
dir
after
Pretty normal so far. Try a parameter that echo interprets.
foo /?
Results in:
before
Displays messages, or turns command-echoing on or off.
ECHO [ON | OFF]
ECHO [message]
Type ECHO without parameters to display the current echo setting.
after
Hmm. Help for the echo command. It's probably not the desired use of
echo in that batch file, but it's not injection. The parameters were
not used to "escape out" of the limits of either the echo statement or
the syntax of the batch file.
foo dog & dir
Results in:
before
dog
after
[A spill of my current directory]
Okay, the dir happened outside of the script. Not injection.
foo ^&dir/w
Results in:
before
ECHO is off.
[A spill of my current directory in wide format]
after
Now, we've gotten somewhere. The dir is not a function of ECHO, and is
running between the before and after statements. Let's try something
more dramatic but still mostly harmless.
foo ^&dir\/s
Yikes! You can pass an arbitrary command that can potentially impact
your system's performance all inside an innocuous-looking "echo %1".
Yes, it's a type of injection, and it's one of the big problems with batch files: mostly it isn't a purposeful attack; most of the time you simply get trouble with some characters or a word like OFF.
Therefore you should use techniques to avoid these problems/vulnerabilities.
In your case you could change your batch file to
set "param1=%*"
setlocal EnableDelayedExpansion
echo(!param1!
I use echo( here instead of echo. or something else, as it is the only form of echo known to be safe for any appended content.
I use the delayed expansion ! instead of percent expansion, as delayed expansion is always safe against any special characters.
To use delayed expansion you need to transfer the parameter into a variable first, and a good way is to use quotes around the set command; this avoids many problems with special characters (but not all).
But building an absolutely secure way to access batch parameters is considerably harder.
Trying to make this safe is tricky:
myBatch.bat ^&"&"
You could read SO: How to receive even the strangest command line parameters?
The main idea is to use the output of a REM statement while ECHO ON.
This is safe in the sense that you can't inject code (or rather: only with really advanced knowledge), but the original content can still be changed if it is something like:
myBatch.bat myContent^&"&"%a
Will be changed to myContent&"&"4
AFAIK, this is known as command injection (which is one type of code injection attack).
The latter link lists various injection attacks. The site (www.owasp.org) is an excellent resource regarding web security.
There are multiple applications of injection one can generalize as "language injection". SQL Injection and Cross Site Scripting are the most popular, but others are possible.
In your example, the ECHO statement isn't actually performing the delete, so I wouldn't call that injection. Instead, the delete happens outside of the invocation of the batinjection script itself.

exec() security

I am trying to add security to a GET query that is passed to the exec() function.
If I remove the escapeshellarg() function, it works fine. How can I fix this issue?
ajax_command.php
<?php
$command = escapeshellarg($_GET['command']);
exec("/usr/bin/php-cli " . $command);
?>
Assume $_GET['command'] value is run.php -n 3
What other security checks can I add?
You want escapeshellcmd (escape a whole command, or in your case, sequence of arguments) instead of escapeshellarg (escape just a single argument).
Notice that although you have taken special precautions, this code allows anyone to execute arbitrary commands on your server anyway, by specifying the whole PHP script in a -r option. Note that php.ini cannot be used to restrict this, since its location can be overridden with -c. In short (and with a very small error margin): this code creates a severe security vulnerability.
escapeshellarg returns a quoted value, so if the input contains multiple arguments it won't work; it ends up looking like a single string-like argument. You should probably look at splitting the command up into several different parameters; then each can be escaped individually.
It will fail unless there's a file called run.php -n 3. You don't want to escape a single argument, you want to escape a filename and arguments.
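To illustrate what the shell ends up seeing in each case (a sketch based on the value given in the question):
# with escapeshellarg(): one quoted token, treated as a single file name
/usr/bin/php-cli 'run.php -n 3'
# presumably intended: a script name plus separate arguments
/usr/bin/php-cli run.php -n 3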
This is not the proper way to do this. Have a single PHP script run all your commands for you, everything specified in command line arguments. Escape the arguments and worry about security inside that PHP file.
Or better yet, communicate through a pipe.
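A very rough sketch of the pipe idea (every name and path here is hypothetical; the point is that the web-facing code only writes a request name, and a separate worker decides what actually gets executed):
mkfifo /var/run/worker.fifo
while true; do
    if IFS= read -r request < /var/run/worker.fifo; then
        case $request in
            run-report) /usr/bin/php-cli /srv/app/run.php -n 3 ;;    # whitelisted action
            *)          printf 'unknown request: %s\n' "$request" >&2 ;;
        esac
    fi
done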
