What mechanism allows the Linux less command to read an encrypted gpg file?

After encrypting a file using symmetric encryption, I decided to confirm that the output was encrypted by typing:
gpg -c --force-mdc --s2k-mode 3 --s2k-count 65011712 --output doc.gpg doc.txt
less doc.gpg
To my astonishment, the less command automatically decrypted the contents of doc.gpg and displayed them to me, rather than displaying the raw encrypted contents of the file. This happens only with the "less" command, not with the "cat" command. If "less doc.gpg" is run on a different machine, a dialog pops up asking for the passphrase.
Could anyone please explain what mechanism is causing gpg to integrate automatically with the "less" command, and what other commands this automatic integration will occur with? Thanks!

Start with this:
$ man less
Read a bit, and find:
INPUT PREPROCESSOR
You may define an "input preprocessor" for less. Before less opens a
file, it first gives your input preprocessor a chance to modify the way
the contents of the file are displayed...
...To set up an input preprocessor, set the LESSOPEN environment variable
to a command line which will invoke your input preprocessor. This command
line should include one occurrence of the string "%s", which will be
replaced by the filename when the input preprocessor command is
invoked.
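For example, pointing LESSOPEN at a tiny script that decrypts *.gpg files reproduces the behaviour described. This is a minimal sketch; the /tmp/lessfilter path is an arbitrary choice (real systems ship something like lesspipe.sh), and gpg will still prompt for the passphrase through its agent:

```shell
# Minimal sketch of an input preprocessor. The /tmp/lessfilter path
# is illustrative; distributions install something like lesspipe.sh.
cat > /tmp/lessfilter <<'EOF'
#!/bin/sh
# Print a replacement rendering to stdout; printing nothing makes
# less fall back to showing the raw file.
case "$1" in
    *.gpg) gpg --decrypt -- "$1" 2>/dev/null ;;
esac
EOF
chmod +x /tmp/lessfilter
export LESSOPEN="|/tmp/lessfilter %s"
```

The leading `|` tells less to read the preprocessor's stdout directly instead of expecting it to write a temporary file.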

Related

Bash (or other shell): wrap all commands with function/script

Edit: This question was originally bash specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell then that would be useful to know as well!
Okay, top-level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and let it be handled by another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them work well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory data analysis like procedures after the fact. With richer information like this, it would be possible to generate decent reporting on what happened, squashing multiple invocations of one command into one where the first few gave non-zero exits, asking where files came from by searching for everything that touched the file, etc etc.
Run this bash script:
#!/bin/bash
# Read one command line at a time (with readline editing) and hand
# it to the wrapper. -r keeps backslashes literal.
while IFS= read -e -r line
do
    wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting timings, so maybe use time eval "$line" instead. You wanted to capture the exit status, so follow the eval with save=$?. And you wanted the first few lines of stdout, so some redirecting is in order. And so on.
MORE: Jo So suggests that handling for multi-line bash commands be included. In its simplest form, if eval fails with "syntax error: unexpected end of file", prompt for another line of input before proceeding. Better yet, check for a complete bash command by running bash -n <<<"$line" before the eval; if bash -n reports an end-of-file error, prompt for more input to append to $line. And so on.
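A wrapper function along those lines might look like this. It's a sketch under the assumptions above: the log record is simply printed to stderr rather than sent to a daemon, and stdin/stdout capture is left out.

```shell
#!/bin/bash
# Sketch of the wrapper: collect a complete command, run it, and
# record timing and exit status. Shipping the record to a listening
# daemon is omitted; it just goes to stderr here.
wrapper() {
    local line=$1 start end status
    # Keep prompting until the accumulated input parses as complete bash.
    while ! bash -n <<<"$line" 2>/dev/null; do
        read -e -p '> ' more || break
        line+=$'\n'$more
    done
    start=$(date +%s)
    eval "$line"
    status=$?
    end=$(date +%s)
    printf 'cmd=%s status=%d elapsed=%ds\n' "$line" "$status" "$((end - start))" >&2
    return "$status"
}
```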
Binfmt_misc comes to mind. The Linux kernel has a capability that allows arbitrary executable file formats to be recognized and passed to a user-space application.
You could use this capability to register your wrapper, but instead of handling one particular executable format, it would handle all executables.

Add password protection to a file edited in vim with read, write and execute permission

I have test.pl and want this file password-protected. Using vim -x filename I was able to password-protect it, but then I could no longer execute or compile the file. Is there any way to keep the file executable?
vim test.pl
#!/usr/bin/perl
print "Hello Ram";
I got following error:
syntax error at text line 1, near "VimCrypt~"
Unrecognized character \x19 at text line 1.
The format used by vim's password protection can only be read by vim, and only with the password. perl can't read it.
You'll need to come up with some other way of implementing the access controls you're after, but without some idea of what your end goal is, we can't really help you.

Need help writing bash script, how to take a pw as argument and automate its entry

I am writing a bash script to automate a gpg decryption process. The script takes a password as one of its arguments. I need a way for the script to supply that password automatically when the gpg decryption prompts for it.
I'd read the variable in using read (in silent mode, so that the characters typed aren't echoed), then use it. Note that gpg's option is --passphrase, not --password, and GnuPG 2.1+ also needs --batch with --pinentry-mode loopback to accept it:
Example:
echo -n "Enter password: "
read -r -s password; echo
gpg ... --batch --pinentry-mode loopback --passphrase "$password"
I think you are saying your script runs a program that requires a password to be typed in. To handle this, I have used Expect. To use expect from bash, run expect -c with the expect body in single quotes. An expect fragment looks like:
expect "password prompt:" { send "password\r" }
where expect waits for the string "password prompt:" and sends the password when it reads it.

vim shell command integration

Ok, I have a rather simple question: how can one bind a vim command/hotkey to execute some complicated shell script?
E.g. I want to optimize base64-inlined images inside css files. I know, that in shell it would be something like:
echo `selection` > /tmp/img.png.b64
base64 -d /tmp/img.png.b64 > /tmp/img.png
optipng -o7 /tmp/img.png
base64 -w 0 /tmp/img.png > `selection`
I want to put selection into the script and then get result of script execution and replace selected content with that result.
I see the workflow as selecting base64 part in visual block mode and type e.g. :'<,'>optipng or press some hotkey.
The question is how to setup vim to do that.
Vim lets you filter line(s) through an external command with :[range]!{cmd}. If your optipng command can take input from stdin and print to stdout, you can use it directly; otherwise, use a small shell script wrapper. See :help :range! for details.
One limitation is that this only works for whole lines, not parts, even when visually selected. You can get around this with the vis plugin; it would then be something like:
:'<,'>B !optipng -o7
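Since optipng works on files rather than as a plain stdin-to-stdout filter, the small wrapper mentioned above might look like this. The /tmp/b64optipng name is an assumption, and the script simply skips optimization when optipng is absent:

```shell
# Sketch of a stdin/stdout filter for vim's :[range]! command:
# decode the selected base64, optimize if optipng is available,
# re-encode onto a single line.
cat > /tmp/b64optipng <<'EOF'
#!/bin/sh
tmp=$(mktemp) || exit 1
base64 -d > "$tmp"                     # selection arrives on stdin
command -v optipng >/dev/null 2>&1 && optipng -o7 -quiet "$tmp"
base64 -w 0 "$tmp"                     # single-line result to stdout
echo
rm -f "$tmp"
EOF
chmod +x /tmp/b64optipng
```

From visual mode the filter is then invoked as :'<,'>!/tmp/b64optipng.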

Securely using password as bash argument

I'm extracting a part of a web application that handles the signup, the other part will be rewritten.
The idea is the signup part can exist as a separate application, interface with the rest of the application for creating and setting up the account. Obviously there are a ton of ways to do this, most of them network based solutions like SOAP, but I'd like to use a simpler solution: a setup script.
The concern is that certain sensitive data, specifically the admin password of the new account, would be passed through bash.
I was thinking of sharing a small class between the applications, so that the password can be passed already hashed, but I would also have to pass the salt, so it still seems like a (small) security risk. One of the concerns is bash logging (can I disable that for a single command?) but I'm sure there are other concerns as well?
This would be on the same private server, so the risk seems minimal, but I don't want to take any chances whatsoever.
Thanks.
One option is to unset the HISTFILE environment variable, so that bash never writes its history to disk. Appending this to /etc/profile does it for all users:
echo "unset HISTFILE" >> /etc/profile
Set HISTFILE again when you want history saved.
More info on $HISTFILE variable here: http://linux.about.com/cs/linux101/g/histfileenviron.htm
Hope this helps!
From the man page of bash:
HISTIGNORE
      A colon-separated list of patterns used to decide which command
      lines should be saved on the history list. Each pattern is
      anchored at the beginning of the line and must match the
      complete line (no implicit ‘*’ is appended). Each pattern is
      tested against the line after the checks specified by
      HISTCONTROL are applied. In addition to the normal shell
      pattern matching characters, ‘&’ matches the previous history
      line. ‘&’ may be escaped using a backslash; the backslash is
      removed before attempting a match. The second and subsequent
      lines of a multi-line compound command are not tested, and are
      added to the history regardless of the value of HISTIGNORE.
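For instance, history exclusion for passphrase-bearing commands might be configured like this (the patterns are illustrative):

```shell
# Skip any command mentioning a passphrase/password option, and also
# (via HISTCONTROL) any command typed with a leading space.
export HISTIGNORE='*--passphrase*:*--password*'
export HISTCONTROL=ignorespace
```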
Or, based on your comment, you could store the password in a protected file, then read from it.
Passing the salt in the clear is no problem (the salt is usually stored in the clear anyway); the purpose of the salt is to avoid the same password always hashing to the same value (without a salt, users with the same password would have the same hash, and a rainbow table would only need a single entry per possible password).
What is more problematic is passing sensitive data through command line arguments: an eavesdropper on the same box can see the arguments to any command (on Linux they appear in /proc/<pid>/cmdline, and on most Unixes can be seen using ps; some systems restrict permissions on /proc/<pid>/ to the owner of the process for security).
What you could do is pass the sensitive information through a file, don't forget to set the umask to a very restrictive setting before creating the file.
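A sketch of that hand-off (ADMIN_PASSWORD and setup.sh are placeholders for your own variable and script):

```shell
# Restrict permissions *before* the file exists, then pass the file
# name to the other application instead of the password itself.
umask 077                          # files created from here on: owner-only
secret_file=$(mktemp)              # ends up mode 0600
printf '%s\n' "$ADMIN_PASSWORD" > "$secret_file"
# ./setup.sh --password-file "$secret_file"    # placeholder invocation
rm -f "$secret_file"               # remove as soon as it has been read
```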
Bash doesn't normally log commands executed in scripts, but only in interactive sessions (depending on appropriate settings). To show this, use the following script:
#!/bin/bash
echo "-- shopt --"
shopt | grep -i hist
echo "-- set --"
set -o | grep -i hist
echo "-- vars --"
for v in ${!HIST*}
do
    echo "$v=${!v}"
done
Run it like this:
$ ./histshow
and compare the output to that from sourcing it like this:
$ . ./histshow
In the first case take note that HISTFILE is not set and that the set option history is off. In the second case, sourcing the script runs it in your interactive session and shows what your settings are for it.
I was only able to make a script keep an in-memory history by doing set -o history within the script and to log its history to a file by also setting HISTFILE and then doing an explicit history -w.
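Put together, the combination just described looks like this (the /tmp/script_history path is arbitrary):

```shell
#!/bin/bash
# Enable in-script history recording and flush it to a file, as
# described above.
HISTFILE=/tmp/script_history
set -o history            # start recording commands in memory
echo "doing some work" > /dev/null
history -w                # write the in-memory history to $HISTFILE
```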

Resources