Alias to record all commands and std output to a file - linux

I am looking for a way to get the documentation for my project work done more quickly. One thing that would help would be recording my history and each command's output to a file. However, I don't want to have this on all the time, and I would rather not have it as a toggle option, because of the risk of forgetting to turn it off and recording a load of junk that I would just have to go and delete later.
The idea I had was to create an alias, let's say 'verbatim', so that I could enter the command like so:
verbatim <command>
And then the alias would remove 'verbatim', take whatever was entered and prepend/append it with:
echo -n \[\$(date)\] >> output_file | echo "<command>" >> output_file | <command> | tee -a output_file | echo " " >> output_file
where the output will be:
<timestamp>
<command>
<outputOfTheCommand>
<newLine>
I could also add comments by entering:
verbatim #some comment to go in line
example:
verbatim #deploying the production stack upgrade
verbatim <someDeployCommand>
This way, by typing just one extra word per line, I can record everything that happens as I am doing a deployment, for example. That could take care of essentially all of my documentation for me, since it is saved to a file in order; all I have to do afterwards is remove anything that is irrelevant in hindsight. And the fact that all the data is timestamped means it could also speed up RCA if something goes wrong.
Thanks in advance, any and all advice welcome

I would just do my deployment as usual, and then add
tail -n 20 ~/.bash_history
and edit the 20 depending on the history size I want.

You should probably just use script, but you could do something like:
v() { { date; echo "$@"; "$@" | tee /dev/tty; echo; } >> ${OUTPUT-/tmp/output} 2>&1; }
or
v() { { date; echo "$@"; "$@"; echo; } 2>&1 | tee -a ${OUTPUT-/tmp/output}; }
(verbatim is too long, so I shortened it to v)
This won't handle comments at all; that would require writing a new parser, since the comment will never be seen by the function. But you can always echo "# some comment" >> $OUTPUT
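If you want something close to the comment shorthand from the question, a small companion function is enough, since you then never type the # on the command line at all. A minimal sketch, reusing the OUTPUT convention from above (the helper name vc is my own):
vc() { { date; echo "# $*"; echo; } >> "${OUTPUT-/tmp/output}"; }
vc deploying the production stack upgrade
The arguments still go through normal shell parsing, so a comment containing characters like ; or > needs quoting, e.g. vc 'rolling back step 3; see ticket'.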

Related

Invert grep of a lot of values, no consistent output

I'm currently working on setting up automated pentest reporting. The scripts I set up perform TLS and other security checks to determine whether the application is secure in these respects. I currently use the testssl.sh application (which can be found here: https://testssl.sh/) to perform these checks. I then output the findings to a CSV file and created a script that greps the file in question and, based on what is found, reports that something is either wrong or correct. Since I had performed a check in which all values were correct, I use inverted greps so that whenever a value cannot be found in the file, a certain action is performed.
At first I thought the script I was working on was working; however, when testing another site, the generated output is not correct. Things that are missing should be mentioned, yet it only seems to work when I invert grep a single term, without placing OR alternatives between the large number of things that need to be checked.
I have tried all sorts of grep variants to get consistent output, but no luck. So far, I have tried the following:
if grep -v -e "NULLciphersnoencryptionnotoffered" -e "AnonymousNULLCiphersnoauthenticationnotoffered" -e "ExportcipherswoADHNULLnotoffered" -e "LOW64BitDESencryptionwoexportnotoffered|" -e "Weak128BitciphersSEEDIDEARC24notoffered" -e "TripleDESCiphersMediumnotoffered" -e "HighencryptionAESCamellianoAEADoffered" -e "StrongencryptionAEADciphersoffered" ./resultaten/tls-cipher-suites-ng.csv; then
echo 'This is wrong' >> ../CH-40-Scans.tex
else
echo 'This is correct.' >> ../CH-40-Scans.tex
fi
What I see is that the above does not output 'This is wrong' but 'This is correct', while the following does trigger:
if ! grep -q -i "ipv6enabled" ./resultaten/tls-vulnerabilities-new-def.csv; then
echo '\item This is wrong.' >> ../CH-40-Scans.tex
fi
I already replaced the -e with the | variant, but I am not having any luck so far in finding a consistently working method (I also tried things such as egrep). Is there another way to get this working? I don't mind using things such as Java or PHP or whatever to get this working, so if those are needed to create something consistent, that would be fine.
I would gladly hear anything I could try to get a trustworthy working fix.
I don't know what it is you're trying to do but try these:
if awk '/NULLciphersnoencryptionnotoffered/ || \
/AnonymousNULLCiphersnoauthenticationnotoffered/ || \
/StrongencryptionAEADciphersoffered/ { f=1; exit }
END { exit !f }' ./resultaten/tls-cipher-suites-ng.csv; then
echo 'Present'
else
echo 'Absent'
fi
if awk -v RS='^$' '/NULLciphersnoencryptionnotoffered/ && \
/AnonymousNULLCiphersnoauthenticationnotoffered/ && \
/StrongencryptionAEADciphersoffered/ { f=1 }
END { exit !f }' ./resultaten/tls-cipher-suites-ng.csv; then
echo 'Present'
else
echo 'Absent'
fi
The first one will exit success if any of the "strings" are present, the second one will exit success if all of them are present. That second one requires GNU awk for multi-char RS.
This works, and may serve as an example (note I have commented out the redirection to /dev/null)
$> cat script
#!/bin/bash
e1="$1"
e2="$2"
if grep -v -e "$e1" -e "$e2" infile #>/dev/null
then
echo "found at least one line without the string(s)"
else
echo "found NO lines without all the string(s)!"
fi
$> cat infile
nabucco
aida
il trovatore
$> script a b
found NO lines without all the string(s)!
$> script z b
aida
il trovatore
found at least one line without the string(s)
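If the underlying goal is simply "report a problem when any of the expected strings is missing from the file", it may be easier to sidestep grep -v entirely (it tests individual lines, not the file as a whole) and check each string separately with grep -q. A minimal sketch along those lines, reusing the question's file and output paths with an abbreviated pattern list:
missing=0
for want in \
    "NULLciphersnoencryptionnotoffered" \
    "AnonymousNULLCiphersnoauthenticationnotoffered" \
    "StrongencryptionAEADciphersoffered"
do
    # -q: only the exit status matters, not the matching lines
    grep -q -e "$want" ./resultaten/tls-cipher-suites-ng.csv || missing=1
done
if [ "$missing" -eq 1 ]; then
    echo 'This is wrong' >> ../CH-40-Scans.tex
else
    echo 'This is correct.' >> ../CH-40-Scans.tex
fi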

Can I avoid using a FIFO file to join the end of a Bash pipeline to be stored in a variable in the current shell?

I have the following functions:
execIn ()
{
    local STORE_INvar="${1}" ; shift
    printf -v "${STORE_INvar}" '%s' "$( eval "$@" ; printf %s x ; )"
    printf -v "${STORE_INvar}" '%s' "${!STORE_INvar%x}"
}
and
getFifo ()
{
    local FIFOfile
    FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
    while [ -e "${FIFOfile}" ]
    do
        FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
    done
    mkfifo "${FIFOfile}"
    echo "${FIFOfile}"
}
I want to store the output of the end of a pipeline in a variable whose name is given to a function at the end of that pipeline. However, the only way I have found to do this that works in early versions of Bash is to use mkfifo to make a temporary FIFO file. I was hoping to use file descriptors to avoid having to create temporary files. So, this works, but is not ideal:
Set Up: (before I can do this I need to have assigned a FIFO file to a var that can be used by the rest of the process)
$ FIFOfile="$( getFifo )"
The Pipeline I want to persist:
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 # for e.g.
The action: (I can now add) >${FIFOfile} &
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 >${FIFOfile} &
N.B. The need to background it with & - Problem 1: I get [1] <PID_NO> output to the screen.
The actual persist:
$ execIn SOME_VAR cat - <${FIFOfile}
Problem 2: I get more noise to the screen
[1]+ Done printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 > ${FIFOfile}
Problem 3: I lose the blank lines at the start of the stream rather than at the end, as I have experienced before.
So, am I doing this the right way? I am sure there must be a way to avoid the need for a FIFO file (which needs cleanup afterwards) by using file descriptors, but I cannot seem to manage it, as I cannot assign either side of the problem to a file descriptor that is not attached to a file or a FIFO.
I can try and resolve the problems with what I have, although to make this work properly I guess I need to pre-establish a pool of FIFO files that can be pulled in to use or else I have a pre-req of establishing this file before the command. So, for many reasons this is far from ideal. If anyone can advise me of a better way you would make my day/week/month/life :)
Thanks in advance...
Process substitution was available in bash from the ancient days. You absolutely do not have a version so ancient as to be unable to use it. Thus, there's no need to use a FIFO at all:
readToVar() { IFS= read -r -d '' "$1"; }
readToVar targetVar < <(printf '\n\n123\n456\n524\n789\n\n\n')
You'll observe that:
printf '%q\n' "$targetVar"
...correctly preserves the leading newlines as well as the trailing ones.
By contrast, in a use case where you can't afford to lose stdin:
readToVar() { IFS= read -r -d '' "$1" <"$2"; }
readToVar targetVar <(printf '\n\n123\n456\n524\n789\n\n\n')
If you really want to pipe to this command, are willing to require a very modern bash, and don't mind being incompatible with job control:
set +m # disable job control
shopt -s lastpipe # in a pipeline, parent shell becomes right-hand side
readToVar() { IFS= read -r -d '' "$1"; }
printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 | readToVar targetVar
The issues you claim to run into with using a FIFO do not actually exist. Put this in a script, and run it:
#!/bin/bash
trap 'rm -rf "$tempdir"' 0 # cleanup on exit
tempdir=$(mktemp -d -t fifodir.XXXXXX)
mkfifo "$tempdir/fifo"
printf '\n\n123\n456\n524\n789\n\n\n' >"$tempdir/fifo" &
IFS= read -r -d '' content <"$tempdir/fifo"
printf '%q\n' "$content" # print content to console
You'll notice that, when run in a script, there is no "noise" printed to the screen, because all that status is explicitly tied to job control, which is disabled by default in scripts.
You'll also notice that both leading and trailing newlines are correctly represented.
One idea (tell me if I am crazy) might be to use the !! notation to grab the line just executed. If there is a command that can terminate a pipeline and stop it actually executing, while the shell still considers it a successful execution (I am thinking of something like the true command), I could then use !! to grab that line and call my existing function to execute it with process substitution or something. I could then wrap this into an alias, something like: alias streamTo=' | true ; LAST_EXEC="!!" ; myNewCommandVariation <<<' which I think could be used something like: $ cmd1 | cmd2 | myNewCommandVariation THE_VAR_NAME_TO_SET, and the <<< from the alias would pass the var name to the command as an arg or stdin; either way, the command would no longer be at the end of a pipeline. How mad is this idea?
Not a full answer but rather a first point: is there some good reason not to use mktemp for creating a new file with a random name? As far as I can see, your getFifo() function doesn't do much more than that.
mktemp -u
will give you a fresh, unused name without creating anything; then you can use mkfifo with that name.
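A minimal sketch of that substitution, assuming GNU mktemp (the template name is just illustrative):
getFifo ()
{
    local FIFOfile
    FIFOfile="$( mktemp -u -t diamondLang-FIFO.XXXXXX )"   # print a free name, create nothing
    mkfifo "${FIFOfile}"                                   # fails if something else grabbed the name first
    echo "${FIFOfile}"
}
Because -u only prints a name, there is a small window between the two commands in which the name could be taken; mkfifo erroring out in that case is the safety net, and the mktemp -d plus mkfifo pattern in the answer above avoids the race entirely.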

How to execute Linux shell variables within double quotes?

I have the following hacking challenge, where we don't know if there is a valid solution.
We have the following server script:
read s # read user input into var s
echo "$s"
# tests if it starts with 'a-f'
echo "$s" > "/home/user/${s}.txt"
We only control the input "$s". Is there a possibility of sending OS commands like uname, or do you think "no way"?
I don't see any avenue for executing arbitrary commands. The script quotes $s every time it is referenced, so that limits what you can do.
The only serious attack vector I see is that the echo statement writes to a file name based on $s. Since you control $s, you can cause the script to write to some unexpected locations.
$s could contain a string like bob/important. This script would then overwrite /home/user/bob/important.txt if executed with sufficient permissions. Sorry, Bob!
Or, worse, $s could be bob/../../../etc/passwd. The script would then try to write to /home/user/bob/../../../etc/passwd.txt, which resolves to /etc/passwd.txt. If the script is running as root... uh oh!
It's important to note that the script can only write to these places if it has the right permissions.
You could embed unusual characters in $s that would cause irregular file names to be created. Careless scripts could be taken advantage of. For example, if $s were foo -rf . bar, then the file /home/user/foo -rf . bar.txt would be created.
If someone ran for file in /home/user/*; do rm $file; done they'd have a surprise on their hands. They would end up running rm /home/user/foo -rf . bar.txt, which is a disaster. If you take out /home/user/foo and bar.txt, you're left with rm -rf .: everything in the current directory is deleted. Oops!
(They should have quoted "$file"!)
And there are two other minor things which, while I don't know how to take advantage of them maliciously, do cause the script to behave slightly differently than intended.
read allows backslashes to escape characters like space and newline. You can enter \space to embed spaces and \enter to have read parse multiple lines of input.
echo accepts a couple of flags. If $s is -n or -e then it won't actually echo $s; rather, it will interpret $s as a command-line flag.
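A quick illustration of both quirks, as a sketch (the exact behaviour of echo depends on the shell builtin in use):
printf 'hello\\ world\n' | { read s; echo "$s"; }   # read eats the backslash: prints "hello world"
s='-n'
echo "$s"   # bash's builtin echo treats -n as a flag: prints nothing, not even a newline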
Use read -r s, or any \ will be lost/misinterpreted by your command.
read -r -p "Your input: " s
if [ -n "${s}" ]
then
    # "filter" a file name out of the command (first word only)
    OutPut=$( echo "${s##*/}" | sed 's|^ *\([[:alnum:]_]\{1,\}\)[[:blank:]].*|/home/user/\1.txt|' )
    (
        # put any limitation on the user here
        ulimit -t 5 1>/dev/null 2>&1
        ${s}
    ) > "${OutPut}"
else
    echo "Bad command" > /home/user/Error.txt
fi
Sure:
read s
$s > /home/user/"$s".txt
If I enter uname, this writes Linux to /home/user/uname.txt. But beware: this is a security nightmare. What if someone enters rm -rf $HOME? You'd also have issues with commands containing a slash.

Linux script trying to remove the 'return' in a file

I'm trying to write a pretty basic script in Linux shell but I'm still learning. Basically, everything is good to go except one part. I direct two outputs into the same file, e.g.:
echo `losetup -a` > partitionfile
echo "p1" >> partition final
Basically, I need to add the letter/number "p1" to the end of whatever is written in the file.
The problem is, it ends up being read (cat partitionfile) as:
/dev/loop0
p1
I need it on the same line so it reads out as:
/dev/loop0p1
There has to be a way to fix this, I just don't know it. Any help would be much appreciated!
Thanks!
I would go for:
echo "$(losetup -a)p1" > partitionfile
For an example, see the following transcript:
pax> echo "$(echo xyzzy_)p1"
xyzzy_p1
The xyzzy_ is the output of the inner echo command (which in your case would be losetup) and the outer echo command appends p1.
Actually, the echo escape sequence that achieves this is "\c".
\c stops output at that point, which keeps the cursor on the same line.
However, you cannot use \c unless you have enabled escape interpretation with
-e
Thus your code should be something like this ...
echo -e "`losetup -a` \c" > partitionfile
echo "p1" >> partition final
This will write to partitionfile as
< output of losetup -a > p1
with everything on the same line.
You can pass the -n flag to the first echo statement so that it does not print the trailing newline.
Ref: http://linux.die.net/man/1/echo
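Something like this, as a sketch of the -n variant (same file name as in the question):
echo -n "$(losetup -a)" > partitionfile
echo "p1" >> partitionfile
The command substitution already strips losetup's trailing newline, and -n keeps echo from adding its own, so p1 lands on the same line.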

Grep filtering output from a process after it has already started?

Normally when one wants to look at specific output lines from running something, one can do something like:
./a.out | grep IHaveThisString
but what if IHaveThisString is something which changes every time, so you need to first run it, watch the output to catch what IHaveThisString is on that particular run, and then grep it out? I could just dump to a file and grep it later, but is it possible to do something like backgrounding it and then bringing it back to the foreground, now piped to some grep? Something akin to:
./a.out
Ctrl-Z
fg | grep NowIKnowThisString
just wondering..
No, it is only in your screen buffer if you didn't save it in some other way.
Short form: You can do this, but you need to know that you need to do it ahead-of-time; it's not something that can be put into place interactively after-the-fact.
Write your script to determine what the string is. We'd need a more detailed example of the output format to give a better example of usage, but here's one for the trivial case where the entire first line is the filter target:
run_my_command | { read string_to_filter_for; fgrep -e "$string_to_filter_for"; }
Replace the read string_to_filter_for with as many commands as necessary to read enough input to determine what the target string is; this could be a loop if necessary.
For instance, let's say that the output contains the following:
Session id: foobar
and thereafter, you want to grep for lines containing foobar.
...then you can pipe through the following script:
re='Session id: (.*)'
while read; do
    if [[ $REPLY =~ $re ]] ; then
        target=${BASH_REMATCH[1]}
        break
    else
        # if you want to print the preamble; leave this out otherwise
        printf '%s\n' "$REPLY"
    fi
done
[[ $target ]] && grep -F -e "$target"
If you want to manually specify the filter target, this can be done by having the loop check for a file being created with filter contents, and using that when starting up grep afterwards.
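A rough sketch of that manual variant, assuming you drop the string into a file once you have spotted it in the output (the /tmp/filter-target path is just an example):
target_file=/tmp/filter-target
while IFS= read -r line; do
    printf '%s\n' "$line"              # keep echoing output until a target has been supplied
    [[ -s $target_file ]] && break
done
[[ -s $target_file ]] && grep -F -e "$(< "$target_file")"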
What you need is a little bit strange, but you can do it this way:
you must go into a script session first;
then use the shell as you usually would;
then start and interrupt your program;
then run grep over the typescript file.
Example:
$ script
$ ./a.out
Ctrl-Z
$ fg
$ grep NowIKnowThisString typescript
You could use a stream editor such as sed instead of grep. Here's an example of what I mean:
$ cat list
Name to look for: Mike
Dora 1
John 2
Mike 3
Helen 4
Here we find the name to look for in the first line and want to grep for it. Now piping the command to sed:
$ cat list | sed -ne '1{s/Name to look for: //;h}' \
> -e ':r;n;G;/^.*\(.\+\).*\n\1$/P;s/\n.*//;br'
Mike 3
Note: sed itself can take a file as a parameter, but since you're not working with a file here, that's how you'd use it.
Of course, you'd need to modify the command for your case.
