What can I use to capture every command I run in bash (à la history) - linux

I know history will capture the commands that I run, but it is shell specific. I work with multiple shells and multiple hosts, and would like to write a small script which, after every command I run, dumps that command to some file along with the host name. This way I can implement my own history command which reads from that file and can take a host as an argument, which would be handy for me. I'm not sure how to get the first part, though, i.e., how to get every shell command I type to trigger a "dump that command into a file" step. Any ideas?
Thanks

In bash, the PROMPT_COMMAND shell variable holds a command that is executed before each PS1 prompt is displayed. So yours could be something like
history | tail -n1 | perl -npe 's/^\s+\d+\s+//' | yourcommand HOST
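For instance, a minimal sketch of that approach for ~/.bashrc; the log file ~/.full_history and the line format are my own choices, not part of the original answer:
# Runs before each prompt: append "host timestamp command" to a file.
# history 1 prints the most recent history entry; sed strips its leading number.
log_last_command () {
    local cmd
    cmd=$(history 1 | sed 's/^ *[0-9]* *//')
    printf '%s %s %s\n' "$(hostname)" "$(date '+%F %T')" "$cmd" >> ~/.full_history
}
PROMPT_COMMAND=log_last_command
Note that this fires on every prompt, so the last entry gets re-logged if you just press Enter on an empty line; deduplicating is left to the reader.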

The script utility should solve your problem. It records everything you type and everything printed on the terminal in a file (including terminal control codes, so if you cat that file on the console you even reproduce the original text colors).

Related

How to take a continuous backup of Linux GNOME terminal logs? (commands and the output of those commands)

I want to take a continuous backup of the logs printed in my Linux terminal. Is it possible that whenever something is printed in my terminal, it automatically gets written to a text file with a timestamp?
Use the script command, i.e.
script log.txt
at the start of your session. You can also add this to your bash profile so that it starts when you open a terminal, etc. (a guarded version of that is sketched below). You need to use
script -a log.txt
to append. Don't try to cat or tail the file while you're still in the session; press Ctrl-D first, then have a look at what got logged.
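If you do start it from your profile, guard against the shell that script spawns re-running script forever. A minimal sketch for ~/.bash_profile; the SCRIPT_LOGGING guard variable and the log path are my own choices, not part of the original answer:
# Start logging once per terminal; the exported guard stops the
# shell that script forks from starting script again.
if [ -z "$SCRIPT_LOGGING" ]; then
    export SCRIPT_LOGGING=1
    exec script -a "$HOME/log.txt"
fi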

Store all terminal data to a text file with the tee command or an equivalent tool

I have learnt that the tee command will store STDOUT to a file as well as output it to the terminal.
But the problem here is that I have to invoke tee for every single command I run.
Is there any way or tool in Linux so that whatever I run in the terminal, it stores both the command and its output? (I used the tee command in MySQL, where it stores all the commands and output of the entire session to a file. I am expecting a tool similar to that.)
Edit:
When I run script -a log.txt, I see ^M characters as well as ^[ and ^] characters in the log.txt file. I tried dos2unix and the vim commands :set ff=unix and :set ff=dos, but they didn't help me remove these ^[ and ^] characters.
Is there any method to directly get a plain text file (without these extra characters)?
OS: RHEL 5
You can use the script command, which writes everything to a file:
script -f log.txt
(-f flushes the output after each write, so the log can be followed from another terminal.)
You could use aliases, such as alias ls="ls; echo ls >> log", so every time you run ls it runs echo ls >> log too.
But script would probably be better in this case; just don't go into vi while you are in script.
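To address the edit about the ^M and ^[ characters: one hedged cleanup is to strip carriage returns and ANSI escape sequences from the finished log. This catches the common cases, though not every control code script can record:
tr -d '\r' < log.txt | sed 's/\x1b\[[0-9;]*[A-Za-z]//g' > log-clean.txt
(\x1b is a GNU sed extension; with another sed, type a literal Escape character with Ctrl-V Esc instead.)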

How do I pipe the output of an ls on a remote server to the local filesystem via SFTP?

I'm logged into a remote server via SFTP at the command line. The folder I'm in contains hundreds of thousands of files. I need to get a list of these files in a text file so I can access them programmatically, as none of the PHP SFTP clients are able to return such a large list of files.
When I run an ls on the directory (within the SFTP session), it takes about 20 minutes for the file list to finally display.
I don't have write access on this server, so I can't pipe the output to a file on the remote server.
How can I pipe the output to a text file on my local machine ... or get a list of the files to my local machine some other way?
If you're willing to wait the 20 minutes for the data to scroll across your screen, you can capture all the output using script.
Run script before you start your ssh or sftp session and it will capture all terminal output to your local disk. Type exit to finish the capture.
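A minimal transcript of that workflow (the file name sftp-ls.txt is just an example):
script sftp-ls.txt
sftp user@remotehost
sftp> ls
sftp> quit
exit
The final exit ends the capture, and the directory listing is then sitting in sftp-ls.txt on your local machine.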
NAME
     script -- make typescript of terminal session

SYNOPSIS
     script [-akq] [-t time] [file [command ...]]

DESCRIPTION
     The script utility makes a typescript of everything printed on your
     terminal. It is useful for students who need a hardcopy record of an
     interactive session as proof of an assignment, as the typescript file
     can be printed out later with lpr(1).

     If the argument file is given, script saves all dialogue in file. If no
     file name is given, the typescript is saved in the file typescript.

     If the argument command is given, script will run the specified command
     with an optional argument vector instead of an interactive shell.

     The following options are available:

     -a      Append the output to file or typescript, retaining the prior
             contents.

     -k      Log keys sent to program as well as output.

     -q      Run in quiet mode, omit the start and stop status messages.

     -t time
             Specify the time interval between flushing the script output
             file. A value of 0 causes script to flush for every character
             I/O event. The default interval is 30 seconds.

     The script ends when the forked shell (or command) exits (a control-D
     to exit the Bourne shell (sh(1)), and exit, logout or control-D (if
     ignoreeof is not set) for the C-shell, csh(1)).

     Certain interactive commands, such as vi(1), create garbage in the
     typescript file. The script utility works best with commands that do
     not manipulate the screen. The results are meant to emulate a hardcopy
     terminal, not an addressable one.

ENVIRONMENT
     The following environment variable is utilized by script:

     SHELL   If the variable SHELL exists, the shell forked by script will
             be that shell. If SHELL is not set, the Bourne shell is
             assumed. (Most shells set this variable automatically.)

SEE ALSO
     csh(1) (for the history mechanism).

HISTORY
     The script command appeared in 3.0BSD.

BUGS
     The script utility places everything in the log file, including
     linefeeds and backspaces. This is not what the naive user expects.

     It is not possible to specify a command without also naming the script
     file because of argument parsing compatibility issues.

     When running in -k mode, echo cancelling is far from ideal. The slave
     terminal mode is checked for ECHO mode to decide when to avoid manual
     echo logging. This does not work in a raw mode where the program being
     run is doing manual echo.
Wu's answer is good if you do it remotely. Here is another option if you are logged onto the remote server and want to send the file back home to yourself:
The proper answer is here: http://scratching.psybermonkey.net/2011/02/ssh-how-to-pipe-output-from-local-to.html
your_command | ssh username@server "cat > filename.txt"
If you have ssh access, that would be very easy:
ssh user@server ls > foo.txt
Otherwise, you can just redirect sftp's STDOUT and STDERR to a file. You have to type the password and commands blindly, though.
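If typing blindly is a concern, a hedged alternative is OpenSSH sftp's batch mode, assuming key-based authentication (batch mode disables interactive password prompts):
echo "ls /remote/dir" > cmds.txt
sftp -b cmds.txt user@server > listing.txt
The listing, along with sftp's echoed commands, ends up in listing.txt on the local machine.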
In my case the following worked:
ssh user@server ls /path/to/source/folder/ > /path/to/destination/folder/filenames.txt
I wrote it in Git Bash. This will first ssh in, then list all files of the source folder, and then save the file names to the destination text file.
The destination file's extension is arbitrary; the output is plain text either way.
For appending output, just put ">>" instead of ">".

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I log into some remote computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I want the shell output written to a text file output.txt as well (since I can't be logged into the remote computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This lets me look at output.txt to see the current status even though the program hasn't finished its run yet. The above command works fine on my local machine. I first tried plain '>' redirection, but the problem there was that nothing was written to the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with tee.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command, and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit the nohup and '&', I don't even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility (http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen) instead of nohup. That way I can detach the session (Ctrl-A, Ctrl-D), reconnect to the host, retrieve my screen session (screen -r),
and monitor my output as if I had never logged out.
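A sketch of that workflow on the remote host (the session name sim is my own choice):
screen -S sim
program.exe | tee output.txt
Detach with Ctrl-A d, log out, and later reattach with screen -r sim; nohup and '&' are no longer needed, since screen keeps the process alive. If the delay is really stdio buffering in the program, GNU coreutils' stdbuf may also help, assuming the program does its output through C stdio:
stdbuf -oL program.exe | tee output.txt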

Limit output of all Linux commands

I'm looking for a way to limit the amount of output produced by all command-line programs in Linux, and preferably to be told when the output has been truncated.
I'm working over a connection to a server which has a lag on the display. Occasionally I will accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output through a command like head or wc to prevent too much output from having to be printed to the terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?) you could do the following:
hard-link a copy of the existing utility under a new name;
write a tiny bash function that calls the renamed utility and pipes to head (or wc, or whatever);
alias the name of the utility to call your function.
So along these lines (utterly untested):
$ ln "$(which cat)" ~/bin/old_cat
trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
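With that in place (and assuming ~/bin is on your PATH), cat somefile now prints at most 100 lines; use \cat or command cat to bypass the alias when you want the full output. One caveat: a hard link only works if the utility and ~/bin are on the same filesystem; otherwise use ln -s.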
Making aliases of all your commands would be a good start. Something like
alias lm="ls -al | more"
Aliases can't reference their arguments, though ($# is the argument count, not the argument list), so a cat wrapper needs a function instead:
cam () { cat "$@" | more; }
Perhaps using screen could help?
This makes me think of bash's command_not_found_handle.
Since bash lets you define a handler function that is invoked when a program is not found,
what about writing your own handler and clearing $PATH, in order to execute every command with its output redirected through a filtering pipe?
(I did not try it myself.)
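A rough sketch of that idea, assuming bash 4+, where the shell invokes a command_not_found_handle function whenever PATH lookup fails. The REAL_PATH variable and the 100-line cap are my own choices, and this is as untested as the original suggestion:
# Save the working PATH, then empty it so every external command
# "fails" lookup and falls through to the handler below.
REAL_PATH="$PATH"
PATH=""
command_not_found_handle () {
    local PATH="$REAL_PATH"   # restore a usable PATH inside the handler only
    "$@" | head -n 100        # run the original command, truncated
}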
Assuming you're working over a network connection, like ssh, into a remote server, try piping the output of the command to less. That way you can manage and navigate the output from the program on the server better. Use 'j' and 'k' to move up and down line by line and 'ctrl-u' and 'ctrl-d' to move half a page up and down. When you do this, only the relevant text (i.e. what fits on the screen) is transmitted over the network.
