I'm trying to monitor command execution on a shell.
I need to separate each input command from its output, for example:
input:
ls -l /
output:
total 76
lrwxrwxrwx 1 root root 7 Aug 11 10:25 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Aug 11 11:18 boot
drwxr-xr-x 17 root root 3200 Oct 11 11:10 dev
...
Also, I want to do the same if I open another shell, for example after connecting through SSH to another server.
I've been using the script command to do this and it works just fine!
It logs all command input and output even if the shell changes (through ssh, or by entering msfconsole, for example).
Nevertheless, I found two main issues:
For my project, I need to separate (using a decoder) each command from the rest; it would also be great to separate each command's input from its output, for example:
cmd1. pwd ---> /var/
cmd2. echo "hello world" ---> "hello world"
....
Sometimes the script command generates garbage in the output due to shell escape sequences (for colors, for example), which I would like to filter out.
So I've been thinking about this, and I guess I could create a simple script that reads the file written by the script command and processes the data.
Nevertheless, I'm not sure what the best approach would be.
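Something like this minimal sketch for the filtering part (assuming GNU sed, and that the log written by script is named typescript):
# strip ANSI color/control sequences and carriage returns from the log
sed -e 's/\x1b\[[0-9;]*[a-zA-Z]//g' typescript | tr -d '\r' > typescript.clean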
I'm evaluating different solutions and would like to hear proposals from the community.
Maybe I'm missing something and you know a better tool than script, or you have an idea I haven't considered.
Best regards,
A useful utility for distinguishing stdout from stderr is annotate-output (install the "devscripts" package), which sends both stderr and stdout to stdout along with helpful little prefixes. For example, let's try counting characters of a file that exists, plus one that doesn't exist:
annotate-output wc -c /bin/bash /bin/nosuchshell
Output:
00:29:06 I: Started wc -c /bin/bash /bin/nosuchshell
00:29:06 E: wc: /bin/nosuchshell: No such file or directory
00:29:06 O: 1099016 /bin/bash
00:29:06 O: 1099016 total
00:29:06 I: Finished with exitcode 1
That output could be parsed separately using sed, awk, or even a tee and a few greps.
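For instance, a rough sketch that splits the stream into two files by prefix (stdout.log and stderr.log are arbitrary names):
annotate-output wc -c /bin/bash /bin/nosuchshell \
  | awk '$2 == "O:" { print > "stdout.log" } $2 == "E:" { print > "stderr.log" }'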
I am trying to run a bash script from a script called dev_ro; here is how it's being called:
export SUBNET="$(first_available_docker_network --lock-seconds 7200)"
I am calling dev_ro by running ./dev_ro
I can confirm that both files have
#!/bin/bash
at the top.
Here are the permissions for both files:
$ ls -lh dev_ro
-rwxrwxr-x 1 ME ME 423 Aug 21 15:57 dev_ro
$ ls -lh first_available_docker_network
-rwxrwxr-x 1 ME ME 2.2K Aug 21 15:55 first_available_docker_network
This is the output from running ./dev_ro
++ first_available_docker_network --lock-seconds 7200
compose/everest-compose: line 25: first_available_docker_network: command not found
Additionally, when I try to run the script directly:
ME#SERVER:~/Rosetta/compose$ first_available_docker_network
first_available_docker_network: command not found
ME#SERVER:~/Rosetta/compose$
I have the same setup running on a different server and it's working. The code was pulled from Git, so it's the same codebase.
Any help is much appreciated.
ME#OTHER_SERVER:~/Rosetta/compose$ first_available_docker_network
DEBUG:root:Docker subnets: [IPv4Network(... etc
ME#OTHER_SERVER:~/Rosetta/compose$ ^C
first_available_docker_network is not a standard Linux command, so it must be your custom script. Try executing it using its absolute path. For example, instead of using
ME#SERVER:~/Rosetta/compose$ first_available_docker_network
use
ME#SERVER:~/Rosetta/compose$ absolute_path_of_script/first_available_docker_network
Alternatively, you can add the directory containing first_available_docker_network to your PATH variable.
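For example (assuming the script lives in ~/Rosetta/compose, the directory shown in your prompt):
$ export PATH="$HOME/Rosetta/compose:$PATH"
$ first_available_docker_network --lock-seconds 7200
Or call it relative to the current directory as ./first_available_docker_network.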
I am executing "wget -o", and because the output is bigger than expected, it is split into more than one file. Is there a way to get only one file? If possible, I would prefer to use only wget.
The wget command I am executing is:
$ wget -o neighborhoods.json https://raw.githubusercontent.com/mongodb/docs-assets/geospatial/neighborhoods.json
And the multiple output is:
-rw-rw-r-- 1 ubuntu ubuntu 6652 Mar 4 01:15 neighborhoods.json
-rw-rw-r-- 1 ubuntu ubuntu 4137081 Mar 4 01:15 neighborhoods.json.1
Look carefully at the wget output and you will see what it is doing. wget does not split files if they are long; instead, it avoids overwriting files that already exist (creating a new file with a .1 suffix rather than touching the existing one).
There is also a subtlety in the option you used: lowercase -o names wget's log file, not the downloaded document. So wget -o neighborhoods.json writes wget's own messages to neighborhoods.json, and the download is then saved under its remote name, which collides with the log and becomes neighborhoods.json.1. The option that names the output document is capital -O.
Delete the two neighborhoods.json files and start wget again with -O; be sure it finishes without problems, and it will write the single file you asked for. If it is interrupted and you restart it, it will create a new file (appending .1 and so on).
You can pass it the option -c to tell it to continue a broken download if it was interrupted; most of the time it works well (not always, though).
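A sketch of both forms:
$ # -O (capital) writes the downloaded document to the given file
$ wget -O neighborhoods.json https://raw.githubusercontent.com/mongodb/docs-assets/geospatial/neighborhoods.json
$ # -c resumes an interrupted download instead of creating a .1 copy
$ wget -c https://raw.githubusercontent.com/mongodb/docs-assets/geospatial/neighborhoods.json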
My user-data script:
#!
set -e -x
echo `whoami`
su root
yum update -y
touch ~/PLEASE_WORK.txt
which is fed in from the command:
ec2-run-instances ami-05355a6c -n 1 -g mongo-group -k mykey -f myscript.sh -t t1.micro -z us-east-1a
but when I check the file /var/log/cloud-init.log, the output of tail -n 5 is:
[CLOUDINIT] 2013-07-22 16:02:29,566 - cloud-init-cfg[INFO]: cloud-init-cfg ['runcmd']
[CLOUDINIT] 2013-07-22 16:02:29,583 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
[CLOUDINIT] 2013-07-22 16:02:29,686 - cloud-init-cfg[DEBUG]: handling runcmd with freq=None and args=[]
[CLOUDINIT] 2013-07-22 16:02:33,691 - cloud-init-run-module[INFO]: cloud-init-run-module ['once-per-instance', 'user-scripts', 'execute', 'run-parts', '/var/lib/cloud/data/scripts']
[CLOUDINIT] 2013-07-22 16:02:33,699 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
I've also verified that curl http://169.254.169.254/latest/user-data returns my file as intended.
No other errors appear, and my script's output never shows up. How do I get the user-data script to execute correctly on boot?
Actually, cloud-init allows a single shell script as an input (though you may want to use a MIME archive for more complex setups).
The problem with the OP's script is that the first line is incorrect. You should use something like this:
#!/bin/sh
The reason for this is that, while cloud-init uses #! to recognize a user script, the operating system needs a complete shebang line in order to execute the script.
So what's happening in the OP's case is that cloud-init behaves correctly (i.e. it downloads and tries to run the script) but the operating system is unable to actually execute it.
See: Shebang (Unix) on Wikipedia
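For reference, the OP's script with the shebang completed might look like this (a sketch; user-data scripts already run as root, so the su root line can simply be dropped):
#!/bin/sh
set -e -x
whoami
yum update -y
touch /root/PLEASE_WORK.txt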
Cloud-init does not accept plain bash scripts just like that. It's a beast that eats a YAML file defining your instance (packages, SSH keys, and other stuff).
Using MIME, you can also send arbitrary shell scripts, but you have to MIME-encode them.
$ cat my-boothook.txt
#!/bin/sh
echo "Hello World!"
echo "This will run as soon as possible in the boot sequence"
$ cat my-user-script.txt
#!/usr/bin/perl
print "This is a user script (rc.local)\n"
$ cat my-include.txt
# these urls will be read and pulled in as if they were part of user-data
# comments are allowed; the format is one url per line
http://www.ubuntu.com/robots.txt
http://www.w3schools.com/html/lastpage.htm
$ cat my-upstart-job.txt
description "a test upstart job"
start on stopped rc RUNLEVEL=[2345]
console output
task
script
echo "====BEGIN======="
echo "HELLO From an Upstart Job"
echo "=====END========"
end script
$ cat my-cloudconfig.txt
#cloud-config
ssh_import_id: [smoser]
apt_sources:
- source: "ppa:smoser/ppa"
$ ls
my-boothook.txt my-include.txt my-user-script.txt
my-cloudconfig.txt my-upstart-job.txt
$ write-mime-multipart --output=combined-userdata.txt \
my-boothook.txt:text/cloud-boothook \
my-include.txt:text/x-include-url \
my-upstart-job.txt:text/upstart-job \
my-user-script.txt:text/x-shellscript \
my-cloudconfig.txt
$ ls -l combined-userdata.txt
-rw-r--r-- 1 smoser smoser 1782 2010-07-01 16:08 combined-userdata.txt
The combined-userdata.txt file is what you then supply as the user-data.
More info here:
https://help.ubuntu.com/community/CloudInit
Also note that this depends heavily on the image you are using. But you say it really is a cloud-init based image, so this applies. There are other cloud initializers that are not named cloud-init, and for those things could be different.
This is a couple of years old now, but for others' benefit: I had the same issue, and it turned out that cloud-init was running twice, from inside /etc/rc3.d. Deleting these symlinks from that directory allowed the user data to run correctly:
lrwxrwxrwx 1 root root 22 Jun 5 02:49 S-1cloud-config -> ../init.d/cloud-config
lrwxrwxrwx 1 root root 20 Jun 5 02:49 S-1cloud-init -> ../init.d/cloud-init
lrwxrwxrwx 1 root root 26 Jun 5 02:49 S-1cloud-init-local -> ../init.d/cloud-init-local
The problem is that cloud-init does not allow the user script to run again on the next start-up.
First remove the cloud-init artifacts by executing:
rm /var/lib/cloud/instances/*/sem/config_scripts_user
And then your userdata must look like this:
#!/bin/bash
echo "hello!"
Then start your instance. It now works (tested).
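On reasonably recent images the same reset can be done in one step (assuming a cloud-init version that ships this subcommand):
$ sudo cloud-init clean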
Is there a common utility that substitutes one path for another on certain calls like execve and open? Something like LD_PRELOAD, but for file paths in calls.
Example:
We have prog_a, which uses prog_b.
Some days ago prog_b was updated, and now prog_a fails.
The usual solution is this:
$: mv /usr/bin/prog_b /usr/bin/prog_b.new
$: ln -s /usr/bin/prog_b.old /usr/bin/prog_b
$: ./prog_a # now it runs
but sometimes that is an inconvenient and dirty solution. In some situations the correct way would be:
$: util "execve+open+stat:/usr/bin/prog_b=/usr/bin/prog_b.old" ./prog_a
where execve, open, and stat are system calls. What is the name of this utility?
I just wrote a special FILE_PRELOAD utility to solve my problem.
$: FILE_PRELOAD -C "execve+open+stat:/usr/bin/prog_b:/usr/bin/prog_b.old" ./prog_a
It generates C++ code, compiles it, and then LD_PRELOADs the resulting lib.so file before running ./prog_a.
Using it you can hook the following calls:
open,fopen,fopen64
opendir,mkdir,rmdir
execve
unlink,unlinkat
stat,lstat,lstat64,_lxstat,_lxstat64,stat64
_xstat,_xstat64,__fxstatat
freopen,freopen64
Please run docs/tut.sh first (it's a tutorial for the utility).
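The underlying mechanism is plain LD_PRELOAD interposition. For the curious, a hand-rolled sketch of the same idea (hypothetical paths; only open() is hooked here, and execve/stat would need analogous wrappers):
$ cat > hook.c <<'EOF'
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

/* rewrite one path to another before calling the real open() */
static const char *FROM = "/usr/bin/prog_b";
static const char *TO   = "/usr/bin/prog_b.old";

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
    if (strcmp(path, FROM) == 0)
        path = TO;
    if (flags & O_CREAT) {  /* open() only takes a mode argument with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        int mode = va_arg(ap, int);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
EOF
$ gcc -shared -fPIC hook.c -o hook.so -ldl
$ LD_PRELOAD=$PWD/hook.so ./prog_a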
The common solution is the symlink solution, and it isn't dirty. Have a look at Debian or Ubuntu, for example: they have /etc/alternatives for exactly that purpose.
Here is an example listing for the view command on Ubuntu:
user@server$ ls -al /usr/bin/view
lrwxrwxrwx 1 root root 22 Dec 5 2009 /usr/bin/view -> /etc/alternatives/view
user@server$ ls -al /etc/alternatives/view
lrwxrwxrwx 1 root root 18 Dec 5 2009 /etc/alternatives/view -> /usr/bin/vim.basic
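If you want to manage the switch the same way the distribution does, update-alternatives can register both versions under one name (a sketch with the hypothetical paths from the question, assuming the real binaries were first moved aside to prog_b.new and prog_b.old):
$ sudo update-alternatives --install /usr/bin/prog_b prog_b /usr/bin/prog_b.old 10
$ sudo update-alternatives --install /usr/bin/prog_b prog_b /usr/bin/prog_b.new 20
$ sudo update-alternatives --config prog_b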
Normally I would start a command like
longcommand &
I know you can redirect it by doing something like
longcommand > /dev/null;
for instance to get rid of the output or
longcommand > output.log 2>&1
to capture output.
But I sometimes forget, and was wondering if there is a way to capture or redirect after the fact.
longcommand
ctrl-z
bg > /dev/null 2>&1
or something like that so I can continue using the terminal without messages popping up on the terminal.
See Redirecting Output from a Running Process.
First, I run the command cat > foo1 in one session and verify that data from stdin is copied to the file. Then, from another session, I redirect its output.
First, find the PID of the process:
$ ps aux | grep cat
rjc 6760 0.0 0.0 1580 376 pts/5 S+ 15:31 0:00 cat
Now check the file handles it has open:
$ ls -l /proc/6760/fd
total 3
lrwx------ 1 rjc rjc 64 Feb 27 15:32 0 -> /dev/pts/5
l-wx------ 1 rjc rjc 64 Feb 27 15:32 1 -> /tmp/foo1
lrwx------ 1 rjc rjc 64 Feb 27 15:32 2 -> /dev/pts/5
Now run GDB:
$ gdb -p 6760 /bin/cat
GNU gdb 6.4.90-debian
[license stuff snipped]
Attaching to program: /bin/cat, process 6760
[snip other stuff that's not interesting now]
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/foo3", 0600)
$2 = 1
(gdb) q
The program is running. Quit anyway (and detach it)? (y or n) y
Detaching from program: /bin/cat, process 6760
The p command in GDB will print the value of an expression; an expression can be a function to call, and it can be a system call. So I execute a close() system call and pass file handle 1, then I execute a creat() system call to open a new file. The result of the creat() was 1, which means that it replaced the previous file handle. If I wanted to use the same file for stdout and stderr, or if I wanted to replace a file handle with some other number, then I would need to call the dup2() system call to achieve that result.
For this example I chose to use creat() instead of open() because it has fewer parameters. The C macros for the flags are not usable from GDB (it doesn't use C headers), so I would have to read header files to discover them; it's not that hard to do, but it would take more time. Note that 0600 is the octal permission for the owner having read/write access and the group and others having no access. It would also work to use 0 for that parameter and run chmod on the file later on.
After that I verify the result:
ls -l /proc/6760/fd/
total 3
lrwx------ 1 rjc rjc 64 2008-02-27 15:32 0 -> /dev/pts/5
l-wx------ 1 rjc rjc 64 2008-02-27 15:32 1 -> /tmp/foo3 <====
lrwx------ 1 rjc rjc 64 2008-02-27 15:32 2 -> /dev/pts/5
Typing more data into cat results in the file /tmp/foo3 being appended to.
If you want to close the original session you need to close all file handles for it, open a new device that can be the controlling tty, and then call setsid().
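The same trick can also be scripted non-interactively; a sketch using the PID from above that additionally points stderr at the new file via dup2():
$ gdb -p 6760 --batch \
    -ex 'p close(1)' \
    -ex 'p creat("/tmp/foo3", 0600)' \
    -ex 'p dup2(1, 2)'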
You can also do it using reredirect (https://github.com/jerome-pouiller/reredirect/).
The command below redirects the outputs (standard and error) of the process PID to FILE:
reredirect -m FILE PID
The README of reredirect also explains other interesting features: how to restore the original state of the process, how to redirect to another command or to redirect only stdout or stderr.
The tool also provides relink, a script that redirects the outputs back to the current terminal:
relink PID
relink PID | grep useful_content
(reredirect seems to have the same features as Dupx, described in another answer, but it does not depend on GDB.)
Dupx
Dupx is a simple *nix utility to redirect standard output/input/error of an already running process.
Motivation
I've often found myself in a situation where a process I started on a remote system via SSH takes much longer than I had anticipated. I need to break the SSH connection, but if I do so, the process will die when it tries to write something to the stdout/stderr of a broken pipe. I wish I could suspend the process with ^Z and then do a
bg %1 >/tmp/stdout 2>/tmp/stderr
Unfortunately this will not work (in any shell I know of).
http://www.isi.edu/~yuri/dupx/
Screen
If the process is running in a screen session, you can use screen's log command to log the output of that window to a file:
Switch to the window running your command and press C-a H to start logging.
Now you can:
$ tail -f screenlog.2 | grep whatever
From screen's man page:
log [on|off]
Start/stop writing output of the current window to a file "screenlog.n" in the window's default directory, where n is the number of the current window. This filename can be changed with the 'logfile' command. If no parameter is given, the state of logging is toggled. The session log is appended to the previous contents of the file if it already exists. The current contents and the contents of the scrollback history are not included in the session log. Default is 'off'.
I'm sure tmux has something similar as well.
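It does; tmux's counterpart is pipe-pane, run from inside the pane you want to capture (the log file name is arbitrary):
$ tmux pipe-pane -o 'cat >> ~/tmux-output.log'   # start logging this pane
$ tmux pipe-pane                                 # no arguments stops it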
I collected some information on the internet and prepared a script that requires no external tools: see my response here. Hope it's helpful.