Is there a way to capture the commands executed by GUI programs?
Or even by simple bash scripts?
Something like bash's "history" command, but available system-wide.
A shell (e.g. bash) has a -x option that makes it print every command a script executes. Run sh -x your_script and watch the output. You can also turn this tracing on and off temporarily inside a script with set -x and set +x.
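For example (a minimal sketch; the exact trace format varies slightly between shells, and hello.sh is just a stand-in for your script):

$ cat hello.sh
#!/bin/sh
name=world
echo "hello $name"
$ sh -x hello.sh
+ name=world
+ echo hello world
hello world

Every line prefixed with + is a command the script actually ran.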
Regarding GUI programs, the answer depends on your needs, i.e. what kind of activity you'd like to log. You can use strace as suggested in the comments and filter out the exec* calls. But you probably have something else in mind, since most of a GUI program's activity happens without executing external programs.
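For example, to log every program a GUI application launches (a sketch; some-gui-program stands in for the actual command):

# -f follows child processes, -e trace=execve keeps only exec calls,
# -o writes the trace to a file instead of stderr
strace -f -e trace=execve -o exec.log some-gui-program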
I'm still pretty confused about the role of the Linux shell in running programs, despite using Linux a lot.
I understand there are two types of shells, interactive and non-interactive. A terminal session interacts with an interactive shell, and scripts run in a non-interactive shell. But is there really any difference other than the ability to read input and print output? If I invoke a script from a shell, does it run in this interactive shell or in a new non-interactive shell inside it?
Also, when I execute a binary, either by invoking it through an interactive shell or through a graphical interface, does it always run in a shell, or can a process run without a shell at all? It's said that all processes communicate with the kernel through the shell, but I'm confused because in Docker, you can define the entrypoint to be either a binary or "sh -c binary".
Every Linux system has a notion of a "first" process (usually called init) that is started directly by the kernel. Every other program on your computer is started by another process that first forks itself, then calls exec (actually, one of about six functions in the same family) to replace itself with a different program.
The shell is just one possible interface, one that parses text into requests to run other programs. The shell command line mv foo bar is parsed as a request to fork the shell and call exec in the new copy with the three words mv, foo, and bar as arguments.
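You can actually watch this happen with strace (a sketch; the exact syscall names, e.g. clone vs. fork, vary by platform):

# trace only process-management syscalls while the shell runs mv;
# the fork shows up as clone()/fork() and the exec as execve()
strace -f -e trace=process sh -c 'mv foo bar'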
Consider the following snippet of Python:
subprocess.call(["mv", "foo", "bar"])
which basically does the same thing: the Python program forks itself and calls exec with the three given strings as arguments. There is no shell involvement.
The shell is just a convenient UI that lets you run other processes the way you want to. It can also run scripts to do the same. That's all it does. It's not responsible for doing anything for the processes once it runs them.
You could replace it entirely with python, which lets you do the same things, but that's annoying because you'd have to type chepner's subprocess.call(["mv", "foo", "bar"]) just to run the mv program. If you wanted to pipe one program into another, you'd need 5-10 such lines. Not much fun to write interactively.
You could replace it entirely with KDE/GNOME/whatever and double-click programs to run them, but that's not very flexible since you can't include arguments and such, and you can't automate it.
I understand there are two types of shells, interactive and non-interactive. A terminal session interacts with an interactive shell, and scripts run in a non-interactive shell. But is there really any difference other than the ability to read input and print output?
They're just two different modes you can run sh in. You want comfy keyboard shortcuts, aliases and options to help you type things manually (interactively), but those are pointless or annoying when running a pre-written script.
If I invoke a script from a shell, does it run in this interactive shell or in a new non-interactive shell inside it?
It runs in a new, independent process. You can run it in the same interactive shell instance with source yourscript, which is essentially the same as typing the script's contents at the keyboard.
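A quick way to see the difference (setvar.sh is a made-up example script):

$ cat setvar.sh
greeting=hello
$ sh setvar.sh        # new process: the variable dies with it
$ echo "$greeting"

$ . setvar.sh         # sourced: read by the current shell itself
$ echo "$greeting"
hello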
Also, when I execute a binary, either by invoking it through an interactive shell or through a graphical interface, does it always run in a shell, or can a process run without a shell at all?
The process always runs entirely independently of the shell, but may share the same terminal.
It's said that all processes communicate with the kernel through the shell,
Processes never talk to the kernel through the shell. They talk through syscalls.
but I'm confused because in Docker, you can define the entrypoint to be either a binary or "sh -c binary".
For a simple binary, the two are identical.
If you want to e.g. set up pipes or redirections because the process doesn't do it on its own, you can use sh -c to have a shell do it instead.
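For example (a sketch; the binary name and log path are placeholders):

# exec form: the binary itself is the container's first process, no shell involved
ENTRYPOINT ["/usr/local/bin/mybinary"]

# a shell sets up a redirection that the binary doesn't do on its own
ENTRYPOINT ["sh", "-c", "/usr/local/bin/mybinary >>/var/log/mybinary.log 2>&1"]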
I'm trying to create a very simple shell script that opens and runs several instances of my program. With batch, I would do something like this:
@echo off
start python webstore.py 55530
start python webstore.py 55531
start python webstore.py 55532
exit
That would open three terminals and run the command in each of them with a different command-line parameter. How would I create the same thing with a shell script that runs on every Unix-based platform? I've seen some commands for opening terminals, but they are platform-specific (gnome-terminal, xterm and so on).
How would I create the same thing with a shell script that runs on every Unix-based platform?
What you're asking for is a bit unreasonable. Think about it this way: on Windows you're always inside its Desktop Window Manager, period, and the only choice you have is between PowerShell and cmd.exe. But on Linux it's a little more complicated:
Like you said, you can have rxvt, xterm, gnome-terminal or any other terminal emulator installed.
That is not the only issue, though: you can be running any window manager. While that does not matter much here,
you can be using either Xorg or Wayland. Continuing on,
you might not be using any graphical environment at all, e.g. you run everything in the Linux console! Which, unless you use fancy programs such as fbterm or tmux, is pretty much incapable of multitasking, let alone spawning new windows.
That being said, you might not even be at this computer physically at all, because you're connecting to it over SSH. No remote windows here either (unless you use things such as X11 forwarding).
Finally, you can use zsh, bash, sh, fish, etc., which all come with their own idiosyncrasies.
IMO your best bet is to test in your script which programs are installed, or to script around a terminal multiplexer such as tmux and require it to be installed on the target machine.
(This will work over SSH, in the Linux console, or in any other scenario above.)
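For instance, a sketch of such a script for this question, assuming tmux is installed (webstore.py and the ports are taken from the question):

#!/bin/sh
# one tmux session, one window per instance
tmux new-session -d -s webstore 'python webstore.py 55530'
tmux new-window -t webstore 'python webstore.py 55531'
tmux new-window -t webstore 'python webstore.py 55532'
tmux attach -t webstore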
If, however, you do not care about the output of the commands you're about to run, you can just detach them from your current terminal session like this (using the portable redirection; bash also accepts the shorthand &>/dev/null):
command1 >/dev/null 2>&1 &
command2 >/dev/null 2>&1 &
command3 >/dev/null 2>&1 &
Be mindful that this:
Will run the commands in parallel.
Won't show you any output. If you remove the >/dev/null 2>&1, the output from the commands will interleave, which is probably not what you want.
Closing the terminal usually kills its child processes, which in this case means killing your command instances running in the background.
The issues mentioned above can be worked around, but I believe that is a little out of scope for this answer.
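For the curious: the usual workaround for the last point is nohup (or setsid), which detaches a command from the terminal. A minimal sketch that also recovers the output as a log file:

nohup command1 >command1.log 2>&1 &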
Personally, I'd go either for tmux or for the detaching solution, depending on whether I need to see the console output.
Basically I am writing a report to convince the audience that the following Linux commands
$ a.sh &
$ b.sh &
$ c.sh &
are all started at almost the same time. I couldn't find a good explanation or a reliable source to convince the audience. Are there any books or articles that specifically discuss this? Thanks.
From the Bash docs
If a command is terminated by the control operator ‘&’, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background. The shell does not wait for the command to finish, and the return status is 0 (true). When job control is not active (see Job Control), the standard input for asynchronous commands, in the absence of any explicit redirections, is redirected from /dev/null.
For more details, take a look at the Bash tutorial.
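If you want to demonstrate it rather than just cite it, you can have each script print a high-resolution timestamp as its first action (a sketch; %N requires GNU date):

#!/bin/sh
# first line of a.sh, b.sh and c.sh alike
date +"%T.%N  $0 started"

Running a.sh & b.sh & c.sh & then prints three timestamps that are typically only a few milliseconds apart, because the shell forks each script without waiting for the previous one.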
I'm using expect to execute a bunch of commands on a remote machine, and I'm calling the expect script from a shell script.
I don't want the expect script to log the sent commands to stdout, but I do want it to log the output of those commands, so my shell script can act on the results.
log_user 0
Hides both the commands and the results, so it doesn't fit my needs. How can I tell expect to log only the results?
Hmm... I'm not sure you can do that, since the reason you see the commands you send is that the remote device echoes them back to you. This is standard procedure, done so that users see what they type when interacting with the device.
What I'm trying to say is that both the device's responses to issued commands and the echoed-back commands are part of the spawned process's stdout, so I don't believe you can separate one from the other.
Now that I think of it, you can configure a terminal not to display echoed input (stty -echo does this on the remote side), but I'm not sure how you would go about that with a spawned process that is not using an interactive terminal.
Let us know if you find a way, I'd be interested of knowing if there is one.
Currently I am working with an embedded system that runs Linux. I need to run multiple applications at the same time, and I would like them to be runnable through one script. A colleague has already implemented this using a wrapper script and return codes.
wrapperScript.sh "$command" > output_log.txt &
wrapperScript.sh "$command2" > output_log2.txt &
But the problem arises when exiting the applications. Normally, the applications on the embedded system require the user to press q to exit. Rather than doing that when it gets a kill or user signal, the wrapper script just kills the process. This is dangerous, because it assumes the application has the proper facilities to deal with the kill signal (that is not always the case and leads to memory leaks and unwanted socket connections). I have looked into automation programs such as expect, but since I am using an embedded board, I am unable to get expect for it. Is there a way, in the bash shell or in embedded C, to manage multiple processes and have one single program automatically send the q keystroke to them?
I would also like to be able to keep logs of the programs' output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go in this situation. There is a serious limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs.
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
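To send the q keystroke later, write it to the pipe (note that opening the pipe for reading blocks until a writer shows up, so the command only starts once something opens command1.ctrl for writing; whether a trailing newline is wanted depends on the application):

printf q >command1.ctrl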
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows of a single Screen instance (you'll save a little memory that way). You can specify the commands to run in a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is the input to send to the program; note the newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just use stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
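A sketch of Screen's built-in logging, added to the same configuration file (%n is Screen's escape for the window number; screenlog.%n is in fact its default log name):

logfile screenlog.%n
deflog on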