I'm trying to create a very simple shell script that opens and runs three instances of my program. With batch, I would do something like this:
@echo off
start python webstore.py 55530
start python webstore.py 55531
start python webstore.py 55532
exit
That would open three terminals and run the command in each of them with a different command-line parameter. How would I create the same thing with a shell script that runs on every Unix-based platform? I've seen some commands for opening terminals, but they are platform-specific (gnome-terminal, xterm and so on).
How would I create the same thing with a shell script that runs on every Unix-based platform?
What you're asking for is a bit unreasonable. Think about it this way: on Windows you're always inside its Desktop Window Manager, period, and the only choice you have is between PowerShell and cmd.exe. But on Linux it's a little more complicated:
Like you said, you can have either rxvt or xterm installed.
That's not the only issue, though: you could be running any window manager (which does not matter much here).
Continuing on, you could be using either Xorg or Wayland.
You might not be using any graphical environment at all, e.g. running everything in the Linux console, which, unless you use fancy programs such as fbterm or tmux, is pretty much incapable of multitasking, let alone spawning new windows.
You may not even be using this computer physically at all, because you're connecting to it over SSH. No remote windows there either (unless you use something like X11 forwarding).
Finally, you could be using zsh, bash, sh, fish, etc., which all come with their own idiosyncrasies.
IMO your best bet is to test in your script which programs are installed, or to script around a terminal multiplexer such as tmux and require it to be installed on the target machine.
(This will work over SSH, in the Linux console, or in any of the other scenarios above.)
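If you go the tmux route, a minimal sketch (assuming tmux is installed; the session name is arbitrary) could look like this:
#!/bin/sh
# Start a detached tmux session and run one instance per window.
tmux new-session -d -s webstore 'python webstore.py 55530'
tmux new-window -t webstore 'python webstore.py 55531'
tmux new-window -t webstore 'python webstore.py 55532'
# Attach later to look at the output with: tmux attach -t webstore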
If you, however, do not care about the output of the commands you're about to run, you can just detach them from your current terminal session like this:
command1 &>/dev/null &
command2 &>/dev/null &
command3 &>/dev/null &
Be mindful that this:
Will run the commands in parallel.
Won't show you any output. If you remove &>/dev/null, the output from each command will be interleaved with the others, which is probably not what you want.
Closing the terminal usually kills its child processes, which, in this case, will kill the command instances working in the background.
The issues mentioned above can be worked around, but I believe that is a little out of scope for this answer.
Personally, I'd go either for tmux or for the detaching solution, depending on whether I need to see the console output.
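For completeness, one common way to work around the last point is nohup (a sketch; the log file names are just examples):
# nohup keeps each instance alive after the terminal closes and lets you
# capture its output in a file instead of /dev/null.
nohup python webstore.py 55530 > webstore_55530.log 2>&1 &
nohup python webstore.py 55531 > webstore_55531.log 2>&1 &
nohup python webstore.py 55532 > webstore_55532.log 2>&1 &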
Related
I'm still pretty confused about the role of the Linux shell in running programs, despite using Linux a lot.
I understand there are two types of shells, interactive shells and non-interactive shells. A terminal session interacts with an interactive shell, and scripts run in a non-interactive shell. But is there really any difference other than the ability to read input and print output? If I invoke a script from a shell, does it run in that interactive shell or in a new non-interactive shell inside it?
Also, when I execute a binary, either by invoking it through an interactive shell or through a graphical interface, does it always run in a shell, or can a process run without a shell at all? It's said that all processes communicate with the kernel through the shell, but I'm confused because in Docker you can define the entrypoint to be either a binary or "sh -c binary".
The shell is just one possible interface. Every Linux system has a notion of a "first" process (usually called init) that is started directly by the kernel. Every other program on your computer is started by another process that first forks itself, then calls exec (actually, one of about six functions in the same family) to replace itself with a different program.
The shell is simply the interface that parses text into requests to run other programs. The shell command line mv foo bar is parsed as a request to fork the shell and call exec in the new copy with the three words mv, foo, and bar as the argument list.
Consider the following snippet of Python:
subprocess.call(["mv", "foo", "bar"])
which basically does the same thing: the Python program forks itself and calls exec with the three given strings as arguments. There is no shell involvement.
The shell is just a convenient UI that lets you run other processes the way you want to. It can also run scripts to do the same. That's all it does. It's not responsible for doing anything for the processes once it runs them.
You could entirely replace it with python, which lets you do the same things, but that's annoying because you have to type chepner's subprocess.call(["mv", "foo", "bar"]) just to run the mv program. If you wanted to pipe one program to another, you'd need 5-10 such lines. Not much fun to write interactively.
You could entirely replace it with KDE/Gnome/whatever and double click programs to run them, but that's not very flexible since you can't include arguments and such, and you can't automate it.
I understand there are two types of shells, interactive shells and non-interactive shells. A terminal session interacts with an interactive shell, and scripts run in a non-interactive shell. But is there really any difference other than the ability to read input and print output?
It's just two different modes that you can run sh with. You want comfy keyboard shortcuts, aliases and options to help type things manually (interactively), but they're pointless or annoying when running a pre-written script.
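You can see the difference for yourself with a small sketch that checks the shell's option flags (standard bash/sh behaviour):
# "$-" lists the shell's current option flags; it contains "i" only when the
# shell is interactive. Paste this at a prompt, then run it from a script,
# and you'll get the two different answers.
case "$-" in
  *i*) echo "interactive shell" ;;
  *)   echo "non-interactive shell" ;;
esac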
If I invoke a script from a shell, does it run in that interactive shell or in a new non-interactive shell inside it?
It runs in a new, independent process. You can run it in the same interactive shell instance with source yourscript, which is basically the same as typing the script contents on the keyboard.
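A quick way to see this (a sketch; yourscript here is any file containing just the line FOO=bar):
sh ./yourscript; echo "${FOO:-unset}"      # prints "unset": the script ran in a separate process
source ./yourscript; echo "${FOO:-unset}"  # prints "bar": it ran inside the current shell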
Also, when I execute a binary, either by invoking it through an interactive shell or a graphical interface, does it always run in the shell, or could a process run without a shell at all?
The process always runs entirely independently of the shell, but may share the same terminal.
It's said that all processes communicate with the kernel through the shell,
Processes never talk to the kernel through the shell. They talk through syscalls.
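If you want to watch this happen, strace can show the process-related syscalls a shell makes when it runs mv (a sketch; it assumes a file named foo exists, but the trace is informative either way):
# -f follows the forked child; trace=process limits output to fork/clone,
# execve, wait and exit. Nothing "shell-like" ever crosses into the kernel.
strace -f -e trace=process sh -c 'mv foo bar'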
but I'm confused because in Docker, you can define the entrypoint to be either a binary or "sh -c binary".
For a simple binary, the two are identical.
If you want to e.g. set up pipes or redirections because the process doesn't do it on its own, you can use sh -c to have a shell do it instead.
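For instance (a sketch; mybinary and the log path are placeholders), the shell form lets the entrypoint include a redirection the binary would not set up on its own:
# Exec form: the binary is started directly, no shell anywhere.
#   ENTRYPOINT ["mybinary"]
# Shell form: a shell sets up the redirection, then runs the binary.
#   ENTRYPOINT ["sh", "-c", "mybinary >> /var/log/mybinary.log 2>&1"]
sh -c 'mybinary >> /var/log/mybinary.log 2>&1'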
Is there a way to capture the commands executed by GUI programs?
Or even by simple bash scripts?
Something like the "history" command from bash, but available for the whole system.
A shell (e.g. bash) has a -x option that shows all the commands executed by a particular script. Run sh -x your_script and look at the output. You can also temporarily turn this logging on and off inside a script by issuing set -x and set +x.
Regarding GUI programs, the answer depends on what kind of activity you'd like to log. You can use strace as suggested in the comments and filter out the exec* calls, but you probably have something else in mind, since most activities of a GUI program are performed without executing external programs.
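As a sketch of the strace approach (some-gui-program is a placeholder):
# -f follows child processes, -e trace=execve logs only program launches,
# -o writes the trace to a file instead of the terminal.
strace -f -e trace=execve -o launched.log some-gui-program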
I have a program that takes standard input from the user and runs through the command line. Is there some way to make a program ignore pipes and redirects?
For example: python program.py < input.txt > output.txt would just act as if you put in python program.py
There is no simple way to find the terminal the user launched you with in the general case. There are some techniques you can use, but they will not always work.
You can use os.isatty() to detect whether a file (such as sys.stdin or sys.stdout) appears to be an interactive terminal session. It is possible you are hooked up to a terminal session other than the one the user used to launch your program, so this is not foolproof. Such a terminal session might even be under the control of a program rather than a human.
Under Unix, processes have a notion of a "controlling terminal." You may be able to talk to that via os.ctermid(). But the user can manipulate this value before launching your process. You also may not have a controlling terminal at all, e.g. if running as a daemon.
You can inspect the parent process and see if any of its file descriptors are hooked up to terminal sessions. Unfortunately, I'm not aware of any cross-platform way to do that. On Linux, I'd start with os.getppid() and the /proc filesystem (see proc(5)). If the parent process has exited (e.g. the user ran your_program.py & disown; exit under bash), this will not work. But in that case, there isn't much you can do anyway.
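As a rough, Linux-only sketch of that last idea (written in shell for illustration, and assuming the parent process is still alive), you can list the parent's file descriptors and see which ones point at a terminal:
# /proc/<pid>/fd holds symlinks to whatever the process has open;
# entries pointing at /dev/pts/* or /dev/tty* are terminal sessions.
for fd in /proc/$PPID/fd/*; do
  target=$(readlink "$fd")
  case "$target" in
    /dev/pts/*|/dev/tty*) echo "parent fd ${fd##*/} -> $target" ;;
  esac
done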
Currently I am working with an embedded system that runs Linux. I need to run multiple applications at the same time, and I would like them to be able to run through one script. A colleague has already implemented this by using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But the problem arises when exiting the applications. Normally, the applications on the embedded system require the user to press q to exit. But rather than doing that when it gets a kill or user signal, the wrapper script just kills the process. This is dangerous because it assumes the application has the proper facilities to deal with the kill signal (which is not always the case, and leads to memory leaks and unwanted socket connections). I have looked into automation programs such as Expect, but since I am using an embedded board, I am unable to get Expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to them?
I would also like the capability to maintain logs of each program's output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go about it in this situation. There is a serious limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs:
Pros:
* Precise control over the embedded applications
* Can make the processes interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
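Then, whenever you want to tell the program to quit, write its quit keystroke into the pipe from anywhere (a sketch; it assumes the program expects q followed by a newline):
printf 'q\n' > command1.ctrl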
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is the input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
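To tie it together (a sketch; the configuration file name myapps.screenrc is arbitrary), you can start the whole session detached and only attach when you want to look at the programs:
# Start every window from the configuration file, detached from the terminal.
screen -dm -c myapps.screenrc
# Attach later to watch the running programs.
screen -r mycommands
# Send the quit keystroke to window 1, as in the example above, when finished.
screen -S mycommands -p 1 -X stuff 'q
'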
I use Terminal.app and iTerm, both of which support running multiple shells simultaneously via multiple tabs and multiple windows. I often use this feature, but as a result, if I want to change an environment variable setting, I generally have to run the same command once in every single tab and window I have open -- as well as any new tabs or windows that I open in the future. Is it possible for my shells to communicate with each other such that I can change an environment variable once, and have that change propagate to all my other currently running shells?
I know that I can statically set env variables in a startup file like .bashrc. I also know that I can have subshells inherit the environment of parent shells, either normally or via screen. Neither of those options address this question. This question is specifically about dynamically changing the environment of multiple currently-running shells simultaneously.
Ideally, I'd like to accomplish this without writing the contents of these variables to disk at any point. One of the reasons I want to be able to do this is so that I can set sensitive information in an env variable, such as hashed passwords, and refer to them later on in other shells. I would like to be able to set these variables once when I log in, and be able to refer to them in all my shells until I log out, or until the machine is restarted. (This is similar to how ssh-agent works, but as far as I'm aware, ssh-agent will only store SSH keys, not env variables.)
Is it possible to make shells communicate like this?
Right. Since each process has its own copy of the environment variables, you can't magically change them all at once. If you bend your mind enough, though, there are strange workarounds.
For instance, if you currently have a command you run to update each one, you can automate running that command. Check the bash man page for PROMPT_COMMAND, which can run a command each time the bash prompt is printed. Most shells have something similar.
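A sketch of that idea for each shell's ~/.bashrc (refresh_env and my-secret-agent are hypothetical stand-ins for whatever command you already use to fetch the current value, ideally from an agent-like process so nothing touches disk):
# Re-export the variable just before every prompt is printed.
# my-secret-agent is a placeholder, not a real program.
refresh_env() {
  export MY_SECRET="$(my-secret-agent get my-secret)"
}
PROMPT_COMMAND=refresh_env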
As far as not putting a hashed password on disk because you are pulling it from an envvar instead of something like ssh-agent...that would be a whole 'nother topic.
Unless you write your own shell, you can't. ssh-agent works by having each SSH client contact it for the keys, but most common shells have no similar mechanism.