I use Terminal.app and iTerm, both of which support running multiple shells simultaneously via multiple tabs and multiple windows. I often use this feature, but as a result, if I want to change an environment variable setting, I generally have to run the same command once in every single tab and window I have open -- as well as any new tabs or windows that I open in the future. Is it possible for my shells to communicate with each other such that I can change an environment variable once, and have that change propagate to all my other currently running shells?
I know that I can statically set env variables in a startup file like .bashrc. I also know that I can have subshells inherit the environment of parent shells, either normally or via screen. Neither of those options addresses this question, which is specifically about dynamically changing the environment of multiple currently-running shells simultaneously.
Ideally, I'd like to accomplish this without writing the contents of these variables to disk at any point. One of the reasons I want to be able to do this is so that I can set sensitive information in an env variable, such as hashed passwords, and refer to them later on in other shells. I would like to be able to set these variables once when I log in, and be able to refer to them in all my shells until I log out, or until the machine is restarted. (This is similar to how ssh-agent works, but as far as I'm aware, ssh-agent will only store SSH keys, not env variables.)
Is it possible to make shells communicate like this?
Right. Since each process has its own copy of the environment variables, you can't magically change them all at once. If you bend your mind enough, though, there are some strange workarounds.
For instance, if you currently have a command you run to update each one, you can automate running that command. Check the bash man page for PROMPT_COMMAND, which can run a command each time the bash prompt is printed. Most shells have something similar.
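A minimal sketch of that idea, assuming you are willing to keep the update commands in a small file (the name ~/.env_updates is just a placeholder, and note that this variant does touch disk, unlike the ideal described in the question): every interactive bash re-reads the file right before printing its prompt, so a change made in one shell is picked up by every other shell the next time it shows a prompt.
# In each shell's ~/.bashrc (sketch; ~/.env_updates is a hypothetical file):
PROMPT_COMMAND='[ -r ~/.env_updates ] && . ~/.env_updates'
# From any one shell, to change a variable everywhere:
echo 'export MY_SETTING=new-value' >> ~/.env_updates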
As far as keeping a hashed password off disk by pulling it from an env var instead of something like ssh-agent... that would be a whole 'nother topic.
Unless you write your own shell, you can't. ssh-agent works by having each SSH client contact it for the keys, but most common shells have no similar mechanism.
We have a startup script for an application (owned and developed by a different team, but deployments are managed by us) which prompts Y/N to confirm starting after deployment. The number of times it prompts varies depending on the changes in the release.
So the number of prompts can be anywhere from 1 to N (possibly 100 or more).
We have automated the deployment and startup using Jenkins shell script jobs, but the number of prompts is currently hardcoded to 20, which may sometimes be too few.
Could anyone please advise how the number of prompts can be handled dynamically? We need to pass Y whenever the output contains the pattern "Do you really want to start".
I have checked a few options like expect and read, but was not able to come up with a solution.
Thanks in advance!
In general, the best way to handle this is (a) to use a standard process management system, such as your distro's preferred init system, or, if that's not possible, (b) to adjust the script to run noninteractively (e.g., with a --yes or --noninteractive option).
Barring that, assuming your script reads from standard input and not the TTY, you can use the standard program yes and pipe it into the command you're running, like so:
$ yes | ./deploy
yes prints y (or its argument) over and over until it's killed, usually by SIGPIPE.
If your process is reading from /dev/tty instead of standard input, and you really can't convince the other team to come to their senses and add an appropriate option, you'll need to use expect for this.
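If it does come to that, a minimal expect sketch might look like the following (./deploy and the prompt text are taken from above; adjust both to match the real script):
# Sketch: answer every "Do you really want to start" prompt with Y,
# even when the script reads from /dev/tty rather than stdin.
expect -c '
  set timeout -1
  spawn ./deploy
  expect {
    -re "Do you really want to start" { send "Y\r"; exp_continue }
    eof
  }
'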
I'm trying to create a very simple shell script that opens and runs several instances of my program. With batch, I would do something like this:
@echo off
python webstore.py 55530
python webstore.py 55531
python webstore.py 55532
exit
That would open three terminals and run the command in each of them with a different command-line parameter. How would I do the same with a shell script that runs on every single Unix-based platform? I've seen some commands for opening terminals, but they are platform-specific (gnome-terminal, xterm and so on).
How would I create the same with a shell script that runs on every single unix-based platform?
What you're asking for is a bit unreasonable. Think about it this way: on Windows, you're always inside its Desktop Window Manager, period, and the only choice you have is between PowerShell and cmd.exe. But on Linux, it's a little more complicated:
Like you said, you can have either rxvt or xterm installed.
That is not the only issue, though: you can also be running any window manager, although that does not matter much here.
You can be using either Xorg or Wayland.
You might not be using any graphical environment at all, e.g. running everything in the Linux console, which, unless you use fancy programs such as fbterm or tmux, is pretty much incapable of multitasking, let alone spawning new windows.
On top of that, you may not even be using the computer physically at all, because you're connecting to it over SSH. No new windows there either (unless you use something like X11 forwarding).
Finally, you can use zsh, bash, sh, fish, etc., which all come with their own idiosyncrasies.
IMO your best bet is to test in your script which programs are installed, or to script around a terminal multiplexer such as tmux and require it to be installed on the target machine (a minimal sketch follows below).
(This will work in either SSH, Linux console, or any other scenario above.)
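A minimal tmux sketch for the three commands from the question (the session name webstore is arbitrary, and tmux is assumed to be installed on the target machine):
# Start a detached session running the first instance, add one window per extra
# instance, then attach so you can switch between them (Ctrl-b n / Ctrl-b p).
tmux new-session -d -s webstore 'python webstore.py 55530'
tmux new-window -t webstore 'python webstore.py 55531'
tmux new-window -t webstore 'python webstore.py 55532'
tmux attach -t webstore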
If you, however, do not care about the output of the commands you're about to run, you can just detach them from your current terminal session like this:
command1 &>/dev/null &
command2 &>/dev/null &
command3 &>/dev/null &
Be mindful that this:
Will run the commands in parallel.
Won't show you any output. If you remove &>/dev/null, the output from each command will be interleaved with the others', which is probably not what you want.
Closing the terminal usually kills its child processes, which, in this case, will kill the command instances running in the background.
The issues mentioned above can be worked around, but I believe a full treatment is a little out of scope for this answer.
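For the last point, a minimal workaround sketch: nohup makes a command ignore the hangup signal it would otherwise receive when its terminal closes (command1 is a placeholder).
nohup command1 >/dev/null 2>&1 &
# Or, for a job that is already running in the background under bash:
disown -h %1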
Personally, I'd go either for tmux or for the detaching solution, depending on whether I need to see the console output.
I've always believed that environment variables live within the shell the current user is logged into. However, recently I've begun working on a shell of my own and learning more about how Linux works under the hood. Now it seems to me that the environment is shell-independent and handled elsewhere (in the kernel?). So my question is: how exactly does it work? Which part of the system is responsible for holding the environment?
Also, Bash, for instance, makes a distinction between export-ed and unexported variables, the latter of which are not inherited by a subshell. Does that mean that each process in the system has its own set of shell variables?
Yes, each process has its own copy of the environment.
You can see it with:
cat /proc/<pid>/environ
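The entries in that file are NUL-separated, so a quick way to print them one per line (here for the current shell, using its PID $$) is:
tr '\0' '\n' < /proc/$$/environ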
I understand that /bin/sh is a shell that executes commands I have typed.
But the thing is that although I don't type /bin/sh, I can type any command I want.
I heard that when a cracker wants to attack someone, he or she usually wants to get /bin/sh. In particular, I have heard /bin/sh mentioned in conjunction with buffer overflows and remote shells, and that crackers can exploit programs using malicious code executed by /bin/sh, such as exec("/bin/sh", ~something, something);.
I am curious about why he or she tries to "get" /bin/sh or execute it.
I am also not sure about the difference between just typing a command and typing the same command after executing /bin/sh, as seen in this terminal interaction:
johndoe#localhost $ pwd
/home/johndoe
johndoe#localhost $ /bin/sh
sh-3.2 $ pwd
/home/johndoe
sh-3.2 $ whoami
johndoe
sh-3.2 $
Although I do not execute /bin/sh, I can still type any command I want to type. So why and when would crackers want to use /bin/sh?
When you login, you are given a shell. It may be /bin/bash, or /bin/sh, or (sadly) /bin/csh, so you can type commands into that shell. If you invoke /bin/sh, you are not gaining much: you're just getting a new shell. The reason a cracker wants to execute /bin/sh is that they may not be in a shell initially. They may be running some program that is supposed to limit their ability to invoke commands, so getting /bin/sh is a huge gain. Frankly, the cracker doesn't care if they get /bin/sh or /bin/bash or even /bin/csh: the point is to get a root level shell so they can execute arbitrary commands. If they are able to make a setuid program spawn a shell, they gain root on the box. (That is, if they run a command like eject that is running as root when they trick it into spawning a shell, the shell they get has root privileges.)
First, about the difference between executing a command on your shell, and executing it after invoking /bin/sh (essentially no significant difference, but I'll elaborate):
When you open up a terminal on your local machine, you see a window and a prompt. The window is a terminal program, and inside it there is already a shell running. From the shell interaction you pasted in your question, it looks like your default shell is /bin/bash.
Simplistically speaking, whenever you type a command into the shell it executes it using a combination of fork and exec. So, when you type /bin/sh, your shell simply executes it the same way. i.e. one shell executes another shell. Inside that shell, you execute more commands. Nothing particularly different. It is another instance of the shell doing the same thing the previous instance was doing.
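A quick way to convince yourself of that, as a sketch you can run in your own terminal:
echo $$     # PID of your current shell
/bin/sh     # your shell fork()s and exec()s /bin/sh ...
echo $$     # ... which is simply another process, with its own PID
exit        # back to the original shell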
A shell isn't particularly special to you when you are already logged into a computer and sitting at it, typing away. It is just another program, after all. But it is a program that can conveniently execute other programs, and this is why you are using it. That same property makes it interesting to crackers, because they want to conveniently execute programs on other people's computers. But we'll come to that in a bit. Just remember this, though: a shell is a program that can conveniently execute other programs.
Now on to why crackers are interested in getting shells:
A shell is not the only program that can call exec (start executing another program). Any program can do it. A shell is, of course, the most convenient way to do so. Unfortunately for a would-be cracker, computers don't offer a shell unless they have physical access to it. The only interface they have to a computer is through public services run by that computer. e.g. a web server serving pages does indeed take input from external computers and produces output for them. In the process, the web server reads files on the server, does a bunch of other stuff, and then sends some bytes over the wire. It doesn't exec anything (or even if it does, there is no way for the attacker to directly control what it execs). i.e. you don't know what Google's web server does internally when you see their web page. You just send a query, and see the result in your browser. But if a cracker somehow tricks a web server to exec a shell program (say /bin/sh, or any of its relatives), and pass input to it, then the attacker can run any program they subsequently want on that server. And if that publicly exposed service is running as root: even better. This is what an attacker is interested in doing. Because it is a way to move towards convenient control of a system.
When you type /bin/sh, all you are doing is starting another shell. If you are just executing commands, it makes no difference.
I'd like to create an auto-testing/grading script for students on a Linux system such that:
Any student user can initiate the script at any time.
A separate script (with root privileges) copies student code to a non-student-accessible file space, using non-student-accessible unit tests, etc.
The user receives limited feedback in the form of a text file generated by the grading script.
In short, I'm looking to create something similar to programming contest submission systems, but allowing richer feedback without revealing all teacher unit testing.
I would imagine that a spooling behavior between one initiating script and one root-permission cron script might be in order. Are there any models/examples of how one might best structure communication between a user-initiated script and a separate root-initiated script for such purposes?
There are many options.
The things I would mention first:
Don't use su; use sudo. There are several reasons for this, the main one being that su requires the password of the user you want to become, while sudo does not;
Scripts can't be setuid; you must use binaries, or just a normal script that is started via sudo (of course, the students must have a sudoers entry that allows them to run it);
Cron may not be as fast as you need, since it runs tasks at most once a minute; consider using inotify instead (see the sketch after this list);
To communicate between the components of your system you need something that reacts in real time; there are many open-source components/libraries/frameworks that could help, but I would recommend taking a look at ZeroMQ and Redis;
Results of the script executions/tests can be written either to the filesystem (which I think would be better) or to a DBMS.
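As a sketch of the inotify suggestion (assuming the inotify-tools package is installed; the paths and the run-tests.sh grader script are hypothetical), a root-owned watcher could look like this:
# Run as root: grade each new submission dropped into a student-writable directory.
inotifywait -m -e close_write --format '%f' /srv/submissions |
while read -r file; do
    /opt/grader/run-tests.sh "/srv/submissions/$file" \
        > "/srv/feedback/$file.txt"   # limited feedback returned to the student
done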
If you want to stick to shell scripting, the method I suggest for communicating between processes would be to have the root script continually check a named pipe for input (i.e. keep opening it after each eof) and send each input through whatever various tests must be done. Have part of the input be a 'return address' - where to send the result.
This should allow the tests to be performed in a privileged space without exposing any control over the privileged space to the students. The students don't need sudo, and you don't need to pull in libraries. Just have the students pipe their code into a non-privileged script that adds the return address and whatever other markup you may need, which then gives it to the named pipe.
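A minimal sketch of that loop, where the fifo path, the request line format, and the run-tests.sh grader are all hypothetical:
# Root side: create the queue once, then keep re-opening it after each EOF.
mkfifo -m 622 /var/grader/queue 2>/dev/null
while true; do
    while read -r reply_to code; do
        # each request line: <return address> <path to the copied student code>
        ./run-tests.sh "$code" > "$reply_to"
    done < /var/grader/queue
done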