Storing text output in variables (bash) - linux

As for bash, is it a bad practice to store text output in variables? I don't mean a few lines, but as much as a few MB of data. Should the variables be emptied after the script is done?
Edit: I didn't make the second part clear enough. I wanted to ask whether I should empty the variables in the case where I run scripts in the current shell, not in a subshell, so that they don't drain memory. Or shouldn't I run scripts in the current shell at all?

Should the variables be emptied after the script is done?
You need to understand that a script is executed in a subshell (a child of the current shell) that gets its own environment and variable space. When the script ends, that subshell exits, and all the variables it held are destroyed/released anyway, so there is no need to empty variables programmatically.
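A minimal sketch that demonstrates this, using a throwaway script at a hypothetical /tmp/demo.sh:
cat > /tmp/demo.sh <<'EOF'
big=$(head -c 1048576 /dev/zero | tr '\0' 'x')   # roughly 1 MB of text in a variable
echo "inside the script: ${#big} bytes held in \$big"
EOF
bash /tmp/demo.sh
echo "back in the parent: big='${big}'"   # empty: the subshell's memory is already gone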

As for bash, is it a bad practice to store text output in variables?
That's great practice! Carry on with bash programming, and don't worry about this kind of memory issue (unless you want to store a Debian DVD image in a single $debian_iso variable; then you can have a problem).
I don't mean a few lines, but as much as a few MB of data. Should the variables be emptied after the script is done?
All your variables in the bash shell evaporate when you finish executing your script; the shell manages that memory for you. That said, if you assign foo="bar" you can access $foo later in the same script, but obviously you won't see that $foo in another script.
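The one exception is the case from the question's edit: a script run in the current shell (with source or .) keeps its variables around afterwards, and there an explicit unset on a large variable is reasonable. A minimal sketch, using /var/log/dpkg.log purely as a stand-in for some multi-MB text source:
log_text=$(cat /var/log/dpkg.log)   # could be several MB, now held by the current shell
printf '%s\n' "$log_text" | wc -l   # use it as needed
unset log_text                      # release it; no subshell exit will do this for you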

Related

Is there any drawback to using functions instead of aliases?

Bash functions are more versatile than aliases. For example, they accept parameters.
Is there any drawback to going full function style and completely dropping aliases, even for simple cases? I can imagine that functions might be more resource-intensive, but I have no data to back that up.
Any other reason to keep some of my aliases? They have simpler syntax and are easier for humans to read, but apart from that?
Note: aliases take precedence over functions.
The following link may be relevant regarding function overhead; it suggests there is no overhead compared to an alias: 3.6. Functions, Aliases, and the Environment
Quoting Dan again: "Shell functions are about as efficient as they can be. It is the approximate equivalent of sourcing a bash/bourne shell script save that no file I/O need be done as the function is already in memory. The shell functions are typically loaded from [.bashrc or .bash_profile] depending on whether you want them only in the initial shell or in subshells as well. Contrast this with running a shell script: Your shell forks, the child does an exec, potentially the path is searched, the kernel opens the file and examines enough bytes to determine how to run the file, in the case of a shell script a shell must be started with the name of the script as its argument, the shell then opens the file, reads it and executes the statements. Compared to a shell function, everything other than executing the statements can be considered unnecessary overhead."
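To make the versatility difference concrete, here is the same shortcut written both ways (ll is just an illustrative name; per the note above, define one or the other in a given shell, since an existing alias takes precedence):
alias ll='ls -l'       # plain text substitution; arguments can only be appended at the end
ll() { ls -l "$@"; }   # a function receives real parameters via "$@" and can validate or reorder them
ll /tmp                # either form accepts trailing arguments like this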

Where does the Linux environment live?

I've always believed that environment variables live within the shell the current user is logged into. However, recently I've begun working on a shell of my own and learning more about how Linux works under the hood. Now it seems to me that the environment is shell-independent and handled elsewhere (in the kernel?). So my question is: how exactly does it work? Which part of the system is responsible for holding the environment?
Also, Bash for instance makes the distinction between exported and unexported variables, the latter of which are not inherited by a subshell. Does that mean that each process in the system has its own set of shell variables?
Yes, each process has its own copy of the environment.
You can see them with:
cat /proc/<pid>/environ
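A short sketch of both points, runnable in any bash shell:
FOO=unexported        # a plain shell variable; private to this shell process
export BAR=exported   # marked for copying into every child's environment
bash -c 'echo "FOO=$FOO BAR=$BAR"'   # prints: FOO= BAR=exported

# entries in /proc/<pid>/environ are NUL-separated and show the
# environment the process was *started* with; tr makes them readable
tr '\0' '\n' < /proc/$$/environ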

Safe way to call sh script recursively

So I found out the hard way that doing this is really bad in Linux:
# a.sh
while true
do
some stuff
sh a.sh
done
I want to be able to update the script and have it fix itself. Is something like this considered safe instead?
# a.sh
while true
do
wget http://127.0.0.1/b.sh
sh b.sh
done
# b.sh
some stuff
This way I can update script b.sh, and the next execution will be forced to use the updated version, since a.sh fetches it each time?
If you want a process to alter its source code and re-launch itself, and you don't want multiple (slightly different) copies of the process running at the same time, then you somehow need to kill the parent process. It doesn't matter much how you do this; for instance, if the "rewrite self" section of the code is only triggered when a rewrite is actually necessary, then you could just put "exit" after the line that calls the rewritten script. (I suspect that something like mv -f b.sh a.sh && sh a.sh && exit might work, too, since I think the entire line would be sent to the Bash interpreter before the script is destroyed.)
If you do want multiple copies of the process running, then you need to find some way to limit this number. For instance, you might have the script check how many iterations of itself are already running before forking.
Note that in both cases, this assumes that your script has accurately modified itself, which...is a tricky problem, to say the least. You are treading on dangerous ground.
Finally, if all you really want is a daemon that gets automatically relaunched when it dies (which is what it sounds like you're describing in your comments), there are other ways to accomplish this. I'm not terribly familiar with how this sort of thing is typically done, but I imagine that in a shell script you could simply use trap. (trap a.sh EXIT might be sufficient, though that would make your script quite difficult to permanently kill should you later decide that you made a mistake.)
You probably want to exec the script. exec replaces the script in execution with a new execution environment, so it never returns (unless there was some problem starting the indicated program).
Unless you know what you're doing, it's a bad idea to overwrite a running script. bash does not read the entire file into memory when it starts a script, so it will continue at the same byte offset in the overwritten file.
Finally, don't use sh when you mean bash. sh might be a different shell, and even if it is bash, it will have different behaviour.
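Putting those three points together, a minimal sketch of the safer pattern (the URL and file names are the questioner's own placeholders):
# a.sh
# some stuff

# fetch the new version under a temporary name, then rename atomically,
# so the running script is never overwritten in place
wget -q -O a.sh.new http://127.0.0.1/b.sh || exit 1
mv -f a.sh.new a.sh

# replace this process with the fresh copy; exec never returns on success,
# so stale copies cannot pile up
exec bash a.sh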

file region locking using bash shell script

I am trying to write a script that locks a region of a file from a bash shell script.
I have used flock, but it locks the whole file and does not provide parameters to lock a region of the file, the way fcntl does in C.
It would be helpful if someone could provide some suggestions in this area.
flock(1) (which is a C program, see http://util-linux.sourcearchive.com/documentation/2.17/flock_8c-source.html) only utilizes flock(2), so you would need a similar command that utilizes fcntl. If such a command doesn't exist yet, one would have to write it.
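Until such a command exists, one workaround from a shell script is to borrow an interpreter that already exposes fcntl. A hedged sketch using python3's fcntl module (data.txt and the 100-byte range are made-up examples):
#!/bin/bash
python3 - <<'EOF'
import fcntl, time
with open("data.txt", "r+b") as f:
    fcntl.lockf(f, fcntl.LOCK_EX, 100, 0, 0)   # exclusive lock on bytes 0-99 only
    time.sleep(5)                              # hold the region; do the real work here
    fcntl.lockf(f, fcntl.LOCK_UN, 100, 0, 0)   # release the region
EOF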

How can I modify an environment variable across all running shells?

I use Terminal.app and iTerm, both of which support running multiple shells simultaneously via multiple tabs and multiple windows. I often use this feature, but as a result, if I want to change an environment variable setting, I generally have to run the same command once in every single tab and window I have open -- as well as any new tabs or windows that I open in the future. Is it possible for my shells to communicate with each other such that I can change an environment variable once, and have that change propagate to all my other currently running shells?
I know that I can statically set env variables in a startup file like .bashrc. I also know that I can have subshells inherit the environment of parent shells, either normally or via screen. Neither of those options address this question. This question is specifically about dynamically changing the environment of multiple currently-running shells simultaneously.
Ideally, I'd like to accomplish this without writing the contents of these variables to disk at any point. One of the reasons I want to be able to do this is so that I can set sensitive information in an env variable, such as hashed passwords, and refer to them later on in other shells. I would like to be able to set these variables once when I log in, and be able to refer to them in all my shells until I log out, or until the machine is restarted. (This is similar to how ssh-agent works, but as far as I'm aware, ssh-agent will only store SSH keys, not env variables.)
Is it possible to make shells communicate like this?
Right. Since each process has its own copy of the environment variables, you can't magically change them all at once. If you bend your mind enough, though, there are strange workarounds.
For instance, if you currently have a command you run to update each one, you can automate running that command. Check the bash man page for PROMPT_COMMAND, which can run a command each time the bash prompt is printed. Most shells have something similar.
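A minimal sketch of that idea (the ~/.shared_env path is hypothetical; note that it does write to disk, which the question hoped to avoid, so point it at a tmpfs if that matters):
# in each shell's ~/.bashrc: re-source the shared file before every prompt
PROMPT_COMMAND='[ -r ~/.shared_env ] && . ~/.shared_env'

# in any one shell: publish a change; every other shell picks it up at its next prompt
echo 'export MY_VAR="new value"' >> ~/.shared_env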
As far as not putting a hashed password on disk because you are pulling it from an envvar instead of something like ssh-agent...that would be a whole 'nother topic.
Unless you write your own shell, you can't. ssh-agent works by having each SSH client contact it for the keys, but most common shells have no similar mechanism.
