Bash functions are more versatile than aliases. For example, they accept parameters.
Is there any drawback to going full function style and completely dropping aliases, even for simple cases? I can imagine that functions might be more resource-intensive, but I have no data to back that up.
Any other reason to keep some of my aliases? They have simpler syntax and are easier for humans to read, but apart from that?
Note: aliases take precedence over functions.
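As a minimal sketch of the difference (the names ll and mkcd are illustrative, not from the question): an alias is pure textual substitution, while a function accepts parameters and can use them anywhere in its body:

```shell
# Alias: textual substitution only; arguments can only be appended at the end.
alias ll='ls -l'

# Function: takes parameters and can contain logic.
mkcd() {
    mkdir -p "$1" && cd "$1" || return 1
}
```

Something like mkcd cannot be expressed as an alias at all, because the argument has to appear in two places.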
The following link may be relevant regarding function overhead; it suggests there is no overhead compared to an alias: 3.6. Functions, Aliases, and the Environment
Quoting Dan again: "Shell functions are about as efficient as they can be. It is the approximate equivalent of sourcing a bash/bourne shell script save that no file I/O need be done as the function is already in memory. The shell functions are typically loaded from [.bashrc or .bash_profile] depending on whether you want them only in the initial shell or in subshells as well. Contrast this with running a shell script: Your shell forks, the child does an exec, potentially the path is searched, the kernel opens the file and examines enough bytes to determine how to run the file, in the case of a shell script a shell must be started with the name of the script as its argument, the shell then opens the file, reads it and executes the statements. Compared to a shell function, everything other than executing the statements can be considered unnecessary overhead."
Related
I have a batch processing system that can execute a number of commands sequentially. These commands are specified as list of words, that are executed by python's subprocess.call() function, without using a shell. For various reasons I do not want to change the processing system.
I would like to write something to a file, so a subsequent command can use it. Unfortunately, all the ways I can think of to write something to the disk involve some sort of redirection, which is a shell concept.
So is there a way to write a Linux command line that will take its argument and write it to a file, in a context where it is executed outside a shell?
Well, one could write a generalised parser and process manager that could handle this for you, but, luckily, one already comes with Linux. All you have to do is tell it what command to run, and it will handle the redirection for you.
So, if you were to modify your commands a bit, you could easily do this. Just concatenate the words together with spaces, quoting any words that may contain spaces or other special characters, and then use a list such as:
/bin/sh, -c, {your new string here} > /some/file
Et voila, stuff written to disk. :)
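A minimal sketch of that approach (the echo command and output path are illustrative): join your original word list into one shell string, append the redirection, and let /bin/sh -c do the rest. The outer call remains a plain list of words, so it works with subprocess.call() without shell=True:

```python
import os
import subprocess
import tempfile

outfile = os.path.join(tempfile.gettempdir(), "batch_demo.txt")

# The original word list, joined into one string, plus the redirection.
words = ["echo", "hello"]
command_string = " ".join(words) + " > " + outfile

# Still a list of words from subprocess's point of view; the shell
# spawned by /bin/sh -c performs the "> outfile" redirection.
subprocess.call(["/bin/sh", "-c", command_string])
```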
Looking at the docs for subprocess.call, I see it has extra parameters:
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)
If you specify stdout= to a file you have opened, then the output of your code will go to that file, which is basically the same behaviour?
I don't see your exact usage case, but this is certainly a way to synthesise the command-line pipe behaviours, with little coding change.
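For example (a hedged sketch; the file path is illustrative), passing an already-open file as stdout= redirects the child's output with no shell involved at all:

```python
import os
import subprocess
import tempfile

path = os.path.join(tempfile.gettempdir(), "stdout_demo.txt")

# No shell here: the child process inherits the open file as its stdout.
with open(path, "w") as f:
    subprocess.call(["echo", "hello"], stdout=f)
```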
Note that the docs also say you should not use the built-in PIPE support here, depending on your exact requirements: it is important to read data from a pipe regularly, or the writer will stall when the buffer is full.
When a shell (e.g. bash) invokes an executable file, it first forks itself, and then its copy calls execve on the executable file.
When a shell invokes a builtin command, no new process is created; execve can only operate on executable files, and builtin commands are not stored in executable files.
So how are builtin commands stored, and how are they invoked in terms of system calls?
"builtin command" means that you don't have to run an external program. So, no, there's no execve involved at all, and no, there's not even any system call necessarily involved. Your shell really just parses a command string and sees "hey, that's a builtin command, let's execute this and that function".
You can imagine they are the same as shell functions.
So instead of launching an external process, the shell invokes some internal library function which reads the input, outputs the result, and does pretty much the same as the main function of a regular program.
The shell process itself just handles the builtin and potentially modifies itself or its environment as a result. There might not be any system calls made at all.
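You can see the distinction from the shell itself. The type builtin reports how the shell would interpret a name, and the POSIX command -v prints a filesystem path only for external commands:

```shell
# "type" reports how the shell would resolve a name:
type cd    # e.g. "cd is a shell builtin"  -> handled inside the shell process
type ls    # e.g. "ls is /bin/ls"          -> needs fork + execve

# "command -v" prints a path only for external commands:
command -v cd    # prints just "cd"
command -v ls    # prints a path such as /bin/ls
```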
As for bash, is it bad practice to store text output in variables? I don't mean a few lines, but as much as a few MB of data. Should the variables be emptied after the script is done?
Edit: I didn't clarify the second part enough. I wanted to ask whether I should empty the variables in the case where I run scripts in the current shell, not in a subshell, so that they don't drain memory. Or shouldn't I run scripts in the current shell at all?
Should the variables be emptied after the script is done
You need to understand that a script is executed in a subshell (a child of the present shell) that gets its own environment and variable space. When the script ends, that subshell exits and all the variables it held are destroyed/released anyway, so there is no need to empty variables programmatically.
As for bash, is it a bad practice to store text output in variables?
That's great practice! Carry on with bash programming, and don't worry about this kind of memory issue (until you want to store a Debian DVD image in a single $debian_iso variable; then you may have a problem).
I don't mean a few lines, but as much as a few MB of data. Should the variables be emptied after the script is done?
All your variables in a bash shell evaporate when your script finishes executing; the shell manages the memory for you. That said, if you assign foo="bar" you can access $foo later in the same script, but obviously you won't see that $foo in another script.
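A quick illustration of that scoping: a variable assigned in a subshell (the same mechanism that runs a child script) vanishes when the subshell exits, and the parent's copy is untouched:

```shell
foo="parent"
(
    foo="child"          # exists only inside this subshell
)
echo "$foo"              # prints "parent": the subshell's copy is gone
```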
What is the difference between a command, function and systemcall?
This is probably homework help, but anyhow:
Command - A program (or a shell built-in) you execute from (probably) your command shell.
Function - A logical subset of your program. Calling one is entirely within your process.
Systemcall - A function that is executed by your operating system; a primary way of using OS features like working with a file system or using the network.
A command can be a program, which in turn is comprised of functions, which themselves can execute system calls.
For example, the 'cp' command in Unix-like systems copies files. Its implementation includes functions which perform the copying. Those functions themselves execute system-calls like open() and read().
They are all just abstractions of a set of computer instructions which perform a given task.
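To make the cp example concrete, here is a hedged, minimal sketch (error handling trimmed; copy_file is an illustrative name, not cp's real implementation): copy_file is an ordinary function, while open(), read(), write(), and close() are the system calls it issues:

```c
#include <fcntl.h>
#include <unistd.h>

/* Minimal cp-like function: a function built out of system calls. */
static int copy_file(const char *src, const char *dst)
{
    char buf[4096];
    ssize_t n;
    int in = open(src, O_RDONLY);                        /* system call */
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (in < 0 || out < 0)
        return -1;
    while ((n = read(in, buf, sizeof buf)) > 0)          /* system call */
        if (write(out, buf, (size_t)n) != n)             /* system call */
            return -1;
    close(in);                                           /* system call */
    close(out);
    return 0;
}
```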
Bit of a support question. Apologies for that.
I have an application linked with GNU readline. The application can invoke shell commands (similar to invoking tclsh using readline wrapper). When I try to invoke the Linux less command, I get the following error:
Suspend (tty output)
I'm not an expert around issues of terminals. I've tried to google it but found no answer. Does any one know how to solve this issue?
Thanks.
You probably need to investigate the functions rl_prep_terminal() and rl_deprep_terminal() documented in the readline manual:
Function: void rl_prep_terminal(int meta_flag)
Modify the terminal settings for Readline's use, so readline() can read a single character at a time from the keyboard. The meta_flag argument should be non-zero if Readline should read eight-bit input.
Function: void rl_deprep_terminal(void)
Undo the effects of rl_prep_terminal(), leaving the terminal in the state in which it was before the most recent call to rl_prep_terminal().
The less program is likely to get confused if the terminal is already in the special mode used by the Readline library and it tries to tweak the terminal into an equivalent mode. This is a common problem for programs that work with the curses library, or other similar libraries that adjust the terminal status and run other programs that also do that.
Whilst counterintuitive, it may be stopped waiting for input (some OSes and shells report Stopped/Suspended (tty output) when you might expect it to refer to (tty input)). This would fit the usual behaviour of less when it stops at the end of (what it thinks is) the screen length.
Can you use cat or head instead? or feed less some input? or look at the less man/info pages to see what options to less might suit your requirement (e.g w, z, F)?
Your readline application is making itself the controlling app for your tty.
When you invoke less from inside the application, it wants to be in control of the tty as well.
If you are trying to invoke less in your application to display a file for the user, you want to put the newly forked process into its own process group before calling exec. You can do this with setsid(). Then when less calls tcsetpgrp(), it will not get thrown into the background with SIGTTOU.
When less finishes, you'll want to restore the foreground process group with tcsetpgrp() as well.
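Putting those pieces together, a hedged sketch (run_in_own_group is my name for it, and real code needs fuller error handling): fork, give the child its own session with setsid() before exec'ing the pager, then reclaim the terminal when it exits:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run "cmd arg" (e.g. "less somefile") in its own session so that
 * its tcsetpgrp() call does not stop it with SIGTTOU.            */
static int run_in_own_group(const char *cmd, const char *arg)
{
    int status;
    pid_t pid = fork();

    if (pid < 0)
        return -1;
    if (pid == 0) {                          /* child */
        setsid();                            /* new session / process group */
        execlp(cmd, cmd, arg, (char *)NULL);
        _exit(127);                          /* exec failed */
    }
    waitpid(pid, &status, 0);
    if (isatty(STDIN_FILENO))
        tcsetpgrp(STDIN_FILENO, getpgrp());  /* reclaim the terminal */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```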