INPUT=10
OUTPUT_IN=20
KEYWORD="IN"
echo $OUTPUT_"$KEYWORD"
It should display 20
Mainly I am looking to generate the variable name OUTPUT_IN
How to resolve this?
You can use indirection in Bash like this:
INPUT=10
OUTPUT_IN=20
KEYWORD="IN"
var="OUTPUT_$KEYWORD"
echo "${!var}"
However, you should probably use an array or some other method to do what you want. From BashFAQ/006:
Putting variable names or any other bash syntax inside parameters is generally a bad idea. It violates the separation between code and data; and as such brings you on a slippery slope toward bugs, security issues etc. Even when you know you "got it right", because you "know and understand exactly what you're doing", bugs happen to all of us and it pays to respect separation practices to minimize the extent of damage they can have.
Aside from that, it also makes your code non-obvious and non-transparent.
Normally, in bash scripting, you won't need indirect references at all. Generally, people look at this for a solution when they don't understand or know about Bash Arrays or haven't fully considered other Bash features such as functions.
And you should avoid using eval if at all possible. From BashFAQ/048:
If the variable contains a shell command, the shell might run that command, whether you wanted it to or not. This can lead to unexpected results, especially when variables can be read from untrusted sources (like users or user-created files).
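For example, here is a contrived sketch of how a value from an untrusted source can smuggle a command through eval (the payload is harmless here, but it could be anything):

varname='x; touch /tmp/pwned'   # attacker-controlled "variable name"
eval echo \$$varname            # eval actually runs: echo $x; touch /tmp/pwned
ls /tmp/pwned                   # the injected command was executed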
Bash 4 has associative arrays which would allow you to do this:
declare -A array   # required: without this, the keys are evaluated as arithmetic and both collapse to index 0
array[in]=10
array[out]=20
index="out"
echo "${array[$index]}"   # prints 20
You can also use eval (subject to the caveats above):

eval newvar=\$$varname

Source: Advanced Bash-Scripting Guide, Indirect References
I am looking for a way to edit the source code of common Linux commands (passwd, cd, rm, cat)
Ex. Every time the 'cat' command is called (by any user), it performs its regular function, but also prints "done" to stdout after.
If you're only looking to "augment" the commands as in your example, you can create e.g. /opt/bin/cat.sh:
#!/bin/sh
/bin/cat "$@" && echo "done"

(Note the "$@": without it, the wrapper would silently ignore any arguments passed to cat.)
and then either:
name the script /opt/bin/cat instead (dropping the .sh, so it shadows the real binary by name) and change the default PATH (in /etc/bash.bashrc in Ubuntu) as follows:
PATH=/opt/bin:$PATH
or rename cat to e.g. cat.orig and move cat.sh to /bin/cat.
(If you do the latter, your script needs to call cat.orig not cat).
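A sketch of the rename approach (this assumes root access and that the real binary lives at /bin/cat):

sudo mv /bin/cat /bin/cat.orig
sudo tee /bin/cat >/dev/null <<'EOF'
#!/bin/sh
/bin/cat.orig "$@" && echo "done"
EOF
sudo chmod +x /bin/cat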
If you want to actually change the behavior, then you need to look at the sources here:
https://ftp.gnu.org/gnu/coreutils/
and then build and replace them.
All this assumes, of course, that you have root permissions, seeing how you want to change that behavior for any user.
The answer to how to modify the source is not to, unless you have a REALLY good reason to. There’s a plethora of reasons why you shouldn’t, but a big one is that you should try to avoid modifying the source of anything that could receive an update. The update breaks, if not erases, your code and you’re left with a lot of work.
Alternatively, you can use things like aliases for quick customizations, and write scripts that call and rely upon a command being available instead of worrying about its implementation. I've over-explained, but that's because I'm coming to you as someone with only a little experience with Linux but much more in development, and what I've said extends beyond an operating system's CLI capabilities and lands further into general concepts of development.
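For example, a shell function in ~/.bashrc can wrap a command without touching its source (a function rather than an alias, because the arguments have to land before the &&):

# overrides cat in interactive shells; command bypasses the function, avoiding recursion
cat() { command cat "$@" && echo "done"; }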
I find it easy in perl to do things such as:
print "File not found, valid files are:\n\n".`ls DIRECTORY | grep 'php'`;
`rm -rf directory`
my @files_list = split("\n", `ls DIRECTORY | grep 'FILE_NAME_REGEX'`);
Is it bad practice to do such things? I find it so much easier to do this than painstakingly implementing everything. I treat Perl as an advanced version of bash.
Using external binaries is:
very inefficient
potentially insecure
not portable (thx @friedo)
lazy... in most cases there's a Perl module that'll do what you want if you look for it
In this particular case, look at the File::Glob module.
It sort of depends on the expected life of the program. If it's something that you're only going to use yourself, or it's meant as a one-off, it's perfectly fine. That is what it was originally designed for: "Initially designed as a glue language for the UNIX operating system..." (Programming Perl, 2nd Ed. page ix).
On the other hand, if it's a program meant to be used a lot, distributed widely, run on more than one OS, etc. then it's best to use the built-ins and the many packages available via CPAN.
Either way, you should probably always use the unlink function instead of calling out to rm, and the grep or map functions instead of calling out to grep.
And now for a dissenting view. TMTOWTDI, as @transistor1 said in a comment. And I could also invoke Perl's power of whipitupitude. If cp and mv are more familiar to you than the File::Copy module, if you aren't worried about portability, and if your program can afford the performance penalty from starting up an extra process or two (hint: it probably can), Perl makes it easy to integrate these tools into your programs, and there is nothing -- nothing -- wrong with using whatever tools are available to get your task done as quickly as you can.
And heck, sometimes there are those one-off tasks where the Unix utility is the right tool for the job, even when you know how to do the job in Perl.
# I need log.err plus the next two oldest and the next two newest
# files in the current directory. Should I say
chomp(@f = qx[ls -t | grep -C2 log.err]);
# or
@e = sort { -M $a <=> -M $b } glob("*");
($i) = grep { $e[$_] eq 'log.err' } 0..$#e;
@f = @e[$i-2 .. $i+2];
# or
use Acme::OlderNewer::FileFinder;
@f = find_oldernewer_files(".", "log.err", -2, +2);
# ? Or suppose I want a list of all the *.pm files under all
# directories in @INC, and we lucked out so that nothing in @INC
# has any spaces or special characters.
# Is my script any less useful for saying
chomp(@f = `find @INC -name \\*.pm`);
# than
use File::Find;
find( sub { /\.pm$/ && push @f, $File::Find::name }, @INC );
Using backticks is usually slower and somewhat insecure, and depending on the action you want, Perl is not more difficult; you just have to know what to do. For example:
print "File not found, valid files are\n\n", grep /php/, glob 'DIRECTORY/*';
unlink glob 'directory/*';
rmdir 'directory';
my @files = grep /REGEX/, glob 'DIRECTORY/*';
Perl is built to accommodate the lazy, but most of the normal things you do with bash you can easily do in Perl. Not learning how to do it is your call, but I would think if you do it a lot, it would make things easier for you.
As with most engineering questions, the answer is "That depends.".
Treating Perl as a scripting language sacrifices portability, maintainability, execution speed, and other properties for getting the task before you accomplished in minimum time. The consequences of making those sacrifices are not constant, but are rather a function of the complexity of your program. The more complex your program, the more likely you are to pay more in terms of maintainability than you gain in whipitupitude, and the more likely you would be making a bad choice by shelling out.
There is no universally right answer here. In some cases, you need a script to work once. In others, you need it to work a few times, but it's short. And in others, you are writing an application of tens of thousands of lines across hundreds of source files. The scenario in which it is used dictates which tool is most appropriate, not some sort of perfect rule.
Sometimes, you'll see this approach derided as "laziness". And that's what it is. Whether you take the above into consideration and retain a willingness to change your approach when your parameters change determines whether you are engaging in the good kind of laziness or the bad kind of laziness. The truly lazy person knows that sometimes the laziest thing to do is to start over.
I recently got into Stata coming from a procedural/OO/functional background, and am having trouble understanding the basic elements of the language.
For example, I discovered that there is a syntax command which "allows programs to interpret the arguments the user types according to a grammar, such as standard Stata syntax". I infer this is the reason why some commands require a list of variables given as arguments to be separated by whitespace while others require a comma-separated list. But the idea of a program defining its own syntax, instead of the language enforcing a single parameter syntax, seems plain weird.
Another quite interesting construct is the syntax for macro definition and expansion (`macro') and the apparent absence of local variables as known in other languages.
Is there something like a "Stata for Java developers" document explaining the basic concepts of the language to people with my background?
PS: Apologies if this question seems unclear. Unfortunately, I can't formulate more concrete/clear questions at this point :(
I'm not exactly sure what you are looking for... but here are a few related points. Stata is kind of like writing a Unix shell script or a Windows batch file. Each line executes a command, and the first word is the command name. By convention, most commands have the following structure:
command [varlist] [=exp] [if expression] [in range] [weight] [using filename] [, options]
Brackets [...] mean the element is optional (or unavailable, depending on the command). Some commands can be prefixed (such as by:, xi:, or svy:). The syntax of commands by Stata Corp and experienced users is pretty consistent. But, because Stata users also write commands, you occasionally see things that are wacky.
When Stata users write commands, they are saved in .ado files (not .do) and are defined using the program command. (See help program and the "Ado files" section of the manual.) Writing a command is akin to writing a function in other languages (e.g., MATLAB).
The syntax command is used to help you write your own command. When you execute a command, everything following the command's name (command above) is passed to the program in the local macro `0'. The syntax command parses this local macro, so that you can reference `varlist' or `if' and so on. In theory, you could parse `0' yourself, but the syntax command makes it much easier for you and your users (as long as you are following the conventional syntax). I put an example at the bottom.
I don't know exactly what you mean by "apparent absence of local variables as known in other languages." Macros store a single string or a single number in memory. Here's a comment I wrote about Stata's local/global macros. They are indeed a unique feature of Stata's programming language. As their names imply, "local" macros are only available within a specific program (command) or .do file while "global" macros are available throughout a Stata session.
I found that, once I got used to macros in Stata, I started to miss them in other languages. They are pretty handy. In addition to (local/global) macros and the main data set, you can also store "things" in memory with the scalar and matrix commands (and one or two other obscure things).
I hope that helps. Here's a list of resources that might help.
Example:
program define myprogram
    syntax varlist [if], [hello(string) yes]
    macro list _0 _varlist _if _hello _yes
    summarize `varlist' `if'
    display "Here's the string in my hello option: `hello'"
    if !missing("`yes'") di "Yes is on"
    else di "Yes is off"
end
sysuse auto.dta
myprogram rep78 headroom if price > 5000 , hello("world") yes
A few books offer an "X for Y users" approach, but generally between statistical software packages. Regarding your question, I would recommend using instinct first.
I started reading (programming and markup) code about ten years ago, and even though I cannot code in a large number of languages, I can read a few languages rather easily. I found Stata easy because most of its core commands are straightforward, with recurrent optional statements like over, if or replace (to take a deliberately diverse set of statements) that are easy to understand and then apply.
When I teach Stata, I always have problems getting students to use the help pages as much as I do (and I love the fact they can be accessed so easily, just like in R). I explain the paradox by considering the fact that I can read the syntax indications straightaway. Syntax is very well covered by the previous reply to your question.
The extra mile consists of opening the [R], [U] and especially [P] handbooks that come with Stata in the utilities folder. There is a wealth of detail there, which will interest both programmers and training statisticians. This is where I learnt to use macros and loops, beyond the obvious logic of commands like local/global and foreach/while (if I understand the term correctly, Stata is Turing-complete).
Stata is sometimes a bit of a pain when it comes to using single/double quotes in macro loops, but it's pretty straightforward otherwise. Have fun!
I need to document the software I'm currently working on. The software consists of several programming languages and scripts, which got me thinking. If a new developer comes along and needs to fix something, they might know Java but maybe not bash scripting. It would be nice if there was a program which would help them understand what
for f in "$#" ; do
means. I was thinking of something that creates a static HTML page with the code plus syntax highlighting, and if you hover over something (like the "for"), it would display a pop-up with an explanation:
for starts a loop which iterates over all values that follow in. In the loop, you can access each value via the variable $f. The loop body is between do and done
Does something like that already exist?
[EDIT] This is just an example. You'd get similar help for f, in, "$@", ; and do, i.e. each and every element of the line should be explained. Unknown elements (like command names) should link to Google. So you can understand what it does even if you're missing some detail.
[EDIT2] I'm aware that you can't write a program which understands what another program does. What I'm looking for is a simple tool which will do "extended syntax highlighting" in the sense that it will color an expression and give a short explanation of what it means (plus maybe a link to some in-depth reference).
This is meant for someone who knows how to program but maybe hasn't seen some obscure construct before. Say
echo "Error" 1>&2
Every bash programmer knows what this means but a Java developer might be puzzled by the 1>&2 despite the fact that they can guess that echo == System.out.println. A simple "Redirects stdout to stderr" will clear things up and give that instant "AHA!" which allows them to stay in their current train of thought.
A tool like this could be built using ANTLR, i.e. parse the code into an abstract syntax tree using an ANTLR grammar for that language, and write an HTML generator which produces the annotated code.
It sounds like a useful tool to have for language learning, or exploring source code of projects you're not maintaining -- but is it appropriate for documentation?
Why is it important to help the programmers of other languages understand the code at this level of implementation detail? Anyone maintaining the implementation at this level will obviously have to know the language and will probably have an IDE to do most of this.
That said, I'd definitely consider a tool like this as a learning aid.
IMO it would be simpler and more effective to just collect links to good language-specific references and tutorials on a Wiki page.
For all mainstream languages, such sources exist and are maintained regularly. If you try to create your own reference, you need to maintain it too. Fair enough, bash syntax is not going to change very often, but other languages do develop faster, so it is going to be a burden.
If you think about it, it's not that useful to have a tool that explains the syntax. Developers could just google for keywords instead of browsing a website in a similar fashion to http://www.codeweblog.com/source/ .
I believe that good comments will be by far more useful, plus there are tools to extract the documentation by using the comments (for example, HappyDoc does that for Python).
It is a very tricky thing. First of all, it can be proven that no program exists that will "understand" an arbitrary program. However, you can still use existing documentation. Maybe using tools like Doxygen can help you. You would need to document your code through comments, and the documentation will be generated from them.
A language cannot be explained only through its syntax. The runtime environment plays a great part, together with the underlying philosophy of the language and libraries.
Moreover, syntax is not that complex for most common languages (given that code has been written with maintainability in mind).
Going on with the bash example, you cannot deeply understand bash if you know nothing about processes & job control, environment variables, a big list of Unix commands (tr, sort, cut, paste, sed, awk, find, ...) and many other features that don't appear in the syntax.
If the tool produced

for starts a loop which iterates over all values that follow in. In the loop, you can access each value via the variable $f. The loop body is between do and done

it would be pretty worthless. This is exactly the kind of comment that trainee (human) programmers are told never to write.
I use bash, and have done so for over a decade - but occasionally I wonder whether there have been any significant new developments in the world of Linux shells.
A few years back Microsoft released PowerShell, which seemed very interesting. Is there any comparable innovation going on in Linux shells?
You do realize bash 4 has very recently been released with a load of new features and language additions?
New shell options: globstar (**/foo) does a recursive search, and dirspell fixes typos during pathname expansion.
Associative arrays, which map strings to strings instead of just numbers to strings.
The autocd shell option, which allows changing directories by just typing the directory path instead of having to put cd in front.
Coprocesses
The &>> and |& redirection operators, which redirect both stdout and stderr
Loads of additions to existing builtins for improved scripting convenience.
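A quick sketch of a few of these in action (requires bash 4 or newer; the file names are illustrative):

shopt -s globstar autocd

printf '%s\n' **/*.sh        # globstar: ** matches directories recursively

declare -A color             # associative array with string keys
color[error]=red
color[ok]=green
echo "${color[error]}"       # prints red

make |& tee build.log        # |& pipes stdout and stderr together
./job.sh &>> all-output.log  # &>> appends both streams to a file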
Check out:
The "official" changelog: http://tiswww.case.edu/php/chet/bash/CHANGES
A short guide to some of the new features: http://bash-hackers.org/wiki/doku.php/bash4
I'd take a look at zsh or the fish shell.
One of the least touted features of Bash (and several other shells) is the ability to write your own loadables, and have the shell run them as builtins.
Let's say you write the loadable 'on' and you want it to work like this:
on node 123 run some command
on class nodes run some command
on all nodes run some command
... etc ..
You can follow simple examples of how to write a loadable, then enable it as a bash builtin via enable -f /path/to/loadable loadable_name
So in our case, enable -f /opt/bash/loadables/on on
... in your bashrc, and you've got it.
So, if you want to have bash interpret your spiffy new language natively, you would write a loadable named 'use' or 'switch_to', then modify the parser to load a different grammar / runtime if a certain environment variable was set.
I.e.:
#!/bin/bash
switch_to my-way-cool-language
funkyfunc Zippy(int p) [[
jive.wassup(p) ]]
Most people are not going to want to hack their shell, however. I did want to point out that facilities exist to take Bash and make it the way you want it, without fiddling too much with core code.
See /path-to-bash-source/examples/loadables, you might be able to get that to fly where you work, since you're still using Bash.
You can run PowerShell on Linux via Pash. It uses Mono the way PowerShell uses .NET.
I think the "original improved shell" is ksh93. bash came into existence at a time when the ksh source code was proprietary; if ksh had been open-source then, it might not have been deemed necessary to have a new shell (though with the FSF you never know). ksh is worth studying, especially for its ability to be extended through C modules, but it's not a clear win over bash. bash's autocompletion is clearly superior, which may be enough to make bash a win overall. In any case bash and ksh have made substantial effort to converge, so differences are minor.
The other interesting shell is zsh, which attempts to be everything that ksh is while also including csh. Since I never saw any point or use to csh, I am not the right person to advocate for zsh. I will point out one unusual incompatibility: by default, in zsh a variable $var always expands to a single token, even if it contains spaces. This behavior is incompatible with all other sh-derived shells, and it is occasionally inconvenient, but really it makes a lot more sense than the original, and it saves a hell of a lot of quoting.
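The difference is easy to demonstrate from the bash side (a small sketch; the zsh behavior is shown in the comments):

var="a b c"
printf '[%s]\n' $var   # bash and ksh split the unquoted expansion into three words:
# [a]
# [b]
# [c]
# zsh, by default, passes a single word and prints: [a b c]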
csh was the first shell to have job control, but in my mind it (and its successors) has been superseded by bash and ksh. It was never much fun to write scripts in.
Finally, there are many tiny shells designed for rescue floppies (!) and other Spartan environments, but it sounds like you have little interest in those.
(In the matter of innovation, I should add that more than half the scripts I used to write as shell scripts are now Lua scripts. Others could say the same for Python or Ruby, or back in the day, Perl or Tcl. So I think the real innovation is migration away from the shell for programmable interaction at the command line.)
IIRC, PowerShell is object-oriented, whereas most Unix shells and utilities operate on text. In that regard, Squirrel Shell might interest you. I've never used it, though.
If you’re willing to lose sh compatibility, you could look at using a scripting language like Python or Tcl as your shell. rlwrap can be very handy if the interpreter doesn't provide line editing, command history, completion, etc.
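For example, a bare tclsh prompt has no line editing of its own, so (assuming rlwrap is installed):

rlwrap tclsh   # adds readline-style editing and history to the plain REPL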
One philosophy regarding shells is that they should primarily only be used to connect processes with files (here is one page that espouses that approach). That said, people have written some remarkably complex software using them.
Shells don't come much more innovative than the Scheme Shell, scsh. All the power of Scheme combined with the ability to run Unix commands and an embedded awk interpreter (written in Scheme, of course). The only drawback is that it needs a tiny bit of patching to build on 64-bit Linux.
It's not exactly Bourne-shell, but it's different. Of course, you have to learn Scheme - bonus!
If you like Ruby, you can use rush (a Ruby-Unix shell, not irb).
See the presentation here:
http://www.slideshare.net/adamwiggins/rush-the-ruby-shell-and-unix-integration-library
or the official website for more examples:
http://rush.heroku.com/