I have a few makefiles that store shared variables, such as CC=gcc. How should I name them?
The candidates are:
common.mk
Make.common
Makefile.common
Which of these is more conventional? Is there a standard?
Similarly, I have some shell scripts. Which should I choose among the following:
do_this_please.sh
do-this-please.sh
DoThisPlease.sh
doThisPlease.sh
Is there a generally accepted 'case' and suffix for these?
Google's open-source style uses underscores as separators: https://google.github.io/styleguide/shellguide.html#s7.4-source-filenames.
So in your case, it would be do_this_please.sh or do_this_please using this style.
What you've got are the bits that glue a build together. Build scripts, auto-generated configs, makefiles that other makefiles include - questioning how that stuff should be named is a good idea.
Most of all, be consistent.
I've seen a lot of .mk extensions for files included via the Makefile. However, as Gyom suggests, it's a very subjective question.
Whatever makes the syntax highlighter in your editor of choice happy is probably a good choice. If you're on a team where everyone uses something different, ask folks. For me, naming a makefile include with a .mk extension highlights correctly for everyone. Naming shell scripts with a .sh suffix helps in a similar way.
In short, make the file names obvious and try to make syntax highlighting work in as many editors/IDEs as possible. Makefile.common might not do that; common.mk has a better shot.
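For what it's worth, here is a minimal sketch of how such a shared file is usually wired in, written as a shell session for concreteness (the file names, the variables, and the GNU-make-specific $(info ...) line are all just illustrative):

cat > common.mk <<'EOF'
CC     = gcc
CFLAGS = -Wall -O2
EOF

cat > Makefile <<'EOF'
include common.mk

# print the inherited variables at parse time (GNU make only)
$(info CC=$(CC) CFLAGS=$(CFLAGS))

all: ;
EOF

make    # prints: CC=gcc CFLAGS=-Wall -O2

The point is that make only finds the shared file through the explicit include line, so the name is purely a convention for humans and editors.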
I would go for do_this_please.sh. Here is my reasoning:
From a readability perspective, some form of word separation is preferable, which rules out the camelCase variants.
The dash has special meaning in most shells, so pretty-printing lexers tend to pick it up and produce odd highlighting when one of your file names appears in a command. You can see an example of that above.
This leaves the underscored version. Some additional points in favour:
This convention matches the naming of Python modules (cf. PEP 8). Modules in Python group code together, and your shell script could also be seen as such a grouping.
Python programmers will have an immediate sense of how much functionality, and of what scope, you are offering. Python is very popular and has its roots in scripting, so its conventions are probably a good template.
Since the purpose of a shell script is to execute command-line commands, IMO its naming convention should follow the naming conventions for files, because it is more related to the operating system than to a particular programming language. Since most operating systems use - or _ as word separators, it's better to name the script do-this-please.sh or do_this_please.sh. I personally use do_this_please.sh, because I think it looks better.
I would recommend always adding the .sh extension to shell scripts, because if other people are looking through your project, do_this_please won't tell them what kind of file it is without opening it. When they see the extension, they know the file is a shell script, and together with the name they know its whole purpose without having to open it.
It is quite a subjective question, so I'll answer it with a subjective answer :-)
IMHO, go for Makefile.common and do-this-please (with or without .sh suffix). I've seen these a lot and they are indisputably readable.
I recently got into Stata coming from a procedural/OO/functional background, and am having trouble understanding the basic elements of the language.
For example, I discovered that there is a syntax command which "allows programs to interpret the arguments the user types according to a grammar, such as standard Stata syntax". I infer this is the reason why some commands require a list of variables given as arguments to be separated by whitespace while others require a comma-separated list. But the idea of a program defining its own syntax, instead of having a (parameter) syntax enforced on it, seems plain weird.
Another quite interesting construct is the syntax for macro definition and expansion (`macro') and the apparent absence of local variables as known in other languages.
Is there something like a "Stata for Java developers" document explaining the basic concepts of the language to people with my background?
PS: Apologies if this question seems unclear. Unfortunately, I can't formulate more concrete/clear questions at this point :(
I'm not exactly sure what you are looking for... but here are a few related points. Stata is kind of like writing a Unix shell script or a Windows batch file. Each line executes a command, and the first word is the command name. By convention, most commands have the following structure:
command [varlist] [=exp] [if expression] [in range] [weight] [using filename] [, options]
Brackets [.] mean the element is optional (or unavailable, depending on the command). Some commands can be prefixed (such as by:, xi:, or svy:). The syntax of commands written by StataCorp and by experienced users is pretty consistent. But, because Stata users also write commands, you occasionally see things that are wacky.
When Stata users write commands, they are saved in .ado files (not .do) and are defined using the program command. (See help program and the "Ado files" section of the manual.) Writing a command is akin to writing a function in other languages (e.g., MATLAB).
The syntax command is used to help you write your own command. When you execute a command, everything following the command's name (command above) is passed to the program in the local macro `0'. The syntax command parses this local macro, so that you can reference `varlist' or `if' and so on. In theory, you could parse `0' yourself, but the syntax command makes it much easier for you and your users (as long as you are following the conventional syntax). I put an example at the bottom.
I don't know exactly what you mean by "apparent absence of local variables as known in other languages." Macros store a single string or a single number in memory. Here's a comment I wrote about Stata's local/global macros. They are indeed a unique feature of Stata's programming language. As their names imply, "local" macros are only available within a specific program (command) or .do file, while "global" macros are available throughout a Stata session.
I found that, once I got used to macros in Stata, I started to miss them in other languages. They are pretty handy. In addition to (local/global) macros and the main data set, you can also store "things" in memory with the scalar and matrix commands (and one or two other obscure things).
I hope that helps. Here's a list of resources that might help.
Example:
program define myprogram
    syntax varlist [if], [hello(string) yes]
    macro list _0 _varlist _if _hello _yes
    summarize `varlist' `if'
    display "Here's the string in my hello option: `hello'"
    if !missing("`yes'") di "Yes is on"
    else di "Yes is off"
end
sysuse auto.dta
myprogram rep78 headroom if price > 5000 , hello("world") yes
A few books offer an "X for Y users" approach, but generally between stats software solutions. Regarding your question, I would recommend using instinct first.
I started reading (programming and markup) code about ten years ago, and even though I cannot code in a large number of languages, I can read a few languages rather easily. I found Stata easy because most of its core commands are straightforward, with recurring optional statements like over, if or replace (to take a deliberately diverse set of statements) that are easy to understand and then apply.
When I teach Stata, I always have problems getting students to use the help pages as much as I do (and I love the fact they can be accessed so easily, just like in R). I explain the paradox by considering the fact that I can read the syntax indications straightaway. Syntax is very well covered by the previous reply to your question.
The extra mile consists of opening the [R], [U] and especially [P] handbooks that come with Stata in the utilities folder. There is a wealth of detail there, which will interest both programmers and statisticians in training. This is where I learnt to use macros and loops, beyond the obvious logic of commands like local/global and foreach/while (if I understand the term correctly, Stata is Turing-complete).
Stata is sometimes a bit of a pain when it comes to using single/double quotes in macro loops, but it's pretty straightforward otherwise. Have fun!
I need to document the software I'm currently working on. The software consists of several programming languages and scripts, which got me thinking. If a new developer comes along and needs to fix something, they might know Java but maybe not bash scripting. It would be nice if there were a program which would help them understand what
for f in "$@" ; do
means. I was thinking of something that creates a static HTML page with the code plus syntax highlighting and if you hover over something (like the "for"), it would display a pop-up with an explanation:
for starts a loop which iterates over all values that follow in. In the loop, you can access each value via the variable $f. The loop body is between do and done
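(For readers who have not seen the construct, the complete, runnable form of that fragment would be something like the following; the loop body is only an illustration.)

#!/bin/sh
for f in "$@" ; do
    echo "argument: $f"    # each command-line argument in turn
done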
Does something like that already exist?
[EDIT] This is just an example. You'd get similar help for f, in, "$@", ; and do, i.e. each and every element of the line should be explained. Unknown elements (like command names) should link to Google. So you can understand what it does even if you're missing some detail.
[EDIT2] I'm aware that you can't write a program which understands what another program does. What I'm looking for is a simple tool which will do "extended syntax highlighting" in the sense that it will color an expression and give a short explanation of what it means (plus maybe a link to some in-depth reference).
This is meant for someone who knows how to program but maybe hasn't seen some obscure construct before. Say
echo "Error" 1>&2
Every bash programmer knows what this means but a Java developer might be puzzled by the 1>&2 despite the fact that they can guess that echo == System.out.println. A simple "Redirects stdout to stderr" will clear things up and give that instant "AHA!" which allows them to stay in their current train of thought.
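To make that concrete, here is a minimal sketch (the script name demo.sh is hypothetical):

#!/bin/sh
echo "normal output"                   # goes to stdout (file descriptor 1)
echo "Error: something broke" 1>&2     # goes to stderr (file descriptor 2)

Running ./demo.sh > /dev/null discards the normal output but still shows the error line, because only stdout was redirected.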
A tool like this could be built using ANTLR, i.e. parse the code into an abstract syntax tree using an ANTLR grammar for that language, and write an HTML generator which produced the annotated code.
It sounds like a useful tool to have for language learning, or exploring source code of projects you're not maintaining -- but is it appropriate for documentation?
Why is it important to help the programmers of other languages understand the code at this level of implementation detail? Anyone maintaining the implementation at this level will obviously have to know the language and will probably have an IDE to do most of this.
That said, I'd definitely consider a tool like this as a learning aid.
IMO it would be simpler and more effective to just collect links to good language-specific references and tutorials on a Wiki page.
For all mainstream languages, such sources exist and are maintained regularly. If you try to create your own reference, you need to maintain it too. Fair enough, bash syntax is not going to change very often, but other languages do develop faster, so it is going to be a burden.
If you think about it, it's not that useful to have a tool that explains the syntax. Developers could just google for keywords instead of browsing a website in a fashion similar to http://www.codeweblog.com/source/.
I believe that good comments will be by far more useful, plus there are tools to extract the documentation by using the comments (for example, HappyDoc does that for Python).
It is a very tricky thing. First of all, it can be proven that a program which "understands" arbitrary programs cannot exist. However, you can still use existing documentation. Maybe using tools like Doxygen can help you. You would need to document your code through comments, and the documentation will be generated from them.
A language cannot be explained only through its syntax. The runtime environment plays a great part, together with the underlying philosophy of the language and its libraries.
Moreover, syntax is not that complex for most common languages (given that code has been written with maintainability in mind).
Continuing with the bash example, you cannot deeply understand bash if you know nothing about processes and job control, environment variables, a big list of Unix commands (tr, sort, cut, paste, sed, awk, find, ...) and many other features that don't appear in the syntax.
If the tool produced "for starts a loop which iterates over all values that follow in. In the loop, you can access each value via the variable $f. The loop body is between do and done", it would be pretty worthless. This is exactly the kind of comment that trainee (human) programmers are told never to write.
Is there a programming language which can be programmed entirely in interactive mode, without needing to write files which are interpreted or compiled? Think maybe something like IRB for Ruby, but a system which is designed to let you write the whole program from the command line.
I assume you are looking for something similar to how BASIC used to work (boot up to a BASIC prompt and start coding).
IPython allows you to do this quite intuitively. Unix shells such as Bash use the same concept, but you cannot re-use and save your work nearly as intuitively as with IPython. Python is also a far better general-purpose language.
Edit: I was going to type up some examples and provide some links, but the IPython interactive tutorial seems to do this a lot better than I could. Good starting points for what you are looking for are the sections on source code handling tips and lightweight version control. Note this tutorial doesn't spell out how to do everything you are looking for precisely, but it does provide a jumping off point to understand the interactive features on the IPython shell.
Also take a look at the IPython "magic" reference, as it provides a lot of utilities that do things specific to what you want to do, and allows you to easily define your own. This is very "meta", but the example that shows how to create an IPython magic function is probably the most concise example of a "complete application" built in IPython.
Smalltalk can be programmed entirely interactively, but I wouldn't call the Smalltalk prompt a "command line". Most Lisp environments are like this as well. Also PostScript (as in printers), if memory serves.
Are you saying that you want to write a program while never seeing more code than what fits in the scrollback buffer of your command window?
There's always Lisp, the original alternative to Smalltalk with this characteristic.
The only way to avoid writing any files is to move completely to a running interactive environment. When you program this way (that is, interactively, such as in IRB or F# Interactive), how do you distribute your programs? When you exit IRB or the F# Interactive console, you lose all the code you wrote interactively.
Smalltalk (see modern implementations such as Squeak) solves this, and I'm not aware of any other environment where you could fully avoid files. The solution is that you distribute an image of the running environment (which includes your interactively created program). In Smalltalk, these are called images.
Any Unix shell conforms to your question. This goes from bash, sh, csh and ksh to tclsh for Tcl or wish for Tk GUI writing.
As already mentioned, Python has a few good interactive shells. I would recommend bpython for starters instead of ipython; the advantage of bpython here is the support for autocompletion and help dialogs that tell you what arguments a function accepts or what it does (if it has docstrings).
Screenshots: http://bpython-interpreter.org/screenshots/
This is really a question about implementations, not languages, but
Smalltalk (try out the Squeak version) keeps all your work in an "interactive workspace", but it is graphical and not oriented toward the command line.
APL, which was first deployed on IBM 360 and 370 systems, was entirely interactive, using a command line on a modified IBM Selectric typewriter! Your APL functions were kept in a "workspace" which did not at all resemble an ordinary file.
Many, many language implementations come with pure command-line interactive interpreters, like say Standard ML of New Jersey, but because they don't offer any sort of persistent namespace (i.e., when you exit the program, all your work is lost), I don't think they should really count.
Interestingly, the prime movers behind Smalltalk and APL (Kay and Iverson respectively) both won Turing Awards. (Iverson got his Turing award after being denied tenure at Harvard.)
Tcl can be programmed entirely interactively, and you can certainly define new Tcl procs (or redefine existing ones) without saving to a file.
Of course, if you are developing an entire application, at some point you do want to save to a file, or else you lose everything. Using Tcl's introspective abilities it's relatively easy to dump some or all of the current interpreter state into a Tcl file (I've written a proc to make this easier before; however, mostly I would just develop in the file in the first place, and have a function in the application to re-source itself if its source changes).
Not sure about that, but this system is impressively interactive: http://rigsomelight.com/2014/05/01/interactive-programming-flappy-bird-clojurescript.html
Most variations of Lisp make it easy to save your interactive work product as program files, since code is just data.
Charles Simonyi's Intentional Programming concept might be part way there, too, but it's not like you can go and buy that yet. The Intentional Workbench project may be worth exploring.
Many Forths can be used like this.
Someone already mentioned Forth, but I would like to elaborate a bit on the history of Forth. Traditionally, Forth is a programming language which is its own operating system. A traditional Forth saves the program directly onto disk sectors without using a "real" filesystem. It could afford to do that because it ran directly on the CPU without an operating system, so it didn't need to play nicely with anything else.
Indeed, some implementations have Forth as not only the operating system but also the CPU (a lot of modern stack-based CPUs are in fact designed as Forth machines).
In the original implementation of Forth, code is compiled each time a line is entered and saved on disk. This is feasible because Forth is very easy to compile. You just start the interpreter, play around with Forth, defining functions as necessary, then simply quit the interpreter. The next time you start the interpreter, all your previous functions are still there. Of course, not all modern implementations of Forth work this way.
Clojure
It's a functional Lisp on the JVM. You can connect to a REPL server called nREPL, and from there you can start writing code in a text file and loading it up interactively as you go.
Clojure gives you something akin to interactive unit testing.
I think Clojure is more interactive than other Lisps because of its strong emphasis on the functional paradigm. It's easier to hot-swap functions when they are pure.
The best way to try it out is here: http://web.clojurerepl.com/
Elm
Elm is probably the most interactive language you can get that I know of. It's a very pure functional language with syntax close to Haskell. What makes it special is that it's designed around a reactive model that allows hot-swapping of code (modifying running functions or values). The reactive bit means that whenever you change one thing, everything is re-evaluated.
Now, Elm is compiled to HTML, CSS and JavaScript, so you won't be able to use it for everything.
Elm gives you something akin to interactive integration testing.
The best way to try it out is here: http://elm-lang.org/try
Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
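For completeness, the setup behind that is roughly the following (assuming $HOME/bin is already on PATH):

chmod +x "$HOME/bin/rebuild.mk"

rebuild.mk                      # runs: make -f .../rebuild.mk
rebuild.mk someprogram CC=cc    # extra targets and macros are passed through to make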
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
@dF comments that the technique might prevent you from passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command-line arguments that I typed, including options ('-u' on my home-brew version), targets such as 'someprogram', and macros such as CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
Another problem is that by using the Makefile this way you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic to building Makefile based projects. Imagine if every project you built you had to search for the differently named makefile someone created instead of just typing "make && make install".
You could use a shell alias for this too.
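For example, something like this in ~/.bashrc would give the same convenience without making the makefile executable (the alias name is just illustrative):

alias rebuild='make -f "$HOME/bin/rebuild.mk"'

rebuild                      # same as: make -f ~/bin/rebuild.mk
rebuild someprogram CC=cc    # arguments still go straight to make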
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed filename if you don't give it one? What if python looked for Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.
For writing scripts for process automation on the Linux platform, which scripting language is better: shell script, Perl, or Python, or is there anything else? I am new to all of them, so I'm just wondering which one to go for.
The answer is: Whatever best fits the job!
My rule of thumb:
Bash - for a short script that might need a for loop to do something repetitively (see the sketch after this list).
Perl - anything to do with some kind of text processing or file processing, especially if it's a one-off. Just do a dirty, nasty Perl script and be done with it.
Python - If it's something you might want to do again or something very like it. Then at least you have a chance of being able to reuse the script.
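A minimal sketch of the kind of short, repetitive Bash job the first point has in mind (the file pattern is just an example):

#!/bin/sh
# Compress every .log file in the current directory.
for f in *.log; do
    [ -e "$f" ] || continue   # skip the literal pattern if nothing matches
    gzip "$f"
done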
Go for all three of them, start with bash/awk/sed plus fileutils (grep, find, and so on) and then move up the abstraction hierarchy with perl and python.
That way you will be able to decide for yourself which one fits your needs best. I say start with bash and friends because they are ubiquitous; some machines will not have Perl or Python installed and you'll feel helpless there, especially in traditional Unix land (i.e., not Linux).
When choosing a scripting language to help automate your linux / unix environment, the most important thing in my opinion is... your replacement :-)
By which I mean the next / other sysadmins who may have to maintain your scripts. I am currently working in an environment where the lead Unix guy is a real script head, but he has mainly restrained himself to using bash, with some Perl and Windows VBScript thrown in for good luck. At least it has forced me to brush up my Perl.
While agreeing with the other comments here, my suggestion would be to master bash: where possible, do as much as possible in bash, as most people know it and can maintain/debug it, and it will be the most portable. Used with sed and awk it is particularly powerful.
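As a hedged illustration of that last point (the /var paths are only an example), a one-liner combining all three: report the five largest directories under /var, sizes converted to MB, with the /var/ prefix stripped from the names.

du -sk /var/*/ 2>/dev/null \
  | sort -rn \
  | head -5 \
  | awk '{ printf "%8.1f MB  %s\n", $1 / 1024, $2 }' \
  | sed 's|/var/||'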
When you have that mastered, you can come back here and ask "What scripting language should I learn after bash?" :-)
JB
I use Perl for anything beyond extremely simple scripts.
I also 'use warnings', 'use strict', avoid backticks, and call system as 'system($command, @and_args)'. And because I like it to be maintainable: IPC::Run (for pipes), File::Fu (for filenames, tempfiles, etc.), YAML (for configs or misc data), and Getopt::Helpful (so I can remember what the options were).
I think it depends on how complex the tasks are you want to automate. Personally, I've always gone with shell-scripts, which enables you to call on awk, sed, grep, find, ls, cat, etc. which can be combined together to do pretty much anything you can achieve using perl or python. On the other hand, if the processes you want to automate are complex (e.g., not just a linear sequence of steps) then you'll probably find that writing the scripts in perl or python (or even ruby!) is much quicker and makes them easier to maintain.
I'd recommend bash, awk, and sed.
bash - http://tldp.org/LDP/abs/html/
awk - http://www.uga.edu/~ucns/wsg/unix/awk/
http://www.grymoire.com/Unix/Awk.html
sed - http://www.ce.berkeley.edu/~kayyum/unix_tips/sedtips.html
http://www.grymoire.com/Unix/Sed.html
Just some ideas.
Depends on the complexity and problem domain of the task at hand.
Bash scripts are quick and dirty for simple system automation tasks. For more complex things than moving files around and running commands, I'd personally say Perl is next in line as the de facto sysadmin go-to automation tool. For more focus on code reuse and readability/maintainability I'd want to step it up to Python or Ruby.
PHP can also be used to automate tasks, however it is not widely accepted for this purpose in my experience.
It really comes down to what language you are most interested in learning, most can be used for automation, in addition to many other things.
I prefer shell scripts only for very small tasks. Writing robust shell scripts requires a lot of knowledge about possible pitfalls, which you only learn by doing. But learning even the basics will increase your productivity a lot!
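One classic pitfall of the kind meant here, as a small sketch (the file name is made up): unquoted variables undergo word splitting.

#!/bin/sh
touch "My Report.txt"   # create a file whose name contains a space
f="My Report.txt"

rm $f      # wrong: word splitting turns this into: rm My Report.txt
rm "$f"    # right: removes the single file "My Report.txt"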
If I need to have complex logic, I usually use Python. By complex I mean anything that has more than two if-statements =)
Perl is okay for its original purpose, but be warned that many of the perlisms you learn are not applicable anywhere else.
Python and Ruby are roughly equivalent. I'd recommend you learn one of them well and check out a tutorial on the other. I prefer Python but it really comes down to personal preference.
To summarize: Learn basics of shell scripts. Learn at least Python or Ruby well.
If you want a minimalistic, compact and fast solution (faster than Python/Ruby), then go for the Lua scripting language :-)
However, Lua's speed and code compactness are achieved by a relatively small language core, so if you want "batteries included" (i.e., a very big "standard" library) then Lua is not for you. Otherwise, people who come from the C/C++ world really enjoy Lua's speed :-)
p.s.
Lua vs Ruby 1.9 benchmark (you can also look at Lua vs Python 3):
http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=lua&lang2=yarv
I have been getting Python recommended all the time. It's supposed to let you do anything. For small tasks I use shell scripts, though.
I would usually say the one you know best which can achieve the results you want. Like all religious wars, and after learning a large number of languages, you realise that you can do most things in most languages (Note I did say most). I use Perl. It is maybe not as up to date as Python or Ruby, but it does have massive library support from CPAN. And I have not found anything I can't do in it yet. When I do I will look at other languages to find out which one can fill that gap.
If I were starting today, maybe I would pick Python or Ruby, but I don't know enough about them to make a judgement call. Do any of your friends/colleagues know scripting languages? This could help you massively, as support when learning a new language is very important.
Good luck
Well, it's like this:
Perl is not the most user-friendly scripting language, but it has CPAN (the Comprehensive Perl Archive Network), which contains thousands of libraries that implement almost anything you may think of, and Perl is really powerful when it comes to text processing. The disadvantage would be that Perl code is kinda hard to maintain (if you don't know it very well).
Python is a scripting language that is becoming more and more popular among scripters. It doesn't have a community like CPAN (yet), but it's more readable, and it's easier to maintain. It's as fast as perl.
Ruby is the newest trend in scripting languages. Ruby is full OOP, which means that everything is an object. Its advantage is that the code is very readable, and it's pretty easy to learn, if you are a beginner. The main disadvantage is its execution speed, which kinda s*x.
That depends on which type of automation you are doing. If it is test automation, Perl is suggested, because Perl has many powerful extension modules via CPAN, an online Perl module inventory. If you only need a handy tool to process a simple source file, awk is very convenient. If you are planning to use the scripts to automate a big project, Perl is a better choice with more features.
Again, Python was designed from the start as an object-oriented language. Perl 5 has some OO features added on, but it looks to me like an awkward retrofit. Python has well-implemented OO features for multiple inheritance, polymorphism, and encapsulation. In summary, it seems to me that Python dominates Perl in most applications except for fairly short shell-script sorts of applications, and there they are roughly comparable.
If I had to pick one, it would have to be AWK. It's lightweight, has a small learning curve and has many useful functions like index and substr.
Depends on what you want to do, I regularly use all of them:
Shell for simple batching of commands with perhaps a loop or an if-statement.
Perl when I'm munging files and doing some text replacement and such things.
Python when I need more logic.
Under *nix you should use the right tool for the right job, which can be hard for the beginner since there are so many things to learn (after some 15 years as a *nix user I still find new things). My recommendation is to look at all the languages quickly to see what they can do, then start with using shell for everything, and when your scripts get clunky, move them to something else.
Just write your commands one after the other, put them in a file and run this file with
prompt> bash file
and you have your first automation. Then learn about bash variables, loops and control structures.
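For instance, such a file - call it cleanup_report, with the name and paths purely illustrative - might contain nothing more than:

echo "Temp files older than 7 days:"
find /tmp -type f -mtime +7 -print
echo "done"

Run it with bash cleanup_report and you already have a small, repeatable job.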
I second Python - powerful, simple, performant, and... actually quite fun, compared to Perl or bash. Also, if you know it, you'll find other uses; it's used in a lot of projects.
And not just as a "classic" scripting language - take, for example, the Twisted project. That's true for Perl too, I guess, but I like Python better by orders of magnitude myself.
Bottom line, though, is as has been said before: make sure you have the right tool for the job...
If you aim at having a simple script program "controlling" another (command-line, of course) program, then you should review Tcl/Tk, especially its dialect Expect - they're simple and oriented towards that goal - it's very easy to create a script that controls ftp and even does a su with them!
Awk is very nice for processing text files - not as powerful as Perl, yet much simpler and more straightforward (and without the horrible syntax).
Of course, your mileage may vary, so I guess the best answer would be to ask you: what do you want to write scripts for? And then: are you familiar with any scripting language? The answers to these questions will point you to the scripting language you should use, according to the pros/cons of each one and their main target.
On Linux? Choose your poison, basically. I like Python, others Ruby, still others Perl. Pick one and go for it. :-)
I'd say Python - it has very high readability, it is simple (no curly brackets, keywords as close to English as possible, etc.) and you can do almost everything in it, from simple to very complex things. It is also popular and fun to code.
This may sound a little odd: I had been using bash for over 10 years. Then I started using PHP5, and it was difficult at first, but now I have a much better reusable code base.
I wouldn't recommend it as a starting point though!