Validate bash syntax - linux

I'm looking for a way to validate the syntax of a bash script without executing it.
bash -n only seems to validate basic bash syntax; what I am missing is:
function name validation
uninitialized parameter validation
Any ideas on how to achieve that?
I am also missing the validation of the number of parameters a function takes,
but that sounds like a very hard thing to do in theory, in bash.
In other words, what I'd like to do is pretty much take a bash script and "compile" it the way I would compile a C++ program.

bash -n is certainly the fundamental way to do that.
However, there is also a site that will do deeper validation for you: http://www.shellcheck.net/. There is a link on the site to the source code if you want to run it locally.
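For example, a minimal pre-commit style check combining both (a sketch; it assumes shellcheck is installed locally and that the script to check is called myscript.sh):
#!/bin/bash
# Run bash's built-in syntax check first, then shellcheck for the deeper
# static analysis (unassigned variables, typos in names, and so on).
script=myscript.sh
bash -n "$script" || exit 1
shellcheck "$script"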

Related

How to validate whether a Groovy file has correct syntax without running it?

I'd like to run a shell script that would tell me if a Groovy file is syntactically correct without executing it. The code wouldn't be run, just validated.
I see that there are similar questions, but I would like a simple command-line command. I thought there would be a parameter for the compiler, but couldn't find it.
I didn't manage to get a direct answer. My problem was that I was writing a Jenkins pipeline script and I wanted to verify it before commit and push. This way I would catch simple errors in less time.
I've solved my problem using a VSCode extension to validate the file before committing. No more typos committed to my version control.
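For reference, the Groovy compiler itself can serve as a pure syntax check from the command line. A hedged sketch, assuming the Groovy SDK is installed, groovyc is on PATH, and the pipeline script is saved as Jenkinsfile.groovy (the file name is only an example):
#!/bin/bash
# Compile to a throwaway directory purely to surface syntax errors;
# the generated .class files are discarded afterwards.
tmpdir=$(mktemp -d)
groovyc -d "$tmpdir" Jenkinsfile.groovy && echo "syntax OK"
rm -rf "$tmpdir"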

Trying to install program, keep getting problems

I'm currently an intern who is out of their depth here; this is really my first time using Linux, and everything I know comes from basic-level tutorials.
I was asked by my boss today to install a program, and I'm following this tutorial on it, but I am stuck at the Path part of it.
Solved
Every time I try to do this:
~$ export DTITK_ROOT=${autofs/cluster/name/MyUsername/more/path/DTI-TK/dtitk-2.3.1-Linux-x86-64}/dtitk
Like it told me to.
I get:
bash: DTITK_ROOT=${autofs/cluster/name/MyUsername/more/path/DTI-TK/dtitk-2.3.1-Linux-x86-64}/dtitk: bad substitution
Thank you user Muon
In bash, the ${} syntax is used to substitute in the value of a previously defined variable, and you've enclosed an explicitly typed-out path within it, so bash tries to interpret that path as a parameter expansion, can't, and reports "bad substitution". It should work if you run the command without the substitution:
$ export DTITK_ROOT=/path/to/dtitk
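To make the contrast concrete (the /path/to/dtitk below is the placeholder from the answer, not a real install location, and the bin/ subdirectory is only a guess at the layout):
# A literal, typed-out path is assigned directly; no braces needed:
export DTITK_ROOT=/path/to/dtitk
# ${ } is for reading back a variable that has already been set:
echo "${DTITK_ROOT}"                     # prints /path/to/dtitk
export PATH="${DTITK_ROOT}/bin:${PATH}"  # hypothetical bin/ subdirectory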

Bash script type inputs when prompted

EDIT: I'm re-writing this because the first time was a bit unclear.
Let's say I have a program (an executable) such that when I run it, it prompts me to enter an input.
For example, I execute ./myProgram
and the program prompts: Please enter your username:
Here, I would type in my username.
Now, how would I write a bash script so that after I start the above program, I can enter inputs to it?
Something along the lines of this:
#!/bin/bash
path/to/myProgram
# And here I would enter the commands, such as providing my username
Thanks
Reading values interactively is rather uncommon in *nix scripts, and is frowned upon by those who want to do exactly what you're trying to do. The standard way of doing this would be changing myProgram to accept arguments; at that point it's trivial to do this.
If you really need to use this pattern you need to use some tool like expect, as pointed out by @EricRenouf.
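A minimal sketch of that approach (it assumes expect is installed and that the prompt text is exactly the one from the question; adjust both as needed):
#!/bin/bash
# Drive the interactive program with expect: wait for the username prompt,
# answer it, then let the program run to completion.
expect <<'END_EXPECT'
spawn path/to/myProgram
expect "Please enter your username:"
send "myusername\r"
expect eof
END_EXPECT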
If myProgram reads from standard input, you can use a here-document:
path/to/myProgram <<\END
username
more input if needed
END

Redirect program output without changing directory

Problem
I'm writing a set of scripts to help with automated batch job execution on a cluster.
The specific thing I have is a $OUTPUT_DIR, and an arbitrary $COMMAND.
I would like to execute the $COMMAND such that its output ends up in $OUTPUT_DIR.
For example, if COMMAND='cp ./foo ./bar; mv ./bar ./baz', I would like to run it such that the end result is equivalent to cp ./foo ./$OUTPUT_DIR/baz.
Ideally, the solution would look something like eval PWD="./$OUTPUT_DIR" $COMMAND, but that doesn't work.
Known solutions
[And their problems]
Editing $COMMAND: In most cases the command will be a script, or a compiled C or FORTRAN executable. Changing the internals of these isn't an option.
unionfs, aufs, etc.: While this is basically perfect, users running this won't have root, and creating thousands of arbitrary mounts seems like a questionable choice.
copying / hard/soft links: This might be the solution I will have to use: some variety of actually duplicating the entire contents of ./ into ./$OUTPUT_DIR
cd $OUTPUT_DIR; ../$COMMAND : Fails if $COMMAND ever reads files
pipes: only works if $COMMAND doesn't directly work with files, which it usually does
Is there another solution that I'm missing, or is this request actually impossible?
[EDIT:] Chosen Solution
I'm going to go with something where each object in the directory is symbolic-linked into the output directory, and the command is then run from there.
This has the downside of creating a lot of symbolic links, but it shouldn't be too bad.
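A rough sketch of that idea (minimal error handling; it assumes $OUTPUT_DIR is a subdirectory of the current directory, as in the question):
#!/bin/bash
# Link every entry of the working directory into $OUTPUT_DIR, skipping the
# output directory itself, then run $COMMAND from there in a subshell so the
# caller's working directory is left untouched.
mkdir -p "$OUTPUT_DIR"
for f in ./*; do
    [ "$f" -ef "$OUTPUT_DIR" ] && continue
    ln -sf "$(readlink -f "$f")" "$OUTPUT_DIR/"
done
( cd "$OUTPUT_DIR" && eval "$COMMAND" )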
You can't solve this without making some assumptions about the interface of $COMMAND. There is no single definition of what "output ends up in $OUTPUT_DIR" means: one program may write some files, another might just print something to stdout, and yet another might send data over the network or display something in a GUI, and there isn't an obvious way of mapping all of these to "output goes to $OUTPUT_DIR".
So, you need to invent some assumptions and require any $COMMAND implementation to follow them. Then it may get as simple as requiring that the command accept a parameter such as --target=<DIR>. If your command is an existing utility, you would have to create a wrapper script around it to translate that parameter into whatever the tool actually accepts. cp, mv and a few other coreutils already accept -t/--target-directory, so that may be a good starting point.
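A hedged sketch of such a wrapper around cp (the wrapper itself and the --target spelling are illustrative, not a fixed interface):
#!/bin/bash
# Accept --target=<DIR> and translate it into cp's -t option,
# passing every other argument through unchanged.
target=.
args=()
for a in "$@"; do
    case "$a" in
        --target=*) target=${a#--target=} ;;
        *)          args+=("$a") ;;
    esac
done
exec cp -t "$target" "${args[@]}"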
You cannot set the output directory, you can only set the working directory.
The problem is, once you change the working directory, other relative references become invalid. For example, in your command:
cp ./foo ./bar
./foo would now be looked up inside $OUTPUT_DIR rather than in the original directory. If you have a specific command, there are workarounds (a wrapper script that rewrites its arguments, prepending the directory to specific arguments), but in general this is not possible.

change shell directory from a script?

I want to make a script (called to) that makes it easier for me to enter folders.
So e.g. if I type "to apache" I want it to change the current directory to /etc/apache2.
However, when I use the cd command inside the script, it seems like it changes the path WITHIN the script, so the path in the shell has not changed.
How could I make this work?
Use an alias or function, or source the script instead of executing it.
BASH FAQ entry #60.
Use a function:
to_apache() {
    cd /etc/apache2
}
Put it in a file, e.g. mylibrary.sh, and whenever you want to use it, source the file, e.g.:
#!/bin/bash
source /path/mylibrary.sh
to_apache
As Ignacio said, make it into a function (or perhaps an alias).
The way I tend to do it is have a shell script that creates the function - and the script and the function have the same name. Then once at some point in time, I will source the script ('. funcname') and thereafter I can simply use the function.
I tend to prefer functions to aliases; it is easier to manage arguments etc.
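For the question's use case, a function along these lines (a sketch; the destinations are whatever you care about) can live in ~/.bashrc so it runs in the current shell and the cd sticks:
to() {
    case "$1" in
        apache) cd /etc/apache2 ;;
        logs)   cd /var/log ;;    # hypothetical extra destination
        *)      echo "to: unknown destination: $1" >&2; return 1 ;;
    esac
}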
Also, for the specific case of changing directories, I use CDPATH. The trick with using CDPATH is to have the empty entry at the start:
export CDPATH=:/work4/jleffler:/u/jleffler:/work4/jleffler/src:\
/work4/jleffler/src/perl:/work4/jleffler/src/sqltools:/work4/jleffler/lib:\
/work4/jleffler/doc:/u/jleffler/mail:/work4/jleffler/work:/work4/jleffler/ids
On this machine, my main home directory is /work4/jleffler. I can get to most of the relevant sub-directories in one go with 'cd whatever'.
If you don't put the empty entry (or an explicit '.') first, then you can't 'cd' into a sub-directory of the current directory, which is disconcerting at least.
Ignacio Vazquez-Abrams gave a link to what probably answers the question, although I didn't really follow it. The short answer is to use either "source" or a single dot before the command, e.g.:
. to apache
But I found there are downsides to this if you have a more complicated script. It seems that the original script filename variable ($0) is lost. I see "-bash" instead, so your script can't echo error text that would include the full filename.
Also, you can't use the "exit" command, or your shell will exit (especially disconcerting over ssh).
Also, basename gives an error if you use it on $0 there.
So it seems to me that a function might be the only way to get around some of these problems, like if you are passing parameters.
