Say, for example, a script begins like this:
#!/bin/bash
#$ -S /bin/bash
#$ -l hostname=qn*
and then further down the page the actual script comes into play. My question is: what does the "#$" symbol mean or do?
Are you by any chance running on a batch cluster, like Sun Grid Engine? Such lines can have special meaning in scripts intended to run as batch jobs.
https://www.wiki.ed.ac.uk/display/EaStCHEMresearchwik/How+to+write+a+SGE+job+submission+script
Update:
the above link is blocked when followed from stackoverflow.com (it works from google.com)
Alternatives:
http://www.cbi.utsa.edu/sge_tutorial
http://web.njit.edu/all_topics/HPC/basement/sge/SGE.html
Lines beginning with # are comments. The first line may begin with #!, but it's still a comment to bash and is merely used to indicate the interpreter to use for the file. All other lines beginning with # are absolutely unimportant to bash, whether the next character is $ or anything else.
They seem to be parameters for the Oracle (formerly Sun) Grid Engine; see this SO question or this one.
Such scripts make heavy use of this kind of comment.
Those lines are important for queue systems such as Slurm's sbatch.
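For illustration, here is a minimal SGE-style submission script; qsub reads the #$ lines as directives, while bash treats them as plain comments (the job name, run-time request, and output paths below are just placeholders):

#!/bin/bash
#$ -S /bin/bash          # shell to run the job under
#$ -N example_job        # job name (placeholder)
#$ -cwd                  # run the job from the current working directory
#$ -l h_rt=00:10:00      # requested run time (placeholder)
#$ -o example_job.out    # file for the job's stdout (placeholder)
#$ -e example_job.err    # file for the job's stderr (placeholder)

echo "Running on $(hostname)"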
Here's the script:
node /app/ganache-core.docker.cli.js — quiet \ — account=”0x873c254263b17925b686f971d7724267710895f1585bb0533db8e693a2af32ff,100000000000000000000" \ — account=”0x8c0ba8fece2e596a9acfc56c6c1bf57b6892df2cf136256dfcb49f6188d67940,100000000000000000000"
I've read What's the magic of "-" (a dash) in command-line parameters?. And I took away that it CAN mean standard input... if the authors of the bash program define it as such.
However, here (link to ganache-core.docker.cli.js github file), I cannot find how or where the author of ganache-core.docker.cli.js would have defined the dash ("-") as standard input. Can someone point that out as well?
Edit: I am looking for confirmation that the dashes do mean standard input for CLI args, but even more so to understand WHY they should definitively be interpreted as stdin when, according to the linked question above, it's only a convention.
Edit 2: I suspect the CLI arg parsing library is yargs.
This command line is just badly formatted. You're reading into something that isn't there. Some blog software thought it was a smart idea to auto-reformat the article so that double hyphens became long dashes, quotes became "smart" quotes, and so on. In the end, a space also ended up after the dash, before the next parameter.
For example, let's look at this:
node /app/ganache-core.docker.cli.js — quiet
Even if we assume that long dash started life as a double hyphen --, we know it's not supposed to have a space after it. It's supposed to be --quiet. And, if you have any doubt about this, you can read in the source code where this option is defined:
.option('q', {
group: 'Other:',
alias: 'quiet',
describe: 'Run ganache quietly (no logs)',
type: 'boolean',
default: false
})
The same is true for --account.
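Restoring the double hyphens and straight quotes, the intended command line presumably looks like this:

node /app/ganache-core.docker.cli.js --quiet \
  --account="0x873c254263b17925b686f971d7724267710895f1585bb0533db8e693a2af32ff,100000000000000000000" \
  --account="0x8c0ba8fece2e596a9acfc56c6c1bf57b6892df2cf136256dfcb49f6188d67940,100000000000000000000"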
And I took away that it CAN mean standard input... if the authors of the bash program define it as such.
Yes, that's correct. I don't know what this software does, but if it's reading from STDIN, it's not because you told it to on the command line. It's because that's what it does.
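For comparison, this is what the "-" convention looks like in tools that do implement it (standard GNU coreutils behavior, nothing ganache-specific):

# a lone '-' where a filename is expected means "read from standard input"
echo hello | cat -
printf '3\n1\n2\n' | sort -n -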
I have to perform some necessary steps before installing my package, such as taking a backup of the previous datastore snapshot.
For that purpose I'm using a %pre script as follows.
%pre
#!/bin/sh
--------
--------
stamp=`date +%Y%m%d%H%M%S`
echo ${stamp}
-------------
-------------
The output is as follows: 20161103123325OURCE
It is printing some extra characters along with the date; "OURCE" is not present anywhere in my spec file.
The same script works perfectly as a standalone script. The platform is CentOS 7.
rpmbuild knows a whole set of macros. Apparently a certain macro is defined as:
%S = %SOURCE
I didn't manage to find an option that tells rpmbuild not to expand that macro, but there is a way to trick it into not doing so. I know this is a bit of a workaround, but it's the best I could come up with:
stamp=$(date '+%Y%m%d%H%M%''S')
Note that I replaced the backticks with the recommended $() invocation.
I just inserted two single quotes (an empty string) to split %S into two parts; this prevents rpmbuild from recognizing and replacing the macro.
If you escape each percent sign '%' in your date command with a second percent symbol ('%%'), as described at the link below, that should stop rpmbuild from expanding %S to "OURCE" in your output.
stamp=`date +%%Y%%m%%d%%H%%M%%S`
See section "Writing a Macro" here
http://rpm.org/user_doc/macros.html
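Putting that together, a minimal sketch of the %pre section with the escaped format string might look like this (the backup steps themselves are elided, as in the question):

%pre
#!/bin/sh
# %% survives rpmbuild's macro expansion and reaches the shell as a literal %
stamp=$(date '+%%Y%%m%%d%%H%%M%%S')
echo "${stamp}"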
Until recently, I was under the impression that, by convention, all Linux command options were required to be prefixed by a hyphen (-). So, for example, the command ls -l executes the ls command with the l option (here we can see that the l option is prefixed by a hyphen).
Life was good until I got to the chapter of my Linux for beginners book that explained the ps command. There I learned that I could write something like ps u U xyz where, as far as I can tell, the u and U are options that are not required to be prefixed by a hyphen. Normally, I would have expected to have to write that same command as something like ps -uU xyz to force the usage of a hyphen.
I realize that this is probably a stupid question but I was wondering if there is a particular reason as to why the ps command does not follow what I thought was the standard way of specifying command options (prefixing them with hyphens). Why the variation? Is there a particular meaning to specifying hyphen-less options like that?
There are a handful of old programs on Unix that were written when the conventions were not as widely adopted, and ps is one of them. Another example is tar, although it has since been updated to allow options both with and without the - prefix.
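As an illustration, here are the common styles side by side (this uses the procps ps found on most Linux systems; exact output and accepted options vary between implementations):

# BSD-style options: no leading dash
ps aux
# UNIX/POSIX-style options: single leading dash
ps -ef
# GNU long options: double dash (also accepted by procps ps)
ps --forest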
IMO the best practice concerning hyphens is to use them as the default. More often than not, commands accept a hyphen prefix for most or all of their available flags/options. Happy to be corrected if I am wrong in this instance; I am still new to this myself! :)
This is my first question on Stack Overflow. I am pretty sure it has already been answered (it is a pretty dumb question, I think, as I just started to learn Linux scripting), but I didn't manage to find an answer yet.
Sorry for that.
Here is my problem: I am trying to use, in a shell script, a number given in a properties file, and I get an error because the number is not taken "as is".
I have a prop.properties file:
sleepTimeBeforeLoop=10
and a test.sh shell script:
#!/bin/sh
. prop.properties
echo "time="$sleepTimeBeforeLoop
sleep $sleepTimeBeforeLoop
When I launch test.sh I get the following output:
time=10
sleep: invalid time interval `10\r'
Try `sleep --help' for more information.
What I understand is that my properties file was correctly sourced, but that the property was read as a string with some special character at the end of the line, or something like that.
How can I get only the "10" value?
Thank you in advance for your answer.
This is a line-endings issue: make sure your files end their lines with \n only (the Unix way).
E.g. write them in a Unix text editor, or use a Windows editor capable of saving in "Unix style".
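A few ways to fix the file after the fact (dos2unix may need to be installed separately; the tr and GNU sed variants use only common tools):

# convert the properties file to Unix line endings
dos2unix prop.properties

# or strip carriage returns with tr
tr -d '\r' < prop.properties > prop.properties.unix && mv prop.properties.unix prop.properties

# or with GNU sed, in place
sed -i 's/\r$//' prop.properties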
Edit: This question was originally bash-specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell, that would be useful to know as well!
Okay, top-level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and allow it to be handled by another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them work well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory-data-analysis-style sessions after the fact. With richer information like this, it would be possible to generate decent reporting on what happened, squashing multiple invocations of one command into one where the first few gave non-zero exits, asking where files came from by searching for everything that touched the file, and so on.
Run this bash script:
#!/bin/bash
# Read each line the user types (-e enables readline editing, -r passes
# backslashes through untouched), then hand it to the wrapper.
while read -e -r line
do
    wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting timings, so maybe instead use time eval "$line". You wanted to capture the exit status, so this should be followed by the line save=$?. And you wanted to capture the first few lines of stdout, so some redirecting is in order. And so on.
MORE: Jo So suggests that handling for multi-line bash commands be included. In its simplest form, if eval returns with "syntax error: unexpected end of file", then you want to prompt for another line of input before proceeding. Better yet, to check for a complete bash command, run bash -n <<<"$line" before you do the eval. If bash -n reports the end-of-file error, then prompt for more input to append to "$line". And so on.
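A minimal sketch of such a wrapper along those lines (the log file path and record format below are illustrative choices, not part of the answer):

#!/bin/bash
# Hypothetical wrapper: run the command, record start/end time and exit status.
wrapper() {
    local cmd=$1 start end status
    start=$(date +%s)
    eval "$cmd"
    status=$?
    end=$(date +%s)
    # Append a simple tab-separated record; a real setup might send this to a daemon instead.
    printf '%s\t%s\t%s\t%s\n' "$start" "$end" "$status" "$cmd" >> "$HOME/command-log.tsv"
    return "$status"
}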
binfmt_misc comes to mind. The Linux kernel has a facility that allows arbitrary executable file formats to be recognized and handed off to a user-space application.
You could use this capability to register your wrapper, except that instead of handling one particular executable format, it would have to handle all executables.