nedit command alias causes functional breakage - Linux

I am using nedit to edit source code on my workstation, but it starts with the following errors:
Cannot convert string "-*-helvetica-medium-r-normal-*-*-120-*-*-*-iso8859-1" to type FontStruct
Cannot convert string "-*-helvetica-bold-r-normal-*-*-120-*-*-*-iso8859-1" to type FontStruct
Cannot convert string "-*-helvetica-medium-o-normal-*-*-120-*-*-*-iso8859-1" to type FontStruct
Cannot convert string "-*-courier-medium-r-normal-*-*-120-*-*-*-iso8859-1" to type FontStruct
Cannot convert string "-*-courier-bold-r-normal-*-*-120-*-*-*-iso8859-1" to type FontStruct
Cannot convert string "-*-courier-medium-o-normal-*-*-120-*-*-*-iso8859-1" to type FontStruct
Not knowing how to fix these errors, I used an alias to start nedit:
ne='nedit &>/dev/null &'
This suppresses the warning messages written to stdout and stderr, and lets nedit run in the background, so that I can type the next command in the same terminal window.
However, if I use this alias to open a file directly, I get an error message like:
[qxu#merlin:/home/qxu/work/src]# ne abc.c
[4] 24969304
-bash: ./abc.c: The file access permissions do not allow the specified action.
Yet nedit abc.c works, though with the font error messages above.
Is there a way for me to use the above alias and give it a filename to open directly?

Use a function instead of an alias. When you have to handle arguments, a function is simpler to use. Place the following function in your .bashrc file:
function ne() {
  command nedit "$@" &>/dev/null &
}
In this example, when you run ne file.txt, you call this function, which executes nedit with all the arguments you've passed ("$@").
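A quick usage sketch, assuming the function above has been added to your .bashrc and the file re-sourced (or a new shell started):
$ ne abc.c    # opens abc.c in nedit; warnings are silenced and the prompt returns immediately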
Take a look at this explanation of when you should use an alias or a function; it is very good.

The problem with your alias is that the & is in the wrong place. When the alias is expanded, you get
nedit &>/dev/null & abc.c
& is a command separator, so this is equivalent to
nedit &>/dev/null & # launch nedit in the background without a file to edit
abc.c # execute abc.c
and apparently "abc.c" does not have execute permissions.
As Victor said, use a function.
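You can watch the split happen; a minimal interactive sketch, assuming bash (the alias name ne is taken from the question):
$ alias ne='nedit &>/dev/null &'
$ type ne
ne is aliased to `nedit &>/dev/null &'
$ ne abc.c    # runs "nedit &>/dev/null" in the background, then tries to execute "abc.c" as a command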

Related

How to get the complete calling command of a BASH script from inside the script (not just the arguments)

I have a BASH script that has a long set of arguments and two ways of calling it:
my_script --option1 value --option2 value ... etc
or
my_script val1 val2 val3 ..... valn
This script in turn compiles and runs a large FORTRAN code suite that eventually produces a netcdf file as output. I already have all the metadata in the netcdf output's global attributes, but it would be really nice to also include the full run command used to create that experiment. Then another user who receives the netcdf file could simply re-enter the run command to rerun the experiment, without having to piece together all the options.
So that is a long way of saying, in my BASH script, how do I get the last command entered from the parent shell and put it in a variable? i.e. the script is asking "how was I called?"
I could try to piece it together from the option list, but the very long option list and two interface methods would make this long and arduous, and I am sure there is a simple way.
I found this helpful page:
BASH: echoing the last command run
but this only seems to work to get the last command executed within the script itself. The asker also refers to use of history, but the answers seem to imply that the history will only contain the command after the programme has completed.
Many thanks if any of you have any idea.
You can try the following:
myInvocation="$(printf %q "$BASH_SOURCE")$((($#)) && printf ' %q' "$@")"
$BASH_SOURCE refers to the running script (as invoked), and $@ is the array of arguments; (($#)) && ensures that the following printf command is only executed if at least one argument was passed; printf %q is explained below.
While this won't always be a verbatim copy of your command line, it'll be equivalent - the string you get is reusable as a shell command.
chepner points out in a comment that this approach will only capture what the original arguments were ultimately expanded to:
For instance, if the original command was my_script $USER "$(date +%s)", $myInvocation will not reflect these arguments as-is, but will rather contain what the shell expanded them to; e.g., my_script jdoe 1460644812
chepner also points out that getting the actual raw command line as received by the parent process is (next to) impossible. Do tell me if you know of a way.
However, if you're prepared to ask users to do extra work when invoking your script or you can get them to invoke your script through an alias you define - which is obviously tricky - there is a solution; see bottom.
Note that use of printf %q is crucial to preserving the boundaries between arguments - if your original arguments had embedded spaces, something like $0 $* would result in a different command.
printf %q also protects against other shell metacharacters (e.g., |) embedded in arguments.
printf %q quotes the given argument for reuse as a single argument in a shell command, applying the necessary quoting; e.g.:
$ printf %q 'a |b'
a\ \|b
a\ \|b is equivalent to the single-quoted string 'a |b' from the shell's perspective, but this example shows that the resulting representation is not necessarily the same as the input representation.
Incidentally, ksh and zsh also support printf %q, and ksh actually outputs 'a |b' in this case.
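A small bash sketch of why the quoting matters (the argument values are made up for illustration):
# Simulate a script that was passed two tricky arguments:
set -- 'file with spaces.txt' 'a|b'
echo "$0 $*"    # boundaries lost: the space and | end up unprotected in the string
printf %q "$0"; (($#)) && printf ' %q' "$@"; echo
# prints e.g.: bash file\ with\ spaces.txt a\|b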
If you're prepared to modify how your script is invoked, you can pass $BASH_COMMAND as an extra argument: $BASH_COMMAND contains the raw[1] command line of the currently executing command.
For simplicity of processing inside the script, pass it as the first argument (note that the double quotes are required to preserve the value as a single argument):
my_script "$BASH_COMMAND" --option1 value --option2
Inside your script:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
myInvocation=$1  # Save the command line in a variable...
shift            # ... and remove it from "$@".
# Now process "$@", as you normally would.
Unfortunately, there are only two options when it comes to ensuring that your script is invoked this way, and they're both suboptimal:
The end user has to invoke the script this way - which is obviously tricky and fragile (you could, however, check in your script whether the first argument contains the script name and error out if not).
Alternatively, provide an alias that wraps the passing of $BASH_COMMAND as follows:
alias my_script='/path/to/my_script "$BASH_COMMAND"'
The tricky part is that this alias must be defined in all end users' shell initialization files to ensure that it's available.
Also, inside your script, you'd have to do extra work to re-transform the alias-expanded version of the command line into its aliased form:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
# Here we also re-transform the alias-expanded command line into
# its original aliased form, by replacing everything up to and including
# "$BASH_COMMAND" with the alias name.
myInvocation=$(sed 's/^.* "\$BASH_COMMAND"/my_script/' <<<"$1")
shift  # Remove the first argument from "$@".
# Now process "$@", as you normally would.
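To make the mechanics concrete, a hypothetical walkthrough (the path and option names are illustrative only):
# Suppose a user, with the alias above defined, types:
#   my_script --option1 value
# Alias expansion turns this into the command line:
#   /path/to/my_script "$BASH_COMMAND" --option1 value
# $BASH_COMMAND holds exactly that raw line, so inside the script:
#   $1 = '/path/to/my_script "$BASH_COMMAND" --option1 value'
# and the sed call above rewrites it to:
#   my_script --option1 value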
Sadly, wrapping the invocation via a script or function is not an option, because $BASH_COMMAND truly only ever reports the current command's command line, which in the case of a script or function wrapper would be the line inside that wrapper.
[1] The only things that get expanded are aliases, so if you invoked your script via an alias, you'll still see the underlying script in $BASH_COMMAND, but that's generally desirable, given that aliases are user-specific.
All other arguments and even input/output redirections, including process substitutions <(...), are reflected as-is.
"$0" contains the script's name, "$@" contains the parameters.
Do you mean something like echo $0 $*?

'less' the file specified by the output of 'which'

The command 'which' shows the path to a command.
The command 'less' opens a file.
How can I 'less' the file that 'which' outputs?
I don't want to use two commands like below to do it.
=>which script
/path/to/script/file
=>less /path/to/script/file
This is a use case for command substitution:
less -- "$(which commandname)"
That said, if your shell is bash, consider using type -P instead, which (unlike the external command which) is built into the shell:
less -- "$(type -P commandname)"
Note the quotes: These are important for reliable operation. Without them, the command may not work correctly if the filename contains characters found in IFS (by default, whitespace) or characters that can be evaluated as a glob expression.
The double dashes are likewise there for correctness: Any argument after them is treated as positional (as per the POSIX Utility Syntax Guidelines), so even if a filename starting with a dash were returned (however unlikely that may be), less would treat it as a filename rather than as the beginning of a sequence of options or flags.
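For instance (the filename is made up for illustration):
less -e       # parsed as the -e option, not as a file
less -- -e    # pages the file literally named "-e"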
You may also wish to consider honoring the user's pager selection via the environment variable $PAGER, and using type without -P to look for aliases, shell functions and builtins:
cmdsource() {
  local sourcefile
  if sourcefile="$(type -P -- "$1")"; then
    "${PAGER:-less}" -- "$sourcefile"
  else
    echo "Unable to find source for $1" >&2
    echo "...checking for a shell builtin:" >&2
    type -- "$1"
  fi
}
This defines a function you can run:
cmdsource commandname
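The difference between type -P and plain type is easy to see; for example (the resolved path may differ on your system):
$ type -P printf
/usr/bin/printf
$ type printf
printf is a shell builtin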
You should be able to just pipe it over; try this:
which script | less
Note, though, that this pages the path printed by which, not the contents of the file itself; to page the file, use command substitution as shown above.

Executing a bash script from a Perl program

I'm trying to write a Perl program which will execute a bash script. The Perl script looks like this
#!/usr/bin/perl
use diagnostics;
use warnings;
require 'userlib.pl';
use CGI qw(:standard);
ReadParse();
my $q = new CGI;
my $dir = $q->param('X');
my $s = $q->param('Y');
ui_print_header(undef, $text{'edit_title'}.$dir, "");
print $dir."<br>";
print $s."<br>";
print "Under Construction <br>";
use Cwd;
my $pwd = cwd();
my $directory = "/Logs/".$dir."/logmanager/".$s;
my $command = $pwd."/script ".$directory."/".$s.".tar";
print $command."<br>";
print $pwd."<br>";
chdir($directory);
my $pwd1 = cwd();
print $pwd1."<br>";
system($command, $directory) or die "Cannot open Dir: $!";
The script fails with the following error:
Can't exec "/usr/libexec/webmin/foobar/script
/path/filename.tar": No such file or directory at /usr/libexec/webmin/foobar/program.cgi line 23 (#3)
(W exec) A system(), exec(), or piped open call could not execute the
named program for the indicated reason. Typical reasons include: the
permissions were wrong on the file, the file wasn't found in
$ENV{PATH}, the executable in question was compiled for another
architecture, or the #! line in a script points to an interpreter that
can't be run for similar reasons. (Or maybe your system doesn't support #! at all.)
I've checked that the permissions are correct, the tar file I'm passing to my bash script exists, and also tried from the command line to run the same command I'm trying to run from the Perl script ( /usr/libexec/webmin/foobar/script /path/filename.tar ) and it works properly.
In Perl, calling system with one argument (in scalar context) and calling it with several scalar arguments (in list context) do different things.
In scalar context, calling
system($command)
will start an external shell and execute $command in it. If the string in $command has arguments, they will be passed to the call, too. So for example
$command = "ls /";
system($command);
will evaluate to
sh -c "ls /"
where the shell is given the entire string, i.e. the command with all arguments. Also, the $command will run with all the normal environment variables set. This can be a security issue, see here and here for a few examples why.
On the other hand, if you call system with an array (in list context), Perl will not call a shell and give it the $command as argument, but rather try to execute the first element of the array directly and give it the other arguments as parameters. So
$command = "ls";
$directory = "/";
system($command, $directory);
will call ls directly, without spawning a shell in between.
Back to your question: your code says
my $command = $pwd."/script ".$directory."/".$s.".tar";
system($command, $directory) or die "Cannot open Dir: $!";
Note that $command here is something like /path/to/script /path/to/foo.tar, with the argument already being part of the string. If you call this in scalar context
system($command)
all will work fine, because
sh -c "/path/to/script /path/to/foo.tar"
will execute script with foo.tar as argument. But if you call it in list context, it will try to locate an executable named /path/to/script /path/to/foo.tar, and this will fail.
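You can reproduce the same failure mode in the shell itself: quoting a whole command string makes the system look for a single executable whose name contains the space (the path here is just an example):
$ "/bin/ls /"
bash: /bin/ls /: No such file or directory
$ /bin/ls /
bin  boot  dev  etc  home  ...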
I found the problem.
I changed the system call, removing the second parameter, and now it's working:
system($command) or die "Cannot open Dir: $!";
In fairness, I did not understand what was wrong with the first example, but now it works fine; if anyone can explain, it would be interesting to understand.
There are multiple ways to execute bash commands/scripts in Perl:
system
backquotes (``)
exec

How to prevent execution of command in ZSH?

I wrote a hook for the command line:
# Transforms command 'ls?' to 'man ls'
function question_to_man() {
  if [[ $2 =~ '^\w+\?$' ]]; then
    man ${2[0,-2]}
  fi
}
autoload -Uz add-zsh-hook
add-zsh-hook preexec question_to_man
But when I do:
> ls?
After exiting from man I get:
> zsh: no matches found: ls?
How can I get rid of the message about the wrong command?
? is special to zsh: it is the wildcard for a single character. That means that if you type ls?, zsh tries to find matching file names in the current directory (any three-letter name starting with "ls").
There are two ways to work around that:
You can make "?" "unspecial" by quoting it: ls\?, 'ls?' or "ls?".
You can make zsh handle the cases where it does not match better:
The default behaviour if no match can be found is to print an error. This can be changed by disabling the NOMATCH option (also NULL_GLOB must not be set):
setopt NO_NOMATCH
setopt NO_NULL_GLOB
This will leave the word untouched, if there is no matching file.
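A minimal zsh illustration, assuming no file in the current directory matches ls?:
% setopt NO_NOMATCH
% echo ls?
ls?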
Caution: In the (maybe unlikely) case that there is a file with a matching name, zsh will try to execute a command with the name of the first matching file. That is, if there is a file named "lsx", then ls? will be replaced by lsx and zsh will try to run it. This may or may not fail, but will most likely not have the desired effect.
Both methods have their pros and cons: 1. is probably not exactly what you are looking for, and 2. does not work in every case and also changes your shell's behaviour.
Also (as @chepner noted in his comment), preexec runs in addition to, not instead of, a command. That means you may get the help for ls, but zsh will still try to run ls? or even lsx (or another matching name).
To avoid that, I would suggest defining a command_not_found_handler function instead of preexec. From the zsh manual:
If no external command is found but a function command_not_found_handler exists the shell executes this function with all command line arguments. The function should return status zero if it successfully handled the command, or non-zero status if it failed. In the latter case the standard handling is applied: ‘command not found’ is printed to standard error and the shell exits with status 127. Note that the handler is executed in a subshell forked to execute an external command, hence changes to directories, shell parameters, etc. have no effect on the main shell.
So this should do the trick:
command_not_found_handler () {
  if [[ $1 =~ '\?$' ]]; then
    man ${1%\?}
    return 0
  else
    return 1
  fi
}
If you have a lot of matching file names but seldom mistype commands (the usual reason for "command not found" errors), you might want to consider using this instead:
command_not_found_handler () {
  man ${1%?}
}
This does not check for "?" at the end, but just cuts away the last character (note the missing "\" in ${1%?}) and tries to run man on the rest. So even if a file name matches, man will be run, unless there is indeed a command with the same name as the matched file.
Note: This will interfere with other tools using command_not_found_handler for example the command-not-found tool from Ubuntu (if enabled for zsh).
That all being said, zsh has a widget called run-help which can be bound to a key (in Emacs mode it is bound to Alt+H by default) and then runs man for the current command.
The main advantages of using run-help over the above are:
You can call it any time while typing a longer command, as long as the command name is complete.
After you leave the manpage, the command is still there unchanged, so you can continue writing on it.
You can even bind it to Alt+? to make it more similar: bindkey '^[?' run-help

multiline contents of an IO handle in Haskell display nothing

I have been experimenting with Haskell. I am trying to write a web crawler, and I need to use an external curl binary (due to some proxy settings, curl needs some special arguments which seem to be impossible or hard to set inside the Haskell code, so I'd rather just pass them as command-line options; but that is another story...)
In the code at the bottom, if I change the marked line to use curl instead of curl --help, the output renders properly and gives:
"curl: try 'curl --help' or 'curl --manual' for more information
"
otherwise the string is empty, as the curl --help response is multiline.
I suspect that in Haskell the buffer is cleared with every new line. (The same goes for other simple shell commands, like ls versus ls -l, etc.)
How do I fix it?
The code:
import System.Process
import System.IO

main = do
  let sp = (proc "curl --help" []){std_out = CreatePipe} -- *** THIS LINE ***
  (_, Just out_h, _, _) <- createProcess sp
  out <- hGetContents out_h
  print out
proc takes as its first argument the name of the executable, not a shell command. That is, when you use proc "foo bar" you are not referring to a foo executable, but to an executable named exactly foo bar, with the space in its file name.
This is a useful feature in practice, because sometimes you do have spaces in there (e.g. on Windows you might have c:\Program Files\Foo\Foo.exe). Using a shell command you would have to escape spaces in your command string. Worse, a few other characters need to be escaped as well, and it's cumbersome to check what exactly those are. proc sidesteps the issue by not using the shell at all but passing the string as it is to the OS.
For the executable arguments, proc takes a separate argument list. E.g.
proc "c:\\Program Files\\Foo\\Foo.exe" ["hello world!%$"]
Note that the arguments need no escaping either.
If you want to pass arguments to curl, you have to pass them in the list:
sp = (proc "/usr/bin/curl" ["--help"]) {std_out = CreatePipe}
Then you will get the complete output as a single string.
