Determine what exactly a parameter returns - linux

On some OSes, like Ubuntu and Debian, cal prints the current calendar with today highlighted, and cal -h turns off the highlighting of today.
But on other OSes, like Arch Linux, the -h parameter displays the calendar's help.
I'm writing a small script in Lua:
function foo()
    local f, err = io.popen('cal -h', 'r')
    if f then
        local s = f:read("*all")
        f:close()
        return s
    else
        return err
    end
end
And my main question: how do I determine exactly what the -h parameter does on a given system?

Execute cal -h and parse the output for the word "help". If the word is found, "-h" means help. If it is not found, it most likely means highlight, but there is no sure way of knowing (no way that will work on all flavors of Linux). Most likely you will need some code that reads an environment variable identifying the platform, so you can issue the correct command, and rely on users of different Linux flavors to report when the default fails and tell you the correct command-line parameters. On the other hand, you could limit support to only those platforms you have access to, or use a combination of these approaches.
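A minimal Lua sketch of that heuristic (the keyword match is only a guess, as noted above, and the helper name is made up):
local function cal_h_prints_help()
    -- capture both stdout and stderr, since help text may go to either
    local f = io.popen('cal -h 2>&1', 'r')
    if not f then return nil end
    local out = f:read('*all'):lower()
    f:close()
    -- heuristic: help output usually contains "usage" or "help"
    return out:find('usage', 1, true) ~= nil or out:find('help', 1, true) ~= nil
end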

Another solution.
Arch Linux's cal has a -V parameter, which prints the util-linux version.
In this case, calling cal -V on Arch Linux will likely give you exit code 0, while Ubuntu's cal doesn't have a -V parameter and returns 64 :)
So, if cal -V returns exit code 0, -h prints the help.
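A minimal Lua sketch of that probe (assumes Lua 5.2+, where os.execute returns a boolean for success):
local function have_util_linux_cal()
    -- exit code 0 from `cal -V` implies util-linux cal, where -h prints the help
    local ok = os.execute('cal -V > /dev/null 2>&1')
    return ok == true
end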


cannot create /dev/stdout: No such device or address

I want to run a shell command via node and capture the result on stdout. My script works fine on OSX, but not on Ubuntu.
I've simplified the problem and script to the following node script:
var execSync = require('child_process').execSync,
    result = execSync('echo "hello world" >> /dev/stdout');
// Do something with result
Results in:
/bin/sh: 1: cannot create /dev/stdout: No such device or address
I have tried replacing /dev/stdout with /dev/fd/1
I have tried changing the shell to bash... execSync('echo ...', {shell : '/bin/bash'})
Like I said, the problem above is simplified. The real script accepts as a parameter the name of a file where results should be written, so I need to resolve this by providing access to the stdout stream as a file descriptor, i.e. /dev/stdout.
How can I execute a command via node, while giving the command access to its own stdout stream?
On /dev/stdout
I don't have access to an OSX box, but from this issue on phantomjs it seems that, while /dev/stdout is a symlink on both OSX/BSD and Linux, it behaves differently on the two. One of the commenters said it's standard on OSX to use /dev/stdout but not on Linux. In another random place I read statements implying that /dev/stdout is pretty much an OSX thing. There might be a clue in this answer as to why it doesn't work on Linux (it seems the file descriptor is implicitly closed when used this way).
Further related questions:
https://unix.stackexchange.com/questions/36403/portability-of-dev-stdout
bash redirect to /dev/stdout: Not a directory
The solution
I tried your code on Arch and it indeed gives me the same error, as do the variations mentioned - so this is not related to Ubuntu.
I found a blog post that describes how you can pass a file descriptor to execSync. Putting that together with what I got from here and here, I wrote this modified version of your code:
var fs = require('fs');
var path = require('path');
var fdout = fs.openSync(path.join(process.cwd(), 'stdout.txt'), 'a');
var fderr = fs.openSync(path.join(process.cwd(), 'stderr.txt'), 'a');
var execSync = require('child_process').execSync,
    result = execSync('echo "hello world"', {stdio: [0, fdout, fderr]});
Unless I misunderstood your question, you want to be able to change where the output of the command in execSync goes. With this you can, using a file descriptor. You can still pass 1 and 2 if you want the called program to output to stdout and stderr as inherited by its parent, which you've already mentioned in the comments.
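For example, a hedged sketch of that variant, inheriting the parent's streams rather than writing to the files above:
// Pass the parent's stdin/stdout/stderr straight through to the child.
// Note: with stdout no longer piped, execSync won't return the captured output.
execSync('echo "hello world"', {stdio: [0, 1, 2]});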
For future reference, this worked on Arch with kernel version 4.10.9-1-ARCH, on bash 4.4.12 and node v7.7.3.

Bash does not print any error message for non-existent commands starting with a dot

This is really just out of curiosity.
A typo made me notice that in Bash, the following:
$ .anything
does not print any error ("anything" not to be interpreted literally, it can really be anything, and no space after the dot).
I am curious about how this is interpreted in bash.
Note that echo $? after such a command returns 127. This usually means "command not found". It does make sense in this case; however, I find it odd that no error message is printed.
Why would $ anything actually print bash: anything: command not found... (assuming that no anything command is in the PATH), while $ .anything slips through silently?
System: Fedora Core 22
Bash version: GNU bash, version 4.3.39(1)-release (x86_64-redhat-linux-gnu)
EDIT:
Some comments below indicated the problem as non-reproducible at first.
The answer by @hek2mgl below summarises the many contributions to this issue, which was eventually found (by @n.m.) to be reproducible in FC22 and submitted as a bug report at https://bugzilla.redhat.com/show_bug.cgi?id=1292531
bash supports a handler for situations when a command can't be found. You can define the following function:
function command_not_found_handle() {
    command=$1
    # do something
}
Using that function it is possible to suppress the error message. Search for that function in your bash startup files.
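For illustration only, a handler along these lines would reproduce the described behaviour: it stays silent for names starting with a dot while keeping the 127 exit status (a sketch, not Fedora's actual implementation):
function command_not_found_handle() {
    case "$1" in
        .*) return 127 ;;   # silently ignore names starting with a dot
    esac
    echo "bash: $1: command not found..." >&2
    return 127
}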
Another way to find that out is to unset the function. Like this:
$ unset -f command_not_found_handle
$ .anything # Should display the error message
After some research, @n.m. found out that the described behaviour is intentional. FC22 implements command_not_found_handle and calls the program /etc/libexec/pk-command-not-found. This program is part of the PackageKit project and will try to suggest installable packages if you type a command name that can't be found.
In its main() function the program explicitly checks whether the command name starts with a dot and silently returns in that case. This behaviour was introduced in this commit:
https://github.com/hughsie/PackageKit/commit/0e85001b
as a response to this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1151185
IMHO this behaviour is questionable. At least, other distros don't do this. But now you know that the behaviour is 100% reproducible, and you may follow up on that bug report.

How to check if a command is available or existent?

I am developing a console application in C on Linux.
Now, an optional part of it (it's not a requirement) depends on a command/binary being available.
If I check with system() I get sh: command not found as unwanted output, and the command is detected as existent. So how would I check whether the command is there?
Not a duplicate of Check if a program exists from a Bash script since I'm working with C, not BASH.
To answer your question about how to discover whether the command exists from your code: you can try checking the return value.
// The redirect to /dev/null ensures that your program does not produce
// the output of these commands.
int ret = system("ls --version > /dev/null 2>&1");
if (ret == 0) {
    // The executable was found.
}
You could also use popen to read the output. Combining that with the whereis and type commands suggested in other answers:
char result[255];
FILE* fp = popen("whereis command", "r");
fgets(result, sizeof(result), fp);
// Parse result to see the path of the binary, if it has been found.
pclose(fp);
Or using type:
FILE* fp = popen("type command" , "r");
The result of the type command is a bit harder to parse since it's output varies depending on what you are looking for (binary, alias, function, not found).
You can use stat(2) on Linux (or any POSIX OS) to check for a file's existence.
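A minimal sketch of that check, assuming you already know a candidate path for the binary (/usr/bin/agrep is just an illustrative guess):
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat sb;
    const char *candidate = "/usr/bin/agrep";   /* hypothetical path */
    if (stat(candidate, &sb) == 0)
        printf("%s exists\n", candidate);
    else
        printf("%s not found\n", candidate);
    return 0;
}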
Use which: you can either check the value returned by system() (0 if found) or the output of the command (no output means not found):
$ which which
/usr/bin/which
$ echo $?
0
$ which does_t_exist
$ echo $?
1
If you run a shell, the output from "type commandname" will tell you whether commandname is available, and if so, how it is provided (alias, function, path to binary). You can read the documentation for type here: http://ss64.com/bash/type.html
I would just go through the current PATH and see whether you can find it there. That's what I did recently with an optional part of a program that needed agrep installed. Alternatively, if you don't trust the PATH but have your own list of paths to check instead, use that.
I doubt it's something where you need to ask the shell whether it's a builtin.

Bash script execution with and without shebang in Linux and BSD

How, and by whom, is it determined what gets executed when a Bash-like script is run as a binary without a shebang?
I guess that running a normal script with a shebang is handled by the binfmt_script Linux module, which checks for a shebang, parses the command line and runs the designated script interpreter.
But what happens when someone runs a script without a shebang? I've tested the direct execv approach and found out that there's no kernel magic in there - i.e. given a file like this:
$ cat target-script
echo Hello
echo "bash: $BASH_VERSION"
echo "zsh: $ZSH_VERSION"
Running a compiled C program that does just an execv call yields:
$ cat test-runner.c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char *args[] = { "./target-script", NULL };
    if (execv("./target-script", args) == -1)
        perror("./target-script");
}
$ ./test-runner
./target-script: Exec format error
However, if I do the same thing from another shell script, it runs the target script using the same shell interpreter as the original one:
$ cat test-runner.bash
#!/bin/bash
./target-script
$ ./test-runner.bash
Hello
bash: 4.1.0(1)-release
zsh:
If I do the same trick with other shells (for example, Debian's default sh - /bin/dash), it also works:
$ cat test-runner.dash
#!/bin/dash
./target-script
$ ./test-runner.dash
Hello
bash:
zsh:
Mysteriously, it doesn't quite work as expected with zsh and doesn't follow the general scheme. Looks like zsh executed /bin/sh on such files after all:
greycat@burrow-debian ~/z/test-runner $ cat test-runner.zsh
#!/bin/zsh
echo ZSH_VERSION=$ZSH_VERSION
./target-script
greycat@burrow-debian ~/z/test-runner $ ./test-runner.zsh
ZSH_VERSION=4.3.10
Hello
bash:
zsh:
Note that ZSH_VERSION in parent script worked, while ZSH_VERSION in child didn't!
How does a shell (Bash, dash) determine what gets executed when there's no shebang? I've tried to dig up that place in the Bash/dash sources, but, alas, it looks like I'm kind of lost in there. Can anyone shed some light on the magic that determines whether a target file without a shebang should be executed as a script or as a binary in Bash/dash? Or maybe there is some sort of interaction with the kernel / libc, and then I'd welcome explanations of how it works in the Linux and FreeBSD kernels / libcs?
Since this happens in dash and dash is simpler, I looked there first.
Seems like exec.c is the place to look, and the relevant function is tryexec, which is called from shellexec, which is called whenever the shell thinks a command needs to be executed. And (a simplified version of) the tryexec function is as follows:
STATIC void
tryexec(char *cmd, char **argv, char **envp)
{
    char *const path_bshell = _PATH_BSHELL;

repeat:
    execve(cmd, argv, envp);
    if (cmd != path_bshell && errno == ENOEXEC) {
        *argv-- = cmd;
        *argv = cmd = path_bshell;
        goto repeat;
    }
}
So, it simply always replaces the command to execute with the path to the shell itself (_PATH_BSHELL defaults to "/bin/sh") if ENOEXEC occurs. There's really no magic here.
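Applied to the question's test-runner, a hedged sketch of that fallback looks like this (not dash's actual code, just the same idea in a standalone C program):
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char *args[] = { "./target-script", NULL };
    execv("./target-script", args);            /* only returns on failure */
    if (errno == ENOEXEC) {
        /* No valid exec header: re-run the file through /bin/sh,
           which is what the shells described above do internally. */
        char *shargs[] = { "/bin/sh", "./target-script", NULL };
        execv("/bin/sh", shargs);
    }
    perror("./target-script");
    return 127;
}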
I find that FreeBSD exhibits identical behavior in bash and in its own sh.
The way bash handles this is similar but much more complicated. If you want to look into it further, I recommend reading bash's execute_command.c and looking specifically at execute_shell_script and then shell_execve. The comments are quite descriptive.
(Looks like Sorpigal has covered it but I've already typed this up and it may be of interest.)
According to Section 3.16 of the Unix FAQ, the shell first looks at the magic number (first two bytes of the file). Some numbers indicate a binary executable; #! indicates that the rest of the line should be interpreted as a shebang. Otherwise, the shell tries to run it as a shell script.
Additionally, it seems that csh looks at the first byte, and if it's #, it'll try to run it as a csh script.

How to check Linux version with Autoconf?

My program requires at least Linux 2.6.26 (I use timerfd and some other Linux-specific features).
I have a general idea of how to write this macro, but I don't have enough knowledge about writing test macros for Autoconf. Algorithm:
Run "uname --release" and store the output
Parse the output and extract the Linux version number (MAJOR.MINOR.MICRO)
Compare the version
I don't know how to run the command, store its output and parse it.
Maybe such a macro already exists and is available (I haven't found one)?
I think you'd be better off detecting the specific functions you need using AC_CHECK_FUNC, rather than a specific kernel version.
This will also prevent breakage if you find yourself cross-compiling at some point in the future.
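A minimal configure.ac sketch of that approach, assuming timerfd_create and timerfd_settime are the functions the program needs:
AC_CHECK_FUNCS([timerfd_create timerfd_settime], [],
    [AC_MSG_ERROR([required timerfd functions not found])])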
There is a macro for steps 2 (parse) and 3 (compare the version), ax_compare_version. For example:
linux_version=$(uname --release)
AX_COMPARE_VERSION($linux_version, [eq3], [2.6.26],
    [AC_MSG_NOTICE([Ok])],
    [AC_MSG_ERROR([Bad Linux version])])
Here I used eq3 so that if $linux_version contained additional strings, such as -amd64, the comparison still succeeds. There is a plethora of comparison operators available.
I would suggest checking not for the Linux version number, but for the specific type or function you need. Who knows, maybe someone decides to backport timerfd_settime() to 2.4.x? So I think AC_CANONICAL_TARGET and AC_CHECK_LIB or similar are your friends. If you need to check the function arguments or test behaviour, you'd better write a simple program and use AC_LANG_CONFTEST([AC_LANG_PROGRAM(...)])/AC_TRY_RUN to do the job.
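For example, a hedged run-test sketch along those lines (the test body is an assumption about what you might want to verify):
AC_TRY_RUN([[
#include <sys/timerfd.h>
#include <time.h>
int main(void) { return timerfd_create(CLOCK_MONOTONIC, 0) >= 0 ? 0 : 1; }
]],
    [AC_MSG_NOTICE([timerfd is usable])],
    [AC_MSG_ERROR([timerfd is not usable])],
    [AC_MSG_WARN([cross-compiling: skipping the timerfd run test])])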
Without going too deep and writing autoconf macros properly (which would be preferable anyway), don't forget that configure.ac is basically a shell script preprocessed by m4, so you can write shell commands directly.
# prev. part of configure.ac
if test `uname -r | cut -d. -f1` -lt 2; then echo "major v. error"; exit 1; fi
if test `uname -r | cut -d. -f2` -lt 6; then echo "minor v. error"; exit 1; fi
if test `uname -r | cut -d. -f3` -lt 26; then echo "micro error"; exit 1; fi
# ...
This is just an idea if you want to avoid writing autoconf macros. This approach is not great, but should work...
The best way is the one already suggested: you should check for features. Say, in a future kernel, timerfd is no longer available, or it changes in a way that breaks your code; you won't catch that, since you test only the version.
edit
As user foof says in the comments (in other words), this is a naive way to check major.minor.micro. E.g. 3.5.1 will fail because 5 is less than 6, but 3.5.1 comes after 2.6.26, so it should (likely) be accepted. There are many tricks that can be used to transform x.y.z into a representation that puts each version in its "natural" order. E.g. if we expect that x, y and z won't be greater than 999, we can multiply the major by 1000000, the minor by 1000 and the micro by 1; then you can compare the result with 2006026, as foof suggested in the comment(s).
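A sketch of that transformation in plain shell inside configure.ac (variable names are illustrative; it assumes the version string starts with MAJOR.MINOR.MICRO and that no component exceeds 999):
kver=`uname -r | cut -d- -f1`            # e.g. "4.10.9" from "4.10.9-1-ARCH"
kmajor=`echo $kver | cut -d. -f1`
kminor=`echo $kver | cut -d. -f2`
kmicro=`echo $kver | cut -d. -f3`
# major*1000000 + minor*1000 + micro, so 2.6.26 becomes 2006026
knum=`expr $kmajor \* 1000000 + $kminor \* 1000 + $kmicro`
if test "$knum" -lt 2006026; then
    AC_MSG_ERROR([Linux >= 2.6.26 required])
fi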
