Bash read/write file descriptors -- seek to start of file - linux

I tried to use the read/write file descriptor in bash so that I could delete the file that the file descriptor referred to afterward, as such:
F=$(mktemp)
exec 3<> "$F"
rm -f "$F"
echo "Hello world" >&3
cat <&3
but the cat command gives no output. I can achieve what I want if I use separate file descriptors for reading and writing:
F=$(mktemp)
exec 3> "$F"
exec 4< "$F"
rm -f "$F"
echo "Hello world" >&3
cat <&4
which prints Hello world.
I suspected that bash doesn't automatically seek to the start of the file descriptor when you switch from writing to reading it, and the following combination of bash and python code confirms this:
fdrw.sh
exec 3<> tmp
rm tmp
echo "Hello world" >&3
exec python fdrw.py
fdrw.py
import os
f = os.fdopen(3)
print f.tell()
print f.read()
which gives:
$ bash fdrw.sh
12
$ # This is the prompt reappearing
Is there a way to achieve what I want just using bash?

I found a way to do it in bash, but it relies on an obscure feature of exec < /dev/stdin, which can actually rewind the file descriptor of stdin, according to http://linux-ip.net/misc/madlug/shell-tips/tip-1.txt:
F=$(mktemp)
exec 3<> "$F"
rm -f "$F"
echo "Hello world" >&3
{ exec < /dev/stdin; cat; } <&3
The write descriptor isn't affected by that, so you can still append output to descriptor 3 before the cat.
Sadly, I only got this working under Linux, not under macOS (BSD), even with the newest bash version. So it doesn't seem very portable.

If you ever do happen to want to seek on bash file descriptors, you can use a subprocess, since it inherits the file descriptors of the parent process. Here is an example C program to do this.
seekfd.c
#define _FILE_OFFSET_BITS 64
#include <string.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
int main(int argc, char* argv[])
{
    /* Arguments: fd [offset [whence]]
     * where
     *   fd: file descriptor to seek
     *   offset: number of bytes from position specified in whence
     *   whence: one of
     *     SEEK_SET (==0): from start of file
     *     SEEK_CUR (==1): from current position
     *     SEEK_END (==2): from end of file
     */
    int fd;
    long long scan_offset = 0;
    off_t offset = 0;
    int whence = SEEK_SET;
    int errsv;
    int rv;

    if (argc == 1) {
        fprintf(stderr, "usage: seekfd fd [offset [whence]]\n");
        exit(1);
    }
    if (argc >= 2) {
        if (sscanf(argv[1], "%d", &fd) == EOF) {
            errsv = errno;
            fprintf(stderr, "%s: %s\n", argv[0], strerror(errsv));
            exit(1);
        }
    }
    if (argc >= 3) {
        rv = sscanf(argv[2], "%lld", &scan_offset);
        if (rv == EOF) {
            errsv = errno;
            fprintf(stderr, "%s: %s\n", argv[0], strerror(errsv));
            exit(1);
        }
        offset = (off_t) scan_offset;
    }
    if (argc >= 4) {
        if (sscanf(argv[3], "%d", &whence) == EOF) {
            errsv = errno;
            fprintf(stderr, "%s: %s\n", argv[0], strerror(errsv));
            exit(1);
        }
    }
    if (lseek(fd, offset, whence) == (off_t) -1) {
        errsv = errno;
        fprintf(stderr, "%s: %s\n", argv[0], strerror(errsv));
        exit(2);
    }
    return 0;
}
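For example, a quick usage sketch (the compile step and the binary name seekfd are my own additions, not part of the answer; the child process inherits fd 3 from the shell and moves the shared file offset):
gcc -o seekfd seekfd.c
F=$(mktemp)
exec 3<> "$F"
rm -f "$F"
echo "Hello world" >&3     # fd 3 now sits at the end of the file
./seekfd 3 0 0             # rewind fd 3 (offset 0, whence SEEK_SET)
cat <&3                    # prints: Hello world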

No. bash does not have any concept of "seeking" with its redirection. It reads/writes (mostly) from beginning to end in one long stream.

When you open a file descriptor in bash like that, it becomes accessible as a file in /dev/fd/.
You can cat that and it'll read from the start, or append to it (echo "something" >> /dev/fd/3) and it'll add to the end.
At least on my system it behaves this way. (On the other hand, I can't seem to get "cat <&3" to work, even if I don't do any writing to the descriptor.)

Try changing the sequence of commands:
F=$(mktemp tmp.XXXXXX)
exec 3<> "$F"
echo "Hello world" > "$F"
rm -f "$F"
#echo "Hello world" >&3
cat <&3

#!/bin/bash
F=$(mktemp tmp.XXXXXX)
exec 3<> $F
rm $F
echo "Hello world" >&3
cat /dev/fd/3
As suggested in another answer, cat will rewind the file descriptor for you before reading from it, since it thinks it's just a regular file.

To 'rewind' the file descriptor, you can simply use /proc/self/fd/3
Test script:
#!/bin/bash
# Fill data
FILE=test
date +%FT%T >$FILE
# Open the file descriptor and delete the file
exec 5<>$FILE
rm -rf $FILE
# Check state of the file
# should return an error as the file has been deleted
file $FILE
# Check that you still can do multiple reads or additions
for i in {0..5}; do
    echo ----- $i -----
    echo . >>/proc/self/fd/5
    cat /proc/self/fd/5
    echo
    sleep 1
done
Try killing the script with kill -9 while it is running; you will see that, contrary to what happens with the trap method, the file is actually deleted.
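For comparison, the trap method referred to here is presumably something along these lines (my sketch, not from the answer); the file stays on disk until the trap fires, and kill -9 bypasses the trap, leaving the file behind:
#!/bin/bash
FILE=test
trap 'rm -f "$FILE"' EXIT   # cleanup on normal exit or trappable signals, but not on kill -9
date +%FT%T >"$FILE"
cat "$FILE"
sleep 10                    # kill -9 the script here and the file survives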

Expansion on the answer by @sanmai...
And confirmation of what is going on...
#!/bin/bash
F=$(mktemp tmp.XXXXXX)
exec 3<>$F # open the temporary file for read and write
rm $F # delete file, though it remains on file system
echo "Hello world!" >&3 # Add a line to a file
cat /dev/fd/3 # Read the whole file
echo "Bye" >>/dev/fd/3 # Append another line
cat /dev/fd/3 # Read the whole file
echo "Goodbye" >&3 # Overwrite second line
cat /dev/fd/3 # Read the whole file
cat <&3 # Try to Rewind (no output)
echo "Cruel World!" >&3 # Still adds a line on end
cat /dev/fd/3 # Read the whole file
shell_seek 3 6 0 # seek fd 3 to position 6
echo -n "Earth" >&3 # Overwrite 'World'
shell_seek 3 # rewind fd 3
cat <&3 # Read the whole file put 3 at end
Note that the echo Goodbye overwrites the second line, as the file descriptor &3 had not been changed by the cat!
I then tried cat <&3 to see if it rewinds the descriptor it was given. It does not: it produced no output, probably because the file descriptor was already at the end of the file.
The last part uses the 'C' program that was provided, compiled and named shell_seek, and yes, it seems to work: the first 'World' was replaced by 'Earth', and the rewind (seek to start) worked, allowing the last cat to again read the whole file. That cat leaves the fd at the end of the file again!
Doing it using perl instead of C was not that hard either.
For example perl -e 'open(FD,">&3"); seek(FD,0,0);' will rewind file descriptor 3 back to the start of the file.
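Applied to the original question's setup, that one-liner is enough on its own (a minimal sketch; the perl child inherits fd 3 and the seek moves the shared file offset):
F=$(mktemp)
exec 3<> "$F"
rm -f "$F"
echo "Hello world" >&3
perl -e 'open(FD, ">&3"); seek(FD, 0, 0);'   # rewind fd 3 to the start of the file
cat <&3                                      # prints: Hello world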
I have now made a perl version of shell_seek so I don't have to re-compile it all the time for different systems. Not only that, but the script can also 'tell' you the current file descriptor offset, and 'truncate' the file that the file descriptor points to. Both operations are commonly used together with seek, so it seemed a good idea to include them. You can download the script from...
https://antofthy.gitlab.io/software/#shell_seek

Related

While loop (in a background process) reading from file descriptor does not exit

I'll explain my problem statement first. I have a command1 which generates data on both stderr and stdout; the stdout is piped to command2, while the stderr should go to a background process which continuously calls an API.
Dummy example of the same:
#!/bin/bash
set -e;
metric_background_process () {
    # This while loop waits till there is a line to read
    while read -u 3 -r line; do
        echo $line;
    done;
    # This never prints !!!
    echo "Done"
}
tmp_dir=$(mktemp -d)
mkfifo "$tmp_dir/f1"
# Allocate a file descriptor to the named pipe
# to allow read and write from it.
exec 3<> "$tmp_dir/f1"
metric_background_process <&3 &
pid=$!
# Main commands (stdout is being piped to stderr as a dummy case)
cat ./assets/random_file 1>&3
exec 3<&-
wait $pid
The input file contains: 1 through 9, with 9 being the last line.
Observed Output:
1
2
3
4
5
6
7
8
The shell waits after printing 8; 9 is not printed, and the program does not stop by itself.
No matter what I try, the while loop does not exit. Maybe I'm missing something simple; any help or further questions for clarification would be highly appreciated.
The parent shell opens the fifo for read/write (<>). The background subshell (&) inherits the FD, so it too has the fifo open for read/write. When the parent closes the FD, the subshell still holds the fifo open for writing, so the write side is never fully closed and the read side (read -u 3) can never see EOF.
To make it a bit simpler —
The script:
$ cat foo.sh
metric_background_process () {
    local fifo=$1
    while read line; do
        echo $line
    done < $fifo
    echo "Done"
}

tmp_dir=$(mktemp -d)
fifo="$tmp_dir/f1"
mkfifo "$tmp_dir/f1"

metric_background_process $fifo &
pid=$!

exec 3> $fifo
for i in {1..5}; do
    echo i=$i >&3
done
exec 3>&-

wait $pid
Result:
$ bash foo.sh
i=1
i=2
i=3
i=4
i=5
Done

How to avoid "stdin appears to be a pipe" error in linux bash scripting [duplicate]

I'm trying to do the opposite of "Detect if stdin is a terminal or pipe?".
I'm running an application that's changing its output format because it detects a pipe on STDOUT, and I want it to think that it's an interactive terminal so that I get the same output when redirecting.
I was thinking that wrapping it in an expect script or using a proc_open() in PHP would do it, but it doesn't.
Any ideas out there?
Aha!
The script command does what we want...
script --return --quiet -c "[executable string]" /dev/null
Does the trick!
Usage:
script [options] [file]
Make a typescript of a terminal session.
Options:
-a, --append append the output
-c, --command <command> run command rather than interactive shell
-e, --return return exit code of the child process
-f, --flush run flush after each write
--force use output file even when it is a link
-q, --quiet be quiet
-t[<file>], --timing[=<file>] output timing data to stderr or to FILE
-h, --help display this help
-V, --version display version
Based on Chris' solution, I came up with the following little helper function:
faketty() {
    script -qfc "$(printf "%q " "$@")" /dev/null
}
The quirky-looking printf is necessary to correctly expand the script's arguments in $@ while protecting possibly quoted parts of the command (see example below).
Usage:
faketty <command> <args>
Example:
$ python -c "import sys; print(sys.stdout.isatty())"
True
$ python -c "import sys; print(sys.stdout.isatty())" | cat
False
$ faketty python -c "import sys; print(sys.stdout.isatty())" | cat
True
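To see why the %q quoting matters, consider an argument that contains spaces (an illustrative sketch): the %q expansion re-quotes it so that the shell started by script still sees it as a single word; without it, that inner shell would split it into two arguments.
$ faketty printf '%s\n' "two words"
two words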
The unbuffer script that comes with Expect should handle this OK. If not, the application may be looking at something other than what its output is connected to, e.g. what the TERM environment variable is set to.
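For instance (assuming the expect package, and with it the unbuffer script, is installed):
$ unbuffer python -c "import sys; print(sys.stdout.isatty())" | cat
True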
Referring to the previous answer: on Mac OS X, "script" can be used like below...
script -q /dev/null commands...
But because it may replace "\n" with "\r\n" on stdout, you may also need to run script like this:
script -q /dev/null commands... | perl -pe 's/\r\n/\n/g'
If there are pipes between these commands, you need to flush stdout. For example:
script -q /dev/null commands... | ruby -ne 'print "....\n";STDOUT.flush' | perl -pe 's/\r\n/\n/g'
I don't know if it's doable from PHP, but if you really need the child process to see a TTY, you can create a PTY.
In C:
#include <stdio.h>
#include <stdlib.h>
#include <sysexits.h>
#include <unistd.h>
#include <pty.h>
int main(int argc, char **argv) {
    int master;
    struct winsize win = {
        .ws_col = 80, .ws_row = 24,
        .ws_xpixel = 480, .ws_ypixel = 192,
    };
    pid_t child;

    if (argc < 2) {
        printf("Usage: %s cmd [args...]\n", argv[0]);
        exit(EX_USAGE);
    }

    child = forkpty(&master, NULL, NULL, &win);
    if (child == -1) {
        perror("forkpty failed");
        exit(EX_OSERR);
    }
    if (child == 0) {
        execvp(argv[1], argv + 1);
        perror("exec failed");
        exit(EX_OSERR);
    }

    /* now the child is attached to a real pseudo-TTY instead of a pipe,
     * while the parent can use "master" much like a normal pipe */
}
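A build note (my assumption, not part of the answer): on Linux, forkpty is declared in <pty.h> and traditionally lives in libutil, so compile with something like:
gcc pty.c -o pty -lutil    # file and binary names are arbitrary; recent glibc also links it without -lutil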
I was actually under the impression that expect itself creates a PTY, though.
Updating @A-Ron's answer to
a) work on both Linux & macOS
b) propagate status code indirectly (since macOS script does not support it)
faketty () {
    # Create a temporary file for storing the status code
    tmp=$(mktemp)
    # Ensure it worked or fail with status 99
    [ "$tmp" ] || return 99
    # Produce a script that runs the command provided to faketty as
    # arguments and stores the status code in the temporary file
    cmd="$(printf '%q ' "$@")"'; echo $? > '$tmp
    # Run the script through /bin/sh with fake tty
    if [ "$(uname)" = "Darwin" ]; then
        # MacOS
        script -Fq /dev/null /bin/sh -c "$cmd"
    else
        script -qfc "/bin/sh -c $(printf "%q " "$cmd")" /dev/null
    fi
    # Ensure that the status code was written to the temporary file or
    # fail with status 99
    [ -s $tmp ] || return 99
    # Collect the status code from the temporary file
    err=$(cat $tmp)
    # Remove the temporary file
    rm -f $tmp
    # Return the status code
    return $err
}
Examples:
$ faketty false ; echo $?
1
$ faketty echo '$HOME' ; echo $?
$HOME
0
embedded_example () {
    faketty perl -e 'sleep(5); print "Hello world\n"; exit(3);' > LOGFILE 2>&1 </dev/null &
    pid=$!
    # do something else
    echo 0..
    sleep 2
    echo 2..
    echo wait
    wait $pid
    status=$?
    cat LOGFILE
    echo Exit status: $status
}
$ embedded_example
0..
2..
wait
Hello world
Exit status: 3
Too new to comment on the specific answer, but I thought I'd follow up on the faketty function posted by ingomueller-net above, since it recently helped me out.
I found that this was creating a typescript file that I didn't want/need so I added /dev/null as the script target file:
function faketty { script -qfc "$(printf "%q " "$@")" /dev/null ; }
There's also a pty program included in the sample code of the book "Advanced Programming in the UNIX Environment, Second Edition"!
Here's how to compile pty on Mac OS X:
man 4 pty # pty -- pseudo terminal driver
open http://en.wikipedia.org/wiki/Pseudo_terminal
# Advanced Programming in the UNIX Environment, Second Edition
open http://www.apuebook.com
cd ~/Desktop
curl -L -O http://www.apuebook.com/src.tar.gz
tar -xzf src.tar.gz
cd apue.2e
wkdir="${HOME}/Desktop/apue.2e"
sed -E -i "" "s|^WKDIR=.*|WKDIR=${wkdir}|" ~/Desktop/apue.2e/Make.defines.macos
echo '#undef _POSIX_C_SOURCE' >> ~/Desktop/apue.2e/include/apue.h
str='#include <sys/select.h>'
printf '%s\n' H 1i "$str" . wq | ed -s calld/loop.c
str='
#undef _POSIX_C_SOURCE
#include <sys/types.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s file/devrdev.c
str='
#include <sys/signal.h>
#include <sys/ioctl.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s termios/winch.c
make
~/Desktop/apue.2e/pty/pty ls -ld *
I was trying to get colors when running shellcheck <file> | less on Linux, so I tried the above answers, but they produce this bizarre effect where text is horizontally offset from where it should be:
In ./all/update.sh line 6:
for repo in $(cat repos); do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
(For those unfamiliar with shellcheck, the line with the warning is supposed to line up with where the problem is.)
In order to get the answers above to work with shellcheck, I tried one of the options from the comments:
faketty() {
    0</dev/null script -qfc "$(printf "%q " "$@")" /dev/null
}
This works. I also added --return and used long options, to make this command a little less inscrutable:
faketty() {
    0</dev/null script --quiet --flush --return --command "$(printf "%q " "$@")" /dev/null
}
Works in Bash and Zsh.
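With that in place, the original goal works along these lines (less -R is my addition so that less passes the color escapes through):
faketty shellcheck ./all/update.sh | less -R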

In bash how to parse and/or redirect stdout for an arbitrary program I am backgrounding

In a previous question I asked how to write to a program's stdin. This question builds off of that one. Now I am asking "how do I parse my called program's output".
The fact that I'm calling a program I can edit (Sum.sh) in this example is irrelevant to the question. I want this behavior in the event that I do not have control over the implementation of the program I am calling. I want to "replace" the physical human that a program expects and drive it via a script.
Given two scripts
Sum.sh:
#!/bin/bash
sum=0
inp=0
while true;
do
    printf "Sum: %d\n" $sum
    printf "Give me an integer: "
    read inp
    if [ "$inp" == 0 ]; then
        exit
    fi
    sum=$((sum+inp))
done
And driver.sh:
#!/bin/bash
rm -f /tmp/sumpipe
mkfifo /tmp/sumpipe
./Sum.sh < /tmp/sumpipe &
echo 2 > /tmp/sumpipe
echo 5 > /tmp/sumpipe
echo 0 > /tmp/sumpipe
sleep 1 # this is required for some reason or the script hangs
And I run driver.sh I get the following:
$ ./driver.sh
Sum: 0
Give me an integer: Sum: 2
Give me an integer: Sum: 7
Give me an integer: $
The formatting of the terminal output is not important to this question.
What I want to do is parse said output in order to make decisions in driver.sh. In the following psuedo code of driver.sh I try to show what I want to do:
#!/bin/bash
rm -f /tmp/sumpipe
mkfifo /tmp/sumpipe
output=$(./Sum.sh < /tmp/sumpipe &)
while true; do
    if [ "$output" != empty ]; then
        (make some decision based on what output is and echo back to my process if necessary)
    fi
done
Unfortunately, $output never contains anything, and I have tried many versions of abusing read and piping and none of them have worked for me in this application. How can I redirect my backgrounded program's stdout so I can parse it?
I attempted OrangesV's proposed solution (and with deference to William Pursell's suggestion about the output buffer being empty) with the following:
#!/bin/bash
rm -f /tmp/sumpipe
mkfifo /tmp/sumpipe
while read output; do
    echo "2347823" > /tmp/sumpipe
    if [ -z $output ]; then
        continue
    else
        echo "got it!"
        echo $output
        sleep 1
    fi
done < <(./Sum.sh < /tmp/sumpipe &)
Unfortunately, "got it!" is never echoed, so it seems it didn't work.
In case it's relevant, my bash version:
$ bash --version
GNU bash, version 4.4.0(7)-rc2 (x86_64-unknown-linux-gnu)
There is now a partial solution. Diego Torres Milano's answer does work. However, my output is being buffered by stdio, so I won't get anything until the buffer holds at least 4096 bytes. I can force output with the following version of driver.sh:
#!/bin/bash
inpipe=/tmp/sumpipei
outpipe=/tmp/sumpipeo
rm -f $inpipe
rm -f $outpipe
mkfifo $inpipe
mkfifo $outpipe
./Sum.sh < $inpipe > $outpipe &
count=0
while true;
do
    while read line
    do
        echo "line=$line"
    done < $outpipe &
    echo 2 > $inpipe
done
I tried all sorts of invocations of stdbuf (like stdbuf -o0 ./Sum.sh < $inpipe > $outpipe &). stdbuf apparently does not work in this situation to force off buffering because, according to the GNU coreutils manual on stdbuf (for command in stdbuf option… command):
command must start with the name of a program that does ... not adjust
the buffering of its standard streams (note the program tee is not in
this category).
Which is unfortunately exactly what I'm doing here.
I'm marking Diego's answer as correct, even though it doesn't work until the stdio buffering issue is fixed. My next question will address the stdio buffering problem. I can almost taste the prize here.
Use another pipe for the output
mkfifo /tmp/output
./Sum.sh < /tmp/sumpipe > /tmp/output &
and you can process it with
while read line
do
    echo "line=$line"
done < /tmp/output &
You can do a while read within your driver.sh.
while read output; do
    if [ "${output}" == "something" ]; then
        #do something
    fi
done < <(./Sum.sh < /tmp/sumpipe &)
EDIT
This loop will not start iterating until the script's stdin receives input. So, in another shell/process, run an echo # > /tmp/sumpipe.

Get reason for permission denied due to traversed directory not executable

I have a file /a/b that is readable by a user A. But /a does not grant execute permission to A, so A cannot traverse /a to reach /a/b. For an arbitrarily long path, how would I determine the cause for not being able to access a given path due to an intermediate path not being accessible by the user?
An alternative to parsing the tree manually and pinpointing the error to a single row yourself is the namei tool.
namei -mo a/b/c/d
f: a/b/c/d
drwxrw-rw- rasjani rasjani a
drw-rwxr-x rasjani rasjani b
c - No such file or directory
This shows the whole tree structure and permissions up until the entry where the permission is denied.
Something like this:
#!/bin/bash
PAR=${1}
PAR=${PAR:="."}
if ! [[ "${PAR:0:1}" == / || "${PAR:0:2}" == ~[/a-z] ]]
then
    TMP=`pwd`
    PAR=$(dirname ${TMP}/${PAR})
fi
cd $PAR 2> /dev/null
if [ $? -eq 1 ]; then
    while [ ! -z "$PAR" ]; do
        PREV=$(readlink -f ${PAR})
        TMP=$(echo ${PAR}|awk -F\/ '{$NF=""}'1|tr ' ' \/)
        PAR=${TMP%/}
        cd ${PAR} 2>/dev/null
        if [ $? -eq 0 ]; then
            if [ -e ${PREV} ]; then
                ls -ld ${PREV}
            fi
            exit
        fi
    done
fi
Ugly but it would get the job done ..
So the idea is basically: take a parameter $1; if it's not an absolute path, expand it to one, then drop the last element of the path and try to cd into it; if that fails, rinse and repeat. If a cd works, PREV holds the last directory the user couldn't cd into, so print it out.
Here's what I threw together. I actually didn't look at rasjani's answer before writing this, but it uses the same concept of taking the exit status of a command. Basically it goes through all the directories (starting from the farthest down the chain) and tries to ls them. If the exit status is 0, then the ls succeeded, and it prints out the last dir that it couldn't ls (I'm not sure what would happen in some of the edge cases, like where you can't access anything):
LAST=/a/b
while [ ! -z "$LAST" ] ; do
NEXT=`echo "$LAST" | sed 's/[^\/]*$//' | sed 's/\/$//'`
ls "$NEXT" 2> /dev/null > /dev/null
if [ $? -eq 0 ] ; then
echo "Can't access: $LAST"
break
fi
LAST="$NEXT"
done
and I like putting stuff like this on one line just for fun:
LAST=/a/b; while [ ! -z "$LAST" ] ; do NEXT=`echo "$LAST" | sed 's/[^\/]*$//' | sed 's/\/$//'`; ls "$NEXT" 2> /dev/null > /dev/null; if [ $? -eq 0 ] ; then echo "Can't access: $LAST"; break; fi; LAST="$NEXT"; done
I have a C program below which does this. The steps:
Copy and save the program as file.c.
Compile it with gcc file.c -o file
Run it as ./file PATH
Assuming you have the path /a/b/c/d and you do not have permission for 'c', the output will be:
Given Path = /a/b/c/d
No permission on = /a/b/c
For permissions I am relying on the "EACCES" error. The path length is assumed to be at most 1024.
If you have any questions please share.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>   /* chdir */
#define MAX_LEN 1024

int main(int argc, char *argv[])
{
    char path[MAX_LEN] = "/home/sudhansu/Test";
    int i = 0;
    char parse[MAX_LEN] = "";

    if(argc == 2)
    {
        strcpy(path, argv[1]);
        printf("\n\t\t Given Path = %s\n", path);
    }
    else
    {
        printf("\n\t\t Usage : ./file PATH\n\n");
        return 0;
    }

    if(path[strlen(path)-1] != '/')
        strcat(path, "/");
    path[strlen(path)] = '\0';

    while(path[i])
    {
        if(path[i] == '/')
        {
            strncpy(parse, path, i+1);
            if(chdir(parse) < 0)
            {
                if(errno == EACCES)
                {
                    printf("\t\t No permission on = [%s]\n", parse);
                    break;
                }
            }
        }
        parse[i] = path[i];
        i++;
    }
    printf("\n");
    return 0;
}

linux shell test `-d` on empty argument

Here is the code,
x=
if [ -d $x ]; then
    echo "it's a dir"
else
    echo "not a dir"
fi
The above code gives me "it's a dir", why? $x is empty, isn't it?
x=
if [ -d $x ]; then
is equivalent to:
if [ -d ] ; then
A simpler way to demonstrate what's going on is:
test -d ; echo $?
which prints 0, indicating that the test succeeded ([ is actually a command, equivalent to test except that it takes a terminating ] argument.)
But this:
test -f ; echo $?
does the same thing. Does that mean that the missing argument is both a directory and a plain file?
No, it means that it's not doing those tests.
According to the POSIX specification for the test command, its behavior depends on the number of arguments it receives.
With 0 arguments, it exits with a status of 1, indicating failure.
With 1 argument, it exits with a status of 0 (success) if the argument is not empty, or 1 (failure) if the argument is empty.
With 2 arguments, the result depends on the first argument, which can be either ! (which reverses the behavior for 1 argument), or a "unary primary" like -f or -d, or something else; if it's something else, the results are unspecified.
(POSIX also specifies the behavior for more than 2 arguments, but that's not relevant to this question.)
So this:
x=
if [ -d $x ]; then echo yes ; else echo no ; fi
prints "yes", not because the missing argument is a directory, but because the single argument -d is not the empty string.
Incidentally, the GNU Coreutils manual doesn't mention this.
So don't do that. If you want to test whether $x is a directory, enclose it in double quotes:
if [ -d "$x" ] ; then ...
The stat system call, which your shell presumably uses to determine if something is a directory, treats null as the current directory.
Try compiling this program and running it with no arguments:
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char* argv[]) {
    struct stat s;
    stat(argv[1], &s);
    if ((s.st_mode & S_IFDIR) != 0) {   /* parentheses needed: & binds looser than != */
        printf("%s is a directory\n", argv[1]);
    }
}
