Update: ShellFish identified a more general pattern that reproduces this more reliably:
non-existingcommand & existingcommand &
for example,
xyz & echo &
Also, I had a coworker try over an ssh connection and his connection was closed after running the command. So this doesn't appear to be limited to a certain terminal emulator.
Original question:
echo?a=1&b=2|3&c=4=
Behavior:
After executing the command, my current Gnome Terminal tab closes without warning.
Background:
We were testing a URL with a curl command but forgot to quote it or escape the special characters (hence the ampersands and equals signs). Expecting some nonsense about syntax issues or commands not found, we instead watched our shell simply quit. We spent some time narrowing the command down to the minimum that would cause the behavior.
We are using Gnome Terminal on Ubuntu 14.10. Strangely, the behavior is not present on another box I have running byobu even if I detach from the session. It also doesn't happen on Cygwin. Unfortunately I'm limited to testing with Ubuntu 14.10 otherwise.
Note: The following command also kills my terminal but only about half of the time:
echo?a=1&b=2&c=3=
Additional tests:
Someone recommended using a subshell...
guest-cvow8T@chortles:~$ bash -c 'echo?a=1&b=2|4&c=3='
bash: echo?a=1: command not found
guest-cvow8T@chortles:~$ bash: 4: command not found
No exit.
I could reproduce this issue in an Ubuntu VM but not in an OEL VM. The difference was that on Ubuntu the package command-not-found was installed, and it provides the python script /usr/lib/command-not-found. This script is responsible for exiting the shell.
In /etc/bash.bashrc there is a function command_not_found_handle, which executes /usr/lib/command-not-found; hence the terminal exits when we try to execute such commands. When I commented out the call to /usr/lib/command-not-found, the issue was no longer reproducible.
From my /etc/bash.bashrc:
function command_not_found_handle {
    # check because c-n-f could've been removed in the meantime
    if [ -x /usr/lib/command-not-found ]; then
        /usr/bin/python /usr/lib/command-not-found -- "$1"
        return $?
    elif [ -x /usr/share/command-not-found/command-not-found ]; then
        /usr/bin/python /usr/share/command-not-found/command-not-found -- "$1"
        return $?
    else
        printf "%s: command not found\n" "$1" >&2
        return 127
    fi
}
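If you want to keep command-not-found installed but make sure the handler can never take down your shell, a minimal replacement (my sketch, not part of the stock files) can be added to ~/.bashrc, which is read after /etc/bash.bashrc and therefore wins:
# Minimal sketch: shadow the system handler so the python script is never run.
command_not_found_handle () {
    printf "%s: command not found\n" "$1" >&2
    return 127
}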
Related
I have stumbled upon a small difference between the set implementations of the two operating systems.
When running:
#!/bin/sh
set -eu
echo "${#}"
Running this on macOS gives the following error:
@: unbound variable
Whereas running it in a Linux environment results in no errors; it just echoes an empty line.
Can this be resolved somehow, other than changing ${@} to ${@:-}? That may produce different results if the number of arguments is checked within the code.
The shell you have on Linux conforms to the latest POSIX standard while the one on macOS does not (see the spec.; scroll down for -u; related: #0000155), hence the difference in behavior. A workaround would be:
#!/bin/sh
set -eu
echo "${1+"$#"}"
As you have an old version of bash on macOS, you have to use the ${1+"$@"} workaround.
You can check the version with:
/bin/sh --version
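To make the workaround concrete, here is a minimal sketch (some_command is a hypothetical placeholder, not from the question) of forwarding arguments portably under set -u:
#!/bin/sh
# ${1+"$@"} expands to "$@" only when $1 is set, so old shells that
# treat an empty "$@" as unbound under -u never see it expanded.
set -eu
printf 'got %d argument(s)\n' "$#"
exec some_command ${1+"$@"}   # some_command is hypothetical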
I have two scripts:
fail_def.sh:
#!/bin/bash -eu
function fail() {
    echo -e "$(error "$#")"
    exit 1
}
bla.sh:
#!/bin/bash -eu
fail "test"
After source fail_def.sh, I can use the fail command without any problems in the terminal. However, when I call bla.sh, I always get line 2: fail: command not found.
It doesn't matter whether I call it via ./bla.sh or bash bla.sh or bash ./bla.sh, the error remains.
Adding source fail_def.sh to the beginning of bla.sh solves the problem, but I'd like to avoid that.
I'm working on an Ubuntu docker container running on a Mac, in case that is relevant.
I tried to google that problem and found some similar problems, but most of them seem to be connected to either not sourcing the file or mixing up different shell implementations, neither of which seems to be the case here.
What do I have to do to get the fail command to work inside the script?
It is expected!
A script with a she-bang line is always run as a separate process, and hence in a different shell namespace. The new shell in which your script runs does not have the function sourced.
To see this, add a line echo $BASHPID (which prints the process ID of the current bash process) to bla.sh right after the #!/bin/bash -eu line. A test run produces:
$ echo $BASHPID
11700
$ bash bla.sh
6788
bla.sh: line 3: fail: command not found
The scripts you run are separate processes, and imported functions are not shared between them. One way out is to do your own error handling in the second script and to source it rather than execute it. With the second script changed to:
$ cat fail.sh
echo $BASHPID
set -e
fail "test"
set +e
Now running it
$ source fail.sh
11700
11700
bash: error: command not found
which is expected, as error is neither a defined function nor an available shell built-in. Observe that the process IDs are the same in this case.
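As an aside (my suggestion, not part of the answer above), bash can also export functions to child bash processes with export -f, which makes fail visible to bla.sh without sourcing anything inside it:
# Sketch, assuming both scripts run under bash:
source fail_def.sh   # define fail in the current shell
export -f fail       # bash-only: mark the function for export
bash bla.sh          # the child bash inherits fail via the environment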
I created a bash script to use for interactive screen capture and another one for window capture. I'm linking to these with keyboard shortcuts in Linux. The window capture script works without problems:
#!/bin/sh
scrot -u 'ScreenShot_%Y-%m-%d_at_%I:%M:%S-%p.png' -e 'mv $f ~/Pictures/scrot-screenshots'
But the script for area capture (user selects an area with a mouse drag) does not work, even though the command works in a terminal:
#!/bin/sh
scrot -s 'ScreenShot_%Y-%m-%d_at_%I:%M:%S-%p.png' -e 'mv $f ~/Pictures/scrot-screenshots'
What am I doing wrong? Or maybe a better question is what is preventing the script from letting me select an area of the screen?
I managed to get it working by adding a delay (2/10 of a second) to give the giblib resource time to load:
#!/bin/sh
sleep 0.2 ; scrot -s 'ScreenShot_%Y-%m-%d_at_%I:%M:%S-%p.png' -e 'mv $f ~/Pictures/scrot-screenshots'
How I found the solution:
I couldn't figure out how to get errors to output to a file, because running my script from a terminal didn't produce any errors: double-clicking the script ran it properly, and script > file 2>&1 in a terminal gave no errors for the same reason. I only got errors when I used the keyboard shortcuts (keybindings) attached to the second command from my original post. To see the error that finally led to the above solution, I installed:
apt-get install xbindkeys gconf-editor
I ran gconf-editor and used the Run Action to execute the script in the same manner it would be executed via the keybindings, but attached to a terminal output. That gave me the error output I needed to see:
giblib error: couldn't grab pointer:Resource temporarily unavailable
Which led me to this post for the tip:
https://bbs.archlinux.org/viewtopic.php?id=86507
For anyone for whom jtlindsey's answer did not solve the problem:
giblib error: couldn't grab pointer:Resource temporarily unavailable
Another solution could be this: just before calling scrot, run the command:
xdotool key XF86Ungrab
This should release the pointer, and the scrot command should work after it.
Note: the source claims that before executing the previous xdotool command, it may be necessary to run:
setxkbmap -option grab:break_actions
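Putting the two answers together, a hedged sketch of a keybinding-friendly area-capture script (combining the delay and the ungrab workarounds above) might look like:
#!/bin/sh
# Sketch; assumes scrot, xdotool and setxkbmap are installed.
setxkbmap -option grab:break_actions   # reportedly needed once per session
xdotool key XF86Ungrab                 # release any existing pointer grab
sleep 0.2                              # give giblib time to grab the pointer
scrot -s 'ScreenShot_%Y-%m-%d_at_%I:%M:%S-%p.png' -e 'mv $f ~/Pictures/scrot-screenshots'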
So this is a really weird error. I've tested it on two installs of GNU/Linux, namely Arch Linux and Linux Mint 17.1. I've got a bash script I'm working on, and it starts with an if-statement.
if which apt-get &> /dev/null; then x="foo"
elif which yum &> /dev/null; then x="bar"
elif which pacman &> /dev/null; then x="baz"
fi
It works fine on my Arch installation (with bash v4.3.33), but on my Mint installation (bash v4.3.11), it completes the if-statements without incident; then, after the if-statements, there's a prompt for user input and output from which immediately appears (even though I'm fairly sure I redirected it to /dev/null). It looks like:
Choose an option: /usr/bin/apt-get
where "Choose an option: " is my prompt. This doesn't occur on Arch, and when I switched the order around to
if which pacman &> /dev/null; then x="foo"
elif which apt-get &> /dev/null; then x="bar"
the trailing output after my prompt stopped appearing.
Is there something I'm missing here?
Alright guys, I figured out what was going on; it was just a dumb mistake on my part, but I really appreciate the help from dg99 and glenn jackman.
I noticed that if the script was executed in a graphical file manager (Nemo) with "Run in terminal" selected, the weird output didn't happen. This prompted some testing in a terminal, and I discovered that the issue resolved itself if I executed the script using ./script instead of sh script. I don't really know why that's the case, but it resolved the issue pretty cleanly.
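A plausible explanation (my note, not the original poster's): &> is a bash extension, and on Mint /bin/sh is dash, which parses which apt-get &> /dev/null as which apt-get & followed by an empty redirection > /dev/null, so which runs in the background with its output unredirected and the condition always succeeds. A quick way to see the difference, assuming both shells are installed:
dash -c 'which apt-get &> /dev/null'   # backgrounds which; its path is printed
bash -c 'which apt-get &> /dev/null'   # redirects stdout+stderr; prints nothing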
Hi, I'm trying to run a script that calls xclip in order to have a string ready to paste when I connect to the internet.
I have a script /etc/network/if-up.d/script that does execute when connecting (I made it append the date to a file successfully), but the xclip instruction seems not to work: there's nothing to paste. If I call the script manually by typing /etc/network/if-up.d/script in a console, it works perfectly.
If I try to launch a zenity message, it also doesn't appear when connecting. Again, if I do it by hand, it appears.
I also have an expect script that calls matlab (console mode); if I execute it manually it works, but if I call it from cron it freezes when calling the script.
It's driving me crazy, since it seems that only certain commands in a script can be executed when the system calls them automatically.
I've tried prefixing the instructions with nohup and appending &, but it still fails.
This is working as designed. If you search around you will see complicated ways to resolve this issue, or you can use xmessage as I describe here: Using Zenity in a root incron job to display message to currently logged in user
Easy option 1: xmessage (in the script)
MSSG="/tmp/mssg-file-${RANDOM}"   # temp file with a random suffix
echo -e " MESSAGE \n ==========\n Done with task, YEY. " > ${MSSG}
xmessage -center -file ${MSSG} -display :0.0   # show the file on the local X display
[[ -s ${MSSG} ]] && rm -f ${MSSG}   # clean up the temp file
Easy option 2: set the DISPLAY (then should work)
export DISPLAY=:0 && /usr/bin/somedirectory/somecommand
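For the if-up.d case specifically, the script runs as root without your X session's environment, so alongside DISPLAY you may also need the user's XAUTHORITY. A hedged sketch (the username, path, and string are placeholders):
#!/bin/sh
# Sketch for /etc/network/if-up.d/: runs as root, so point it at the
# logged-in user's X session; the paths below are assumptions.
export DISPLAY=:0
export XAUTHORITY=/home/youruser/.Xauthority
printf '%s' "string to paste" | xclip -selection clipboard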
The question is answered here for cron:
http://ubuntuforums.org/archive/index.php/t-105250.html
and here for if-up networking:
Bash script not working properly when run automatically