Odd, long space in bash when writing commands - linux

I have added some custom parameters to personalize my bash and I am experiencing some unexpected behavior, so I think I might have done it wrong. In the code block below are the custom parameters:
# Custom parameters
tprompt () {
    local bold=$(tput bold)
    local red=$(tput setaf 1)
    local green=$(tput setaf 2)
    local magenta=$(tput setaf 5)
    local cyan=$(tput setaf 6)
    local plain=$(tput sgr0)
    printf -v PS1 '%s' \
        '\[\033[1;36m\]' \
        '\u\[\033[1;31m\]' \
        '@\[\033[1;32m\]' \
        '\h:\[\033[1;35m\]' \
        '\w\[\033[1;31m\]' \
        '\$\[\033[0m\] '
}
tprompt
tput () {
    printf '\\['
    command tput "$@"
    printf '\\]'
}
Everything works well, but it behaves as shown in the pic below when a path is too long:
It might also be worth mentioning that I am using ble.sh.
EDIT:
Output of echo $SHELL: /bin/bash
Output of declare -p PS1:
declare -- PS1="\\[\\033[1;36m\\]\\u\\[\\033[1;31m\\]@\\[\\033[1;32m\\]\\h:\\[\\033[1;35m\\]\\w\\[\\033[1;31m\\]\\\$\\[\\033[0m\\] "

Thank you for the report! I'm the author of ble.sh. This was a bug in ble.sh's coordinate calculation. I have fixed it in the latest push. Could you please update ble.sh with the following command?
$ ble-update
The bug was introduced in 0.4.0-devel3 commit 4fa139ad (2021-03-21) and fixed in commit 9badb5f (2021-06-11, just now). (I actually noticed this problem on 2021-05-16 but somehow forgot to fix it.) The versions between these commits are affected by the bug; everyone using the master branch of ble.sh should update it.


Bash module load function not working as expected

I have a batch script that I eventually want to execute on a cluster via condor_submit. The script needs to load some modules via "module load matlab/R2020a". However, nothing works.
The script looks like this:
#!/bin/bash
module load cudnn/8.2.0-cu11.x
module load cuda/11.2
module load matlab/R2020a
echo $PATH
echo $SHELL
#Check matlab version
echo_and_run() { echo "$*" ; "$@" ; }
matlab -e | grep "MATLAB="
echo_and_run matlab -e | grep "MATLAB="
...
setup input etc.
...
echo_and_run matlab -nodisplay -batch "......matlab commands"
When I run it from my home shell it gives me:
...
/bin/bash
MATLAB=/is/software/matlab/linux/R2014a
MATLAB=/is/software/matlab/linux/R2014a
...
Neither of which is correct.
When executing this in my local shell (source ./scriptname.sh), the output is even more confusing:
...
/bin/bash
MATLAB=/is/software/matlab/linux/R2020a
MATLAB=/is/software/matlab/linux/R2014a
...
So the matlab version updates, but only for the non-echo_and_run execution (the first call). In the actual call it is still the default 2014 version.
What in the world is going on? I checked the $PATH variables and they are identical to those in my running shell. I tried sourcing ~/.bashrc at the top of the script, with no difference. When I type "type module" I can see that it is a function:
module is a function
module ()
{
    _module_raw "$@" 2>&1
}
Some older posts mention that I should run the script with "source" or "." (for sh), but I cannot do that, since the script is eventually called by condor_submit. Alternatively, I should find the file that defines module; however, I do not know what file (besides ~/.bashrc) that could be.
Old post: https://unix.stackexchange.com/questions/194893/why-cant-i-load-modules-while-executing-my-bash-script-but-only-when-sourcing
Edit
I am currently trying everything locally (in the login shell), which could differ from the execution shell, but even here I get this weird behaviour.
Edit II:
+ type -a _module_raw
_module_raw is a function
_module_raw ()
{
    unset _mlshdbg;
    if [ "${MODULES_SILENT_SHELL_DEBUG:-0}" = '1' ]; then
        case "$-" in
            *v*x*)
                set +vx;
                _mlshdbg='vx'
            ;;
            *v*)
                set +v;
                _mlshdbg='v'
            ;;
            *x*)
                set +x;
                _mlshdbg='x'
            ;;
            *)
                _mlshdbg=''
            ;;
        esac;
    fi;
    unset _mlre _mlIFS;
    if [ -n "${IFS+x}" ]; then
        _mlIFS=$IFS;
    fi;
    IFS=' ';
    for _mlv in ${MODULES_RUN_QUARANTINE:-};
    do
        if [ "${_mlv}" = "${_mlv##*[!A-Za-z0-9_]}" -a "${_mlv}" = "${_mlv#[0-9]}" ]; then
            if [ -n "`eval 'echo ${'$_mlv'+x}'`" ]; then
                _mlre="${_mlre:-}${_mlv}_modquar='`eval 'echo ${'$_mlv'}'`' ";
            fi;
            _mlrv="MODULES_RUNENV_${_mlv}";
            _mlre="${_mlre:-}${_mlv}='`eval 'echo ${'$_mlrv':-}'`' ";
        fi;
    done;
    if [ -n "${_mlre:-}" ]; then
        eval `eval ${_mlre}/usr/bin/tclsh8.6 /usr/lib/x86_64-linux-gnu/modulecmd.tcl bash '"$@"'`;
    else
        eval `/usr/bin/tclsh8.6 /usr/lib/x86_64-linux-gnu/modulecmd.tcl bash "$@"`;
    fi;
    _mlstatus=$?;
    if [ -n "${_mlIFS+x}" ]; then
        IFS=$_mlIFS;
    else
        unset IFS;
    fi;
    unset _mlre _mlv _mlrv _mlIFS;
    if [ -n "${_mlshdbg:-}" ]; then
        set -$_mlshdbg;
    fi;
    unset _mlshdbg;
    return $_mlstatus
}
The script output provided does not show whether there is an issue at the Environment Modules level.
Adding a module list command in your script after the 3 module load commands may help to determine whether the module function has properly loaded your environment or not.
In some situations, such as when running a script on a cluster through a batch scheduler, it is good to source the module initialization script at the start of the script, to ensure the module function is defined. It seems that you are running a Debian-like system, so the initialization script may be sourced with:
. /usr/share/modules/init/bash
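To make this concrete, here is a sketch of how the top of such a batch script might look. The init-script paths are common defaults (Debian's /usr/share/modules/init/bash, RHEL's /usr/share/Modules/init/bash) and may well differ on your cluster, so treat them as assumptions.

```shell
#!/bin/bash
# Ensure the `module` shell function exists in this non-interactive shell.
# (Assumption: one of these init scripts is installed; adjust for your site.)
if ! type module >/dev/null 2>&1; then
    for init in /usr/share/modules/init/bash /usr/share/Modules/init/bash; do
        if [ -r "$init" ]; then
            . "$init"
            break
        fi
    done
fi

if type module >/dev/null 2>&1; then
    have_module=yes
    module load matlab/R2020a
else
    have_module=no
    echo "module function still undefined; check your site's init script path" >&2
fi
```

With this guard in place, the script fails loudly instead of silently running against whatever default MATLAB is on $PATH.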

What's the equivalent to ${var:-defaultvalue} in fish?

Hello, I am trying to translate my .bashrc to fish format. I am almost done; most of it is clear from the documentation, but this part is giving me a headache. It is so my gnupg works with my yubikey, ssh, etc.
The fish version is the latest, 3.0, under Arch GNU/Linux.
Original in bash:
# Set SSH to use gpg-agent
unset SSH_AGENT_PID
if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]; then
export SSH_AUTH_SOCK="/run/user/$UID/gnupg/S.gpg-agent.ssh"
fi
echo "UPDATESTARTUPTTY" | gpg-connect-agent > /dev/null 2>&1
Mine half converted into fish:
set -e SSH_AGENT_PID
if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]
set -x SSH_AUTH_SOCK="/run/user/$UID/gnupg/S.gpg-agent.ssh"
end
echo "UPDATESTARTUPTTY" | gpg-connect-agent > /dev/null 2>&1
So, as you see above, I have so far converted the stdout/stderr redirection and the unset variable (with set -e). The error I am getting is a bit more obscure to me:
~/.config/fish/config.fish (line 33): ${ is not a valid variable in fish.
if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]
^
from sourcing file ~/.config/fish/config.fish
called during startup
Any help will be much appreciated.
BTW, a migration guide would be nice too :) are there any out there?
[edit] OK, got this working thanks to the response below. Now all my bash environment (profile, bashrc, etc.) is translated to fish, and I am using it solely as my shell, 100%.
You should not change your login shell until you have a much better understanding of fish syntax and behavior. For example, in fish the equivalent of $$ is %self or $fish_pid depending on which fish version you are using. You should always specify the version of the program you are having problems with.
Assuming you're using fish 2.x that would be written as
if not set -q gnupg_SSH_AUTH_SOCK_by
   or test $gnupg_SSH_AUTH_SOCK_by -ne %self
    set -gx SSH_AUTH_SOCK "/run/user/$UID/gnupg/S.gpg-agent.ssh"
end
Also, notice that there is no equal-sign between the var name and value in the set -x.
Since ${var:-value} expands to value if $var is unset or empty, you can always replace it by writing your code out the long way:
begin
    if test -n "$gnupg_SSH_AUTH_SOCK_by"
        set result "$gnupg_SSH_AUTH_SOCK_by"
    else
        set result 0
    end
    if [ "$result" -ne %self ]
        set -x SSH_AUTH_SOCK "/run/user/$UID/gnupg/S.gpg-agent.ssh"
    end
    set -e result
end
Note that I don't use (a) endorse, (b) condone the use of, or (c) fail to hold unwarranted prejudices against users of, fish. Thus, my advice is very much suspect, and it's likely that there are considerably better ways to do this.
I had a similar question, related to XDG_* variables.
var1="${XDG_CACHE_HOME:-$HOME/.cache}"/foo
var2="${XDG_CONFIG_HOME:-$HOME/.config}"/foo
var3="${XDG_DATA_HOME:-$HOME/.local/share}"/foo
some-command "$var1" "$var2" ...
What I found to be the best alternative is to simply set universal variables once for the defaults:
set -U XDG_CACHE_HOME ~/.cache
set -U XDG_CONFIG_HOME ~/.config
set -U XDG_DATA_HOME ~/.local/share
Then, in fish config file(s) or scripts, simply use "$XDG_CONFIG_HOME"/.... An exported environment variable, if set, will override the universal variable; otherwise the universal variable is there as a default/fallback. If the universal variable is used, it is not exported to child processes, while an exported environment variable is, which makes this a full equivalent of the bash/zsh parameter expansion.
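For comparison, the bash behavior being emulated: ${var:-default} yields the default only when the variable is unset or empty, and it never modifies the variable itself. A quick demonstration:

```shell
#!/bin/bash
unset XDG_CONFIG_HOME
fallback="${XDG_CONFIG_HOME:-$HOME/.config}"    # unset -> the default is used

XDG_CONFIG_HOME=/tmp/conf
explicit="${XDG_CONFIG_HOME:-$HOME/.config}"    # set -> its own value is used

echo "$fallback"    # prints the value of $HOME/.config
echo "$explicit"    # prints /tmp/conf
```

The fish universal-variable approach reproduces exactly this fallback order: environment value first, stored default second.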

Bash - File local variables

I just wrote a small file to set my PS1 variable. This file is sourced from my .bashrc. Now I have a couple of questions regarding this approach.
But first the code:
setprompt:
# Normal variables
BOLD="$(tput bold)"
RESET="$(tput sgr0)"
RED="$(tput setaf 1)"
GREEN="$(tput setaf 2)"
YELLOW="$(tput setaf 3)"
BLUE="$(tput setaf 4)"
PINK="$(tput setaf 5)"
CYAN="$(tput setaf 6)"
GRAY="$(tput setaf 7)"
# Make non-printable variables
PROMPT_BOLD="\[$BOLD\]"
PROMPT_RESET="\[$RESET\]"
PROMPT_RED="\[$RED\]"
PROMPT_GREEN="\[$GREEN\]"
PROMPT_YELLOW="\[$YELLOW\]"
PROMPT_BLUE="\[$BLUE\]"
PROMPT_PINK="\[$PINK\]"
PROMPT_CYAN="\[$CYAN\]"
PROMPT_GRAY="\[$GRAY\]"
# Other variables
USERNAME="\u"
FULL_HOSTNAME="\H"
SHORT_HOSTNAME="\h"
FULL_WORKING_DIR="\w"
BASE_WORKING_DIR="\W"
# Throw it together
FINAL="${PROMPT_RESET}${PROMPT_BOLD}${PROMPT_GREEN}"
FINAL+="${USERNAME}@${SHORT_HOSTNAME} "
FINAL+="${PROMPT_RED}${FULL_WORKING_DIR}\$ "
FINAL+="${PROMPT_RESET}"
# Export variable
export PS1="${FINAL}"
.bashrc:
..
source ~/.dotfiles/other/setprompt
..
My questions:
Will this approach slow down my bash startup? Should I just write one ugly, unreadable line of code instead of these variable definitions and the sourcing?
I noticed that the variables defined in setprompt are still defined in my .bashrc. I don't like this behaviour, since it's not obvious to the editor of .bashrc that variables get defined when sourcing setprompt. Is this just how source behaves? What can I do about it?
Edit:
This is the approach I use now (recommended by tripleee):
getPrompt.sh:
#!/bin/bash
getPrompt () {
    # Bold/Reset
    local PROMPT_BOLD="\[$(tput bold)\]"
    local PROMPT_RESET="\[$(tput sgr0)\]"
    # Colors
    local PROMPT_RED="\[$(tput setaf 1)\]"
    local PROMPT_GREEN="\[$(tput setaf 2)\]"
    # Miscellaneous
    local USERNAME="\u"
    local SHORT_HOSTNAME="\h"
    local FULL_WORKING_DIR="\w"
    # Print for later use
    printf "%s%s%s%s" "${PROMPT_RESET}${PROMPT_BOLD}${PROMPT_GREEN}" \
        "${USERNAME}@${SHORT_HOSTNAME} " \
        "${PROMPT_RED}${FULL_WORKING_DIR}\$ " \
        "${PROMPT_RESET}"
}
.bashrc:
source ~/.dotfiles/bash/getPrompt.sh
PS1=$(getPrompt)
Keeping things human-readable is probably a good thing, and if performance is a problem, perhaps you can control whether this gets executed at all if your prompt is already set. As a first step, maybe move the call to .bash_profile instead of .bashrc.
You can either unset all the variables at the end of the script, or refactor the script so that it runs as a function, or as a separate script (i.e. call it instead of source it).
If you put it all in a function, the function will need to declare all the variables local.
If you run this as an external script, you will need to change it so that it prints the final value. Then you can call it like
PS1=$(setprompt)
without any side effects. (Perhaps you would want to do this with a function, too, just to keep it clean and modular.)
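For completeness, the unset-at-the-end variant mentioned first might look like this. This is a sketch, not the asker's file; the 2>/dev/null guards are only there so it also sources cleanly on terminals without color support:

```shell
# setprompt: build PS1, then remove every helper variable again
BOLD="$(tput bold 2>/dev/null)"
RESET="$(tput sgr0 2>/dev/null)"
GREEN="$(tput setaf 2 2>/dev/null)"
RED="$(tput setaf 1 2>/dev/null)"

PS1="\[$RESET\]\[$BOLD\]\[$GREEN\]"
PS1+="\u@\h "
PS1+="\[$RED\]\w\\\$ "
PS1+="\[$RESET\]"

# clean up so nothing leaks into the shell that sourced this file
unset BOLD RESET GREEN RED
```

This keeps the readability of named variables while leaving only PS1 behind after sourcing.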

Colourful makefile info command

Usually I use echo -e "\e[1;32mMessage\e[0m" to print colourful messages from a makefile. But now I want to print a message inside an ifndef block, so I am using the $(info Message) style. Is it possible to make this kind of message colourful?
Yes. You can use a tool like tput to output the literal escape sequences needed instead of using echo -e (which isn't a good idea anyway) to do the same thing.
For example:
$(info $(shell tput setaf 1)Message$(shell tput sgr0))
Though that requires two shells to be spawned and two external commands to be run, as opposed to the echo (or similar) methods in a recipe context, so it is comparatively more expensive.
You could (and should if you plan on using the colors in more than one place) save the output from tput in a variable and then just re-use that.
red:=$(shell tput setaf 1)
reset:=$(shell tput sgr0)
$(info $(red)Message$(reset))
$(info $(red)Message$(reset))

GhostScript Batch Mode?

I'm using GhostScript to convert PDFs to PNGs, the problem is that for each page I'm calling:
gs -sDEVICE=pnggray -dBATCH -dNOPAUSE -dFirstPage=10 -dLastPage=10 -r600 -sOutputFile=image_10.png pdf_file.pdf
Which is not good. I want to pass -dFirstPage=10 -dLastPage=30, for example, and make GhostScript automatically extract each page into a SEPARATE png file WITH the PAGE-NUMBER in the filename, without starting it again with a different sOutputFile...
I know it's probably something simple but I'm missing it...
Also, it would be great if someone can tell me what parameter I need to pass to make ghostscript run in total silence, without any output to the console.
EDIT: Adding %d to the output parameter adds the number of the run, instead of the number of the page. For example:
-dFirstPage=10 -dLastPage=15 -sOutputFile=image_%d.png
results in:
image_1.png, image_2.png, image_3.png etc... instead of:
image_10.png, image_11.png, image_12.png ...
Save this as a file
#!/bin/bash
case $# in [!3] ) printf "usage : ${0##*/} stPage endPage File\n" >&2 ; exit 1 ;; esac
stPage=$1
endPage=$2
(( endPage ++ ))
file=$3
i=$stPage
while (( i < endPage )) ; do
    gs -sstdout=/dev/null -sDEVICE=pnggray -dBATCH -dNOPAUSE -dPage=$i -r600 -sOutputFile=image_$i.png ${file}
    (( i ++ ))
done
Check in the GhostScript manual to see if there is a -dPage=${num} option; otherwise use
-dFirstPage=${i} -dLastPage=${i} .
Then make it executable: chmod 755 batch_gs.sh
Finally run it with arguments
batch_gs.sh 3 5 fileName
(Lightly tested).
I hope this helps.
Unfortunately, what you want to do is not possible. See also my answers here and here.
If you want to do all PNG conversions in one go (without restarting Ghostscript for each new page), you have to live with the fact that the %d macro always numbers the first output page as 1, but of course you will gain much better performance.
If you do not like this naming convention in your end result, you have to add a second step that renames the resulting files to their final names.
Assuming your initial output files are named image_1.png ... image_15.png, but you want them named image_25.png ... image_39.png, your core command to do this would be:
for i in $(seq 1 15); do
    mv image_${i}.png image_$(( ${i} + 24 )).png
done
Note, this will go wrong if the two ranges of numbers intersect, as the command would then overwrite some of your not-yet-renamed input files. To be safe, don't use mv but cp to make a copy of the new files in a temporary subdirectory first:
for i in $(seq 1 15); do
    cp -a image_${i}.png temp/image_$(( ${i} + 24 )).png
done
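An alternative to the temporary directory (my own addition, not part of the answer above): when shifting the numbers upward, renaming in descending order also avoids collisions, because each target number is already out of the still-to-be-renamed range by the time it is used. A self-contained sketch using scratch files:

```shell
#!/bin/bash
# Demonstration in a scratch directory: rename image_1..15.png
# to image_25..39.png without a temporary copy.
dir=$(mktemp -d)
cd "$dir" || exit 1
for i in $(seq 1 15); do touch "image_${i}.png"; done

offset=24
for i in $(seq 15 -1 1); do        # descending order: no overwrites
    mv "image_${i}.png" "image_$(( i + offset )).png"
done

ls
```

The same trick works in reverse (ascending order) when shifting numbers downward.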
