I'm logging in and out of a remote machine many times a day (through ssh) and I'd like to shorten the whole procedure a bit. I've added an alias to my .bashrc and .profile that looks like:
alias connect='ssh -XC username@remotemachine && cd /far/away/location/that/takes/time/to/get/to/;'
My problem is that when I type connect, I first end up in the directory in question (on my local machine) and only then does the ssh connection take place. How can this be? I thought that by using "&&" the second command would run only after the first one succeeded. Is it that after the ssh command succeeds, .profile/.bashrc are loaded anew before the second part of the alias executes?
For the ssh specifically, you're looking for the following:
ssh -t username@remotemachine "cd /path/you/want ; bash"
Using "&&" or even ";" normally will execute the commands in the shell that you're currently in. It's like if you're programming and make a function call and then have another line that you want to effect what happens in the function-- it doesn't work because it's essentially in a different scope.
For a sequence of commands, try this (using ;):
alias cmd='command1;command2;command3;'
Use of '&&' instead of ';' -
The && makes it only execute subsequent commands if the previous returns successful.
Related
I've been using letsencrypt to generate SSL certificates for my site, more specifically letsencrypt_webfaction. When I run this command in my project, it works
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
However, when I run the same command in a bash script, I get the error
generate_certificate.sh: line 2: letsencrypt_webfaction: command not found
I made sure I had all possible permissions on the bash script using chmod 777 generate_certificate.sh, but still nothing. On top of that, I have a bash script that runs right before it, which simply restarts Apache, and that works fine.
I read other Stack Overflow posts, such as this one, and tried running dos2unix script.sh, which ran successfully, but when I tried running the bash script again, it still didn't work.
Restart Apache Script
#!/bin/bash
../apache2/bin/./restart
#END
Generate SSL Script
#!/bin/bash
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
#END
I'm a Python developer and don't have much experience with Ruby, so excuse my ignorance, but the letsencrypt_webfaction command is a function in my bash profile.
~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
function letsencrypt_webfaction {
PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction $*
}
eval "$(rbenv init -)"
PATH=$PATH:$HOME/bin
export PATH
export PATH="$HOME/.rbenv/bin:$PATH"
export TMPDIR="/home/doc4design/src/tmp"
By default, shell functions are only available in the shell they were defined in; they're not inherited by subprocesses. Your .bash_profile is only run by the login shell, not shells that run as subprocesses (e.g. to run scripts).
Option 1: In bash, you can run export -f letsencrypt_webfaction in the defining shell (i.e. in your .bash_profile), and it'll be inherited by subprocesses (provided they're also running bash). A sketch follows these options.
Option 2: You can define the function in your .bashrc instead of .bash_profile, and since you run .bashrc from .bash_profile it'll get defined in all your bash shells.
Option 3: Just use the full command in the script. This would be my preference, since it makes the script more independent. Having a script depend on a shell function that's defined in a completely different place is fragile (as you're experiencing) and just a bit weird.
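For example, a minimal sketch of Option 1 (the function name comes from the question): add this line to your .bash_profile right after the function definition, and bash scripts such as generate_certificate.sh will then see the function:
export -f letsencrypt_webfaction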
While I'm at it, here are some general scripting recommendations:
In most contexts, you should put double-quotes around variable references (and strings that contain variable references) to avoid weird effects from word splitting and wildcard expansion. The right side of an assignment is one place it's ok to leave them off (e.g. PATH=$PATH:$HOME/bin and PATH="$PATH:$HOME/bin" are both ok), but I tend to recommend using quotes everywhere, as it's hard to keep track of where it's safe to leave them off and where it's dangerous. For the same reason, you should almost always use "$@" instead of $* (as in the letsencrypt_webfaction function).
shellcheck.net is really good at spotting errors like this, so I recommend running your shell scripts through it and acting on its suggestions.
Using the function keyword to define a function is nonstandard; the standard syntax is to use () after the function name, like this:
letsencrypt_webfaction() {
PATH="$PATH:$GEM_HOME/bin" GEM_HOME="$HOME/.letsencrypt_webfaction/gems" RUBYLIB="$GEM_HOME/lib" ruby2.2 "$HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction" "$#"
}
The function I just gave still may not work right, since it (re)defines GEM_HOME after using it. The entire line gets parsed (and pre-existing variable definitions expanded), then the variables defined as prefixes to the command get included in the environment of the command. This means that the ruby script gets the updated value of GEM_HOME, but the updated values of PATH and RUBYLIB are based on whatever value GEM_HOME had when the function was run. I'm pretty sure this is not what you intended.
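One way around that (a sketch, not tested against the asker's setup) is to compute the directory once in a local variable, so every expansion on the command line sees the same value:
letsencrypt_webfaction() {
    # Compute the gem directory once; the prefix assignments below can then all use it safely.
    local gem_home="$HOME/.letsencrypt_webfaction/gems"
    PATH="$PATH:$gem_home/bin" GEM_HOME="$gem_home" RUBYLIB="$gem_home/lib" ruby2.2 "$gem_home/bin/letsencrypt_webfaction" "$@"
}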
In the restart apache script, you use a relative path to the restart command. This will be evaluated relative to the working directory of the process that runs the script, not relative to the script's location. This could be anywhere.
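One common fix is a sketch like the following, which assumes (as the relative path in the question suggests) that the script lives in a directory that sits next to the apache2 tree:
#!/bin/bash
# Resolve the relative path against this script's own directory, not the caller's working directory.
cd "$(dirname "$0")" || exit 1
../apache2/bin/restart
#END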
I'd like to be able to "spoof" certain commands on my machine, actually invoking them on a remote system. For example, whenever I run:
cmd options
I'd like the actual command to be:
ssh user@host cmd options
Ideally I'd like to have a folder called spoof, add it to my PATH, and have an executable in there called cmd which does the spoofing. If I have a lot of commands, this could get tedious. Anyone have ideas of a good way to go about this? Such that I can add and remove a lot of commands in the future? And, I'd like to be able to pass all the arguments exactly (or as exact as possible) and every single command I want to spoof would just have the ssh user#host in front of it.
The reason for this is I'm running a container (specifically singularity) on my machine, and there are certain commands I don't really want to containerize, but still want to run from within the container. I've found I can get the functionality I want by just appending the ssh in front of it. Examples are sbatch and matlab which are a pain to containerize and I'm fine with just using ssh to call them. Files that these programs use are written to a bind point so the host machine can see them just fine.
The following script can be hardlinked under all the names of commands you wish to transparently proxy:
#!/usr/bin/env bash
printf -v str '%q ' "${0##*/}" "$@"
ssh host "$str"
SSH combines all its arguments into a single string, which is then executed by a remote shell. To ensure that the remote arguments are identical to the local ones, the values need to be escaped; otherwise, somecommand "hello world" and somecommand "hello" "world" would be represented identically over the wire.
In an appropriately extended printf (including both the bash and ksh implementations), %q is replaced with an escaped form of the corresponding value, which will be evaluated back to the original (literal) text if it is interpreted by a shell later.
printf -v varname stores the output of printf in a variable named varname without the overhead/inefficiency of a command substitution. (In ksh93, varname=$(printf ...) is optimized to skip the subshell overhead, so this is not necessary there.)
$0 evaluates to argv[0], which is by convention the name of the command currently being run. (This can be overridden, but you trust your users to behave reasonably... right?)
${0##*/} is a parameter expansion which returns only content after the last / in $0 (should it in fact contain any slashes; otherwise, the original value is used unmodified).
"$#" refers to the exact argument vector passed to your script.
I am unable to run a remote shell script located on the "admin" server with arguments.
ssh koliwada@admin "~/bin/addautomaps $groupentry $homeentry $ticket"
"groupentry" and "homeentry" are as follows
user1:*:52940:OWNER-user1
user1 -rw,intr,hard,rsize=32768,wsize=32768 basinas01:/ifs/basinas01/home/&
The script is located at ~/bin/addautomaps on the admin server.
I see the error:
tput: No value for $TERM and no -T specified
I also see that the arguments are not passed correctly.
I also tried using "ssh -t ...", but that doesn't work.
Answering your questions in reverse order (or most serious to least serious).
Your problem with the arguments (with spaces) not being passed correctly is that while you are quoting the command string locally, you aren't quoting the values when they are actually run by the remote machine.
That is, you are generating a single string with the variables expanded, but nothing tells the remote system not to split the expanded values on spaces.
The fix is that you need to quote the arguments inside the command for the remote shell, as well as quoting the entire string for ssh.
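For example, a sketch using bash's printf %q (the same escaping trick as in the proxy-script answer above; the variable names come from the question):
# Escape each value so the remote shell sees it as a single argument.
printf -v args '%q ' "$groupentry" "$homeentry" "$ticket"
ssh koliwada@admin "~/bin/addautomaps $args"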
My answer here might help explain some (it is a similar issue).
The tput "issue" is likely just a warning that you can probably ignore if you don't care about the colorized/stylized/etc. output that tput is likely being used to create. You could also try forcing a value for $TERM on the remote side like ssh ... "export TERM=dumb; ..." or something like that to silence it.
If I try to run multiple commands, and one of them is an SSH command that requires a password, then once I type said password, the rest of the commands do not execute.
Before you tell me to set up an SSH key: ironically, the process is TO set up an SSH key, just by pasting in the commands.
If I lost you somewhere, let me know and I will re-word it. Any ideas?
You can execute multiple commands serially by separating them with the && operator. It will continue executing the next command only if the previous command(s) executed successfully.
Example:
cat /proc/cpuinfo && /bin/true
Example of the second command not executing because the first command failed:
/bin/false && cat /proc/cpuinfo
(This is assuming you're using the bash shell)
If you don't care if the command executed successfully, you can separate them with a semicolon ;.
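Example of the second command executing even though the first command failed:
/bin/false ; cat /proc/cpuinfo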
I know how to configure aliases in bash, but is there a way to configure an alias for a sequence of commands?
I.e., say I want one command to change to a particular directory and then run another command.
In addition, is there a way to set up a command that runs "sudo mycommand" and then enters the password? In the MS-DOS days I'd be looking for a .bat file, but I'm unsure of the Linux (or in this case Mac OS X) equivalent.
For chaining a sequence of commands, try this:
alias x='command1;command2;command3;'
Or you can do this:
alias x='command1 && command2 && command3'
The && makes it only execute subsequent commands if the previous returns successful.
Also for entering passwords interactively, or interfacing with other programs like that, check out expect. (http://expect.nist.gov/)
You mention BAT files, so perhaps what you want is to write a shell script. If so, just enter the commands you want, line by line, into a file like so:
command1
command2
and ask bash to execute the file:
bash myscript.sh
If you want to be able to invoke the script directly without typing "bash", then add the following line as the first line of the file:
#! /bin/bash
command1
command2
Then mark the file as executable:
chmod 755 myscript.sh
Now you can run it just like any other executable:
./myscript.sh
Note that Unix doesn't really care about file extensions. You can simply name the file "myscript" without the ".sh" extension if you like. It's that special first line that is important. For example, if you want to write your script in the Perl programming language instead of bash, the first line would be:
#! /usr/bin/perl
That first line tells your shell what interpreter to invoke to execute your script.
Also, if you now copy your script into one of the directories listed in the $PATH environment variable then you can call it from anywhere by simply typing its file name:
myscript.sh
Even tab-completion works. That is why I usually include a ~/bin directory in my $PATH, so that I can easily install personal scripts. And best of all, once you have a bunch of personal scripts that you are used to having, you can easily port them to any new Unix machine by copying your personal ~/bin directory.
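A minimal sketch of that setup (the script name is just an example):
mkdir -p ~/bin
cp myscript.sh ~/bin/
export PATH="$HOME/bin:$PATH"    # usually done once in ~/.bashrc or ~/.profile
myscript.sh                      # now runs from any directory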
It's probably easier to define functions than aliases for these types of things; it keeps things more readable if you want to do more than a command or two:
In your .bashrc
perform_my_command() {
    pushd /some_dir     # remember where we were and switch directory
    my_command "$@"     # pass all arguments through unchanged
    popd                # return to the original directory
}
Then on the command line you can simply do:
perform_my_command my_parameter my_other_parameter "my quoted parameter"
You could do anything you like in a function, call other functions, etc.
You may want to have a look at the Advanced Bash Scripting Guide for in depth knowledge.
For the alias you can use this:
alias sequence='command1 -args; command2 -args;'
or if the second command must be executed only if the first one succeeds use:
alias sequence='command1 -args && command2 -args'
Your best bet is probably a shell function instead of an alias if the logic becomes more complex or if you need to add parameters (bash aliases don't take parameters; anything typed after the alias is simply appended to its expansion).
This function can be defined in your .profile or .bashrc. The subshell is to avoid changing your working directory.
function myfunc {
( cd /tmp; command )
}
then from your command prompt
$ myfunc
For your second question, you can just add your command to /etc/sudoers (if you are completely sure of what you are doing):
myuser ALL = NOPASSWD: \
/bin/mycommand
Apropos multiple commands in a single alias: you can use one of the logical operators to combine them. Here's one that switches to a directory and then does an ls on it:
alias x="cd /tmp && ls -al"
Another option is to use a shell function. These work in sh/zsh/bash; I don't know enough about other shells to be sure they work there too.
As for the sudo thing, if you want that (although I don't think it's a good idea), the right way to go is to alter the /etc/sudoers file to get what you want.
You can embed the function declaration followed by the function in the alias itself, like so:
alias my_alias='f() { do_stuff_with "$@"; }; f'
The benefit of this approach over just declaring the function by itself is that you can have peace of mind that your function is not going to be overridden by some other script you're sourcing (or using .), which might use its own helper under the same name.
E.g., suppose you have a script init-my-workspace.sh that you're calling like . init-my-workspace.sh or source init-my-workspace.sh, whose purpose is to set or export a bunch of environment variables (e.g., JAVA_HOME, PYTHON_PATH, etc.). If you had declared my_alias as a plain function and that script happened to define its own function with the same name, you'd be out of luck, as the latest function declaration within the same shell instance wins.
Conversely, aliases have a separate namespace, and even in case of a name clash they are looked up first. Therefore, for customization relevant to interactive usage, you should only ever use aliases.
Finally, note that the practice of putting all the aliases in the same place (e.g., ~/.bash_aliases) enables you to easily spot any name clashes.
You can also write a shell function; an example for a "cd" and "ls" combo follows.
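A minimal sketch (the function name cl is illustrative):
cl() {
    # change to the given directory, then list its contents
    cd "$1" && ls -al
}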