I need to quit the lftp automatically after some local shell commands are executed. E.g. I need to find some files and exit.
lftp -e "!find . -maxdepth 3 -name \"index.*\" -type f;exit" sftp://user:pass#mysite.com:22
When this command is executed, it keeps me inside the lftp environment, so I have to send an extra "bye" command to leave the app. But I need this to happen automatically once the shell command has run.
I tried
lftp -e "!find . -maxdepth 3 -name \"index.*\" -type f;exit;bye" sftp://user:pass#mysite.com:22
but it doesn't work (it seems "bye" is executed in the local shell context rather than the lftp shell).
Is there any way to exit from local shell mode back to lftp command mode and then perform "bye" within the same session?
Note that what you're trying won't have a useful effect -- the local shell is local to where you're running lftp, so you're running find on the same machine as the client, not the server. There's thus no reason to run find inside lftp as opposed to outside of it.
Getting past that, though, and answering the literal question -- you can split your commands across multiple lines; $'\n' is a literal for a newline, or newlines can be literally added to a single-line string. Thus:
lftp -c '
open sftp://user:pass@mysite.com:22
!find . -maxdepth 3 -name "index.*" -type f
' </dev/null
There's no need for the exit or bye as using -c rather than -e causes the connection to be closed and lftp to automatically exit after all commands are run. Using </dev/null also ensures that even if you did use -e, attempts to read further commands from stdin would return an EOF (and thus likewise indicate an exit).
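If you prefer a one-liner, the same -c invocation can be written with the $'\n' form mentioned above (just a sketch, reusing the placeholder credentials from the question):
lftp -c $'open sftp://user:pass@mysite.com:22\n!find . -maxdepth 3 -name "index.*" -type f'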
I've also observed that, somehow, after executing a local command, lftp will run a local version of the next command even if 'local' was not specified for that second command. Normally this reverts back to sending commands to the remote site on the third command. However, when I walk away from the terminal and come back to issue a third command much later, that command and all subsequent ones also apply locally, as if the connection had been lost (or had never existed), and in this situation, unless I reconnect to the site, a command such as 'bye' is simply not possible.
What I do to work around this is to define a bookmark early on in the connection process that I can reuse later and make sure is open prior to issuing 'bye' - which as you said, should close the connection / the process / the application and/or window.
So initially, issue something like 'bookmark save remote'. And just prior to leaving, issue something like 'open remote' followed by 'bye', and that should work.
NB: Give your bookmarks unique names instead of 'remote' if you wish to connect to multiple servers and plan to do concurrent work, as all sessions will most likely share the same set of lftp bookmarks.
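For illustration, a rough sketch of that workflow in an interactive session, using bookmark add (the exact bookmark subcommands may differ between lftp versions, and 'mysite' is just a placeholder name). The first line is run from your shell; the rest are typed at the lftp prompt:
lftp sftp://user:pass@mysite.com:22
bookmark add mysite
!find . -maxdepth 3 -name "index.*" -type f
open mysite
bye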
I'm using rsync to transfer files from a server to another server (both owned by me), my only problem is that these files are over 50GB and I got a ton of them to transfer (Over 200 of them).
Now I could just open multiple tabs and run rsync or add the "&" at the end of the script to execute it in the background.
So my question is: how can I execute this command in the background and, when it's done transferring, have a message shown in the terminal window that executed the script?
(rsync -av --progress [FOLDER_NAME] [DESTINATION]:[PATH] &) && echo 'Finished'
I know that's completely wrong, but I need to use & to run it in the background and && to run echo after rsync finishes.
Next to the screen-based solution, you could use xargs tool, too.
echo '/srcpath1 host1 /dstpath1
/srcpath2 host2 /dstpath2
/srcpath3 host3 /dstpath3'| \
xargs -P 5 --max-lines=1 bash -c 'rsync -av --progress "$1" "$2:$3"' _
xargs reads its input from stdin and executes a command for every word or line; in this case, for every line.
What makes it very useful: it can run its child processes in parallel. In this configuration, xargs always keeps 5 child processes running in parallel; that number can be 1 or practically unlimited.
xargs exits once all of its children have finished, and it handles Ctrl-C, child process management, etc. gracefully and fault-tolerantly.
Instead of the echo, the input of xargs can come from a file, from an earlier command in a pipe, or from a for or while loop.
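For example, if the same three-column lines live in a file (say, a hypothetical transfers.txt), the xargs invocation above can read it directly:
xargs -P 5 --max-lines=1 bash -c 'rsync -av --progress "$1" "$2:$3"' _ < transfers.txt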
You could use GNU screen for that; screen can monitor output for silence and for activity. An additional benefit: you can close the terminal and reattach to the screen session later. Even better, if you run screen on the server, you can shut down or reboot your own machine and the processes inside screen will keep executing.
Well, to answer your specific question, your invocation:
(rsync ... &) && echo 'Finished'
creates a subshell - the ( ... ) bit - in which rsync is run in the background, which means the subshell will exit as soon as it has started rsync, not after rsync finishes. The && echo ... part then notices that the subshell has exited successfully and does its thing, which is not what you want, because rsync is most likely still running.
To accomplish what you want, you need to do this:
(rsync ... && echo 'Finished') &
That will put the subshell itself in the background, and the subshell will run rsync and then echo. If you need to wait for that subshell to finish at some point later in your script, simply insert a wait at the appropriate point.
You could also structure it this way:
rsync ... &
# other stuff to do while rsync runs
wait
echo 'Finished'
Which is "better" is really just a matter of preference. There's one minor difference in that the && will run echo only if rsync doesn't report an error exit code - but replacing && with ; would make the two patterns more equivalent. The second method makes the echo synchronous with other output from your script, so it doesn't show up in the middle of other output, so it might be slightly preferable from that respect, but capturing the exit condition of rsync would be more complicated if it was necessary...
I'm logging in and out of a remote machine many times a day (through ssh) and I'd like to shorten a bit the whole procedure. I've added an alias in my .bashrc and .profile that looks like:
alias connect='ssh -XC username@remotemachine && cd /far/away/location/that/takes/time/to/get/to/;'
My problem is that when I type connect, I first get to the location in question (on my local machine) and then the ssh connection takes place. How can this be? I thought that by using "&&" the second command would run only after the first one succeeded. Is it that after the ssh command succeeds, .profile/.bashrc are loaded anew before the second part of the alias is executed?
For the ssh specifically, you're looking for the following:
ssh -t username@remotemachine "cd /path/you/want ; bash"
Using "&&" or even ";" normally will execute the commands in the shell that you're currently in. It's like if you're programming and make a function call and then have another line that you want to effect what happens in the function-- it doesn't work because it's essentially in a different scope.
For a sequence of commands:
Try this (using ;):
alias cmd='command1;command2;command3;'
Use of '&&' instead of ';' -
The && makes it only execute subsequent commands if the previous returns successful.
I think it's related to the parent process creating a new subprocess that does not have a tty. Can anyone explain the details under the hood, i.e. the relevant workings of bash, process creation, and so on?
It may be a very broad topic, so pointers to posts are also very much appreciated. I've Googled for a while; all the results are about very specific cases, and none tells the story behind the scenes. To provide more context, below is the shell script that produces the 'bash: no job control in this shell' message.
#! /bin/bash
while [ 1 ]; do
st=$(netstat -an |grep 7070 |grep LISTEN -o | uniq)
if [ -z $st ]; then
echo "need to start proxy #$(date)"
bash -i -c "ssh -D 7070 -N user#my-ssh.example.com > /dev/null"
else
echo "proxy OK #$(date)"
fi
sleep 3
done
This line:
bash -i -c "ssh -D 7070 -N user#my-ssh.example.com > /dev/null"
is where "bash:no job control in this shell” come from.
You may need to enable job control:
#! /bin/bash
set -m
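As a quick sanity check (just a sketch), you can look for the m (monitor mode) flag in $- to see whether job control is actually active in the shell that runs your commands:
case $- in
  *m*) echo "job control is enabled" ;;
  *)   echo "job control is disabled" ;;
esac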
Job control is a collection of features in the shell and the tty driver which allow the user to manage multiple jobs from a single interactive shell.
A job is a single command or a pipeline. If you run ls, that's a job. If you run ls|more, that's still just one job. If the command you run starts subprocesses of its own, then they will also belong to the same job unless they are intentionally detached.
Without job control, you have the ability to put a job in the background by adding & to the command line. And that's about all the control you have.
With job control, you can additionally:
Suspend a running foreground job with Ctrl-Z
Resume a suspended job in the foreground with fg
Resume a suspended job in the background with bg
Bring a running background job into the foreground with fg
The shell maintains a list of jobs which you can see by running the jobs command. Each one is assigned a job number (distinct from the PIDs of the process(es) that make up the job). You can use the job number, prefixed with %, as an argument to fg or bg to select a job to foreground or background. The %jobnumber notation is also acceptable to the shell's builtin kill command. This can be convenient because the job numbers are assigned starting from 1, so they're shorter than PIDs.
There are also shortcuts %+ for the most recently foregrounded job and %- for the previously foregrounded job, so you can switch back and forth rapidly between two jobs with Ctrl-Z followed by fg %- (suspend the current one, resume the other one) without having to remember the numbers. Or you can use the beginning of the command itself. If you have suspended an ffmpeg command, resuming it is as easy as fg %ff (assuming no other active jobs start with "ff"). And as one last shortcut, you don't have to type the fg. Just entering %- as a command foregrounds the previous job.
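For example, a typical interactive sequence might look roughly like this (output abbreviated and purely illustrative):
$ ffmpeg -i in.mkv out.mp4
^Z
[1]+  Stopped                 ffmpeg -i in.mkv out.mp4
$ df -h .     # run a quick command while ffmpeg is suspended
$ fg %ff      # resume the suspended ffmpeg by command-name prefix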
"But why do we need this?" I can hear you asking. "I can just start another shell if I want to run another command." True, there are many ways of multitasking. On a normal day I have login shells running on tty1 through tty10 (yes there are more than 6, you just have to activate them), one of which will be running a screen session with 4 screens in it, another might have an ssh running on it in which there is another screen session running on the remote machine, plus my X session with 3 or 4 xterms. And I still use job control.
If I'm in the middle of vi or less or aptitude or any other interactive thing, and I need to run a couple of other quick commands to decide how to proceed, Ctrl-Z, run the commands, and fg is natural and quick. (In lots of cases an interactive program has a ! keybinding to run an external command for you; I don't think that's as good because you don't get the benefit of your shell's history, command line editor, and completion system.) I find it sad whenever I see someone launch a secondary xterm/screen/whatever to run one command, look at it for two seconds, and then exit.
Now about this script of yours. In general it does not appear to be competently written. The line in question:
bash -i -c "ssh -D 7070 -N user#my-ssh.example.com > /dev/null"
is confusing. I can't figure out why the ssh command is being passed down to a separate shell instead of just being executed straight from the main script, let alone why someone added -i to it. The -i option tells the shell to run interactively, which activates job control (among other things). But it isn't actually being used interactively. Whatever the purpose was behind the separate shell and the -i, the warning about job control was a side effect. I'm guessing it was a hack to get around some undesirable feature of ssh. That's the kind of thing that when you do it, you should comment it.
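If the goal is simply to make the warning go away, a minimal sketch of that step without the extra shell (everything else in the loop from the question stays the same):
ssh -D 7070 -N user@my-ssh.example.com > /dev/null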
One possible cause is not having access to the tty.
Under the hood:
bash checks whether the session is interactive; if not, there is no job control.
if forced_interactive is set, the check that stderr is attached to a tty is skipped, and bash instead checks whether it can open /dev/tty for read-write access.
then it checks whether the new line discipline is used; if not, job control is disabled too.
If (and only if) we just set our process group to our pid, thereby becoming a process group leader, and the terminal is not in the same process group as our (new) process group, then set the terminal's process group to our (new) process group. If that fails, set our process group back to what it was originally (so we can still read from the terminal) and turn off job control.
if all of the above has failed, you see the message.
I have partially quoted the comments from the bash source code.
As per additional request of the question author:
Here you can find bash itself: http://tiswww.case.edu/php/chet/bash/bashtop.html
If you can read C code, get the source tarball; inside it you will find jobs.c, which will explain more of the "under the hood" stuff.
I ran into a problem on my own embedded system and I got rid of the "no job control" error by running the getty process with "setsid", which according to its manpage starts a process with a new session id.
I faced this problem only because I had copied a previously executed command together with its % prompt prefix into zsh, like % echo this instead of echo this. The error was very unclear for such a simple typo.
I need to execute multiple commands on a remote machine, and I use ssh to do so:
ssh root@remote_server 'cd /root/dir; ./run.sh'
In the script, I want to pass a local variable $argument when executing run.sh, like
ssh root@remote_server 'cd /root/dir; ./run.sh $argument'
It does not work, since inside single quotes $argument is not expanded the way I expect.
Edit: I know double quotes may be used, but are there any side effects to that?
You can safely use double quotes here.
ssh root#remote_server "cd /root/dir; ./run.sh $argument"
This will expand the $argument variable. There is nothing else present that poses any risk.
If you have a case where you do need to expand some variables, but not others, you can escape them with backslashes.
$ argument='-V'
$ echo "the variable \$argument is $argument"
would display
the variable $argument is -V
To discover any hidden problems that might catch you by surprise, you can always test the double-quoted string safely with echo first.
Additionally, another way to run multiple commands is to redirect stdin to ssh. This is especially useful in scripts, or when you have more than 2 or 3 commands (esp. any control statements or loops)
$ ssh user@remoteserver << EOF
> # commands go here
> pwd
> # as many as you want
> # finish with EOF
> EOF
output, if any, of commands will display
$ # returned to your current shell prompt
If you do this on the command line, you'll get a stdin prompt to write your commands. On the command line, the SSH connection won't even be attempted until you indicate completion with EOF. So you won't see results as you go, but you can Ctrl-C to get out and start over. Whether on the command line or in a script, you wrap up the sequence of commands with EOF. You'll be returned to your normal shell at that point.
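As a small sketch of the same idea inside a script (host and commands are placeholders): quoting the delimiter ('EOF') prevents the local shell from expanding variables before the commands are sent:
#!/bin/bash
ssh user@remoteserver <<'EOF'
cd /var/log
ls -lt | head -5
EOF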
You could run xargs on the remote side:
$ echo "$argument" | ssh root#remote_server 'cd /root/dir; xargs -0 ./run.sh'
This avoids any quoting issues entirely--unless your argument has null characters in it, I suppose.
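The same approach extends to several arguments, since each one is passed as its own NUL-terminated string (here $arg1, $arg2 and $arg3 are hypothetical):
$ printf '%s\0' "$arg1" "$arg2" "$arg3" | ssh root@remote_server 'cd /root/dir; xargs -0 ./run.sh'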
I'm trying to do something like this: I need to take backups of 4 blades, and all of them should be stored under the /home/backup/esa location, which contains 4 directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each directory should contain the respective node's backup information.
But I see that only the data from the node on which I execute the command gets copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup fornode ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is an NFS mount (otherwise, the files will only be backed up in a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
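Putting it together, the corrected loop might look like this (a sketch under the same assumptions as the question: cluster.conf lists the node names in column 4, and bkUp.sh and asBackup exist on each node):
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}'); do
    echo "Creating backup for node ${node}"
    ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
done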
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; asBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After that command finishes, the remote connection is closed, and with it the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. The asBackup command is then run on the local machine, and it won't see anything that you set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.