How do I pipe the output of an ls on a remote server to the local filesystem via SFTP? - linux

I'm logged into a remote server via SFTP at the command line. The folder I'm in contains hundreds of thousands of files. I need to get a list of these files in a text file so I can access them programmatically, as none of the PHP SFTP clients are able to return such a large list of files.
When I run an ls on the directory (within the SFTP session), it takes about 20 minutes for the file list to finally display.
I don't have write access on this server, so I can't pipe the output to a file on the remote server.
How can I pipe the output to a text file on my local machine ... or get a list of the files to my local machine some other way?

If you're willing to wait the 20 minutes for the data to scroll across your screen you can capture all the output using "script".
Call 'script' before you start your ssh or sftp session and it will capture all terminal output to your local disk. Type 'exit' to finish the capture.
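For example, a minimal sketch of that workflow (the host name and log file name are placeholders):
script filelist.txt            # start capturing everything printed to this terminal
sftp user@remote.example.com   # log in and run the slow ls at the sftp prompt as usual
exit                           # leave the sftp session when the listing is done
exit                           # end the capture; filelist.txt now holds the output
Note that filelist.txt will also contain your typed commands and the sftp prompts, so a little cleanup may be needed before feeding it to a program.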
NAME
script -- make typescript of terminal session
SYNOPSIS
script [-akq] [-t time] [file [command ...]]
DESCRIPTION
The script utility makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1).
If the argument file is given, script saves all dialogue in file. If no file name is given, the typescript is saved in the file typescript.
If the argument command is given, script will run the specified command with an optional argument vector instead of an interactive shell.
The following options are available:
-a      Append the output to file or typescript, retaining the prior contents.
-k      Log keys sent to program as well as output.
-q      Run in quiet mode, omit the start and stop status messages.
-t time Specify time interval between flushing script output file. A value of 0 causes script to flush for every character I/O event. The default interval is 30 seconds.
The script ends when the forked shell (or command) exits (a control-D to exit the Bourne shell (sh(1)), and exit, logout or control-D (if ignoreeof is not set) for the C-shell, csh(1)).
Certain interactive commands, such as vi(1), create garbage in the typescript file. The script utility works best with commands that do not manipulate the screen. The results are meant to emulate a hardcopy terminal, not an addressable one.
ENVIRONMENT
The following environment variable is utilized by script:
SHELL   If the variable SHELL exists, the shell forked by script will be that shell. If SHELL is not set, the Bourne shell is assumed. (Most shells set this variable automatically.)
SEE ALSO
csh(1) (for the history mechanism).
HISTORY
The script command appeared in 3.0BSD.
BUGS
The script utility places everything in the log file, including linefeeds and backspaces. This is not what the naive user expects.
It is not possible to specify a command without also naming the script file because of argument parsing compatibility issues.
When running in -k mode, echo cancelling is far from ideal. The slave terminal mode is checked for ECHO mode to check when to avoid manual echo logging. This does not work when in a raw mode where the program being run is doing manual echo.

Wu's answer is good if you do it remotely. Here is another option if you are logged onto the remote server and want to send the file back home to yourself:
Proper answer is here: http://scratching.psybermonkey.net/2011/02/ssh-how-to-pipe-output-from-local-to.html
your_command | ssh username@server "cat > filename.txt"
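For example, assuming your local machine runs an SSH server and is reachable from the remote host (the host name and file name below are placeholders):
ls -1 | ssh you@your-local-machine "cat > filelist.txt"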

If you have ssh access, that would be very easy:
ssh user@server ls > foo.txt
Otherwise, you can just redirect sftp's STDOUT and STDERR to a file. You have to type the password and commands blindly, though.
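A minimal sketch of that redirection (the file name is a placeholder; the listing and sftp's prompts end up in the file instead of on the screen):
sftp user@server > sftp-output.txt 2>&1
# type the password, then (blindly) ls, then exit; sftp-output.txt holds the listing afterwards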

In my case the following worked:
ssh user@server ls /path/to/source/folder/ > /path/to/destination/folder/filenames.txt
I ran it in Git Bash. This will first ssh in, then list all files of the source folder, and then save the file names to the destination text file.
In this way you can also save the output to a json file; just change the file extension to json instead of txt.
To append to the output file, use ">>" instead of ">".

Related

How to update the /etc/profile within the sudo bash without rebooting or logging out

I am currently creating a script that updates the path and environment variables of the profile on my Raspberry Pi.
I have created a script at /etc/profile.d/sdk.sh to create an environment variable. It does not update within my env. How can I add/update my environment variable without rebooting or logging out of the system?
My script:
SDK_SH_FILE="/etc/profile.d/sdk.sh"
EXPORT_SDK_HOME="export SDK_HOME=/edit/"
echo -e "$EXPORT_SDK_HOME" > "$SDK_SH_FILE"
It is run using: cat my-script | sudo bash
Currently it does not update my env unless I log out or reboot the system.
After editing sdk.sh, you need to load it in the current shell with:
source /etc/profile.d/sdk.sh
You have two choices for this job:
source /etc/profile.d/sdk.sh
OR
. /etc/profile.d/sdk.sh
I have just tried out Barmar's suggestion. It works for updating the current bash session. If you close the terminal window and open a new one, you need to run source again.
Also, the new values are only appended to the environment variable rather than replacing the old values. So, it is still better to log out and log in again.
You can update the current shell environment by sourcing a script, because that way it runs in the same shell instance, but you need to acquire privileges to update sdk.sh (so shell redirections won't work), right?
The solution is to separate the write operation that requires privileges (calling only that via sudo).
Here the UNIX toolbox comes to the rescue with tee, a program that takes files as parameters, reads from its standard input, and copies to its standard output and to the files given as parameters. Because it is a separate program that opens its output files on its own, it can be called with sudo just fine.
Solution
export SDK_HOME=/edit/
typeset -p SDK_HOME | sudo tee /etc/profile.d/sdk.sh >/dev/null
Now, you need to source this file instead of calling it, like so:
. ./my-script
Here I used typeset -p to avoid repeating myself; it reproduces the declarations of the specified variables.
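A quick check after sourcing it (assuming the /edit/ value from above):
. ./my-script
echo "$SDK_HOME"             # prints /edit/
cat /etc/profile.d/sdk.sh    # shows the declaration written via sudo tee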
Variables and startup files
Variables are handed down from process to process in the environment, which is a portion of memory every process gets from its parent process.
Shells behave differently depending on how they're called: a login shell will load a user/system profile file, i.e. /etc/profile (or a similar file in the home directory), and the rest of the session will get variables from it through the environment (thus updates to the file don't matter); normally all other interactive instances of the shell load a secondary file, in the case of BASH it's $HOME/.bashrc or /etc/bashrc (which login shells don't load).
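If you are unsure which case you are in, bash can tell you (a small illustration, not part of the original answer):
shopt -q login_shell && echo "login shell (reads /etc/profile)" || echo "non-login shell (reads ~/.bashrc)"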

bash + Linux + how to ignore the character "!"

I want to send a little script to a remote machine by ssh.
the script is
#!/bin/bash
sleep 1
reboot
but I get "event not found" because of the "!":
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
-bash: !/bin/bash\nsleep: event not found
How can I escape the "!" char so the script will be sent successfully by ssh?
Remark: I can't use "\" before the "!", because then I get
more /tmp/file
#\!/bin/bash
sleep 1
Use set +H before your command to disable ! style history substitution:
set +H
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
# enable history expansion again
set -H
I think your command line is not well formatted. You can send this:
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot">/tmp/file'
When I say "not well formatted" I mean you put ">" inside the "echo", and you forgot the "n" in "\n" before "reboot": "\reboot" will be interpreted as "\r" (carriage return) followed by an "eboot" command (which I don't think exists).
But what did the trick here was to swap the quotes, changing (') with (") and vice versa.
Bash is running interactively (which means that you are feeding commands to it from the standard input and not exec(2)ing a command from a shell script), so you don't need to include the line #!/bin/bash in that case (moreover, bash should just ignore it, were it not for the included bang, which is part of the active history mechanism).
But why? The first two characters of an executable file (any file capable of being exec(2)ed from secondary storage, which is not your case) have a special meaning for the kernel and for the shell: they are the magic number that identifies the kind of executable file the kernel is loading. This allows the kernel to select the proper executable-loading routines depending on the binary format (and is what allows you, for example, to execute BSD programs on Linux kernels, and vice versa).
A special value for this magic number is composed of the two characters # and ! (in that order); it forces the kernel to read the complete first line of the file and load the executable specified there instead, allowing you to execute shell scripts for different interpreters directly from the command line. This is done on purpose, as the # character is, in shell script parlance, a comment character. This only happens when the shell interpreting the commands is not an interactive shell. When the shell loads a script with those characters, it normally also reads the first line to check for the #! mark and loads the proper interpreter, replicating what the kernel does. Despite the line being a comment for the shell, it does this so that files that are not stored on secondary storage (the only ones the exec(2) system call can deal with) but come from stdin (as happens with yours) can still be treated as executables.
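You can see this magic number for yourself (a small illustration, not from the original answer; /bin/ls is just a convenient native binary on a typical Linux box):
printf '#!/bin/bash\necho hi\n' > demo.sh
head -c 2 demo.sh | xxd      # 2321 -- the "#!" mark
head -c 4 /bin/ls | xxd      # 7f45 4c46 -- "\x7fELF", a binary executable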
As your shell is running interactively and you do want to execute its commands without a shell change, you don't need that line and can eliminate it completely without having to disable the bang character.
Sorry, but the solution given about executing the shell with the -H option will probably not be viable, as the shell executing the commands is the login shell on the target machine, so you cannot provide specific parameters to it (parameters are selected by the login(8) program and normally don't include arbitrary parameters like -H).
The best solution is to eliminate the #!/bin/bash line entirely, as you are not going to exec(2) that program on the target. In case you want to select the shell from the input line (say, the user has a different shell installed as the login shell), it is better to invoke the wanted shell on the command line and pass it (through stdin, or by making it read the shell script as a file) the shell commands you want to execute (but again, without the #! line).
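A minimal sketch of that last suggestion (the address comes from the question; reboot will of course require appropriate privileges on the remote side):
printf 'sleep 1\nreboot\n' | ssh 183.34.4.9 '/bin/bash -s'
Here /bin/bash -s reads the commands from its standard input, so no #! line (and no local history expansion) is involved at all.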
NOTE
It's important to ensure you execute the whole thing, so it's best to copy the full script contents to the destination target first and, once you are sure everything has arrived, execute it as a whole. Then your #! first line will be properly processed, as the executable will be run by means of an exec(2) made by the kernel.
Example:
DIRECTORY=/bla/bla
FILE=/path/to/file
OUTPUT=/path/to/output
# this is the command we want to pass through the line
cat <<EOF | ssh user@target "cat >>/tmp/shell.sh"
cd $DIRECTORY
foo $FILE >$OUTPUT
exit 0
EOF
# we have copied the script file in a remote /tmp/shell.sh
# and we are sure it has passed correctly, so it's ready
# for local execution there.
# now, execute it.
# the remote shell won't be interactive, and you'll ensure that it is /bin/bash
ssh user@target "/bin/bash /tmp/shell.sh" >remote_shell.out
A more sophisticated system is one that allows you to sign the shell script before sending it, and to verify the signature before executing it, so you are protected against possible trojan horse attacks. But this is out of scope for this explanation.
Another alternative is to use the batch(1) command remotely and pass it all the commands you want executed. You'll get a sessionless execution environment, better suited to the task you are describing (although the script output will be mailed to the target user running the script).
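A minimal sketch of the batch approach (assuming at/batch is installed on the target; /tmp/shell.sh is the file copied above):
echo "/bin/bash /tmp/shell.sh" | ssh user@target batch
# batch queues the job and runs it when the load permits; its output is mailed to the remote user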
Interactively, beware that ! triggers history expansion inside double quotes
from here: https://riptutorial.com/bash/example/2465/quoting-literal-text
My recommended solution is to use single quotes to define the string (and either escape single quotes as needed or use double quotes within the string):
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot" > /tmp/file'

How to take continuous backup of Linux gnome terminal logs? Commands and the output of those commands

I want to take a continuous backup of the logs being printed in my Linux terminal. Is it possible that whenever something is printed in my terminal, it automatically gets written to some text file with a timestamp?
Use the script command, i.e.
script log.txt
at the start of your session. You can also add this to your bash profile so that it starts when you open a terminal etc. You need to use
script -a log.txt
to append. Don't try to cat or tail it while in the session; press CTRL-D to end it, then have a look at what got logged.
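If you do add it to your shell startup file, guard against recursion, because script itself spawns a new shell that would read the same startup file again. A sketch assuming bash and ~/.bashrc; SCRIPT_LOGGING is just a made-up guard variable:
# in ~/.bashrc
if [ -z "$SCRIPT_LOGGING" ] && [ -t 1 ]; then
    export SCRIPT_LOGGING=1
    script -a "$HOME/terminal-$(date +%Y%m%d-%H%M%S).log"
fi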

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I'm logging into some faraway computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I also want to write the shell output into a text file output.txt (since I can't be logged into the faraway computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't ended its run yet. The above command works fine on my local machine. I first tried with plain '>' redirection, but there the problem was that nothing was written into the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command, and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit 'nohup' and '&' I will not even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen instead of nohup. That way I can put my program into a detached session (^A ^D), reconnect to the host, retrieve my screen session (screen -r),
and monitor my output as if I had never logged out.
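A sketch of that workflow (the session name "sim" is arbitrary; inside screen the nohup and & from the original command are no longer needed):
screen -S sim                     # on the remote host, start a named session
./program.exe | tee output.txt    # run the simulation; output also lands in output.txt
# detach with Ctrl-A d, log out, come back days later ...
screen -r sim                     # reattach and check on the run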

What can I use to capture every command I run in bash (a-la history)

I know history will capture commands that I run, but it is shell-specific. I work with multiple shells and multiple hosts and would like to write a small script which, after every command I run, dumps that command to some file along with the host name. This way, I can implement my own history command which reads from that file and can take a host as an argument, which would be handy for me. I'm not sure how to get the first part though, i.e., how to get every shell command I type to trigger a "dump that command into a file" step. Any ideas?
Thanks
In bash, the PROMPT_COMMAND environment variable contains a command that will be executed before the PS1 prompt is displayed. So yours could be something like:
history | tail -n1 | perl -npe 's/^\s+\d+\s+//' | yourcommand HOST
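A minimal sketch of wiring that up (the log file ~/.all_history and its one-line format are made up for illustration):
# add to ~/.bashrc: append "<hostname> <last command>" to the log after every command
export PROMPT_COMMAND='echo "$(hostname) $(history 1 | sed "s/^ *[0-9]* *//")" >> ~/.all_history'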
The script utility should solve your problem. It records everything you type and all that is printed on the terminal in a file (even including terminal control codes, so if you cat that file on the console, you even reproduce the original text colors).
