Saving grep results to a file on a remote box - linux

I'm somewhat new to Linux. I have to read logs on a remote host and save certain lines, found with the grep command, to a file. The problem is that I don't have permission to create a file on the host. Is there a workaround for this issue? Thanks!

You can run something like the following:
ssh remotehost "grep certainline logs*" > file
to save the file locally.
Otherwise, you might be able to create a file in /tmp.
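For example, you could have grep write to /tmp on the remote host and fetch the result afterwards (the output file name here is just an illustration):
ssh remotehost "grep certainline logs* > /tmp/results.txt"
scp remotehost:/tmp/results.txt .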

You don't mention it, but I'm going to assume you're using ssh to access the remote machine. In that case you can run the command on the remote machine and redirect the output on the local machine, like so:
ssh remotehost 'grep pattern /var/log/mylog' > mylocalfile
Note that the redirection occurs outside the quoted command that is given to ssh to send to the remote host. If you were to put it inside the quotes then the redirection would occur on the remote side.
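To see the difference, compare the two forms below (remotefile is just an illustrative name; the second variant writes the file on the remote host, which you may not have permission to do):
ssh remotehost 'grep pattern /var/log/mylog' > mylocalfile
ssh remotehost 'grep pattern /var/log/mylog > remotefile'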

Related

Copy file from one server to another in Linux

How do I run commands like ftp, sftp or scp in the background? Also, how do I set up a passwordless connection for running these commands?
Look at the manual pages for scp or rsync, both of which can do this job well. Unless you are forced to, you don't want to use sftp, let alone unencrypted ftp, for file transfer!
Something like the following, for example:
rsync [some other parameters] -e ssh SOURCE TARGET
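A concrete invocation might look like this (hostnames and paths are placeholders):
scp /path/to/localfile user@remotehost:/path/to/destination/
rsync -av -e ssh /path/to/localdir/ user@remotehost:/path/to/destination/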
Assuming these commands are coming from a bash script, you would need to make sure that the two (or more) systems have ssh keys generated that allow you to access said systems without providing a password.
Briefly, you could do it by running this command on one system:
ssh-keygen
Following the prompts, this will generate a key pair. Then run:
ssh-copy-id user@some-remote-system
to copy it to the remote system, which will allow passwordless access, enabling scripts to go about their business without stalling for password prompts.
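Afterwards you can verify that key-based login works, for example (the hostname is the same placeholder as above):
ssh user@some-remote-system hostname
If this prints the remote hostname without asking for a password, the keys are set up correctly.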

Shell script - SSH

I am using a shell script to add a file to a server. Is there any way to write a shell script that will execute one part on the local computer and the other part once you're logged into that server?
For example, I want to log in, do something, add some file, and then I want to list everything on that server.
ssh something@something
I enter my password.
Then I list the files on the server.
You can just add a command to the end of the ssh command; for example:
ssh username@host ls
will run ls on the server, instead of giving you a login shell.
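Putting it together, a script can run the local part, copy the file, and then run the listing step remotely. A minimal sketch (all names and paths below are placeholders):
#!/bin/bash
# local part: create or prepare the file to upload
echo "some content" > somefile.txt
# copy it to the server
scp somefile.txt username@host:/some/remote/dir/
# remote part: list everything in that directory on the server
ssh username@host 'ls -l /some/remote/dir'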

Using vim to remotely edit a file on serverB only accessible from serverA

Although I have never tried this, it is apparently possible to remotely edit a file in vim as described here. In my particular case the server I need access to can only be accessed from on campus, hence I have to log into my university account like so:
ssh user@login.university.com
then from there log into the secure server like so:
ssh user@secure.university.com
I have passwordless (key-based) ssh set up, so I can automate the process like so:
ssh user@login.university.com -t "ssh user@secure.university.com"
Is there any way to remotely edit a file such as secure.university.com/user/foo.txt from my local machine?
EDIT:
My intention is to use vim on my local machine, as it is impractical (moving the .vim folder, copying .vimrc) and in some cases impossible (recompiling vim with certain settings, patching the vim source, installing language beautifiers) to make vim on the remote machine behave the way I want it to behave. What I want is to issue something like this (I know this is not accurate scp syntax):
vim scp://user@login.university.com scp://user@secure.university.com//home/user/foo.txt
OK, after a little tinkering I figured it out. First you have to edit (or create) your .ssh/config file as described here. For our purposes, we will add an entry like this, which essentially adds a proxy:
Host secure
    User Julius
    HostName secure.university.com
    ProxyCommand ssh Tiberius@login.university.com nc %h %p 2> /dev/null
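As an aside (this is an alternative, not what the setup above uses): on newer OpenSSH releases (7.3 and later) the same proxying can be written with a ProxyJump directive instead of the nc-based ProxyCommand:
Host secure
    User Julius
    HostName secure.university.com
    ProxyJump Tiberius@login.university.com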
Then we can simply copy (via scp) the file secure.university.com:/home/Julius/fee/fie/fo/fum.txt to the local computer like so
scp secure:/home/Julius/fee/fie/fo/fum.txt fum.txt
Extending on this, we can load it into vim remotely like so:
vim scp://secure//home/Julius/fee/fie/fo/fum.txt
or using badd like so:
:badd scp://secure//home/Julius/fee/fie/fo/fum.txt
To simplify my life, I added this shortcut to my .vimrc file for the most commonly used subfolder:
nnoremap <leader>scp :badd scp://secure//home/Julius/fee/fie/fo/fum.txt
So far vim has proven to be pretty aware that this is a remote file, so if the C file includes a file like so:
#include "foo.h"
it won't complain that "foo.h" is missing.
Once you have SSHed into the machine, you can run any command (including vim) on the remote host from your shell. After logging in, run vim just as you would run it on your own machine.
Since you are using ssh, you basically have access to the server via the CLI, as if you were sitting in front of the machine itself. With that said, you can use any program on that machine just as you would use it on your own machine. Assuming that secure.university.com/user/foo.txt means there is a text file called foo.txt at the location /user on the secure server, the following commands would work after logging in through ssh:
cd /user
vim foo.txt
You could also use nano or any other CLI based editor that is installed on the machine.

Shell script, for loop, ssh and alias

I'm trying to do something like this: I need to take a backup from 4 blades, and all of them should be stored under /home/backup/esa, which contains 4 directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each directory should contain the respective node's backup information.
But what I see is that whichever node I execute the command from, only that node's data is being copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup for node ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is an NFS mount (otherwise, the files will only be backed up into a directory on the remote host).
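You can check the filesystem type on each node with, for example:
df -T /home/backup/esa
An NFS mount shows up as nfs or nfs4 in the Type column.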
Note that /etc/profile.d is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa, which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
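As a contrived illustration of how unquoted expansion goes wrong (the value is made up):
node="pl 1"
ssh $node hostname      # word splitting: ssh sees the host "pl" and the command "1 hostname"
ssh "$node" hostname    # ssh sees the single (invalid) host name "pl 1" and fails with a clear error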
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; asBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After that command finishes, the remote connection is closed and with it the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. The asBackup command is then run on the local machine, and it won't see anything that was set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.

How can I redirect program output using libssh2?

I'm using libssh2 for a C++ program in a Linux environment, and so far I'm able to start a program on a remote machine using libssh2_channel_exec. However, I'd like to redirect that program's output to the local machine (i.e. the output should travel over ssh).
I'd like to achieve the same goal of the following bash line:
$ ssh user@remote ls > local_file.txt
I cannot put the > local_file.txt part in the command parameter, because the file must be written on the local machine and not on the remote one.
So, how can I redirect a remote program's output to the local machine?
You should use the libssh2_channel_read function to read the remote program's stdout:
http://www.libssh2.org/libssh2_channel_read.html
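As a rough sketch (not a complete program): assuming you already have an authenticated, blocking-mode session and a channel on which libssh2_channel_exec has succeeded, a read loop that writes the remote stdout to a local file could look like this. The function name, buffer size and file handling are arbitrary choices, not part of the libssh2 API.
#include <libssh2.h>
#include <sys/types.h>   // ssize_t
#include <cstdio>

// Drain the remote command's stdout into a local file.
// Assumes `channel` comes from an authenticated, blocking-mode session on
// which libssh2_channel_exec() has already been called.
int save_remote_output(LIBSSH2_CHANNEL *channel, const char *local_path)
{
    std::FILE *out = std::fopen(local_path, "wb");
    if (!out)
        return -1;
    char buf[4096];
    for (;;) {
        ssize_t n = libssh2_channel_read(channel, buf, sizeof buf);
        if (n > 0) {
            std::fwrite(buf, 1, static_cast<size_t>(n), out);
        } else if (n == 0) {
            break;                        // EOF: the remote command has finished
        } else if (n == LIBSSH2_ERROR_EAGAIN) {
            continue;                     // only relevant in non-blocking mode
        } else {
            std::fclose(out);
            return -1;                    // read error
        }
    }
    std::fclose(out);
    return 0;
}
You would call something like this after libssh2_channel_exec(channel, "ls") and before closing the channel; libssh2_channel_read reads the stdout stream, and libssh2_channel_read_stderr is available if you also need the error stream.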

Resources