I am trying to compare two files, one local and the other on a remote host. I used
meld a.txt user@host:/home/user/john/b.txt
This did not work because the remote file could not be detected. However, I could copy the same file from the same location to my local machine via scp and do the comparison afterwards. How can I access the file directly on the cluster, for example like:
vim user@host:/home/user/john/b.txt
In bash, you can present the output of a process as a file with process substitution, <(...):
meld a.txt <(ssh user@host cat /path/to/file/b.txt)
If you want to modify the remote file, you'll have to mount it somehow, since <(...) only hands meld a read-only pipe.
One way to do it is to use sshfs:
# sshfs setup
mkdir ~/remote
sshfs user#host:/path/to/file ~/remote
# meld invocation
meld a.txt ~/remote/b.txt
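When you are done, unmount the share again; fusermount -u is the standard FUSE unmount command on Linux:
# cleanup
fusermount -u ~/remote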
Check the path in your example, I think it's incorrect. Also note that plain vim user@host:path does not fetch remote files; vim edits remote files through its netrw plugin with the scp:// syntax (the double slash makes the path absolute):
vim scp://john@server//home/user/john/b.txt
That should work.
Try saving the remote file to your Linux system and simply use the diff -y file1 file2 command. You will see the differences side by side.
The diffutils package is required for this.
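You can also skip the local copy and stream the remote file straight into diff (a sketch reusing the paths from the question; the final - tells diff to read the second file from stdin):
ssh user@host cat /home/user/john/b.txt | diff -y a.txt -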
In *nix, I can create an empty file using cp:
cp /dev/null ~/emptyfile
I'd like to know if it's possible to do something similar using scp (instead of ssh + touch). If I try to run:
scp /dev/null remoteserver:~/emptyfile
it returns an error /dev/null: not a regular file
EDIT:
Just to clarify, I don't want to run any command at the remoteserver (i.e. no ssh command should be invoked).
So it's ok to run some command at localhost (echo, cat, dd or any other trivial command) and copy the resulting file to remoteserver.
It's preferable not to leave the resulting file at localhost. It's also good if the result is a one-liner solution.
EDIT2:
It's impossible to use the /dev/null approach from the cp command, because scp only works with regular files and directories:
https://github.com/openssh/openssh-portable/blob/8a85f5458d1c802471ca899c97f89946f6666e61/scp.c#L838-L850
So it's mandatory to use another command (touch, cat, dd, etc.) to create a regular empty file (either in a previous command, a pipe or a subshell).
As @willh99 commented, creating an empty file locally and then performing scp is the most feasible solution.
So far I came up with this:
filename=$(mktemp) && scp "$filename" remoteserver:~/emptyfile; rm "$filename"
It creates an empty temporary file with mktemp, copies it to remoteserver as emptyfile, and then deletes the local copy.
Any refactor/improvements are welcome.
EDIT: remove $filename whether scp succeeds or not, as suggested by @Friek.
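One possible refinement (a sketch, assuming bash): run everything in a subshell with a trap, so the temporary file is removed even if scp is interrupted:
(filename=$(mktemp) && trap 'rm -f "$filename"' EXIT && scp "$filename" remoteserver:~/emptyfile)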
If you're just trying to create an empty file, you can use ssh and run the touch command like this:
ssh username@remoteserver touch anemptyfile
I want to download all the files in a specific directory of my site.
Let's say I have 3 files in my remote SFTP directory
www.site.com/files/phone/2017-09-19-20-39-15
a.txt
b.txt
c.txt
My goal is to create a local folder on my desktop containing ONLY those downloaded files. No parent files or parent directories needed. I am trying to get a clean report.
I've tried
wget -m --no-parent -l1 -nH -P ~/Desktop/phone/ www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
What I got differs from what I want to get (the two screenshots are omitted here).
How do I tweak my wget command to get something like that?
Should I use anything other than wget?
Ihue,
Taking a shell-programmatic perspective, I would recommend you try the following command-line script; note I also added the citation so you can see the original thread.
wget -r -P ~/Desktop/phone/ -A txt www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
-r enables recursive retrieval. See Recursive Download for more information.
-P sets the directory prefix where all files and directories are saved to.
-A sets a whitelist for retrieving only certain file types. Strings and patterns are accepted, and both can be used in a comma separated list. See Types of Files for more information.
Ref: @don-joey
https://askubuntu.com/questions/373047/i-used-wget-to-download-html-files-where-are-the-images-in-the-file-stored
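As for using something other than wget: the question mentions an SFTP directory, and wget's recursion is an HTTP feature. If the files really are reachable over SFTP, a batch sftp session is an alternative (a sketch; the login name and remote path are assumptions based on the question):
cd ~/Desktop/phone
sftp user@www.site.com:/files/phone/2017-09-19-20-39-15 <<'EOF'
mget *
EOF
mget * fetches only the files in that directory into the current local folder, with no parent directories.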
I am trying to SCP a file from a remote host onto the local host.
The file on the remote host would be KMST_DataFile_[MMDDYY]T[HHMM].kms
I have come up with 2 SCP commands, but I was wondering if there's a way to combine these, to SCP only files that match both the file name pattern above and the extension .kms:
scp -v user@remotehost:/location/KMST_DataFile_*
scp -v user@remotehost:/location/{*.kms}
This will do your job:
scp -v user@remotehost:/location/KMST_DataFile_*.kms .
As @manu mentioned in the comment, on Ubuntu or Mac, you may need to escape the asterisk:
scp -v user@remotehost:/location/KMST_DataFile_\*.kms .
The main thing here is to use recursive mode -r even if you copy files and not directories. It works.
If you want to copy files that start with "val" and also contain the string "v2", then use:
scp -r makis@server.gr:/media/Data/results/val*v2* /Users/makis/Desktop/
Here, val*v2* will expand to match only files that start with val and also contain the v2 string.
Similarly, if the files end with .png for example use:
scp -r makis@server.gr:/media/Data/results/val*.png /Users/makis/Desktop/
You should use \* instead of using *
scp -v user@remotehost:/location/KMST_DataFile_\*
ssh user@host 'tar cf - /location/KMST_DataFile_* /location/*.kms' | tar tvpf -
Note that these tar options only give you a table of contents. You'll want to check before you extract, and almost certainly remove the absolute path.
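A sketch of the extracting variant, cd-ing on the remote side first so the archive stores relative paths instead of absolute ones:
ssh user@host 'cd /location && tar cf - KMST_DataFile_*.kms' | tar xvpf -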
Is there any option to tell the scp command not to copy a file from the current machine when the file already exists on the remote machine?
For example
On my machine I have the file -
/etc/secret-pw.txt
On Remote machine I have also the file -
/etc/secret-pw.txt
So
scp /etc/secret-pw.txt $remote_machine:/etc
will overwrite secret-pw.txt, and scp will not ask any questions about overwriting the target file.
Is there any option to make scp skip the copy if the file already exists on the target machine?
Update: I can't install rsync or any other program.
You should be using rsync instead of scp. It will give you what you need.
If you can't install rsync (as you mentioned in the comments), you need to check over ssh whether the file already exists on the remote machine, and only run scp if it doesn't.
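A minimal sketch of that check, reusing the paths from the question (test -e fails when the file is missing, so the || triggers scp only in that case):
ssh $remote_machine test -e /etc/secret-pw.txt || scp /etc/secret-pw.txt $remote_machine:/etc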
scp does not offer any such option, unfortunately.
But you can resort to standard tools, like this:
ssh $remote_machine -- cp --no-clobber /dev/stdin /etc/secret-pw.txt < /etc/secret-pw.txt
Note that with this trick you gain all the functionality of cp.
I need to back up a large server into FTP storage. I can tar all the files, I can upload via FTP, and I can split the tar file into many small files.
But the problem is that I can't do these three steps in one step. I can tar to FTP directly, and I can tar with split, but I can't tar with FTP and split together.
The OS is CentOS 6.2
The files total more than 800 GB.
Thanks
To tar, split and FTP a directory with one command line you need the following:
By default, split writes its chunks to files only (named x00, x01, x02, ...), so you can't pass each chunk on to another command like ftp. For that you need split's --filter option, which hands each output chunk to a command "on the fly", without saving it to disk, and sets the $FILE environment variable to the output file name. On old coreutils you had to patch split to get --filter; GNU coreutils 8.13 and later ship it built in.
1) Here is the split patch: http://lists.gnu.org/archive/html/coreutils/2011-01/txt3j8asgk8WH.txt
After patching the split command, you should see the --filter option in split's man page.
2) Install the ncftp FTP client, a client that lets you connect to an FTP server and put a file with a one-line command, without waiting at the interactive prompt like an ordinary ftp client. This makes ncftp easy to integrate with scripts and so on.
Here is the command that compresses the /home directory with tar, splits it into 100MB files and transfers each file through FTP:
tar cvzf - /home | split -d -b 100m --filter 'ncftpput -r 10 -F -c -u ftpUsername -p ftpPassword ftpHost $FILE'
Note that ncftpput receives each chunk on stdin and uses $FILE as the remote file name, all in a single command.
Additional ncftpput options:
-r 10: retry the connection up to 10 times after losing the connection to the FTP server.
-F: use passive mode.
-c: take the input from stdin.
To merge the split files (x00, x01, x02, x03, ...) back so you can extract the archive, use the following command:
cat x* > originalFile.tar.gz
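Since the archive was created with tar cvz, it is gzip-compressed, so extract it with:
tar xzvf originalFile.tar.gz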
You can make a shell script and use
tar zcf - /usr/folder | split -b 30720m - /usr/archive.tgz
and then upload the pieces to FTP afterwards, because once you stream tar directly onto FTP you can no longer split it.
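For the upload step, a sketch using ncftpput as in the other answer (the host, credentials and remote directory are placeholders):
ncftpput -u ftpUsername -p ftpPassword ftpHost /backups /usr/archive.tgz*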