Shell Script - SFTP -> If copied, remove? - linux

I am trying to copy text files with a shell script over SFTP. I already wrote a script that does the job:
#!/bin/bash
HOST='Servername'
USER='Username'
sftp -b - ${USER}@${HOST} << EOFFTP
get /files/*.txt /tmp/ftpfiles/
rm /files/*.txt
quit
EOFFTP
Before I remove all the text files on the server, I want to make sure I copied them all without errors. How can I do this? I use SSH keys for login.
The task is:
Copy all the text files over and over, but make sure I never pick up the same ones twice (that's why I remove them).
Maybe I could move them on the server instead? Copy them, then move them to /files/copied?

Actually, rsync is ideal for this, because --remove-source-files deletes each source file only after it has been successfully transferred:
rsync --remove-source-files ${USER}@${HOST}:/files/*.txt /tmp/ftpfiles/
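For completeness, a minimal sketch of the whole script rebuilt around rsync, reusing the Servername/Username placeholders from the question (the remote glob is quoted so the local shell does not expand it):
#!/bin/bash
HOST='Servername'
USER='Username'
# rsync removes each remote source file only after a verified transfer,
# so no separate rm step is needed
rsync --remove-source-files -e ssh "${USER}@${HOST}:/files/*.txt" /tmp/ftpfiles/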

Related

Linux zip up all log files

I'm trying to zip up all the .log files in my log directory. I want to zip each log file individually, keep the zip in the same directory, and then delete the original. I'm somewhat new to Linux and to for loops in Linux. Here is the for loop I'm trying to run:
ssh user@SERVER "for i in *.log; do zip -m \"${i%.*}.zip\" \"${i%.*}\".*; done"
What ended up happening is that all of my hidden files got zipped up. Like I said, I'm kind of new, so whatever syntax error I made isn't jumping out at me. Any help would be appreciated.
Try this (you were close). The problem is quoting: with double quotes, your local shell expanded ${i%.*} (to nothing) before ssh ever ran, so the remote host executed zip -m ".zip" .*, and .* matched your hidden files. Single quotes send the loop to the remote shell verbatim:
ssh user@SERVER 'for i in *.log; do echo zip -m "${i/.log/.zip}" "${i}"; done'
If the output seems correct, remove the echo.
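To see the difference quoting makes, you can echo what each variant actually ships to the remote host (SERVER is the question's placeholder):
# double quotes: ${i%.*} is expanded LOCALLY, where i is unset, so the
# remote host would receive: for i in *.log; do zip -m ".zip" .*; done
echo "for i in *.log; do zip -m \"${i%.*}.zip\" \"${i%.*}\".*; done"
# single quotes: the loop goes over verbatim and expands on the server
echo 'for i in *.log; do zip -m "${i/.log/.zip}" "${i}"; done'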

Is it possible to create an empty file using scp?

In *nix, I can create an empty file using cp:
cp /dev/null ~/emptyfile
I'd like to know if it's possible to do something similar using scp (instead of ssh + touch). If I try to run:
scp /dev/null remoteserver:~/emptyfile
it returns an error /dev/null: not a regular file
EDIT:
Just to clarify: I don't want to run any command on the remote server (i.e. no ssh command should be invoked).
So it's OK to run some command on localhost (echo, cat, dd, or any other trivial command) and copy the resulting file to the remote server.
It's preferable not to leave the resulting file on localhost, and it's also good if the result is a one-liner.
EDIT2:
It's impossible to use the /dev/null approach from the cp command, because scp only works with regular files and directories:
https://github.com/openssh/openssh-portable/blob/8a85f5458d1c802471ca899c97f89946f6666e61/scp.c#L838-L850
So it's mandatory to use another command (touch, cat, dd, etc.) to create a regular empty file, whether in a previous command, a pipe, or a subshell.
As @willh99 commented, creating an empty file locally and then performing scp is the most feasible solution.
So far I came up with this:
filename=$(mktemp) && scp "$filename" remoteserver:~/emptyfile; rm "$filename"
It creates an empty file in a subshell, and copies it to remoteserver as emptyfile.
Any refactor/improvements are welcome.
EDIT: remove $filename whether scp succeeds or not, as stated by @Friek.
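A slightly more defensive sketch of the same idea, propagating scp's exit status (remoteserver and emptyfile are the question's placeholders):
#!/bin/bash
# create a throwaway local file, copy it, and always clean up
tmpfile=$(mktemp) || exit 1
scp "$tmpfile" remoteserver:~/emptyfile
status=$?
rm -f "$tmpfile"
exit "$status"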
If you're just trying to create an empty file, you can use ssh and run the touch command like this:
ssh username@remoteserver touch anemptyfile

rsync - copy files with same name

I have some different files with the same name, and I want to copy all of them to a destination that has a flat structure (no directories, just files). Is there any way to append some text to one of the file names so that both can be copied?
I need to use rsync because there are some files I need to exclude from the copy.
For example:
dir1/file1.txt
dir1/dir2/file1.txt
both get copied, and in the destination there is:
file1.txt
file1.txt.txt
Typically, when I want to do some complex name-mangling, I just write the list of files (with find dir1 > listfiles) and fix it in a text editor.
For example, the substitution s|^.*/([^/]+)$|cp & destination/\1| converts a file like
dir1/file1.txt
dir1/dir2/file1.txt
to a script like:
cp dir1/file1.txt destination/file1.txt
cp dir1/dir2/file1.txt destination/file1.txt
Then you can do something like cut -d' ' -f3 <listfiles | sort | uniq -d to find the entries with the same destination filename, go back to the editor, and fix those lines.
After a few minutes you get a full script for exactly the copy you want, with no surprises, because you can see each command and apply the best fix for each case.
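A sketch of that flow as a non-interactive pipeline, using the dir1 and destination names from the example (the paths are quoted so the generated script survives spaces):
find dir1 -type f > listfiles
# turn each path into a cp command; & is the whole match, \1 the basename
sed -E 's|^.*/([^/]+)$|cp "&" "destination/\1"|' listfiles > copyscript.sh
# list duplicate destination names that still need a manual fix
cut -d'"' -f4 copyscript.sh | sort | uniq -d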
As far as I know, there is no built-in option in rsync to do that. But since you are copying files with the same name from different directories, I guess you are using multiple rsync commands.
So this gives you two options:
Create folders:
rsync -av /home/user1/file1 /media/foo/user1/file1
rsync -av /home/user2/file1 /media/foo/user2/file1
etc.
Or rename the files with an id:
rsync -av /home/user1/file1 /media/foo/user1-file1
rsync -av /home/user2/file1 /media/foo/user2-file1
etc.
If you want to use the second solution, you can build a simple script. Since you are using rsync, I assume you know the basics of GNU/Linux, so a simple bash script should be enough!
A basic id is to take the parent folder name and prepend it to the file name in the rsync destination path (it won't always work; see the sketch below).
If you want to be sure of a unique id, you can, for example, set a counter and increment it:
file1-1
file1-2
file1-3
But then you lose track of the file's original absolute path.
All of these solutions can work; it's up to you to choose the one that fits your needs!
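A minimal sketch of the parent-folder id, reusing /home, file1, and /media/foo from the examples above (the find pattern and paths are assumptions for illustration):
#!/bin/bash
dest=/media/foo
# copy every file named file1, prefixing it with its parent directory name
find /home -type f -name 'file1' -print0 |
while IFS= read -r -d '' f; do
    parent=$(basename "$(dirname "$f")")
    rsync -av "$f" "$dest/$parent-$(basename "$f")"
done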

Create and update archive over ssh on local machine

I am trying to find a way to create and update a tar archive of files on a remote system, where we don't have write permissions (the remote file system is read-only), over ssh. I've figured out that the way to create an archive is:
ssh user@remoteServer "tar cvpzf - /" > backup.tgz
However, I would like to know if there is some way of performing only incremental backups from this point on (of only the files that have actually changed). Any help with this is much appreciated.
You can try using the --listed-incremental option of tar:
http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html
The main problem is that you have no way to pipe the snapshot (.snar) file through stdout, because stdout is already carrying backup.tgz. The best option is to create the .snar file in the remote /tmp directory, where you should have write permission, and then download it at the end of the backup session.
For example:
ssh user@remoteServer "tar --listed-incremental=/tmp/backup-1.snar -cvpzf - /" > backup-1.tgz
scp user@remoteServer:/tmp/backup-1.snar .
And in the following session you will use that .snar file to avoid copying the same files:
scp backup-1.snar user@remoteServer:/tmp/backup-1.snar
ssh user@remoteServer "tar --listed-incremental=/tmp/backup-1.snar -cvpzf - /" > backup-2.tgz
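A sketch wrapping both sessions into one script; the session-number argument and file names are assumptions for illustration:
#!/bin/bash
HOST=user@remoteServer
n=$1   # session number: 1 for the full backup, then 2, 3, ...
if [ "$n" -gt 1 ]; then
    # push the previous session's snapshot so tar only archives changes
    scp "backup-$((n - 1)).snar" "$HOST:/tmp/backup.snar"
else
    ssh "$HOST" 'rm -f /tmp/backup.snar'   # level 0: start from scratch
fi
ssh "$HOST" 'tar --listed-incremental=/tmp/backup.snar -cvpzf - /' > "backup-$n.tgz"
# keep a local copy of the updated snapshot for the next session
scp "$HOST:/tmp/backup.snar" "backup-$n.snar"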

How can I upload an entire folder, that contains other folders, using sftp on linux?

I have tried put -r directory/*, which only uploaded the files and not the folders, and gave me the error: Couldn't canonicalise.
Any help would be greatly appreciated.
For people actually wanting a direct answer to this question (instead of being told to use something other than sftp)...
put -r local/path/to/directoryName
The uploaded directory must already exist in the working directory on the server, so you might need to create it first.
mkdir directoryName
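A non-interactive sketch of both steps in a single batch session (user@remoteserver and the paths are placeholders; the leading - tells sftp to keep going if mkdir fails because the directory already exists):
sftp -b - user@remoteserver <<'EOF'
-mkdir directoryName
put -r local/path/to/directoryName
EOF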
Here you can find a detailed explanation of how to copy a directory using scp. In your case, it would be something like:
$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar
This will copy the directory "foo" from the local host to the directory "bar" on the remote host.
Here -r means: recursively copy entire directories.
You can also use rcp with similar syntax. The only difference between them is that scp uses secure shell and rcp uses remote shell.
BTW, the "Couldn't canonicalise" error you mentioned appears when the sftp server is unable to access the file or directory named in the command.
UPDATE: for users who want to use put specifically, please refer to Ben Thielker's answer here.
sftp> mkdir source
sftp> put -r source
Uploading source/ to /home/myself/source
Entering source/
source/file1
source/file2
If you have issues using sftp, you can use ncftp (note that it speaks plain FTP, not SFTP).
To install it on CentOS:
yum install ncftp
To copy a whole directory recursively:
ncftpput -R -v -u username -P 21 ftp.server.dev /remote-path/ /localdirectory
Use scp instead. It uses SSH too and can easily handle recursion.
