How can I send files from a Linux machine to an sftp server that were created 1 minute ago?
I have tried using find, but I’m not sure how to pipe it through to sftp?
I have tried something like below
find | sftp {user}@{host}:{remote_dir} <<< $'put {local_file_path}'
But I don’t know how to pipe the files created one minute ago into the sftp command.
I cannot install additional packages as the Linux machine is not connected to the internet.
Assuming you don't have strange file names:
$ find -mmin -10 | sed 's/^/put /' | sftp -b - sorin@192.168.0.14
sftp> put ./16/test00116.gz
sftp> put ./20200113.gz
sftp> put ./log20200128.gz
-b - reads the batch commands from standard input.
sed 's/^/put /' prefixes each file name with the put command.
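For the question's exact one-minute window, a minimal sketch along the same lines (user@host and the local path are placeholders; it assumes file names without spaces or newlines):
find /path/to/local_dir -type f -mmin -1 | sed 's/^/put /' | sftp -b - user@host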
A bit more robust, removing the uploaded file before trying to put the new one, and making sure sftp doesn't exit on error:
$ find -mmin -10 -exec basename -- "{}" \; -print | sed '1~2s/^/-rm /;0~2s/^/-put /' | sftp -b - sorin@192.168.0.14
sftp> -rm existingfile20200102.gz
sftp> -put ./2/existingfile20200102.gz
sftp> -rm newfile20200121.gz
Couldn't delete file: No such file or directory
sftp> -put ./21/newfile20200121.gz
Where I started / The problem.
I am trying to run a fairly complex kubectl command to copy files newer than a specific date from Kubernetes to a local drive.
I am trying to take advantage of this command.
$ kubectl cp <file-spec-src> <file-spec-dest>
which, according to https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp,
is a "shorthand" for this command:
kubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar
This works, but has no date restriction parameters.
What commands I came up with (which didn't work).
However, I do not just want to copy files, I want to copy specific files. In that pursuit I came up with 2 commands that both work on my local machine, but not when used with kubectl.
Command 1
My thought with command 1 was to solve the date problem first, then pipe the result into tar, and back out of tar. Pipes seemed appropriate until I got to the kubectl command.
Command 1 local.
find /foo -type d -maxdepth 3 -newermt '2/25/2021 0:00:00' -print0 | xargs -0 tar cf - | tar xf - -C /bar
Command 1 kubectl.
kubectl exec -n <namespace> <pod_name> -- "find /foo -type d -maxdepth 3 -newermt '2/25/2021 0:00:00' -print0 | xargs -0 tar cf - " | tar xf - -C /bar
The quotation marks ("") around find and the first pipe are there because those need to run in the Kubernetes pod. The last pipe is there, as in the official command, to pipe to the local disk.
Error
The command only returns an error, and not a useful one at that. What I can say is that removing the last pipe returns the same error.
no such file or directory: unknown
Command 2
My thought behind command 2 was that if pipes create too many problems, why not take advantage of find's -exec action and use only one pipe.
Command 2 local.
find /foo -type d -maxdepth 3 -newermt '2/25/2021 0:00:00' -exec tar -rvf - {} \; | tar xf - -C /bar
Command 2 kubectl.
kubectl exec -n <namespace> <pod_name> -- find /foo -type d -maxdepth 3 -newermt '4/1/2021 0:00:00' -exec tar -cf - {} \; | tar xf - -C /bar
Error
This time the command does not return an error, but instead proceeds to copy every file it can find, even those outside the -type, -maxdepth and -newermt restrictions. So this command essentially does the same as just copying the entire folder.
Finally
I have no clue how to proceed from here. Is there any other combination that I could try, or is there some sort of error in my code that anyone could help me with?
Thanks :)
For now I am running it with a compromise that works.
kubectl exec -n <namespace> <pod_name> -- find /foo -type f -newermt '4/1/2021 0:00:00' -exec tar -cf - {} + | tar xf - -C ./bar --strip-components=3
This however takes longer to run, since it looks at every file and not just the top-level folders.
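As a side note on Command 1's error: without an explicit shell, kubectl exec treats the whole quoted string as the name of a single program inside the container, which is the likely cause of the "no such file or directory: unknown" message. A sketch that keeps the directory-level filtering by running the pipeline through sh -c (assuming a POSIX shell is available in the container image):
kubectl exec -n <namespace> <pod_name> -- sh -c "find /foo -type d -maxdepth 3 -newermt '2/25/2021 0:00:00' -print0 | xargs -0 tar cf -" | tar xf - -C /bar
Since find only has to walk the tree down to the given depth, this should avoid the per-file scan of the compromise above.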
I need to back up a folder containing more than 2500 sub-folders and around 3 TB of data, then FTP it to a Windows-based FTP server.
As far as I know, the tar command is not able to do this on its own.
So is it possible to create a script to tar the sub-folders in batches of 500?
If yes, please share your command.
BTW: one way I would do it is just using mput in ftp without compression:
touch ftp_temp.sh
chmod +x ftp_temp.sh
vim ftp_temp.sh :
'''
#!/bin/bash
HOST='10.20.30.40'
USER='ftpuser'
PASSWD='ftpuserpasswd'
ftp -n -v $HOST << EOT
binary
user $USER $PASSWD
prompt
cd upload
mput *
bye
EOT
'''
The following pattern may give you an idea:
find . -type d -maxdepth 1 -print0 | xargs -0 -n 500 echo
find will locate the names of all directories in the current directory, while xargs will pass up to 500 of them at a time to another command (in the above example, the echo command).
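Building on that pattern, a rough sketch that archives the sub-folders in batches of 500 (the batch_N.tar names are arbitrary; it assumes directory names without spaces or newlines):
#!/bin/bash
# Tar the top-level sub-folders of the current directory, up to 500 per archive.
n=0
find . -maxdepth 1 -type d ! -name '.' | xargs -n 500 echo |
while read -r batch; do
    n=$((n + 1))
    # word splitting of $batch into the individual directory names is intentional
    tar -cf "batch_${n}.tar" $batch
done
Each resulting batch_N.tar can then be uploaded separately, e.g. with mput batch_*.tar in the ftp session above.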
I have a folder called documentaries on my Linux computer.
I have SSH access to seedbox (also Linux).
How do I find out which documentaries I have in both computers?
On the seedbox it's a flat file structure. Some documentaries are files, some are folders which contain many files, but everything is in the same folder.
For example:
data/lions_botswana.mp4
data/lions serengeti/S01E01.mkv
data/lions serengeti/S01E02.mkv
data/strosek_on_capitalism.mp4
data/something_random.mp4
Locally the structure is more organized:
documentaries/animals/lions_botswana.mp4
documentaries/animals/lions serengeti/S01E01.mkv
documentaries/animals/lions serengeti/S01E02.mkv
documentaries/economy/strosek_on_capitalism.mp4
documentaries/something_random.mp4
I am not looking for a command like diff; I am looking for a command like "same" (the opposite of diff), if such a command exists.
Based on the answer from Zumo de Vidrio, and my comment:
on one computer
cd directory1/; find | sort > filelist1
on the other
cd directory2/; find | sort > filelist2
copy them to one place and run:
comm -12 filelist1 filelist2
or as a one liner:
ssh user@host 'cd remotedir/; find|sort' | comm -12 - <(cd localdir/; find|sort)
Edit: With multiple folders this would look as follows
on one computer
cd remotedir/; find | sort > remotelist
on the other
cd localdir/subdir1/; find > locallist1
cd -;
cd localdir/subdir2/; find > locallist2
cd -;
#... and so on
sort locallist1 locallist2 > locallistall
copy them to one place and run:
comm -12 remotelist locallistall
or as a (now very long) one liner:
ssh user@host 'cd remotedir/; find|sort' | comm -12 - <({ cd localdir/subdir1/; find; cd - >/dev/null; cd localdir/subdir2/; find; cd - >/dev/null; cd localdir/subdir3/; find; } | sort)
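Since the local tree is organised into sub-folders while the seedbox is flat, comparing only the base names may be closer to what you want. A sketch assuming GNU find on both machines and reasonably unique file and folder names:
ssh user@host 'cd remotedir/ && find . -mindepth 1 -printf "%f\n" | sort' | comm -12 - <(cd localdir/ && find . -mindepth 1 -printf '%f\n' | sort)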
Export the list of remote files to a local file with:
ssh user@seedbox 'find /path/to/data -type f -execdir echo {} ";"' > remote.txt
Note: on Linux you have to use an absolute path to avoid the leading ./, or use "$PWD"/data instead.
Then grep the result of the find command:
find documentaries/ -type f | grep -wFf remote.txt
This will display only those local files which also exist on the remote.
If you would like to generate a similar list locally and compare the two files, try:
find "$PWD"/documentaries/ -type f -execdir echo {} ';' > local.txt
grep -wFf remote.txt local.txt
However, the above methods aren't reliable, since a file with the same name could have a different size. If the files had the same structure, you could use rsync to keep them up to date.
For a more reliable solution, you can use fdupes, which can find all files that exist in both directories by comparing file sizes and MD5 signatures.
Sample syntax:
fdupes -r documentaries/ data/
However, both directories need to be accessible locally, so you can use the sshfs tool to mount the remote directory locally. Then you can use fdupes to find all duplicate files. It also has an option to remove the duplicates (-d).
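A sketch of that combination (the mount point and paths are placeholders; it assumes sshfs and fdupes are already installed):
mkdir -p ~/mnt/seedbox
sshfs user@seedbox:/path/to/data ~/mnt/seedbox
fdupes -r documentaries/ ~/mnt/seedbox/
fusermount -u ~/mnt/seedbox    # unmount when finished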
Copy the ls output of each computer to the same folder and then run diff over them:
In your computer:
ls -R documentaries/ > documentaries_computer.txt
In seedbox:
ls -R documentaries/ > documentaries_seedbox.txt
Copy both files to the same location and execute:
diff documentaries_computer.txt documentaries_seedbox.txt
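If you have SSH access, a variant that avoids copying the listing files around (a sketch; it assumes the documentaries/ path also exists relative to the login directory on the seedbox):
ls -R documentaries/ > documentaries_computer.txt
ssh user@seedbox 'ls -R documentaries/' > documentaries_seedbox.txt
diff documentaries_computer.txt documentaries_seedbox.txt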
You can mount the remote folder using sshfs, then use diff -r to find the differences between them.
E.g.
sshfs user@seedbox-host:/path/to/documentaries documentaries/
diff -rs /local/path/documentaries/animals documentaries/ | grep identical
diff -rs /local/path/documentaries/economy documentaries/ | grep identical
I want to delete oldest files in a directory when the number of files is greater than 5. I'm using
(ls -1t | tail -n 3)
to get the oldest 3 files in the directory. This works exactly as I want. Now I want to delete them in a single command with rm. As I'm running these commands on a Linux server, cd-ing into the directory and deleting them there is not an option, so I need to use either find or ls together with rm to delete the oldest 3 files. Please help out.
Thanks :)
If you want to delete files from some arbitrary directory, then pass the directory name into the ls command. The default is to use the current directory.
Then use $() command substitution to pass the result of tail to rm, like this:
rm $(ls -1t dirname | tail -n 3)
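If the file names may contain spaces, a slightly safer variant is to read the names one per line and prefix the directory explicitly (a sketch; dirname is a placeholder):
ls -1t dirname | tail -n 3 | while IFS= read -r f; do rm -- "dirname/$f"; done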
rm $(ls -1t | tail -n 3) 2> /dev/null
ls may return a "No such file or directory" error message, which may cause rm to run unnecessarily with that value.
With the help of the following answer: find - suppress "No such file or directory" errors, and https://unix.stackexchange.com/a/140647/198423
find $dirname -type d -exec ls -1t {} + | tail -n 3 | xargs rm -rf
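An alternative that avoids parsing ls output altogether is to sort by modification time with GNU find (a sketch; it assumes GNU find, sort and xargs, and no newlines in the file names):
find "$dirname" -maxdepth 1 -type f -printf '%T@ %p\n' | sort -n | head -n 3 | cut -d' ' -f2- | xargs -r -d '\n' rm --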
I am trying to use xargs -a to read the contents of a file that has a list of filenames in it.
The directory I am working in looks like:
backups
file1.bak
file2.bak
file3.bak
bakfiles.txt
The file with the filenames in it: bakfiles.txt
bakfiles.txt contents:
file1.bak
file2.bak
file3.bak
So essentially I'm trying to copy file1.bak, file2.bak and file3.bak into the folder backups, but using the contents of bakfiles.txt to do so.
I tried:
xargs -a bakfiles.txt | cp {} backups
But I get the error:
cp: cannot stat `{}': No such file or directory
I should mention I also tried:
xargs -a bakfiles.txt cp {} backups
And get the error:
cp: target `file3.bak' is not a directory
This works for me on Windows 7 using the MKS Toolkit version of xargs:
cat bakfiles.txt | xargs -I '{}' cp '{}' backups/'{}'
From the following link I figured it out:
https://superuser.com/questions/180251/copy-list-of-files
Here's the command:
xargs -a bakfiles.txt cp -t backups
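A note on that: cp -t backups treats every name that xargs appends as a source file, which is why this form works. If the file names in bakfiles.txt might contain spaces, GNU xargs can be told to split on newlines instead (a sketch):
xargs -a bakfiles.txt -d '\n' cp -t backups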