I need to back up a folder containing more than 2,500 subfolders and around 3 TB of data, then FTP it to a Windows-based FTP server.
As far as I know, the tar command is not able to handle this in one go.
So, is it possible to create a script to "tar" the subfolders in batches of 500?
If yes, please share your command.
BTW: one way I would do it is just using mput in ftp, without compression:
touch ftp_temp.sh
chmod +x ftp_temp.sh
vim ftp_temp.sh :
'''
#!/bin/bash
HOST='10.20.30.40'
USER='ftpuser'
PASSWD='ftpuserpasswd'
ftp -n -v $HOST << EOT
binary
user $USER $PASSWD
prompt
cd upload
mput *
bye
EOT
'''
The following pattern may give you an idea:
find . -maxdepth 1 -type d -print0 | xargs -0 -n 500 echo
find will locate the names of all directories directly under the current directory (including . itself, which you may want to exclude), while xargs will pass up to 500 of them at a time to another command (in the example above, the echo command).
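If you do want one archive per batch, here is a minimal sketch building on that pattern (the backup_N.tar names are an assumption; adjust the path and batch size to taste):
#!/bin/bash
# archive the subfolders of the current directory, 500 per tarball
n=0
batch=()
while IFS= read -r -d '' dir; do
    batch+=("$dir")
    if (( ${#batch[@]} == 500 )); then
        tar -cf "backup_$n.tar" "${batch[@]}"
        batch=()
        n=$((n + 1))
    fi
done < <(find . -mindepth 1 -maxdepth 1 -type d -print0)
# archive whatever is left over (fewer than 500 directories)
if (( ${#batch[@]} > 0 )); then
    tar -cf "backup_$n.tar" "${batch[@]}"
fi
Each backup_N.tar could then be uploaded with an ftp/mput script like the one above (in binary mode).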
How can I send files from a Linux machine to an sftp server that were created 1 minute ago?
I have tried using find, but I'm not sure how to pipe its output through to sftp.
I have tried something like this:
find | sftp {user}@{host}:{remote_dir} <<< $'put {local_file_path}'
But I don’t know how to pipe the files created one minute ago into the sftp command.
I cannot install additional packages as the Linux machine is not connected to the internet.
Assuming you don't have strange file names:
$ find -mmin -10 | sed 's/^/put /' | sftp -b - sorin@192.168.0.14
sftp> put ./16/test00116.gz
sftp> put ./20200113.gz
sftp> put ./log20200128.gz
-b - : read the batch file from stdin.
sed 's/^/put /' : prefix each file name with the put command.
A bit more robust, removing the uploaded file before trying to put the new one, and making sure sftp doesn't exit on error:
$ find -mmin -10 -exec basename -- "{}" \; -print | sed '1~2s/^/-rm /;0~2s/^/-put /' | sftp -b - sorin@192.168.0.14
sftp> -rm existingfile20200102.gz
sftp> -put ./2/existingfile20200102.gz
sftp> -rm newfile20200121.gz
Couldn't delete file: No such file or directory
sftp> -put ./21/newfile20200121.gz
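For the literal one-minute window in the question, and to keep directories out of the batch, a sketch along the same lines (user@host is a placeholder; note that find has no creation time, so -mmin matches modification time, which is what matters for newly written files):
find . -type f -mmin -1 | sed 's/^/put /' | sftp -b - user@host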
My script should do a simple trick: unpack the files from a tar archive as 'root', and process them on the fly as a different user. Something like this:
./script # starts with 'root' user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#!/bin/bash
TAR="some.tar"
FILES=$(find /home/user \( -name "xxx*.yyy" \) | sort -n | tail -n 3)
tar -xpf $TAR --overwrite --same-owner -P
sudo -u user bash <<- EOF # switch to 'user'
sh process_unpacked_$FILES.sh
EOF
It works, but it doesn't catch the freshly unpacked files (they do appear in the directory) on the fly. If I execute the same script again, once the files have already been unpacked and exist in the directory, the script goes through and finishes the trick.
The files in the tar belong to 'user'. How can I make the script work in a single execution?
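One likely explanation is that FILES is computed before the archive is unpacked, so the first run never sees the new files; by the second run they already exist, which is why it works then. A minimal sketch that simply moves the find after the extraction (the xxx*.yyy pattern and the process_unpacked_*.sh name are carried over from the question and remain assumptions):
#!/bin/bash
TAR="some.tar"
# unpack first, as root, preserving ownership
tar -xpf "$TAR" --overwrite --same-owner -P
# only now look for the freshly unpacked files
FILES=$(find /home/user -name "xxx*.yyy" | sort -n | tail -n 3)
# switch to 'user' to process them
sudo -u user bash << EOF
sh process_unpacked_$FILES.sh
EOF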
I'm trying to run this script in the terminal but it's not working and says "permission denied". scriptEmail is the filename.
% find . -type d -exec ./scriptEmail {} \;
scriptEmail is written as follows:
# !/bin/bash
# Mail Script
find gang-l -type f -name "*" -exec sh -c ' file = "$0" java RemoveHeaders "$file" > processed/$file ' {} ';'
My read/write permissions are:
-rwxr-xr-x
As for permissions:
Check that your shebang is at the very top of your file, and that it starts exactly with #!; # ! will not work.
Check that your files are given execute permissions; chmod 750 scriptEmail will do.
Check that your file uses UNIX newlines -- with DOS newlines, your shebang may have a hidden character making it point to an interpreter which doesn't actually exist.
Check that the directory your file is stored in is in a directory where executable scripts are allowed (not mounted with the noexec flag, or in a SELinux context disallowing execution).
If your mount point is noexec or your ability to create executable scripts is blocked by SELinux or similar, then use find . -type d -exec bash ./scriptEmail {} \; to explicitly specify an interpreter rather than attempting to execute your script.
Second: Since you're executing your script with find already -- and using that to recurse through directories -- you don't need a second find inside (which would have you potentially operating on processed/dirA/dirB/file as well as processed/dirB/file and processed/file -- with errors for all of these where the directory doesn't exist).
#!/bin/sh
cd "$1" || exit # if we can't cd to directory given in argument, exit.
mkdir -p processed || exit # if we can't create our output directory, exit.
for f in *; do # ...iterate through all directory contents...
[ -f "$f" ] || continue # ...if they aren't files, skip them...
java RemoveHeaders "$f" >processed/"$f" # run the processing for one item
done
Try
sudo find . -type d -exec ./scriptEmail {} \;
I know that writing these scripts is really not my strong side.
I need a shell script which recursively packs every single file in its folder into the .bz2 format, because I have a lot of files and doing this manually takes me hours.
For example, here are a few of the files (there are many more than in this example):
/home/user/data/file1.yyy
/home/user/data/file2.xxx
/home/user/data/file3.zzz
/home/user/data/file4.txt
/home/user/data/file5.deb
/home/user/data/moredata/file1.xyz
/home/user/data/muchmoredata/file1.xyx
And I need them all formated into .bz2 like this:
/home/user/data/file1.yyy.bz2
/home/user/data/file2.xxx.bz2
/home/user/data/file3.zzz.bz2
/home/user/data/file4.txt.bz2
/home/user/data/file5.deb.bz2
/home/user/data/moredata/file1.xyz.bz2
/home/user/data/muchmoredata/file1.xyx.bz2
Another thing that would be great: at the end, the script should run chown -R example:example /home/user/data once.
I hope you can help me.
bzip2 will accept multiple files as arguments on the command line. To solve your specific example, I would do
cd /home/user/
find . -type f | egrep -v '\.bz2' | xargs bzip2 -9 &
This will find all files under /home/user, exclude any already existing .bz2 files from processing, and then send the remaining list via xargs to bzip2. The -9 gives you maximum compression (but takes more time). Because xargs splits the list into as many bzip2 invocations as needed, there is no practical limit on the number or length of filenames it can process.
The & character means "run all of this in the background". The command prompt returns to you immediately and you can continue other work, but don't expect all the files to be compressed for a while. You'll also see job-control messages like '[1] 12345' when the job starts and '[1]+ Done ...' when it finishes.
As you asked for a script, we can also do this
#!/bin/bash
if [[ ! -d "$1" ]] ; then
echo "usage: b2zipper /path/to/dir/to/search" 1>&2
exit 1
fi
find "$1" -type f | egrep -v '\.bz2' | xargs bzip2 -9 &
Save this as b2zipper, and then make it executable with
chmod +x b2zipper
IHTH
To build on the accepted answer, an alternative would be:
find /path/to/dir -type f -exec bzip2 {} \;
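If some file names contain spaces, a null-delimited variant is safer; it also skips files that are already compressed and ends with the chown the question asked for (the path and the example:example owner are taken from the question):
#!/bin/bash
# compress every regular file that is not already a .bz2, surviving odd file names
find /home/user/data -type f ! -name '*.bz2' -print0 | xargs -0 bzip2 -9
# then fix ownership once, at the end
chown -R example:example /home/user/data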
So I have a huge folder full of subfolders with tons of files, and I add files to it all the time.
I need a subfolder in the root of that folder with symlinks to the last 10-20 files added, so that I can quickly find the things I recently added. This is located on a NAS, but I have a Linux box running Arch connected through NFS, so I assume the best way is to run a bash script with a find command followed by a loop of ln -sf, but I can't do it safely without help.
Something like this is required:
mkdir -p subfolder
find /dir/ -type f -printf '%T@ %p\n' | sort -n | tail -n 10 | cut -d' ' -f2- | while IFS= read -r file ; do ln -s "$file" subfolder ; done
This will create symlinks in subfolder pointing to the 10 most recently modified files in the directory tree rooted at /dir/.
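Since the question mentions ln -sf, a small variation (same assumed /dir/ and subfolder) that can be re-run safely, overwriting links that already exist and keeping the 20 newest files:
mkdir -p subfolder
find /dir/ -type f -printf '%T@ %p\n' | sort -n | tail -n 20 | cut -d' ' -f2- |
while IFS= read -r file ; do
    ln -sf "$file" subfolder/   # -f replaces an existing link with the same name
done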
You could just create a shell function like:
recent() { ls -lt ${1+"$@"} | head -n 20; }
which will give you a listing of the 20 most recent items in the specified directories, or the current directory if no arguments are given.
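Usage is then simply (the directory is a placeholder):
recent /home/user/data
Note that ls -lt sorts by modification time, so this lists the most recently changed entries rather than strictly the last ones added.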