Is there any option to rename the encrypted file in gpg?
For example, when I encrypt a file (eg. file1.txt) I use the command
gpg --encrypt --sign --armor -r person@email.com file1.txt
What I want is to rename the encrypted file to something like enc-file1.txt
Is it possible?
The man page, which you really should have checked first, tells me
--output file
-o file
Write output to file.
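Applied to your example, that would be something like:
gpg --encrypt --sign --armor -o enc-file1.txt -r person@email.com file1.txt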
More information can be found in the GnuPG documentation.
I want to download all the files in a specific directory of my site.
Let's say I have 3 files in my remote SFTP directory
www.site.com/files/phone/2017-09-19-20-39-15
a.txt
b.txt
c.txt
My goal is to create a local folder on my desktop with ONLY those downloaded files. No parent files or parent directories needed. I am trying to get a clean report.
I've tried
wget -m --no-parent -l1 -nH -P ~/Desktop/phone/ www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
I got the remote directory structure recreated locally instead of just the files.
I want to get only a.txt, b.txt and c.txt inside ~/Desktop/phone/.
How do I tweak my wget command to get something like that?
Should I use anything else other than wget ?
Ihue,
Taking a shell-programmatic perspective, I would recommend you try the following command-line script; note I also added the citation so you can see the original thread.
wget -r -P ~/Desktop/phone/ -A txt www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
-r enables recursive retrieval. See Recursive Download for more information.
-P sets the directory prefix where all files and directories are saved to.
-A sets a whitelist for retrieving only certain file types. Strings and patterns are accepted, and both can be used in a comma separated list. See Types of Files for more information.
Ref: @don-joey
https://askubuntu.com/questions/373047/i-used-wget-to-download-html-files-where-are-the-images-in-the-file-stored
I need to be able to transfer data from memory into a remote file through SFTP. I originally had this working through SSH, and while working, discovered that I don't have SSH access to the remote location, only SFTP access.
Here is an example of the original SSH code I had:
echo "secret_data" | ssh root#localhost cat > secret_file;
This is exactly what I need, but in some form such as:
sftp root@localhost put $secret_data secret_file;
In principle, I need the data never to be stored in a file on the local machine, and to be handled entirely in memory.
Any replies appreciated. Thanks.
The multi-protocol client lftp explicitly supports reading content from a non-seekable file descriptor:
#!/bin/bash
# ^^^^ some features used here are not present in /bin/sh
lftp \
    -u remote_username \
    -e 'put /dev/stdin -o /tmp/secret' \
    sftp://remote_host \
    < <(printf '%s' "$secret_data")
Note the use of <() as opposed to <<< (the latter can, in some situations, be implemented via writing a temporary file; the former will be a /dev/fd-style redirection on modern Linux, or may be implemented with a FIFO on some other platforms).
If you really want to avoid storing the file on your hard drive, create a tmpfs partition in /etc/fstab (kept only in memory, never written to the drive), store the file there, and then use the method you described.
I have set up /tmp and /var/log this way to avoid writing all over the SSD drive:
# <file system> <mount point> <type> <options> <dump> <pass>
none /tmp/ tmpfs size=15% 0 0
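If you only need this once, you can also mount a tmpfs by hand instead of adding an fstab entry. A minimal sketch, where the /mnt/ram mount point and the 16m size are assumptions you can adjust:
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=16m tmpfs /mnt/ram
printf '%s' "$secret_data" > /mnt/ram/secret_file            # exists only in RAM, never on disk
echo 'put /mnt/ram/secret_file secret_file' | sftp -b - root@localhost
rm /mnt/ram/secret_file && sudo umount /mnt/ram              # drop the in-memory copy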
If it's permissible to use your own, hacked copy of sftp, you can use
echo "my secret data 2" | (exec 6<&0 ; ( echo put /proc/self/fd/6 /tmp/secret | sftp user#remote_host))
The exec redirects stdin, which carries your secret, to another file descriptor, in this case 6. The second echo issues the sftp command to execute. It uses the /proc magic file system: /proc/self resolves to /proc/<pid> of the process that opens it, and /proc/<pid>/fd/6 names file descriptor 6, so sftp copies the data it reads from there to a file on your remote host.
It would be much easier if you could use a hacked version of scp; this would read
echo "my secret data 2" | scp /proc/self/fd/0 user#remote_host:/tmp/secret
Now for the hack: sftp and scp make sure that the local file is a regular file, but the /proc/self/fd/... file descriptors are pipes. You need to disable the checks in the source code.
For sftp, you would modify the file sftp-client.c: find all occurrences of S_ISREG(...) and replace them with 1.
This is quick and dirty and might leave you open to security vulnerabilities if you do not check OpenSSH security announcements regularly and recompile when necessary. A better way would be to use a scripting language with a well-maintained SFTP library.
I need to back up a large server to FTP storage. I can tar all the files, I can upload via FTP, and I can split the tar file into many small files.
But the problem is that I can't do these three steps in one go: I can tar straight to FTP, and I can tar with split, but I can't combine tar, split and FTP.
The OS is CentOS 6.2
The files are more than 800 GB in total.
Thanks
To tar, split and ftp a directory with one command line you need the following:
The split command writes to standard output only, so you can't pass each piece on to another command such as ftp as it is produced. To do that you need to patch split so that it supports the --filter option, which hands each output file to a command "on the fly" (without having to save it to the hard disk) by setting the $FILE environment variable to the output file name (the file names would be x00, x01, x02 ...).
1) Here is the split patch: http://lists.gnu.org/archive/html/coreutils/2011-01/txt3j8asgk8WH.txt
After patching the split command, the man page will show that the --filter option is available in your split command.
2) Install the ncftp FTP client, which lets you connect to the FTP server and put a file in a single command, without waiting at a prompt like an ordinary ftp client; this makes ncftp easy to integrate with scripts and so on.
Here is the command that compresses the /home directory with tar, splits it into 100 MB pieces, and transfers each piece through FTP:
tar cvz -i /home | split -d -b 100m --filter 'ncftpput -r 10 -F -c -u ftpUsername -p ftpPassword ftpHost $FILE'
Note that we used ncftpput, which uploads $FILE to the FTP server in a single command as well.
Additional ncftpput options:
-r 10: retry the connection up to 10 times after losing the connection to the FTP server.
-F: To use passive mode.
-c: takes the input from stdin.
To merge the split files (x00, x01, x02, x03 ...) back together so the archive can be extracted, use the following command:
cat x* > originalFile.tar
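Since the pieces came from a gzip-compressed tar stream (the z flag above), you can also skip the intermediate file and pipe the concatenation straight into tar; the destination directory /restore here is only a placeholder:
cat x* | tar xzvf - -C /restore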
You can make a shell script and use
tar zcf - /usr/folder | split -b 30720m - /usr/archive.tgz
and then upload the pieces to FTP afterwards, because once you are streaming tar straight onto FTP there is no way to split it.
Can someone please explain how to use ">" and "|" in Linux commands, and convert these three lines into one line of code?
mysqldump --user=*** --password=*** $db --single-transaction -R > ${db}-$(date +%m-%d-%y).sql
tar -cf ${db}-$(date +%m-%d-%y).sql.tar ${db}-$(date +%m-%d-%y).sql
gzip ${db}-$(date +%m-%d-%y).sql.tar
rm ${db}-$(date +%m-%d-%y).sql (after conversion I guess this line will be useless)
The GNU tar program can itself do the compression normally done by gzip. You can use the -z flag to enable this. So the tar and gzip could be combined into:
tar -zcf ${db}-$(date +%m-%d-%y).sql.tar.gz ${db}-$(date +%m-%d-%y).sql
Getting tar to read from standard input for archiving is not a simple task but I would question its necessity in this particular case.
The intent of tar is to package up a multitude of files into a single archive file. Since it's only one file you're processing (the output stream from mysqldump), you don't need to tar it up; you can just pipe it straight into gzip itself:
mysqldump blah blah | gzip > ${db}-$(date +%m-%d-%y).sql.gz
That's because gzip will compress standard input to standard output if you don't give it any file names.
This removes the need for any (possibly very large) temporary files during the compression process.
You can use the following script:
#!/bin/sh
USER="***"
PASS="***"
DB="***"
mysqldump --user=$USER --password=$PASS $DB --single-transaction -R | gzip > ${DB}-$(date +%m-%d-%y).sql.gz
You can learn more about "|" here: http://en.wikipedia.org/wiki/Pipeline_(Unix). In short, this construction sends the output of the mysqldump command to the standard input of the gzip command, so you are connecting the output of one command to the input of the other via a pipeline.
I don't see the point in using tar: you have just one file, and for compression you call gzip explicitly. Tar is used to archive/pack multiple files into one.
Your command line should be (the dump command is shortened, but I guess you will get it):
mysqldump .... | gzip > filename.sql.gz
To append the commands together in one line, I'd put && between them. That way if one fails, it stops executing them. You could also use a semicolon after each command, in which case each will run regardless if the prior command fails or not.
You should also know that tar will do the gzip for you with a "z" option, so you don't need the extra command.
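For instance, combining those two points, your original steps could be chained like this (using the same names as in your question; the temporary .sql file only exists until the archive is written):
mysqldump --user=*** --password=*** $db --single-transaction -R > ${db}-$(date +%m-%d-%y).sql && \
tar -zcf ${db}-$(date +%m-%d-%y).sql.tar.gz ${db}-$(date +%m-%d-%y).sql && \
rm ${db}-$(date +%m-%d-%y).sql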
Paxdiablo makes a good point that you can just pipe mysqldump directly into gzip.
The directory where users' backups of their files are kept is a directory that they can access and upload to.
If they get the naming scheme right and cause an error on purpose that makes the system try to restore the last 5 or so backups, they could potentially put files wherever they want on the server by uploading a crafted gzipped tar whose entries use a path such as ../../../../../etc/passwd, or whatever may be the case.
What checks can I perform in Bash to prevent this from happening programmatically?
The following command is what is run by root (it gets run by root because I use Webmin):
tar zxf /home/$USER/site/backups/$BACKUP_FILE -C /home/$USER/site/data/
Where $BACKUP_FILE will be the name of the backup it's attempting to restore
edit 1:
This is what I came up with so far. I am sure this way could be improved a lot:
# List the archive members; the verbose listing puts the file name at column 49 onwards
CONTENTS=$(tar -ztvf /home/$USER/site/backups/$BACKUP_FILE | cut -c49-200)
for FILE in $CONTENTS; do
    # Reject any member name containing ".." or starting with "/"
    if [[ $FILE =~ \.\. ]] || [[ $FILE =~ ^\/ ]]; then
        echo "Illegal characters in contents"
        exit 1
    fi
done
tar zxf /home/$USER/site/backups/$BACKUP_FILE -C /home/$USER/site/data/
exit 0
I am wondering if disallowing names that begin with / and names that contain .. will be enough? Also, is the file name always starting around character 50 in the output of tar -ztvf?
Usually tar implementations strip a leading / and don't extract files with .., so all you need to do is check your tar manpage and don't use the -P switch.
Another thing tar should protect you from is symlink attacks: a user creates the file $HOME/foo/passwd, gets it backed up, removes it and instead symlinks $HOME/foo to /etc, then restores the backup. Signing the archive would not help with this, although running with user privileges would.
Try restoring every backup as the backup owner using su:
su "$username" -c 'tar xzvf ...'
(You might also need the -l option in some cases.)
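Combined with the restore command from the question, that might look like this (variable names follow the question; the quotes matter so the whole tar command is passed to -c):
su "$USER" -c "tar zxf /home/$USER/site/backups/$BACKUP_FILE -C /home/$USER/site/data/"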
Make sure that you really understand why the program needs to run as root. The process you are running needs no privilege other than read access to one directory and write access to another, so even the privileges of an ordinary user account are overkill, not to mention root. This is just asking for trouble.
I'm assuming the backups are generated by your scripts, and not by the user. (Extracting arbitrary user-created tar files is never, ever a good idea.)
If your backups have to be stored in a directory writable by users, I would suggest you digitally sign each backup file so that its integrity can be validated. You can then verify that a tar file is legitimate before using it for restoration.
An example using GPG:
gpg --armor --detach-sign backup_username_110217.tar.gz
That creates a signature file backup_username_110217.tar.gz.asc which can be used to verify the file using:
gpg --verify backup_username_110217.tar.gz.asc backup_username_110217.tar.gz
Note that to run that in a script, you'll need to create your keys without a passphrase. Otherwise, you'll have to store the passphrase in your scripts as plain text, which is a horrid idea.
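A sketch of how the restore step could then refuse unverified archives before extracting (paths and variable names follow the question, and the detached-signature naming follows the example above):
BACKUP=/home/$USER/site/backups/$BACKUP_FILE
if gpg --verify "$BACKUP.asc" "$BACKUP"; then
    tar zxf "$BACKUP" -C /home/$USER/site/data/
else
    echo "Signature verification failed; refusing to restore $BACKUP" >&2
    exit 1
fi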