I am using the following command to download a file from S3, decompress it, and split it into smaller gzipped chunks.
aws s3 cp "${INFILE}" - | gunzip | split -b 1000m --filter "gzip > ./logs/cdn/2021-04-14/\$FILE.gz | echo \"\$FILE.gz\""
This works fine locally and saves the chunks to the local filesystem.
I am not sure how to upload the chunks directly to S3 without saving them locally once they are generated.
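In case it helps, here is a minimal sketch of the same pipeline where each chunk is streamed straight back to S3 from inside the split --filter instead of being written to disk (the s3://my-bucket/logs/cdn/2021-04-14/ destination is just a placeholder):
aws s3 cp "${INFILE}" - | gunzip | split -b 1000m --filter "gzip | aws s3 cp - \"s3://my-bucket/logs/cdn/2021-04-14/\$FILE.gz\" && echo \"\$FILE.gz\""
aws s3 cp accepts - as the source, so each compressed chunk is uploaded from stdin without touching the local filesystem.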
To have truly off-site and durable backups of my ZFS pool, I would like to store ZFS snapshots in Amazon Glacier. The data would need to be encrypted locally, independently of Amazon, to ensure privacy. How could I accomplish this?
An existing snapshot can be sent to an S3 bucket as follows:
zfs send -R <pool name>@<snapshot name> | gzip | gpg --no-use-agent --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg
or for incremental back-ups:
zfs send -R -I <pool name>@<snapshot to do incremental backup from> <pool name>@<snapshot name> | gzip | gpg --no-use-agent --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg
This command takes an existing snapshot, serializes it with zfs send, compresses it, and encrypts it with gpg using a passphrase. The passphrase must be on the first line of the ./passphrase file.
Remember to back up your passphrase file separately in multiple locations! If you lose access to it, you'll never be able to get to your data again!
This requires:
A pre-created Amazon S3 bucket
awscli installed (pip install awscli) and configured (aws configure).
gpg installed
Lastly, S3 lifecycle rules can be used to transition the S3 object to Glacier after a pre-set amount of time (or immediately).
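For example, a lifecycle rule that moves every object in the bucket to Glacier immediately could look roughly like this (the rule ID and the empty prefix are placeholders to adapt):
aws s3api put-bucket-lifecycle-configuration --bucket <bucketname> --lifecycle-configuration '{"Rules":[{"ID":"zfs-to-glacier","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":0,"StorageClass":"GLACIER"}]}]}'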
For restoring:
aws s3 cp s3://<bucketname>/<filename>.zfs.gz.gpg - | gpg --no-use-agent --passphrase-file ./passphrase -d - | gunzip | sudo zfs receive <new dataset name>
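Note that once the object has transitioned to Glacier, it has to be restored to S3 before the copy above will work; something along these lines (the 7-day availability window and the Standard retrieval tier are arbitrary choices):
aws s3api restore-object --bucket <bucketname> --key <filename>.zfs.gz.gpg --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'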
I want to back up the files in /etc/httpd on a remote server with IP=xxx to my local /tmp directory. I have the password and username for it. The backup should be compressed and named httpd-day-month-year-hour-minute-sec.tar.bz2. I tried
FILE=httpd.sql. `date +"%d%m%Y_%H%M%S"`
DBSERVER=xxx
DATABASE=/etc/httpd
USER=uzivatel
PASS=password
mysqldump --opt --protocol=TCP --user=${USER} --password${PASS} --host=${DBSERVER} ${DATABASE} > ${FILE}
bunzip $FILE
echo"${FILE}.bz hotovo:"
ls-l ${FILE.bz}
I know there are many mistakes, but I am fairly new to scripting. The problems are "FILE: command not found" and, on the mysqldump line, "ambiguous redirect". Thanks.
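For what it's worth, a minimal sketch of what the intended backup could look like using ssh and tar instead of mysqldump (assuming tar and bzip2 are available on the remote host, that the account can read /etc/httpd, and that ssh is allowed to prompt for the password):
# timestamped archive name in /tmp (day-month-year-hour-minute-sec)
FILE="/tmp/httpd-$(date +%d-%m-%Y-%H-%M-%S).tar.bz2"
SERVER=xxx
USER=uzivatel
# create the bzip2-compressed archive on the remote host and stream it into the local file
ssh "${USER}@${SERVER}" 'tar -cjf - /etc/httpd' > "${FILE}"
echo "${FILE} done:"
ls -l "${FILE}"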
I have already uploaded my VMDK file to the S3 bucket using the following command:
s3cmd put /root/Desktop/centos-ldaprad.vmdk --multipart-chunk-size-mb=10 s3://xxxxx
Now I would like to create an AWS instance from the same VMDK available in the S3 bucket:
ec2-import-instance centos-ldaprad.vmdk -f VMDK -t t2.micro -a x86_64 -b xxxxx -o <XXXX_ACCESS_KEY_XXXX> -w <XXXX_SECRET_KEY_XXX> -p Linux --dont-verify-format -s 5 --ignore-region-affinity
But it looks in the present working directory for the source VMDK file. I would be really grateful if you could guide me on how to point it at the source VMDK in the bucket instead of a local source.
Does the --manifest-url option point to the S3 bucket? When I uploaded the file, I had no idea whether any such manifest file was created. If it is created, where would it be?
Also, when I run ec2-import-instance as above, it searches for the VMDK in the present working directory and, if found, starts uploading. Is there any provision to upload in parts, and to resume in case of interruption?
It's not really the answer you were after, but I've attached the script I use to upload VMDKs and convert them to AMI images.
This uses ec2-resume-import, so you can restart it if an upload partially fails.
http://pastebin.com/bD8c3gQu
It's worth pointing out that when I register the device I specify a block device mapping. This is because my images always include a separate boot partition and an LVM-based root partition.
--root-device-name /dev/sda1 -b /dev/sda=$SNAPSHOT_ID:10:true --region $REGION -a x86_64 --kernel aki-52a34525
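Put together, the registration call is roughly of this shape (the -n image name is a made-up placeholder; $SNAPSHOT_ID and $REGION come from earlier in the script, and AWS credentials are assumed to be configured in the environment):
ec2-register -n "my-lvm-image" --root-device-name /dev/sda1 -b /dev/sda=$SNAPSHOT_ID:10:true --region $REGION -a x86_64 --kernel aki-52a34525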
I connected to an Amazon EC2 Linux instance over SSH using a private key. I am trying to copy an entire folder from that instance to my local Linux machine.
Can anyone tell me the correct scp command to do this?
Or do I need something more than scp?
Both machines are Ubuntu 10.04 LTS
Another way to do it is:
scp -i "insert key file here" -r "insert ec2 instance here" "your local directory"
One mistake I made was scp -ir. The key has to be after the -i, and the -r after that.
so
scp -i amazon.pem -r ec2-user@ec2-##-##-##:/source/dir /destination/dir
Call scp from the client machine with the recursive option:
scp -r user@remote:src_directory dst_directory
scp -i {key path} -r ec2-user@54.159.147.19:{remote path} {local path}
For EC2 Ubuntu,
go to the directory containing your .pem file and run:
scp -i "yourkey.pem" -r ec2user#DNS_name:/home/ubuntu/foldername ~/Desktop/localfolder
You could even use rsync.
rsync -aPSHiv remote:directory .
This is how I copied a file from an Amazon EC2 instance to a local Windows PC:
pscp -i "your-key-pair.pem" username@ec2-ip-compute.amazonaws.com:/home/username/file.txt C:\Documents\
On Linux, to copy a directory:
scp -i "your-key-pair.pem" -r username#ec2-ip-compute.amazonaws.com:/home/username/dirtocopy /var/www/
Connecting to Amazon requires key pair authentication.
Note:
The username is most probably ubuntu.
I use sshfs to mount the remote directory on the local machine, and then you can do whatever you want with it. Here is a small guide; the commands may change on your system.
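As a rough example of what that can look like (the key path, hostname, and mount point below are placeholders, and sshfs may need to be installed first):
mkdir -p ~/ec2-mount
sshfs -o IdentityFile=~/amazon.pem ubuntu@ec2-xx-xx-xx.compute.amazonaws.com:/home/ubuntu ~/ec2-mount
# work with the files as if they were local (cp, rsync, a file manager, ...), then unmount
fusermount -u ~/ec2-mount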
This is also important and related to the above answer.
Copying all files in a local directory to EC2. This is a Unix answer.
Copy the entire local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles ubuntu#ec2.amazonaws.com:/home/dir
Copy only the contents of the local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles/* ubuntu#ec2.amazonaws.com:/home/dir
I do not like to use scp for a large number of files, as it does a 'transaction' for each file. The following is much better:
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -cf - remote_dir' | tar -xf -
You can add the z flag to tar to compress on the server and decompress on the client.
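For example, the same pipeline with compression added:
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -czf - remote_dir' | tar -xzf -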
One way I found on YouTube is to connect a local folder to a shared folder on the EC2 instance. Please view the video for the full instructions. The sharing is instantaneous.