I have already uploaded my VMDK file to the S3 bucket using the following command:
s3cmd put /root/Desktop/centos-ldaprad.vmdk --multipart-chunk-size-mb=10 s3://xxxxx
Now I would like to create an AWS instance from that same VMDK, which is available in the S3 bucket:
ec2-import-instance centos-ldaprad.vmdk -f VMDK -t t2.micro -a x86_64 -b xxxxx -o <XXXX_ACCESS_KEY_XXXX> -w <XXXX_SECRET_KEY_XXX> -p Linux --dont-verify-format -s 5 --ignore-region-affinity
But the command looks in the present working directory for the source VMDK file. I would be really grateful if you could explain how to point the source VMDK at the bucket instead of a local file.
Does the --manifest-url url option point to the S3 bucket? When I uploaded the file I had no idea whether any such manifest file was created. If one is created, where would it be?
Another thing: when I run ec2-import-instance as above, it searches for the VMDK in the present working directory and, if it is found, starts uploading. Is there any provision to upload in parts and to resume the upload in case of interruption?
It's not really the answer you were after, but I've attached the script I use to upload VMDKs and convert them to AMI images.
It uses ec2-resume-import, so you can restart the upload if it partially fails (see the sketch at the end of this answer).
http://pastebin.com/bD8c3gQu
It's worth pointing out that when I register the device I specify a block device mapping. This is because my images always include a separate boot partition and an LVM-based root partition.
--root-device-name /dev/sda1 -b /dev/sda=$SNAPSHOT_ID:10:true --region $REGION -a x86_64 --kernel aki-52a34525
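The resume step in that script boils down to something like the sketch below; the task ID is a placeholder for the one ec2-import-instance prints, and the exact option names may differ between versions of the EC2 API tools, so treat this as an outline rather than the script itself.
# If the upload is interrupted, resume it against the same disk image,
# using the task ID (import-i-...) that ec2-import-instance printed
ec2-resume-import centos-ldaprad.vmdk -t import-i-xxxxxxxx \
  -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY
# Poll the conversion task until it completes (EC2 API credentials come
# from the usual environment variables or the -O/-W options)
ec2-describe-conversion-tasks import-i-xxxxxxxx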
I have a Linux server where I have mounted blob storage, but it is a temporary mount: every time I restart the machine I have to run the command below manually.
sudo blobfuse /sfp/publicstorage134/blobstorage123 --tmp-path=/mnt/rec/mountpath --config-file=/user1/connection_sf.cfg -o attr_timeout=180 -o entry_timeout=120 -o negative_timeout=180 -o allow_other
How can I make this storage mount permanent instead of mounting it with this command after every restart? Is it possible to put this in /etc/fstab?
The recommendation is to create a script such as mount.sh (a sketch is shown after the fstab lines below), or you can add blobfuse directly to /etc/fstab.
Add the following line to /etc/fstab to use mount.sh:
/<path_to_blobfuse>/mount.sh </path/to/desired/mountpoint> fuse _netdev
OR
Add the following line to /etc/fstab to run blobfuse without mount.sh:
blobfuse /home/azureuser/mntblobfuse fuse delay_connect,defaults,_netdev,--tmp-path=/home/azureuser/tmppath,--config-file=/home/azureuser/connection.cfg,--log-level=LOG_DEBUG,allow_other 0 0
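If you go with mount.sh, the script just wraps the same blobfuse invocation from your question. A minimal sketch, assuming the mount point is passed to the script as its first argument (as the fstab entry above implies) and reusing your paths and config file:
#!/bin/bash
# mount.sh -- wraps the blobfuse call so it can be invoked from fstab;
# the mount point arrives as $1
blobfuse "$1" --tmp-path=/mnt/rec/mountpath \
    --config-file=/user1/connection_sf.cfg \
    -o attr_timeout=180 -o entry_timeout=120 -o negative_timeout=180 \
    -o allow_other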
I have a Python script, pscript.py, which takes the input parameters -c input.txt -s 5 -o out.txt. The files are all located in an AWS S3 bucket. How do I run it after creating an instance? Do I have to mount the bucket on the EC2 instance and execute the code, or use Lambda? I am not sure; reading so much AWS documentation is kind of confusing.
The command-line invocation is as follows:
python pscript.py -c input.txt -s 5 -o out.txt
You should copy the file from Amazon S3 to the EC2 instance:
aws s3 cp s3://my-bucket/pscript.py .
You can then run the command shown above.
Please note that, to access the object in Amazon S3, you will need to assign an IAM Role to the EC2 instance. The role needs sufficient permission to access the bucket/object.
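Putting those pieces together, a minimal sketch (assuming the bucket name my-bucket from above and that input.txt lives in the same bucket) would be:
# Copy the script and its input file from the bucket to the instance
aws s3 cp s3://my-bucket/pscript.py .
aws s3 cp s3://my-bucket/input.txt .
# Run the script
python pscript.py -c input.txt -s 5 -o out.txt
# Optionally push the result back to the bucket
aws s3 cp out.txt s3://my-bucket/out.txt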
I have a file system where files can be dropped onto an EC2 instance, and I have a shell script running that syncs the newly dropped files to an S3 bucket. I'm looking to delete the files from the EC2 instance once they are synced. Specifically, the files are dropped into the "yyyyy" folder.
Below is my shell code:
#!/bin/bash
# Watch the "yyyyy" folder recursively and sync newly created files to S3
inotifywait -m -r -e create "yyyyy" | while read -r NEWFILE
do
    if lsof | grep "$NEWFILE" ; then
        # File is still open (being written), just report it for now
        echo "$NEWFILE"
    else
        # Give the writer a moment to finish, then sync the folder
        sleep 15
        aws s3 sync yyyyy s3://xxxxxx-xxxxxx/
    fi
done
Instead of using aws s3 sync, you could use aws s3 mv (which is a 'move').
This will copy the file to the destination, then delete the original (effectively 'moving' the file).
It can also be used with --recursive to move a whole folder, or with --include and --exclude to select multiple files.
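For example, the sync line in your loop could be replaced with a move of everything under the folder; the bucket name here is just the placeholder from your script:
# Copy all files under yyyyy to the bucket, then delete the local copies
aws s3 mv yyyyy s3://xxxxxx-xxxxxx/ --recursive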
I'm looking for a solution to recursively get the size of all the folders in an Amazon S3 bucket that has a lot of nested folders.
The perfect example is the Linux du --si command:
12M ./folder1
50M ./folder2
50M ./folder2/subfolder1
etc...
I'm also open to any graphical tool. Is there any command or AWS API for that?
Use awscli
aws s3 ls s3://bucket --recursive --human-readable --summarize
s3cmd du -H s3://bucket-name
This command tells you the size of the bucket (human-readable). If you want the sizes of subfolders, you can list the folders in the bucket (s3cmd ls s3://bucket-name) and then iterate through them, as in the sketch below.
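A minimal sketch of that iteration, assuming the top-level folders show up as DIR entries in the s3cmd ls output:
# Print a human-readable size for each top-level folder in the bucket
for prefix in $(s3cmd ls s3://bucket-name | grep DIR | awk '{print $NF}'); do
    s3cmd du -H "$prefix"
done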
I'm on Ubuntu 14.04.
I'm trying to make an incremental backup of some files from my Ubuntu HD (ext4) to a Buffalo network HD (XFS).
My script mounts the Buffalo HD with this command:
sudo mount.cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword
After the disk is mounted, I try to make an incremental backup with rsync and --link-dest. Each day, when the script is launched, the folder names change according to the current date. Here is an example when the script is launched on 2017-03-09; it should check the 2017-03-08 backup to see whether files already exist:
sudo rsync -arR --link-dest="/mnt/Sauvegardes/racine_2017-03-08" --timeout=30 /home/flooder/Sauvegardes/ /mnt/Sauvegardes/racine_2017-03-09/
The problem: rsync doesn't seem to check the --link-dest destination. It copies all the files every day, so the disk will fill up quickly and the backup takes a very long time each day...
Do you have any ideas for me?
Should I mount the network drive another way?
Do I have the right rsync command?
I have mounted my network disk with this line instead, and it works well now. If a file already exists in the --link-dest directory, only a hard link is created, and the second pass is very quick!
sudo mount -t cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword,uid=1000,gid=1000
The uid and gid are those of my logged-in user.
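For completeness, the daily script under these assumptions (same paths as in the question, dates computed with GNU date) boils down to something like this sketch:
#!/bin/bash
# Daily incremental backup: hard-link unchanged files against yesterday's run
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
sudo mount -t cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes \
    -o username=myusername,password=mypassword,uid=1000,gid=1000
sudo rsync -arR --link-dest="/mnt/Sauvegardes/racine_${YESTERDAY}" --timeout=30 \
    /home/flooder/Sauvegardes/ "/mnt/Sauvegardes/racine_${TODAY}/"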