Use a Rackspace cloud image on Amazon EC2? - linux

I have a Rackspace (UK) cloud instance, running Ubuntu 11.10, which took 10+ man-hours to set up: installing all the packages (and custom application code) I need, tightening security, testing, etc.
I can take a snapshot of that and start another instance on Rackspace UK. That worked nicely. Because I keep /etc under git source control, I could see that the files the start-up process altered were:
network files (IP address, default gateway)
root password
/etc/hostname
About the only post-startup steps I needed were adding a DNS entry and running dpkg-reconfigure postfix to set the new machine name.
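Concretely, those fix-ups amount to something like the following (the hostname "web2" is a placeholder; on Ubuntu 11.10 the name lives in /etc/hostname):

    # set the new machine name
    echo web2 | sudo tee /etc/hostname
    sudo hostname web2
    # regenerate Postfix's configuration with the new mail name
    sudo dpkg-reconfigure postfix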
I'm assuming, but haven't tested yet, that I could use this image with Rackspace U.S. But what about with Amazon EC2 (or any other cloud provider, for that matter)? Can I just download the image, upload it to Amazon S3, and start new instances with it? If not, is there a utility I can run to convert from one Linux image format to another?

The poor man's approach is to use rsync between servers. Rackspace has a 3-part guide on this, starting here:
http://www.rackspace.com/knowledge_center/index.php/Migrating_a_Linux_Server_From_Command_Line_Stage_1
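A minimal sketch of what that approach boils down to (the target host and the exclude list are illustrative; the guide's actual list is longer):

    # boot a stock Ubuntu instance at the target provider, then from the source box:
    sudo rsync -aHxv --numeric-ids \
        --exclude=/boot --exclude=/proc --exclude=/sys --exclude=/dev \
        --exclude=/etc/fstab --exclude=/etc/network \
        / root@new-server.example.com:/

The excludes preserve the target's kernel, device nodes, and network identity, which mirrors the per-instance files listed above.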

Related

How to enable cloud-init on first boot when deploying VMs on OpenStack using .raw

I am converting a custom-built .iso to .raw. I deployed a VM on OpenStack using this .raw, but I was unable to ssh into the machine.
Using the GUI console, I was able to log in to this OpenStack VM with a username and password. Once logged in, I restarted the cloud-init service, and that resolved the ssh issue; I can now ssh into the machine just fine.
Now the question is: how do I make sure that enabling and restarting the cloud-init service happen as part of the first boot when deploying VMs on OpenStack?
I know I can pass a script when deploying VMs through the OpenStack web UI, but the requirement is that this be part of the image itself. That is, I should be able to deploy a VM using the .raw alone, with the enabling and starting of the cloud-init service baked into the .raw image itself.
I am new to Linux and IT in general. Any suggestions are much appreciated.
Welcome to SO.
Maybe the key point is that cloud-init (an open-source package originating from Ubuntu and available on various Linux distributions) was installed or configured incorrectly. You should make sure that cloud-init was working in the .iso image before converting it to .raw format.
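If the issue is simply that the units are not enabled in the image, one way to bake the fix into the .raw itself, assuming libguestfs-tools on the build host and a systemd-based guest (unit names vary slightly by distribution), is:

    # enable cloud-init's boot stages inside the image, offline
    virt-customize -a custom.raw \
        --run-command 'systemctl enable cloud-init-local cloud-init cloud-config cloud-final'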
how do I make sure that enabling and restarting...
If you want to find out, you could create an instance with the --user-data parameter and check whether the script runs.
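For example (image, flavor, and network names are placeholders), pass a script with a visible side effect and check for it after first boot; if it ran, cloud-init is working:

    # firstboot.sh could simply do: touch /tmp/cloud-init-ran
    openstack server create --image custom-raw --flavor m1.small \
        --network private --user-data firstboot.sh test-vm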

Proftpd incredibly slow SFTP and FTP connection on EC2

I was using DigitalOcean for a long time, but I wanted to give Amazon EC2 machines a shot. I created my environment, but when I set up proftpd and configured it as an SFTP server, it transferred files incredibly slowly: bytes per second, not even kbps. It was the same for FTP. I only had this issue on the Amazon EC2 server; it never happened on DigitalOcean. I tried everything from Google, but nothing helped.
Is there any solution?

Migrate instance from EC2 to Google Cloud

I have a running Linux instance on Amazon EC2. I'd like to migrate this instance to a Google Cloud VM instance with the minimum of work, a kind of copy-and-paste solution. How can I do this?
You can import an Amazon Machine Image (AMI) into Google Compute Engine, but it's not just one operation. There is a section in the Google Compute Engine documentation that shows the steps you need to follow in order to achieve your goal.
I hope it helps.
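As a rough sketch of where those steps lead (image name and Cloud Storage path are placeholders, and the exact flags depend on your gcloud version):

    # import a disk image exported from EC2 and uploaded to Cloud Storage
    gcloud compute images import my-migrated-image \
        --source-file=gs://my-bucket/exported-disk.vmdk --os=ubuntu-1604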
With GCP you can use the import feature, which forwards to the CloudEndure site, where you can migrate your existing server, virtual (on cloud or not) or even physical, to GCP.
You can also import EC2 instances running the Amazon Linux AMI.
CloudEndure also provides live migration: it keeps replicating continuously as long as you don't power on your migrated VM on GCP.
It can also be used for a one-time migration.
Amazon Linux AMIs can still be updated once on GCP, so no problems there.
Migration takes a few hours, depending on the size of the source machine. You might need to change the hard drive paths in /etc/fstab to reflect their names on GCP (/dev/xvdf --> /dev/sdb, for example).
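For example (device names and UUID are placeholders), referencing filesystems by UUID in /etc/fstab sidesteps the rename entirely:

    sudo blkid /dev/sdb1          # prints e.g. UUID="0a1b2c3d-..."
    # /etc/fstab entry, before and after:
    #   /dev/xvdf1           /data  ext4  defaults  0 2
    #   UUID=0a1b2c3d-...    /data  ext4  defaults  0 2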
The easiest one-step solution would be using a third-party tool to do it for you. There are many cloud migration vendors that make this process nearly zero-effort. I did it with CloudEndure and it went OK, but it obviously involves costs, so make sure to check them out.
I found an end-to-end video which gives an idea of how to do the migration from EC2 to Google Cloud.
link: https://www.youtube.com/watch?v=UT1gPToi7Sg

Backup server for a NAS with web interface

I'm evaluating the features of a full-fledged backup server for my NAS (Synology). I need:
FTP access (backup remote sites)
SSH/SCP access (backup remote server)
web interface (in order to monitor each backup job)
automatic mail alerting if jobs fail
lightweight software (no MySQL; SQLite is OK)
optional: S3/Glacier support (as target)
optional: automatic long-term storage after a given time (e.g. local disk for 3 months, Glacier after that)
It seems like the biggest players are Amanda, Bacula, and duplicity (and the like).
Any suggestions?
thanks a lot
Before jumping into full server backups, please clarify these questions:
Backup software comes in agent-based and agentless varieties; which one do you want to use?
Are you interested in open-source or proprietary software?
Determine whether your source and destination are on the same LAN or across the Internet, and try to get a picture of the bandwidth between them and the volume of data being backed up.
Also consider your GUI requirements and which other OS platforms the backup software needs to support.
Importantly, find out how mail notification is configured.
Presently I am setting one up for my project; so far I have installed Bacula v7.0.5 with Webmin as the GUI. I am trying the same config in the Amazon cloud, using S3 as storage by mounting it with s3fs inside the EC2 instance.
My Bacula software is the free community version. I haven't explored mail notification yet.
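For reference, such an s3fs mount looks roughly like this (bucket name and mount point are placeholders; it requires s3fs-fuse and credentials in ~/.passwd-s3fs):

    sudo mkdir -p /mnt/bacula-s3
    s3fs my-bacula-bucket /mnt/bacula-s3 -o passwd_file=~/.passwd-s3fs -o allow_other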

Are Amazon Machine Images (AMIs) static, or can their code be modified and rebuilt?

I have a customer who wants me to do some customisations of the ERP system opentaps, which they use via the opentaps Amazon Elastic Compute Cloud (EC2) images. I've only worked with it on a normal server and don't know anything about images in the cloud. When I ssh in with the details the client gave me, there is no sign of the ERP installation directory I'd expect to see. I originally expected that the image wouldn't be accessible, but the client assured me it was. I suppose they could be confused.
Would one have to create a new image and swap it out, or is there a way to alter the source and rebuild, like on a normal server?
Something is not quite clear to me here. First of all, EC2 images running in the cloud are just like normal virtual servers, so if you have access to the running instance, there is no difference between an instance in the cloud and an instance on another PC in your home, for example.
You have to find out how opentaps is installed on the provided AMIs, then make your modifications, create an image from the modified instance, and save it to S3 for backup if necessary.
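With the modern AWS CLI, creating that image from the modified instance is a single call (instance ID and names are placeholders):

    # snapshots the instance's volumes and registers a new AMI
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "opentaps-customised" --description "opentaps with client modifications"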
If you want to start with a fresh instance, you can spin up any Linux/Windows distro on EC2, install opentaps yourself, your way, and you are done.
