Migrate instance from EC2 to Google Cloud - Linux

I have a running Linux instance in Amazon EC2. I'd like to migrate this instance to a Google Cloud VM instance, with as little work as possible on my side, a kind of copy-and-paste solution. How can I do this?

You can import an Amazon Machine Image (AMI) into Google Compute Engine, but it is not a single operation. The Compute Engine documentation has a section on importing virtual disks that walks you through the steps: roughly, export the AMI to a disk file, upload it to a Cloud Storage bucket, and import it as a Compute Engine image you can boot from.
I hope it helps.

With GCP you can use the import feature, which forwards you to the CloudEndure site, where you can migrate your existing server (virtual, in the cloud or not, or even a physical machine) to GCP.
You can also import Amazon Linux AMI EC2 instances from AWS.
CloudEndure also provides live migration: it keeps replicating continuously as long as you don't power on the migrated VM on GCP.
It can also be used for a one-time migration.
Amazon Linux AMIs can be updated on GCP as well, so no problems there.
Migration takes a few hours depending on the size of the source machine. You might need to change the disk device paths in /etc/fstab to reflect their names on GCP (/dev/xvdf --> /dev/sdb, for example).

The easiest one-step solution would be to use a third-party tool to do it for you. There are many cloud migration vendors that make this process nearly zero effort. I did it with CloudEndure and it went OK, but it obviously involves costs, so make sure to check them out.

I found an end-to-end video that gives an idea of how to do the migration from EC2 to Google Cloud.
link: https://www.youtube.com/watch?v=UT1gPToi7Sg

Related

What is the best service for a GCP FTP Node App?

Ok, so a bit of background on what we are doing.
We have various weather stations and soil monitoring stations across the country that gather data and then upload it via FTP to a server for processing.
Note: this server is not located in GCP, but we are migrating all our services over at the moment.
Annoyingly, FTP is the only protocol these particular stations support. Newer stations thankfully use REST APIs instead, which makes things much simpler.
I have written a small nodejs app that works with ftp-srv. This acts as the FTP server.
I have also written a new FileSystem class that will hook directly into Google Cloud Storage. So instead of getting a local directory, it reads the GCS directory.
This allows for weather stations to upload their dump files direct to GCP for processing.
My question is, what is the best service to use?
First I thought of using App Engine, since it's just a small nodejs app and I don't really want to have to create a VM just to run it.
However, I have been unable to open port 21 or the other ports used for passive FTP.
I then thought of using Kubernetes Engine. To be honest, I don't know anything about it yet, but it seems like overkill just to run this small app.
My last thought would be Compute Engine. I have a working copy with PROFTPD installed, so I know I can get the ports open and data flowing, but it feels like overkill to run a full VM just for something that acts as an intermediary between the weather stations and GCS.
Any recommendations would be very appreciated.
Thanks!
Kubernetes just for FTP would be using a crane to lift your fork.
Google Compute Engine and PROFTPD will fit in a micro instance at a whopping cost of about $6.00 per month.
The other Google compute services only accept HTTP(S) traffic, so they cannot host an FTP server. This includes:
App Engine Standard
App Engine Flexible
Cloud Run
Cloud Functions
This leaves you with either Kubernetes or Compute Engine.
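For reference, wiring ftp-srv to a custom file system on a Compute Engine VM looks roughly like the sketch below. This is a minimal, hypothetical example rather than the asker's actual code: GcsFileSystem stands in for the GCS-backed FileSystem class described in the question, the IP, port range and credential check are placeholders, and option names such as pasv_min/pasv_max may differ between ftp-srv versions.

    const FtpSrv = require('ftp-srv');
    // Hypothetical module: the custom FileSystem subclass backed by Cloud Storage.
    const GcsFileSystem = require('./gcs-file-system');

    const ftpServer = new FtpSrv({
      url: 'ftp://0.0.0.0:21',
      pasv_url: '203.0.113.10',  // the VM's external IP, returned to clients for passive mode
      pasv_min: 50000,           // open this range in the VPC firewall as well as port 21
      pasv_max: 50100
    });

    ftpServer.on('login', ({ username, password }, resolve, reject) => {
      if (username === 'station' && password === process.env.FTP_PASSWORD) {
        // Hand the session a GCS-backed file system instead of the local disk.
        resolve({ fs: new GcsFileSystem('my-weather-uploads') });
      } else {
        reject(new Error('Invalid credentials'));
      }
    });

    ftpServer.listen().then(() => console.log('FTP server listening on port 21'));

Remember to also create the matching firewall rules for port 21 and the passive range on the VM's network, otherwise stations will connect but fail on data transfers.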

Storage and cost for a website on EC2 AWS using Node.js + express

I'm trying to understand how to use the AWS EC2 services, so I've developed a dynamic website using Node.js and Express.
I'm reading the documentation, but people's advice is always useful when learning new stuff.
On this website users can upload photos, so I need storage space (SSD would be better).
I have three questions:
1) Is storage provided in the EC2 instance, or do I have to use another AWS service such as an S3 bucket? What's the best/fastest and least expensive solution for storing and accessing images?
2) I'm using a t2.nano, which costs $0.0063 per hour. So if I run the instance for 10 days, my cost is 24 hours * 10 days * $0.0063?
3) I'm using MongoDB; is it a good idea to run it on my EC2 instance, or should I use RDS provided by AWS?
So:
1) Personally I'd use an S3 bucket to store the images (a rough sketch follows after this answer). Note that if a multipart upload to the bucket fails, the partial upload won't show in the object listing but will still use space; there is a lifecycle option to remove incomplete multipart uploads after a certain period.
When you add an object to S3, store its key in your database; then you can simply retrieve the object as required.
2) Note that the AWS free tier covers a t2.micro (not the nano) for 750 hours a month during your first year, so you can effectively run a small instance for nothing to start with. At the on-demand rate your calculation is right: 24 hours * 10 days * $0.0063 comes to about $1.51.
3) Personally I'd set Mongo up on an appropriate EC2 instance. Note: you must define the security group properly; you only want your own AWS applications and services to access that instance. You'll need SSH access to configure it, but afterwards I'd remove that rule from the security group.
Once your Mongo instance is set up, take an AMI so that, should anything go wrong, you can redeploy it already configured (note this won't restore the data).
For costs, use the AWS pricing calculator; for EC2 the easy way is to price it at 100% usage. The other services can get a bit complicated, but the wizard lets you estimate your monthly running costs.
Edit: check out a comparison of the different storage options (S3 vs. X) for storing those images, although your "bible" should be that pricing calculator. I'd highly recommend learning how to use it: for your own business it will be invaluable, and if you're working for someone else it will help you make business cases.
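To illustrate point 1, here is a minimal sketch of accepting an upload in the Express app, pushing it to S3, and keeping only the object key for the database. The bucket name, key scheme and the saveImageKey helper are made up for the example; swap in your own storage layer and auth.

    const express = require('express');
    const multer = require('multer');      // parses multipart/form-data uploads
    const AWS = require('aws-sdk');

    const app = express();
    const upload = multer({ storage: multer.memoryStorage() });
    const s3 = new AWS.S3({ region: 'eu-west-1' });

    app.post('/photos', upload.single('photo'), async (req, res) => {
      const key = `photos/${Date.now()}-${req.file.originalname}`;   // hypothetical key scheme
      await s3.upload({
        Bucket: 'my-photo-bucket',          // hypothetical bucket name
        Key: key,
        Body: req.file.buffer,
        ContentType: req.file.mimetype
      }).promise();

      await saveImageKey(key);              // hypothetical helper: store the key in MongoDB
      res.status(201).json({ key });
    });

    app.listen(3000);

When serving pages, look the key up in the database and either proxy the object or hand the browser a pre-signed URL (s3.getSignedUrl('getObject', ...)) so the image never has to live on the instance's disk.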

Deploying an application to a Linux server on Google compute engine

My developer has written a web scraping app on Linux on his private machine and asked me to provide him with a Linux server. I set up an account on Google Compute Engine and created a Linux instance with enough resources and a sufficiently large SSD drive. Three weeks later he is claiming that working on Google is too complex. Quote: "google is complex because their deployment process is separate for all modules. especially i will have to learn about how to set a scheduler and call remote scripts (it looks they handle these their own way)."
He suggests I create an account on Hostgator.com.
I appreciate that I am non-technical, but it cannot be that difficult to use Linux on Google, can it? Am I missing something? Is there any advice you could give me?
Regarding the suggestion to create an account on Hostgator to use what I presume would be a VPS in lieu of a virtual machine on GCE, I would suggest seeking a more concrete example from the developer.
For instance, take the comment about the "scheduler"; let's refer to it as some process that needs to execute on a regular basis:
How is this 'process' currently accomplished on the private machine?
How would it be done on the VPS?
What is preventing this 'process' from being done on the GCE VM? A GCE instance is a plain Linux server, so standard tools such as cron work exactly as they do anywhere else.

Azure to AWS migration

What strategy would you recommend for moving a Linux VM currently deployed in Azure to AWS?
Assume that I will fit all the data in the OS disk, so only one disk has to be moved.
The VM is running Linux Ubuntu if that matters.
Naturally I would like to do this with as little network traffic as possible, since it is chargeable.
I read comments about the image-making procedure described here saying that it is not safe and that some VMs are lost in the process ... Not sure if that is still the case, but I would hate very much to lose my VM. :)
You can export the VM's disk (VHD) from Azure storage using CloudXplorer and then import it into EC2 using VM Import/Export.
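If you take that route, the import half can be scripted with the AWS SDK once the exported VHD has been uploaded to an S3 bucket. A rough sketch only: the bucket and key are made up, and it assumes the vmimport service role required by VM Import/Export is already in place.

    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'eu-west-1' });

    // Registers the uploaded Azure OS disk as an AMI via EC2 VM Import.
    ec2.importImage({
      Description: 'Ubuntu VM exported from Azure',
      DiskContainers: [{
        Description: 'OS disk',
        Format: 'VHD',
        UserBucket: { S3Bucket: 'my-import-bucket', S3Key: 'azure-export/osdisk.vhd' }  // hypothetical location
      }]
    }).promise()
      .then(task => console.log('Import task started:', task.ImportTaskId))
      .catch(console.error);

You can poll the task with describeImportImageTasks until the resulting AMI is available, then launch your instance from it.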

Is an Amazon Machine Image (AMI) static, or can it be modified and rebuilt?

I have a customer who wants me to do some customisations of the ERP system opentaps, which they use via the opentaps Amazon Elastic Compute Cloud (EC2) images. I've only worked with it on a normal server and don't know anything about images in the cloud. When I ssh in with the details the client gave me, there is no sign of the ERP installation directory I'd expect to see. I originally expected that the image wouldn't be accessible, but the client assured me it was. I suppose they could be confused.
Would one have to create a new image and swap it out, or is there a way to alter the source and rebuild, like on a normal server?
Something is not quite clear to me here. First of all, EC2 images running in the cloud are just normal virtual servers, so if you have access to the running instance there is no difference between an instance in the cloud and an instance on another PC in your home, for example.
You have to find out how opentaps is installed on the provided AMIs, then make your modifications, create an image from the modified instance (see the sketch below), and save it to S3 for backup if necessary.
If you want to start with a fresh instance, you can start up any Linux/Windows distro on EC2, install opentaps yourself your own way, and you are done.
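Creating that new image from the modified instance is a single API call (or a couple of clicks in the console). A minimal sketch with the AWS SDK; the instance ID and image name below are placeholders:

    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' });

    // Snapshots the customised opentaps instance into a new, reusable AMI.
    ec2.createImage({
      InstanceId: 'i-0123456789abcdef0',    // the instance you modified over SSH
      Name: 'opentaps-customised-v1',       // hypothetical image name
      NoReboot: false                       // allow a reboot so the filesystem is consistent
    }).promise()
      .then(res => console.log('New AMI:', res.ImageId))
      .catch(console.error);

Once the AMI is available you can launch replacement instances from it, which is the "swap it out" workflow from the question.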
