How to change the platform of an Amazon EC2 instance - Linux

I have an on-demand instance running in Amazon EC2. I want to create a reserved instance.
My instance configuration is:
Instance type: m1.small
Availability zone: us-west-2c
Platform: Red Hat Enterprise Linux Server release 6.4
I have set up this instance with all the software required to run my web service (a 30 GB EBS volume is attached to this instance).
I learned how to create a reserved instance from here. When I was about to purchase one, I noticed that Linux/UNIX is more cost effective than Red Hat Enterprise Linux. Is there any way I can change the platform of my running instance, or should I redo the whole setup on a new instance?

AFAIK, there is no way to replace the instance's platform (and pricing) while maintaining its current deployment. I've found this article about a workaround for 'sharing EBS', but it doesn't guarantee that an EBS volume created for RHEL can be attached to a Linux instance (I'd bet against it).
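If you do try that EBS workaround, the mechanics are just detach and attach, either in the console or via the CLI. A minimal sketch, assuming hypothetical instance and volume IDs (and again, no guarantee the RHEL volume behaves well on a plain Linux instance):
# Stop the RHEL instance so the 30 GB data volume can be detached safely
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# Attach the volume to a new instance launched from a Linux/UNIX AMI
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0fedcba9876543210 --device /dev/sdf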
If you have already paid for reserved instances, I suggest you contact the AWS support team to see if there are any alternatives, or at least get a refund.
(BTW, please update us if there is a solution.)


How to create AWS EC2 instance shared folders

A doubt: how do I create sharable folders between two EC2 Windows systems?
And how do I create a shared folder between two Linux machines?
Is the only option EFS for Linux and FSx for Windows?
I couldn't find a way; can anybody help?
I tried enabling the proper ports in the security groups and turned network discovery on in the Windows network settings, but got no results.
What is the purpose of the shared folders? Based on that, you can choose a service.
There is also a feature called EBS Multi-Attach, where you can attach the same io1/io2 EBS volume to different EC2 instances in the same AZ.
EFS for Linux is pay per GB used; it costs roughly less than $2 for 10 GB/month. FSx is a bit more costly.
Check which one suits your needs and decide.
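If EFS ends up being the right fit for the Linux side, mounting it on an instance is roughly the following; the file system ID, region, and mount point are hypothetical placeholders, and the full recommended mount options are in the EFS docs:
# Install an NFS client (Amazon Linux/RHEL shown; use nfs-common on Debian/Ubuntu)
sudo yum install -y nfs-utils
sudo mkdir -p /mnt/efs
# Mount the EFS file system over NFSv4.1 using its DNS name
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs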

Linux Server Patching - GCP

I want to patch my Linux instances hosted on Google Cloud Platform.
Is there any native tool available on Google Cloud Platform, like Azure Update Manager, or do we have to use a 3rd party tool?
At this moment GCP doesn't have a product that fulfills patch management in the way Azure Update Management does. However, there are some workarounds for managing patch updates across a large number of VMs.
a). Set up a startup script to execute certain maintenance routines; restarting the VM is necessary, however. Startup scripts can perform many actions, such as installing software, performing updates, turning on services, and any other tasks defined in the script [1] (a minimal sketch follows after this list).
b). If you want to patch a large number of instances, a Managed Instance Group [2] could also be an alternative: the managed instance group's automatic updater safely deploys new versions of software to instances in the MIG and supports a flexible range of rollout scenarios. You can also control the speed and scope of the deployment as well as the level of disruption to the service [3].
c). You could use OS Inventory Management [4] to collect and view operating system details for VM instances. These details include operating system information such as hostname, operating system, and kernel version, as well as installed packages and available package updates for the operating system. The process is described here [5].
d). Finally, there is also the possibility of setting up automatic security updates directly in CentOS or Red Hat 7.
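For option a), attaching such a startup script to an existing VM could look like the sketch below. The instance name and zone are hypothetical and the commands assume a Debian-based image (use yum on RHEL/CentOS); the script runs on every boot, which is why the reboot mentioned above is needed:
# Attach a startup script that applies package updates on each boot (hypothetical instance name)
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
  --metadata startup-script='#! /bin/bash
apt-get update
apt-get -y upgrade'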
I hope the above information is useful.
RESOURCES:
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups
[3] https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
[4] https://cloud.google.com/compute/docs/instances/os-inventory-management
[5] https://cloud.google.com/compute/docs/instances/view-os-details#query-inventory-data
Thank you all who shared your knowledge!
GCP does not have any such patch management service currently. If you would like to patch your servers, you would have to set up a cron job (either with crontab or another cron service such as a GKE CronJob) to run the appropriate update command.
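A minimal sketch of such a cron job on a Debian/Ubuntu VM; the schedule and file name are arbitrary examples:
# Patch every Sunday at 03:00; /etc/cron.d entries need an explicit user field (root here)
echo '0 3 * * 0 root apt-get update -q && apt-get -yq upgrade' | sudo tee /etc/cron.d/weekly-patching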
I think it was released after this question was asked (around April 2020), but GCP now offers a native VM patch service called OS Patch Management for its VMs. You can learn more about it here: https://cloud.google.com/compute/docs/os-patch-management
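For reference, kicking off an ad-hoc patch job against all VMs in a project looks roughly like the sketch below; the flag names are from memory and may differ between gcloud versions, so check the reference linked above:
# Run a one-off patch job on every VM in the current project (display name is arbitrary)
gcloud compute os-config patch-jobs execute \
  --instance-filter-all \
  --display-name="adhoc-security-patch"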

How are OS patches with critical security updates handled on GCE, GKE, and AI Platform notebooks?

Is there complete documentation that explains if and how critical security updates are applied to the OS image on the following IaaS/PaaS offerings?
GCE VM
GKE (the VMs in a cluster)
The VM on which an AI Platform notebook is running
In which cases does the GCP team take care of these updates, and in which cases should we take care of them ourselves?
For example, in the case of a GCE VM (Debian OS) the documentation seems to indicate that no patches are applied at all and no reboots are done.
What are people doing to keep GCE or other VMs up to date with critical security updates if this is not managed by GCP? Will just restarting the VM do the trick? Is there some special parameter to set in the YAML template of the VM? I guess for GKE or AI notebook instances this is managed by GCP since this is PaaS, right? Are there third-party tools to do that?
As John mentioned, for GCE VM instances you are responsible for all package updates, and they are handled as in any other system:
Linux: sudo apt/yum update/upgrade
Windows: Windows Update
There are some internal tools in each GCE image that can help you update your system automatically (a minimal sketch for the Linux tools follows after this list):
Windows: automatic updates are enabled by default
RedHat/CentOS systems: you can use the yum-cron tool to enable automatic updates
Debian: use the unattended-upgrades tool
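A minimal sketch of enabling those two Linux tools; package and service names are the documented ones for RHEL/CentOS 7 and Debian/Ubuntu:
# RHEL/CentOS 7: install and start yum-cron for automatic updates
sudo yum install -y yum-cron
sudo systemctl enable --now yum-cron
# Debian/Ubuntu: install unattended-upgrades and enable the periodic upgrade config
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades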
As for GKE, this is handled when you upgrade your cluster version: the master is upgraded automatically (since it is Google managed), but the nodes should be upgraded by you. Node upgrades can be automated; please see the second link below for more information (a sketch follows after the links).
Please check the following links for more details on how the Upgrade process works in GKE:
Upgrading your cluster
GKE Versioning and upgrades
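For the node side, the commands are roughly as follows; the cluster, node pool, and zone names are hypothetical, so check the links above for the exact flags and behaviour:
# One-off: upgrade the nodes of a node pool to the cluster master's version
gcloud container clusters upgrade my-cluster --node-pool=default-pool --zone=us-central1-a
# Ongoing: let GKE auto-upgrade the node pool
gcloud container node-pools update default-pool --cluster=my-cluster \
  --zone=us-central1-a --enable-autoupgrade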
As per "VM on which is running AI Platform notebook", I don't understand what do you mean by this. Could you provide more details

What is the difference between AWS Marketplace and sudo apt-get install?

I'm seeking objective answers, so that this is not closed as subjective. The question may be moved to a different site if required.
What is the difference between AWS Marketplace and sudo apt-get install, and how do I decide to choose one over the other?
I noted that Amazon AWS has a marketplace with ready-to-deploy offerings like LAMP Stack from Bitnami. But tutorials often instruct you to create a blank EC2 instance, SSH into it, and manually install software using commands like sudo apt-get install lamp-server^.
Are they the same? What are the advantages and disadvantages of each method?
If I get an offering from the AWS Marketplace, can I install other software on the same EC2 instance using either method? And if there's a paid offering on the AWS Marketplace that I instead installed with sudo apt-get, will Amazon charge me? (They should, right? Otherwise that would be a big loophole many would exploit.)
AWS Marketplace allows you, as a developer or company, to create a reusable AMI pre-packaged with an installation of software. This installation can then be used by end users, either paid or free.
As a user, it allows you to easily provision servers with software pre-installed. A very common use case is licensing software hourly rather than upfront or monthly (hence fitting the elasticity of AWS). For instance, if I have Software X for which I need a baseline of 10 servers, I may pay the developer for a perpetual licence for those 10; at peaks, however, I'll use AWS Marketplace and license by the hour as necessary.
Are they the same? What are the advantages and disadvantages of each method?
Often, software pre-installed on an AMI comes pre-configured; for instance, the Bitnami AMIs allow you to deploy a fully configured WordPress easily.
This does, however, mean that the initial configuration choices made by a third party can impact (positively or negatively) your application. You may subsequently choose to install and configure your own applications from scratch instead, possibly even creating an AMI yourself that you can reuse for further deployments of that application.
If there's a paid offering from AWS Marketplace that I used sudo apt-get to install, will Amazon charge me? (They should, right? Or that will be a big loophole many will exploit.)
Amazon will not charge you, no. If, for instance, there were a paid WordPress AMI and you instead created an EC2 instance and installed Apache, MySQL, PHP, and WordPress yourself, Amazon would not charge you anything additional.
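As a side note, you can see what Marketplace images look like from the CLI with something along these lines; the name filter is only a guessed example:
# List Marketplace-owned AMIs whose name mentions LAMP (filter value is just an example)
aws ec2 describe-images --owners aws-marketplace \
  --filters "Name=name,Values=*LAMP*" \
  --query 'Images[].{Id:ImageId,Name:Name}' --output table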
To make it simple: with an AMI you get a pre-configured virtual computer, and with sudo apt-get install (on an empty EC2 instance) you get a blank machine that you configure yourself.
So:
AMI
You have to pay the EC2 fee.
The AMI creator could charge you rent for their creation (but it can be free).
You get a ready-to-go instance, but it may include more than you need.
Blank instance
You have to pay the EC2 fee.
No one charges you anything if you use free software.
You install just what you need.
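For completeness, the "blank instance" route above boils down to running the commands from the question yourself on a fresh Ubuntu instance:
# On a blank Ubuntu EC2 instance: install the LAMP stack manually (the caret selects the tasksel task)
sudo apt-get update
sudo apt-get install -y lamp-server^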

Migrate instance from EC2 to Google Cloud

I have a running Linux instance in Amazon EC2. I'd like to migrate this instance to a Google Cloud VM instance with minimal work, ideally a kind of copy-and-paste solution. How can I do this?
You can import an Amazon Machine Image (AMI) into Google Compute Engine, but it's not just a single operation. There is a section in the Google Compute Engine documentation that shows the steps you need to follow to achieve your goal.
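Roughly, that path involves exporting the EC2 disk, uploading it to Cloud Storage, and importing it as a GCE image; a sketch with hypothetical image, bucket, and file names (check the docs for the supported --os values):
# Import an exported EC2 disk (VMDK) from Cloud Storage as a bootable GCE image
gcloud compute images import my-imported-image \
  --source-file=gs://my-bucket/exported-ec2-disk.vmdk \
  --os=rhel-7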
I hope it helps.
With GCP you can use the import feature, which forwards you to the CloudEndure site, where you can migrate your existing server, whether a virtual machine (cloud or not) or even a physical machine, to GCP.
You can also import EC2 instances running the Amazon Linux AMI from AWS.
CloudEndure also provides live migration: it performs continuous replication as long as you don't power on your migrated VM on GCP.
It can also be used for a one-time migration.
The Amazon Linux AMI can be updated on GCP as well, so there are no problems with that.
Migration takes a few hours depending on the size of the source machine. You might need to change the hard drive paths in /etc/fstab to reflect their names on GCP (/dev/xvdf --> /dev/sdb, for example).
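That /etc/fstab change is roughly the following; the device names are only examples, so verify the actual names on the migrated VM (e.g. with lsblk) before editing:
# Example: the data disk was /dev/xvdf on EC2 but shows up as /dev/sdb on GCE
# before:  /dev/xvdf  /data  ext4  defaults,nofail  0 2
# after:   /dev/sdb   /data  ext4  defaults,nofail  0 2
sudo sed -i 's#^/dev/xvdf#/dev/sdb#' /etc/fstab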
The easiest one-step solution would be using a third-party tool to do it for you. There are many cloud migration vendors that make this process nearly zero effort. I did it with CloudEndure and it went fine, but obviously it involves costs, so make sure to check them.
Here is an end-to-end video that gives an idea of how to do the migration from EC2 to Google Cloud:
link: https://www.youtube.com/watch?v=UT1gPToi7Sg
