I have around 300 Red Hat servers on AWS hosting applications. I want to make sure my Linux instances are up to date, from a security standpoint and otherwise. Also, I cannot launch a new Linux instance and delete the old one to get an updated one.
Can anyone please suggest how to patch the AWS Linux instances on all 300 servers from a centralized location?
Thanks
Manu.
This seems like a DevOps-type question. There are many ways to script this using the AWS API and the like, in many languages, from Python (using something like Paramiko) to straight-up bash.
Provided you have the keys to access these Linux instances, the script should be trivial (a minimal sketch follows the list):
Get the list of 300 servers from AWS using boto3 or the AWS CLI
Iterate over the server set to find the IP you can SSH to (private/public)
SSH to the instance and assume the root account
Run the yum update (or yum upgrade) command
Log out and move on to the next
Get a coffee and wait
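Here is a minimal sketch of that loop, assuming key-based SSH as ec2-user with passwordless sudo; the region, key path, and filter are placeholders you would adjust:

```python
import boto3
import paramiko

# Assumptions: region us-east-1, key-based SSH as ec2-user, passwordless sudo.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        # Prefer the private IP; fall back to the public one if patching over the internet.
        ip = instance.get("PrivateIpAddress") or instance.get("PublicIpAddress")
        if not ip:
            continue
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(ip, username="ec2-user", key_filename="/path/to/key.pem")
        # Become root via sudo rather than logging in as root directly.
        _, stdout, _ = ssh.exec_command("sudo yum -y update")
        print(ip, stdout.read().decode())
        ssh.close()
```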
Hope that helps!
Thanks,
//P
Please suggest a solution, if one exists.
There are 20 empty bare-metal servers. I need to go to the IPMI console and manually attach the image file to start the OS installation.
Question: are there any solutions to automate this process?
Since you tagged this question with "OpenStack", you must have heard of Ironic.
If the thought of installing OpenStack just to automatically install servers frightens you, look up Cobbler. It was used by the now-defunct products Helion OpenStack and SUSE OpenStack Cloud to set up clouds.
Ubuntu uses MAAS for this purpose.
This is not a complete list.
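To give a feel for what these tools automate, here is a hedged sketch that drives ipmitool from Python to point each server's BMC at PXE and power-cycle it, so a network installer (such as one served by Cobbler or MAAS) can take over; the BMC addresses and credentials are hypothetical:

```python
import subprocess

# Hypothetical BMC addresses and credentials (replace with your own).
BMC_HOSTS = ["10.0.0.101", "10.0.0.102"]
BMC_USER = "admin"
BMC_PASS = "secret"

def ipmi(host, *args):
    # Standard ipmitool invocation over the lanplus interface.
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", BMC_USER, "-P", BMC_PASS, *args],
        check=True,
    )

for host in BMC_HOSTS:
    ipmi(host, "chassis", "bootdev", "pxe")   # boot from the network on next boot
    ipmi(host, "chassis", "power", "cycle")   # reboot into the PXE installer
```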
I'm trying to understand how Node is supposed to be installed for multiple developers on AWS EC2 as an administrator. (I am also one of the devs.)
I have an EC2 server with nginx running on port 80. Should I now go to the webroot and install nvm/node/npm as ec2-user? Or as my own user, and then for all the other users after that? (No one can actually use the ec2-user account except server admins.)
How about other developers who need to use Node? I was hoping to install nvm/node/npm in advance for everyone who needs it, so that they could use it immediately after getting access to the server, but maybe everyone should install nvm/node/npm themselves?
Or it would be nice if there were a way to install it as ec2-user and then share it with all the users properly/securely. What's the right way to set this up?
(When I ran through this myself as my own user and installed nvm for the first time on the Amazon Linux 2 AMI, I noticed that when I switched to another user or root, the "node -v" command didn't work for those accounts; basically, I'm trying to do an install that covers all the users.)
In fact, in AWS EC2, you need only one user and one Node.js process running. I would suggest the set-up below for development and deployment.
Let all developers have their dev environments set up on their local machines.
Let developers check their code in to GitHub or a similar repository.
Using a CI/CD pipeline, integrate the code, build it, and deploy it to EC2 (see the sketch after this list).
Instead of plain EC2, I would recommend using AWS Elastic Beanstalk.
If this makes sense for you, we can elaborate this into a solution and implement it.
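As a hedged sketch of the deploy step with boto3, where the application, environment, bucket, and version names are all hypothetical, and assuming the build artifact was already uploaded to S3:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register the build artifact (already uploaded to S3) as a new application version.
eb.create_application_version(
    ApplicationName="my-node-app",   # hypothetical application name
    VersionLabel="build-42",         # hypothetical version label
    SourceBundle={"S3Bucket": "my-builds", "S3Key": "my-node-app/build-42.zip"},
)

# Point the running environment at the new version; Beanstalk handles the rollout.
eb.update_environment(EnvironmentName="my-node-app-dev", VersionLabel="build-42")
```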
I have an AWS Windows Server 2016 VM. This VM has a bunch of libraries/software installed (dependencies).
I'd like to, using python3, launch and deploy multiple clones of this instance. I want to do this so that I can use them almost like batch compute nodes in Azure.
I am not very familiar with AWS, but I did find this tutorial.
Unfortunately, it shows how to launch an instance from the store, not an existing configured one.
How would I do what I want to achieve? Should I create an AMI from my configured VM and then just launch that?
Any up-to-date links and/or advice would be appreciated.
Yes, you can create an AMI from the running instance, then launch N instances from that AMI. You can do both using the AWS console or you could call boto3 create_image() and run_instances(). Alternatively, look at Packer for creating AMIs.
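A minimal boto3 sketch of that flow; the instance ID, AMI name, instance type, and count are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an AMI from the configured instance (this reboots it unless NoReboot=True).
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="my-configured-base")
image_id = image["ImageId"]

# Wait until the AMI is ready to launch from.
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# Launch N clones of it.
ec2.run_instances(
    ImageId=image_id,
    InstanceType="t3.large",  # pick whatever suits your workload
    MinCount=5,
    MaxCount=5,
)
```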
You don't strictly need to create an AMI. You could simply bootstrap each instance as it launches via a user data script or some form of configuration management like Ansible.
When launching EC2 instances using Terraform (or CloudFormation), we can configure them by putting some scripts in user_data/remote-exec. Alternatively, we can configure EC2 using Ansible/Chef, etc. What is the difference between configuring EC2 via user_data/remote-exec and doing it with Ansible/Chef? When should I use the former, and when the latter (I know Ansible/Chef is idempotent)?
In my case, the EC2 instances were originally launched manually and then configured manually with a lot of Linux commands, and those commands were not written by me. Now I am the person automating the whole setup using Terraform and configuring the EC2 instances. Using user_data/remote-exec to configure EC2 is straightforward: I just need to put all the existing Linux commands into some scripts with a few changes. And if the configuration produced by my script is not successful, I can at least quickly figure out whether I missed some commands by comparing my script against the original Linux commands. But if I use Ansible/Chef, I have to rewrite all the steps in a different language, and if the configuration is not what I expected, it is hard to figure out which steps are incorrect, because the syntax of Ansible/Chef and that of plain Linux commands are totally different.
My question is: in my case, should I use Ansible/Chef or user_data/remote-exec for configuration?
User data is good for the initial configuration of the system. If you need longer-term maintenance, configuration management software like Ansible/Chef/Salt/Puppet is a great option.
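For example, here is a hedged sketch of passing a user data script at launch with boto3; the AMI ID and the commands are placeholders, and with Terraform you would put the same script in the user_data argument:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One-time bootstrap commands, run by cloud-init on first boot.
user_data = """#!/bin/bash
yum -y update
yum -y install nginx
systemctl enable --now nginx
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for you
)
```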
Packer can be used for immutable infrastructure, i.e. infrastructure that doesn't change after creation. You can run all the scripts and installs while building the image so it is ready as soon as it boots; this is also faster because you don't have to wait for user data to run.
There are a few questions you have to ask as well: how often are you going to patch these servers? Are you going to update the existing ones or replace them with new ones? Ansible is great for configuration since it's just YAML files.
Blue/green deployments generally replace servers with all-new ones and gradually move traffic over to the new servers.
These are some more things to consider with your infrastructure as code.
I have an EC2 instance which runs an app hosted on a private git repo.
I need to be able to launch many of these from my master server. At the moment, I have 5 fixed "worker" instances which I start/stop from the master with no problem. Each worker starts, pulls the repo, and launches the app on startup. This is obviously not a good solution and I want to make it more flexible (launch as many instances as I want, etc). The configuration and packages are final so I feel good about bundling it all into an AMI.
Is there a way for me to bundle my git keys into the AMI, in order to launch many similar instances and have them all pull and launch my app on startup without having to connect to each of them and enter the password? Is there a better way? I've read about cloud-init, user-data, Puppet and many other things, but I'm quite a novice in the matter and couldn't find a proper example using SSH keys.
Instead of bundling the keys into the AMI, I suggest you keep them separate from the AMI because:
If you change your git keys, you don't have to build a new AMI
Unauthorized users who have privileges to launch an instance from your AMI cannot launch your app
I suggest using the user-data feature. You can optionally encrypt your keys and base64-encode them if you want to. When you launch your instance manually or via the CLI/API, you can pass in your keys, which the instance can access once it is launched. There are a variety of ways to access the data (Python and curl, to name a couple). I suggest you use the AWS metadata server, because your instance does not need your AWS credentials to fetch the user data. Once your instance is launched, have your app make the following call, get the keys, and then pull the repo:
curl http://169.254.169.254/latest/user-data
This returns your user data (no credentials needed). You can optionally base64-decode and decrypt your keys and use them to pull the repo. If you do not want the extra security, you can skip the encrypt/base64 part.
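The same fetch in Python, as a hedged sketch; it assumes the key was stored base64-encoded in user data, and includes the IMDSv2 token step since newer instances may require it:

```python
import base64
import urllib.request

# IMDSv2: fetch a short-lived session token first (required where IMDSv2 is enforced).
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Fetch the user data that was passed at launch.
req = urllib.request.Request(
    "http://169.254.169.254/latest/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(req).read()

# Assumes the key was stored base64-encoded; decrypt here too if you encrypted it.
ssh_key = base64.b64decode(user_data).decode()
```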