AWS SSM unable to perform patching on Windows EC2 Instances - aws-ssm

We have close to 30 AWS Windows guest servers that we patch with AWS SSM through a maintenance window. These devices previously used WSUS for updates, but the customer has changed the setup from WSUS to SCCM. Since the update mechanism changed to SCCM, AWS SSM is unable to patch these Windows instances, reporting that no updates were found.
Can someone please help me automate or fix this issue, so that AWS SSM can check the SCCM agent and apply the patches?
Any other suggestions are welcome.
Thanks
Avinash
Tried AWS SSM -> Maintenance Window -> set the target to the Windows EC2 instances.
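One possible workaround (a sketch, not an official AWS/SCCM integration): instead of relying on a Patch Manager baseline, use SSM Run Command to ask the local SCCM client itself to scan for and evaluate updates. The schedule GUIDs below are the commonly documented ConfigMgr client trigger IDs and the tag key/value are assumptions; verify both against your environment.

```python
# Sketch: trigger the SCCM client's update cycles on SSM-managed Windows
# instances. GUIDs and tag names are assumptions to verify in your setup.
SCAN_CYCLE = "{00000000-0000-0000-0000-000000000113}"  # Software Updates Scan Cycle
EVAL_CYCLE = "{00000000-0000-0000-0000-000000000108}"  # Software Updates Deployment Evaluation

def build_sccm_commands():
    """PowerShell lines that ask the local SCCM agent to scan and evaluate updates."""
    return [
        f"Invoke-WmiMethod -Namespace root\\ccm -Class SMS_Client "
        f"-Name TriggerSchedule -ArgumentList '{SCAN_CYCLE}'",
        f"Invoke-WmiMethod -Namespace root\\ccm -Class SMS_Client "
        f"-Name TriggerSchedule -ArgumentList '{EVAL_CYCLE}'",
    ]

def trigger_sccm_patching(tag_key="PatchGroup", tag_value="windows"):
    """Send the commands to all instances carrying the given tag."""
    import boto3  # requires AWS credentials and SSM-managed instances
    ssm = boto3.client("ssm")
    return ssm.send_command(
        Targets=[{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": build_sccm_commands()},
    )
```

This could be registered as a Run Command task in the existing maintenance window, so the schedule stays in SSM while SCCM remains the update source.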

Related

How to automatically upload ASP.NET Core crash dumps to an Amazon S3 bucket?

We have an ASP.NET Core 3.1 application running in Amazon EC2 instances with Amazon Linux 2 (RHEL based).
Periodically our application crashes with an 11/SEGV status (segmentation fault), so we enabled minidump generation with an environment variable (COMPlus_DbgEnableMiniDump), as documented here.
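For reference, the dump-related environment variables look roughly like this (values here are illustrative examples based on the .NET diagnostics documentation, not necessarily our exact settings):

```
# .NET Core crash-dump settings (example values)
COMPlus_DbgEnableMiniDump=1
COMPlus_DbgMiniDumpType=4                          # 4 = full dump
COMPlus_DbgMiniDumpName=/var/dumps/coredump.%p.dmp # %p = process ID
```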
As multiple instances of the application run simultaneously within an auto scaling group, it's hard to keep track of the crashes, so I need to know if there is any tool or recommended way of logging each of these crashes and uploading the generated minidump file into an S3 bucket, so we can easily retrieve them and analyze them in our development environment.
Any recommendations?
Thank you!
Sorry that I am late to this conversation. Hopefully, you have found a solution to this by now.
Adding my thoughts here to help anyone else with a similar challenge.
I can think of a couple of solutions:
Since the application is running on a Linux instance, you could consider saving the crash dumps to an EFS file system. Register a lifecycle hook handler on the ASG and raise an SNS notification capturing the necessary details of the crash dump file.
Option 1: Deploy a process as a side-car that responds to the notification and moves the dump file to the S3 bucket. Note that the dump file will be moved by a process running on the new instance (or another instance) spun up by the ASG.
Option 2: Deploy the process responsible for moving the dump files to S3 on a dedicated EC2 instance, and attach the same EFS file system used by the actual service.
Option 3: Create a Lambda function with the required permissions to access the EFS access points.
Refer to AWS EC2 Lifecycle Hooks
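Option 3 could look roughly like this: a Lambda function with the EFS access point mounted, scanning for dump files and copying them to S3. The mount path, bucket variable, and filename conventions below are assumptions.

```python
import os

def find_minidumps(filenames, prefix="coredump"):
    """Filter a directory listing down to likely minidump files."""
    return [f for f in filenames if f.startswith(prefix) or f.endswith(".dmp")]

def handler(event, context):
    # Lambda entry point; assumes the EFS access point is mounted at
    # /mnt/dumps and DUMP_BUCKET names the target S3 bucket (both assumptions).
    import boto3  # available in the Lambda runtime
    s3 = boto3.client("s3")
    mount = "/mnt/dumps"
    bucket = os.environ["DUMP_BUCKET"]
    for name in find_minidumps(os.listdir(mount)):
        s3.upload_file(os.path.join(mount, name), bucket, name)
        os.remove(os.path.join(mount, name))  # keep EFS from filling up
```

The Lambda could be subscribed to the SNS topic raised by the lifecycle hook, so uploads happen as instances terminate.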

How are OS patches with critical security updates applied on GCE, GKE and AI Platform notebooks?

Is there complete documentation that explains if and how critical security updates are applied to an OS image on the following IaaS/PaaS?
GCE VM
GKE (VMs in a cluster)
VM on which the AI Platform notebook is running
In which cases is the GCP team taking care of these updates and in which cases should we take care of it?
For example, in the case of a GCE VM (Debian OS) the documentation seems to indicate that no patches are applied at all and no reboots are done.
What are people doing to keep GCE or other VMs up to date with critical security updates, if this is not managed by GCP? Will just restarting the VM do the trick? Is there some special parameter to set in the YAML template of the VM? I guess for GKE or AI notebook instances, this is managed by GCP since this is PaaS, right? Are there some third party tools to do that?
As John mentioned, for GCE VM instances you are responsible for all package updates, and it is handled as in any other system:
Linux: sudo apt/yum update/upgrade
Windows: Windows update
There are some built-in tools in each GCE image that can help you automatically update your system:
Windows: automatic updates are enabled by default
RedHat/CentOS systems: you can use the yum-cron tool to enable automatic updates
Debian: use the unattended-upgrades tool
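For Debian, enabling unattended-upgrades is a one-file change; the path and values below are the common defaults, so verify them on your image:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```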
As per GKE, I think this is done when you upgrade your cluster version: the master version is upgraded automatically (since it is Google-managed), but the nodes are your responsibility. Node updates can be automated; please see the second link below for more information.
Please check the following links for more details on how the Upgrade process works in GKE:
Upgrading your cluster
GKE Versioning and upgrades
As per "VM on which is running AI Platform notebook", I don't understand what you mean by this. Could you provide more details?

AWS: How to launch multiple of the same instance from python?

I have an AWS Windows Server 2016 VM. This VM has a bunch of libraries/software installed (dependencies).
I'd like to, using python3, launch and deploy multiple clones of this instance. I want to do this so that I can use them almost like batch compute nodes in Azure.
I am not very familiar with AWS, but I did find this tutorial.
Unfortunately, it shows how to launch an instance from the store, not an existing configured one.
How would I do what I want to achieve? Should I create an AMI from my configured VM and then just launch that?
Any up-to-date links and/or advice would be appreciated.
Yes, you can create an AMI from the running instance, then launch N instances from that AMI. You can do both using the AWS console or you could call boto3 create_image() and run_instances(). Alternatively, look at Packer for creating AMIs.
You don't strictly need to create an AMI. You could simply bootstrap each instance as it launches via a user data script or some form of configuration management like Ansible.
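A minimal boto3 sketch of the AMI route described above (the instance ID, AMI name and instance type are placeholders; create_image is asynchronous, hence the waiter):

```python
def launch_params(image_id, count, instance_type="t3.large"):
    """Build the run_instances arguments (pure, testable helper)."""
    return {
        "ImageId": image_id,
        "MinCount": count,          # launch exactly `count` copies
        "MaxCount": count,
        "InstanceType": instance_type,  # placeholder, pick your own
    }

def clone_instance(source_instance_id, count, name="worker-ami"):
    """Create an AMI from a configured instance, then launch `count` copies."""
    import boto3  # requires AWS credentials
    ec2 = boto3.client("ec2")
    image = ec2.create_image(InstanceId=source_instance_id, Name=name)
    # create_image returns before the AMI is usable, so wait for it
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
    return ec2.run_instances(**launch_params(image["ImageId"], count))
```

Something like `clone_instance("i-0abc...", 10)` would then give you ten identical workers from the configured VM.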

AWS EC2 AMI launch with user-data

I am trying to launch my own AMI using user-data so that it can run a script and then terminate.
So I launched an EC2 Windows base instance, configured it to have all the tools I need (NodeJS etc.) and saved my script to C:\Projects\index.js.
I then saved it as an Image.
So I then used the console to launch an EC2 instance from my new AMI with the user data of
<powershell>
node C:\Projects\index.js --uuid=1
</powershell>
If I run that command after RDPing into the EC2 instance it works, so it seems that the user data did not run when the image was started.
Having read some of the other questions and answers, it could be because the AMI was created from an instance that had already started, so the user data did not persist.
Can anyone advise me on how I can launch my AMI with custom user data each time? (The UUID will change.)
Thanks
Another solution that worked for me is to run Sysprep with EC2Launch.
The issue is that AWS doesn't re-establish the route to the instance metadata service (169.254.169.254) in your custom AMI. See the response by SanjitPatel in this post. So when I tried to use my custom AMI to create spot requests, my new instances were failing to find the user data.
Shutting down with Sysprep essentially forces AWS to re-do all the setup work on the instance, as if it were running for the first time. So when you create your instance, shut it down with Sysprep and then create your custom AMI; AWS will set up the metadata service route correctly for the new instances and execute your user data. This also avoids manually changing Windows Tasks and executing user data on subsequent boots, as the persist tag does.
Here is a quick step-by-step:
1. Create an instance using one of the AWS Windows AMIs (Windows Server 2016 Nano Server doesn't support Sysprep), passing your desired user data (this may be optional, but it's good to make sure AWS wires the setup scripts correctly to handle user data).
2. Customize your instance as needed.
3. Shut down your instance with Sysprep: just open the EC2LaunchSettings application and click "Shutdown with Sysprep".
4. Create your custom AMI from the instance you just shut down.
5. Use your custom AMI to create other instances, passing user data on instance creation. User data will be executed on instance launch. In my case, I used the Spot Request screen, which has a User Data text box.
Hope this helps!
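Once the AMI is Sysprepped, passing a different UUID per launch just means building the user data string each time. A boto3 sketch (the AMI ID and script path are placeholders matching the question):

```python
def build_user_data(uuid):
    """Wrap the node command in the <powershell> tags EC2Launch expects."""
    return ("<powershell>\n"
            f"node C:\\Projects\\index.js --uuid={uuid}\n"
            "</powershell>")

def launch_with_uuid(ami_id, uuid):
    """Launch one instance from the custom AMI with a per-launch UUID."""
    import boto3  # requires AWS credentials
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami_id,
        MinCount=1,
        MaxCount=1,
        UserData=build_user_data(uuid),  # boto3 base64-encodes this for you
        # so the instance can self-terminate when the script shuts it down
        InstanceInitiatedShutdownBehavior="terminate",
    )
```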

AWS Linux API Patching

I have around 300 Red Hat servers on AWS which are hosting applications. I want to make sure that my Linux instances are up to date from a security and general point of view. Also, I cannot launch a new Linux instance and delete the old one to get an updated one.
Can anyone please suggest how to patch the AWS Linux instances from a centralized location across all 300 servers?
Thanks
Manu.
This seems like a DevOps-type question. There are many ways to script this using the AWS API and the like, in many languages, from Python (using something like Paramiko) to straight-up bash.
Provided you have the keys to access these Linux instances, the script should be trivial:
Get the list of 300 servers from AWS using boto3 or the AWS CLI
Iterate over the server set to find the IPs you can SSH to (private/public)
SSH to the instance and assume the root account
Perform the yum update / yum upgrade commands
Log out and move on to the next
Get a coffee and wait
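The steps above can be sketched in Python with boto3 and Paramiko (the key path, username, filters and use of private IPs are assumptions; Paramiko is a third-party dependency):

```python
def instance_ips(reservations):
    """Pull private IPs out of a describe_instances response (pure, testable)."""
    return [inst.get("PrivateIpAddress")
            for res in reservations
            for inst in res["Instances"]
            if inst.get("PrivateIpAddress")]

def patch_all(key_path, user="ec2-user"):
    """SSH to every running instance and run yum update."""
    import boto3
    import paramiko  # third-party SSH library: pip install paramiko
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for ip in instance_ips(reservations):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(ip, username=user, key_filename=key_path)
        # sudo rather than assuming root login is permitted
        _, out, _ = ssh.exec_command("sudo yum -y update")
        out.channel.recv_exit_status()  # block until the update finishes
        ssh.close()
```

For 300 servers it would also be worth looking at AWS Systems Manager Run Command or Patch Manager, which avoid distributing SSH keys entirely.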
Hope that helps!
Thanks,
//P
