Can anyone help me write a bash script that checks whether any RDS instances are running and, if they are, stops them?
And if they are not running, the script should exit with a message:
No instances running.
I have used the AWS CLI to start and stop the instances, but I am not sure how to combine these commands into one single script.
aws rds stop-db-instance --db-instance-identifier mydbinstance
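A minimal sketch of such a script, assuming the AWS CLI is installed and configured, and treating "running" as the `available` status. The logic is wrapped in a function so it is easy to adapt; the status filter and query are the only non-obvious parts:

```shell
#!/usr/bin/env bash
# Sketch: stop every RDS instance currently in the "available" (running) state.
# Assumes the AWS CLI is installed and configured with credentials.
set -uo pipefail

stop_running_rds() {
  # Tab-separated list of identifiers for instances that are running.
  local running
  running=$(aws rds describe-db-instances \
    --query "DBInstances[?DBInstanceStatus=='available'].DBInstanceIdentifier" \
    --output text)

  if [ -z "$running" ] || [ "$running" = "None" ]; then
    echo "No instances running."
    return 0
  fi

  local id
  for id in $running; do
    echo "Stopping $id ..."
    aws rds stop-db-instance --db-instance-identifier "$id" >/dev/null
  done
}

# Only call out to AWS when the CLI is actually present.
if command -v aws >/dev/null 2>&1; then
  stop_running_rds
fi
```

Note that `stop-db-instance` only begins the shutdown; if you need to wait until the instances are fully stopped, you would have to poll `describe-db-instances` afterwards.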
I'm doing R&D on Ansible for practice: I want to launch an EC2 instance using Ansible and, as soon as the EC2 instance is up, I want Ansible to start configuring the environment, e.g. a LAMP server.
What I have done till now:
So far I'm able to launch the EC2 instance, but I don't know what to do next, because I can't automate the configuration when I don't even know what the IP of the launched EC2 instance will be.
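One common pattern for this is to register the launch result, pull the new instance's IP out of it, add it to an in-memory inventory group, and run the configuration play against that group. A sketch (not tested against any real account: the modules are from the `amazon.aws` and `ansible.builtin` collections, and every ID, key name, and package list below is a placeholder):

```yaml
- hosts: localhost
  tasks:
    - name: Launch the EC2 instance
      amazon.aws.ec2_instance:
        name: lamp-server
        image_id: ami-0123456789abcdef0   # placeholder AMI
        instance_type: t3.micro
        key_name: my-key                  # placeholder key pair
        wait: true
      register: ec2

    - name: Add the new instance to an in-memory group
      ansible.builtin.add_host:
        name: "{{ ec2.instances[0].public_ip_address }}"
        groups: launched

    - name: Wait for SSH to come up
      ansible.builtin.wait_for:
        host: "{{ ec2.instances[0].public_ip_address }}"
        port: 22
        delay: 10

- hosts: launched
  become: true
  tasks:
    - name: Install the LAMP packages (Debian/Ubuntu example)
      ansible.builtin.apt:
        name: [apache2, mysql-server, php]
        update_cache: true
```

The key point is that you never need to know the IP in advance: `register` captures it at runtime and `add_host` feeds it to the next play.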
I am provisioning my infrastructure with Terraform, and I am using an xyz.sh bash script that runs my deep-learning model training on a GPU machine.
My question is: how can I get the logs and finishing time of the xyz.sh script without SSHing into the machine? If that is not possible and I do SSH into the machine, how can I check whether the script is still running or has finished?
When you use user_data for an EC2 instance, what happens internally is that Terraform sends that string to the EC2 API and then the EC2 infrastructure makes that string available to the instance via the Instance metadata and user data API.
How (and whether) that string is used by the EC2 instance is then dependent on what software you have installed in the EC2 instance. A typical configuration for common Linux distribution AMIs is to have cloud-init installed and configured to run on first boot. If you are using an AMI with cloud-init then it will be cloud-init that retrieves the user_data string from the EC2 endpoint and executes it as a script (or, other interpretations), and so cloud-init is the system responsible for emitting any logs that result from that process.
You can read more about debugging cloud-init in Testing and debugging cloud-init, which mentions that cloud-init writes logs to /var/log/cloud-init.log by default (some Linux distributions may customize this) and that you can use the cloud-init analyze subcommand to retrieve information from that log file.
Terraform's involvement in this process is only to send the given user_data string to the EC2 API, so Terraform has no visibility into what happens after the instance is created. Unless the script you submit includes a step to report its progress somewhere, there is no built-in way to determine that other than to inspect the cloud-init log file from within the EC2 instance itself.
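If you need the script to report its own progress, one low-tech approach (a sketch: the path names, and the S3 destination in the comment, are hypothetical) is to wrap the training script so its output and start/finish times land in a log file, which you can then ship somewhere visible without SSH, e.g. to S3 or CloudWatch:

```shell
#!/usr/bin/env bash
# Sketch: wrap a long-running script so its output and timing are captured.
# "$1" is the script to run, "$2" is the log file; both are caller-supplied.

run_logged() {
  local script="$1" log="$2"
  {
    echo "started: $(date -u +%FT%TZ)"
    bash "$script"
    echo "finished: $(date -u +%FT%TZ) exit=$?"
  } >>"$log" 2>&1
}

# Example (hypothetical paths): run the training job, then ship the log off
# the machine so it can be read without SSH (requires an instance IAM role):
#   run_logged /opt/xyz.sh /var/log/xyz-training.log
#   aws s3 cp /var/log/xyz-training.log s3://my-bucket/logs/
```

If you do SSH in instead, `pgrep -af xyz.sh` tells you whether the script is still running, and the presence of the `finished:` line in the log tells you it completed.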
You can run this command:
ls -la /var/log/cloud*
It will output the log files related to user data (in my case I use Alibaba Cloud, so the listing shows accordingly).
Then you need to identify which one holds your user data output; in my case, /var/log/cloud-init-output.log is where all my user data output is stored.
Other cloud providers might differ slightly, but the concept should be the same, because most clouds use the same cloud-init library: https://cloud-init.io/
Note: you need to SSH into the server.
How to run nginx and Node.js at server startup?
To start Amazon EC2 instances with AWS Auto Scaling, I currently have to connect to each EC2 instance directly to run nginx and Node.js. Can this be done automatically with Auto Scaling?
The EC2 instances are set up as an Auto Scaling group using an image. I want the Node.js application and nginx to start together with the server startup on instances launched by Auto Scaling.
For nginx, I can run the executable with chkconfig, but the Node.js application runs under pm2, using the script defined in package.json.
How can I run nginx and Node.js with EC2 startup and let the new EC2 -- started with Auto Scaling -- respond properly?
Comment reply:
I don't want to run Node.js using the `node app.js` command.
I want to run Node.js via a package.json script, e.g.:
"start": "NODE_ENV=production PORT=3000 pm2 start server.js -i -1"
How can I do this?
Your suggestions use a Linux server init script file.
But I want to set NODE_ENV and PORT and use the pm2 command.
Solution
I solved the problem.
I initially tried to use a script file to run Node.js automatically when Linux booted: I created the script and made it run after boot, but that did not seem like a good approach.
Instead, pm2 startup and ecosystem.config.js can be used to solve the problem flexibly.
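For reference, that flexible setup can look roughly like this (a sketch: the app name and entry point are hypothetical, and `pm2 startup` prints a distro-specific command that you must then run as root):

```shell
#!/usr/bin/env bash
# Sketch: describe the app in ecosystem.config.js instead of a package.json
# script, then let pm2 resurrect it at boot.
cat > ecosystem.config.js <<'EOF'
module.exports = {
  apps: [{
    name: "server",
    script: "server.js",          // hypothetical entry point
    instances: -1,                // all CPUs minus one, like "pm2 start -i -1"
    env: { NODE_ENV: "production", PORT: 3000 }
  }]
};
EOF

# These steps require pm2 to be installed (npm install -g pm2):
if command -v pm2 >/dev/null 2>&1; then
  pm2 start ecosystem.config.js
  pm2 startup    # prints the boot-time init command; run its output as root
  pm2 save       # persist the current process list for resurrection at boot
fi
```

This keeps NODE_ENV, PORT, and the cluster-instance count in one version-controlled file instead of an init script.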
Thank you for your reply.
This has nothing to do with autoscaling. It most often has to do with the EC2 AMI (Amazon Machine Image) that the autoscaler is launching your EC2 instances with, and possibly also with the user data that you are passing to the instance when it launches. These are the only two things that affect what an EC2 instance does when and after it starts up, up until it starts communicating with the outside world.
So what you need to do is create an AMI that is set up so that the right things launch when an EC2 instance is launched from that AMI. What you'd do is take the AMI you want to use as a starting point, launch that AMI into an instance, make the necessary changes and installations you want, and then save off a new AMI. Then change your autoscaling group to launch new instances with that new AMI.
You could alternatively send a script in your user data that launches things, but this is rarely what you want to do. Most of the time, you want to have your AMI do the right thing.
It's also possible that you are using some sort of post-boot provisioner, like Chef, Ansible or Chef Habitat. If you are, that's where you'd set all of this stuff up. You'd want that system to do the work you're describing. But if you're doing that, what I have said earlier still applies. For this to work, you'd often have also built a custom AMI that has parts of the provisioning system already built into it, so that that system can connect into it and provision it. It's possible for these systems to start from a generic AMI as well. It depends on the system.
I am trying to launch my own AMI using user-data so that it can run a script and then terminate.
So I launched an EC2 Windows base instance, configured it to have all the tools I need (Node.js etc.), and saved my script to C:\Projects\index.js.
I then saved it as an Image.
So I then used the console to launch an EC2 instance from my new AMI with the following user data:
<powershell>
node C:\Projects\index.js --uuid=1
</powershell>
If I RDP into the EC2 instance and run that command manually, it works, so it seems that the user data did not run when the image was started.
Having read some of the other questions and answers, it could be because the AMI was created from an instance that had already been started, so the user data did not persist.
Can anyone advise me on how I can launch my AMI with custom user data each time? (The UUID will change on each launch.)
Thanks
Another solution that worked for me is to run Sysprep with EC2Launch.
The issue is that AWS doesn't reestablish the route to the profile service (169.254.169.254) in your custom AMI. See response by SanjitPatel in this post. So when I tried to use my custom AMI to create spot requests, my new instances were failing to find user data.
Shutting down with Sysprep essentially forces AWS to re-do all the setup work on the instance, as if it were running for the first time. So if you shut your instance down with Sysprep and then create your custom AMI, AWS will set up the profile service route correctly for the new instances and execute your user data. This also avoids manually changing Windows Tasks and executing user data on subsequent boots, as the persist tag does.
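For comparison, the persist-tag alternative mentioned above looks like this in the user data (EC2Launch on Windows; without the persist tag, user data normally runs only on first boot):

```xml
<powershell>
node C:\Projects\index.js --uuid=1
</powershell>
<persist>true</persist>
```

Note that with persist the script re-runs on every boot, whereas the Sysprep approach runs fresh user data once per newly launched instance.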
Here is a quick step-by-step:
1. Create an instance using one of the AWS Windows AMIs (Windows Server 2016 Nano Server doesn't support Sysprep), passing your desired user data (this may be optional, but it is good to make sure AWS wires up the setup scripts correctly to handle user data).
2. Customize your instance as needed.
3. Shut down your instance with Sysprep: just open the EC2LaunchSettings application and click "Shutdown with Sysprep".
4. Create your custom AMI from the instance you just shut down.
5. Use your custom AMI to create other instances, passing user data on instance creation. The user data will be executed on instance launch. In my case, I used the Spot Request screen, which had a User Data text box.
Hope this helps!
I have a Node.js application that runs inside Docker on AWS ECS Fargate.
It started to consume high CPU, and I wonder if I can profile it.
I couldn't find a way to connect via SSH, and I am not sure whether it would help to run it with the --prof flag.
I am a newbie in AWS myself, so please double-check everything I say. Fargate provisions the underlying instances for you, and you are not allowed to interact with them directly (no SSH), but I think you can use CloudWatch Logs, which captures every console.log of your app into the specified log groups. There are some configurations to set when you create your task definition or container definition (at least in CloudFormation, which I highly recommend using). You can console.log the number of users or function calls and use this info to debug what is happening.
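Concretely, getting the container's stdout (including console.log output) into CloudWatch Logs requires an awslogs log configuration in the task's container definition. A minimal fragment (the log group name, region, and prefix are placeholders):

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/my-node-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "app"
    }
  }
}
```

With that in place you can read the app's output in the CloudWatch Logs console without any SSH access to the underlying host.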