Recently we have been facing a problem where our allocated disk space on Amazon RDS is getting exhausted.
So is there any tool or plugin (e.g. a Nagios plugin) or any other utility through which we can monitor RDS disk utilization?
P.S.: We know CloudWatch can do this, but we are looking for other alternatives as well.
We use this check_cloudwatch plugin to grab any CloudWatch metric so it can be graphed in our private cloud using PNP4Nagios and Graphite; it works very well.
check_cloudwatch on GitHub
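As a rough illustration of what such a plugin does under the hood, here is a minimal Nagios-style check in Python with boto3; the instance identifier, region, and thresholds are placeholders I made up, not values from the question:

```python
# Minimal Nagios-style check for RDS free storage via CloudWatch (boto3).
# Instance identifier, region, and thresholds are hypothetical placeholders.
import sys
from datetime import datetime, timedelta, timezone

import boto3

DB_INSTANCE = "my-rds-instance"   # placeholder
WARN_GB, CRIT_GB = 50, 25         # placeholder thresholds

cw = boto3.client("cloudwatch", region_name="us-east-1")
resp = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_INSTANCE}],
    StartTime=datetime.now(timezone.utc) - timedelta(minutes=10),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)
points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
if not points:
    print("UNKNOWN - no datapoints returned")
    sys.exit(3)

free_gb = points[-1]["Average"] / 1024 ** 3   # metric is reported in bytes
status, code = "OK", 0
if free_gb < CRIT_GB:
    status, code = "CRITICAL", 2
elif free_gb < WARN_GB:
    status, code = "WARNING", 1
print(f"{status} - {free_gb:.1f} GB free|free_gb={free_gb:.1f}")
sys.exit(code)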
I use AWS CloudWatch to monitor my RDS instance's free space and CPU utilization. You can set up email alerts based on thresholds, say when free space < 25 GB. Besides free space, there is a variety of things you can monitor, like read/write latency, DB connections, etc. You can find the steps (both via the UI and the CLI) to set up CloudWatch here:
AWS CloudWatch setup
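If you'd rather script the alarm than click through the console, a hedged sketch of the "free space < 25 GB" alert with boto3 might look like this (the instance name and SNS topic ARN are placeholders):

```python
# Sketch: create the "free space < 25 GB" CloudWatch alarm with boto3.
# The DB instance name and SNS topic ARN below are placeholders.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
cw.put_metric_alarm(
    AlarmName="rds-free-storage-low",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-rds-instance"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=25 * 1024 ** 3,          # FreeStorageSpace is in bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```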
I'm running three MEAN stack applications. Each application receives over 10,000 monthly users. Could you please assist me in finding an EC2 instance for my apps?
I've been using a t3.large instance with two vCPUs and eight gigabytes of RAM, but it costs $62 to $64 per month.
I need help deciding which EC2 instance to use for three Node.js applications.
First, check the CloudWatch metrics for the current instance. Are CPU and memory usage consistent over time? Analysing the metrics could help you decide whether you should select a smaller or bigger instance.
One way to avoid unnecessary costs is to use Auto Scaling groups and load balancers. By finding and applying the proper settings, you could always have the right amount of computing power for your applications.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html
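As a hedged sketch of the metric analysis suggested above, something like this pulls two weeks of CPU history for the t3.large with boto3 (the instance ID is a placeholder; note that memory metrics are not published by default and require the CloudWatch agent):

```python
# Sketch: fetch two weeks of hourly CPU statistics for an EC2 instance.
# The instance ID is a placeholder, not from the question.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=14),
    EndTime=datetime.now(timezone.utc),
    Period=3600,                        # one datapoint per hour
    Statistics=["Average", "Maximum"],
)
for p in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(p["Timestamp"], f"avg={p['Average']:.1f}%", f"max={p['Maximum']:.1f}%")
```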
It depends on your applications: do your apps need more compute power, more memory, or more storage? Choosing a server is similar to installing an app on a system: check what its basic requirements are and then proceed to choose the server.
If you have 10k+ monthly users, think about using an ALB so that traffic gets distributed evenly. Try caching to serve some content if possible. Use the unlimited burst mode of t3 instances if CPU keeps hitting 100% (sketched below). Also, try to optimize the code so that fewer resources are consumed. Once you are comfortable with the EC2 choice, try to purchase Savings Plans or RIs to lower the cost.
Also, do monitor the servers and traffic using features like the CloudWatch agent and Internet Monitor.
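For reference, switching a t3 instance to unlimited credits is a single API call; a minimal boto3 sketch (instance ID is a placeholder):

```python
# Sketch: enable unlimited CPU credits on a t3 instance via boto3.
# The instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
    ]
)
```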
I'm looking for an official AWS CloudWatch Appender for Log4J2.
I've searched all over and didn't find any.
Is anybody out there using CloudWatch in Java apps with Log4J2?
I've been reading that the best approach to integrate with AWS CloudWatch Logs is using the CloudWatch Logs agent.
It seems that having an independent agent will be much more reliable than the application logging directly to CloudWatch.
[Update] Why might it be more reliable?
If CloudWatch or the web server's connection is down, the appender may miss the log event. A write to disk would never be missed.
Nothing is faster than writing to a file on local disk. Under high log volume, sending data through a TCP connection could cause a performance impact or bottlenecks in the application.
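To make the write-to-disk-then-ship model concrete, here is a deliberately simplified Python sketch of what the agent does: tail a local file and forward lines to CloudWatch Logs. The group, stream, and file path are placeholders, the group and stream are assumed to already exist, and the real agent adds batching, retries, and state tracking:

```python
# Simplified illustration of the agent model: the app writes to a local
# file; a separate process tails it and ships lines to CloudWatch Logs.
# Group/stream names and the file path are placeholders.
import time

import boto3

logs = boto3.client("logs", region_name="us-east-1")
GROUP, STREAM = "/myapp/app-log", "web-1"   # assumed to already exist

with open("/var/log/myapp/app.log") as f:
    f.seek(0, 2)                            # start tailing at end of file
    while True:
        line = f.readline()
        if not line:
            time.sleep(1)                   # wait for the app to write more
            continue
        logs.put_log_events(
            logGroupName=GROUP,
            logStreamName=STREAM,
            logEvents=[{"timestamp": int(time.time() * 1000),
                        "message": line.rstrip()}],
        )
```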
I would support the answer from Gonzalo.
I just want to update the answer with the new unified agent that can collect both logs and performance metrics.
Collecting Metrics and Logs from Amazon EC2 Instances
I have seen two levels of scaling instances in open-source Cloud Foundry.
cf scale -i INSTANCES
cf scale -m MEMORY -k DISK
Is there something available for cell-level auto-scaling in CF? E.g. if I have 5 instances of an app running and I want to launch 15 more, but the cell VMs currently running have the capacity to run only 15 instances in total, can I use an existing service that recognises that serving the load would need one more cell and spawns another machine?
I'm looking to deploy CF on Azure, so an Azure-specific solution would also help.
I think the short answer is no (at least at the time of writing). Usually, Cloud Foundry is deployed using Bosh, and Bosh does not have an auto-scaling feature.
The way that a CF platform is typically managed is that as a CF operator, you would have monitoring setup so that you can see the capacity of your platform (there are metrics that tell you how much capacity is left on your Cells) and also alert when your platform hits certain capacity limits. When you reach these, you can then use Bosh to scale up or down the number of Cells accordingly. This would be a manual operation with Bosh though.
Having said that, I suppose there's nothing to stop you from using the alerts to automatically trigger Bosh to scale the Cells up or down; there's just nothing (as of this writing) that does that out of the box (i.e. as part of Bosh itself).
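Purely as a hypothetical sketch of that idea (nothing like this ships with Bosh), an alert handler could bump the cell count and redeploy via the Bosh CLI. The deployment name, manifest, ops file, and variable name below are all made up for illustration:

```python
# Hypothetical alert handler: when remaining cell capacity drops below a
# threshold, redeploy with one more Diego cell. All names/paths are
# illustrative; your manifest and ops files will differ.
import subprocess

def scale_cells(new_count: int) -> None:
    subprocess.run(
        ["bosh", "-n", "-d", "cf", "deploy", "cf-manifest.yml",
         "-o", "scale-cells.yml",                 # hypothetical ops file
         "-v", f"cell_instances={new_count}"],
        check=True,
    )

def on_capacity_alert(remaining_instance_slots: int, current_cells: int) -> None:
    # Threshold taken from the question's example of needing 15 more slots.
    if remaining_instance_slots < 15:
        scale_cells(current_cells + 1)
```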
Hope that helps!
I deployed a Node.js app as a learning tool and noticed that I'm getting billed for the project (around $1/day). I know Node.js on Google Cloud uses Compute Engine to run the VMs, and they say the flexible environment has all the advantages of the App Engine platform, but it seems the instances don't automatically stop and start to reduce billing when not in use.
I have a Java project that's been running on App Engine for years and I've never been billed anything; I'm guessing that's because the instances are shut down automatically when not in use. So my questions are:
Is there a way to configure the flexible environment to mimic the standard environment to reduce the operating costs?
Am I misusing something with the flexible environment?
According to Google App Engine Documentation,
Instances within the standard environment have access to a daily limit of resource usage that is provided at no charge defined by a set of quotas...
Instances within the flexible environment are charged the cost of the underlying Google Compute Engine Virtual Machines.
According to this article,
Currently, the Flexible Environment needs at least one instance running to serve traffic and there is no free tier.
This means that at any one time, you have at least one instance running, if you're using a Flexible VM. That should explain the billing.
Please note that by default App Engine launches two g1-small instances. Depending on your application's needs, this may be overkill. You should configure the compute resource settings in your app.yaml to appropriate sizes of RAM, disk, and CPU so as to save costs. You may also want to set min_num_instances to 1 in your service's scaling settings.
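A hedged sketch of what such an app.yaml might look like for a Node.js flexible-environment app (the resource values are illustrative minimums, not recommendations for your workload):

```yaml
# Sketch of a cost-trimmed flexible-environment app.yaml.
# Values are illustrative; size them to your actual needs.
runtime: nodejs
env: flex

resources:
  cpu: 1
  memory_gb: 0.6
  disk_size_gb: 10

automatic_scaling:
  min_num_instances: 1   # default is 2
  max_num_instances: 2
```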
I had the same problem. You can try to use Google's pricing calculator to figure out which configuration you need and how to minimize the cost of your application.
According to the calculator, the minimal cost for a flexible environment app is a little less than $40 per month. There is nothing to do about it right now.
I eventually moved to Heroku because of that.
At the moment we are running our application on AWS Elastic Beanstalk but are trying to determine the suitability of Azure.
Our biggest issue is the amount of wasted CPU time we are paying for but not using. We are running on t2.small instances as these have the minimum amount of RAM we need, but we never use even the base amount of CPU time allotted (20% for a t2.small). We need lots of CPU power during short bursts of the day, and bringing more instances online in advance of this is the only way we can handle it.
AWS Lambda looks a good solution for us but we have dependencies on Windows components like SAPI so we have to run inside of Windows VMs.
Looking at Azure Cloud Services, we thought a Web role would be the best fit for our app, but it seems a Web role is nothing more than a Windows Server 2012 VM with IIS enabled. So as the app scales, it just brings on more of these VMs, which is exactly what we have at the moment. Does Azure have a service similar to Lambda where you just pay for the CPU processing time you use?
The reason for our inefficient use of CPU resources is that our speech generation app uses lots of 3rd-party voices but can only run single-threaded when calling into SAPI, because the voice engine is prone to crashing when multithreading. We have no control over this voice engine. It must have access to the system registry and Windows SAPI, so the ideal solution is to somehow wrap all the dependencies in a package, deploy this onto Azure, and then kick off multiple instances of it. What "this" is, I have no idea.
Microsoft just announced a new serverless compute service as an alternative to AWS Lambda, called Azure Functions:
https://azure.microsoft.com/en-us/services/functions/
http://www.zdnet.com/article/microsoft-releases-preview-of-new-azure-serverless-compute-service-to-take-on-aws-lambda/
With Azure Functions you only pay for what you use, with compute metered to the nearest 100 ms at a per-GB-second price based on the time your function runs and the memory size of the function space you choose. Function space sizes range from 128 MB to 1536 MB, with the first 400,000 GB-seconds free.
Azure Functions requests are charged per million requests, with the first 1 million requests free.
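To make the metering concrete, here is a back-of-the-envelope cost calculation using the consumption rates published at the time ($0.000016 per GB-second and $0.20 per million executions, after the free grants); the workload numbers are made up for illustration:

```python
# Illustrative Azure Functions consumption-plan cost estimate.
# Rates were the published ones at the time; workload numbers are made up.
executions_per_month = 3_000_000
avg_duration_s = 0.5
memory_gb = 512 / 1024                        # 512 MB function space

gb_seconds = executions_per_month * avg_duration_s * memory_gb
billable_gb_s = max(0, gb_seconds - 400_000)           # free GB-s grant
billable_execs = max(0, executions_per_month - 1_000_000)

cost = billable_gb_s * 0.000016 + billable_execs / 1_000_000 * 0.20
print(f"{gb_seconds:,.0f} GB-s -> ${cost:.2f}/month")   # 750,000 GB-s -> $6.00/month
```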
Based on the documentation on the Azure website here: https://azure.microsoft.com/en-in/campaigns/azure-vs-aws/mapping/, the services equivalent to AWS Lambda are WebJobs and Logic Apps.
The most direct equivalent of Lambda on Azure is Azure Automation, which does a lot of what Lambda does except it runs PowerShell instead of Node, etc. It isn't as tightly integrated into other services as Lambda is, but it has the same model: you write a script, and it is executed on demand.
I presume by SAPI you are referring to the Speech API? If so, you can create PowerShell modules for Azure, and they can include DLL files, in which case you could create a module to wrap around the SAPI DLL, and that should do what you are looking for.
If you want a full compute environment without the complexity of multiple machines when you run, you could use Azure Batch, which would be the Azure-recommended way of running what you are looking for.
The cost-benefit you need to evaluate would be how much quicker your solution would run on a native .NET stack (in Batch), and whether performance is significantly degraded when run from PowerShell.
Personally I would give Automation a try, it is surprisingly powerful.
There is something called a "Cloud Service" in Azure which allows you to run code on a plain VM. Scaling options for these include things such as CPU %, queue size, etc. If you can schedule your needs, Azure allows you to easily set up a scheduled scaler, e.g. 4 VMs from 8:00 AM until 8:10 AM, and of course, in Azure you pay by the minute, so it could be a feasible solution.
I'd say more, but the documentation in Azure is really so great that I'd be offending them by offering my "translation" here. Check out azure.com for more info :)