When will AWS RDS Freeable Memory be recycled? - amazon-rds

The production environment uses AWS RDS (MySQL). The RDS freeable memory has been steadily declining, as shown in picture 1. The instance type is db.m3.xlarge. When will freeable memory be reclaimed? Or will it keep dropping until it reaches 0?

As per the documentation available at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html#CHAP_BestPractices.UsingMetrics, freeable memory is the amount of RAM available on your instance. If you are running out of freeable memory, you should consider upgrading your instance.
Freeable Memory – How much RAM is available on the DB instance, in megabytes. The red line in the Monitoring tab metrics is marked at 75% for CPU, Memory and Storage Metrics. If instance memory consumption frequently crosses that line, then this indicates that you should check your workload or upgrade your instance.
However, on your instance the available RAM is most likely not causing issues. The same article also has recommendations for tuning RAM usage: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html#CHAP_BestPractices.Performance.RAM.
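To make the "when should I act" question concrete, here is a minimal sketch of how you might interpret a declining FreeableMemory series. The 5% floor and the helper name are my own assumptions, not AWS guidance; the 15 GiB figure is the published RAM of db.m3.xlarge.

```python
# Illustrative only: decide when a declining FreeableMemory trend is worth
# acting on. The 5% floor is an assumed threshold, not an AWS recommendation.

DB_M3_XLARGE_RAM_BYTES = 15 * 1024**3  # db.m3.xlarge has 15 GiB of RAM

def needs_attention(freeable_bytes_samples, total_ram_bytes, floor_fraction=0.05):
    """Return True if the latest FreeableMemory sample is below the floor.

    A steadily declining FreeableMemory is normal: the database engine and
    the OS page cache claim RAM over time. It only matters once the value
    approaches the floor you choose.
    """
    floor = total_ram_bytes * floor_fraction
    return freeable_bytes_samples[-1] < floor

# Example: freeable memory has dropped from 8 GiB to 512 MiB
samples = [8 * 1024**3, 4 * 1024**3, 1 * 1024**3, 512 * 1024**2]
print(needs_attention(samples, DB_M3_XLARGE_RAM_BYTES))  # True: below 5% of 15 GiB
```

In practice you would feed this from the CloudWatch FreeableMemory datapoints rather than hard-coded samples.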

Related

Getting Redis memory peak issue

I've deployed the project on AWS and we are using a local Redis. We have around 150+ Redis keys holding a lot of data. The EC2 instance has 16 GB of RAM, and in the Redis config we set maxmemory to 10 GB.
But we are getting the error below:
--> redis-cli
--> memory doctor
Sam, I detected a few issues in this Redis instance memory implants:
Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.
I'm here to keep you safe, Sam. I want to help you.
It would be great if anyone could help us resolve this as soon as possible.
Please let us know.
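The MEMORY DOCTOR output above is describing memory fragmentation. A quick sketch of the underlying check, using the used_memory and used_memory_rss fields that `redis-cli INFO memory` reports (the sample numbers here are invented for illustration):

```python
# mem_fragmentation_ratio = RSS as seen by the OS / memory Redis thinks it uses.
# A ratio well above 1 right after a peak usually means the allocator has not
# returned freed pages to the OS; MEMORY PURGE (jemalloc only) or a restart
# reclaims them, as the doctor message says.

def fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    return used_memory_rss / used_memory

# e.g. Redis reports 6 GiB used, but the process RSS is 9 GiB
ratio = fragmentation_ratio(9 * 1024**3, 6 * 1024**3)
print(round(ratio, 2))  # 1.5 -- matches the "more than 150% ... peak" message
```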

Varnish RAM problems

I have problems with RAM on servers which run varnish and only varnish (no other apps there). Each machine has 64GB of RAM available for caching and runs three separate varnish services for different backends. Currently, the total RAM allocated to varnish across all services is 24GB on each server. I want to increase this to 48GB (75% of the available memory), but I have some problems.
When I tried to allocate 8GB more to just one service (32GB across all), committed memory peaked at 70GB(?). What's more, the enlarged service restarted a few times after reaching 100% of its RAM limit (error msg: child not responding to CLI, killing it / died signal=6 / Panic message: Assert error in vbf_fetch_thread()). In addition, the services use a lot of VSZ (virtual memory size); is that okay?
This could be Transient memory, which is uncapped by default and will use malloc as storage.
In the Transient stevedore Varnish stores objects with a TTL < 10s, so if you have many of those, that's what you are seeing.
The solution is to either increase the TTLs or cap the Transient storage.
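As a sketch of the capping approach: the Transient store can be bounded by naming it explicitly in the varnishd storage arguments. The sizes below are illustrative, not recommendations.

```
# Illustrative varnishd storage arguments: a 24G main cache plus a capped
# 512M Transient store (by default Transient is unbounded malloc storage)
varnishd -s malloc,24G -s Transient=malloc,512M ...
```

Note that capping Transient means short-TTL objects can fail to be cached under pressure, so size it with some headroom.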
I've changed the jemalloc parameters lg_dirty_mult and lg_chunk. Now I was able to assign 42 GB of RAM to varnish, and committed memory is around 60 GB. The main varnish task is to cache images, and the TTL is set to 365d.

Cassandra not utilizing CPU and RAM

We are running a 5-node Cassandra cluster (3.10) with 8 cores, 32GB of memory and a 2TB disk each.
The cluster is running in k8s over google cloud.
Recently our disk usage increased from 400GB to ~800GB on each node, and at that point we started suffering from many read/write timeouts.
When checking the nodes' resource usage, we noticed that CPU is at 1.5-2 cores and RAM at 17GB.
It seems like they are bound for some reason, and the only observation we have is a reverse correlation between disk usage and CPU usage: the higher the disk usage, the lower the CPU usage.
Is there a way to see what's blocking the CPU and RAM from being fully utilized?

Kubernetes doesn't take into account total node memory usage when starting Pods

What I see: Kubernetes takes into account only the memory used by its components when scheduling new Pods, and considers the remaining memory as free, even if it's being used by other system processes outside Kubernetes. So, when creating new deployments, it attempts to schedule new pods on an already overloaded node.
What I expected to see: Kubernetes automatically takes into account the total memory usage (Kubernetes components + system processes) and schedules the pod on another node.
As a work-around, is there a configuration parameter that I need to set or is it a bug?
Yes, there are a few parameters for allocating resources:
You can manually allocate memory and CPU for your pods and for your system daemons. The documentation shows how this works with an example:
Example Scenario
Here is an example to illustrate Node Allocatable computation:
Node has 32Gi of memory, 16 CPUs and 100Gi of Storage
--kube-reserved is set to cpu=1,memory=2Gi,ephemeral-storage=1Gi
--system-reserved is set to cpu=500m,memory=1Gi,ephemeral-storage=1Gi
--eviction-hard is set to memory.available<500Mi,nodefs.available<10%
Under this scenario, Allocatable will be 14.5 CPUs, 28.5Gi of memory and 98Gi of local storage. The scheduler ensures that the total memory requests across all pods on this node do not exceed 28.5Gi and that storage doesn't exceed 88Gi. The kubelet evicts pods whenever the overall memory usage across pods exceeds 28.5Gi, or if overall disk usage exceeds 88Gi. If all processes on the node consume as much CPU as they can, pods together cannot consume more than 14.5 CPUs.
If kube-reserved and/or system-reserved is not enforced and system daemons exceed their reservation, kubelet evicts pods whenever the overall node memory usage is higher than 31.5Gi or storage is greater than 90Gi
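The arithmetic in the scenario above can be checked directly. A quick sketch, using exactly the values quoted from the flags (memory Allocatable also subtracts the hard eviction threshold, which is how 28.5Gi is reached):

```python
# Node Allocatable computation from the example scenario, in Gi / CPUs.
node = {"cpu": 16, "memory": 32, "storage": 100}
kube_reserved = {"cpu": 1, "memory": 2, "storage": 1}
system_reserved = {"cpu": 0.5, "memory": 1, "storage": 1}
eviction_hard = {"memory": 0.5, "storage": 10}  # 500Mi; nodefs.available<10% of 100Gi

allocatable = {
    # No CPU eviction threshold, so only the reservations are subtracted.
    "cpu": node["cpu"] - kube_reserved["cpu"] - system_reserved["cpu"],
    # Memory Allocatable also subtracts the hard eviction threshold.
    "memory": node["memory"] - kube_reserved["memory"] - system_reserved["memory"]
              - eviction_hard["memory"],
    "storage": node["storage"] - kube_reserved["storage"] - system_reserved["storage"],
}
print(allocatable)  # {'cpu': 14.5, 'memory': 28.5, 'storage': 98}

# The scheduler keeps pod storage below Allocatable minus the eviction threshold:
scheduler_storage_bound = allocatable["storage"] - eviction_hard["storage"]
print(scheduler_storage_bound)  # 88

# If the reservations are NOT enforced, the kubelet falls back to node-level
# eviction thresholds:
print(node["memory"] - eviction_hard["memory"])   # 31.5
print(node["storage"] - eviction_hard["storage"])  # 90
```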
You can reserve as much as you need for Kubernetes with the flag --kube-reserved and for the system with the flag --system-reserved.
Additionally, if you need stricter rules for spawning pods, you could try to use Pod Affinity.
The kubelet has the parameter --system-reserved that allows you to reserve CPU and memory for system processes.
It is not dynamic (you reserve resources only at launch), but it is the only way to tell the kubelet not to use all the resources on the node.
--system-reserved mapStringString
A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none]

What is the relationship between three metrics of RDS: Free Memory,Active Memory and Freeable Memory?

What are the three AWS RDS metrics: Free Memory (Enhanced Monitoring), Active Memory (Enhanced Monitoring), and Freeable Memory (CloudWatch monitoring)?
What is the relationship between them?
Look at these two pictures.
The values of the three metrics are different.
Let me answer your question in two parts.
What is the difference between Enhanced Monitoring and CloudWatch monitoring?
As per the official guide:
Amazon RDS Enhanced Monitoring – Look at metrics in real time for the operating system.
Amazon CloudWatch Metrics – Amazon RDS automatically sends metrics to CloudWatch every minute for each active database. You are not charged additionally for Amazon RDS metrics in CloudWatch.
Meaning, Enhanced Monitoring lets you monitor operating system counters, while CloudWatch monitoring lets you monitor performance counters per database instance.
What do Free/Active/Freeable memory represent?
Enhanced monitoring info source
Free Memory
The amount of unassigned memory, in kilobytes.
Active Memory
The amount of assigned memory, in kilobytes.
Freeable Memory
Official Source
The amount of available random access memory.
Units: Bytes
Freeable memory is not an indication of the actual free memory available. It is memory that is currently in use but can be freed and used for other purposes; it is a combination of the buffers and cache in use on the database instance.
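A rough illustration of that last point: FreeableMemory counts memory that is truly free plus memory in buffers/cache that the OS could reclaim, which is why it differs from Enhanced Monitoring's Free Memory. The numbers below are invented; real values would come from the instance's /proc/meminfo equivalents.

```python
# Invented /proc/meminfo-style sample, in kB (CloudWatch reports bytes).
meminfo_kb = {
    "MemFree": 512_000,    # truly unused: roughly Enhanced Monitoring "Free Memory"
    "Buffers": 256_000,
    "Cached": 4_096_000,
}

# FreeableMemory is approximately free + reclaimable buffers/cache:
freeable_kb = meminfo_kb["MemFree"] + meminfo_kb["Buffers"] + meminfo_kb["Cached"]
print(freeable_kb)  # 4864000 -- much larger than MemFree alone
```

This is why FreeableMemory can look alarmingly low while the instance is healthy: the cache portion is released back on demand.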