Redis connection based on latency - node.js

I'm writing a Node.js web server that will exist in multiple regions across the world.
When using Redis from Node, is it possible to provide a list of all my Redis servers so that the client connects to the one that's closest based on latency?

Can we assume that you're using Redis 3+?
If so, CLUSTER NODES is your friend.
For a horrible bash example, on a Redis server with Puppet's facter installed,
# node ID of the master running on this host (facter resolves the local IP)
INITMASTER1ID=$(redis-cli -h $(facter ipaddress) -c CLUSTER NODES | grep $(facter ipaddress) | grep -Eo '^[^ ]+')
# node IDs of every node in the cluster, as reported by the node at $REDIS2
NODEIDLIST=$(redis-cli -h $REDIS2 -c CLUSTER NODES | grep -Eo '^[^ ]+')
http://redis.io/commands/cluster-nodes
For some less horrible examples written in Node.js, check the docs for https://www.npmjs.com/package/redis-cluster.
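If you would rather pick the endpoint yourself before handing it to the Node client, a minimal sketch of the idea (the hostnames below are placeholders, not from the question) is to time a PING against each candidate with redis-cli and keep the fastest:
# placeholders: replace with your actual regional Redis endpoints
BEST_HOST=""; BEST_MS=999999
for host in redis-us.example.com redis-eu.example.com redis-ap.example.com; do
    start=$(date +%s%N)
    redis-cli -h "$host" PING > /dev/null 2>&1 || continue
    elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
    if [ "$elapsed_ms" -lt "$BEST_MS" ]; then BEST_MS=$elapsed_ms; BEST_HOST=$host; fi
done
echo "lowest-latency node: $BEST_HOST (${BEST_MS} ms)"
The chosen host can then be passed to whatever Node Redis client you use as its connection target.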

Related

Is it possible to connect to a running NestJS server's stderr/stdout?

I'm not sure if it's possible, but I want to connect to an existing application to see what logs it produces in real time. Is it possible to do so from a Linux terminal?
If you know the pid of the process (it should be in the startup logs, or can be retrieved via a ps aux | grep <name> call), you should be able to tail it via tail -f /proc/<pid>/fd/2.
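A minimal sketch of that approach, assuming the server was started with something like node dist/main (the pgrep pattern is an assumption, adjust it to your start command):
# find the PID of the running server; the pattern is illustrative
PID=$(pgrep -f "node dist/main" | head -n1)
# fd/1 is stdout, fd/2 is stderr; tail both in real time
sudo tail -f /proc/"$PID"/fd/1 /proc/"$PID"/fd/2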

How to calculate PostgreSQL memory usage on Linux?

I searched a lot for how to calculate the memory usage of the PostgreSQL processes on Linux. I read an article about how to calculate memory usage for a generic process, but I think PostgreSQL has some peculiarities. For example, it has some basic processes: logger, checkpointer, background writer, etc. But Linux also creates a process for each client connection on the master node.
The easy way to calculate the memory usage is with the ps command, listing the RSS of each process:
ps -aux | grep -v grep | grep postgres | awk '{ print $6 }'
and then summing the values. But this doesn't work, since the result is larger than the total memory.
Some articles suggest using:
/proc/PID/smaps
but as said above I have more than one PID, and I haven't found a script that lets me easily calculate this information.
I found this interesting article, but it's not clear to me how to convert it into a working script.
https://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/
Does anyone know which is the best approach to solve this issue?
To apply the information from the blog you quote:
ps -u postgres o pid= | \
sed 's# *\(.*\)#/proc/\1/smaps#' | \
xargs sudo grep ^Pss: | \
awk '{A+=$2} END{print A}'
first, get the process numbers of all processes running under user postgres
then get the name of the corresponding smaps file (/proc/PID/smaps)
then get the Pss line from that file, which contains the “Proportional Set Size”, which divides shared memory by the number of processes attached to it
finally, add up the numbers
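The Pss values in smaps are reported in kB, so a small variant of the same pipeline (a sketch, same approach as above) prints the total in MB instead:
ps -u postgres o pid= | \
sed 's# *\(.*\)#/proc/\1/smaps#' | \
xargs sudo grep ^Pss: | \
awk '{A+=$2} END{printf "%.1f MB\n", A/1024}'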

Monitoring multiple Linux Systems or Servers Script

I want to modify my script so that it can monitor CPU, memory and disk on 4 servers on my network. The script below monitors a single server. Is there a way I can modify it if I have the hosts and a username and password for each?
printf "Memory\t\tDisk\t\tCPU\n"
end=$((SECONDS+3600))
while [ $SECONDS -lt $end ]; do
MEMORY=$(free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
DISK=$(df -h | awk '$NF=="/"{printf "%s\t\t", $4}')
CPU=$(top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
echo "$MEMORY$DISK$CPU"
sleep 5
done
any ideas or suggestions?
A simple, naive implementation might look like:
for server in host1 host2 host3 host4; do
    ssh "$server" bash -s <<'EOF'
...your script here...
EOF
done
...with RSA keys preconfigured for passwordless authentication. That could be made slightly less naive by leveraging ControlMaster/ControlPath functionality in ssh, so you're keeping the same transport up between multiple ssh sessions and reusing it wherever possible.
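For reference, a sketch of the ~/.ssh/config stanza that turns that connection reuse on (the host names are placeholders):
Host host1 host2 host3 host4
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m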
However -- rolling your own system monitoring tools is a fool's errand, at least until you've been around the block with the existing established ones, know their strengths, know their weaknesses, and can make a reasoned argument as to why they aren't a good fit for you. Use something off-the-shelf maintained by people who've been doing this for a while.

Too many open files in system in kubernetes cluster [duplicate]

I am trying to create a bunch of pods, services and deployments using Kubernetes, but I keep hitting the following error when I run the kubectl describe command.
for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container bbdb58770a848733bf7130b1b230d809fcec3062b2b16748c5e4a8b12cc0533a: [8] System error: too many open files in system\n"
I have already terminated all pods and tried restarting the machine, but it doesn't solve the issue. I am not a Linux expert, so I am just wondering how I should find all the open files and close them.
You can confirm which process is hogging file descriptors by running:
lsof | awk '{print $2}' | sort | uniq -c | sort -n
That will give you a sorted list of open FD counts with the PID of the owning process. Then you can look up each process with:
ps -p <pid>
If the main hogs are docker/kubernetes, then I would recommend following along on the issue that caesarxuchao referenced.
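As a side check (not from the answer above), you can also compare the kernel's current open-file count against the system-wide ceiling, since this particular error is about the system total rather than a per-process ulimit:
cat /proc/sys/fs/file-nr   # allocated, unused, and maximum file handles system-wide
sysctl fs.file-max         # the limit that "too many open files in system" refers to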

Distinguish between several Node.js services with ps aux | grep node

I have 6 Node.js services running on one server. Sometimes I need to kill one of them, but when I execute:
ps aux | grep node
all 6 services appear with the same name, and I don't know which one of them to kill.
Is there some way to tell them apart?
You can use PM2 to manage your Node.js applications. It can also help you restart the node processes when the server is rebooted.
If you know the port each service listens on, this could help on Linux:
sudo netstat -tulpn
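Another option is to match on the full command line rather than just the binary name, since each service is normally started with a different script path (a sketch; the path and port below are placeholders):
# -a prints the full command line, -f matches the pattern against it
pgrep -af "node .*billing/index.js"
# or resolve a known listening port to its owning PID with ss
sudo ss -tlnp | grep ':3001'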
