Scale up multiple microservices through a script - kubernetes/openshift - linux

I have to scale nearly 200 microservices in my environment up and down in one go. Is there a way to automate scaling up and down through a script?

While using xargs or parallel or something similar might work, I suggest using command substitution, or similar means, as much as possible, to give kubectl the whole list of objects in a single invocation.
For example, you can try the command below, which does a dry run but shows you what would be scaled:
kubectl scale --replicas 2 $(kubectl get deploy -o name) --dry-run=client
The list of deployments could also come from a text file or somewhere else.
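For instance, a minimal sketch reading the list from a file (the file name deployments.txt is an assumption; it holds one object name per line, e.g. deployment.apps/nginx):

#!/bin/bash
set -eu -o pipefail
# Scale every object listed in deployments.txt in a single kubectl call;
# drop --dry-run=client once the output looks right.
kubectl scale --replicas 2 $(cat deployments.txt) --dry-run=client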

There is certainly a way to script that, but we are missing a bit of context about why you want to do it.
Generally speaking, I would prefer the approach of using the Horizontal Pod Autoscaler, or even something like Knative, which allows scaling from zero, for automated scaling.
But if the reason is unusual, such as a maintenance window or something else, you can use the scripting approach.
Then you have two weapons at your disposal: the raw Kubernetes API, through a Postman script or something similar, or your best shell-scripting techniques. Going the latter way, I can imagine something like this will work:
#!/bin/bash
set -eu -o pipefail
# targets.txt lines are "namespace objectType/objectName"; --colsep splits
# each line into {1} (namespace) and {2} (object). Prefix the command
# with "echo" first for a dry run.
parallel --colsep ' ' kubectl scale --replicas 0 -n {1} {2} < targets.txt
where targets.txt is formatted this way:
each line is one microservice with the format "namespace objectType/objectName"
Example:
default deploy/nginx
anotherns deploy/api
... more services
The advantage of using parallel over doing a simple bash loop is that this will be done in ... parallel :-)
This will save you a lot of time.
You might also want to include a display/check at the end showing the number of replicas, but this wasn't specified in your question.
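For completeness, a minimal sketch of such a check, reusing the same targets.txt format; the plain kubectl get output already includes the ready/desired replica counts:

# Show the current state of every target after scaling.
parallel --colsep ' ' kubectl get -n {1} {2} < targets.txt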

How to access standard outputs while the process is running on a computer cluster?

Intro
I use a computer cluster on which I submit (qsub) jobs via a .pbs file. The default is to write standard output and standard error to files in the working directory from which the qsub command is executed.
Issue
The issue for me is that the standard output and standard error files appear only at the end of the process, while I would love to be able to monitor the progress of the process live.
Thoughts
I know the following options exist
#PBS -o
#PBS -e
#PBS -j [oe/eo]
in order to indicate specific file names and locations for standard outputs and standard errors. However, two points are unclear to me...
I don't quite know how it works when using a job array (via #PBS -t). Will standard output from one PBS_ARRAYID overwrite the output from previous ones? Does all the standard output for the whole array get printed to the same file, or is there a way to include the PBS_ARRAYID in the file name?
Would using these options help me in any way to access the standard outputs before the end of the process?
Question
How can I visualize the standard outputs and standard errors before a process is over when the process is submitted on a computer cluster?
According to this documentation, each task in an array job gets the environment variable PBS_ARRAYID set, and so you could include that in your settings:
#PBS -o /path/for/$PBS_JOBNAME.$PBS_ARRAYID
I'm not sure why your engine would be withholding the output files until the job completes. (My grid engine doesn't do so.) Perhaps it writes them into a temporary location or under a temporary name and then moves them when the job is done?
On the computer cluster I am using, at least, standard output is kept in RAM and saved to the #PBS -o destination only at the end of the process. One way around that is to explicitly redirect the standard outputs in the shell script. Do something like:
executable arguments >& StandardOutputs_${PBS_JOBNAME}.txt
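For illustration, a minimal sketch of a complete .pbs script using this redirection (the job name, array range, and program name are placeholders):

#!/bin/bash
#PBS -N myjob
#PBS -t 1-10
# Change to the submission directory, then redirect stdout/stderr to
# per-task files that are written live instead of copied at job end.
cd "$PBS_O_WORKDIR"
./my_program input.$PBS_ARRAYID \
    > "StandardOutputs_${PBS_JOBNAME}_${PBS_ARRAYID}.txt" \
    2> "StandardErrors_${PBS_JOBNAME}_${PBS_ARRAYID}.txt"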

How to control the top command in Linux using a script?

Running top interactively you can do different things. Is there a way to write a bash script that would interact with top without using programs like xdotool?
Your best option, if you want to interact with top from a script, is to have a script that runs it once and captures the output (top -b -n 1 will make it run once, in script-friendly batch mode, and then terminate). Your script can run top -b -n 1 with appropriate other parameters every time it wants to capture fresh output, and act on it accordingly.
Trying to run an interactive top session, and have a script send it keystrokes, would be very fragile and quite a mess.
top isn't the best tool to run from scripts. Foremost, its output isn't designed to be processed by scripts. But if you tell us what you want to achieve, we can suggest commands that do similar things:
- ps lists active processes.
- iostat gives you disk and other I/O data.
- sar reports system activity (network, interrupts, ...).
top -b -n n1 -d d1
where,
n1 = number of top iterations to collect
d1 = delay between iterations (in seconds; fractional values are allowed)
Example:
top -b -n $n1 -d $d1 | grep "Cpu" > top.txt (n1 and d1 accepted from user)
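Putting it together, a short sketch that prompts for the two values and collects the CPU line (the output file name top.txt is arbitrary):

#!/bin/bash
# Run top in batch mode: n1 samples, d1 seconds apart; keep only CPU lines.
read -r -p "Number of samples: " n1
read -r -p "Delay between samples (seconds): " d1
top -b -n "$n1" -d "$d1" | grep "Cpu" > top.txt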

How do I "dump" the contents of an X terminal programmatically a la /dev/vcs{,a} in the Linux console?

Linux's kernel-level console/non-X terminal emulator contains a very cool feature (if compiled in): each /dev/ttyN device corresponds with /dev/vcsaN and /dev/vcsN devices which represent the in-memory (displayed) state of that tty, with and without attributes (color, flashing, etc) respectively. This allows you to very easily cat /dev/vcs7 and see a dump of /dev/tty7 wherever cat was launched. I used this incredibly practical capability the other day to login to a system via SSH and remotely watch a dd process I'd forgotten to put inside a screen (or similar) session - it was running off a text console, so I took a few moments to finetune the character ranges that I wanted to grab, and presently I was watching dd's transfer status over SSH (once every second, incidentally).
To reiterate and clarify, /dev/vcs{,a}* are character devices that retrieve the current in-memory representation of the kernel console VT100 emulator, represented as a single "line" of text (there are no "newlines" at the end of each "line" of the screen). Just to remove confusion, I want to note that I can't tail -f this device: it's not a character stream like the TTY itself is. (But I've never needed this kind of behavior, for what it's worth.)
I've kept my ears perked for many years for a method to dump the character-cell memory state of X terminal emulators - or indeed any arbitrary process that needs to work with ttys, in some similar manner as I can with the Linux console. And... I am rather surprised that there is no practical solution to this problem - since it has, arguably, existed for approximately 30 years - X was introduced in 1984 - or, to be pedantic, at least 19 years - /dev/vcs{,a}* was introduced in kernel 1.1.94; the newest file in that release is dated 22 Feb 1995. (The oldest is from 1st Dec 1993 :P)
I would like to say that I do understand and realize that the tty itself is not a "screen buffer" as such but a character stream, and that the nonstandard feature I essentially exploited above is a quirky capability specific to the Linux VT102 emulator. However, this feature is cool enough (why else would it be in the mainline tree? :D) that, in my opinion, there should be a counterpart to it for things that work with /dev/pts*.
This afternoon, I needed to screen-scrape the output of an interactive ncurses application so I could extract metadata from the information it presented in my terminal. (There was no other practical way to achieve the goal I was aiming for.) Linux's kernel VT100 driver would permit such a task to be completed very easily, and I made the mistake of thinking that, in light of this, it couldn't truly be that hard to do the same under X11.
By 9AM, I'd decided that the easiest way to experimentally request a dump of a remote screen would be to run it in dtach (think "screen -x" without any other options) and hack the dtach code to request a screen update and quit.
Around 11AM-12PM, I was requesting screen updates and dumping them to stdout.
Around 3:30PM, I accepted that using dtach would be impossible:
First of all, it relies on the application itself to send the screen redraws on request, by design, to keep the code simple. This is great, but, as luck would have it, the application I was using didn't support whole-screen repaints - it would only redraw on screen-size change (and only if the screen size was truly different!).
Running the program inside a screen session (because screen is a true terminal emulator and has an internal 2D character-cell buffer), then running screen -x inside dtach, also mysteriously failed to produce character cell updates.
I have previously examined screen and found the code sufficiently insane to remove any inclinations I might otherwise have to hack on it; all I can say is that said insanity may be one of the reasons screen does not already have the capabilities I have presented here (which would arguably be very easy to implement).
Other questions similar to this one frequently get answers to use typescript, or script; I just want to clarify that script saves the stream of the tty itself to a file, which I would need to push through a VT100 emulator to obtain a screen image of the current state of the tty in question. In other words, script would be a very insane solution to my problem.
I'm not marking this as accepted since it doesn't solve the actual core issue (which is many years old), but I was able to achieve the specific goal I set out to do.
My specific requirements were that I wanted to screen-scrape the output of the ncdu interactive disk usage browser, so I could simply press Enter in another terminal (or perform some similar, easy sequence) to add the directory currently highlighted/selected in ncdu to a file-list of files I wanted to work with. My goal was not to have to distract myself with endless copy+paste and/or retyping of directory names (probably with not a few inaccuracies to boot), so I could focus on the directories I wanted to select.
screen has a refresh feature, accessed by pressing (by default) CTRL+A, CTRL+L. I extended my copy of dtach to be capable of sending keystrokes in addition to dumping remote screens to stdout, and wrapped dtach in a script that transmitted the refresh sequence (\001\014) to screen -x running inside dtach. This worked perfectly, retrieving complete screen updates without any flicker.
I will warn anyone interested in trying this technique, however, that you will need to perfect the art of dodging VT100 escape sequences. I used regular expressions for this so I wasn't writing thousands of lines of code; here's the specific part of the script that extracted out the two pieces of information I needed:
sh -c "(sleep 0.1; dtach -k qq $'\001\014') &"; path="$(dtach -d qq -t 130000 | sed -n $'/^\033\[7m.*\/\.\./q;/---.*$/{s/.*--- //;s/ -\+.*//;h};/^\033\[7m/{s/.\033.*//g;s/\r.*//g;s/ *$//g;s/^\033\[7m *[^ ]\+ \[[# ]*\] *\(\/*\)\(.*\)$/\/\\2\\1/;p;g;p;q}' | sed 'N;s/\(.*\)\n\(.*\)/\2\1/')"
Since screenshots are cool and help people visualize things, here's a look at how it works when it's running:
The file shown inverted at the bottom of the ncdu-scrape window is being screen-scraped from the ncdu window itself; the four files in the list are there because I selected them using the arrow keys in ncdu, moved my mouse over to the ncdu-scrape window (I use focus-follows-mouse), and pressed Enter. That added the file to the list (a simple text file itself).
Having said this, I would like to clarify that the regular expression above is not a code sample to run with; it is, rather, a warning: for anything beyond incredibly trivial (!!) content extractions such as the one presented here, you're basically getting into the same territory as large corporations/interests who want to convert from VT100-based systems to something more modern, who have to spend tens of thousands commissioning large translation frameworks that perform the kind of conversion outlined above on an especially large scale.
Saner solutions appreciated.

Reading application stdout data using node.js

Let's take, for example, the "top" application, which displays system information and updates it periodically.
I want to run it using node.js and display that information (and updates!).
Code I've come up with:
#!/usr/bin/env node
var spawn = require('child_process').spawn;
var top = spawn('top', []);
top.stdout.on('readable', function () {
    console.log("readable");
    console.log('stdout: ' + top.stdout.read());
});
It doesn't behave the way I expected. In fact it produces nothing:
readable
stdout: null
readable
stdout:
readable
stdout: null
And then exits (that is also unexpected).
The top application is taken just as an example. The goal is to proxy those updates through node and display them on the screen (the same way as when running top directly from the command line).
My initial goal was to write a script to send a file using scp. I did that, and then noticed that I was missing the progress information which scp itself displays. I looked around at scp node modules and they do not proxy it either. So I backtracked to a common application like top.
top is an interactive console program designed to be run against a live pseudo-terminal.
As to your stdout reads, top is seeing that its stdin is not a tty and is exiting with an error, thus no output on stdout. You can see this happen in the shell: if you do echo | top, it will exit because stdin will not be a tty.
Even if it were actually running, though, its output data is going to contain control characters for manipulating a fixed-dimension console (like "move the cursor to the beginning of line 2"). It is an interactive user interface and a poor choice as a programmatic data source. "Screen scraping" and interpreting this data to extract meaningful information is going to be quite difficult and fragile. Have you considered a cleaner approach, such as getting the data you need out of the /proc/meminfo file and the other special files the kernel exposes for this purpose? Ultimately top is getting all this data from readily-available special files and system calls, so you should be able to tap into data sources that are convenient for programmatic access instead of trying to screen scrape top.
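As a quick illustration of how approachable those data sources are, here is a shell sketch (reading the same file from Node is a few lines of fs.readFileSync plus string splitting; MemAvailable is present on reasonably recent kernels):

# The memory figures top displays come from /proc/meminfo, which is
# plain "Key: value kB" text and trivial to parse programmatically.
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo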
Now of course, top has analytics code to compute averages and so forth that you may have to re-implement, so both screen scraping and going through clean data sources have their pros and cons, and their easy and difficult aspects. But my $0.02 would be to focus on good data sources instead of trying to screen scrape a console UI.
Other options/resources to consider:
The free command such as free -m
vmstat
and other commands described in this article
the expect program is designed to help automate console programs that expect a terminal
And just to be clear, yes it is certainly possible to run top as a child process, trick it into thinking there's a tty and all the associated environment settings, and get at the data it is writing. It's just extremely complicated and is analogous to trying to get the weather by taking a photo of the weather channel on a TV screen and running optical character recognition on it. Points for style, but there are easier ways. Look into the expect command if you need to research more about tricking console programs into running as subprocesses.
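If you do end up needing the pseudo-terminal route, the script utility can supply a pty without any trickery on the Node side; a minimal sketch (the -c option is the util-linux variant of script):

# Run top once under a pseudo-terminal; the typescript file is discarded.
script -q -c "top -n 1" /dev/null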

Benchmarking Node.JS server

I've written a Node.JS server which I would like to benchmark. It has the following components that I would like to benchmark separately:
- socket.io: how many continuous connections can it accept and process (where is the saturation point)
- redis: the same as above
- express: don't want to benchmark it
I know there is some (though not a lot of) documentation about that on the internet, but I don't want to reinvent the wheel, and I don't want to spend countless hours trying a solution that turns out to be wrong for the job.
This is why I'm asking you guys here: what should I use to get a number/graph (whatever) of the number of simultaneous connections the server can process without being bogged down? It would also be nice to monitor the CPU, memory and swap usage of the process (yeah, yeah, I know I can use countless techniques or write my own script, but maybe something like that exists already).
I'm not looking for an answer where you'll paste a link to some solution that I already know exists; I would like an answer from someone with actual experience, who can really make a point or two and point me in the right direction.
Thank you
You can use ApacheBench (ab) to test the load that your server can take - see the man page.
Some nice tutorials :
nixcraft/howto-performance-benchmarks-a-web-server
petefreitag/Using Apache Bench for Simple Load Testing
Usage :
$ ab -k -n 1000 -c 100 http://www.yourserver.com/
-k - enable HTTP keep-alive
-n N - send N requests to the server in total
-c X - perform X requests concurrently
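As for monitoring CPU, memory and swap of the process while the benchmark runs, a minimal sketch using ps (the pgrep pattern 'node server.js' is a placeholder for however you start your server):

#!/bin/bash
# Sample CPU%, MEM% and RSS of the node process once per second.
pid=$(pgrep -f 'node server.js' | head -n 1)
while sleep 1; do
    ps -o %cpu=,%mem=,rss= -p "$pid"
done >> usage.log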
