In Redis, I have many keys of String type that store download times for an application, for example:
Key          Value
20131028:1   100
20131028:2   15
20131028:3   10
..........
I want to sum the values of all these keys with a Redis command. Please help me solve this. Thank you so much.
Redis is not designed for this kind of thing. You would be better served by an RDBMS, MongoDB, or something like Elasticsearch.
Still, if you need to do it (to be launched from a shell):
$ redis-cli keys '20131028:*' | awk '{print "get "$1}' | redis-cli | awk '{x+=$1} END { print x }'
Another way to do it is to use a server-side Lua script:
$ redis-cli eval "local keys = redis.call('keys',KEYS[1]) ; local sum=0 ; for _,k in ipairs(keys) do sum = sum + redis.call('get',k) end ; return sum" 1 '20131028:*'
In both cases, performance will suck if you have many keys in the Redis instance, and the instance will be blocked for all connections while the keys are scanned.
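If blocking is a concern, one workaround is to walk the keyspace incrementally with SCAN instead of KEYS. A minimal sketch using redis-cli's --scan option (assuming redis-cli 2.8+; MGET prints one value per line, which awk then sums):
$ redis-cli --scan --pattern '20131028:*' | xargs redis-cli mget | awk '{x+=$1} END { print x }'
The server stays responsive between SCAN iterations, at the cost of a slightly longer total run.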
Available since Redis v2.6 is the awesome ability to execute Lua scripts on the Redis server. For example, to sum the values of a hash named Key:
127.0.0.1:6379> EVAL "local sum = 0 local i=1 local a1 = redis.call('hvals','Key') while(a1[i]) do sum=sum+a1[i] i=i+1 end return sum" 0
An important note is that Redis server-side Lua scripts block EVERYTHING while they run, which can be a dealbreaker in many cases. Source: stackoverflow.com/a/30896608/2440
I think it's better to use MGET and fetch all values with one command instead of issuing one command per key. That way you get all the results with a single call to Redis and just have to sum them up. Of course, this only works if you know the keys beforehand...
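For instance, a minimal sketch with the example keys from the question:
$ redis-cli mget 20131028:1 20131028:2 20131028:3 | awk '{x+=$1} END { print x }'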
Use Redis Lua:
eval "local s = 0 for _,v in ipairs(redis.call('hvals', KEYS[1])) do s = s + v end return s" 1 Key
You could save the script with SCRIPT LOAD, then reuse it with EVALSHA for any hash key.
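A sketch of that flow (the SHA1 placeholder stands for whatever SCRIPT LOAD prints back):
$ redis-cli script load "local s = 0 for _,v in ipairs(redis.call('hvals', KEYS[1])) do s = s + v end return s"
$ redis-cli evalsha <sha1-printed-by-script-load> 1 Key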
Related
I want to see just the total number of keys in an Azure Redis cache that match a given pattern. I tried the following command, but it shows the count only after displaying all the keys (which caused server load), and I need only the count.
>SCAN 0 COUNT 10000000 MATCH "{UID}*"
Besides the command SCAN, the command KEYS pattern can return the same result as your current command SCAN 0 COUNT 10000000 MATCH "{UID}*".
However, for your real need, getting the number of keys matching a pattern, there is an issue (add COUNT command) in the official Redis GitHub repo, which the author antirez answered as follows:
Hi, KEYS is only intended for debugging since it is O(N) and performs a full keyspace scan. COUNT has the same problem but without the excuse of being useful for debugging... (since you can simply use redis-cli keys ... | grep ...). So feature not accepted. Thanks for your interest.
So you cannot directly get the count of KEYS pattern, but there are some possible solutions for you.
1. Count the keys returned by the command KEYS pattern in your programming language. For a small number of keys matching a pattern, you can also do redis-cli KEYS "{UID}*" | wc -l on the Redis host.
2. Use the command EVAL script numkeys key [key ...] arg [arg ...] to run a Lua script that counts the keys matching a pattern. There are two scripts you can try, shown below.
2.1. Script 1
return #redis.call("keys", "{UID}*")
2.2. Script 2
return table.getn(redis.call('keys', ARGV[1]))
The complete command in redis-cli is EVAL "return table.getn(redis.call('keys', ARGV[1]))" 0 '{UID}*' (quote the pattern so the shell does not glob-expand the *).
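If KEYS is too heavy for a production instance, a non-blocking alternative is to count client-side while iterating with SCAN; a sketch using redis-cli's --scan option:
$ redis-cli --scan --pattern '{UID}*' | wc -l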
declare -a result=`$ORACLE_HOME/bin/sqlplus -silent $DBUSER/$DBPASSWORD#$DB << EOF
$SQLPLUSOPTIONS $roam_query
exit;
EOF`
I am trying to pull data from an Oracle database and populate a bash variable. The select query works; however, it returns multiple rows, and those rows come back as one long continuous string. I want to capture each row from the database in an array index, for example:
index[0] = row 1 information
index[1] = row 2 information
Please help. All suggestions are appreciated. I checked all the documentation without luck. Thank you. I am using Solaris Unix.
If you have bash version 4, you can use the readarray -t command to do this. Any vaguely recent Linux should have bash v4, but I don't know about Solaris.
BTW, I'd also recommend putting double-quotes around variable references (e.g. "$DBUSER/$DBPASSWORD#$DB" instead of just $DBUSER/$DBPASSWORD#$DB), except in here-documents; using $( ) instead of backticks; and using lower- or mixed-case variable names (there are a bunch of all-caps names with special meanings, and if you use one of those by accident, weird things can happen).
I'm not sure I have the here-document (the SQL commands) right, but here's roughly how I'd do it:
readarray -t result < <("$oracle_home/bin/sqlplus" -silent "$dbuser/$dbpassword#$db" << EOF
$sqlplusoptions $roam_query
exit;
EOF
)
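Once the array is populated, a minimal usage sketch to confirm the rows landed where you expect:
for i in "${!result[@]}"; do
    printf 'index[%d] = %s\n' "$i" "${result[$i]}"
done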
I'm looking for something similar to BLPOP, but instead of popping one element I want to get them all, iterating over them in a loop.
That is, I want to get all the records of a Redis list and then truncate it.
Consider using a Lua script to do the LRANGE+DEL atomically.
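For example, a minimal sketch (yourlist being the assumed list key, matching the RENAME example below):
$ redis-cli eval "local items = redis.call('lrange', KEYS[1], 0, -1) redis.call('del', KEYS[1]) return items" 1 yourlist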
Or use RENAME to move the list to a temporary key which you will use to process the data.
RENAME yourlist temp-list
LRANGE temp-list 0 -1
... process the list
DEL temp-list
I'm currently working on a Bash script that simulates rolling a number of 6-sided dice. This is all taking place within a virtual machine running Debian that's acting as a server. Essentially, my webpage simulates rolling the dice by using the query string to determine the number of dice to be rolled.
For instance, if my URL is http://127.0.0.1/cgi-bin/rolldice.sh?6, I want the webpage to say "You rolled 6 dice" and then, on the next line, print six numbers between 1 and 6 inclusive (that are of course "randomly" generated).
Currently, printing out the "You rolled x dice" header is working fine. However, I'm having trouble with the next part. I'm very new to Bash, so possibly the syntax or something similar is wrong with my loop. Here it is:
for i in {1..$QUERY_STRING }; do
dieRoll = $(( $RANDOM % 6 + 1))
echo $dieRoll
done
Can anyone help me figure out where I'm going wrong? I'll be happy to post the rest of rolldice.sh if needed.
Since the endpoints of a {1..N} brace expansion must be literals, you have to use eval to substitute the variable:
for i in $(eval "echo {1..$QUERY_STRING}"); do
Or if you have the seq command, you can do:
for i in $(seq 1 "$QUERY_STRING")
I recommend the latter: using eval with input from the user is very dangerous.
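Another option that avoids both eval and an external seq is bash's C-style arithmetic loop; a sketch that also validates the input first, since it comes straight from the URL:
# reject anything that is not a plain number before using it
if [[ "$QUERY_STRING" =~ ^[0-9]+$ ]]; then
    for (( i = 1; i <= QUERY_STRING; i++ )); do
        dieRoll=$(( RANDOM % 6 + 1 ))   # no spaces around = in assignments
        echo "$dieRoll"
    done
fi
Note the assignment: the original loop's dieRoll = $(( ... )) would also fail, because bash parses a name followed by spaces as a command.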
I have multiple checks for ten different web servers. One of these checks monitors the number of established connections (using netstat and findstr to filter for ESTABLISHED). It works as expected on servers WEB1 through WEB10. I can graph the established TCP connection count (using pnp4nagios) because the output is an integer. Above one threshold the check goes into warning status; above another it becomes critical.
The individual checks are working the way I want.
However, what I'm looking to do is add up all of these connections into one graph. This would be some sort of aggregate graph or SUM of all of the others.
Is there a way to take the values/outputs from other checks and add them into one?
Server TCP Connections
WEB1 223
WEB2 124
WEB3 412
WEB4 555
WEB5 412
WEB6 60
WEB7 0
WEB8 144
WEB9 234
WEB10 111
TOTAL 2275
I want to graph only the total.
Nagios itself does not use performance data in any way; it just takes it and passes it to whatever you specify in your config. So there's no good way to do this within Nagios (you could pipe Nagios's performance output to some tee command that passes it both to pnp4nagios and to a different script that sums everything up, but that would be horrible to maintain).
If I had your problem, I'd do the following:
At the end of your current plugin, do something like
echo $nconnections > /some/dir/connections.$NAGIOS_HOSTNAME
where nconnections is the number of connections the plugin found. This example is shell, replace if you use some different language for the plugin. The important thing is: it should be easy to write the number to a special file in the plugin.
Then, create a new plugin which has code similar to:
#!/bin/bash
WARN=1000
CRIT=2000
sumconn=$(cat /some/dir/connections.* | awk '{sum += $1} END {print sum}')
if [ "$sumconn" -ge "$CRIT" ]; then
    echo "Connection sum CRITICAL: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 2
elif [ "$sumconn" -ge "$WARN" ]; then
    echo "Connection sum WARNING: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 1
else
    echo "Connection sum OK: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 0
fi
That way, whenever you probe an individual server, you'll save the data for the new plugin; the plugin will just pick up the data that's there, which makes it extremely short. Of course, the output of the summary will lag behind a bit, but you can minimize that effect by setting the normal_check_interval of the individual services low enough.
If you want to get fancy, add code to remove files older than a certain threshold from the cache directory, so a stale count from a decommissioned server doesn't keep inflating the sum (see the sketch below). Or you could even remove the individual services from your Nagios configuration and call the individual-server plugin from the summation plugin for each server, if you're really uninterested in the per-server connection count.
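A one-liner along those lines, assuming GNU find, the cache directory from above, and a 15-minute threshold (tune it to your check interval):
find /some/dir -name 'connections.*' -mmin +15 -delete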
EDIT:
To solve the nrpe problem, create a check_nrpe_and_save plugin like this:
#!/bin/bash
# run the real check_nrpe, passing all arguments through
output=$("$NAGIOS_USER1"/check_nrpe "$@")
rc=$?
# pull the connection count out of the first line of the plugin output
nconnections=$(echo "$output" | head -1 | sed 's/.*you have \([0-9]*\) connections.*/\1/')
echo "$nconnections" > "/some/dir/connections.$NAGIOS_HOSTNAME"
echo "$output"
exit $rc
Create a new define command entry for this script, and use the new command in your service definitions. You'll have to adjust the sed pattern to match what your plugin outputs; if the number of connections isn't in your regular output, an expression like .*connections=\([0-9]*\);.* should work. check_nrpe_and_save should behave just like check_nrpe: in particular, it outputs the same string and returns the same exit code, while also writing to the special file.
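A sketch of what that command definition might look like (the path and arguments are assumptions to adapt to your setup):
define command {
    command_name    check_nrpe_and_save
    command_line    $USER1$/check_nrpe_and_save -H $HOSTADDRESS$ -c $ARG1$
}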