I have a MongoDB log file. I want to display all grep output where the duration is greater than or less than a given value, e.g. "protocol:op_msg 523ms".
sed -n '/2022-09-15T12:26/,/2022-09-15T14:03/p' mongod.log| grep "op_msg 523ms"
Output from the log file:
7391:2022-11-22T09:23:23.047-0500 I COMMAND [conn26] command test.test appName: "MongoDB Shell" command: find { find: "test", filter: { creationDate: new Date(1663252936409) }, lsid: { id: UUID("7c1bb40c-5e99-4281-9351-893e3d23261d") }, $clusterTime: { clusterTime: Timestamp(1669126970, 1), signature: { hash: BinData(0, B141BFD0978167F8C023DFB4AB32BBB117B3CD80), keyId: 7136078726260850692 } }, $db: "test" } planSummary: COLLSCAN keysExamined:0 docsExamined:337738 cursorExhausted:1 numYields:2640 nreturned:1 queryHash:6F9DC23E planCacheKey:6F9DC23E reslen:304 locks:{ ReplicationStateTransition: { acquireCount: { w: 2641 } }, Global: { acquireCount: { r: 2641 } }, Database: { acquireCount: { r: 2641 } }, Collection: { acquireCount: { r: 2641 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 28615999, timeReadingMicros: 288402 } } protocol:op_msg 523ms
I have tried the command above, but it only matches that exact value. I need to find all queries in the log file that took longer than 100 ms.
Option 1) You can use a grep/bash one-liner like this to filter MongoDB queries within a specific execution-time range:
grep -P '\d+ms$' /log/mongodb/mongos.log | while read -r LINE; do querytime="$(echo "$LINE" | grep -oP '\d+ms$' | grep -oP '\d+')"; if [ "$querytime" -gt 6000 ] && [ "$querytime" -lt 7000 ]; then echo "$LINE"; fi; done
Explained:
Filter all log lines ending in a number followed by "ms".
Loop over those lines, extracting the execution time into the querytime variable.
Check whether $querytime is between 6000 ms and 7000 ms.
If $querytime is in the specified range, print the current $LINE.
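For the original requirement (every query slower than 100 ms), the same loop needs only a lower bound; a minimal sketch, assuming the same log path as above:
grep -P '\d+ms$' /log/mongodb/mongos.log | while read -r LINE; do
  # extract the trailing duration, digits only
  querytime="$(echo "$LINE" | grep -oP '\d+ms$' | grep -oP '\d+')"
  if [ "$querytime" -gt 100 ]; then
    echo "$LINE"
  fi
done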
Option 2) You can use the --slow MS and --fast MS options of mlogfilter from the mtools package, where you can do something like:
mlogfilter mongod.log --slow 7000 --fast 6000
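For the original requirement, a lower bound alone should do; assuming --slow takes the threshold in milliseconds as above:
mlogfilter mongod.log --slow 100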
If all of the lines of interest have protocol:op_msg as the penultimate column, this becomes pretty trivial:
awk '$(NF-1) ~ /protocol:op_msg/ && $NF+0 > 100 { print }' mongod.log
Note that this completely ignores the units when doing the comparison and will also print lines in which the final column is 250h or 250ns, but that's easy enough to filter for. With a more accurate problem description, a more precise solution is certainly available.
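If you want the units respected, a slightly stricter sketch (POSIX awk; it assumes the duration is the last field and always ends in "ms"):
awk '/protocol:op_msg [0-9]+ms$/ { d = $NF; sub(/ms$/, "", d); if (d + 0 > 100) print }' mongod.log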
I am trying to run the following command with the Ansible shell module. This same command works fine in a remote shell but I get the following error in Ansible:
- name: "Searching coincidences for literal stratum 70 in /var/log/required.log and /var/log/required.1"
  shell: "ls /var/log/required.log?(.1) | xargs grep -i 'stratum 70' | wc -l"
  args:
    executable: /bin/bash
This is the error message with ansible -vvvv:
fatal: [node_test]: FAILED! => {
    "changed": true,
    "cmd": "ls /var/log/required.log?(.1) | xargs grep -i 'stratum 70' |wc -l",
    "delta": "0:00:00.005635",
    "end": "2021-07-12 17:50:52.370057",
    "invocation": {
        "module_args": {
            "_raw_params": "ls /var/log/required.log?(.1) | xargs grep -i 'stratum 70' |wc -l",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 2,
    "start": "2021-07-12 17:50:52.364422",
    "stderr": "/bin/sh: 1: Syntax error: \"(\" unexpected",
    "stderr_lines": [
        "/bin/sh: 1: Syntax error: \"(\" unexpected"
    ],
    "stdout": "",
    "stdout_lines": []
}
It seems that the problem is with the extended glob ?(.1), but I don't know how to escape it properly.
Extended globbing must be enabled inside the shell command itself, before the line that uses the pattern is parsed.
For that I tend to prefer a block scalar.
- name: "Searching coincidences for literal stratum 70 in /var/log/required.log and /var/log/required.1"
  shell: |
    shopt -s extglob
    grep -i 'stratum 70' /var/log/required.log?(.1) | wc -l
  args:
    executable: /bin/bash
I don't think you really need the block scalar, though, as long as shopt and the glob are on separate lines (bash parses a complete line before executing it, so the pattern must not sit on the same line as the shopt that enables it):
- name: "Searching coincidences for literal stratum 70 in /var/log/required.log and /var/log/required.1"
  shell: "shopt -s extglob\ngrep -i 'stratum 70' /var/log/required.log?(.1) | wc -l"
  args:
    executable: /bin/bash
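You can reproduce the parse-order behavior outside Ansible; a quick local check (the $'...' quoting embeds a real newline):
# fails: the whole line is parsed before shopt runs
bash -c 'shopt -s extglob; ls /var/log/required.log?(.1)'
# works: the pattern is parsed on a later line, after shopt has run
bash -c $'shopt -s extglob\nls /var/log/required.log?(.1)'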
Was there some reason you needed the ls?
I want to create a script that checks the following conditions:
The port string should be present in the file, and if so, its value should be 20000.
The file should have a mention of sslKey, sslCert, and ssl_cipher (the values against these strings/keys can be anything).
This was my attempt:
$ awk '/port|sslKey|sslCert|ssl_cipher/ {print $1,$2}' pkg.conf
port 20000
sslKey /usr/product/plat/etc/ssl/server.pem
sslCert /usr/product/plat/etc/ssl/server.cert
ssl_cipher ECDH+AES128:ECDH+AESGCM:ECDH+AES256:DH+AES:DH+AESGCM:DH+AES256:RSA+AES:RSA+AESGCM:!aNULL:!RC4:!MD5:!DSS:!3DES
The problem with the above command is that it runs successfully even if one of the strings 'port|sslKey|sslCert|ssl_cipher' is missing.
Could this be achieved using only a few lines of awk?
If any string/condition is missing, the output should display that condition, as well as the conditions that are met.
Considering the details in your question, here are example inputs:
Correct:
$ cat pkg.conf
port 20000
sslKey /usr/product/plat/etc/ssl/server.pem
sslCert /usr/product/plat/etc/ssl/server.cert
ssl_cipher ECDH+AES128:ECDH+AESGCM:ECDH+AES256:DH+AES:DH+AESGCM:DH+AES256:RSA+AES:RSA+AESGCM:!aNULL:!RC4:!MD5:!DSS:!3DES
Wrong (port 18000 and missing sslCert):
$ cat pkg_wrong.conf
port 18000
sslKey /usr/product/plat/etc/ssl/server.pem
ssl_cipher ECDH+AES128:ECDH+AESGCM:ECDH+AES256:DH+AES:DH+AESGCM:DH+AES256:RSA+AES:RSA+AESGCM:!aNULL:!RC4:!MD5:!DSS:!3DES
AWK solution:
Correct pkg.conf. Returns 0 and outputs nothing:
$ awk '
/^port [0-9]+$/ { if ( $2 == 20000 ) isport=1; port=$2; }
/^sslKey .+$/ { issslKey=1; sslKey=$2; }
/^sslCert .+$/ { issslCert=1; sslCert=$2; }
/^ssl_cipher .+$/ { isssl_cipher=1; ssl_cipher=$2; }
END { if (isport && issslKey && issslCert && isssl_cipher) exit(0);
else { print("port " port); print("sslKey " sslKey); print("sslCert " sslCert); print("ssl_cipher " ssl_cipher); exit(1); }
}
' pkg.conf
Wrong pkg_wrong.conf (port 18000 and missing sslCert). Running the same script against pkg_wrong.conf returns 1 and prints the state of every check:
port 18000
sslKey /usr/product/plat/etc/ssl/server.pem
sslCert
ssl_cipher ECDH+AES128:ECDH+AESGCM:ECDH+AES256:DH+AES:DH+AESGCM:DH+AES256:RSA+AES:RSA+AESGCM:!aNULL:!RC4:!MD5:!DSS:!3DES
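A variant of the same idea (a sketch, under the same file-format assumptions) that labels every check instead of just echoing the collected values:
$ awk '
/^port [0-9]+$/   { port = $2 }
/^sslKey .+$/     { sslKey = $2 }
/^sslCert .+$/    { sslCert = $2 }
/^ssl_cipher .+$/ { ssl_cipher = $2 }
END {
  ok = 1
  if (port == 20000)    print "port: OK";       else { print "port: FAIL (" port ")"; ok = 0 }
  if (sslKey != "")     print "sslKey: OK";     else { print "sslKey: MISSING"; ok = 0 }
  if (sslCert != "")    print "sslCert: OK";    else { print "sslCert: MISSING"; ok = 0 }
  if (ssl_cipher != "") print "ssl_cipher: OK"; else { print "ssl_cipher: MISSING"; ok = 0 }
  exit(ok ? 0 : 1)
}
' pkg_wrong.conf
port: FAIL (18000)
sslKey: OK
sslCert: MISSING
ssl_cipher: OK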
I'm looking to make the following code work; it seems that if I do not test for the files/folders first, I end up with this error:
Error: Failed to apply catalog: Parameter path failed on
File[/opt/dynatrace-6.2]: File paths must be fully qualified, not
'["/opt/dynatrace-6.2", "/opt/dynatrace-5.6.0",
"/opt/rh/httpd24/root/etc/httpd/conf.d/dtload.conf",
"/opt/rh/httpd24/root/etc/httpd/conf.d/01_dtagent.conf"]' at
newrelic.pp:35
The pertinent parts:
$dtdeps = [
  "/opt/dynatrace-6.2",
  "/opt/dynatrace-5.6.0",
  "${httpd_root}/conf.d/dtload.conf",
  "${httpd_root}/conf.d/01_dtagent.conf",
]
exec { "check_presence":
  require => File[$dtdeps],
  command => '/bin/true',
  onlyif  => "/usr/bin/test -e $dtdeps",
}
file { $dtdeps:
  require => Exec["check_presence"],
  path    => $dtdeps,
  ensure  => absent,
  recurse => true,
  purge   => true,
  force   => true,
} ## this is line 35 btw
exec { "stop_dt_agent":
  command  => "PID=$(ps ax |grep dtwsagent |grep -v grep |awk '{print$1}') ; [ ! -z $PID ] && kill -9 $PID",
  provider => shell,
}
service { "httpd_restart":
  ensure    => running,
  enable    => true,
  restart   => "/usr/sbin/apachectl configtest && /etc/init.d/httpd reload",
  subscribe => Package["httpd"],
}
Your code looks basically correct, but you went overboard with your file resources:
file { $dtdeps:
  require => Exec["check_presence"],
  path    => $dtdeps,
  ...
This does create all the file resources from your array (since you use an array for the resource title) but each single one of them will then try to use the same array as the path value, which does not make sense.
TL;DR remove the path parameter and it should Just Work.
You can actually simplify this down a lot. Puppet only removes the files if they exist, so the check_presence exec is not required.
You can't give path an array, but you can pass the title as an array, and the paths are then derived automatically.
$dtdeps = [
  "/opt/dynatrace-6.2",
  "/opt/dynatrace-5.6.0",
  "${httpd_root}/conf.d/dtload.conf",
  "${httpd_root}/conf.d/01_dtagent.conf",
]
file { $dtdeps:
  ensure  => absent,
  recurse => true,
  purge   => true,
  force   => true,
}
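A quick sanity check of the simplified manifest before rolling it out (using the file named in the error above):
puppet apply --noop newrelic.pp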
exec { "stop_dt_agent":
command => '[ ! -z $PID ] && kill -9 $PID',
environment => ["PID=\$(ps ax |grep dtwsagent |grep -v grep |awk '{print$1}'))"],
provider => shell,
}
However, running the stop_dt_agent exec is a bit fragile. You could probably refactor this into a service resource instead:
service { 'dynatrace':
  ensure   => stopped,
  provider => 'base',
  stop     => "kill -TERM \$(ps ax | grep dtwsagent | grep -v grep | awk '{print \$1}')",
  status   => "ps ax | grep dtwsagent | grep -v grep",
}
Background Information
I've been following this tutorial:
http://adrianmejia.com/blog/2014/10/01/creating-a-restful-api-tutorial-with-nodejs-and-mongodb/#mongoose-read-and-query
I have a MongoDB database called test and it has the following collections:
> show collections
chassis
ports
customers
locations
system.indexes
>
Symptoms
When I try to query for any document inside the chassis collection, it keeps returning null even though many records exist.
dev@devbox:~/nimble_express$ curl localhost:3000/chassis/55a7cc4193819c033d4d75c9
nulldev@devbox:~/nimble_express$
Problem
After trying many different things, I discovered the following issue in the MongoDB logs (I turned on verbose logging).
In the following log entry, notice the reference to "test.chasses" (which is a typo; it should be "chassis"):
2015-07-29T14:42:25.554-0500 I QUERY [conn141] query test.chasses query: { _id: ObjectId('55a7cc4193819c033d4d75c9') } planSummary: EOF ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 0ms
I've grepped to make sure I don't have this typo anywhere in my code using the following command:
dev@devbox:~/nimble_express/nimbleApp$ grep -ris 'chasses' .
dev@devbox:~/nimble_express/nimbleApp$
I'm not sure where it's getting this collection name from.
Other queries against other collections work just fine. For example, I have a collection called "ports", and I pretty much copied and pasted all the logic I have for chassis, and it works just fine.
Here's the proof from the logs:
2015-07-29T14:58:15.127-0500 I QUERY [conn160] query test.ports planSummary: COLLSCAN cursorid:68808242412 ntoreturn:1000 ntoskip:0 nscanned:0 nscannedObjects:1000 keyUpdates:0 writeConflicts:0 numYields:7 nreturned:1000 reslen:188922 locks:{ Global: { acquireCount: { r: 16 } }, MMAPV1Journal: { acquireCount: { r: 8 } }, Database: { acquireCount: { r: 8 } }, Collection: { acquireCount: { R: 8 } } } 0ms
Any suggestions? I'm sure I have a typo somewhere... but I can't find it. All my code is within the nimble_express directory tree.
I copied the 'chassis' collection in my MongoDB database to 'tempCollection' like this:
> db.createCollection('tempCollection')
{ "ok" : 1 }
> db.chassis.copyTo('tempCollection');
WARNING: db.eval is deprecated
57
> exit
bye
Then I created my schema and a route for this collection.
When I attempted to do a curl request for localhost:3000/tempCollection, I noticed that in the logs, the name of the collection was wrong again.
2015-07-29T15:13:19.661-0500 I QUERY [conn168] query test.tempcollections planSummary: EOF ntoreturn:1000 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 0ms
And that's when it dawned on me. Something somewhere was pluralizing the collection names! So I googled and found this post:
Is there a way to prevent MongoDB adding plural form to collection names?
So the solution for me was to explicitly define the collection name like so:
module.exports = mongoose.model('chassis', ChassisSchema, 'chassis');
inside the model/Chassis.js file
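For context, a minimal sketch of what model/Chassis.js might look like with that fix; the schema fields here are hypothetical:
// model/Chassis.js -- sketch; schema fields are hypothetical
var mongoose = require('mongoose');

var ChassisSchema = new mongoose.Schema({
  name: String,
  creationDate: Date
});

// The third argument pins the collection name, preventing
// Mongoose from pluralizing 'chassis' into 'chasses'.
module.exports = mongoose.model('chassis', ChassisSchema, 'chassis');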
Instead of marking this as a duplicate, I think I should leave this question as is for those who think they have a problem with collection names. For noobs like me, you assume that you are doing something wrong vs. the system performing some automagic for you! But I'm happy to do whatever the community suggests!
We can close this off as a duplicate. Or leave as is.
I am using Nagios to monitor Gearman and I am getting the error "CRITICAL - function 'xxx' is not registered in the server".
The script Nagios executes to check Gearman is:
#!/usr/bin/env perl
# taken from: gearmand-0.24/libgearman-server/server.c:974
# function->function_name, function->job_total,
# function->job_running, function->worker_count);
#
# this code give following result with gearadmin --status
#
# FunctionName job_total job_running worker_count
# AdsUpdateCountersFunction 0 0 4
use strict;
use warnings;
use Nagios::Plugin;
my $VERSION="0.2.1";
my $np;
$np = Nagios::Plugin->new(usage => "Usage: %s -f|--flist <func1[:threshold1],..,funcN[:thresholdN]> [--host|-H <host>] [--port|-p <port>] [ -c|--critworkers=<threshold> ] [ -w|--warnworkers=<threshold>] [-?|--usage] [-V|--version] [-h|--help] [-v|--verbose] [-t|--timeout=<timeout>]",
version => $VERSION,
blurb => 'This plugin checks a gearman job server, expecting that every function in function-list arg is registered by at least one worker, and expecting that job_total is not too much high.',
license => "Brought to you AS IS, WITHOUT WARRANTY, under GPL. (C) Remi Paulmier <remi.paulmier\#gmail.com>",
shortname => "CHECK_GEARMAN",
);
$np->add_arg(spec => 'flist|f=s',
help => q(Check for the functions listed in STRING, separated by comma. If optional threshold is given (separated by :), check that waiting jobs for this particular function are not exceeding that value),
required => 1,
);
$np->add_arg(spec => 'host|H=s',
help => q(Check the host indicated in STRING),
required => 0,
default => 'localhost',
);
$np->add_arg(spec => 'port|p=i',
help => q(Use the TCP port indicated in INTEGER),
required => 0,
default => 4730,
);
$np->add_arg(spec => 'critworkers|c=i',
help => q(Exit with CRITICAL status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 1,
);
$np->add_arg(spec => 'warnworkers|w=i',
help => q(Exit with WARNING status if fewer than INTEGER workers have registered a particular function),
required => 0,
default => 4,
);
$np->getopts;
my $ng = $np->opts;
# manage timeout
alarm $ng->timeout;
my $runtime = {'status' => OK,
'message' => "Everything OK",
};
# host & port
my $host = $ng->get('host');
my $port = $ng->get('port');
# verbosity
my $verbose = $ng->get('verbose');

# look for gearadmin, use nc if not found
my @paths = grep { -x "$_/gearadmin" } split /:/, $ENV{PATH};
my $cmd = "gearadmin --status -h $host -p $port";
if (@paths == 0) {
    print STDERR "gearadmin not found, using nc\n" if ($verbose != 0);
    # $cmd = "echo status | nc -w 1 $host $port";
    $cmd = "echo status | nc -i 1 -w 1 $host $port";
}
foreach (`$cmd 2>/dev/null | grep -v '^\\.'`) {
chomp;
my ($fname, $job_total, $job_running, $worker_count) =
split /[[:space:]]+/;
$runtime->{'funcs'}{"$fname"} = {job_total => $job_total,
job_running => $job_running,
worker_count => $worker_count };
# print "$fname : $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
}
# get function list
my @flist = split /,/, $ng->get('flist');
foreach (@flist) {
my ($fname, $fthreshold);
if (/\:/) {
($fname, $fthreshold) = split /:/;
} else {
($fname, $fthreshold) = ($_, -1);
}
# print "defined for $fname: $runtime->{'funcs'}{\"$fname\"}{'worker_count'}\n";
# if (defined($runtime->{'funcs'}{"$fname"})) {
# print "$fname is defined\n";
# } else {
# print "$fname is NOT defined\n";
# }
if (!defined($runtime->{'funcs'}{"$fname"}) &&
$runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL, "function '$fname' is not registered in the server");
} else {
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('critworkers') && $runtime->{'status'} <= CRITICAL) {
($runtime->{'status'}, $runtime->{'message'}) =
(CRITICAL,
"less than " .$ng->get('critworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'worker_count'} <
$ng->get('warnworkers') && $runtime->{'status'} <= WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
"less than " .$ng->get('warnworkers').
" workers were found having function '$fname' registered.");
}
if ($runtime->{'funcs'}{"$fname"}{'job_total'} > $fthreshold
&& $fthreshold != -1 && $runtime->{'status'}<=WARNING) {
($runtime->{'status'}, $runtime->{'message'}) =
(WARNING,
$runtime->{'funcs'}{"$fname"}{'job_total'}.
" jobs for $fname exceeds threshold $fthreshold");
}
}
}
$np->nagios_exit($runtime->{'status'}, $runtime->{'message'});
When the script is executed directly from the command line, it says everything is OK, but when run by Nagios it shows the error "CRITICAL - function 'xxx' is not registered in the server".
Thanks in advance
After spending a long time on this, I finally got the answer. All you have to do is:
yum install nc
nc was what was missing from the system.
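You can verify what the plugin actually runs by issuing the same status probe by hand; a quick check, assuming the default port 4730 from the script:
# preferred path, when gearadmin is installed:
gearadmin --status -h localhost -p 4730
# fallback used by the plugin when gearadmin is absent:
echo status | nc -i 1 -w 1 localhost 4730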
It's not easy to say, but it could be related to your script not being run correctly by the embedded Perl interpreter (ePN).
Try adding # nagios: -epn at the beginning of the script:
#!/usr/bin/env perl
# nagios: -epn
use strict;
use warnings;
Be sure to check all the hints in the Perl Plugins section of the Nagios Plugin Development Guidelines.