"summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)" in JMeter - performance-testing

I have installed JMeter on my new system, created a script, and am trying to execute it in non-GUI mode. But the script is not getting executed, and it gives the summary result below.
summary = 0 in 00:00:00 = ******/s
Avg: 0
Min: 9223372036854775807
Max: -9223372036854775808
Err: 0 (0.00%)
What could be the reason? How do I resolve this? Kindly guide me, as I come from a LoadRunner background.

It means that the script hasn't generated any results as it failed to execute any Samplers.
You can always look into the jmeter.log file; most probably it will contain enough information to get to the bottom of the issue. If not, you can increase JMeter log verbosity for the components you're using or for the whole JMeter engine.
The most common reasons are:
Missing file referenced in the CSV Data Set Config
Missing JMeter Plugin
The script version is not compatible with the JMeter version
So look into the jmeter.log file, and if you can't figure out what's wrong, post its contents here.
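For reference, a typical non-GUI invocation looks roughly like the sketch below; test.jmx and results.jtl are placeholder names, and the -L option overrides the log level for the root logger or for a specific category:
jmeter -n -t test.jmx -l results.jtl -j jmeter.log
jmeter -n -t test.jmx -l results.jtl -LDEBUG
jmeter -n -t test.jmx -l results.jtl -Ljmeter.engine=DEBUG
After the run, check the file passed via -j (jmeter.log by default) for errors and warnings.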

Related

What purpose does `TASK_REPORT_MAX` from `sched.h` serve on Linux?

I am using perf trace -bla-bla-bla -e sched:sched_switch on a 5.15.53 kernel; however, it looks like it is not configured to populate all the trace fields.
trace/events/sched.h does publish prev_state=%s%s; nevertheless, I cannot work out why exactly TASK_REPORT_MAX - 1 exists there. There is no documentation on this macro, nor is it clear to me from the source code what purpose it serves.
perf trace ... output shows something like:
0.000 Timer/19125 sched:sched_switch(prev_comm: "Timer", prev_pid: 19125 (Timer), prev_prio: 120, prev_state: 1, next_comm: "swapper/2", next_prio: 120)
Note the prev_state: 1. A value of 1 is definitely not what I would expect to see there based on the link above. Am I dealing with a misconfigured kernel?

PM2 Log Rotate Weekly Configuration

I'm confused by the pm2-logrotate configuration and need some help. I've searched for documentation and googled with no results. I just want to rotate the log every week.
I've tried using pm2 set pm2-logrotate:rotateInverval 0 0 * * 0 but the log file is generated daily.
I just don't understand the cron stuff and I need some explanation; can somebody explain it to me?
Thank you in advance.
Your cron expression seems fine, but there are some other settings associated with pm2-logrotate, such as max_size. The default maximum log size is 10 MB; if your log exceeds that, pm2 will rotate it regardless of the schedule. Say you want to change it to 10 GB: issue the command pm2 set pm2-logrotate:max_size 10G. You can specify the size as you wish: 10K, 10M, 10G. I faced a similar problem where the log got rotated 3-4 times a day instead of following the specified frequency.
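As a sketch of adjusting these settings (retain controls how many rotated files are kept; both are standard pm2-logrotate options, and the values here are only examples):
pm2 set pm2-logrotate:max_size 100M
pm2 set pm2-logrotate:retain 30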
Without being wrapped in quotation marks, it's likely that only the first 0 is being read as your interval. So instead of interpreting the interval as 0 0 * * 0, it is interpreted as just 0.
The following should do the trick:
pm2 set pm2-logrotate:rotateInverval "0 0 * * 0"
As for understanding the cron syntax, try pasting the values in here for an explanation: https://crontab.guru/#0_0___0
Your problem is caused by the fact that you spelled rotateInterval wrong (the question uses rotateInverval).
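Putting the two answers together (correct spelling plus quoting), the command would be:
pm2 set pm2-logrotate:rotateInterval "0 0 * * 0"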

Datastage: String to Timestamp with milliseconds conversion

I'm trying to insert a timestamp with milliseconds into a database. I tried the following steps but haven't had any luck.
Extended the field value to milliseconds, with length 26 and scale 3.
Used StringToTimestamp(timestampInString, "%yyyy-%mm-%dd %hh:%nn:%ss.3"), which resulted in a null value in the output.
Modified the default timestamp format in the job properties to %yyyy-%mm-%dd %hh:%nn:%ss.3
Design :
Sequential file --> TX --> destination (SQL/Seq file)
Could you please suggest a solution for this?
What you tried looks good so far with one exception:
length 26 and scale 3
Using Db2, for example, you would need to specify length 26 and precision 6.
26 and 3 do not fit, as
%yyyy-%mm-%dd %hh:%nn:%ss plus the decimal point already has a length of 20
Specifying Microseconds in the extended attribute is also necessary.
Give it a try, and provide more details about the target system if you still have problems.
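As an illustrative example, a value such as 2015-12-12 10:23:45.123456 occupies 26 characters: 20 for the date, time, and decimal point, plus 6 microsecond digits, which is why length 26 pairs with precision 6 rather than 3.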
Try the expression below in the Transformer stage and set the target data type as per the requirement; it will give the required result:
StringToTimestamp(Columnname,"%yyyy-%mm-%dd %hh:%nn:%ss.3")
The most important thing is to override the default NLS settings in the job with the expected format (%yyyy-%mm-%dd %hh:%nn:%ss.3), which you have already done.
Now, for the microseconds part that is not being displayed, you can enable the Microseconds extended attribute for the field in the target stage.
You can also refer to this link from IBM, which explains a similar scenario.

graylog2 not showing any data

I'm new to Graylog2. I'm using it to analyze logs stored in Elasticsearch.
I have done the setup successfully using this link http://www.richardyau.com/?p=377
But I parsed the logs into Elasticsearch under an index name of "xg-*". I am not sure why the same has not been replicated in Graylog2.
When I check the indices status in the Graylog2 web interface, it shows only the "graylog2_0" index, not my index.
Can someone please help me understand the reason behind this?
Elasticsearch indices details:
[root@xg bin]# curl http://localhost:9200/_cat/indices?pretty
green open graylog2_0 4 0 0 0 576b 576b
yellow open xg-2015.12.12 5 1 56 0 335.4kb 335.4kb
[root@xg bin]#
Graylog2 Web indices details:
Graylog doesn't support indexing schemes other than its own. If you want to use Graylog to analyze your data, you also have to ingest it through Graylog.
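As a minimal sketch of that approach: assuming you create a GELF HTTP input in Graylog listening on port 12201 (the input type and port are assumptions, not part of the original setup), you can send messages to it and they will be stored in Graylog's own indices:
curl -X POST http://localhost:12201/gelf -H 'Content-Type: application/json' -d '{"version":"1.1","host":"xg","short_message":"test message"}'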

Different results from spamassassin and spamc

I have installed, configured, and trained SpamAssassin, and all seemed to work just fine.
Then, when I tried to deploy it via spamc, I got only partial results.
Why is this happening?
I like spamc because I can get it to output just the report, but it seems to be missing checks: SPF, DKIM, BAYES.
I have not managed to figure it out or find any similar reports online.
This has been going on for days now and I am out of ideas.
spamassassin works:
# spamassassin -t < /path/to/spam.eml
Content analysis details: (3.3 points, 5.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
0.0 FSL_HELO_NON_FQDN_1 FSL_HELO_NON_FQDN_1
0.7 SPF_SOFTFAIL SPF: sender does not match SPF record (softfail)
0.8 BAYES_50 BODY: Bayes spam probability is 40 to 60%
[score: 0.5000]
0.5 MISSING_MID Missing Message-Id: header
0.0 HELO_NO_DOMAIN Relay reports its domain incorrectly
1.4 MISSING_DATE Missing Date: header
spamc gives only partial results:
# spamc -R < /path/to/spam.eml
Content analysis details: (1.5 points, 5.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
0.0 FSL_HELO_NON_FQDN_1 FSL_HELO_NON_FQDN_1
0.1 MISSING_MID Missing Message-Id: header
0.0 HELO_NO_DOMAIN Relay reports its domain incorrectly
1.4 MISSING_DATE Missing Date: header
I ran into the same problem.
Here is the answer to your question: http://spamassassin.apache.org/full/3.3.x/doc/Mail_SpamAssassin_Conf.html#filename
The Bayes databases are saved in the home directory of the user that runs SpamAssassin:
bayes_path /path/filename (default: ~/.spamassassin/bayes)
This is the directory and filename for Bayes databases. Several databases will be created, with this as the base directory and filename, with _toks, _seen, etc. appended to the base. The default setting results in files called ~/.spamassassin/bayes_seen, ~/.spamassassin/bayes_toks, etc.
By default, each user has their own in their ~/.spamassassin directory with mode 0700/0600. For system-wide SpamAssassin use, you may want to reduce disk space usage by sharing this across all users. However, Bayes appears to be more effective with individual user databases.
And here is the solution that worked for me:
According to this wiki: http://wiki.apache.org/spamassassin/SiteWideBayesSetup, I added the following two lines to /etc/mail/spamassassin/local.cf:
bayes_path /var/spamassassin/bayes_db/bayes
bayes_file_mode 0777
and I created the needed directory: /var/spamassassin/bayes_db/
Please note that the last "bayes" in the path is the prefix for the database files (bayes_journal, bayes_seen, etc.).
OK, after I restarted SpamAssassin, nothing happened. No Bayes test yet. Hmm...
So I copied the already-created databases from /root/.spamassassin/* to /var/spamassassin/bayes_db.
Update: It seems that I had to change the permissions of these four bayes_* files to 0666; otherwise the auto-learner will not save new data. I'm not happy with the 0666 permissions, but I hope to find a better solution soon.
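In command form, the steps above amount to roughly the following (a sketch using the paths from this post; adjust them to your setup):
mkdir -p /var/spamassassin/bayes_db
cp /root/.spamassassin/bayes_* /var/spamassassin/bayes_db/
chmod 0666 /var/spamassassin/bayes_db/bayes_*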
I ran another test in spamc and... I got the Bayes!! :)
Results for spamassassin:
# spamassassin -t -D spf,dkim < /path/to/spam.eml
Content analysis details: (8.2 points, 5.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
[score: 1.0000]
1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
[Blocked - see <http://www.spamcop.net/bl.shtml?141.146.5.61>]
1.0 DATE_IN_PAST_12_24 Date: is 12 to 24 hours before Received: date
-0.0 SPF_PASS SPF: sender matches SPF record
1.3 TRACKER_ID BODY: Incorporates a tracking ID number
0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
[score: 1.0000]
0.0 HTML_MESSAGE BODY: HTML included in message
0.8 RDNS_NONE Delivered to internal network by a host with no rDNS
Results for spamc:
# spamc -R < /path/to/spam.eml
Content analysis details: (8.2 points, 5.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
[Blocked - see <http://www.spamcop.net/bl.shtml?141.146.5.61>]
3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
[score: 1.0000]
1.0 DATE_IN_PAST_12_24 Date: is 12 to 24 hours before Received: date
-0.0 SPF_PASS SPF: sender matches SPF record
1.3 TRACKER_ID BODY: Incorporates a tracking ID number
0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
[score: 1.0000]
0.0 HTML_MESSAGE BODY: HTML included in message
0.8 RDNS_NONE Delivered to internal network by a host with no rDNS
If spamd is running under a dedicated user account, it will use the preferences found for that user, and you can additionally run into access-rights issues (e.g. that user is not allowed to read a site-wide Bayes database).
Options given to spamd can also affect behaviour (e.g. -L, which disables DNS and network tests).
How are you running spamd? You can also run spamd with -D and see if anything interesting pops up.
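For example, to see which user and options spamd was started with, and to restart it with debugging enabled (a sketch; adjust to how spamd is launched on your system):
ps aux | grep spamd
spamd -D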
