JMeter running from the Linux command line not returning correctly

I am trying to run a fairly large monitoring test in JMeter on Linux.
The .jmx file I am using runs fine and keeps the test going in an infinite loop when run in the JMeter GUI.
To run it on Linux I am using:
sh jmeter.sh -n -t MasterMonitorNew.jmx -l log.jtl
but the response I get from the test is:
Creating summariser <summary>
Created the tree successfully using MasterMonitorNew.jmx
Starting the test @ Sun Feb 08 18:49:14 EST 2015 (1423439354371)
Waiting for possible shutdown message on port 4445
summary = 0 in 0s = ******/s Avg: 0 Min: 0 Max: 0 Err: 0 (0.00%)
Tidying up ... @ Sun Feb 08 18:49:14 EST 2015 (1423439354658)
... end of run
The entire test takes less than 2 seconds before it shuts itself off. Any help would be much appreciated.
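"summary = 0 in 0s" means no sampler ever ran, which usually points at a problem loading or starting the test plan rather than at the samplers themselves. A hedged first step is to keep JMeter's run log (the -j flag selects it; the log content below is fabricated purely for illustration) and scan it for errors:

```shell
# Real invocation would be (paths as in the question):
#   sh jmeter.sh -n -t MasterMonitorNew.jmx -l log.jtl -j jmeter.log
# Illustrative scan over a made-up jmeter.log:
cat > jmeter.log <<'EOF'
2015/02/08 18:49:14 INFO  - jmeter.JMeter: Loading file: MasterMonitorNew.jmx
2015/02/08 18:49:14 ERROR - jmeter.JMeter: Error in NonGUIDriver
EOF
grep -iE 'error|exception|warn' jmeter.log
```

Disabled thread groups, missing plugin JARs, or GUI-only test elements are common reasons a plan that loops forever in the GUI does nothing in non-GUI mode.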


Issue with watchdog ping (watchdog: error opening socket (Operation not permitted))

I have an issue with my Pi 4 where the networking seems to crash at some point. The Pi still runs, but the network is unreachable. I tried setting the ping command in my watchdog.conf, but I am getting the error: watchdog: error opening socket (Operation not permitted)
Hardware: Pi 4 8GB
OS: Raspberry Pi OS Lite (64-bit)
Watchdog version: 5.16
My watchdog.conf:
# ====================================================================
# Configuration for the watchdog daemon. For more information on the
# parameters in this file use the command 'man watchdog.conf'
# ====================================================================
# =================== The hardware timer settings ====================
#
# For this daemon to be effective it really needs some hardware timer
# to back up any reboot actions. If you have a server then see if it
# has IPMI support. Otherwise for Intel-based machines try the iTCO_wdt
# module, otherwise (or if that fails) then see if any of the following
# module load and work:
#
# it87_wdt it8712f_wdt w83627hf_wdt w83877f_wdt w83977f_wdt
#
# If all else fails then 'softdog' is better than no timer at all!
# Or work your way through the modules listed under:
#
# /lib/modules/`uname -r`/kernel/drivers/watchdog/
#
# To see if they load, present /dev/watchdog, and are capable of
# resetting the system on time-out.
# Uncomment this to use the watchdog device driver access "file".
#verbose=yes
watchdog-device = /dev/watchdog
# Uncomment and edit this line for hardware timeout values that differ
# from the default of one minute.
watchdog-timeout = 15
# If your watchdog trips by itself when the first timeout interval
# elapses then try uncommenting the line below and changing the
# value to 'yes'.
#watchdog-refresh-use-settimeout = auto
# If you have a buggy watchdog device (e.g. some IPMI implementations)
# try uncommenting this line and setting it to 'yes'.
#watchdog-refresh-ignore-errors = no
# ====================== Other system settings ========================
#
# Interval between tests. Should be a couple of seconds shorter than
# the hardware time-out value.
#interval = 1
# The number of intervals skipped before a log message is written (i.e.
# a multiplier for 'interval' in terms of syslog messages)
#logtick = 1
# Directory for log files (probably best not to change this)
log-dir = /var/log/watchdog
# Email address for sending the reboot reason. This needs sendmail to
# be installed and properly configured. Maybe you should just enable
# syslog forwarding instead?
#admin = root
# Lock the daemon in to memory as a real-time process. This greatly
# decreases the chance that watchdog won't be scheduled before your
# machine is really loaded.
realtime = yes
priority = 1
# ====================== How to handle errors =======================
#
# If you have a custom binary/script to handle errors then uncomment
# this line and provide the path. For 'v1' test binary files they also
# handle error cases.
#repair-binary = /usr/sbin/repair
#repair-timeout = 60
# The retry-timeout and repair limit are used to handle errors in a
# more robust manner. Errors must persist for longer than this to
# action a repair or reboot, and if repair-maximum attempts are
# made without the test passing a reboot is initiated anyway.
#retry-timeout = 60
#repair-maximum = 1
# Configure the delay on reboot from sending SIGTERM to all processes
# and to following up with SIGKILL for any that are ignoring the polite
# request to stop.
#sigterm-delay = 5
# ====================== User-specified tests ========================
#
# Specify the directory for auto-added 'v1' test programs (any executable
# found in the 'test-directory' should be listed).
#test-directory = /etc/watchdog.d
# Specify any v0 custom tests here. Multiple lines are permitted, but
# having any 'v1' programs/scripts discovered in the 'test-directory' is
# the better way.
#test-binary =
# Specify the time-out value for a test error to be reported.
#test-timeout = 60
# ====================== Typical tests ===============================
#
# Specify any IPv4 numeric addresses to be probed.
# NOTE: You should check you have permission to ping any machine before
# using it as a test. Also remember if the target goes down then this
# machine will reboot as a result!
#ping = 192.168.1.1
# Set the number of ping attempts in each 'interval' of time. Default
# is 3 and it completes on the first successful ping.
# NOTE: Round-trip delay has to be less than 'interval' / 'ping-count'
# for test success, but this is unlikely to be exceeded except possibly
# on satellite links (very unlikely case!).
# Specify any network interface to be checked for activity.
interface = eth0
# Specify any files to be checked for presence, and if desired, checked
# that they have been updated more recently than 'change' seconds.
#file = /var/log/syslog
#change = 1407
# Uncomment to enable load average tests for 1, 5 and 15 minute
# averages. Setting one of these values to '0' disables it. These
# values will hopefully never reboot your machine during normal use
# (if your machine is really hung, the loadavg will go much higher
# than 25 in most cases).
max-load-1 = 24
#max-load-5 = 18
#max-load-15 = 12
# Check available memory on the machine.
#
# The min-memory check is a passive test from reading the file
# /proc/meminfo and computed from MemFree + Buffers + Cached
# If this is below a few tens of MB you are likely to have problems.
#
# The allocatable-memory is an active test checking it can be paged
# in to use.
#
# Maximum swap should be based on normal use, probably a large part of
# available swap but paging 1GB of swap can take tens of seconds.
#
# NOTE: This is the number of pages, to get the real size, check how
# large the pagesize is on your machine (typically 4kB for x86 hardware).
#min-memory = 1
#allocatable-memory = 1
#max-swap = 0
# Check for over-temperature. Typically the temperature-sensor is a
# 'virtual file' under /sys and it contains the temperature in
# milli-Celsius. Usually these are generated by the 'sensors' package,
# but take care as device enumeration may not be fixed.
#temperature-sensor =
#max-temperature = 90
# Check for a running process/daemon by its PID file. For example,
# check if rsyslogd is still running by enabling the following line:
#pidfile = /var/run/rsyslogd.pid
Checking the status of the service shows it running fine:
pi@raspberrypi:~ $ sudo service watchdog status
● watchdog.service - watchdog daemon
Loaded: loaded (/lib/systemd/system/watchdog.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-04-12 08:01:53 BST; 2min 17s ago
Process: 2120 ExecStartPre=/bin/sh -c [ -z "${watchdog_module}" ] || [ "${watchdog_module}" = "none" ] || /sbin/modprobe $watchdog_module (code=exited, status=0/SUCCESS)
Process: 2121 ExecStart=/bin/sh -c [ $run_watchdog != 1 ] || exec /usr/sbin/watchdog $watchdog_options (code=exited, status=0/SUCCESS)
Main PID: 2126 (watchdog)
Tasks: 1 (limit: 8986)
CPU: 59ms
CGroup: /system.slice/watchdog.service
└─2126 /usr/sbin/watchdog
Apr 12 08:01:53 raspberrypi watchdog[2126]: interface: eth0
Apr 12 08:01:53 raspberrypi watchdog[2126]: temperature: no sensors to check
Apr 12 08:01:53 raspberrypi watchdog[2126]: no test binary files
Apr 12 08:01:53 raspberrypi watchdog[2126]: no repair binary files
Apr 12 08:01:53 raspberrypi watchdog[2126]: error retry time-out = 60 seconds
Apr 12 08:01:53 raspberrypi watchdog[2126]: repair attempts = 1
Apr 12 08:01:53 raspberrypi watchdog[2126]: alive=/dev/watchdog heartbeat=[none] to=root no_act=no force=no
Apr 12 08:01:53 raspberrypi watchdog[2126]: watchdog now set to 15 seconds
Apr 12 08:01:53 raspberrypi watchdog[2126]: hardware watchdog identity: Broadcom BCM2835 Watchdog timer
Apr 12 08:01:53 raspberrypi systemd[1]: Started watchdog daemon.
However, when I uncomment the ping line in the conf file (ping = 192.168.1.1), I get the following error running watchdog -v:
watchdog -v
watchdog: String 'watchdog-device' found as '/dev/watchdog'
watchdog: Integer 'watchdog-timeout' found = 15
watchdog: String 'log-dir' found as '/var/log/watchdog'
watchdog: Variable 'realtime' found as 'yes' = 1
watchdog: Integer 'priority' found = 1
watchdog: List 'ping' added as '192.168.1.1'
watchdog: List 'interface' added as 'eth0'
watchdog: Integer 'max-load-1' found = 24
watchdog: error opening socket (Operation not permitted)
This seems to indicate that it is not permitted to do the ping test.
I googled the issue and found nothing like this anywhere, but I did try the solutions in these articles; none of them worked:
https://discuss.linuxcontainers.org/t/even-with-root-user-im-receiving-operation-not-permitted-when-try-creating-gluster-volume-between-ubuntu-14-04-lxc-containers/2699
https://superuser.com/questions/288521/problem-with-ping-open-socket-operation-not-permitted
Anyone have any ideas?
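For what it's worth, the ping test opens a raw ICMP socket, and raw sockets require root or the CAP_NET_RAW capability; "Operation not permitted" is the typical symptom when that capability is missing. A sketch of things to check (binary path and unit name assume a stock Debian-style install, and the checks are guarded so they are skipped where the tools are absent):

```shell
# Does the binary carry file capabilities?
if command -v getcap >/dev/null 2>&1; then
    getcap /usr/sbin/watchdog || true
fi
# Does the service's bounding set still include cap_net_raw?
if command -v systemctl >/dev/null 2>&1; then
    systemctl show watchdog -p CapabilityBoundingSet || true
fi
# One possible (hypothetical) fix if the capability is missing:
#   sudo setcap cap_net_raw+ep /usr/sbin/watchdog
echo "capability checks attempted" > capcheck.txt
```

If the systemd unit restricts capabilities, adding cap_net_raw back via the unit file is an alternative to setcap; which applies depends on how the daemon is launched on this install.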

Ubuntu: Pipe raw unbuffered data to TCP port

My overall goal:
I have a hardware device that streams sensor data to a Ubuntu laptop running a Python script. Data comes in chunks of 240 samples (one per line with \n) every 2 seconds and prints to stdout. I start the Python script on the Ubuntu laptop and pipe its output to a TCP port using netcat. I connect to that TCP port from any other device on the network and get the live data stream - without first loading all previous samples.
My Setup:
Two laptops.
1: Ubuntu, collects readings from a sensor and pipes those readings to TCP port 1234. (This is working.)
$ py read_sensors.py | nc -lk 1234
2: Windows 10, has WSL, Python, and existing scripts for processing data streamed from the first laptop. (This is working in WSL.)
$ nc 10.10.10.01 1234
My Problem:
I begin streaming sensor data on the Ubuntu laptop.
10 min later I connect to that stream from my windows laptop...
I expect to receive the most recent sample at the time the connection was established, and all subsequent samples in (pseudo) real-time.
Instead, as soon as I connect I am flooded with all samples collected since I began the streaming pipeline on the Ubuntu laptop, and once it catches up, I start seeing real-time data.
I have tried: Searching led me to try stdbuf. Lack of results led me to try various combinations of
$ stdbuf -oL py read_sensors.py | nc -lk 1234
$ py read_sensors.py | stdbuf -oL nc -lk 1234
but every time I wait a little and then connect to the port from my Windows laptop, it loads all samples from the time I started streaming on the Ubuntu laptop.
I assume: this is a buffering issue and it will have to be fixed on the Ubuntu machine - but the various combinations of stdbuf have had no effect on the behavior of the system. So, I turn to the SO gods for insight and grace :)
-Nick
Something like this might meet the overall goal. Based on https://docs.python.org/3/library/socketserver.html#socketserver-tcpserver-example
import socketserver

class MyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = read_sensor_data()  # call the code that reads the sensors; must return bytes
        self.request.sendall(data)

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 1234), MyHandler) as server:
        server.serve_forever()
Disable netcat buffering:
Force netcat to send messages immediately (without buffering).
Alternatively, I believe that if you use bash's built-in TCP connections and bypass netcat, it will work. E.g. read_sensors.py > /dev/tcp/10.10.10.1/1234
EDIT: Added sample code that shows how to send and receive.
Example code:
To send:
#!/bin/bash
while true
do
date > /dev/tcp/localhost/1234 || true # replace date command with read_sensors.py
sleep 1
done
to receive:
ubuntu@ubuntu:~$ nc -lk 1234
Output:
Tue Mar 2 20:40:24 UTC 2021
Tue Mar 2 20:40:25 UTC 2021
Tue Mar 2 20:40:26 UTC 2021
Tue Mar 2 20:40:27 UTC 2021
^C
ubuntu@ubuntu:~$ nc -lk 1234
Tue Mar 2 20:40:40 UTC 2021
Tue Mar 2 20:40:41 UTC 2021
Notice the 13-second gap while I hit Ctrl-C: no data was sent or buffered up.
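A note on why the stdbuf attempts in the question showed no effect: stdbuf adjusts C stdio buffering via LD_PRELOAD, and CPython's io layer does not use C stdio, so stdbuf cannot unbuffer a Python producer; running Python with -u (or PYTHONUNBUFFERED=1) does. A self-contained sketch of the difference, where a short timeout stands in for a slow sensor stream:

```shell
producer='import time
print("sample")
time.sleep(30)'
# Block-buffered: the line sits in Python's stdout buffer and is lost
# when timeout kills the process before it can flush.
timeout 1 python3 -c "$producer" > buffered.out || true
# Unbuffered: -u writes each line out immediately.
timeout 1 python3 -u -c "$producer" > unbuffered.out || true
wc -c buffered.out unbuffered.out   # buffered.out ends up empty
```

So on the Ubuntu laptop the pipeline would become something like `python3 -u read_sensors.py | nc -lk 1234` (interpreter name is an assumption; the question uses `py`). If a backlog still appears after that, the remaining data is likely queuing in the kernel pipe and socket buffers while no client is connected, which unbuffering alone cannot prevent.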

Debian init.d script fails to sleep

This problem occurs on a Pogoplug E02 running Debian jessie.
At startup the network interface takes several seconds to come online. A short delay is required after the "networking" script completes to ensure that ensuing network operations occur properly.
I wrote the following script and inserted it using update-rc.d. The script installed correctly and executes at boot time in the proper sequence: after networking, and before the network-dependent scripts, which were modified to depend on netdelay.
cat /etc/init.d/netdelay
#! /bin/sh
### BEGIN INIT INFO
# Provides: netdelay
# Required-Start: networking
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Delay 5s after eth0 up for Pogoplug
# Description:
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin:/usr/bin
./lib/init/vars.sh
./lib/lsb/init-functions
log_action_msg "Pausing for eth0 to come online"
/bin/sleep 5
log_action_msg "Continuing"
exit 0
When the script executes at startup there is no delay. I've used both sleep and /bin/sleep in the script, but neither produces the desired delay. The boot log showing this is attached below.
Thu Jan 1 00:00:25 1970: Configuring network interfaces...done.
Thu Jan 1 00:00:25 1970: INIT: Entering runlevel: 2
Thu Jan 1 00:00:25 1970: Using makefile-style concurrent boot in runlevel 2.
Thu Jan 1 00:00:26 1970: Starting SASL Authentication Daemon: saslauthd.
Thu Jan 1 00:00:29 1970: Pausing for eth0 to come online.
Thu Jan 1 00:00:30 1970: Continuing.
Thu Jan 1 00:00:33 1970: ntpdate updating system time.
Wed Feb 1 05:33:40 2017: Starting enhanced syslogd: rsyslogd.
(The Pogoplug has no hardware clock and has no idea what time it is until ntpdate has run.)
Can someone see where the problem might be?
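One thing that stands out in the script above: `./lib/init/vars.sh` and `./lib/lsb/init-functions` execute relative paths rather than sourcing the files; the POSIX source syntax is a dot followed by a space, as in `. /lib/init/vars.sh`. Whether that fully explains the missing delay is unclear, but the difference matters, as this self-contained sketch shows (vars_demo.sh is a stand-in file, not part of the original script):

```shell
# A sourced file runs in the current shell; an executed one runs in a
# child process whose variable and function definitions are lost.
echo 'NETDELAY_OK=yes' > vars_demo.sh
chmod +x vars_demo.sh
./vars_demo.sh                                   # executed in a child shell
echo "after execute: ${NETDELAY_OK:-unset}" > executed.txt
. ./vars_demo.sh                                 # sourced into this shell
echo "after source: ${NETDELAY_OK:-unset}" > sourced.txt
cat executed.txt sourced.txt
```

With the execute form, log_action_msg and the variables from vars.sh are never defined in the init script itself, so its behavior at boot depends on what happens to be in the environment.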

Openshift hourly cron job suddenly stopped working

Suddenly, the hourly cron job on OpenShift stopped working.
I am using a free account, and the cron was running fine until it suddenly stopped.
Minutely jobs, on the other hand, are running fine, given the following files:
app-root/runtime/repo/.openshift/cron/minutely/cminut
#!/bin/bash
echo 'ping'
and
app-root/runtime/repo/.openshift/cron/hourly/chour
#!/bin/bash
echo 'pong'
as well as the following permissions
[xxx-xxxxxxx.rhcloud.com cron]\> ls -la hourly/
total 4
drwx------. 2 1234567 1234567 18 Jun 28 19:04 .
drwx------. 4 1234567 1234567 52 Jun 28 19:04 ..
-rwx--x--x. 1 1234567 1234567 24 Jun 28 19:04 chour
[xxx-xxxxxxx.rhcloud.com cron]\> ls -la minutely/
total 4
drwx------. 2 1234567 1234567 19 Jun 28 19:04 .
drwx------. 4 1234567 1234567 52 Jun 28 19:04 ..
-rwx------. 1 1234567 1234567 24 Jun 28 19:04 cminut
[xxx-xxxxxxx.rhcloud.com cron]\>
The minutely cron job runs fine and I can see the log file cron_minutely.log in $OPENSHIFT_LOG_DIR.
For the hourly cron job I cannot see cron_hourly.log, nor is the job executed.
My previous attempts went through uninstalling and reinstalling the cron cartridge as mentioned here, but there was no success running the hourly cron job.
Is there any other way that I can debug this, or any OpenShift-specific fix known to solve this?
After some desperate attempts and after inspecting the cron cartridge script cron_runjobs.sh,
I came to note that whenever I ran the hourly cron via this script, the log
":SKIPPED: $freq cron run for openshift user '$OPENSHIFT_GEAR_UUID"
popped up; such a message was not fired for the other crons (minutely, weekly, ...).
Upon closer inspection I noticed that there were several processes running this script cron_runjobs.sh on the server. After killing these processes and re-deploying the application, the hourly cron job started to work again as expected.
I do not know why these processes were hung and still running; maybe because I used sleep in the hourly cron before, although I am not sure that was the reason.
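The cleanup described above can be sketched as follows (cron_runjobs.sh is the cartridge script named in the answer; the availability of pgrep/pkill on the gear is an assumption):

```shell
# List any lingering copies of the cartridge's job runner; -f matches
# against the full command line, not just the process name.
stale=$(pgrep -f 'cron_runjobs\.sh' || true)
if [ -n "$stale" ]; then
    echo "stale cron_runjobs.sh PIDs: $stale" > stale.txt
    # kill $stale   # then re-deploy the application
else
    echo "no stale cron_runjobs.sh processes" > stale.txt
fi
cat stale.txt
```

The kill line is left commented out so the check can be run safely first; killing and then re-deploying matches the sequence that fixed it here.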

cron command runs but does not know the date or time

Perhaps I'm just missing something simple, so here goes.
I have a webmin server on Ubuntu and also OpenGTS on a VPS; everything works fine and I set it all up from scratch.
I have a cron job like this:
bash /usr/local/OpenGTS_2.5.0/bin/trim.sh
trim.sh is:
#!/bin/sh
MAILTO=me@mymail.net
cd /usr/local/OpenGTS_2.5.0/bin/
./admin.sh Device -account=vehicles -device=laguna -deleteOldEvents=-5d -confirmDelete
This should delete entries from the database that are older than 5 days.
When run from the command line it outputs correctly:
Entry Point: org.opengts.db.tables.Device
Deleting events before "Wed Jun 11 23:59:59 BST 2014" ...
Device: laguna - Deleted 0 old events (Saved last event, Nothing to delete)
However, when it runs from cron:
Entry Point: org.opengts.db.tables.Device
Deleting events before "Mon Jun 09 23:59:59 BST 2014" ...
Device: laguna - Deleted 0 old events (Empty range)
If I set it to 1 day or 2 days, it still insists on Mon Jun 09 23:59:59 BST 2014.
I'm totally stumped - any ideas?
Thanks
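For what it's worth, when a command computes a different date under cron than it does interactively, a common first step is to compare the two environments, since cron jobs run with a minimal environment (TZ, PATH, HOME and the working directory often differ). The crontab entry below is purely illustrative:

```shell
# From an interactive shell, capture what you see:
date > shell_date.txt
env | sort > shell_env.txt
# Illustrative crontab entry capturing the same from cron's side:
#   * * * * * /bin/sh -c 'date > /tmp/cron_date.txt; env | sort > /tmp/cron_env.txt'
# Then compare:
#   diff shell_env.txt /tmp/cron_env.txt
cat shell_date.txt
```

If the dates themselves agree but OpenGTS still computes a stale cutoff, the difference is likely inside admin.sh's own environment handling rather than in cron's clock.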
