I need clarification on the jmeter.sh file and the jmeter file (without extension), both of which are in the bin folder.
For example:
1. If I set a different HEAP size in the jmeter and jmeter.sh files, which one will be used?
2. Does the answer depend on how I run the test (for example, jmeter -n -t vs. jmeter.sh -n -t)?
3. If the test is started with the jmeter command instead of jmeter.sh, will jmeter.sh be called in turn (and hence its HEAP used), or vice versa?
Related question on the difference between jmeter.bat and jmeter:
difference between jmeter.bat/jmeter.sh and the jmeter file
jmeter.sh is a wrapper for the jmeter script (the one without an extension). It performs some prerequisite steps such as determining the current working directory, detecting the Java version, and constructing arguments based on that version, so you should prefer this file for running JMeter under Unix and its derivatives.
jmeter is a wrapper for the ApacheJMeter.jar binary; it sets default JVM arguments and overrides or adds further Java arguments depending on your operating system.
The sequence is the following:
jmeter.sh calls jmeter
jmeter calls ApacheJMeter.jar
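You can see this hand-off in the scripts themselves (a quick sanity check, assuming a standard JMeter install layout):
grep jmeter bin/jmeter.sh        # jmeter.sh invokes the jmeter script
grep ApacheJMeter.jar bin/jmeter # the jmeter script launches the jar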
If you want to change the HEAP (or any other setting), set the appropriate environment variable, for example:
HEAP=4G && export HEAP && ./jmeter.sh -n -t /path/to/test.jmx ...
More information: How to Get Started With JMeter: Installation & Test Plans
jmeter.sh calls jmeter; both are Unix scripts, and jmeter is the main/default one.
jmeter
run JMeter (in GUI mode by default). Defines some JVM settings which may not work for all JVMs.
jmeter.sh
very basic JMeter script (You may need to adapt JVM options like memory settings).
Before running either script, you can set JVM_ARGS.
It may be necessary to set a few environment variables to configure the JVM used by JMeter; these can be set directly in the shell that starts the jmeter script. Setting the variable JVM_ARGS will override most pre-defined settings. For example,
JVM_ARGS="-Xms1024m -Xmx1024m" jmeter -t test.jmx [etc.]
will override the HEAP settings in the script.
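Equivalently, the variable can be exported first so that it applies to every run in that shell (a minimal sketch):
export JVM_ARGS="-Xms1024m -Xmx1024m"
./jmeter.sh -n -t test.jmx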
Related
I installed JMeter on one of my Mesos nodes, but I cannot run it; I get the following error:
================================================================================
Don't use GUI mode for load testing, only for Test creation and Test debugging.
For load testing, use CLI Mode (was NON GUI):
jmeter -n -t [jmx file] -l [results file] -e -o [Path to web report folder]
& increase Java Heap to meet your test requirements:
Modify current env variable HEAP="-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m" in the jmeter batch file
Check: https://jmeter.apache.org/usermanual/best-practices.html
================================================================================
An error occurred: Can't connect to the window server using ':0' as the value of the DISPLAY variable.
A DISPLAY is only needed if you're running this in GUI mode. Since you're running this in Mesos, you cannot use GUI mode. Add the -n option to run in CLI mode.
JMeter: Getting Started: CLI Mode
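Putting both pieces of advice together, a CLI-mode invocation on the Mesos node could look like this (all paths are hypothetical):
# -n = CLI mode (no DISPLAY needed), -l = results file, -e -o = HTML report
./jmeter.sh -n -t /path/to/test.jmx -l /path/to/results.jtl -e -o /path/to/report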
How can I get an environment variable from the Dockerfile? For example, I am adding
ENV URL_PATH="google.com"
in my Dockerfile; can I get this URL_PATH in my JMeter .jmx file with the help of a User Defined Variable?
On Windows it works fine with ${__env(URL_PATH)},
but on Docker it does not. How can I solve this problem?
You can use the -e option to pass environment variables into the container when running it.
docker run -e URL_PATH=google.com ...
Docs: https://docs.docker.com/engine/reference/run/#env-environment-variables
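Note that ENV URL_PATH="google.com" in the Dockerfile already bakes a default into the image; -e merely overrides it at run time. For instance (the image name is hypothetical):
docker run -e URL_PATH=example.com my-jmeter-image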
As far as I can see, __env() is a Custom JMeter Function; it is not available in vanilla JMeter, so your options are:
Amend your Dockerfile to download http://repo1.maven.org/maven2/kg/apc/jmeter-plugins-functions/2.0/jmeter-plugins-functions-2.0.jar into "lib/ext" (a Dockerfile sketch follows this list). This way you will be able to use the __env() function in the Docker environment normally. See Make Use of Docker with JMeter - Learn How for an example Docker configuration that uses JMeter with plugins.
Switch to the __groovy() function. Replace all occurrences of ${__env(URL_PATH)} with the following expression:
${__groovy(System.getenv('URL_PATH'),)}
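As a sketch of the first option, the Dockerfile amendment could look like this (the JMETER_HOME path is an assumption; the jar URL is the one above):
# hypothetical Dockerfile lines: fetch the plugins-functions jar into lib/ext
ENV JMETER_HOME /opt/apache-jmeter
RUN wget -q http://repo1.maven.org/maven2/kg/apc/jmeter-plugins-functions/2.0/jmeter-plugins-functions-2.0.jar -O ${JMETER_HOME}/lib/ext/jmeter-plugins-functions-2.0.jar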
I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to Startup Applications, using sh /path/to/run_roscore.sh for roscore and sh /path/to/run_detection_node.sh for the detection node. It still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and the syslog then shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
Since I eventually got this working, I am going to answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, once I changed the shell type from sh to bash, the scripts started running as soon as the system boots up.
Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. You can instead do it from a single script (the one that also runs the detection node) by putting roscore& before the rosnode start command. This fires up the master as a background process and leaves the same terminal free for the commands that follow.
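A minimal sketch of that single-script approach (paths are the question's; the sleep delay is a guess):
#!/bin/bash
roscore &                        # start the ROS master in the background
sleep 5                          # give the master a moment to come up
/path/to/run_detection_node.sh   # then start the detection node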
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
  file: /var/log/script1.log
wait: 1
require:
  - script2
And for /etc/immortal/script2.yml
cmd: /path/to/script2
log:
  file: /var/log/script2.log
This will try to start both scripts at boot time; script1 will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Depending on your operating system you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; you can find some here: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu machine starts up, you should start it as a service (not via cron).
See this question/answer.
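For reference, a minimal systemd unit for this could look roughly like the following (unit name, paths, and dependencies are assumptions):
# /etc/systemd/system/roscore.service (hypothetical)
[Unit]
Description=ROS master
After=network.target

[Service]
ExecStart=/path/to/run_roscore.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable roscore.service so it starts at boot.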
I am building some Docker Spark images and I am a little puzzled about how to pass environment (ENV) variables defined in the Dockerfile all the way down into the container via "run -e", on into supervisord, and then into the spark-submit shell, without having to hard-code them again in the supervisord.conf file (as seems to be the suggestion in something somewhat similar here: supervisord environment variables setting up application).
To help explain, imagine the following components:
Dockerfile (contains about 20 environment variables: "ENV FOO1 bar1", etc.)
run.sh (docker run -d -e my_spark_program)
conf/supervisord.conf ([program:my_spark_program] command=sh /opt/spark/sbin/submit_my_spark_program.sh, etc.)
submit_my_spark_program.sh (contains a spark-submit of the jar I want to run; it probably also needs something like --files and
--conf 'spark.executor.extraJavaOptions=-Dconfig.resource=app'
--conf 'spark.driver.extraJavaOptions=-Dconfig.resource=app'
but this doesn't quite seem right?)
I would like to define my ENV variables once and only once, in the Dockerfile, and I think it should be possible to pass them into the container via run.sh using the "-e" switch, but I can't figure out how to pass them from there into supervisord and on into the spark-submit shell (submit_my_spark_program.sh) so that they are ultimately available to my spark-submitted jar file. This seems a little over-engineered, so maybe I am missing something here...?
Apparently the answer (or at least the workaround) in this case is not to use System.getProperty(name, default) to get the Docker ENV variables through the supervisor, but instead to use the somewhat less flexible System.getenv(name), as this seems to work.
I was hoping to be able to use System.getProperty(name, default), since it offers an easy way to supply default values, but apparently that does not work in this case. If someone can improve on this answer by providing a way to use System.getProperty, then by all means join in. Thanks!
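If System.getProperty's default-value behaviour is still wanted, one possible workaround (untested here) is to expand the environment variables in the submit script and forward them as JVM system properties via the --conf flags from the question (FOO1 is the question's example variable; the property name is made up):
# submit_my_spark_program.sh (sketch)
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dfoo1=${FOO1}" \
  --conf "spark.executor.extraJavaOptions=-Dfoo1=${FOO1}" \
  /path/to/my_spark_program.jar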
I'm trying to export a Facter system variable using the MCollective shell agent, but when I check the facter list afterwards the new variable is not set. What could be the reason?
Setting this variable through Puppet is not possible, because the Puppet catalog run looks up this custom fact and then resolves the node, so the Facter variable must be set before the Puppet run.
Is there any other MCollective agent that could be used for this purpose?
mco shell run "export FACTER_deployment_pattern='pattern2'"; facter
[ ============================================================> ] 2 / 2
puppetagent:
qaa-node-5:
Finished processing 2 / 2 hosts in 146.06 ms
The exported environment variable is only available to processes spawned by the shell that this command starts. In other words, it doesn't really do anything: it spawns a shell, sets an environment variable, and then the shell exits.
To create a Facter external fact that is actually visible to other processes, create a file under Facter's facts.d directory; the exact path depends on your installation.
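For example, on many installations an external fact can be dropped into facts.d as a plain key=value text file (the /etc/facter/facts.d path is typical but varies between Facter versions, so treat it as an assumption):
# write the fact once; facter and subsequent puppet runs will then see it
mco shell run "mkdir -p /etc/facter/facts.d && echo 'deployment_pattern=pattern2' > /etc/facter/facts.d/deployment.txt"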