I have a series of smoke tests that my company uses to validate its web application. These tests are written in Ruby. We want to split these tests into a series of tasks within Locust.io. I am a newbie when it comes to Locust.io. I have written Python code that can run these tasks one after the other in succession. However, when I make them Locust.io tasks, nothing is reported in the stats window. I can see the tests run in the console, but the statistics never get updated. What do I need to do? Here is a snippet of the locustfile.py I generate.
def RunTask(name, task):
    code, logs = RunSmokeTestTask(name, task)
    info("Smoke Test Task {0}.{1} returned errorcode {2}".format(name, task, code))
    info("Smoke Test Task Log Follows ...")
    info(logs)

class SmokeTasks(TaskSet):
    @task
    def ssoTests_test_access_sso(self):
        RunTask("ssoTests.rb", "test_access_sso")
. . .
RunSmokeTestTask is what actually runs the task. It is the same code that I use when I invoke the task outside of Locust.io. I can see the info in the logfile. Some of the tasks fail, but the statistics never update. I know I am probably missing something silly.
You need to actually report the events. (edit: I realize now that maybe you were hoping that locust/python would be able to detect the requests made from Ruby, but that is not possible. If you are ok with just reporting the whole test as a single "request", then keep reading)
Add something like this to your taskset:
self.user.environment.events.request_success.fire(
    request_type="runtask", name=name, response_time=total_time, response_length=0
)  # note: response_time is expected in milliseconds
You'll also need to measure the time it took. Here is a more complete example (but also a little complex):
https://docs.locust.io/en/stable/testing-other-systems.html#sample-xml-rpc-user-client
Note: TaskSets are an advanced (useless, imho) feature; you probably want to put the @task directly under a User, and the RunTask method as well.
Something like:
class SmokeUser(User):
    def RunTask(self, name, task):
        start_time = time.time()
        code, logs = RunSmokeTestTask(name, task)
        total_time = int((time.time() - start_time) * 1000)  # Locust expects milliseconds
        self.environment.events.request_success.fire(
            request_type="runtask", name=name, response_time=total_time, response_length=0
        )
        info("Smoke Test Task {0}.{1} returned errorcode {2}".format(name, task, code))
        info("Smoke Test Task Log Follows ...")
        info(logs)

    @task
    def ssoTests_test_access_sso(self):
        self.RunTask("ssoTests.rb", "test_access_sso")
Related
I'm building an application which is intended to do bulk-job processing of data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens to the system event using winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
In case an error occurs, it shuts down everything and restarts the script again.
OK, if a system error event occurs, this event should be handled in a way that the subprocess gets notified. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system event from inside the subprocess didn't work. It results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process; that is, send a message like "hey, an error occurred", listen for it within the subprocess, and then create the dump.
I'm really sorry that I'm not allowed to post any code in this case. But I hope (and actually think) that my description should be understandable. My question is just about which module to use to accomplish this in the best way.
I would be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
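For a single round trip, communicate() is often simpler than manual pipe reads and writes. Here is a minimal sketch; the inline child script is a throwaway example, not part of the original watchdog:

```python
import subprocess
import sys

# One-shot parent/child exchange via communicate(): it writes all input,
# closes the child's stdin, and waits for the process to finish, which
# avoids the deadlocks that manual pipe handling can cause.
child_code = "import sys; print(sys.stdin.read().upper(), end='')"
p = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
out, err = p.communicate("hello child")
print(out)  # HELLO CHILD
```

Keep in mind communicate() can only be called once per process, so it suits one-shot jobs rather than an ongoing dialogue like the example scripts below.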
EDIT to add example scripts:
main.py
from subprocess import Popen, PIPE, STDOUT
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    # flush so the parent's readline() sees the reply immediately,
    # even though stdout is block-buffered when attached to a pipe
    print(data, flush=True)

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.
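If your messages might legitimately contain newlines, one workaround (a sketch, not part of the original scripts) is to serialize each message as a single JSON line before writing it to the pipe, since json.dumps escapes embedded newlines:

```python
import json

# Frame an arbitrary message (even one containing newlines) as exactly
# one line on the wire; decode it back on the other side.
def encode_line(message: str) -> str:
    return json.dumps({"msg": message}) + "\n"

def decode_line(line: str) -> str:
    return json.loads(line)["msg"]

framed = encode_line("multi\nline\nmessage")
# The payload itself occupies a single line: "\n" is escaped as "\\n".
assert "\n" not in framed.rstrip("\n")
print(decode_line(framed))
```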
I have just started using Locust with one custom client. At the end of the test (in on_stop) I fetch some results from APM and log them.
Issue: when I use the command below, all 1000 users fetch those details and log the data from APM. But I want only the first user to do this job and let the other users skip this call.
$ locust -f --headless -u 1000 -r 100 --run-time 1h30m
Please let me know if someone has done something similar earlier. Thanks in advance.
I think what you probably want is test_stop EventHook. It runs once when the test is stopped. If you run in distributed mode, it runs once only on the master.
https://docs.locust.io/en/stable/api.html#locust.event.Events.test_stop
My understanding is that you are not running in distributed mode and you are using the User class' on_stop method. In that case I would try handling it with a global variable; it is quick, though it might need some work. Like this:
apm_logging_user_assigned = False

class MyUser(User):
    # test stuff here ...

    def on_start(self):
        global apm_logging_user_assigned  # needed to rebind the module-level flag
        self.apm_log = False
        if apm_logging_user_assigned is False:
            self.apm_log = True
            apm_logging_user_assigned = True
        super().on_start()

    def on_stop(self):
        if self.apm_log:
            # do your apm stuff here...
            pass
        super().on_stop()
I am working on a Python program using Flask, where I want to extract keys from a dictionary. The keys are in text format. I want to repeat this whole process at a specific time interval and display the output in the local browser each time.
I have tried this using flask_apscheduler. The program runs and shows output, but only once; it does not repeat after the interval.
This is the Python program I tried:
@app.route('/trend', methods=['POST', 'GET'])
def run_tasks():
    for i in range(0, 1):
        app.apscheduler.add_job(func=getTrendingEntities, trigger='cron',
                                args=[i], id='j' + str(i), second=5)
    return "Code run perfect"

@app.route('/loc', methods=['POST', 'GET'])
def getIntentAndSummary(self, request):
    if request.method == "POST":
        reqStr = request.data.decode("utf-8", "strict")
        reqStrArr = reqStr.split()
        reqStr = ' '.join(reqStrArr)
        text_1 = []
        requestBody = json.loads(reqStr)
        if requestBody.get('m') is not None:
            text_1.append(requestBody.get('m'))
        return jsonify(text_1)

if (__name__ == "__main__"):
    app.run(port=8000)
The problem is that you're calling add_job every time the /trend page is requested. The job should only be added once, as part of the initialization, before starting the scheduler (see below).
It would also make more sense to use the 'interval' trigger instead of 'cron', since you want your job to run every 5 seconds. Here's a simple working example:
from flask import Flask
from flask_apscheduler import APScheduler
import datetime

app = Flask(__name__)

# function executed by the scheduled job
def my_job(text):
    print(text, str(datetime.datetime.now()))

if (__name__ == "__main__"):
    scheduler = APScheduler()
    scheduler.add_job(func=my_job, args=['job run'], trigger='interval', id='job', seconds=5)
    scheduler.start()
    app.run(port=8000)
Sample console output:
job run 2019-03-30 12:49:55.339020
job run 2019-03-30 12:50:00.339467
job run 2019-03-30 12:50:05.343154
job run 2019-03-30 12:50:10.343579
You can then modify the job attributes by calling scheduler.modify_job().
As for the second problem which is refreshing the client view every time the job runs, you can't do that directly from Flask. An ugly but simple way would be to add <meta http-equiv="refresh" content="1" > to the HTML page to instruct the browser to refresh it every second. A much better implementation would be to use SocketIO to send new data in real-time to the web client.
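As a sketch of the meta-refresh workaround (the route name and placeholder value here are hypothetical examples):

```python
from flask import Flask

app = Flask(__name__)

# In a real app the scheduled job would update this value; here it is
# just a hypothetical placeholder.
latest_result = "no data yet"

@app.route("/trend")
def show_trend():
    # The meta tag makes the browser re-request the page every 5 seconds,
    # matching the scheduler's job interval.
    return f'<meta http-equiv="refresh" content="5">{latest_result}'
```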
I would recommend that you start a daemonized thread, import your application variable, and then use with app.app_context() in order to log to your console.
It's a little bit more fiddly, but it allows the work to run on a separate thread.
I use this method to fire off a bunch of HTTP requests concurrently; the alternative is to wait for each response before making a new one.
I'm sure you've realised that the thread will stay occupied if you run an infinitely running command.
Make sure to daemonize the thread so that when you stop your web app it kills the thread gracefully at the same time.
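A minimal sketch of that pattern (the 5-second tick and log message are arbitrary examples):

```python
import threading
import time

from flask import Flask

app = Flask(__name__)

def background_job():
    # Run inside the application context so app-bound helpers
    # (app.logger, extensions) behave as they do during a request.
    with app.app_context():
        while True:
            app.logger.info("background tick")
            time.sleep(5)

# daemon=True means the thread dies with the interpreter when the
# web app stops, instead of keeping the process alive.
worker = threading.Thread(target=background_job, daemon=True)
worker.start()
```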
I'm trying to write a script that will get the build number of a build that has been triggered by another job. For example:
I have a build job that calls two other jobs (call/trigger builds on another project). When the main job finishes with success, I would like to get the number of the first build job that was triggered from within it. The script I'm trying to run finds the main job; however, I can't get the build number of the triggered job in any way.
def job = jenkins.model.Jenkins.instance.getItem("Hourly")
job.builds.each {
    def build = it
    if (it.getResult().toString().equals("SUCCESS")) {
        // The rest of the code should go here!
    }
}
I tried to find it in the Jenkins Javadoc API and online, but without any luck. Can somebody please help me with that?
P.S. The script runs after the job has finished(triggered when needed only).
You can parse the build number (of the child job) from the build log (of the parent job).
For example:
j = Jenkins.getInstance();
jobName = "parentJobName";
job = j.getItem(jobName);
bld = job.getBuildByNumber(parentBuildNumber);
def buildLog = bld.getLog(10); //make sure you read enough lines
def group = (buildLog =~ /#(\d+) of Job : childJobName with/ );
println("The triggered build number: ${group[0][1]}");
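The same regex idea, sketched in Python for clarity; the log excerpt and pattern are hypothetical and must match however your parent job actually reports its triggered builds:

```python
import re

# Hypothetical excerpt of the parent job's console log.
build_log = """Starting childJobName ...
#42 of Job : childJobName with status SUCCESS"""

# Capture the digits after '#' on the line that reports the child build.
match = re.search(r'#(\d+) of Job : childJobName with', build_log)
print(match.group(1))  # 42
```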
I am calling a REST based service from SoapUI. I have created a load test for the service and the test works. I wrote the following code in my setup script for the load test.
log.info("This is from the setup script")
def request = context.expand('${#Request}')
log.info(request)
def response = context.expand('${#Response}')
log.info(response);
The only item I am getting in my log is "This is from the setup script".
I also added the following lines of code in my Teardown script.
log.info("Teardown script")
def response = context.expand('${#Response}')
log.info(response);
I am not seeing the "Teardown script" text in the log. At this point I am a bit puzzled as to the behavior.
In the load test's test case options, I have unchecked the "Discard OK results" check box.
What changes do I need to do to my scripts in order to log the requests and the responses?
When you create a setup and/or teardown script, remember that those run only once per load test run, not once per test! What you intended is not going to work.
In your setup script, since no tests have run yet, the context is going to be empty, as you can see from your log message.
In your teardown script, I suspect there is a bug in SoapUI and the output is not getting sent to the log tab. Even when I intentionally created an error (I used logg.info "Hello world!", note the intentional double g), I still got an error in the error log tab, which shows the script does run.