I am trying to write a JMeter test plan for testing a REST server. The server currently supports about 80 GET requests (plus several POST and PUT requests). Is there any easy way to create HTTP request samplers for all of the GET requests without doing them by hand? Can I put the URLs in a CSV file and bulk-load them? How?
Sure.
You can use a CSV Data Set Config to read the request details from your CSV file in a loop, under a While Controller whose condition is "until the end of file".
As the HTTP sampler to use along with those request details, you can use one of these:
HTTP Request - JMeter's out-of-the-box sampler;
hostname, path, and protocol can be specified as variables extracted from the CSV entry, but the request METHOD is selected from a drop-down list - so with this sampler you have to set up separate loops and CSV files for GET / POST / PUT respectively.
HTTP Raw Request - a custom sampler from jmeter-plugins;
in this case you can define all the details and parameters of the request from the CSV.
The common schema will look like this:
. . .
While Controller
Condition = ${__javaScript("${rMethod}"!="<EOF>",)}
+ CSV Data Set Config
Filename = requests.csv
Variable names = rMethod,rHost,rPort,rPath...
+ HTTP Request / HTTP Raw Request
. . .
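For illustration, a requests.csv matching those variable names might look something like this (the hosts, ports, and paths are just placeholders):

GET,example.com,8080,/api/users
GET,example.com,8080,/api/orders
GET,example.com,8080,/api/products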
I am new to the whole backend stuff. I understood that both body-parser and express.json() will parse the incoming request body (from the client) into the request object.
But what happens if I do not parse the incoming request from the client?
Without middleware parsing your requests, your req.body will not be populated. You will then need to dig into the req object yourself and find out how to get the values you want.
bodyParser acts as an interpreter, transforming the HTTP request into an easily accessible format based on your needs.
You can read more about HTTP requests here (you can even write your own HTTP server):
https://nodejs.org/api/http.html#http_class_http_incomingmessage
You will just lose the data, and the request.body field will be empty.
The data is still sent to you, so it is transferred to the server, but since you have not processed it you won't have access to it.
You can parse it yourself, by the way. The request is a Readable stream, so you can listen for the data and end events to collect and then parse the body.
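A minimal sketch of that manual approach, assuming a plain Express route and a URL-encoded body (the route and field names are just placeholders):

const express = require('express')
const app = express()

// no body-parsing middleware mounted here
app.post('/login', (req, res) => {
  let raw = ''
  req.on('data', chunk => { raw += chunk })      // collect the incoming chunks
  req.on('end', () => {
    // raw looks like "username=scott&password=secret"
    const parsed = Object.fromEntries(new URLSearchParams(raw))
    res.json(parsed)
  })
})

app.listen(3000)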
If you do not convert the data, you receive exactly what was sent: raw data that looks somewhat like this: username=scott&password=secret&website=stackabuse.com. That isn't too bad, but you will manually have to work out which part is a parameter, which is a query, and, inside those two, where the data is.
Unless doing it by hand is a project requirement, all that heavy lifting is taken care of by Express, and you get a nicely formatted object looking like this:
{
username: 'scott',
password: 'secret',
website: 'stackabuse.com'
}
For situations where you DO need the raw data, Express gives you a convenient way of accessing that as well; all you need to do is use
express.raw([options]) along with express.json([options])
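For example, a sketch of mounting both (the content type and route here are assumptions):

const express = require('express')
const app = express()

app.use(express.json())                                     // JSON bodies end up parsed on req.body
app.use(express.raw({ type: 'application/octet-stream' }))  // matching raw bodies arrive as a Buffer

app.post('/upload', (req, res) => {
  // req.body is a Buffer of the unparsed bytes here
  res.send('received ' + req.body.length + ' bytes')
})

app.listen(3000)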
I would like to use mitmproxy and a custom Python script to duplicate some requests multiple times.
As an example: if the client does a POST request to aaa.bbb.ccc and the URL contains one of the strings ["test","runner"], the script takes this request and sends it 10 times.
amount = 10
wordwatch = ["test","dev"]
domain = "aaa.bbb.ccc"
def request(context, flow):
    print(type(flow.request.url))
    if domain in flow.request.url:
        if any(s in flow.request.url for s in wordwatch):
            for i in range(0, amount + 1):
                flow.response
The problem I have is that I get neither an error nor anything in the output. Also, on the server side, when I watch the Apache log I can't even see the requests coming in. The server uses only HTTP, so it shouldn't be an HTTPS/certificate issue. It is quite possible that I misunderstood the documentation and samples on Manual Samples.
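For reference, a rough sketch of how the duplication might be done with a recent mitmproxy release; the single-argument request hook, flow.copy(), and the replay.client command are assumptions based on current mitmproxy and differ from the older context/flow API used above:

from mitmproxy import ctx

amount = 10
wordwatch = ["test", "dev"]
domain = "aaa.bbb.ccc"

def request(flow):
    # skip flows that are themselves replays, otherwise the duplicates multiply forever
    if flow.is_replay == "request":
        return
    if domain in flow.request.pretty_url and any(s in flow.request.pretty_url for s in wordwatch):
        for _ in range(amount):
            dup = flow.copy()
            ctx.master.commands.call("replay.client", [dup])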
I am trying to come up with a JMeter setup in which I want to read an entire CSV file that has 200,000 rows and iterate through each row, creating a new thread each time, since I am using a JSR223 PreProcessor that requires a new thread for removing empty parameters from the request body. For some reason, when I use a While loop, only the first test passes and the rest of the tests fail, as the JSR223 PreProcessor keeps reading the previous thread. I have also un-checked "Cache compiled script if available", but still no luck. I also want to add that when I explicitly specify the number of threads as 100 out of 200,000, all of my 100 tests pass, as it reads a new thread each time. Below is a screenshot of my setup:
This Fails -
This Passes -
JSR223 PreProcessor script that I am using:
def request = new groovy.json.JsonSlurper().parseText(sampler.getArguments().getArgument(0).getValue())
def newRequest = evaluate(request.inspect())
request.body.each { entry ->
    if (entry.getValue().equals('')) {
        newRequest.body.remove(entry.getKey())
    }
}
sampler.getArguments().removeAllArguments()
sampler.addNonEncodedArgument('', new groovy.json.JsonBuilder(newRequest).toPrettyString(), '')
sampler.setPostBodyRaw(true)
Console log when using the While Controller:
Replace this:
sampler.getArguments().removeAllArguments()
By:
def arguments = new org.apache.jmeter.config.Arguments();
sampler.setArguments(arguments);
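With that substitution, the whole PreProcessor would look something like this (only the argument handling changes; the rest is the asker's script):

def request = new groovy.json.JsonSlurper().parseText(sampler.getArguments().getArgument(0).getValue())
// deep copy of the parsed structure so entries can be removed while iterating the original
def newRequest = evaluate(request.inspect())

request.body.each { entry ->
    if (entry.getValue().equals('')) {
        newRequest.body.remove(entry.getKey())
    }
}

// build a fresh Arguments object instead of clearing the existing one
def arguments = new org.apache.jmeter.config.Arguments();
sampler.setArguments(arguments);
sampler.addNonEncodedArgument('', new groovy.json.JsonBuilder(newRequest).toPrettyString(), '')
sampler.setPostBodyRaw(true)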
If you're looking to learn JMeter correctly, this book will help you.
It is not possible to provide a comprehensive answer without seeing:
Your While Controller condition stanza
First 3 lines of your CSV file
Your CSV Data Set Config setup
Your HTTP Request sampler parameters or body
Double check the following:
Compare HTTP Request sampler body for 1st and 2nd requests under the While Controller
JMeter Variables originating from the CSV Data Set Config (you can inspect them using Debug Sampler and View Results Tree listener combination)
Enable debug logging for the While Controller; it might be the case that it is not doing what you expect. This can be done by adding the next line to the log4j2.xml file:
<Logger name="org.apache.jmeter.control.WhileController" level="debug" />
So the title is a little confusing, I guess.
I have a script that I've been writing that will display some random data and other non-essentials when I open my shell. I'm using grequests to make my API calls, since I'm using more than one URL. For my weather data, I use Weather Underground's API, since it offers active alerts. The alerts and conditions data are on separate pages. What I can't figure out is how to insert the appropriate name in the grequests object when it is making requests. Here is the code that I have:
URLS = ['http://api.wunderground.com/api/'+api_id+'/conditions/q/autoip.json',
        'http://www.ourmanna.com/verses/api/get/?format=json',
        'http://quotes.rest/qod.json',
        'http://httpbin.org/ip']
requests = (grequests.get(url) for url in URLS)
responses = grequests.map(requests)
data = [response.json() for response in responses]
#json parsing from here
In the URL 'http://api.wunderground.com/api/'+api_id+'/conditions/q/autoip.json' I need to make an API request to conditions and alerts to retrieve the data I need. How do I do this without rewriting a fourth URLS string?
I've tried
pages = ['conditions', 'alerts']
URL = ['http://api.wunderground.com/api/'+api_id+([p for p in pages])/q/autoip.json']
but, as I'm sure some of you more seasoned programmers know, it threw an exception. So how can I iterate through these pages, or will I have to write out both complete URLs?
Thanks!
OK, I was actually able to figure out how to call each individual page within the grequests object by using a simple for loop. Here is the code that I used to produce the expected results:
import grequests
pages = ['conditions', 'alerts']
api_id = 'myapikeyhere'
for p in pages:
    URLS = ['http://api.wunderground.com/api/'+api_id+'/'+p+'/q/autoip.json',
            'http://www.ourmanna.com/verses/api/get/?format=json',
            'http://quotes.rest/qod.json',
            'http://httpbin.org/ip']
    #create grequests object and retrieve results
    requests = (grequests.get(url) for url in URLS)
    responses = grequests.map(requests)
    data = [response.json() for response in responses]
    #json parsing from here
I'm still not sure why I couldn't figure this out before.
Documentation for the grequests library here
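A small variation worth noting (not part of the original answer): since the loop above rebuilds URLS on every pass, the three non-weather endpoints are fetched once per page. Building the full URL list up front and mapping it in one batch avoids the duplicate calls:

import grequests

pages = ['conditions', 'alerts']
api_id = 'myapikeyhere'

# one Weather Underground URL per page, plus the other endpoints once
URLS = ['http://api.wunderground.com/api/' + api_id + '/' + p + '/q/autoip.json' for p in pages]
URLS += ['http://www.ourmanna.com/verses/api/get/?format=json',
         'http://quotes.rest/qod.json',
         'http://httpbin.org/ip']

requests = (grequests.get(url) for url in URLS)
responses = grequests.map(requests)
data = [response.json() for response in responses]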
UPDATE: See MarkLogic 8 - Stream large result set to a file - JavaScript - Node.js Client API for someone's answer on how to do this in Javascript. This question is specifically asking about XQuery.
I have a web application that consumes rest services hosted in node.js.
Node simply proxies the request to XQuery which then queries MarkLogic.
These queries already have paging setup and work fine in the normal case to return a page of data to the UI.
I need to have an export feature such that when I put a URL parameter of export=all on a request, it doesn't lookup a page anymore.
At that point it should get the whole result set, even if it's a million records, and save it to a file.
The actual request needs to return immediately saying, "We will notify you when your download is ready."
One suggestion was to use xdmp:spawn to call the XQuery in the background which would save the results to a file. My actual HTTP request could then return immediately.
For the spawn piece, I think the idea is that I run my query with different options in order to get all results instead of one page. Then I would loop through the data and create a string variable to call xdmp:save with.
Some questions: is this a good idea? Is there a better way? If I loop through the result set and it does happen to be very large (gigabytes), it could cause memory issues.
Is there no way to directly stream the results to a file in XQuery?
Note: Another idea I had was to intercept the request at the proxy (node) layer and then do an xdmp:estimate to get the record count and then loop through querying each page and flushing it to disk. In this case I would need to find some way to return my request immediately yet process in the background in node which seems to have some ideas here: http://www.pubnub.com/blog/node-background-jobs-async-processing-for-async-language/
One possible strategy would be to use a self-spawning task that, on each iteration, gets the next page of the results for a query.
Instead of saving the results directly to a file, however, you might want to consider using xdmp:http-post() to send each page to a server:
http://docs.marklogic.com/xdmp:http-post?q=xdmp:http-post&v=8.0&api=true
In particular, the server could be a Node.js server that appends each page as it arrives to a file or any other data sink.
That way, Node.js could handle the long-running asynchronous IO with minimal load on the database server.
When a self-spawned task hits the end of the query, it can again use an HTTP request to notify Node.js to close the file and report that the export is finished.
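A very rough sketch of such a self-spawning task, assuming MarkLogic's 1.0-ml dialect; the module path, Node.js endpoint URLs, page size, and the query itself are all placeholders, not part of the original answer:

xquery version "1.0-ml";
declare variable $start external;
declare variable $page-size := 1000;

(: placeholder query - substitute the real search here :)
let $page := cts:search(fn:collection(), cts:word-query("example"))[$start to $start + $page-size - 1]
return
  if (fn:empty($page)) then
    (: end of results: tell the Node.js server to close the file and notify the user :)
    xdmp:http-post("http://localhost:3000/export/done", <options xmlns="xdmp:http"/>)
  else (
    (: ship this page to Node.js, which appends it to the file :)
    xdmp:http-post("http://localhost:3000/export/page",
                   <options xmlns="xdmp:http"/>,
                   document { <page>{$page}</page> }),
    (: spawn the next iteration :)
    xdmp:spawn("/export-page.xqy", (xs:QName("start"), $start + $page-size))
  )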
Hoping that helps,