SOAP UI - How to Capture REST raw Response in a file - groovy

I am trying to capture the raw response of a REST (POST) API call into a file using a Groovy script.
I can see the response in the RAW tab as shown below, but the file produced is blank.
REST Response:
HTTP/1.1 401 Unauthorized
Content-Length: 0
Date: Tue, 12 Jul 2016 12:12:12 GMT
WWW-Authenticate: OAuth
Server: Jetty (8.1.17V20150415)
I am using SoapUI version 5.2.
Any help appreciated.
Groovy Script:
def Date startTime = new Date()
File it = new File("Result")
def cur_Time = startTime.getMonth() + 1 + "_" + startTime.getDate()
cur_Time = cur_Time + "_" + startTime.getHours() + startTime.getMinutes() + startTime.getSeconds()
def fileName = it.name + "_" + cur_Time

// Request file
def myXmlRequest = "C:\\ConnectivityResults\\" + "Rest_Request" + fileName + ".xml"
def request = context.expand('${Testcasename#Request}')
def req = new File(myXmlRequest)
req.write(request, "UTF-8")

// Response file
def myXmlResponse = "C:\\ConnectivityResults\\" + "Rest_Response" + fileName + ".xml"
def response = context.expand('${Testcasename#Response}')
def res = new File(myXmlResponse)
res.write(response, "UTF-8")

The problem probably isn't in your Groovy script; the problem is simply that your request is incorrect and nothing is returned in the response. Based on the HTTP headers you show in the question:
HTTP/1.1 401 Unauthorized
Content-Length: 0
Date: Tue, 12 Jul 2016 12:12:12 GMT
WWW-Authenticate: OAuth
Server: Jetty (8.1.17V20150415)
You're receiving a 401 Unauthorized response instead of a 200 OK, and the Content-Length is 0. It's normal that your response is blank; there is simply no content to save to the file.
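The behaviour is easy to reproduce outside SoapUI. As a quick illustration (my own sketch, using httpbin.org as a stand-in endpoint) with Python's requests:
import requests

# A 401 response with Content-Length: 0 carries no body at all,
# so there is nothing to save to a file.
r = requests.get('https://httpbin.org/status/401')
print(r.status_code)  # 401
print(repr(r.text))   # '' -- empty body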
EDIT BASED ON COMMENT
If you also want to save the HTTP headers to a file, you can add the following snippet to your Groovy script:
def fileName = ...
// http-headers file
def httpHeadersFilePath = "C:/ConnectivityResults/Rest_Request${fileName}.txt"
def ts = testRunner.testCase.getTestStepByName('Testcasename')
def headers = ts.getTestRequest().response.responseHeaders
def httpHeaderFile = new File(httpHeadersFilePath)
httpHeaderFile.text = ''
headers.each { key, value ->
    httpHeaderFile.append("${key}:${value}\n", 'UTF-8')
}
Hope it helps,

Sorry about the late reply...
There's a simple way to capture both and record them in a file, using a Groovy script in SoapUI:
// Take the raw request into a variable "request":
def request = context.expand( '${Request 1#RawRequest}' )
// Take the raw response into a variable "response":
def response = context.expand( '${Request 1#Response}' )
// Create and fill a file "MyFile.json" with the variables' values:
new File( "C:/foo/bar/MyFile.json" ).write( request + response, "UTF-8" )
Hope that's useful.

Related

python download file into memory and handle broken links

I'm using the following code to download a file into memory:
if 'login_before_download' in obj.keys():
    with requests.Session() as session:
        session.post(obj['login_before_download'], verify=False)
        request = session.get(obj['download_link'], allow_redirects=True)
else:
    request = requests.get(obj['download_link'], allow_redirects=True)
print("downloaded {} into memory".format(obj[download_link_key]))
file_content = request.content
obj is a dict that contains the download_link and another key that indicates whether I need to log in to the page to create a cookie.
The problem with my code is that if the URL is broken and there isn't any file to download, I still get the HTML content of the page instead of identifying that the download failed.
Is there any way to identify that the file wasn't downloaded?
I found the following solution at this URL:
import requests

def is_downloadable(url):
    """
    Does the url contain a downloadable resource?
    """
    h = requests.head(url, allow_redirects=True)
    header = h.headers
    content_type = header.get('content-type')
    if 'text' in content_type.lower():
        return False
    if 'html' in content_type.lower():
        return False
    return True

print(is_downloadable('https://www.youtube.com/watch?v=9bZkp7q19f0'))
# >> False
print(is_downloadable('http://google.com/favicon.ico'))
# >> True
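As an additional option (my suggestion, not part of the quoted solution), you can check the status code of the GET itself and let requests raise an exception for broken links; the fetch_file() helper below is hypothetical:
import requests

def fetch_file(url):
    """Download url into memory, failing loudly on broken links."""
    r = requests.get(url, allow_redirects=True)
    r.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
    return r.content

try:
    file_content = fetch_file('http://google.com/favicon.ico')
    print("downloaded {} bytes into memory".format(len(file_content)))
except requests.exceptions.HTTPError as err:
    print("download failed: {}".format(err))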

Lambda/boto3/python loop

This code acts as an early warning system for ADFS failures, and it works fine when run locally. The problem is that when I run it in Lambda, it loops non-stop.
In short:
lambda_handler() runs pagecheck()
pagecheck() produces the info needed, then passes two lists (msgdet_list, error_list) and an int (error_count) to notification()
notification() collates and prints the output. The output is two key variables (notificationheader and notificationbody).
I've #commentedOut the SNS piece which would usually email the info, and am using print() to send the info to CloudWatch logs instead, until I can get the loop sorted.
If I run this locally, it produces a single clean output. In Lambda, the function loops until it times out. It's almost as if every time the lists are updated, they're passed to notification() and it runs again. I can limit the function time, but would rather fix the code!
Cheers,
tac
# This python/boto3/lambda script sends a request to an Office 365 landing page,
# parses return details to confirm a successful redirect to the organisation's
# ADFS homepage, authenticates that the homepage is correct, raises any errors,
# and sends a consolidated report to an AWS SNS topic.
# Run once to produce pageserver and htmlchar values for the global variables.

# Import required modules
import boto3
import urllib.request
from urllib.request import Request, urlopen
from datetime import datetime
import time
import re
import sys

# Global variables to be set
url = "https://outlook.com/CONTOSSO.com"
adfslink = "https://sts.CONTOSSO.com/adfs/ls/?client-request-id="

# Input after first run
pageserver = "Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0"
htmlchar = 18600

# Input AWS SNS ARN
snsarn = 'arn:aws:sns:ap-southeast-2:XXXXXXXXXXXXX:Daily_Check_Notifications_CONTOSSO'
sns = boto3.client('sns')

def pagecheck():
    # Present the request to the webpage as if coming from a user in a browser
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
    values = {'name': 'user'}
    headers = {'User-Agent': user_agent}
    data = urllib.parse.urlencode(values)
    data = data.encode('ascii')

    # "Null" the Message Detail and Error lists
    msgdet_list = []
    error_list = []

    request = Request(url)
    req = urllib.request.Request(url, data, headers)
    response = urlopen(request)
    with urllib.request.urlopen(request) as response:
        # Get the URL. This gets the real URL.
        acturl = response.geturl()
        msgdet_list.append("\nThe Actual URL is:")
        msgdet_list.append(str(acturl))
        if adfslink not in acturl:
            error_list.append(str("Redirect Fail"))

        # Get the HTTP response code
        httpcode = response.code
        msgdet_list.append("\nThe HTTP code is: ")
        msgdet_list.append(str(httpcode))
        if httpcode//200 != 1:
            error_list.append(str("No HTTP 2XX Code"))

        # Get the Headers as a dictionary-like object
        headers = response.info()
        msgdet_list.append("\nThe Headers are:")
        msgdet_list.append(str(headers))
        if response.info() == "":
            error_list.append(str("Header Error"))

        # Get the date of request and compare to UTC (DD MMM YYYY HH MM)
        date = response.info()['date']
        msgdet_list.append("The Date is: ")
        msgdet_list.append(str(date))
        returndate = str(date.split()[1:5])
        returndate = re.sub(r'[^\w\s]', '', returndate)
        returndate = returndate[:-2]
        currentdate = datetime.utcnow()
        currentdate = currentdate.strftime("%d %b %Y %H%M")
        if returndate != currentdate:
            date_error = ("Date Error. Returned Date: ", returndate, "Expected Date: ", currentdate, "Times in UTC (DD MMM YYYY HH MM)")
            date_error = str(date_error)
            date_error = re.sub(r'[^\w\s]', '', date_error)
            error_list.append(str(date_error))

        # Get the server
        headerserver = response.info()['server']
        msgdet_list.append("\nThe Server is: ")
        msgdet_list.append(str(headerserver))
        if pageserver not in headerserver:
            error_list.append(str("Server Error"))

        # Get all HTML data and confirm no major change to content size by character length (global var: htmlchar).
        html = response.read()
        htmllength = len(html)
        msgdet_list.append("\nHTML Length is: ")
        msgdet_list.append(str(htmllength))
        msgdet_list.append("\nThe Full HTML is: ")
        msgdet_list.append(str(html))
        msgdet_list.append("\n")
        if htmllength // htmlchar != 1:
            error_list.append(str("Page HTML Error - incorrect # of characters"))
        if adfslink not in str(acturl):
            error_list.append(str("ADFS Link Error"))

        error_list.append("\n")
        error_count = len(error_list)
        if error_count == 1:
            error_list.insert(0, 'No Errors Found.')
        elif error_count == 2:
            error_list.insert(0, 'Error Found:')
        else:
            error_list.insert(0, 'Multiple Errors Found:')

        # Pass completed results and data to the notification() module
        notification(msgdet_list, error_list, error_count)

# Use AWS SNS to create a notification email with the additional data generated
def notification(msgdet_list, error_list, errors):
    datacheck = str("\n".join(msgdet_list))
    errorcheck = str("\n".join(error_list))
    notificationbody = str(errorcheck + datacheck)
    if errors > 1:
        result = 'FAILED!'
    else:
        result = 'passed.'
    notificationheader = ('The daily ADFS check has been marked as ' + result + ' ' + str(errors) + ' ' + str(error_list))
    if result != 'passed.':
        # message = sns.publish(
        #     TopicArn = snsarn,
        #     Subject = notificationheader,
        #     Message = notificationbody
        # )
        # Output result to CloudWatch logstream
        print('Response: ' + notificationheader)
    else:
        print('passed')
    sys.exit()

# Trigger the Lambda handler
def lambda_handler(event, context):
    aws_account_ids = [context.invoked_function_arn.split(":")[4]]
    pagecheck()
    return "Successful"
    sys.exit()
Your CloudWatch logs contain the following error message:
Process exited before completing request
This is caused by invoking sys.exit() in your code. Locally, your Python interpreter just terminates when it encounters such a sys.exit().
AWS Lambda, on the other hand, expects a Python function to simply return, and treats sys.exit() as an error. As your function probably got invoked asynchronously, AWS Lambda retries executing it twice.
To solve your problem, you can replace the occurrences of sys.exit() with return, or even better, just remove the sys.exit() calls, as there are already implicit returns in the places where you use sys.exit().
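As a minimal sketch of that fix (the ADFS-specific logic is stripped out, so the names here are illustrative only):
import sys

def notification(errors):
    if errors:
        print('Response: FAILED! {} error(s) found'.format(errors))
    else:
        print('passed')
    # No sys.exit() here: falling off the end returns None implicitly.

def lambda_handler(event, context):
    notification(errors=0)
    return "Successful"  # a plain return instead of sys.exit()

# When run locally (outside Lambda), an explicit exit is still fine:
if __name__ == '__main__':
    lambda_handler({}, None)
    sys.exit(0)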

Python: How to get HTTP header using RAW_Sockets

I'm a beginner in Python and I would like to build a simple port sniffer.
For this purpose I'm using code from this site as an example: Simple packege snffer using python
I would like to unpack bytes from the socket to extract the HTTP header, using the function struct.unpack().
What format string should I use to unpack the HTTP header (e.g. '!BBH', "!BBHHHBBH4s4s", '!HHLLBBHHH')?
The HTTP header is not fixed-length, so you'll need to parse it another way. For example:
import logging

log = logging.getLogger(__name__)

def parse_http(data):
    lines = data.split(b'\r\n')
    if len(lines) < 1:
        log.error('Invalid http header: %s', lines)
        return
    request = lines[0]
    header = {}
    rest = []
    in_header = True
    for line in lines[1:]:
        if line == b'':
            in_header = False
            continue
        if in_header:
            try:
                key, val = line.split(b': ')
            except ValueError:
                log.error('Invalid header line: %s', line)
                continue
            header[key] = val
        else:
            rest.append(line)
    return request, header, b'\r\n'.join(rest)
In order to detect an HTTP packet, you could check whether the payload starts with POST, GET, HTTP, etc.
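For example, a rough check (a sketch only; real traffic may be fragmented across packets, so matching the start of the payload is just a heuristic):
def looks_like_http(payload):
    """Heuristic: does this TCP payload start like an HTTP request or response?"""
    prefixes = (b'GET ', b'POST ', b'PUT ', b'DELETE ', b'HEAD ', b'OPTIONS ', b'HTTP/')
    return payload.startswith(prefixes)

print(looks_like_http(b'GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n'))  # True
print(looks_like_http(b'\x16\x03\x01'))  # False (TLS handshake)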

File displaying only one form of request and response

I am generating a file that should contain two different requests and two different responses, taken from two different SOAP request test steps:
TestRegion
TestRules
However, I noticed the file just contains the TestRegion request and response, each repeated twice: for each request it displays the TestRegion request, and for each response it displays the TestRegion response. Why is it doing this, and how can I get the correct requests and responses to show? The hard-coded strings such as TEST REGION REQUEST: and TEST REGION RESPONSE are displayed correctly, but the request and response content is wrong.
def testRegionRequest = context.expand( '${${TestRegion}#Request}' )
def testRegionResponse = context.expand( '${${TestRegion}#Response}' )
def testRulesRequest = context.expand( '${${TestRules}#Request}' )
def testRulesResponse = context.expand( '${${TestRules}#Response}' )
def fileName = "XXX.txt"
def logFile = new File(fileName)
//Draws a line
def drawLine(def letter = '=', def count = 70) { letter.multiply(count)}
def testResult = new StringBuffer()
testResult.append drawLine('-', 60) + "\n\n"
testResult.append "\n\nTEST REGION REQUEST:\n\n"
testResult.append(testRegionRequest.toString())
testResult.append "\n\n" + drawLine('-', 60) + "\n\n"
testResult.append "\n\nTEST REGION RESPONSE:\n\n"
testResult.append(testRegionResponse.toString())
testResult.append "\n\n" + drawLine('-', 60) + "\n\n"
testResult.append "\n\nTEST RULES REQUEST:\n\n"
testResult.append(testRulesRequest.toString())
testResult.append "\n\n" + drawLine('-', 60) + "\n\n"
testResult.append "\n\nTEST RULES RESPONSE:\n\n"
testResult.append(testRulesResponse.toString())
// Now create and record the result file
logFile.write(testResult.toString())
It appears that you have incorrect values in your context.expand expressions.
Change from:
${${TestRegion}#Request}
To:
${TestRegion#Request}
And the same applies to the other properties as well.

BurpSuite API - Get Response from edited requests

I have a problem with the Burp Suite API: I can't find a proper function to print out the response for edited requests. I'm developing a new plugin for Burp Suite with Python. My script simply takes requests from the proxy, edits the headers, and sends them again.
from burp import IBurpExtender
from burp import IHttpListener
import re, urllib2

class BurpExtender(IBurpExtender, IHttpListener):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Burp Plugin Python Demo")
        callbacks.registerHttpListener(self)
        return

    def processHttpMessage(self, toolFlag, messageIsRequest, currentRequest):
        # only process requests
        if messageIsRequest:
            requestInfo = self._helpers.analyzeRequest(currentRequest)
            #timestamp = datetime.now()
            #print "Intercepting message at:", timestamp.isoformat()
            headers = requestInfo.getHeaders()
            #print url
            if requestInfo.getMethod() == "GET":
                print "GET"
                print requestInfo.getUrl()
                response = urllib2.urlopen(requestInfo.getUrl())
                print response
            elif requestInfo.getMethod() == "POST":
                print "POST"
                print requestInfo.getUrl()
                #for header in headers:
                #    print header
                bodyBytes = currentRequest.getRequest()[requestInfo.getBodyOffset():]
                bodyStr = self._helpers.bytesToString(bodyBytes)
                bodyStr = re.sub(r'=(\w+)', '=<xss>', bodyStr)
                newMsgBody = bodyStr
                newMessage = self._helpers.buildHttpMessage(headers, newMsgBody)
                print "Sending modified message:"
                print "----------------------------------------------"
                print self._helpers.bytesToString(newMessage)
                print "----------------------------------------------\n\n"
                currentRequest.setRequest(newMessage)
        return
You need to print the response, but you don't do anything when messageIsRequest is false. When messageIsRequest is false, it means that currentRequest is a response, and you can print the response just as you did for the request. I did it like this (translated from my Java):
def processHttpMessage(self, toolFlag, messageIsRequest, httpRequestResponse):
    if messageIsRequest:
        # ... process the request as before ...
        pass
    else:
        HTTPMessage = httpRequestResponse.getResponse()
        print HTTPMessage
There is even a method that lets you bind request and response together when using a proxy. It can be found in IInterceptedProxyMessage:
/**
 * This method retrieves a unique reference number for this
 * request/response.
 *
 * @return An identifier that is unique to a single request/response pair.
 * Extensions can use this to correlate details of requests and responses
 * and perform processing on the response message accordingly.
 */
int getMessageReference();
I don't think it is supported for HTTPListeners.
I am writing my extensions in Java and tried to translate to Python for this answer. I haven't tested this code, and some bugs might have been introduced in translation.
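For completeness, a minimal proxy-listener sketch (my own addition, equally untested) showing how getMessageReference() can correlate a request with its response:
# Jython 2 syntax, as Burp extensions run under Jython.
from burp import IBurpExtender, IProxyListener

class BurpExtender(IBurpExtender, IProxyListener):
    def registerExtenderCallbacks(self, callbacks):
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Proxy pair logger")
        callbacks.registerProxyListener(self)

    def processProxyMessage(self, messageIsRequest, message):
        # The reference number is shared by a request and its response.
        ref = message.getMessageReference()
        if not messageIsRequest:
            print "response for message %d:" % ref
            print self._helpers.bytesToString(message.getMessageInfo().getResponse())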
