Error 500: can anyone guess why I am getting this when sending this request from Robot Framework (Python), while it works using Postman?

Code:
Make request for getBookingOptions with valid user
    # [Arguments]    ${VALID_USER}    ${VALID_PASSWORD}    ${VALID_EMAIL}    ${MEETING_DATE}    ${MEETING_TIME}    ${MEETING_DURATION}    ${TIMEZONE_OFFSET}    ${ROOMS}    ${REQUIRED_INVITEES}
    ${body} =    Create Dictionary    userId=45646546456    password=fgdfgdfg    email=mohammednasir.ali@istrbc.com    meetingDate=2022-08-30    meetingTime=2022-08-30T16:00:00.000    meetingDuration=30    timeZoneOffset=-14400
    ${header} =    Create Dictionary    Content-Type=application/json
    ${body} =    Evaluate    json.dumps(${body})    json
    ${response} =    Post Request    CEA    /getBookingOptions    json=${body}    headers=${header}
    Log    ${RESPONSE_DATA}
    Set Test Variable    ${response}    ${response}
    Set Global Variable    ${RESPONSE_DATA}    ${response.json()}
Robot Framework generated:
POST Request using : uri=/getBookingOptions, params=None, files=None, allow_redirects=True, timeout=None
headers={'User-Agent': 'python-requests/2.23.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Type': 'application/json'}
data=None
json={"userId": "64554646", "password": "Disdcovdgery7", "email": "nasir@google.com", "meetingDate": "2022-08-30", "meetingTime": "2022-08-30T16:00:00.000", "meetingDuration": "30", "timeZoneOffset": "-14400"}
Output:
{'message': 'invalid request', 'status': 'failure'}
${response} = <Response [500]>
But the same kind of JSON works fine in Postman. I am unable to find the reason. Can anyone guess, please?
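One guess worth testing (an assumption on my part, not a confirmed diagnosis): Create Dictionary stores every value as a string, so the logged body sends meetingDuration and timeZoneOffset as "30" and "-14400", while a raw JSON body in Postman would typically send them as bare numbers. A minimal plain-requests sketch to check that hypothesis, with a placeholder base URL standing in for the CEA session:

import requests

BASE_URL = "https://example.com"  # placeholder for the real CEA host

body = {
    "userId": "64554646",
    "password": "Disdcovdgery7",
    "email": "nasir@google.com",
    "meetingDate": "2022-08-30",
    "meetingTime": "2022-08-30T16:00:00.000",
    "meetingDuration": 30,      # JSON number, not the string "30"
    "timeZoneOffset": -14400,   # JSON number, not the string "-14400"
}

# json= serializes the dict and sets Content-Type: application/json for us
response = requests.post(f"{BASE_URL}/getBookingOptions", json=body)
print(response.status_code, response.text)

If this returns 200 while the string-valued body returns 500, the fix on the Robot side is to build those two values as integers (e.g. with Convert To Integer) before serializing.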

Related

HTTP Request using Python

For those reading my thread, I'd like to thank you in advance for your assistance, and I'd also like to ask for a bit of leniency when it comes to incorrect terminology, as I am still a newbie.
I've been trying to retrieve stock codes from the KRX website, as I could not find any other resource with the information I need. I tried to use the requests library in Python, but the data I needed was loaded asynchronously, which made it inaccessible to a plain request.
The problem is that in order to retrieve the information, I need to make two requests to an endpoint: one to retrieve a code to be used in the body of the second request. But when I make the second request, it returns an empty list.
I managed to locate the API calls which retrieved the stock codes as shown below.
[screenshot: the two API calls observed in the browser's network tab]
To my knowledge, it requires two API calls: one to retrieve a code, which works as an access token for the second request that retrieves the stock codes I am after.
I've managed to retrieve the code for the first request with the following code:
import requests

url = 'https://global.krx.co.kr/contents/COM/GenerateOTP.jspx'
headers = {
    'Cookie': 'SCOUTER=x22rkf7ltsmr7l; __utma=88009422.986813715.1652669493.1652669493.1652669493.1; SCOUTER=z6pj0p85muce99; JSESSIONID=bOnAJtLWSpK1BiCuhWD0ldj1TqW5z6wEcn65oVgtyie841OlbdJs3fEHpUs1QtAV.bWRjX2RvbWFpbi9tZGNvd2FwMS1tZGNhcHAwMQ==; JSESSIONID=C2794518AD56B7119F0DA630B73B05AA.58tomcat2',
    'Connection': 'keep-alive',
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,ko;q=0.8',
    'host': 'global.krx.co.kr',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
}
params = {
    'bld': 'COM/stock_isu_info',
    'name': 'finderBld',
    '_': '1668677450106',
}
# make a GET request to the URL and keep the connection open
response = requests.get(url, headers=headers, params=params, stream=True)
relay_data = response.text
but upon sending a request to the second endpoint with the code as payload, it returns an empty list, whereas I was expecting a response like the following:
[screenshot: the expected response payload]
The code I used to make the second request is below (I added lots of header and body values in hopes of retrieving the data by simulating the values used on the web page):
url = 'https://global.krx.co.kr/contents/GLB/99/GLB99000001.jspx'
headers = {
    # ':authority': 'global.krx.co.kr',
    # ':method': 'POST',
    # ':path': '/contents/GLB/99/GLB99000001.jspx',
    # ':scheme': 'https',
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,ko;q=0.8',
    # 'content-length': '0',  # hard-coding this is risky: requests computes the real length of the form body itself
    'content-type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'Cookie': 'SCOUTER=x22rkf7ltsmr7l; __utma=88009422.986813715.1652669493.1652669493.1652669493.1; SCOUTER=z6pj0p85muce99; JSESSIONID=bOnAJtLWSpK1BiCuhWD0ldj1TqW5z6wEcn65oVgtyie841OlbdJs3fEHpUs1QtAV.bWRjX2RvbWFpbi9tZGNvd2FwMS1tZGNhcHAwMQ==; JSESSIONID=C2794518AD56B7119F0DA630B73B05AA.58tomcat2',
    'origin': 'https://global.krx.co.kr',
    'referer': 'https://global.krx.co.kr/contents/GLB/99/GLB99000001.jsp',
    'sec-ch-ua': '"Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"',
    'sec-ch-ua-mobile': '?0',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'sec-gpc': '1',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
}
payload = {
    'market_gubun': '0',
    'isu_cdnm': 'All',
    'isu_cd': '',
    'isu_nm': '',
    'isu_srt_cd': '',
    'sort': '',
    'ck_std_ind_cd': '20',
    'par_pr': '',
    'cpta_scl': '',
    'sttl_trm': '',
    'lst_stk_vl': '1',
    'in_lst_stk_vl': '',
    'in_lst_stk_vl2': '',
    'cpt': '1',
    'in_cpt': '',
    'in_cpt2': '',
    'nat_tot_amt': '1',
    'in_nat_tot_amt': '',
    'in_nat_tot_amt2': '',
    'pagePath': '/contents/GLB/03/0308/0308010000/GLB0308010000.jsp',
    'code': relay_data,
    'pageFirstCall': 'Y',
}
# make the request with url, headers, and form body
response = requests.post(url, headers=headers, data=payload)
print(response.text)
And here is the output for the code above:
{"DS1":[]}
Any help would be very much appreciated.
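One plausible fix (a sketch under the assumption that the empty list comes from the OTP code and the session cookies not being tied together, not a verified answer): issue both calls from a single requests.Session, so the cookies set by the OTP request travel with the data request, and let requests compute Content-Length itself:

import requests

session = requests.Session()  # carries cookies from the first call into the second
session.headers.update({
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
    'referer': 'https://global.krx.co.kr/contents/GLB/99/GLB99000001.jsp',
    'x-requested-with': 'XMLHttpRequest',
})

# step 1: fetch the one-time code
otp_url = 'https://global.krx.co.kr/contents/COM/GenerateOTP.jspx'
relay_data = session.get(otp_url, params={'bld': 'COM/stock_isu_info', 'name': 'finderBld'}).text

# step 2: post it straight back, together with the form fields shown above
data_url = 'https://global.krx.co.kr/contents/GLB/99/GLB99000001.jspx'
payload = {'code': relay_data, 'pageFirstCall': 'Y'}  # plus the other market/filter fields as needed
response = session.post(data_url, data=payload)
print(response.json())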

Robot Framework - Unsupported Media Type error

I have the following json file:
{
    "company": [
        {
            "name": "My company",
            "security": "WPA2-PSK"
        }
    ],
    "name": "one name"
}
I'm trying to send a POST based on this JSON file, but I'm getting an error.
My code looks like this:
Create new company
    ${headers} =    Create Dictionary    Accept=application/json    Content-Type=application/json
    ${body} =    Get File    create.json
    Create Session    http_session    ${url}    disable_warnings=${True}
    ${response} =    POST On Session    http_session    ${company}    data=${body}    headers=${headers}
But I get the following error:
POST Response:
headers={'content-type': 'application/json', 'vary': 'Accept, Origin, Cookie', 'allow': 'GET, POST, HEAD, OPTIONS', 'x-frame-options': 'DENY', 'content-length': '52', 'x-content-type-options': 'nosniff', 'referrer-policy': 'same-origin', 'set-cookie': '64bc588b56c7114f53411b945693e29ba=55323dcbd60f99b6aa8cfcb0f44f578e; path=/; HttpOnly; Secure; SameSite=None'}
body={"detail":"Unsupported media type \"\" in request."}
HTTPError: 415 Client Error: Unsupported Media Type for url: myUrl
So I did:
Create new company
    ${headers} =    Create Dictionary    Accept=application/json    Content-Type=application/json
    ${json1} =    Get File    create.json
    ${json} =    Evaluate    json.dumps(${json1})
    ${body} =    Evaluate    json.loads('''${json}''')    json
    Create Session    http_session    ${url}    disable_warnings=${True}
    ${response} =    POST On Session    http_session    ${company}    data=${body}    headers=${headers}
but I got the same error.
What did I do wrong?
It was a problem with converting the JSON. I did it like this:
    ${body} =    Evaluate    json.loads("""${json}""")
Now it works.
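For comparison, a plain-requests sketch of what the keyword needs to end up doing (the URL is a placeholder): parse the file into a real dict and hand it to json=, which serializes it and sets Content-Type: application/json in one step, avoiding the empty media type that triggered the 415.

import json
import requests

# read and parse the JSON file into a Python dict
with open('create.json') as f:
    body = json.load(f)

# json= serializes the dict and sets the Content-Type header automatically
response = requests.post('https://example.com/company', json=body)
print(response.status_code, response.text)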

Python: call a REST API to get data from a URL

I've created a Bash script that gets data from a URL via the REST API of an appliance: it logs in with a username and password, saves the session ID into a variable, and then uses that session ID to fetch the data in CSV format. This works fine.
I want to convert the Bash code to Python 3, as I'm parsing the data with pandas.
Bash Code:
#!/bin/bash
sessionID=$(curl -k -H "accept: application/json" -H "content-type: application/json" -H "x-api-version: 120" -d '{"userName":"administrator","password":"adminpass"}' -X POST https://hpe.sysnergy.com/rest/login-sessions | jq -r ".sessionID")
curl -k -H 'accept: application/json' \
-H 'content-type: text/csv' \
-H 'x-api-version: 2' \
-H "auth: $sessionID" \
-X GET https://hpe.sysnergy.com/rest/resource-alerts
Python version of the code I tried:
#!/usr/bin/python3
import requests
import json
url = "https://hpe.sysnergy.com/rest/login-sessions"
data = {'username': 'administrator', 'password': 'adminpass'}
headers = {'Content-type': 'text/csv', 'Accept': 'application/json', 'x-api-version': 2}
r = requests.post(url, data=json.dumps(data), headers=headers)
print(r)
I am getting below error:
Error:
requests.exceptions.InvalidHeader: Value for header {x-api-version: 2} must be of type str or bytes, not <class 'int'>
If I convert the int to the str '2', it gives another SSL error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='hpe.synerg.com', port=443): Max retries exceeded with url: /rest/login-sessions (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:877)'),))
EDIT:
I have tried a slightly different approach to mirror the Bash code more closely in Python, but now it returns a new error with a new response code.
import os
import requests

sessionID = os.getenv('sessionID')
headers = {
    'accept': 'application/json',
    'content-type': 'text/csv',
    'x-api-version': '2',
    'auth': f"{sessionID}",
}
data = '{"userName":"administrator","password":"adminpassword"}'
response = requests.post('https://hpe.synergy.com/rest/login-sessions', headers=headers, data=data, verify=False)
print(response)
Error:
/python3/lib64/python3.6/site-packages/urllib3/connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host 'hpe.synergy.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
<Response [415]>
Please help or suggest a way to achieve the same function in Python.
You first need to make a POST request to get the sessionID, and then you need to make a GET request. Also note the headers are slightly different for the two requests. Something like this should work:
import requests

session = requests.Session()

url = "https://hpe.sysnergy.com/rest/login-sessions"
credentials = {"userName": "administrator", "password": "adminpass"}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "x-api-version": "120",
}
response = session.post(url, headers=headers, json=credentials, verify=False)
session_id = response.json()["sessionID"]

url = "https://hpe.sysnergy.com/rest/resource-alerts"
headers = {
    "accept": "application/json",
    "content-type": "text/csv",
    "x-api-version": "2",
    "auth": session_id,
}
response = session.get(url, headers=headers, verify=False)
print(response)
# print(response.content)  # returns bytes
# print(response.text)     # returns string
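As a side note on the InsecureRequestWarning seen earlier: if the appliance uses a self-signed certificate, a cleaner option than verify=False is to point verify at the appliance's CA bundle; and if verify=False has to stay, the warning can be silenced explicitly. A short sketch (the certificate path is a placeholder):

import requests
import urllib3

# option 1: verify against the appliance's CA certificate instead of disabling checks
response = requests.get("https://hpe.sysnergy.com/rest/resource-alerts",
                        verify="/path/to/appliance-ca.pem")

# option 2: keep verify=False but suppress the InsecureRequestWarning
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)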

Request email audit export fails with status 400 and "Premature end of file."

According to https://developers.google.com/admin-sdk/email-audit/#creating_a_mailbox_for_export, I am trying to request the email audit export of a user in G Suite this way:
def requestAuditExport(account):
    credentials = getCredentials()
    http = credentials.authorize(httplib2.Http())
    url = 'https://apps-apis.google.com/a/feeds/compliance/audit/mail/export/helpling.com/' + account
    status, response = http.request(url, 'POST', headers={'Content-Type': 'application/atom+xml'})
    print(status)
    print(response)
And I get the following result:
{'content-length': '22', 'expires': 'Tue, 13 Dec 2016 14:19:37 GMT', 'date': 'Tue, 13 Dec 2016 14:19:37 GMT', 'x-frame-options': 'SAMEORIGIN', 'transfer-encoding': 'chunked', 'x-xss-protection': '1; mode=block', 'content-type': 'text/html; charset=UTF-8', 'x-content-type-options': 'nosniff', '-content-encoding': 'gzip', 'server': 'GSE', 'status': '400', 'cache-control': 'private, max-age=0', 'alt-svc': 'quic=":443"; ma=2592000; v="35,34"'}
b'Premature end of file.'
I cannot see where the problem is. Can someone please give me a hint?
Thanks in advance!
Kay
Fix it by going into the Admin Console, opening the Manage API client access page under Security, and adding the Client ID and the scope needed for the Directory API. For more information, check this document.
Okay, found out what was wrong and fixed it myself. Finally it looks like this:
http = getCredentials().authorize(httplib2.Http())
url = 'https://apps-apis.google.com/a/feeds/compliance/audit/mail/export/helpling.com/' + account
headers = {'Content-Type': 'application/atom+xml'}
xml_data = """<atom:entry xmlns:atom='http://www.w3.org/2005/Atom' xmlns:apps='http://schemas.google.com/apps/2006'>
    <apps:property name='includeDeleted' value='true'/>
</atom:entry>"""
status, response = http.request(url, 'POST', headers=headers, body=xml_data)
Not sure if it was the body or the header; presumably the endpoint was trying to parse an empty POST body as XML, hence the "Premature end of file" message. It works now, and I hope it will help others.
Thanks anyway.

404 Downloading OneDrive content from Microsoft Graph

I am trying to download the content of a OneDrive item via the Microsoft Graph API. However, no matter which method I use, I get 404 responses. Here is a reproduction of the problem in Python/requests:
import requests
import json
root_url = "https://graph.microsoft.com"
base_path = "/v1.0/<tenant_id>/users/<principal_name>/drive/"
token = "ALONGTOKEN"
headers = {"Authorization": "Bearer %s" % token}
r = requests.get(root_url + base_path + "/root/children", headers=headers)
listing = json.loads(r.text)
target = listing["value"][0]
print("Target node:")
print(json.dumps(target))
print("Target node id:")
print(target["id"])
r = requests.get(root_url + base_path + "items/" + target["id"], headers=headers)
print("Target metadata:")
print(r.text)
resp = json.loads(r.text)
download_url = resp["#microsoft.graph.downloadUrl"]
print("Target download url:")
print(download_url)
r = requests.get(download_url, headers=headers)
print("Download response code:")
print(r.status_code)
print("Download response headers:")
print(r.headers)
print("Download response cookies:")
print(r.cookies)
print("Download response redirect history:")
print(r.history)
outputs the following:
Target node:
{"parentReference": {"driveId": "drive_id", "path": "/drive/root:", "id": "parent_id"}, "cTag": "\"c:{tag},1\"", "lastModifiedDateTime": "2016-08-24T17:32:45Z", "name": "birds.png", "createdDateTime": "2016-08-24T17:32:45Z", "image": {}, "webUrl": "https://org-my.sharepoint.com/personal/principal_name/Documents/birds.png", "lastModifiedBy": {"user": {"displayName": "User Name", "id": "user_id"}}, "eTag": "\"{etag},1\"", "createdBy": {"user": {"displayName": "User Name", "id": "user_id"}}, "#microsoft.graph.downloadUrl": "https://org-my.sharepoint.com/personal/principal_name/_layouts/15/download.aspx?guestaccesstoken=access_token&docid=did&expiration=2016-09-01T17%3a12%3a14.000Z&userid=uid&authurl=True&NeverAuth=True", "file": {"hashes": {}}, "id": "01L4SXJGJ2LR2PGPKJMVGZPHIADCAYJEFE", "size": 34038}
Target node id:
01L4SXJGJ2LR2PGPKJMVGZPHIADCAYJEFE
Target metadata:
{"#odata.context":"https://graph.microsoft.com/v1.0/$metadata#users('principal_name')/drive/items/$entity","#microsoft.graph.downloadUrl":"https://org-my.sharepoint.com/personal/principal_name/_layouts/15/download.aspx?guestaccesstoken=accesstoken&docid=docid&expiration=2016-09-01T17%3a12%3a15.000Z&userid=uid&authurl=True&NeverAuth=True","createdBy":{"user":{"id":"user_id","displayName":"User Name"}},"createdDateTime":"2016-08-24T17:32:45Z","eTag":"\"{etag},1\"","id":"01L4SXJGJ2LR2PGPKJMVGZPHIADCAYJEFE","lastModifiedBy":{"user":{"id":"user_id","displayName":"User Name"}},"lastModifiedDateTime":"2016-08-24T17:32:45Z","name":"birds.png","webUrl":"https://org-my.sharepoint.com/personal/principal_name/Documents/birds.png","cTag":"\"c:{ctag},1\"","file":{"hashes":{}},"image":{},"parentReference":{"driveId":"drive_id","id":"parent_id","path":"/drive/root:"},"size":34038}
Target download url:
https://org-my.sharepoint.com/personal/principal_name/_layouts/15/download.aspx?guestaccesstoken=accesstoken&docid=docid&expiration=2016-09-01T17%3a12%3a15.000Z&userid=uid&authurl=True&NeverAuth=True
Download response code:
404
Download response headers:
{'Content-Length': '13702', 'SPIisLatency': '4', 'X-Content-Type-Options': 'nosniff', 'X-AspNet-Version': '4.0.30319', 'request-id': '288b9f9d-c04a-2000-133b-ebab2f6f332b', 'Strict-Transport-Security': 'max-age=31536000', 'MicrosoftSharePointTeamServices': '16.0.0.5625', 'X-Powered-By': 'ASP.NET', 'SPRequestGuid': '288b9f9d-c04a-2000-133b-ebab2f6f332b', 'Server': 'Microsoft-IIS/8.5', 'X-MS-InvokeApp': '1; RequireReadOnly', 'X-SharePointHealthScore': '0', 'SPRequestDuration': '297', 'SharePointError': '0', 'Cache-Control': 'private', 'Date': 'Thu, 01 Sep 2016 16:12:14 GMT', 'P3P': 'CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"', 'Content-Type': 'text/html; charset=utf-8'}
Download response cookies:
<RequestsCookieJar[]>
Download response redirect history:
[]
Which is to say that immediately following the link results in a 404, even though it is supposed to return the file bytes. I have reproduced this in Java, Python, Bash/curl, and in the browser. Can anybody point out what I am doing wrong, or is this a problem with the Microsoft Graph API?
EDIT:
I can also reproduce the same 404 using the /drive/items/{item-id}/content endpoint described here. The request to this endpoint results in a 302 redirect (as described in the documentation), which, when followed, results in the same 404 behavior as described above.
EDIT2:
Here are all the request-ids I could find in the response headers that looked useful for debugging from Microsoft's side.
For the 200 request on the item object: 'request-id': 'adfa3492-4825-439d-8e59-022f32e78244', 'client-request-id': 'adfa3492-4825-439d-8e59-022f32e78244'
For the 404 request on the download url: 'request-id': '33e09e9d-b0c2-2000-133c-304585c15000', 'SPRequestGuid': '33e09e9d-b0c2-2000-133c-304585c15000',
Additionally, the actual HTML returned from the 404 includes Correlation ID: a8e09e9d-a0bb-2000-133b-ef6fc8ac7015
File download is currently only supported with delegated permissions (e.g. File.Read scope) as documented here. Your request was made with application permissions Files.Read.All and Files.ReadWrite.All, which we're gradually adding support for, but they're not yet fully functional and are not listed here.
Can you check that item_id is in fact the ID of an item? If you are working off the collection returned by GET /v1.0/users//drive/items/, the collection will return an array of folder and item metadata. If you try your request against a folder, you will get the 404 you've described. That's the only way I can repro your issue. If that is not it, please provide the request/response trace so that we can see the error details.
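For reference, a minimal sketch of the download path once a delegated token is in hand (token acquisition elided; /me/drive assumes the signed-in user's own drive, and the item ID below is the one from the question):

import requests

token = "A_DELEGATED_ACCESS_TOKEN"  # must carry a delegated scope such as Files.Read
item_id = "01L4SXJGJ2LR2PGPKJMVGZPHIADCAYJEFE"
url = f"https://graph.microsoft.com/v1.0/me/drive/items/{item_id}/content"

# requests follows the 302 to the pre-authenticated download URL automatically,
# dropping the Authorization header when the redirect leaves graph.microsoft.com
r = requests.get(url, headers={"Authorization": f"Bearer {token}"})
r.raise_for_status()
with open("birds.png", "wb") as f:
    f.write(r.content)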
