I've created a Bash script that logs in to an appliance's REST API with a username and password, saves the returned session ID in a variable, and then uses that session ID to fetch data in CSV format. This works fine.
I want to convert the Bash code to Python 3, since I'm parsing the output with pandas.
Bash Code:
#!/bin/bash
sessionID=$(curl -k \
-H "accept: application/json" \
-H "content-type: application/json" \
-H "x-api-version: 120" \
-d '{"userName":"administrator","password":"adminpass"}' \
-X POST https://hpe.sysnergy.com/rest/login-sessions | jq -r ".sessionID")
curl -k -H 'accept: application/json' \
-H 'content-type: text/csv' \
-H 'x-api-version: 2' \
-H "auth: $sessionID" \
-X GET https://hpe.sysnergy.com/rest/resource-alerts
Python version of the code I tried:
#!/usr/bin/python3
import requests
import json
url = "https://hpe.sysnergy.com/rest/login-sessions"
data = {'username': 'administrator', 'password': 'adminpass'}
headers = {'Content-type': 'text/csv', 'Accept': 'application/json', 'x-api-version': 2}
r = requests.post(url, data=json.dumps(data), headers=headers)
print(r)
I am getting the following error:
requests.exceptions.InvalidHeader: Value for header {x-api-version: 2} must be of type str or bytes, not <class 'int'>
If I convert the int to the string '2', it then gives another error, this time about SSL:
requests.exceptions.SSLError: HTTPSConnectionPool(host='hpe.synerg.com', port=443): Max retries exceeded with url: /rest/login-sessions (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:877)'),))
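For reference, this SSL failure is what curl's -k flag suppresses. The usual options are to point verify at the appliance's CA bundle or, for testing only, to disable verification entirely; a minimal sketch reusing url, data, and headers from the snippet above (the bundle path is a placeholder, and all header values must be strings):
import requests
# Preferred: verify against the appliance's CA certificate
# ("/path/to/appliance-ca.pem" is a placeholder for your environment).
r = requests.post(url, json=data, headers=headers, verify="/path/to/appliance-ca.pem")
# Testing only: skip verification, mirroring curl's -k.
# r = requests.post(url, json=data, headers=headers, verify=False)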
EDIT:
I have tried a slightly different approach to mirror the Bash request in Python, but now it returns a new error with a different response code.
import os
import requests
sessionID = os.getenv('sessionID')
headers = {
'accept': 'application/json',
'content-type': 'text/csv',
'x-api-version': '2',
'auth': f"{sessionID}",
}
data = '{"userName":"administrator","password":"adminpassword"}'
response = requests.post('https://hpe.synergy.com/rest/login-sessions', headers=headers, data=data, verify=False)
print(response)
Error:
/python3/lib64/python3.6/site-packages/urllib3/connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host 'hpe.synergy.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
<Response [415]>
Please help or suggest a way to achieve the same functionality in Python.
You first need to make a POST request to get the sessionID, then you need to make a GET request to fetch the alerts. Also note that the headers are slightly different for the two requests. Something like this should work:
import requests
session = requests.Session()
url = "https://hpe.sysnergy.com/rest/login-sessions"
credentials = {"userName": "administrator", "password": "adminpass"}
headers = {"accept": "application/json",
"content-type": "application/json",
"x-api-version": "120",
}
response = session.post(url, headers=headers, json=credentials, verify=False)
session_id = response.json()["sessionID"]
url = "https://hpe.sysnergy.com/rest/resource-alerts"
headers = {"accept": "application/json",
"content-type": "text/csv",
"x-api-version": "2",
"auth": session_id,
}
response = session.get(url, headers=headers, verify=False)
print(response)
#print(response.content) # returns bytes
#print(response.text) # returns string
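Since the end goal is parsing with pandas, the CSV body can be loaded straight into a DataFrame; a short sketch, assuming the endpoint really returns CSV text:
import io
import pandas as pd
# Parse the CSV body of the alerts response into a DataFrame.
df = pd.read_csv(io.StringIO(response.text))
print(df.head())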
I have this command:
import requests
url = "https://api.opensubtitles.com/api/v1/download"
payload = {"file_id": id_to_download}
headers = {
"Content-Type": "application/json",
"Api-Key": "myApiKey",
"Authorization": "Bearer myApiKey"
}
response = requests.request("POST", url, json=payload, headers=headers)
print(response.text)
That returns
{
"message":"You cannot consume this service"
}
When the console version works perfectly:
curl --request POST --header 'Api-Key: myApiKey' --url https://api.opensubtitles.com/api/v1/download --header 'Content-Type: application/json, Authorization: Bearer undefined' --data '{"file_id": 934267}'
{"link":"https://www.opensubtitles.com/download/901E4D16AF81FF191D37B3D10AD6517A2ECE30F77679205199EF4742C5595022275ADBA60A53E73F444E251FA5B71825CA101C199D469B02264AFCCC46F1AAAF966A8197FA479E70CC58EE2D1D89FFCB04226FB33DCECBBB3BFF04F888E5CAC73C8D9813FCF84245B7AC80F9B5B18E386524F881292F0EFE45A534879E2AC7D6B92BB55BF6F5E948F6D1A586809E5723BFDA861BB0E6E842AAFB71D5A74ADC9BFB95C067D7B853C9BA2C5819726E5D90536DA0AC9EBB282602133CBECF24E1DDC1337731FEB652A384059CA4D5452F62FC4325C7D75BDA6B9AE06CCE34A1DA872B15DD28BD0D90C548BB122C38ADF8267DA29F7418C8C5F6BDD3A423F8CC20904BC2D8960A1C0C9B30A9CE0EFDC65CCBC696EE74666CE631B17F1139C7D95507CFCAAF65B5D4370C/subfile/Magic.Mike.XXL.2015.720p.BluRay.x264-GECKOS.srt","file_name":"Magic.Mike.XXL.2015.720p.BluRay.x264-GECKOS.srt","requests":8,"remaining":92,"message":"Your quota will be renewed in 16 hours and 06 minutes (2022-10-24 01:25:09 UTC) ","reset_time":"16 hours and 06 minutes","reset_time_utc":"2022-10-24T01:25:09.000Z"}%
Notice the "requests":8,"remaining":92 part, so apparently this is not quota-related. All the other requests work; typically this one:
url = "https://api.opensubtitles.com/api/v1/subtitles"
querystring = {"query": movie_name,
"languages": "en"}
headers = {
"Content-Type": "application/json",
"Api-Key": "myApiKey"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
Works perfectly. Any idea what could make the POST request fail?
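One difference worth checking: curl sends its own User-Agent by default, while requests identifies itself as python-requests, which some services reject on certain endpoints. A sketch that mimics the curl request more closely (the User-Agent value, and the guess that this is the cause, are assumptions):
import requests
url = "https://api.opensubtitles.com/api/v1/download"
payload = {"file_id": id_to_download}
headers = {
    "Content-Type": "application/json",
    "Api-Key": "myApiKey",
    "Authorization": "Bearer myApiKey",
    # Assumption: the server filters on User-Agent; curl sends one by
    # default, while requests sends "python-requests/x.y.z".
    "User-Agent": "curl/7.68.0",
}
response = requests.post(url, json=payload, headers=headers)
print(response.status_code, response.text)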
I am struggling with Servant and its CORS configuration: I am exposing an API through Servant and I have the following configuration:
-- Wai application initialization logic
initializeApplication :: IO Application
initializeApplication = do
let frontCors = simpleCorsResourcePolicy { corsOrigins = Just ([pack "https://xxxx.cloudfront.net"], True)
, corsMethods = ["OPTIONS", "GET", "PUT", "POST"]
, corsRequestHeaders = simpleHeaders }
return
$ cors (const $ Just $ frontCors)
$ serve (Proxy #API)
$ hoistServer (Proxy #API) toHandler server
When I perform a query like this through Chromium (by copying and pasting):
curl 'https://api.xxx/' \
-H 'Accept: application/json, text/plain, */*' \
-H 'Referer: https://xxx.cloudfront.net' \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36' \
-H 'Authorization: Bearer XXX==' \
--compressed
It works, but if I copy-paste the fetch query into the dev console:
fetch("https://api.xxx", {
"headers": {
"accept": "application/json, text/plain, */*",
"authorization": "Bearer XXX=="
},
"referrer": "https://xxx.cloudfront.net/",
"referrerPolicy": "no-referrer-when-downgrade",
"body": null,
"method": "GET",
"mode": "cors",
"credentials": "include"
});
I get:
> Access to fetch at 'https://api.xxx' from origin 'https://xxx.cloudfront.net' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
polyfills-es2015.3eb4283ca820c86b1337.js:1 GET https://api.xxx net::ERR_FAILED
e.fetch # polyfills-es2015.3eb4283ca820c86b1337.js:1
> (anonymous) # VM20:1
> x:1 Uncaught (in promise) TypeError: Failed to fetch
Any hints regarding that? Especially why it works with curl but not in Chromium?
Thanks in advance.
It was a basic CORS issue: sending Authorization without listing it in corsRequestHeaders gets the request rejected at preflight.
I should have written:
, corsRequestHeaders = ["Authorization", "Content-Type"]
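To verify the fix from the client side, the browser's preflight can be replayed as a plain OPTIONS request; a minimal sketch in Python, using the placeholder URL and origin from the question:
import requests
# Simulate the CORS preflight Chromium sends before the actual GET.
resp = requests.options(
    "https://api.xxx/",
    headers={
        "Origin": "https://xxx.cloudfront.net",
        "Access-Control-Request-Method": "GET",
        "Access-Control-Request-Headers": "authorization",
    },
)
# With Authorization listed in corsRequestHeaders, these should be set.
print(resp.headers.get("Access-Control-Allow-Origin"))
print(resp.headers.get("Access-Control-Allow-Headers"))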
I am trying to convert a pisignage curl command into Python requests. The curl command is:
curl -X POST "https://swagger.piathome.com/api/files" \
-H "accept: application/json" \
-H "x-access-token: login_session_token" \
-H "Content-Type: multipart/form-data" \
-F "Upload file=@test.jpg;type=image/jpeg"
My code is:
import requests
files = {'Upload file': open('test.jpg', 'rb'), 'type': 'image/jpeg'}
headers = {'Content-type': 'multipart/form-data', 'accept': 'application/json', 'x-access-token': 'login_session_token'}
file_response = requests.post(
'https://swagger.piathome.com/api/files',
files=files,
headers=headers
)
print(file_response)
It returns a 404. I tried uncurl; the code is:
import uncurl
u = uncurl.parse('curl -X POST "https://swagger.piathome.com/api/files" -H "accept: application/json" -H "x-access-token: login_session_token" -H "Content-Type: multipart/form-data" -F "Upload file=test.jpg;type=image/jpeg"')
print(u)
The output is:
error: unrecognized arguments: -F Upload file=test.jpg;type=image/jpeg
After a day of searching, it turns out the Swagger documentation is incorrect. Use:
files = {
'assets': (open('test.jpg', 'rb'))
}
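A complete request with that field name might look like the sketch below; note that Content-Type is left out so requests can generate the multipart boundary itself (the token is a placeholder):
import requests
headers = {
    'accept': 'application/json',
    'x-access-token': 'login_session_token',
    # No explicit Content-Type: requests adds multipart/form-data
    # together with the required boundary parameter.
}
with open('test.jpg', 'rb') as f:
    files = {'assets': ('test.jpg', f, 'image/jpeg')}
    file_response = requests.post(
        'https://swagger.piathome.com/api/files',
        files=files,
        headers=headers,
    )
print(file_response.status_code)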
Try this:
import requests
headers = {
    'accept': 'application/json',
    'x-access-token': 'login_session_token',
    # Don't set Content-Type yourself: requests generates the
    # multipart/form-data header with the correct boundary.
}
files = {
    'Upload file': ('test.jpg', open('test.jpg', 'rb'), 'image/jpeg'),
}
response = requests.post('https://swagger.piathome.com/api/files', headers=headers, files=files)
There are also tools that parse a curl command into Python requests code.
I want to convert the (working) curl command to Python:
$ curl -X POST --header 'Content-Type: multipart/form-data' --header 'Accept: text/html; charset=utf-8; profile="https://www.mediawiki.org/wiki/Specs/HTML/1.7.0"' -F wikitext=%27%27%27Mahikari%27%27%27%20is%20a%20%5B%5BJapan%5D%5Dese%20%5B%5Bnew%20religious%20movement%5D%5D -F body_only=true 'https://en.wikipedia.org/api/rest_v1/transform/wikitext/to/html'
<p id="mwAQ">%27%27%27Mahikari%27%27%27%20is%20a%20%5B%5BJapan%5D%5Dese%20%5B%5Bnew%20religious%20movement%5D%5D</p>
Using requests, I set the first element of each tuple passed to files to None (apparently an overloaded feature, used here to send plain form fields rather than files), but I still can't get this code to work. It returns a 400:
import requests
import urllib.parse
text = """
The '''Runyon classification''' of nontuberculous [[mycobacteria]] based on the rate of growth, production of yellow pigment and whether this pigment was
produced in the dark or only after exposure to light.
It was introduced by Ernest Runyon in 1959.
"""
multipart_data = {
'wikitext': (None, urllib.parse.quote(text)),
'body_only': (None, 'true'),
}
url = 'https://en.wikipedia.org/api/rest_v1/transform/wikitext/to/html'
headers = {'Content-Type': 'multipart/form-data', 'Accept': 'text/html; charset=utf-8; profile="https://www.mediawiki.org/wiki/Specs/HTML/1.7.0"'}
r = requests.post(url, files=multipart_data, headers=headers)
if r.status_code == 200:
    print(r.text)
Update
I tried another solution using requests_toolbelt.multipart.encoder.MultipartEncoder and it still doesn't work:
import urllib.parse
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder
text = """
The '''Runyon classification''' of nontuberculous [[mycobacteria]] based on the rate of growth, production of yellow pigment and whether this pigment was produced in the dark or only after exposure to light.
It was introduced by Ernest Runyon in 1959.
"""
multipart_data = MultipartEncoder(
fields=(
('wikitext', urllib.parse.quote(text)),
('body_only', 'true'),
)
)
url = 'https://en.wikipedia.org/api/rest_v1/transform/wikitext/to/html'
headers = {'Content-Type': 'multipart/form-data', 'Accept': 'text/html; charset=utf-8; profile="https://www.mediawiki.org/wiki/Specs/HTML/1.7.0"'}
r = requests.post(url, data=multipart_data, headers=headers)
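For what it's worth, two details commonly cause a 400 here: setting Content-Type: multipart/form-data by hand omits the boundary parameter that requests (or the MultipartEncoder's own content_type attribute) would provide, and the wikitext arguably should not be percent-encoded, since the multipart body carries it verbatim. A sketch under those assumptions, reusing the Mahikari example from the curl command:
import requests
text = "'''Mahikari''' is a [[Japan]]ese [[new religious movement]]"
url = 'https://en.wikipedia.org/api/rest_v1/transform/wikitext/to/html'
# Only Accept is set explicitly; requests builds the multipart body
# and supplies the matching Content-Type boundary itself.
headers = {'Accept': 'text/html; charset=utf-8; profile="https://www.mediawiki.org/wiki/Specs/HTML/1.7.0"'}
multipart_data = {
    'wikitext': (None, text),   # raw wikitext, not urlencoded
    'body_only': (None, 'true'),
}
r = requests.post(url, files=multipart_data, headers=headers)
print(r.status_code)
print(r.text)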
This problem is kind of driving me crazy.
I'm writing a very simple Python 3 script to manage an API on a public website.
I am able to do it with curl, but not in Python.
I can't use the requests library or curl in my real environment; they are available only for testing.
This is working:
curl -d "credential_0=XXXX&credential_1=XXXXXX" -c cookiefile.txt https://XXXXXXXXXXXXXXX/LOGIN
curl -d 'json={"devices" : ["00:1A:1E:29:73:B2","00:1A:1E:29:73:B2"]}' -b cookiefile.txt -v https://XXXXXXXXX/api-path --trace-ascii /dev/stdout
and we can see this in the curl debug:
=> Send header, 298 bytes (0x12a)
0000: POST /api-path HTTP/1.1
0034: Host: XXXXXXXXXXXXXXXX
0056: User-Agent: curl/7.47.0
006f: Accept: */*
007c: Cookie: csrf_token=751b6bd9-0290-496b-820e-XXXXXXXX; session
00bc: =XXXXXX-6d29-4cf9-8907-XXXXXXXXXXXX
00e3: Content-Length: 60
00f7: Content-Type: application/x-www-form-urlencoded
0128:
=> Send data, 60 bytes (0x3c)
0000: json={"devices" : ["00:1A:1E:29:73:B2","00:1A:1E:29:73:B2"]}
== Info: upload completely sent off: 60 out of 60 bytes
This is the Python code replicating the second request, which is the problematic one:
string_query={"devices" : [ "34:FC:B9:CE:14:7E","00:1A:1E:29:73:B2" ]}
jsonbody_url=urllib.parse.urlencode(string_query)
jsonbody_url=jsonbody_url.encode("utf-8")
req=urllib.request.Request(url,data=jsonbody_url,headers={"Cookie" :
cookie,"Content-Type": "application/x-www-form-urlencoded","User-
Agent":"curl/7.47.0","charset":"UTF-8","Content-
length":len(jsonbody_url),
"Connection": "Keep-Alive"},method='POST')
And the server is completely ignoring the JSON content.
Everything else works: login and the other URL parameters of the same API.
Any ideas?
Try this:
import json
import requests

string_query = {"devices": ["34:FC:B9:CE:14:7E", "00:1A:1E:29:73:B2"]}
headers = {
    "Cookie": cookie,
    "Content-Type": "application/x-www-form-urlencoded",
    "User-Agent": "curl/7.47.0",
    "charset": "UTF-8",
    "Connection": "Keep-Alive",
}
# The curl command sends one form field named "json" whose value is a
# JSON string, so serialize the dict and wrap it the same way.
response = requests.post(url, data={"json": json.dumps(string_query)}, headers=headers)
print(response.content)
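Since the question says requests and curl are unavailable in the real environment, here is the same idea using only the standard library; it reproduces curl's -d 'json={...}' as a single form field named json (url and cookie are assumed to be defined as in the question):
import json
import urllib.parse
import urllib.request

string_query = {"devices": ["34:FC:B9:CE:14:7E", "00:1A:1E:29:73:B2"]}
# One form field called "json" whose value is the JSON-encoded dict,
# exactly what the curl trace shows being sent.
body = urllib.parse.urlencode({"json": json.dumps(string_query)}).encode("utf-8")
req = urllib.request.Request(
    url,  # assumed defined earlier, as in the question
    data=body,
    headers={"Cookie": cookie,  # cookie assumed defined earlier
             "Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))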