Error posting data with Python requests - python-3.x

While trying to post data with Python requests, an error is raised.
Actual form data from browser inspection console:
{"params":"query=&hitsPerPage=1000&facetFilters=%5B%5B%22catalogs%3A000buyvallencom%22%5D%2C%22active%3Atrue%22%2C%22slug%3A3m-05710-superbuff-pad-adapter-p8hg1vv3b6b2%22%2C%22active%3Atrue%22%5D"}
I tried the following:
import requests

session = requests.session()
data = {
    "params": "query=&hitsPerPage=1000&facetFilters=%5B%5B%22catalogs%3A000buyvallencom%22%5D%2C%22active%3Atrue%22%2C%22slug%3A3m-05710-superbuff-pad-adapter-p8hg1vv3b6b2%22%2C%22active%3Atrue%22%5D"
}
response = session.post('https://lcz09p4p1r-dsn.algolia.net/1/indexes/ecomm_production_products/query?x-algolia-agent=Algolia%20for%20AngularJS%203.32.0&x-algolia-application-id=LCZ09P4P1R&x-algolia-api-key=2d74cf84e190a2f9cd8f4fe6d32613cc', data=data)
print(response.text)
But when posting, I get the following error:
{"message":"lexical error: invalid char in json text. Around 'params=que' near line:1 column:1","status":400}

The API expects JSON-encoded POST data. Change data=data to json=data in your post request.
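To see why the server complains about invalid JSON starting at 'params=que', you can compare what actually goes on the wire with data= versus json=. A quick sketch using requests' prepared requests (example.com is just a placeholder):
import requests

payload = {"params": "query=&hitsPerPage=1000"}

# data= form-encodes the dict, so the body starts with "params=query..."
form_body = requests.Request('POST', 'https://example.com', data=payload).prepare().body
print(form_body)   # params=query%3D%26hitsPerPage%3D1000 -> not JSON, hence the lexical error

# json= serializes the dict to JSON and sets Content-Type: application/json
json_body = requests.Request('POST', 'https://example.com', json=payload).prepare().body
print(json_body)   # b'{"params": "query=&hitsPerPage=1000"}' -> valid JSON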
From the documentation:
Instead of encoding the dict yourself, you can also pass it directly
using the json parameter (added in version 2.4.2) and it will be
encoded automatically:
>>> url = 'https://api.github.com/some/endpoint'
>>> payload = {'some': 'data'}
>>> r = requests.post(url, json=payload)
Note, the json parameter is ignored if either data or files is passed.
Using the json parameter in the request will change the Content-Type
in the header to application/json.
Code
import requests
session = requests.session()
url = 'https://lcz09p4p1r-dsn.algolia.net/1/indexes/ecomm_production_products/query?x-algolia-agent=Algolia%20for%20AngularJS%203.32.0&x-algolia-application-id=LCZ09P4P1R&x-algolia-api-key=2d74cf84e190a2f9cd8f4fe6d32613cc'
data = {
    "params": "query=&hitsPerPage=1000&facetFilters=%5B%5B%22catalogs%3A000buyvallencom%22%5D%2C%22active%3Atrue%22%2C%22slug%3A3m-05710-superbuff-pad-adapter-p8hg1vv3b6b2%22%2C%22active%3Atrue%22%5D"
}
response = session.post(url, json=data)
print(response.text)
Output
{"hits":[{"active":true,"heading":"05710 Superbuff Pad Adapter","heading_reversed":"Adapter Pad Superbuff 05710","subheading":"","features":"Our 3M™ Adaptors for Polishers are designed for saving time and hassle in collision repair jobs requiring double-sided screw-on compounding and polishing pads. It is part of a fast, effective assembly that incorporates our 3M™ Perfect-It™ Backup Pad and wool polishing pads, allowing users to quickly attach them without removing the adaptor. This durable adaptor is used with all polishers.<br>• Part of a complete assembly for compounding and polishing<br>• Designed to attach buffing pads or backup pads to machine polishers<br>• Helps reduce wear and vibration<br>• Users can change screw-on pads without removing the adaptor, saving time<br>• Provides hassle-free centering with 3M double-sided wool compounding and polishing pads","product_id":"P8HG1VV3B6B2","product_number":"IDG05114405710","brand":"","keywords":"sanding, polishing, buffing","image":"G8HI043XOD5X.jpg","image_type":"illustration","unspsc":"31191505","system":"sxe","cost":9.0039,"catalogs":["000BuyVallenCom"],"vendor":{"name":"3M","slug":"3m","vendor_id":"VACF1JS0AAP0","image":"G8HIP6V1J7UJ.jpg"},"taxonomy":{"department":{"name":"Paint, Marking & Tape","slug":"paint-marking-tape"},"category":{"name":"Filling, Polishing, Buffing","slug":"filling-polishing-buffing"},"style":{"name":"Adapters","slug":"adapters"},"type":{"name":"Pads","slug":"pads"},"vendor":{"name":"3M","slug":"3m"}},"slug":"3m-05710-superbuff-pad-adapter-p8hg1vv3b6b2","color":null,"material":null,"model":null,"model_number":null,"shape":null,"size":null,"display_brand":null,"style":null,"purpose":null,"product_type":null,"specifications":[],"item_specifications":[],"batch_id":"000BuyVallenCom-1551410144451","status":"Stk","erp":"05114405710","iref":null,"cpn":null,"gtin":"00051144057108","description":"05710 ADAPTOR 5/8 SHAFT SUPERBUFF","sequence":10,"item_id":"I8HG1VV6JL3B","vpn":"05710","uom":"Ea","specification_values":[],"objectID":"000BuyVallenCom-P8HG1VV3B6B2-I8HG1VV6JL3B","_highlightResult":{"heading_reversed":{"value":"Adapter Pad Superbuff 05710","matchLevel":"none","matchedWords":[]},"subheading":{"value":"","matchLevel":"none","matchedWords":[]},"brand":{"value":"","matchLevel":"none","matchedWords":[]},"taxonomy":{"style":{"name":{"value":"Adapters","matchLevel":"none","matchedWords":[]}}}}}],"nbHits":1,"page":0,"nbPages":1,"hitsPerPage":1000,"processingTimeMS":1,"exhaustiveNbHits":true,"query":"","params":"query=&hitsPerPage=1000&facetFilters=%5B%5B%22catalogs%3A000buyvallencom%22%5D%2C%22active%3Atrue%22%2C%22slug%3A3m-05710-superbuff-pad-adapter-p8hg1vv3b6b2%22%2C%22active%3Atrue%22%5D"}
Documentation
More complicated POST requests
Algolia API

Related

Python requests module GET method: handling pagination token in params containing %

I am trying to handle an API response with pagination. The first page provides a pagination token to reach the next one, but when I feed this token back into the params parameter of the requests.get method, it seems to encode the token incorrectly.
My attempt to retrieve the next page (using the response output of the first requests.get method):
# Initial request
response = requests.get(url=url, headers=headers, params=params)
params.update({"paginationToken": response.json()["paginationToken"]})
# Next page
response = requests.get(url=url, headers=headers, params=params)
This fails with status 500: Internal Server Error and message Padding is invalid and cannot be removed.
An example pagination token:
gyuqfh%2bqyNrV9SI1%2bXulE6MXxJgb1VmOu68eH4YZ6dWUgRItb7yJPnO9bcEXdwg6gnYStBuiFhuMxILSB2gpZCLb2UjRE0pp9RkDdIP226M%3d
The url attribute of response seems to show a slightly different token if you look carefully, especially around the '%' signs:
https://www.wikiart.org/en/Api/2/DictionariesByGroup?group=1&paginationToken=gyuqfh%252bqyNrV9SI1%252bXulE6MXxJgb1VmOu68eH4YZ6dWUgRItb7yJPnO9bcEXdwg6gnYStBuiFhuMxILSB2gpZCLb2UjRE0pp9RkDdIP226M%253d
For example, the pagination token and url end differently: 226M%3d and 226M%253d. When I manually copy the first part of the url and add in the correct pagination token it does retrieve the information in a browser.
Am I missing some kind of encoding I should apply to the requests.get parameters before feeding them back into a new request?
You are right, it is a form of encoding: percent-encoding, to be precise. It is frequently used to encode URLs, and it is easy to decode:
from urllib.parse import unquote
pagination_token="gyuqfh%252bqyNrV9SI1%252bXulE6MXxJgb1VmOu68eH4YZ6dWUgRItb7yJPnO9bcEXdwg6gnYStBuiFhuMxILSB2gpZCLb2UjRE0pp9RkDdIP226M%253d"
pagination_token = unquote(pagination_token)
print(pagination_token)
Outputs:
gyuqfh%2bqyNrV9SI1%2bXulE6MXxJgb1VmOu68eH4YZ6dWUgRItb7yJPnO9bcEXdwg6gnYStBuiFhuMxILSB2gpZCLb2UjRE0pp9RkDdIP226M%3d
But I expect that is only half your problem. Use a requests session object (https://requests.readthedocs.io/en/master/user/advanced/#session-objects) to make the requests, as there is most likely a cookie sent with the request that is used in conjunction with the pagination token. I cannot tell for sure, as the website is currently down.
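Putting both ideas together, a minimal sketch (the endpoint and params are taken from the question; it assumes the token in the JSON response is singly percent-encoded, as shown above, so one unquote leaves the raw token for requests to encode exactly once):
import requests
from urllib.parse import unquote

session = requests.Session()
url = 'https://www.wikiart.org/en/Api/2/DictionariesByGroup'
params = {'group': 1}

# Initial request; the session keeps any cookies the server sets
response = session.get(url, params=params)

# Decode the token once so requests does not percent-encode
# the already-encoded '%' signs a second time
token = unquote(response.json()['paginationToken'])
params['paginationToken'] = token

# Next page, same session (and same cookies)
response = session.get(url, params=params)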

How to handle a large JSON response payload in FastAPI?

A GET call that returns many lines of JSON takes some time to respond in Swagger UI.
How can I reduce this time? I still want every response attribute from my big response model.
I have tried gzip content encoding, but it did not solve my problem, because of the large number of lines in the response.
For example, while getting all job details (note: one job returns a 36,000-line response).
I'm a beginner.
You don't have an issue with FastAPI. Your question is how to handle a large response body with Swagger UI.
A GET call that returns many lines of JSON takes some time to respond in Swagger UI.
This is a known issue with Swagger UI; large response bodies sometimes even cause hanging (see).
How can I reduce this time?
In your case, using a tool like Postman or Insomnia could fix this.
I have tried gzip content encoding, but it did not solve my problem, because of the large number of lines in the response.
Expected. Gzip will not have any effect in Swagger. Yes, it can reduce latency when you are dealing with large response bodies, but in the end Swagger still has to render the decompressed JSON, so it will not change your Swagger experience.
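For reference, this is how gzip compression is typically enabled in FastAPI; a minimal sketch (the /jobs route and its payload are placeholders for the job-details endpoint from the question). It helps transfer latency, not Swagger rendering:
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware

app = FastAPI()
# Compress responses larger than ~1 KB; saves bandwidth on the wire,
# but Swagger UI still renders the full decompressed JSON
app.add_middleware(GZipMiddleware, minimum_size=1000)

@app.get("/jobs")
def get_jobs():
    # Placeholder for the large job-details payload
    return [{"job_id": i} for i in range(1000)]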

Python Client Rest API Invocation - Invalid character found in method name [{}POST]. HTTP method names must be tokens

Client
Python Version - 3.9,
Python Requests module version - 2.25
Server
Java 13,
Tomcat 9.
I have a Tomcat+Java based server exposing REST APIs. I am writing a client in Python to consume those APIs. Everything is fine until I send an empty body in a POST request, which is a valid use case for us. If I send an empty body I get a 400 Bad Request error - Invalid character found in method name [{}POST]. HTTP method names must be tokens. If I send an empty request from Postman, Java, or curl it works fine; the problem occurs only when I use Python as the client.
Following is the Python snippet:
import requests

json_object = {}
header = {'alias': 'A', 'Content-Type': 'application/json', 'Content-Length': '0'}
resp = requests.post(url, auth=(username, password), headers=header, json=json_object)
I tried using data as well, instead of the json param, to send the payload, without much success.
I captured Wireshark dumps to understand it further and found that the request Tomcat received does not conform to RFC 2616 (https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html), especially the part:
Request-Line = Method SP Request-URI SP HTTP-Version CRLF
Because in the Wireshark dumps it looked like: {}POST MY-APP-URI HTTP/1.1
As we can see, the empty body is getting prefixed to the HTTP method, hence Tomcat reports it as an error.
I then looked at the Python http library code, client.py. Following are the relevant details:
File - client.py
Method - _send_output (starting at line # 1001) - It first sends the header at line #1010 and then the body somewhere further down in the code. I thought (I could be wrong here) that perhaps because the header (310 bytes) is much longer than the body (2 bytes), the body is pushed onto the wire before the complete header has been sent, so the TCP frames are ordered in such a way that the body appears first. To corroborate this, I added a delay of one second just after sending the header (line #1011), and the error disappeared and everything started working fine. I am not sure this analysis is completely correct, but can someone in the know confirm it, or let me know how to fix this?
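One way to check what requests intends to send, independent of the wire-level analysis above, is to inspect the prepared request before it hits the network; a minimal sketch (URL and credentials are placeholders):
import requests

url = 'http://example.com/my-app'  # placeholder
req = requests.Request(
    'POST', url,
    auth=('user', 'password'),
    headers={'alias': 'A', 'Content-Type': 'application/json', 'Content-Length': '0'},
    json={},
)
prepped = req.prepare()
print(prepped.method)    # POST
print(prepped.headers)   # note: json={} recomputes Content-Length to 2
print(prepped.body)      # '{}' - the body requests will send after the headers

# The prepared request can then be sent through a session
with requests.Session() as session:
    response = session.send(prepped)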

How to read GTFS Real-time feed using Python

This is not a repeat question. I am designing a routing algorithm that works in realtime. I know all about the GTFS static feed but can't figure out my city's realtime one. I want to understand how to parse a GTFS realtime feed using Python.
The link to the realtime set of my city is https://opendata.iiitd.edu.in/data/realtime/
I know a little about Protocol Buffers and the requests library. The documentation does not give the URL to which the API request should be sent.
What is the URL for this set that goes into requests.get?
There is a Python module called gtfs_realtime_pb2 (part of the gtfs-realtime-bindings package), which can be used to parse the response of a request to the realtime feed and produce useful output.
The main page for this library is here. There are also bindings for other languages.
The main workflow is to initialize the feed:
feed = gtfs_realtime_pb2.FeedMessage()
get the response from the feed with the requests package:
response = requests.get(<url>, allow_redirects=True)
and parse it, for example:
feed.ParseFromString(response.content)
print(feed.entity[0])
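Putting the pieces together, a minimal end-to-end sketch (install the bindings with pip install gtfs-realtime-bindings; the feed URL below is a placeholder for whichever protobuf endpoint the portal above exposes):
import requests
from google.transit import gtfs_realtime_pb2

# Placeholder: substitute the actual protobuf endpoint of your city's feed
FEED_URL = 'https://opendata.iiitd.edu.in/data/realtime/<endpoint>'

feed = gtfs_realtime_pb2.FeedMessage()
response = requests.get(FEED_URL, allow_redirects=True)
feed.ParseFromString(response.content)

# Each entity is typically a vehicle position, trip update, or alert
for entity in feed.entity:
    if entity.HasField('vehicle'):
        print(entity.vehicle.position.latitude, entity.vehicle.position.longitude)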

Python3 - Error posting data to a stikked instance

I'm writing a Python 3 (3.5) script that will act as a simple command line interface to a user's stikked install. The API is pretty straightforward, and its documentation is available.
My post function is:
import urllib.parse
import urllib.request

def submit_paste(paste):
    global settings
    data = {
        'title': settings['title'],
        'name': settings['author'],
        'text': paste,
        'lang': settings['language'],
        'private': settings['private']
    }
    data = bytes(urllib.parse.urlencode(data).encode())
    print(data)
    handler = urllib.request.urlopen(settings['url'], data)
    print(handler.read().decode('utf-8'))
When I run the script, I get the printed output of data, and the message returned from the API. The data encoding looks correct to me, and outputs:
b'private=0&text=hello%2C+world%0A&lang=text&title=Untitled&name=jacob'
As you can see, that contains the text= attribute, which is the only one actually required for the API call to successfully work. I've been able to successfully post to the API using curl as shown in that link.
The actual error produced by the API is:
Error: Missing paste text
Is the text attribute somehow being encoded incorrectly?
Turns out the problem wasn't with the post function but with the URL. My virtual host automatically forwards HTTP traffic to HTTPS, and Apache apparently drops the POST variables when it forwards.
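So the client-side takeaway is to post directly to the https:// URL, so urllib never has to follow a redirect. A minimal sketch, assuming the settings dict from the script above (the hostname and API path are placeholders):
# urllib follows the http->https redirect, but the redirected request
# is reissued as a GET without the POST body, hence "Missing paste text"
settings['url'] = 'https://paste.example.com/api/create'  # placeholder
submit_paste('hello, world\n')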
