GitHub Search API for pull requests: can't search for > created date (even though the same code worked for searching issues) - github-api

I have some working code that searches GitHub issues for a specific word among issues filed after a certain date. Now I wanted to modify the code so that it searches pull requests rather than issues. So I changed the code from
    def search_prs(search_str, date_str, page_num):
        req = requests.get('http://api.github.com/search/issues',
                           {'q': f'{search_str}+created:>{date_str}',
                            'per_page': 100, 'page': page_num,
                            'sort': 'created', 'order': 'asc'},
                           headers={'Authorization': 'token ' + ACCESS_TOKEN,
                                    'Accept': 'application/vnd.github.cloak-preview'},
                           timeout=10)
        return json.loads(req.text)
to
    def search_prs(search_str, date_str, page_num):
        req = requests.get('http://api.github.com/search/issues',
                           {'q': f'{search_str}+is:pr+created:>{date_str}',
                            'per_page': 100, 'page': page_num,
                            'sort': 'created', 'order': 'asc'},
                           headers={'Authorization': 'token ' + ACCESS_TOKEN,
                                    'Accept': 'application/vnd.github.cloak-preview'},
                           timeout=10)
        return json.loads(req.text)
So all I changed was adding "+is:pr" to the query string. However, with that change no results are returned, and no error is given. Is this some limitation of the GitHub API, where ordinary (non-enterprise) users are not allowed to search pull requests by content and date range, even though this code did work for issues?
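Not an answer, but one thing worth checking is how requests actually encodes the q parameter: when the query string goes through the params dict, literal '+' separators are percent-encoded rather than sent as spaces. A self-contained sketch (the search word and date below are made-up placeholders), without hitting the network:

```python
import requests

# Sketch: build the request but don't send it, just to inspect the URL.
# 'someword' and the date are hypothetical placeholders.
prepared = requests.Request(
    'GET',
    'https://api.github.com/search/issues',
    params={'q': 'someword+is:pr+created:>2021-01-01'},
).prepare()
print(prepared.url)
# the literal '+' separators arrive percent-encoded as %2B, not as spaces
```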

Related

REST API Post method Recursive run returning Different data-probably data with error in Azure Functions

I am using a REST API POST method deployed to Azure Functions. The first run (immediately after making changes and deploying to Azure) succeeds, but subsequent runs show errors in the data, and the counts do not match the first run. The following is the code snippet I am using.
    def __init__(self):
        url = "https://login.microsoftonline.com/abcdefgh12345/oauth2/v2.0/token"
        payload = 'grant_type=client_credentials&scope=api%AB1234567890CDE%.default'
        headers = {
            'Accept': 'application/json',
            'Content-Type': 'application/x-www-form-urlencoded',
            'Authorization': 'Basic ' + "ABCDEFG",
            'Cookie': 'XYZMANOP'
        }
        response = requests.request("POST", url, headers=headers, data=payload)
        self.auth = response.json()
Then I use self.auth in the other methods.
Why is this happening, and what is the solution? PS: When I run the code locally it works like a charm.
I tried extracting the access_token and passing it to the subsequent functions, but the result is the same. I also changed the schedule, since the access_token is valid for only 3600 s, and rescheduled the function to run every 2 hours, but it still returns the same result.
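Not a definitive answer, but one common pattern worth ruling out: a warm Azure Functions instance reuses the same object across invocations, so a token fetched once in __init__ can outlive its 3600 s lifetime. A minimal sketch of fetching the token lazily with an expiry check (TokenCache and fetch_token are made-up names, not part of any Azure SDK):

```python
import time

class TokenCache:
    """Sketch: re-fetch the OAuth token when it is close to expiry,
    instead of fetching it once in __init__ (a warm Functions instance
    reuses the same object across invocations)."""

    def __init__(self, fetch_token, clock=time.monotonic):
        self._fetch = fetch_token   # callable returning (token, lifetime_seconds)
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh 60 s early so an in-flight request never uses a stale token.
        if self._token is None or self._clock() >= self._expires_at - 60:
            self._token, lifetime = self._fetch()
            self._expires_at = self._clock() + lifetime
        return self._token
```

Each method then calls cache.get() instead of reading a token stored at construction time.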

Unable to verify Discord signature for bot on AWS Lambda Python 3 (Interactions_Endpoint_URL)

I am attempting to validate the signature for my bot application using Discord's "INTERACTIONS ENDPOINT URL" in a Lambda function running Python 3.7. Following the documentation here under the "Security and Authorization" section, I still seem to be unable to get a valid result from the signature check, with the exception being triggered each time. I'm unsure which aspect of the validation is incorrect. I am using AWS API Gateway to forward the headers to the Lambda function in order to access them. Any help pointing me in the right direction would be appreciated.
Edit:
Here is the output of the event in lambda for reference. I removed some of the values for security marked by <>.
{'body': {'application_id': '<AppID>', 'id': '<ID>', 'token': '<Token>', 'type': 1, 'user': {'avatar': '4cbeed4cdd11cac74eec2abf31086e59', 'discriminator': '9405', 'id': '340202973932027906', 'public_flags': 0, 'username': '<username>'}, 'version': 1}, 'headers': {'accept': '*/*', 'content-type': 'application/json', 'Host': '<AWS Lambda address>', 'User-Agent': 'Discord-Interactions/1.0 (+https://discord.com)', 'X-Amzn-Trace-Id': 'Root=1-60a570b8-00381f6e26f023df5f9396b1', 'X-Forwarded-For': '<IP>', 'X-Forwarded-Port': '443', 'X-Forwarded-Proto': 'https', 'x-signature-ed25519': 'de8c8e64be2058f40421e9ff8c7941bdabbf501a697ebcf42aa0419858c978e19c5fb745811659b41909c0117fd89430c720cbf1da33c9dcfb217f669c496c00', 'x-signature-timestamp': '1621455032'}}
    import json
    import os
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    def lambda_handler(event, context):
        # Your public key can be found on your application in the Developer Portal
        PUBLIC_KEY = os.environ['DISCORD_PUBLIC_KEY']
        verify_key = VerifyKey(bytes.fromhex(PUBLIC_KEY))
        signature = event['headers']["x-signature-ed25519"]
        timestamp = event['headers']["x-signature-timestamp"]
        body = event['body']
        try:
            verify_key.verify(f'{timestamp}{body}'.encode(), bytes.fromhex(signature))
        except BadSignatureError:
            return (401, 'invalid request signature')
I was able to diagnose the issue. Signature verification failed because AWS API Gateway was parsing the body into JSON before it reached my Lambda function, so the bytes I verified no longer matched what Discord had signed, and the check came up invalid each time. I solved this by checking Lambda Proxy Integration in the Integration Request section of API Gateway. This passes the body to Lambda unaltered, which let me verify the Discord outgoing webhook. Below is my final code.
    import json
    import os
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    def lambda_handler(event, context):
        PUBLIC_KEY = os.environ['DISCORD_PUBLIC_KEY']
        verify_key = VerifyKey(bytes.fromhex(PUBLIC_KEY))
        signature = event['headers']["x-signature-ed25519"]
        timestamp = event['headers']["x-signature-timestamp"]
        body = event['body']
        try:
            verify_key.verify(f'{timestamp}{body}'.encode(), bytes.fromhex(signature))
            body = json.loads(event['body'])
            if body["type"] == 1:
                return {
                    'statusCode': 200,
                    'body': json.dumps({'type': 1})
                }
        except BadSignatureError:
            return {
                'statusCode': 401,
                'body': json.dumps("Bad Signature")
            }

Django DRF Post with files and data works in Postman, not Python. No TemporaryUploadedFile

Running a Django app locally, I can use Postman to upload a zip file along with some dict data. Setting a breakpoint in 'def Post()', I can see Postman's successful upload:
request.data = <QueryDict: {'request_id': ['44'], 'status': [' Ready For Review'], 'is_analyzed': ['True'], 'project': ['test value'], 'plate': ['Plate_R0'], 'antigen': ['tuna'], 'experiment_type': ['test'], 'raw_file': [<TemporaryUploadedFile: testfile.zip (application/zip)>]}>
Postman offers the following python code to replicate these results in my python script:
    import requests

    url = "http://127.0.0.1:8000/api/upload/"
    payload = {'request_id': '44',
               'status': ' Ready For Review',
               'is_analyzed': 'True',
               'project': 'test value',
               'plate': 'Plate_R0',
               'antigen': 'tuna',
               'experiment_type': 'test'}
    files = [
        ('raw_file', open(r'C:/testfile.zip', 'rb'))
    ]
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    response = requests.request("POST", url, headers=headers, data=payload, files=files)
    print(response.text.encode('utf8'))
Running this code directly and inspecting request.data (server side), I see the binary representation of the file in the object, and the payload data is not there (this is the error in the response).
How do I get my Python script to produce the same server-side object as Postman? Specifically, how do I upload my data such that the file is represented as: <TemporaryUploadedFile: testfile.zip (application/zip)>
Thanks.
Turns out, inspection of the object posted by Postman shows that it was using a multipart form upload. Searching around, I found this answer to a related question describing multipart posting: https://stackoverflow.com/a/50595768/2917170
The working Python is as follows:
    import os
    import requests
    from requests_toolbelt import MultipartEncoder

    url = "http://127.0.0.1:8000/api/upload/"
    zip_file = r'C:/testfile.zip'
    m = MultipartEncoder([('raw_file', (os.path.basename(zip_file), open(zip_file, 'rb'))),
                          ('request_id', '44'),
                          ('status', 'Ready For Review'),
                          ('is_analyzed', 'True'),
                          ('project', 'test value'),
                          ('plate', 'Plate_R0'),
                          ('antigen', 'tuna'),
                          ('experiment_type', 'test')])
    header = {'content-type': m.content_type}
    response = requests.post(url, headers=header, data=m, verify=True)
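For what it's worth, requests_toolbelt isn't strictly required here: plain requests also builds a multipart/form-data body when given both data= and files=, as long as you don't override Content-Type by hand. A sketch using an in-memory fake file (the URL and field names just mirror the question):

```python
import requests

# Sketch: requests generates the multipart body and boundary itself when
# you pass data= and files= together; setting Content-Type manually is
# what breaks it. The zip bytes here are a fake in-memory placeholder.
payload = {'request_id': '44', 'project': 'test value'}
files = {'raw_file': ('testfile.zip', b'PK\x03\x04 fake zip bytes', 'application/zip')}
prepared = requests.Request(
    'POST', 'http://127.0.0.1:8000/api/upload/',
    data=payload, files=files,
).prepare()
print(prepared.headers['Content-Type'])  # multipart/form-data; boundary=...
```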

response time in python not the same as postman

When I execute the code below, the response time is ~0.4 s, but when I make the exact same request in Postman the response time is ~4 s. What's wrong with my code?
    import requests

    url = "https://www.wechall.net/challenge/training/mysql/auth_bypass2/index.php"
    payload = {'username': 'admin\' and username like \'a%\' and sleep(4)#',
               'password': '',
               'login': 'Login'}
    headers = {
        'Content-Type': 'multipart/form-data; boundary=--------------------------626487670766176098971255'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    print(response.elapsed.total_seconds())
According to https://kite.com/python/docs/requests.Response.elapsed, elapsed measures the time between sending the first byte of the request and finishing parsing the response headers, not the time until the full response has been transferred.
So in this case, the time reported by the Postman client is the full response time for the API.
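This difference can be demonstrated without guessing, using a local server that sends the headers promptly but delays the body: elapsed stays small while a wall-clock measurement captures the delay (everything below is a self-contained sketch, not part of the original question):

```python
import http.server
import threading
import time
import requests

# Sketch: response.elapsed stops once the headers are parsed, so a slow
# body barely shows up in it; time the full download yourself to match
# what a client like Postman reports.

class SlowBodyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Length', '5')
        self.end_headers()          # headers go out immediately
        time.sleep(0.5)             # ...then the body is delayed
        self.wfile.write(b'hello')

    def log_message(self, *args):   # keep the console quiet
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), SlowBodyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
resp = requests.get(f'http://127.0.0.1:{server.server_port}/', timeout=10)
total = time.monotonic() - start    # includes the body download
server.shutdown()

print(resp.elapsed.total_seconds(), total)  # elapsed is much smaller
```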

Send a Python 3.x API POST request without killing webserver

So I sent a request to a website using an API key, and the site (REDCap) became unresponsive for nearly 10 minutes. The request I made ultimately completed successfully, but I'm unsure how to prevent this from happening again. Should I have made a bunch of smaller requests instead of one large one?
The key and value pairs themselves take up about as many characters as the example below, but there are many groups of them, about 600 or so being sent in one POST request.
[{'KEY1': 'VALUE1', 'KEY2': 'VALUE2', 'KEY3':'VALUE3'},...{x600}]
The knowledge I have about Python has been cobbled together over the course of a couple of weeks, so I'm not even sure what I need to ask to prevent this from happening again. After a bit of digging I saw that I should have included a timeout in the POST request and added code to handle it failing, but that does not help with my biggest problem, which is not bringing the site to a grinding halt.
I'll provide my code for context. I'm sure there are 100 things that could be done better or differently, but it worked as intended more or less, so I'm not asking for feedback on my code, though I'll take any advice given. I've obfuscated anything I wasn't comfortable sharing on the web with three hash marks (###).
    #!/usr/bin/env python
    import requests, json, logging

    logging.basicConfig(filename='###.log',
                        format='%(asctime)s %(message)s',
                        datefmt='%m/%d/%Y %I:%M:%S %p',
                        level=logging.DEBUG)

    #TOKEN = '###' #Live
    TOKEN = '###' #Test
    URL = '###'

    dload = {
        'token': TOKEN,
        'content': 'record',
        'format': 'json',
        'type': 'flat',
        'fields[0]': '###',
        'rawOrLabel': 'raw',
        'rawOrLabelHeaders': 'raw',
        'exportCheckboxLabel': 'false',
        'exportSurveyFields': 'false',
        'exportDataAccessGroups': 'false',
        'returnFormat': 'json',
        'filterLogic': 'datediff([###],"TODAY","d",true) < 7'
    }
    dloadget = requests.post(URL, dload)
    dmeta = dloadget.json()
    logging.debug('Raw exported data: %s \n', dmeta)

    if len(dmeta) == 0:
        logging.info('Records up to date')
    else:
        for i, tpoints in enumerate(dmeta):
            dmeta[i].update({'###': None})
        logging.debug('Raw imported data: %s \n', dmeta)
        DATA = json.dumps(dmeta)
        fields = {
            'token': TOKEN,
            'content': 'record',
            'format': 'json',
            'type': 'flat',
            'overwriteBehavior': 'overwrite',
            'data': DATA
        }
        r2 = requests.post(URL, data=fields)
        if r2.status_code == 200:
            logging.info('global notes log updated')
        else:
            logging.warning('update failed %s', r2)
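On the "many smaller requests" question: one hedged sketch is to chunk the ~600 records and import them in batches, giving the server room to breathe between POSTs. chunked and import_in_batches are made-up helper names; the field dict mirrors the script above, and the batch size is an assumption to tune against the server:

```python
import json
import time

def chunked(records, size):
    """Yield successive slices of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def import_in_batches(post, url, token, records, size=100, pause=1.0):
    """POST each batch separately; `post` is e.g. requests.post."""
    for batch in chunked(records, size):
        fields = {
            'token': token,
            'content': 'record',
            'format': 'json',
            'type': 'flat',
            'overwriteBehavior': 'overwrite',
            'data': json.dumps(batch),   # only this batch, not all 600 records
        }
        post(url, data=fields, timeout=30)  # with the timeout mentioned above
        time.sleep(pause)                   # brief pause between batches
```

Passing post as a parameter keeps the sketch testable with a stub instead of a live REDCap server.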