I am trying to add dictionary data to our Check_MK Web API with Python requests, but I keep getting an error about missing keys:
{"result": "Check_MK exception: Missing required key(s): aux_tags, tag_groups", "result_code": 1}
Here is my code:
import json
import requests
params_get = (
    ('action', 'get_hosttags'),
    ('_username', 'user'),
    ('_secret', 'secret'),
    ('request_format', 'json'),
)
params_set = (
    ('action', 'set_hosttags'),
    ('_username', 'user'),
    ('_secret', 'secret'),
    ('request_format', 'json'),
)
url = 'http://monitoringtest.local.domain/test/check_mk/webapi.py'
tags = ['tag1', 'tag2']
response_data = requests.get(url, params=params_get)
data = response_data.json()
new_data_tags = data['result']['tag_groups']
new_data = data['result']
# new_tags = []
for tag in tags:
    new_data['aux_tags'].append({'id': tag, 'title': 'tags in datacenter'})
    # new_tags.extend([{'aux_tags': [], 'id': tag, 'title': tag.upper() + ' Tag'}])
# all_tags = new_data_tags.extend([{'tags': new_tags, 'id': 'group1', 'title': 'tags in datacenter'}])
json.dump(data['result'], open("new_file", "w"))
response_data_new = requests.get(url, params=params_set, json=json.dumps(data['result']))
# response_data_new = requests.put(url, params=params_set)
# requests.post(url, params=params_set)
print(response_data_new.text)
# print(data['result'])
# json.dump(data['result'], open("new_file", "w"))
When I use curl, everything works well and I get a success message:
{"result": null, "result_code": 0}
Do you have any idea what causes the error?
Thanks
I found the mistake; I just wasn't paying attention. The data variable contains two extra keys that were being sent as well: result at the beginning and result_code at the end, which need to be stripped. I just had to modify the request as follows, sending the data with POST:
resp = requests.post(url, params=params_set, data={'request': json.dumps(data['result'])})
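Put together, a minimal sketch of the whole exchange with that fix (same placeholder URL and credentials as above): the payload sent is data['result'], which carries only the aux_tags and tag_groups keys, without the result/result_code wrapper.

response_data = requests.get(url, params=params_get)
data = response_data.json()

hosttags = data['result']  # only aux_tags and tag_groups, no result/result_code wrapper
for tag in tags:
    hosttags['aux_tags'].append({'id': tag, 'title': 'tags in datacenter'})

resp = requests.post(url, params=params_set, data={'request': json.dumps(hosttags)})
print(resp.text)  # a successful call prints {"result": null, "result_code": 0}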
Thanks @DeepSpace
Here's the code:
import requests

API_URL = "https://rest.uniprot.org"

def submit_id_mapping(from_db, to_db, ids):
    request = requests.get(
        f"{API_URL}/idmapping/run",
        data={"from": from_db, "to": to_db, "ids": ",".join(ids)},
    )
    # request.raise_for_status()
    return request.json()
submit_id_mapping(from_db="UniProtKB_AC-ID", to_db="ChEMBL", ids=["P05067", "P12345"])
Taken directly from the official example. It returns the following 404 client error. I tried accessing the URL myself and it doesn't seem to work. Given that this is the official documentation, I don't really know what to do. Any suggestions are welcome.
{'url': 'http://rest.uniprot.org/idmapping/run',
'messages': ['Internal server error']}
I also have another script for this, but it no longer works and I don't know why :(
import urllib.parse
import urllib.request

in_f = open("filename")
url = 'https://www.uniprot.org/uploadlists/'
ids = in_f.read().splitlines()
ids_s = " ".join(ids)
params = {
    'from': 'PDB_ID',
    'to': 'ACC',
    'format': 'tab',
    'query': ids_s
}
data = urllib.parse.urlencode(params)
data = data.encode('utf-8')
req = urllib.request.Request(url, data)
with urllib.request.urlopen(req) as f:
    response = f.read()
    print(response.decode('utf-8'))
Error:
urllib.error.HTTPError: HTTP Error 405: Not Allowed
Try to change .get to .post:
import requests

API_URL = "https://rest.uniprot.org"

def submit_id_mapping(from_db, to_db, ids):
    data = {"from": from_db, "to": to_db, "ids": ids}
    request = requests.post(f"{API_URL}/idmapping/run", data=data)
    return request.json()

result = submit_id_mapping(
    from_db="UniProtKB_AC-ID", to_db="ChEMBL", ids=["P05067", "P12345"]
)
print(result)
Prints:
{'jobId': 'da05fa39cf0fc3fb3ea1c4718ba094b8ddb64461'}
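From there the official example polls the job until it finishes and then fetches the mapped IDs. A minimal sketch of that follow-up step, reusing submit_id_mapping from above and assuming the /idmapping/status/{jobId} and /idmapping/results/{jobId} endpoints described in the same UniProt documentation (adjust the paths if UniProt changes them):

import time
import requests

API_URL = "https://rest.uniprot.org"

def get_id_mapping_results(job_id):
    # Poll the job status until it is no longer queued or running,
    # then fetch the results of the finished job.
    while True:
        status = requests.get(f"{API_URL}/idmapping/status/{job_id}").json()
        if status.get("jobStatus") in ("NEW", "RUNNING"):
            time.sleep(3)
        else:
            break
    return requests.get(f"{API_URL}/idmapping/results/{job_id}").json()

job = submit_id_mapping(from_db="UniProtKB_AC-ID", to_db="ChEMBL", ids=["P05067", "P12345"])
print(get_id_mapping_results(job["jobId"]))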
I'm a bit lost with something I haven't seen before. I hope I will be able to explain my issue clearly. I have the cloud function below:
url = 'https://reports.api.clockify.me/v1/workspaces/xxxx/reports/detailed'
headers = {'X-Api-Key': '{}'.format(api_key)}
Start = "2022-01-01T00:00:00.000"
End = "2022-03-26T23:59:59.000"
page = 1
raw_data = []
while True:
    parameters = {
        "dateRangeStart": "{}".format(Start),
        "dateRangeEnd": "{}".format(End),
        "rounding": True,
        "exportType": "JSON",
        "amountShown": "EARNED",
        "detailedFilter": {"page": "{}".format(page), "pageSize": "1000"},
    }
    print(parameters)
    request = requests.post(url, headers=headers, json=parameters)
    # response = request.json()['timeentries']
    response = request.json()
    if len(response) == 0:
        break
    raw_data.append(response)
    page = page + 1
print(raw_data)
Everything is pretty standard (I think!). I'm looping through pages of a Clockify API report request. The issue is that I found some logging information in my raw_data array.
When I'm printing the response I can see the following output in the log:
"severity": "INFO", "message": "{'totals': [{'_id': '', 'totalTime': 1345687, 'totalBillableTime': 1223465435, 'entriesCount': 1500, 'totalAmount': 23457354.0, 'amounts': [{'type': 'EARNED', 'value': 3746553.0}]}], 'timeentries': [{ .... standard clockify time entries ....
Why do I get "severity": "INFO", "message": when printing my response? Why is this part of my request? It then ends up in my raw_data array, messing up my JSON object... When using print locally I always get the raw value, not these logging info messages...
I am working on a script which I have to modify in order to loop through multiple resources within a function.
Below are the items we need to loop through to get the data from; this comes from Config_local:
BASE_URL = "https://synergy.hpe.example.com/rest/"
RES_EXT = [
    'resource-alerts?count=500&start=1',
    'resource-alerts?count=500&start=501',
    'resource-alerts?count=500&start=1001',
    'resource-alerts?count=500&start=1501',
]
While looping through the above list in the def main(): section and calling get_resource_alerts_response() inside the loop, the data coming out of the loop gets overwritten, so only the last iteration's data is returned.
Main Script:
import os
import shutil
import smtplib
from email.message import EmailMessage
import pandas as pd
pd.set_option('expand_frame_repr', True)
import requests
from Config_local import (
    BASE_URL,
    DST_DIR,
    OUTFILE,
    PASSWORD,
    SRC_DIR,
    TIMESTAMP_DST,
    USERNAME,
    SUBJECT,
    FROM,
    TO,
    EMAIL_TEMPLATE,
    SMTP_SERVER,
    RES_EXT,
)
class FileMoveFailure(Exception):
    pass

class SynergyRequestFailure(Exception):
    pass

class SessionIdRetrievalFailure(SynergyRequestFailure):
    pass

class ResourceAlertsRetrievalFailure(SynergyRequestFailure):
    pass
def move_csv_files():
    for csv_file in os.listdir(SRC_DIR):
        if csv_file.endswith(".csv") and os.path.isfile(os.path.join(SRC_DIR, csv_file)):
            try:
                shutil.move(
                    os.path.join(f"{SRC_DIR}/{csv_file}"),
                    f"{DST_DIR}/{csv_file}-{TIMESTAMP_DST}.log"
                )
            except OSError as os_error:
                raise FileMoveFailure(
                    f'Moving file {csv_file} has failed: {os_error}'
                )
def get_session_id(session):
    try:
        response = session.post(
            url=f"{BASE_URL}/login-sessions",
            headers={
                "accept": "application/json",
                "content-type": "application/json",
                "x-api-version": "120",
            },
            json={
                "userName": USERNAME,
                "password": PASSWORD
            },
            verify=False
        )
    except requests.exceptions.RequestException as req_exception:
        # you should also get this logged somewhere, or at least
        # printed depending on your use case
        raise SessionIdRetrievalFailure(
            f"Could not get session id: {req_exception}"
        )
    json_response = response.json()
    if not json_response.get("sessionID"):
        # always assume the worst and do sanity checks & validations
        # on fetched data
        raise KeyError("Could not fetch session id")
    return json_response["sessionID"]
#def get_all_text(session, session_id):
# all_text = ''
# for res in RES_EXT:
# url= f"{BASE_URL}{res}"
# newresult = get_resource_alerts_response(session, session_id, url)
# all_text += newresult
# print(f"{all_text}")
# return str(all_text)
#
def get_resource_alerts_response(session, session_id, res):
    try:
        return session.get(
            url=f"{BASE_URL}{res}",
            headers={
                "accept": "application/json",
                "content-type": "text/csv",
                "x-api-version": "2",
                "auth": session_id,
            },
            verify=False,
            stream=True
        )
    except requests.exceptions.RequestException as req_exception:
        # you should also get this logged somewhere, or at least
        # printed depending on your use case
        raise ResourceAlertsRetrievalFailure(
            f"Could not fetch resource alerts: {req_exception}"
        )
def resource_alerts_to_df(resource_alerts_response):
    with open(OUTFILE, 'wb') as f:
        for chunk in resource_alerts_response.iter_content(chunk_size=1024*36):
            f.write(chunk)
    return pd.read_csv(OUTFILE)
def send_email(df):
    server = smtplib.SMTP(SMTP_SERVER)
    msg = EmailMessage()
    msg['Subject'], msg['From'], msg['To'] = SUBJECT, FROM, TO
    msg.set_content("Text version of your html template")
    msg.add_alternative(
        EMAIL_TEMPLATE.format(df.to_html(index=False)),
        subtype='html'
    )
    server.send_message(msg)
def main():
    move_csv_files()
    session = requests.Session()
    session_id = get_session_id(session)
    for res in RES_EXT:
        resource_alerts_response = get_resource_alerts_response(session,
                                                                session_id, res)
        print(resource_alerts_response)
    df = resource_alerts_to_df(resource_alerts_response)
    print(df)
    send_email(df)

if __name__ == '__main__':
    main()
Any help or hint will be much appreciated.
This is a copy of the code which I recall we had on SO before, but not what you want now. However, as the whole code body is okay and the idea of the for loop also looks good, you just need to tweak it to meet the requirement.
1- You need to create an empty DataFrame assignment: Synergy_Data = pd.DataFrame()
2- Then you can append the data you received from the for loop, i.e. resource_alerts_response, which becomes df = resource_alerts_to_df(resource_alerts_response)
3- Lastly, you can append this df to the empty Synergy_Data and then call that under your if __name__ == '__main__' to send an e-mail. Also don't forget to declare Synergy_Data as a global variable.
Synergy_Data = pd.DataFrame()

def main():
    global Synergy_Data
    move_csv_files()
    session = requests.Session()
    session_id = get_session_id(session)
    for res in RES_EXT:
        resource_alerts_response = get_resource_alerts_response(session,
                                                                session_id, res)
        df = resource_alerts_to_df(resource_alerts_response)
        Synergy_Data = Synergy_Data.append(df)

if __name__ == '__main__':
    main()
    send_email(Synergy_Data)
Hope this will be helpful.
Only the last response is being used because this value is overwritten in resource_alerts_response on each iteration of the loop. You may consider acting on the data in each iteration, or storing it for use later, i.e. after the loop. I've included these options as modifications to the main() function below.
Option 1
Send an email for each resource alert response
def main():
    move_csv_files()
    session = requests.Session()
    session_id = get_session_id(session)
    for res in RES_EXT:
        resource_alerts_response = get_resource_alerts_response(session,
                                                                session_id, res)
        print(resource_alerts_response)
        # Indent lines below so that the operations below are executed in each loop iteration
        df = resource_alerts_to_df(resource_alerts_response)
        print(df)
        send_email(df)
Option 2
Merge all resource alert responses and send one email
def main():
    move_csv_files()
    session = requests.Session()
    session_id = get_session_id(session)
    df_resource_alerts_responses = None
    for res in RES_EXT:
        resource_alerts_response = get_resource_alerts_response(session,
                                                                session_id, res)
        print(resource_alerts_response)
        df = resource_alerts_to_df(resource_alerts_response)
        if df_resource_alerts_responses is None:
            df_resource_alerts_responses = df
        else:
            df_resource_alerts_responses = df_resource_alerts_responses.append(df, ignore_index=True)
    print(df_resource_alerts_responses)
    if df_resource_alerts_responses is not None:
        send_email(df_resource_alerts_responses)
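As a side note, DataFrame.append was deprecated and later removed in pandas 2.0, so on current pandas versions the accumulation in Option 2 can be done with pd.concat over a list of frames instead; a sketch reusing the functions defined in the question:

def main():
    move_csv_files()
    session = requests.Session()
    session_id = get_session_id(session)
    frames = []
    for res in RES_EXT:
        resource_alerts_response = get_resource_alerts_response(session,
                                                                session_id, res)
        frames.append(resource_alerts_to_df(resource_alerts_response))
    if frames:
        df_resource_alerts_responses = pd.concat(frames, ignore_index=True)
        print(df_resource_alerts_responses)
        send_email(df_resource_alerts_responses)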
Trying to get information with a POST request, but it returns <Response [500]>. I have tried a GET request and it returns a 200 response code. I copied the form data from the website and included it in my code.
import requests, json
from bs4 import BeautifulSoup as bs
from requests.utils import quote

url = 'https://mpcr.cbar.az/service/mpcr.publicSearchApplication'
tax_ids = [
    '1000276532',
]
data = {
    'searchFilter': {"fields": "{\"orgTin\":\"%s\"}"},
    'compId': 'frontend',
    'access_token': '123',
    'refresh_token': '123',
    'token': '123',
    'requestNumber': '1p5i9m5s9T9W8',
    '_token': 'MQ2cZyD092qFKknKpZMV0w2ksNhqWj17PSl6iaCG',
}
with requests.Session() as s:
    for i in tax_ids:
        data["searchFilter"] = quote(json.dumps(data["searchFilter"]["fields"] % i))
        print(data)
        print("--" * 10)
        r = s.post(url, data=data)
        print(r)
I need to parse the country code of each comment on my web page and then store it in a JSON file, but I am having an issue when I try to move to the next page.
I'm not sure whether I used the correct way to send the request.
Here's my code:
index = 1

def parse_fb(self, response):
    data = response.body
    soup = BeautifulSoup(data, "html.parser")
    with open(ArticlesSpider.pro_id + '.json', 'a+') as f:
        user_country = soup.find_all('div', class_='user-country')
        for i in range(len(user_country)):
            code = str(user_country[i])
            code = code.split('">')
            code = str(code[2])
            code = code.split('</b>')
            code = code[0]
            json.dump(code, f)
            print(code)
    request_url = 'https://feedback.aliexpress.com/display/productEvaluation.htm'
    data = {
        'ownerMemberId': '',
        'memberType': 'seller',
        'productId': str(ArticlesSpider.pro_id),
        'companyId': '',
        'evaStarFilterValue': 'all Stars',
        'evaSortValue': 'sortdefault#feedback',
        'page': str(index),
        'currentPage': '',
        'startValidDate': '',
        'i18n': 'false',
        'withPictures': 'false',
        'withPersonalInfo': 'false',
        'withAdditionalFeedback': 'false',
        'onlyFromMyCountry': 'false',
        'version': 'evaNlpV1_2',
        'isOpened': 'true',
        'translate': 'Y',
        'jumpToTop': 'false',
        '${csrfToken.parameterName}': '${csrfToken.token}',
    }
    index += 1
    yield scrapy.FormRequest(request_url, formdata=data, callback=self.parse_fb)
Why do you need BeautifulSoup? All this is superfluous.
Here is the working code for your product:
import scrapy

class CodeInfo(scrapy.Item):
    code = scrapy.Field()

class feedback_aliexpress_com(scrapy.Spider):
    name = 'feedback_aliexpress_com'
    domain = 'feedback.aliexpress.com'
    allowed_domains = ['feedback.aliexpress.com']
    start_urls = ['https://feedback.aliexpress.com/display/productEvaluation.htm?' +
                  'productId=32911361727&ownerMemberId=206054366&companyId=&memberType=seller&startValidDate=']
    url = 'https://feedback.aliexpress.com/display/productEvaluation.htm'
    page = 1

    def parse(self, response):
        code = CodeInfo()
        if response.css('.user-country'):
            for listing in response.css('.feedback-item'):
                code['code'] = listing.css('.user-country > b::text').extract_first()
                yield code
            self.page += 1
            self.url = 'https://feedback.aliexpress.com/display/productEvaluation.htm?productId=32911361727&ownerMemberId=206054366&page=' \
                       + str(self.page)
            yield response.follow(url=self.url, callback=self.parse)
There is a lot of excess, I know. Check it out; I did it in a hurry.
Well, you are changing index, but not using it: your request_url stays the same throughout the process. If this is the bit that you expect to change the page:
yield scrapy.FormRequest(request_url,formdata=data,callback=self.parse_fb)
then you have to change request_url before calling that.