I have a try/except block that behaves differently across calls: once an exception is raised, every subsequent call to the try/except block raises an exception again, even when the parameters passed are valid.
import requests

# ClientError is assumed to be a custom exception class defined elsewhere.

class Some:
    def some_method(self, variable_id: str):
        response = requests.request('GET', 'https://...........')  # HTTP method assumed
        if response.status_code != 200:
            if response.status_code == 404:
                raise ClientError(
                    '_NOT_FOUND', 'Consignment does not exist. Please provide a valid variable_id', 404)
            elif response.status_code == 400:
                raise ClientError(
                    '_ALREADY_CANCELLED', f"Cannot print {variable_id}", 400)
            raise ClientError(
                'ERROR', "ERROR", 400)
        return response
The try/except block:
class Other:
    def __init__(self):
        self.error = False

    def somefunc(self, id: str):
        # id = '123'  # the working one
        try:
            response = Some().some_method(id)
            return self.error, response
        except Exception as e:
            self.error = True
            return self.error, []
The weird thing is that if I first call the try/except block with an id the API accepts as valid, it returns the response as expected. But once I pass an id for which the API returns a 400 status code, every later call raises the 400 exception, even when I go back to the previous id that should return 200.
So I printed response.status_code: it is correctly 200 for a valid id and 400 for an invalid id. I don't understand why, after calling the try/except block with an invalid id and then switching back to a valid one, it keeps raising the exception instead of returning the response from the Some.some_method class method.
What have I done wrong here?
Ah, my bad. I need to reset the error flag:
def somefunc(self, id: str):
    # id = '123'
    try:
        response = Some().some_method(id)
        return self.error, response
    except Exception as e:
        self.error = True
        return self.error, []
    finally:
        self.error = False
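For reference, a stateless variant is also possible: compute the flag locally instead of storing it on the instance, so a failure in one call can never leak into the next (a minimal sketch reusing the Some class above):

def somefunc(self, id: str):
    # no shared state: the error flag exists only for this call
    try:
        response = Some().some_method(id)
        return False, response
    except Exception:
        return True, []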
I have an async call, for example:
from httpx import AsyncClient, Response
client = AsyncClient()
my_call = client.get(f"{HOST}/api/my_method") # async call
And I want to pass it to some retry logic like
async def retry_http(http_call):
    count = 5
    status, response = None, None
    while count > 0:
        response: Response = await http_call
        if response.status_code == 200:
            break
        count -= 1
        if response.status_code in (502, 504):
            await asyncio.sleep(2)
        else:
            break
    if response.status_code != 200:
        return {
            "success": False,
            "result": {
                "error": "Response Error",
                "response_code": response.status_code,
                "response": response.text,
            }
        }
    return response.json()
await retry_http(my_call)
but I got:
RuntimeError: cannot reuse already awaited coroutine
Is there any way to make my_call a reusable coroutine?
It is not possible in Python: a coroutine, once created, has internal state that can't easily be duplicated, so once it runs, that internal state changes, including which line of code is currently executing, and there is no way to "rewind" it.
The simplest approach is to do as in @RyabchenkoAlexander's answer and accept the coroutine function and its parameters separately, creating the coroutine inside your retry function.
An alternative that is a nice Python idiom is to decorate the coroutine function: you make retry_http a decorator instead, which wraps the underlying coroutine function in the retrying code.
Then, if the functions you want this behaviour for are in your own code, you can use the decorator syntax (@name prefixing the function definition) so that all calls get the retry behaviour, or you can apply it as a plain expression to get a new, retriable coroutine function. Your final call could be:
result = await (retry_http(client.get) (f"{HOST}/api/my_method"))
(note the extra call wrapped around client.get: retry_http(client.get) decorates it, and the result is then called with the URL)
The decorator itself could be:
def retry_http(coro_func):
    async def wrapper(*args, **kw):
        # your original code - just replacing the await expression
        ...
        while count > 0:
            ...
            result = await coro_func(*args, **kw)
            ...
        ...
        return result
    return wrapper
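For completeness, a sketch of the decorator-syntax usage mentioned above; the function name get_my_method is made up for illustration:

@retry_http
async def get_my_method():
    # the decorator calls this function again on every attempt,
    # so each retry gets a fresh coroutine
    return await client.get(f"{HOST}/api/my_method")

result = await get_my_method()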
As for your original intent: it would actually be possible to introspect a coroutine object, its internal variables and the parameters passed to it, in order to recreate a coroutine object that has not yet started. However, that would involve using introspection to locate the original callable and making the call again; it would be cumbersome, could be slow, and offers little gain. I will outline the requirements nonetheless:
A coroutine object has the cr_code and cr_frame attributes. You'd need to retrieve the function associated with the code object in cr_code (probably using the garbage collector API), or recreate a new function reusing the same code object by calling types.FunctionType with the same parameters; the local and global variables can be retrieved from the frame object in cr_frame.
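Purely as an illustration of those attributes (this does not rewind anything), a minimal sketch of what can be read off a coroutine object that has not started yet:

coro = client.get(f"{HOST}/api/my_method")   # created but not awaited

code = coro.cr_code                  # code object of the underlying function
frame = coro.cr_frame                # frame holding the locals before the first await
passed_args = dict(frame.f_locals)   # the arguments the coroutine was created with

coro.close()                         # discard the unstarted coroutine cleanly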
This can be fixed in the following way:
import asyncio
from httpx import AsyncClient, Response

async def retry_http(http_call, *args, **kwargs):
    count = 5
    status, response = None, None
    while count > 0:
        response: Response = await http_call(*args, **kwargs)
        if response.status_code == 200:
            break
        count -= 1
        if response.status_code in (502, 504):
            await asyncio.sleep(2)
        else:
            break
    if response.status_code != 200:
        return {
            "success": False,
            "result": {
                "error": "Response Error",
                "response_code": response.status_code,
                "response": response.text,
            }
        }
    return response.json()

client = AsyncClient()
await retry_http(client.get, f"{HOST}/api/my_method")
In a Python/Flask application, I have defined this endpoint, which I expect to return 404 if a client requests an id that doesn't exist in my database.
For example:
@app.route('/plants/<int:plant_id>', methods=['GET'])
def get_plant(plant_id):
    try:
        plant = Plant.query.filter(Plant.id == plant_id).one_or_none()
        if plant is None:
            abort(404)
        return jsonify({
            'success': True,
            'plant': plant.format()
        })
    except:
        abort(422)
The problem is that when I execute it, it always seems to raise an exception and returns 422.
If I remove the try/except, it works as expected and returns the 404. But then I lose the ability to handle exceptions, so that's not a solution for me.
What am I doing wrong? How can I correctly trigger the 404 without making 404 the except branch's return?
Thanks!!
Ok, finally I was able to understand and solve it. I'm posting my findings here so they might help someone in the future. :)
The answer is very basic, actually: every time I call abort, I trigger an exception.
So, when I aborted, no matter which status code I used, I fell into my except clause, which was returning 422 by default.
What I did to solve it was to implement a custom RequestError, and every time I have a controlled error, I raise my custom error, whose output I can handle separately.
This is the implementation of my custom error:
class RequestError(Exception):
    def __init__(self, status):
        self.status = status

    def __str__(self):
        return repr(self.status)
And I've changed my route implementation to something like this:
(note that I now handle the custom error exception first, and only then fall back to a generic 422 error)
@app.route('/plants/<int:plant_id>', methods=['GET'])
def get_plant(plant_id):
    try:
        plant = Plant.query.filter(Plant.id == plant_id).one_or_none()
        if plant is None:
            raise RequestError(404)
        return jsonify({
            'success': True,
            'plant': plant.format()
        })
    except RequestError as error:
        abort(error.status)
    except:
        abort(422)
And that does it! \o/
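As a side note, abort raises a werkzeug HTTPException under the hood, so another option (a sketch, not the fix used above) is to let those pass through and map only unexpected errors to 422:

from werkzeug.exceptions import HTTPException

@app.route('/plants/<int:plant_id>', methods=['GET'])
def get_plant(plant_id):
    try:
        plant = Plant.query.filter(Plant.id == plant_id).one_or_none()
        if plant is None:
            abort(404)
        return jsonify({'success': True, 'plant': plant.format()})
    except HTTPException:
        # abort(404) lands here; re-raise so Flask returns the intended status
        raise
    except Exception:
        abort(422)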
I am unable to understand how async works. I send simple GET requests to Google through a proxy, inside an async method, to check the proxy's validity. I get the error:
object Response can't be used in 'await' expression
Method to get proxies. Code for getting the list of proxies is copied from a tutorial:
def get_proxies(self, number_of_proxies=10):
    """Returns max 10 free https proxies by scraping
    free-proxy website.
    #arg number_of_proxies to be returned"""
    try:
        if number_of_proxies > 10: number_of_proxies = 10
        url = 'https://abc-list.net/'
        response = requests.get(url)
        response_text = response.text
        parser = fromstring(response_text)
        proxies = set()
        for i in parser.xpath('//tbody/tr'):
            if len(proxies) >= number_of_proxies:
                break
            if i.xpath('.//td[7][contains(text(),"yes")]'):
                # Grabbing IP and corresponding PORT
                proxy = ":".join([i.xpath('.//td[1]/text()')[0], i.xpath('.//td[2]/text()')[0]])
                proxies.add(proxy)
        return proxies
    except Exception as e:
        print('Exception while abc list from url: ', e)
        return None
Method to check the validity of proxy:
async def is_valid_proxy(self, proxy):
    """Check the validity of a proxy by sending
    get request to google using the given proxy."""
    try:
        response = await requests.get("http://8.8.4.4", proxies={"http": proxy, "https": proxy}, timeout=10)
        if await response.status_code == requests.codes.ok:
            print('got a valid proxy')
            return True
    except Exception as e:
        print('Invalid proxy. Exception: ', e)
        return False
Method to get the valid proxies:
async def get_valid_proxies(self, number_of_proxies=10):
    proxies = self.get_proxies(number_of_proxies)
    print(len(proxies), proxies)
    valid_proxies = []
    valid_proxies = await asyncio.gather(*[proxy for proxy in proxies if await self.is_valid_proxy(proxy)])
    return valid_proxies
And a call to the above method:
proxies = asyncio.run(get_valid_proxies())
Now, the best solution for me would be to check the validity of each proxy inside get_proxies(self, number_of_proxies=10) before adding it to the proxies set, but I have no clue how to achieve that in an async way. So I tried the workaround above, but that is not working either. The method works without async, but I call it many times and it is very slow, so I would like to make it async.
Thank you.
Now, after changing the above code to use aiohttp, it still throws an exception, and it doesn't look async: the requests seem to be sent one after another, and it is as slow as before.
New is_valid_proxy:
async def is_valid_proxy(self, proxy):
    try:
        async with aiohttp.ClientSession() as session:
            session.proxies = {"http": proxy, "https": proxy}
            async with session.get('http://8.8.4.4',
                                   timeout=10) as response:
                status_code = await response.status_code
                # response = await requests.get("https://www.google.com/", proxies={"http": proxy, "https": proxy}, timeout=10)
                # if await response.status_code == requests.codes.ok:
                if status_code == requests.codes.ok:
                    print('got a valid proxy')
                    return True
    except Exception as e:
        print('Invalid proxy. Exception: ', e)
        return False
It won't even display the error or exception. Here is the message:
Invalid proxy. Exception:
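For reference, a minimal sketch of how the check could be made genuinely concurrent, assuming aiohttp: the proxy goes into the per-request proxy= argument (setting session.proxies has no effect, since aiohttp reads no such attribute), the status attribute is response.status, and the helper name check_proxies is made up for illustration:

import asyncio
import aiohttp

async def is_valid_proxy(proxy):
    # aiohttp expects a single proxy URL per request, e.g. "http://1.2.3.4:8080"
    try:
        timeout = aiohttp.ClientTimeout(total=10)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.get("http://www.google.com/", proxy=f"http://{proxy}") as response:
                return response.status == 200
    except Exception:
        return False

async def check_proxies(proxies):
    # run all checks concurrently and keep only the proxies that passed
    results = await asyncio.gather(*(is_valid_proxy(p) for p in proxies))
    return [p for p, ok in zip(proxies, results) if ok]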
I am trying to raise a custom exception using the Starlette framework in Python. I have an API call that checks some conditions and, depending on the result, should raise an exception.
I have two files, app.py and error.py.
# app.py
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from error import EmptyParamError

async def homepage(request):
    a = 1
    b = 0
    if a == 1:
        raise EmptyParamError(400, "status_code")
    return JSONResponse({'hello': 'world'})

routes = [
    Route("/", endpoint=homepage)
]

app = Starlette(routes=routes, debug=True)
# error.py
from starlette.responses import JSONResponse

class BaseError(Exception):
    def __init__(self, status_code: int, detail: str = None) -> None:
        if detail is None:
            detail = "http.HTTPStatus(status_code).phrase"
        self.status_code = status_code
        self.detail = detail

    async def not_found(self):
        return JSONResponse(content=self.title, status_code=self.status_code)

class EmptyParamError(BaseError):
    """ Error is raised when group name is not provided. """
    status_code = 400
    title = "Missing Group Name"
When the condition is true, I want to raise the exception, but it is not returning the JSONResponse; it prints the stack trace to the console instead.
Please let me know if anything is wrong here.
Adding a try block resolved the issue:
try:
    if a == 1:
        raise InvalidUsage(100, "invalid this one")
    if b == 0:
        raise EmptyParamError("this is empty paramuvi")
except InvalidUsage as e:
    return JSONResponse({'hello': str(e.message)})
except EmptyParamError as e:
    return JSONResponse({'hello': str(e.message)})
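Alternatively, Starlette can map custom exceptions to JSON responses without a try block in every endpoint, by registering an exception handler when the app is created. A sketch, assuming the BaseError class from error.py above:

from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from error import BaseError, EmptyParamError

async def base_error_handler(request, exc: BaseError):
    # turn the raised error into the JSON response the endpoint expects
    return JSONResponse({'detail': exc.detail}, status_code=exc.status_code)

async def homepage(request):
    raise EmptyParamError(400, "Missing Group Name")

routes = [Route("/", endpoint=homepage)]
app = Starlette(routes=routes, debug=True,
                exception_handlers={BaseError: base_error_handler})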
I have the following function, a generic function that makes an API call based on the input hostname and data. It constructs an HTTP request and returns the response. This function can throw four types of exceptions (invalid URL, timeout, auth error and status check). How can I mock and test the exceptions raised in the API call using pytest? What is the best way to test the exceptions raised from an API call?
import ssl
import urllib
import urllib.error
import urllib.parse
import urllib.request
import xml
import xml.etree.ElementTree as ET

def call_api(hostname, data):
    '''Function to make API call
    '''
    # Todo:
    # Context to separate function?
    # check response for status codes and return response.read() if success
    # Else throw exception and catch it in calling function
    error_codes = {
        "1": "Unknown command",
        "6": "Bad Xpath",
        "7": "Object not present",
        "8": "Object not unique"
    }
    url = "http://" + hostname + "/api"
    encoded_data = urllib.parse.urlencode(data).encode('utf-8')
    try:
        response = urllib.request.urlopen(url, data=encoded_data,
                                          timeout=10).read()
        root = ET.fromstring(response)
        if root.attrib.get('status') != "success":
            error_code = root.attrib.get('code')
            raise Exception(error_codes.get(error_code, "UnknownError"),
                            response)
        else:
            return response
    except urllib.error.HTTPError as e:
        raise Exception(f"HttpError: {e.code} {e.reason} at {e.url}", None)
    except urllib.error.URLError as e:
        raise Exception(f"Urlerror: {e.reason}", None)
If I call this function:
def create_key(hostname, username, password):
    hostname = 'myhost ip'
    data = {
        'type': 'keygen',
        'username': username,
        'password': password
    }
    username = 'myuser'
    password = 'password'
    response = call_api(hostname, data)
    return response
I will get a response like the following:
b"<response status = 'success'><result><key>mykey</key></result></response>"
You can mock error raising via the side_effect parameter:
Alternatively side_effect can be an exception class or instance. In this case the exception will be raised when the mock is called.
In your case, this can be used like this (assuming call_api is defined in module foo):
import pytest
from unittest.mock import patch
from foo import create_key  # assuming create_key is also defined in module foo

def test_api():
    with patch('foo.call_api', side_effect=Exception('mocked error')):
        with pytest.raises(Exception) as excinfo:
            create_key('localhost:8080', 'spam', 'eggs')
        assert str(excinfo.value) == 'mocked error'
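If the goal is to exercise the exception handling inside call_api itself rather than replace it, a sketch that patches urllib.request.urlopen instead, so call_api's own except branches run (again assuming call_api lives in module foo):

import urllib.error
import pytest
from unittest.mock import patch
from foo import call_api

def test_call_api_url_error():
    # urlopen raising URLError should be re-raised by call_api as a generic Exception
    with patch('urllib.request.urlopen',
               side_effect=urllib.error.URLError('connection timed out')):
        with pytest.raises(Exception) as excinfo:
            call_api('myhost', {'type': 'keygen'})
        assert 'Urlerror' in str(excinfo.value)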