Where do I place the validation exception code in my pyramid app? - python-3.x

I have a model file in my pyramid app, and inside of that model file, I am doing automatic validation before an insert using formencode. A failed validation inside of my model file raises a formencode.Invalid exception.
I found the following documentation on how to set up a custom exception view, but I am unclear on a couple of things:
Where do I put the actual exception view code? This is clearly view code, so it should be in a view somewhere. Should it be in its own view file? I've pasted the code I need to place at the bottom.
How do I make the rest of my pyramid app aware of this code? The only obvious way that I see is to import the view file inside of my model files, but that gives me a bad taste in my mouth. I'm sure there must be another way to do it, but I'm not sure what that is.
Code to place:
from pyramid.response import Response
from pyramid.view import view_config
from helloworld.exceptions import ValidationFailure

@view_config(context=ValidationFailure)
def failed_validation(exc, request):
    response = Response('Failed validation: %s' % exc.msg)
    response.status_int = 500
    return response

1) Anywhere in your project directory. I made a new file called exceptions.py where I place all my HTTP status code and validation exceptions. I placed this file in the same directory as my views.py, models.py, etc.
2) That bad taste in your mouth is Python, because importing methods is the Pythonic way to go about using classes and functions in other files, rather than some sort of magic. Might be weird at first, but you'll quickly get used to it. Promise.
I want to note that in your models.py file, you're only going to import ValidationFailure from helloworld.exceptions and raise ValidationFailure wherever you want. You aren't importing the whole view function you've defined (failed_validation). That's why the context for that view function is ValidationFailure: Pyramid knows to route to it when you simply raise ValidationFailure.
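To make that concrete, here is a minimal sketch of the model side. This is not the asker's actual code: the exception's shape is inferred from the view's use of exc.msg, and insert_user and its field check are invented stand-ins for a formencode schema raising formencode.Invalid.

```python
# helloworld/exceptions.py (assumed shape, based on exc.msg in the view)
class ValidationFailure(Exception):
    def __init__(self, msg):
        self.msg = msg
        super().__init__(msg)

# helloworld/models.py (sketch) -- convert a failed validation into ValidationFailure
def insert_user(data):
    if not data.get("name"):  # stand-in for formencode raising formencode.Invalid
        raise ValidationFailure("name: Please enter a value")
    return data
```

Because the view is wired up through context=ValidationFailure (picked up by config.scan() or equivalent), the model only ever raises the exception; it never imports the view.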

Related

Azure ML Studio: How to get data directory path from class DatasetConsumptionConfig?

I am trying to read my data files from an Azure ML dataset. My code is as follows:
from azureml.core import Dataset
dataset = Dataset.get_by_name(aml_workspace, "mydatasetname")
dataset_mount = dataset.as_named_input("mydatasetname").as_mount(path_on_compute="dataset")
The type of dataset_mount is class DatasetConsumptionConfig. How do I get the actual directory path from that class? I can do it in a very complicated manner by passing the dataset_mount into a script as follows:
PythonScriptStep(script_name="myscript.py", arguments=["--dataset_mount", dataset_mount], ...)
Then, when that script step runs, "myscript.py" mysteriously receives the real directory path of the data in the "--dataset_mount" argument, instead of the DatasetConsumptionConfig itself. So DatasetConsumptionConfig somehow gets converted into a directory path under the hood. However, that's an overcomplicated and roundabout way to get this done. Is there any direct way to get the data path from DatasetConsumptionConfig? Or have I misunderstood something here?
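For reference, the script side of that PythonScriptStep workaround is just an ordinary argparse read; by the time myscript.py runs, Azure ML has replaced the DatasetConsumptionConfig argument with the resolved local path. A sketch, with the argument name taken from the question:

```python
# myscript.py (sketch): Azure ML substitutes the resolved mount path for the
# DatasetConsumptionConfig object before the script is invoked.
import argparse
import os

def get_data_dir(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_mount", type=str,
                        help="resolved to a local directory path at run time")
    args = parser.parse_args(argv)
    return args.dataset_mount

if __name__ == "__main__":
    data_dir = get_data_dir()
    if data_dir:
        print("data directory:", data_dir)
        print("contents:", os.listdir(data_dir))
```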

Handling staleelement reference exception Coverfox website using Python pytest selenium

content of test_homepage.py
def test_insurance_pages_open_successfully_using_fixtures(page_object, load_home_page, insurance_data):
    page_object.open_insurance(insurance_data)
    assert page_object.ui.contains_text('Buying two-wheeler insurance from Coverfox is simple')
open_insurance function in page object home_page.py
def open_insurance(self, insurance):
    self._ui.move_to(locators.drp_dwn_insurance)
    self._ui.click(format_locator(locators.lnk_insurance, insurance))
move_to function in another file.py
def move_to(self, locator):
    to_element = self.find_element(locator)
    print("element value", to_element)
    self.action.move_to_element(to_element).perform()
What I am trying to do here:
test_insurance_pages_open_successfully_using_fixtures takes 3 fixtures as arguments:
1. page_object, which provides a page object at session level
2. load_home_page, which loads the home page again at session level
3. insurance_data, a fixture in conftest.py which supplies a list of link texts read from a CSV file
So, in essence, it will load the page and open all the links one by one for the website https://www.coverfox.com/
The first test case passes for the link "Two-wheelers insurance", but the 2nd test data run fails with a stale element reference exception at the point where it tries to move to the insurance link again (the move_to function).
I am not storing elements anywhere, and the function is written so that it finds the element again each time.
What is causing this? Or does pytest do some sort of element caching in the background?
It seems that you should use a function-level fixture for load_home_page, or refresh the page after you have done some actions.
In the current approach (at least how you described it) you are using the same page and page state for different tests.
Could you please share the fixtures code as well?
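To make the suggestion concrete, here is a rough sketch of the fixture change. The driver/page-object construction (make_page_object, open, quit) is assumed, since the actual fixtures weren't shared; only the scopes matter:

```python
# conftest.py (sketch): keep the expensive driver session-scoped, but reload
# the home page per test so each run gets a fresh DOM and no stale references.
import pytest

@pytest.fixture(scope="session")
def page_object():
    po = make_page_object()          # hypothetical factory from your framework
    yield po
    po.quit()

@pytest.fixture(scope="function")    # was scope="session"
def load_home_page(page_object):
    page_object.open("https://www.coverfox.com/")
```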

Error handling netCDF file in Python

I am extracting data from netCDF files with Python code. I need to check if the netCDF files are in agreement with the CORDEX standards (CORDEX is a coordinated effort to carry out modelling experiments with regional climate models). For this I need to access an attribute of the netCDF file. If the attribute is not found, then the code should go to the next file.
A snippet of my code is as follows:
import netCDF4

cdf_dataset = netCDF4.Dataset(file_2read)
try:
    cdf_domain = cdf_dataset.CORDEX_domain
    print(cdf_domain)
except:
    print('No CORDEX domain found. Will exit')
....some more code....
When the attribute "CORDEX_domain" is available everything is fine. If the attribute is not available then the following exception is raised.
AttributeError: NetCDF: Attribute not found
This is a third-party exception, which I would say should be handled like any general one, but it is not: I am not able to get my "print" inside the "except" block to work, or anything else for that matter. Can anyone point me in the right direction for handling this? Thanks.
There is no need for a try/except block; netCDF4.Dataset has a method ncattrs which returns all global attributes, you can test if the required attribute is in there. For example:
if 'CORDEX_domain' in cdf_dataset.ncattrs():
    do_something()
You can do the same to test if (for example) a required variable is present:
if 'some_var_name' in cdf_dataset.variables:
    do_something_else()
p.s.: "catch alls" are usually a bad idea..., e.g. Python: about catching ANY exception
EDIT:
You can do the same for variable attributes, e.g.:
var = cdf_dataset.variables['some_var_name']
if 'some_attribute' in var.ncattrs():
    do_something_completely_else()
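Since netCDF4 surfaces global attributes as plain Python attributes (which is why cdf_dataset.CORDEX_domain works at all), a getattr with a default is an equivalent spelling of the membership test. A stand-alone sketch, with an invented plain class standing in for the Dataset:

```python
# Sketch: attribute-with-default pattern; FakeDataset stands in for netCDF4.Dataset
class FakeDataset:
    CORDEX_domain = "EUR-11"   # invented value, for illustration only

cdf_dataset = FakeDataset()

domain = getattr(cdf_dataset, "CORDEX_domain", None)
if domain is None:
    print("No CORDEX domain found, skipping file")
else:
    print("domain:", domain)

missing = getattr(cdf_dataset, "driving_model_id", None)   # attribute not present
```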

How do I remove an entire call tree from pycallgraph with a filter

I want to see what's happening with a specific operation in a python3 package I've been working on. I use pycallgraph and it looks great. But I can't figure out how to remove an entire tree of calls from the output.
I made a quick script make_call_graphs.py:
import doms.client.schedule as sched
from pycallgraph import PyCallGraph
from pycallgraph.output import GraphvizOutput
from pycallgraph import Config
from pycallgraph import GlobbingFilter

config = Config()
config.trace_filter = GlobbingFilter(exclude=[
    '_find_and_load',
    '_find_and_load.*',  # Tried a few similar variations
    '_handle_fromlist',
    '_handle_fromlist.*',
])

with PyCallGraph(output=GraphvizOutput(output_file='schedule_hourly_call_graph.png'), config=config):
    sched.hourly()
Before I started using the GlobbingFilter, _find_and_load was at the top of the tree outside of my doms library call stack. It seems that the filter only removes the top level block, but every subsequent call remains in the output. (See BEFORE and AFTER below)
Obviously I can read the result and copy every single call I don't want to see into the filter, but that is silly. What can I do to remove that whole chunk of stuff outside my doms box? Is there a RecursiveFilter or something I could use?
BEFORE:
AFTER:
The solution was much easier than I originally thought and right in front of me: the include kwarg given to the GlobbingFilter.
config.trace_filter = GlobbingFilter(include=['__main__', 'doms.*'])
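GlobbingFilter's patterns are shell-style globs (the same semantics as the stdlib fnmatch module, as far as I can tell), so an include list like this keeps __main__ and everything under doms while dropping the importlib machinery. A quick stand-alone check of the matching behaviour:

```python
# Stand-alone illustration of the glob matching an include list performs
from fnmatch import fnmatch

include = ['__main__', 'doms.*']

def kept(name):
    return any(fnmatch(name, pattern) for pattern in include)

print(kept('__main__'))                     # True
print(kept('doms.client.schedule.hourly'))  # True
print(kept('_find_and_load'))               # False
```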

Expression Engine - Import Member Data, create XML file Parse Error

I am trying to use the the Utilities > Member Import Utility to create an XML file that I can then use to import member data.
I have about seventy members to import. I was able to work through the mapping with what appeared to be a good match, but when I click the button, I get the following error:
Line does not match structure
I am using a .csv file to bring in the data and I have selected comma as the delimiter. I can map the fields, but when I click Create XML I get the parse error.
Any suggestions on how to resolve this?
Thanks.
I found the answer. It appears to automatically understand that the path is relative. When I simply put the name of the file in, it went in without the parse error.
So the correct path is: customer.txt
However, because the username is a number and not alphanumeric, it cannot be imported.
