How do I supply a configured value to a @view_config-decorated function or class?
E.g.

@view_config(route_name='example', renderer=some_config['template.name'])
class MyClass(BaseView):
    ...

Or

@view_defaults(route_name='example', renderer=some_config['template.name2'])
class MyClass2(BaseView):
    ...

Or

@view_config(route_name='example', renderer=some_config['template.name3'])
def method3(request):
    ...
It's very hard to know where to start: I'm trying to edit a Pyramid plugin that pulls its configuration together in an includeme function, so there's nothing obvious I can hook into, and it's hard to know what's available to the @view_config decorator.
You can add views using declarative configuration (what you are doing now with @view_config), or alternatively using imperative configuration by calling the config.add_view() method.
In this case, since you need access to the Pyramid registry and the deployment settings, it is easier to add the views imperatively.
In your __init__.py (or the plugin's includeme function) you can do:

def includeme(config):
    settings = config.registry.settings
    # You need to call config.add_route() to map the view to a URL too
    config.add_route("foobar", "/foobar")
    config.add_view("views.MyClass", route_name="foobar",
                    renderer=settings["template.name3"])
Then in your views.py:
class MyClass(BaseView):
    pass
@view_config() and add_view() accept the same arguments.
I think you can also mix @view_config and add_view() arguments for the same view, but I am not sure about this. Hope this helps.
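For comparison, here is a minimal sketch (the route name and template path are made up for illustration) of the same view registered both ways; note that the declarative form only takes effect once config.scan() runs:

from pyramid.view import view_config

@view_config(route_name="foobar", renderer="templates/foo.pt")
def foo_view(request):
    return {}

# Equivalent imperative registration, no scan() needed:
# config.add_view(foo_view, route_name="foobar", renderer="templates/foo.pt")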
Firstly, thank you for taking the time to read this and offer input. It is greatly appreciated.
Question: What approach can we take to keep the same public API of a class that currently uses multiple mixins, but refactor it internally so it is composed of objects that do the same work as the mixins? Autocomplete is a must, so runtime dynamics such as hacking things on via __getattr__ or similar are out. (I know this depends on the runtime environment, i.e. IPython vs PyCharm etc.; for the sake of this question, assume PyCharm, which I don't think can fully leverage __dir__.)
Accompanying Information:
I am writing a little assertion library in python and I have a core class which is instantiated with a value and subsequently inherits various assertion capabilities against that value via a growing number of mixin classes:
import typing

class Asserto(StringMixin, RegexMixin):
    def __init__(self, value: typing.Any, type_of: str = AssertTypes.HARD, description: typing.Optional[str] = None):
        self.value = value
        self.type_of = type_of
        self.description = description
These mixin classes offer various assertion methods for particular types; here is a quick example of one:

from __future__ import annotations

class StringMixin:
    def ends_with(self, suffix: str) -> StringMixin:
        if not self.value.endswith(suffix):
            self.error(f"{self.value} did not end with {suffix}")
        return self

    def starts_with(self, prefix: str) -> StringMixin:
        if not self.value.startswith(prefix):
            self.error(f"{self.value} did not start with {prefix}")
        return self
I would like to refactor the Asserto class to compose itself of various implementations of some sort of Assertable interface, rather than cobble together a god class with mixins; I'm likely to have 10+ mixins by the time I am finished.
Is there a way to achieve the same public-facing API as this mixin setup, so that client code has access to everything through Asserto(value).check_something(...), but using composition internally?
I could define every single method on the Asserto class and just delegate to the appropriate concrete object internally, but then I am making a massive god class anyway, and the composition feels like a pointless endeavour in that instance.
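To make that trade-off concrete, here is a minimal sketch of the explicit-delegation approach; StringAssertions and its wiring are illustrative, not taken from the actual library:

class StringAssertions:
    # Owns the string checks and reports back through the owning Asserto.
    def __init__(self, owner: "Asserto") -> None:
        self._owner = owner

    def ends_with(self, suffix: str) -> "Asserto":
        if not self._owner.value.endswith(suffix):
            self._owner.error(f"{self._owner.value} did not end with {suffix}")
        return self._owner

class Asserto:
    def __init__(self, value) -> None:
        self.value = value
        self._strings = StringAssertions(self)

    def ends_with(self, suffix: str) -> "Asserto":
        # Explicit delegation: the per-method boilerplate in question,
        # but autocomplete sees a real method on Asserto.
        return self._strings.ends_with(suffix)

    def error(self, message: str) -> None:
        raise AssertionError(message)

Every public method still has to be spelled out once on Asserto, which is the price of keeping static autocomplete without metaprogramming.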
For example, in client code I'd like all the current mixin methods to be available on an Asserto instance with autocomplete:
def test_something():
    Asserto("foo").ends_with("oo")
Thank you for your time. Perhaps using the mixin approach is the correct way here, but it feels kind of clunky.
I am trying to abstract away some of the route class logic (i.e. I am looking to dynamically generate routes). api.add_resource seemed like the right place to do this.
So this is what I am trying to do:
# app.py
from flask import Flask
from flask_restplus import Api, Resource, fields

from mylib import MyPost

app = Flask(__name__)
api = Api(app)

# Define my model
json_model = api.schema_model(...)

api.add_resource(
    MyPost,
    '/acme',
    resource_class_kwargs={"json_model": json_model}
)
And then in mylib:
# mylib.py
from flask_restplus import Resource

def validate_endpoint(f):
    def wrapper(*args, **kwargs):
        return api.expect(json_fprint)(f(*args, **kwargs))
    return wrapper

class MyPost(Resource):
    def __init__(self, *args, **kwargs):
        # Passed in via api.add_resource
        self.api = args[0]
        self.json_model = kwargs['json_model']

    # I can't do this because I don't have access to 'api' here...
    # @api.expect(json_model)
    # So I am trying to make this work:
    @validate_endpoint
    def post(self):
        return {"data": 'some data'}, 200
I don't have access to the global api object here, so I can't call @api.expect(json_model). But I do have access to api and json_model inside the post method, which is why I am trying to create my own validate_endpoint decorator.
This does not work though. Is what I am trying to do here even possible? Is there a better approach I should be taking?
Stop using flask-restplus. That's the most valuable answer I can give you (and anyone else).
Ownership is not there
Flask-restplus is a fork of flask-restful. Some engineers started developing features that suited them. The core maintainer has ghosted the project, so it's been officially forked again as Flask-RESTX.
Poorly designed
I used to love Flask when I was a yout'. I've realized since then that having a global request, application, and config that all magically update is not good design. The application factory pattern (to which flask-restplus conforms) is a style of statefully mutating the application object. First of all, it's hard to test. Second, it means that flask-restplus wraps the app and therefore all of the requests/handlers. How can anyone think that's a good thing? A library whose main feature is endpoint documentation has its filthy hands all over every one of my requests?? (By the way, this is what's leading to your problem above.) Because my post is serious and thoughtful, I'm skipping my thoughts on the Resource class pattern, as it would probably push me into the waters of ranting.
Random Feature Set
A good library has a single purpose and does that single thing well. Flask-restplus does 15 things (masking, swagger generation, postman generation, marshaling, request arg validation). Some features you can't even tell are in the library's code by reading the docs.
My solution to your problem
If you want to document your code via function decorators and models, use a tool that does that alone and does it well; one that won't touch your handlers or affect your actual request decorators. Use oapispec for swagger generation. For the other features of flask-restplus, you've got marshmallow for marshaling request/response data, pydantic for validating request objects and args, and so on.
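For instance, here is a minimal sketch (AcmePayload and the route are made up for illustration) of validating a POST body with pydantic in a plain Flask handler, with nothing wrapping the app or the request:

from flask import Flask, jsonify, request
from pydantic import BaseModel, ValidationError

app = Flask(__name__)

class AcmePayload(BaseModel):
    name: str
    count: int = 1

@app.route("/acme", methods=["POST"])
def acme_post():
    try:
        # Validation lives in the handler; no framework sits between
        # Flask and this function.
        payload = AcmePayload(**request.get_json(force=True))
    except ValidationError as exc:
        return jsonify(exc.errors()), 400
    return {"data": payload.dict()}, 200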
By the way, I know all this because I had to build an API with it. After weeks of fighting the framework, I forked it, ripped it apart, created oapispec, and trashed flask-restplus from the project.
I want to be able to dispatch the instantiation of a class depending on a parameter.
This has been asked here, and the linked answer provides a good solution, but the use of globals()[name]() seems a bit ugly to me. Also, I understand how it works when the file is the main one, but I'm not sure why it works when the module is imported somewhere else.
Now, instead of having all possible loader classes available, I'd like to be able to define a subset of the available ones; basically, to have a list/dict that toggles availability. The use cases are, for example:
I'm working on a new loader, but don't want it to be used
I'd like to provide a list of the available loaders.
What I've tried, based on the previously linked answer, is to define a dict whose keys are user-friendly strings and whose values are the classes that act as loaders.
loaders = {'sqlite': sqlite_loader, 'mysql': mysql_loader}

class loader:
    @staticmethod
    def get_loader(name):
        return loaders[name]()

    def available_loaders(self):
        return [k for k in loaders.keys()]

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

print(type(loader.get_loader('sqlite')))
print(type(loader.get_loader('mysql')))
This code fails with the error that sqlite_loader is not defined. That part I understand, but I haven't been able to figure out what to add to the loaders dictionary so that it can find the loader classes.
loaders must be defined after the classes it refers to are defined. Just move the *_loader definitions to the top.
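Here is a minimal rearrangement of the question's code showing the fix. Note that get_loader looks loaders up at call time, so the dict only needs to exist before the first call; and globals() inside a module always refers to that module's own namespace, which is why the linked approach also works when the module is imported elsewhere:

class loader:
    @staticmethod
    def get_loader(name):
        # 'loaders' is resolved when get_loader is called, not when the
        # class is defined, so it can be filled in below.
        return loaders[name]()

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

# Defined after the classes exist, so the names resolve.
loaders = {'sqlite': sqlite_loader, 'mysql': mysql_loader}

print(type(loader.get_loader('sqlite')))  # <class '__main__.sqlite_loader'>
print(type(loader.get_loader('mysql')))   # <class '__main__.mysql_loader'>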
What I need: a class with two parents, which are ContextBoundObject and another class.
Why: I need to access the ContextBoundObject to log the method calls.
Composition works? As of now, no (types are not recognized, among other things).
Are there other ways to do this? Yes, but none so automatable and without third-party components (maybe a T4 template could do it, but I'm no expert).
A more detailed explanation.
I need to extend System classes (some of which already have MarshalByRefObject, the parent of ContextBoundObject, as an ancestor, for example ServiceBase and FileSystemWatcher, and some of which don't, for example Exception and Timer) to access some inner workings of the framework, so I can log method calls (for now; in the future it may change).
This way I only have to add a class name to the objects I want logged, instead of adding logging calls to every method, but obviously I can't do this:
public class MyService : ServiceBase, ContextBoundObject, IDisposable {
    public MyService() {}
    public void Dispose() {}
}
so one could try the usual solution, interfaces, but then if I call Run as in:
ServiceBase.Run(new MyService());
using a hypothetical interface IServiceBase it wouldn't work, because the type ServiceBase is not castable to IServiceBase -- it doesn't inherit from any interface. The problem is even worse with exceptions: throw only accepts a type descending from Exception.
The reverse, producing an IContextBoundObject interface, doesn't seem to work either: the logging mechanism doesn't work through methods, so I don't need to implement any, just an attribute and some small internal classes (and inheriting from ContextBoundObject, not even from MarshalByRefObject, which the metadata presents as practically the same).
From what I can see, deriving from ContextBoundObject puts the extended class behind a proxy (probably so that method calls go through SyncProcessMessage(IMessage) and can thus be intercepted and logged). Maybe there's a way to do this without inheritance, or maybe there are pre- or post-compile techniques for surrounding methods with logging calls (like T4 text templates); I don't know.
If someone wants to give this a look, I used a customized version of MSTestExtentions in my program to do the logging (of the method calls).
Any ideas are appreciated. There could be the need for more explanations, just ask.
Logging method calls is usually done using attributes to annotate the classes or methods for which you want logging enabled. This is called Aspect-Oriented Programming (AOP).
For this to work, you need software that understands those attributes and post-processes your assembly, adding the necessary code to the methods/classes that have been annotated.
For C# there exists PostSharp. See here for an introduction.
Experimenting with proxies, I found a way that apparently logs explicit calls.
Essentially I create a RealProxy like in the MSDN example, then obtain the TransparentProxy and use that as the normal object.
The logging is done in the Invoke method overridden in the customized RealProxy class.
static void Main(){
    ...
    var ServiceClassProxy = new ServiceRealProxy(typeof(AServiceBaseClass), new object[]{ /*args*/ });
    aServiceInstance = (AServiceBaseClass)ServiceClassProxy.GetTransparentProxy();
    ServiceBase.Run(aServiceInstance);
    ...
}
In the proxy class, Invoke is overridden like this:
class ServiceRealProxy : RealProxy {
    ...
    [SecurityPermissionAttribute(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public override IMessage Invoke(IMessage myIMessage){
        // remember to set the "__Uri" property you get in the constructor
        ...
        /* logging before */
        IMessage myReturnMessage = ChannelServices.SyncDispatchMessage(myIMessage);
        /* logging after */
        ...
        return myReturnMessage;
        // it could be useful to make a switch over all the types derived from IMessage; I see 18 of them, from
        // System.Runtime.Remoting.Messaging.ConstructionCall
        // ... to
        // System.Runtime.Remoting.Messaging.TransitionCall
    }
    ...
}
I still have to investigate more extensively, but the logging happened. This isn't a complete answer to my original problem, because I still have to test it on classes that don't inherit from MarshalByRefObject.
To get Cucumber running with my app and subdomains, I read that I should add default parameters to default_url_options.
However, I can't seem to find a way to add default parameters to the url_for_event helper that Apotomo gives. I believe this would be the first (if not the only) step to getting integration tests, Apotomo, and subdomains to work.
I got an answer from Paul Hagstrom in the Apotomo mailing list:
class YourBaseWidget < Apotomo::Widget
  def default_url_options
    ...
  end
end

class YourOtherWidgets < YourBaseWidget
  ...
end
This works a lot like how most of your Rails controllers inherit from ApplicationController. Thus, anything you apply to ApplicationController will apply, by inheritance, to your child controllers.