Pyramid WebTest not catching ContextualVersionConflict

I have a Pyramid 1.10 application that I start with pserve. When I start the application, it crashes with:
File "/home/cquiros/data/projects2017/personal/software/env_formshare/lib/python3.6/site-packages/pkg_resources/__init__.py", line 783, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.11.3 (/home/cquiros/data/projects2017/personal/software/env_formshare/lib/python3.6/site-packages), Requirement.parse('protobuf==3.6.1'), {'mysql-connector-python'})
However, if I run the WebTest checks with the following code, no error is reported:
import unittest


class FunctionalTests(unittest.TestCase):
    def setUp(self):
        from .config import server_config
        from formshare import main
        app = main(None, **server_config)
        from webtest import TestApp
        self.testapp = TestApp(app)
I can see that TestApp uses paste.deploy.loadapp, so why doesn't the test report the ContextualVersionConflict error?

I just added pkg_resources.require("my_app") to the tests to catch it.
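For reference, a minimal sketch of how that check can be wired into setUp; the distribution name "formshare" is an assumption based on the import above (the original post used the placeholder "my_app"):

import unittest
import pkg_resources


class FunctionalTests(unittest.TestCase):
    def setUp(self):
        # Resolves the distribution's pinned requirements and raises
        # ContextualVersionConflict (a VersionConflict subclass) if an installed
        # package violates them, mirroring what pserve does at start-up.
        pkg_resources.require("formshare")

        from .config import server_config
        from formshare import main
        app = main(None, **server_config)
        from webtest import TestApp
        self.testapp = TestApp(app)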

Related

Error launching Blob Trigger function in Azure Functions: expected str, bytes or os.PathLike object, not PosixPath

My problem is: when I try to execute a freshly uploaded Python function in an Azure Function App service and launch it (no matter whether I use a blob trigger or an HTTP trigger), I always get the same error:
Exception while executing function: Functions.TestBlobTrigger Result: Failure
Exception: TypeError: expected str, bytes or os.PathLike object, not PosixPath
Stack: File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py", line 284, in _handle__function_load_request
func = loader.load_function(
File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 40, in call
return func(*args, **kwargs)
File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/loader.py", line 53, in load_function
register_function_dir(dir_path.parent)
File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/loader.py", line 26, in register_function_dir
_submodule_dirs.append(fspath(path))
Why is this happening? When the function is successfully deployed, I upload a file to a blob in order to trigger the function, but I always get the same error, caused by the pathlib library (https://pypi.org/project/pathlib/). I have written a very simple function that works in my local VS Code and just prints a message.
import logging
import configparser
import azure.functions as func
from azure.storage.blob import BlockBlobService
import os
import datetime
import io
import json
import calendar
import aanalytics2 as api2
import time
import pandas as pd
import csv
from io import StringIO
def main(myblob: func.InputStream):
    logging.info("BLob Trigger function Launched ")
    blob_bytes = myblob.read()
    blobmessage = blob_bytes.decode()
    func1 = PythonAPP.callMain()
    func1.main(blobmessage)
The PythonAPP class is:
class PythonAPP:
    def __init__(self):
        logging.info('START extractor. ')
        self.parameter = "product"

    def main(self, message1):
        var1 = "--"
        try:
            var1 = "---"
            logging.info('END: ->paramet ' + str(message1))
        except Exception as inst:
            logging.error("Error PythonAPP.main : " + str(inst))
        return var1
My requirements.txt file is:
azure-storage-blob== 0.37.1
azure-functions-durable
azure-functions
pandas
xlrd
pysftp
openpyxl
configparser
PyJWT==1.7.1
pathlib
dicttoxml
requests
aanalytics2
I've created this simple function in order to check whether I can upload the simplest possible example to Azure Functions. Are there any dependencies that I am forgetting?
Checking the status of the functions, I found this:
------------UPDATE1--------------------
The function is failing because of the pathlib import: the requirements of the aanalytics2 library pull in pathlib, and that breaks the Azure Function. Please see the requirements.txt file at the following link: https://github.com/pitchmuc/adobe_analytics_api_2.0/blob/master/requirements.txt
Can I exclude it somehow?
Well, I can't provide an answer for that, but I made a workaround. I created a copy of the library in a GitHub repository, and in this copy I erased the references to pathlib in the requirements.txt and setup.py, because this library causes the failure in Azure Function Apps. Then, in the requirements file of the project, reference that copy instead: take the requirements file I wrote above and change the aanalytics2 reference to:
git+git://github.com/theURLtotherepository.git@master#egg=theprojectname
LINKS.
I've checked a lot of examples on Google, but none of them helped me:
Azure function failing after successfull deployment with OSError: [Errno 107]
https://github.com/Azure/azure-functions-host/issues/6835
https://learn.microsoft.com/en-us/answers/questions/39865/azure-functions-python-httptrigger-with-multiple-s.html
https://learn.microsoft.com/en-us/answers/questions/147627/azure-functions-modulenotfounderror-for-python-scr.html
Missing Dependencies on Python Azure Function --> no bundler flag or --build remote
https://github.com/OpenMined/PySyft/issues/3400
This is a bug in the Azure codebase itself, specifically within:
azure-functions-python-worker/azure_functions_worker/dispatcher.py
The problematic code within the dispatcher looks to be setting up the exposed functions with the metadata parameters found within function.json.
You will not encounter the issue if you're not using pathlib within your function app / web app code; if pathlib is available, the issue manifests.
Rather than simple os.path strings, pathlib.Path objects are exposed; deeper down in the codebase there looks to be a conditional import and use of pathlib.
To resolve it, simply remove pathlib from your requirements.txt and redeploy.
You'll need to refactor any of your own code that used pathlib and use the equivalent methods in the os module instead (see the sketch below).
There looks to have been a ticket opened to resolve this around the time of the OP's post, but it is not resolved in the current release.
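As an illustration only (not from the original answer), here is a mapping of common pathlib idioms to their os / os.path equivalents that can be used when refactoring function-app code after dropping pathlib from requirements.txt:

import os

base = "/home/site/wwwroot"

# pathlib.Path(base) / "data" / "input.csv"
path = os.path.join(base, "data", "input.csv")

# Path(path).name, Path(path).suffix, Path(path).parent
name = os.path.basename(path)        # "input.csv"
suffix = os.path.splitext(path)[1]   # ".csv"
parent = os.path.dirname(path)       # ".../data"

# Path(path).exists(), Path(parent).mkdir(parents=True, exist_ok=True)
exists = os.path.exists(path)
os.makedirs(parent, exist_ok=True)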

Robot Framework test fails when trying to import another module in a test or keyword file

I am trying to set up a custom test library and user keyword using Robot Framework.
I created my test / user keyword file as hello.py, shown below.
Please look at the import statement in hello.py:
if I comment out the import statement (import src.factory.serverconnectfactory) and run, the test case PASSES;
if I uncomment the import statement, the test case FAILS, and I don't know why.
I googled a lot but did not find any clue, and even reading the Robot Framework user guide did not help.
How can I import src.factory.serverconnectfactory and use it in hello.py?
Example: hello.py
from robot.api.deco import keyword
# import src.factory.serverconnectfactory  # <-- this line is the issue


class helloworld:
    def __init__(self):
        pass

    @keyword("print hello")
    def printhelloworld(self):
        return "Hello World"
My robot test file:
*** Settings ***
Library    ..//..//keywordslib//helloworld.py

*** Test Cases ***
This is some test case
    print hello
The error I used to get is:
> Importing test library failed: ModuleNotFoundError: No module named
Thanks in advance.
Solution:
Robot Framework expects the file name and the class name to be the same (see the sketch below).
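A minimal sketch of a layout that satisfies this rule: the keyword file is named after the class it contains. The comment about --pythonpath is an assumption about how to make the src package importable, not part of the original answer:

# keywordslib/helloworld.py -- file name matches the class name, as Robot Framework expects
from robot.api.deco import keyword

# Resolvable only if the project root (the directory containing src/) is on the
# module search path, e.g. by running:  robot --pythonpath . tests/
import src.factory.serverconnectfactory


class helloworld:
    @keyword("print hello")
    def printhelloworld(self):
        return "Hello World"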

Azure function import pyodbc error after publishing

First, thanks a ton for all your posts and responses that helped me immensely in getting this far!
I have successfully created an Azure function that imports pyodbc and azure.functions, as shown below.
import logging
import pyodbc
import json
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    try:
It works fine in VS Code but when I try to run it after publishing, it fails with
2019-11-22T14:31:17.743 [Information] Executing 'Functions.godataexcelautomation' (Reason='This function was programmatically called via the host APIs.', Id=79cebf6c-b371-4a12-b623-16931abe7757)
2019-11-22T14:31:17.761 [Error] Executed 'Functions.godataexcelautomation' (Failed, Id=79cebf6c-b371-4a12-b623-16931abe7757)
Result: Failure
Exception: ModuleNotFoundError: No module named 'pyodbc'
Stack: File "/azure-functions-host/workers/python/3.6/LINUX/X64/azure_functions_worker/dispatcher.py", line 242, in _handle__function_load_request
func_request.metadata.entry_point)
File "/azure-functions-host/workers/python/3.6/LINUX/X64/azure_functions_worker/loader.py", line 66, in load_function
mod = importlib.import_module(fullmodname)
File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/site/wwwroot/godataexcelautomation/__init__.py", line 2, in <module>
import pyodbc
I appreciate any help you can give. It seems like I need to make pyodbc available to the Azure portal? In the .json file?
Thanks in advance!
I got the same error as you when I deployed the Python function from VS Code directly. Please check that you have added pyodbc to your requirements.txt, and try the command below to deploy your Python function; it solved my issue:
func azure functionapp publish <APP_NAME> --build remote
By the way, you should define ODBC Driver 17 for SQL Server as the ODBC driver in your Python function, for example:
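A hedged sketch of what that driver reference looks like in the function body; the server, database, and credential values are placeholders, and pyodbc must also be listed in requirements.txt so the remote build installs it:

import pyodbc

# Placeholder connection details -- replace with your own server/database/credentials.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydb;"
    "UID=myuser;"
    "PWD=mypassword"
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())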

ModuleNotFoundError when running imported Flask app

I have a python module with the following layout:
foo
| __init__.py
| __main__.py
| bar.py
__init__.py is empty.
Content of foo/bar.py:
from flask import Flask
app = Flask(__name__)
def baz(): pass
When running python3 -m foo, I get confusing results.
Contents of foo/__main__.py
# Results in a ModuleNotFoundError: No module named 'foo'
from foo.bar import app
app.run()
# Raises no error and correctly prints the type
from foo.bar import app
print(type(app))
# Also runs without an error
from foo.bar import baz
baz()
Why is it possible to import and execute a function from this module, while trying to do the same with a Flask app results in a ModuleNotFoundError? I just can't see any way this makes sense.
Edit:
The error is persistent even with this code:
from foo.bar import app
print(type(app))
app.run()
Output:
<class 'flask.app.Flask'>
* Serving Flask app "foo.bar" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
Traceback (most recent call last):
File "/home/user/projects/ftest/foo/__main__.py", line 1, in <module>
from foo.bar import app
ModuleNotFoundError: No module named 'foo'
So, obviously the module can be imported, because type(app) works just fine and Flask does start. It seems like Flask does a reload and is messing around with imports somehow.
Edit 2:
I turned debug mode off and it works just fine.
This error only occurs if you set export FLASK_DEBUG=True or explicitly enable debug via app.config["DEBUG"] = True
It turns out it's a bug in werkzeug.
The code works as expected if werkzeug's reloader is disabled.
How to reproduce the behaviour
Directory structure:
foo
| __init__.py
| __main__.py
Content of __init__.py:
from flask import Flask
app = Flask(__name__)
app.config["DEBUG"] = True
Content of __main__.py:
from foo import app
app.run()
If we run it:
$python3 -m foo
* Serving Flask app "foo" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
Traceback (most recent call last):
File "/home/user/projects/ftest/foo/__main__.py", line 1, in <module>
from foo import app
ModuleNotFoundError: No module named 'foo'
If we change __main__.py:
from foo import app
app.run(use_reloader=False)
Everything works just fine.
What's going on
The problem is in werkzeug._reloader.ReloaderLoop.restart_with_reloader. It calls a subprocess with the arguments provided by werkzeug._reloader._get_args_for_reloading but this function does not behave as expected when executing a package via the -m switch.
def _get_args_for_reloading():
    """Returns the executable. This contains a workaround for windows
    if the executable is incorrectly reported to not have the .exe
    extension which can cause bugs on reloading.
    """
    rv = [sys.executable]
    py_script = sys.argv[0]
    if os.name == 'nt' and not os.path.exists(py_script) and \
            os.path.exists(py_script + '.exe'):
        py_script += '.exe'
    if os.path.splitext(rv[0])[1] == '.exe' and os.path.splitext(py_script)[1] == '.exe':
        rv.pop(0)
    rv.append(py_script)
    rv.extend(sys.argv[1:])
    return rv
In our case it returns ['/usr/local/bin/python3.7', '/home/user/projects/ftest/foo/__main__.py']. This is because sys.argv[0] is set to the full path of the module file, but it should return ['/usr/local/bin/python3.7', '-m', 'foo'] (at least from my understanding it should, and it works this way).
I have no good idea of how to fix this behaviour, or whether it is something that needs to be fixed at all. It just seems weird to me that I'm the only one who has encountered this problem, since it doesn't seem like too much of a corner case to me.
Adding the following line before app.run() works around the werkzeug reloader bug:
os.environ['PYTHONPATH'] = os.getcwd()
Thanks to @bootc for the tip! https://github.com/pallets/flask/issues/1246
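Put together, a minimal sketch of foo/__main__.py with that workaround applied; it assumes the server is started from the project root, since os.getcwd() is what ends up on PYTHONPATH:

# foo/__main__.py
import os

# Let the reloader's child process locate the `foo` package again.
os.environ['PYTHONPATH'] = os.getcwd()

from foo import app

app.run()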
Have you tried from foo import app in your __main__.py file?

How not to call initialize at each request with Tornado

I want to set variables when starting my Tornado webserver, so I tried to override initialize on my RequestHandler class. But apparently, initialize is launched each time a request is made, according to the following code and its output:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def initialize(self):
        print("Launching initialization...")

    def get(self):
        print("Get: {}{}".format(self.request.host, self.request.uri))


app = tornado.web.Application([
    (r"/.*", MainHandler)
])


def runserver():
    import tornado.ioloop
    app.listen(8080)
    tornado.ioloop.IOLoop.instance().start()


if __name__ == "__main__":
    runserver()
stdout:
~ ➤ ./redirector.py
Launching initialization...
Get: 127.0.0.1:8080/
Launching initialization...
Get: 127.0.0.1:8080/favicon.ico
Launching initialization...
Get: 127.0.0.1:8080/favicon.ico
Launching initialization...
Get: 127.0.0.1:8080/
This behavior is completely contrary to what is written in the docs:
Hook for subclass initialization.
(Meaning it is called at the end of __init__)
So, does anybody know how to do what I want?
Thanks in advance.
It's not contrary to the doc; have a look at the Structure of a Tornado app section. A RequestHandler object is created for each request.
If you want code to be executed only once, when the app is started, subclass the Application class and override __init__, or just put it in your runserver function.
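For example, a minimal sketch of the Application-subclass approach; the shared_config attribute is just an illustrative name for whatever you want to set up once:

import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        # self.application is the single MyApplication instance created at start-up
        self.write(self.application.shared_config["greeting"])


class MyApplication(tornado.web.Application):
    def __init__(self):
        # Runs once when the server starts, not on every request
        self.shared_config = {"greeting": "hello"}
        super().__init__([
            (r"/.*", MainHandler)
        ])


if __name__ == "__main__":
    MyApplication().listen(8080)
    tornado.ioloop.IOLoop.instance().start()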
