Python Azure Function does not install 'extras' requirements - azure

I have an Azure Function that depends on the extras of a specific library; however, it seems that Azure Functions ignores the extras specified in requirements.txt.
I have specified this in requirements.txt:
functown>=2.0.0
functown[jwt]>=2.0.0
This should install additional dependencies (python-jose in this case); however, the extras seem to be ignored by Azure Functions, and the function fails with:
No module named 'jose'. Please check the requirements.txt file for the missing module.
I also tested this locally and can confirm that with a regular pip install -r requirements.txt the extras are picked up and python-jose is indeed installed (which I can verify with import jose).
Are there special settings to be set in Azure Functions, or is this a bug?
Update 1:
In particular, I want to install the dependencies of an extra of a Python library (defined here, with the requirements here), which works perfectly when setting up a local environment on my system, so I assume it is not a Python or requirements problem. However, it does not work when deploying to Azure Functions, which leads me to assume there is an inherent problem with Azure Functions picking up extras requirements.
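For reference, the local verification described above can be sketched as a small check (this snippet is an illustration, not part of the original post) that the module pulled in by the extra is actually importable in the active environment:

```python
import importlib.util

def extra_installed(module_name: str) -> bool:
    """Return True if the given module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# python-jose installs under the import name 'jose'
print("jose available:", extra_installed("jose"))
```

Running this inside the deployed function (or its build log) would show whether the extra's dependency made it into the remote environment at all.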

I have reproduced this on my end and got the results below.
Test Case 1:
With the functown Python package, I tested an Azure Functions Python 3.9 HTTP trigger with the code below:
__init__.py:
import logging
from logging import Logger

import azure.functions as func
from functown import ErrorHandler

@ErrorHandler(debug=True, enable_logger=True)
def main(req: func.HttpRequest, logger: Logger, **kwargs) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    a = 3
    b = 5
    if a > b:
        raise ValueError("something went wrong")
    else:
        print("a is less than b")
    return func.HttpResponse("Hello Pravallika, This HTTP triggered function executed successfully.", status_code=200)
requirements.txt:
azure-functions
functown
Test Case 2:
This test uses the python-jose and jose libraries in an Azure Functions Python 3.9 HTTP trigger:
__init__.py:
import logging
from logging import Logger

import azure.functions as func
from functown import ErrorHandler
from jose import jwt

@ErrorHandler(debug=True, enable_logger=True)
def main(req: func.HttpRequest, logger: Logger, **kwargs) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Sample code test with functown packages in an Azure Functions Python HTTP trigger
    a = 3
    b = 5
    if a > b:
        raise ValueError("something went wrong")
    else:
        print("a is less than b")

    # Sample code test with jose/python-jose packages in an Azure Functions Python HTTP trigger
    token = jwt.encode({'key': 'value'}, 'secret', algorithm='HS256')
    # e.g. 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJ2YWx1ZSJ9.FG-8UppwHaFp1LgRYQQeS6EDQF7_6-bMFegNucHjmWg'
    print(token)
    payload = jwt.decode(token, 'secret', algorithms=['HS256'])
    print(payload)  # {'key': 'value'}

    return func.HttpResponse("Hello Pravallika, This HTTP triggered function executed successfully.", status_code=200)
requirements.txt:
azure-functions
functown
jose
python-jose
Code samples taken from the reference docs.
For your error, I suggest you:
Check how you import modules such as jose and functown in your code. I have seen a similar issue in SO #61061435, where users provided code snippets showing how to import the jose packages; the snippets above show this in practice.
Make sure you have a virtual environment set up in your Azure Functions Python project for running the Python functions.
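If the remote build keeps ignoring the extra, a practical workaround (an assumption on my part, not something the answer above verified) is to pin the extra's dependencies explicitly, since functown[jwt] is just shorthand for extra packages:

```
# requirements.txt -- list the extra's dependency directly
functown>=2.0.0
python-jose  # the package that functown[jwt] would have pulled in
```

This makes the dependency visible to any build step that handles plain package names correctly but mishandles the bracketed extras syntax.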

Related

Error Launching Blob Trigger function in Azure Functions expected str, bytes or os.PathLike object, not PosixPath

My problem: when I execute a freshly uploaded Python function in an Azure Function App and launch it (no matter whether I use a blob trigger or an HTTP trigger), I always get the same error:
Exception while executing function: Functions.TestBlobTrigger Result: Failure
Exception: TypeError: expected str, bytes or os.PathLike object, not PosixPath
Stack: File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py", line 284, in _handle__function_load_request
func = loader.load_function(
File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 40, in call
return func(*args, **kwargs)
File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/loader.py", line 53, in load_function
register_function_dir(dir_path.parent)
File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/loader.py", line 26, in register_function_dir
_submodule_dirs.append(fspath(path))
Why is this happening? After the function deploys successfully, I upload a file to a blob to trigger it, but I always get the same error, caused by the pathlib library (https://pypi.org/project/pathlib/). I have written a very simple function that works in my local VS Code and just prints a message.
import logging
import configparser
import azure.functions as func
from azure.storage.blob import BlockBlobService
import os
import datetime
import io
import json
import calendar
import aanalytics2 as api2
import time
import pandas as pd
import csv
from io import StringIO

def main(myblob: func.InputStream):
    logging.info("Blob trigger function launched")
    blob_bytes = myblob.read()
    blobmessage = blob_bytes.decode()
    func1 = PythonAPP.callMain()
    func1.main(blobmessage)
The Pythonapp class is:
class PythonAPP:
    def __init__(self):
        logging.info('START extractor.')
        self.parameter = "product"

    def main(self, message1):
        var1 = "--"
        try:
            var1 = "---"
            logging.info('END: ->paramet ' + str(message1))
        except Exception as inst:
            logging.error("Error PythonAPP.main: " + str(inst))
        return var1
My requirements.txt file is:
azure-storage-blob==0.37.1
azure-functions-durable
azure-functions
pandas
xlrd
pysftp
openpyxl
configparser
PyJWT==1.7.1
pathlib
dicttoxml
requests
aanalytics2
I've created this simple function to check whether I can upload the simplest possible example to Azure Functions. Are there any dependencies I am forgetting?
Checking the status of the functions, I found this:
------------UPDATE1--------------------
The function is failing because of the pathlib import: the function's requirements pull in this library, and it fails on Azure Functions. See the requirements.txt file at the following link: https://github.com/pitchmuc/adobe_analytics_api_2.0/blob/master/requirements.txt
Can I exclude it somehow?
Well, I can't provide an answer for that, but I made a workaround. I created a copy of the library in a GitHub repository. In this copy I erased the references to pathlib in requirements.txt and setup.py, because this library causes the failure in Azure Function Apps. In the project's requirements file, reference that repository instead: take the requirements file I wrote above and change the aanalytics2 reference to:
git+https://github.com/theURLtotherepository.git@master#egg=theprojectname
Links:
I've checked a lot of examples on Google, but none of them helped me:
Azure function failing after successful deployment with OSError: [Errno 107]
https://github.com/Azure/azure-functions-host/issues/6835
https://learn.microsoft.com/en-us/answers/questions/39865/azure-functions-python-httptrigger-with-multiple-s.html
https://learn.microsoft.com/en-us/answers/questions/147627/azure-functions-modulenotfounderror-for-python-scr.html
Missing Dependencies on Python Azure Function --> no bundler flag or --build remote
https://github.com/OpenMined/PySyft/issues/3400
This is a bug in the Azure codebase itself, specifically within:
azure-functions-python-worker/azure_functions_worker/dispatcher.py
The problematic code in the dispatcher sets up the exposed functions with the metadata parameters found in function.json.
You will not encounter the issue if you're not using pathlib within your function app / web app code; if pathlib is available, the issue manifests.
Rather than simple os.path strings, pathlib.Path objects are exposed; deeper in the codebase there looks to be a conditional import and use of pathlib.
To resolve, simply remove pathlib from your requirements.txt and redeploy.
You'll need to refactor any of your own code that used pathlib to use the equivalent methods in the os module.
A ticket looks to have been opened to resolve this around the time of the OP's post, but it is not resolved in the current release.
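As a sketch of that refactor (assuming straightforward path handling; the specific paths here are made up for illustration), the common pathlib idioms map onto os / os.path equivalents like this:

```python
import os
import os.path

# pathlib: Path("data") / "logs" / "app.log"
log_path = os.path.join("data", "logs", "app.log")

# pathlib: path.parent
parent = os.path.dirname(log_path)

# pathlib: path.name and path.suffix
name = os.path.basename(log_path)
suffix = os.path.splitext(name)[1]

# pathlib: Path.mkdir(parents=True, exist_ok=True)
os.makedirs(parent, exist_ok=True)

print(parent, name, suffix)
```

The os.path functions operate on plain strings, so the worker's dispatcher never sees a pathlib.Path object.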

VSCode pytest test discovery failed?

For some reason, VSCode is not picking up my "test_*" file.
Here is the code:
import pytest
import requests
import responses

@responses.activate
def test_simple(api_key: str, url: str):
    """Test"""
    responses.add(responses.GET, url=url + api_key,
                  json={"error": "not found"}, status=404)
    resp = requests.get(url + api_key)
    assert resp.json() == {"error": "not found"}
I've tried using CTRL+SHIFT+P and configuring the tests, but this also fails; it says Test discovery failed and Test discovery error, please check the configuration settings for the tests.
In my settings, the only thing related to pytest is "python.testing.pytestEnabled": true.
What is going on here? My file starts with "test_", and so does the function within that file.
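One thing worth checking (an assumption, since the question doesn't show a conftest.py): test_simple takes api_key and url as parameters, so pytest treats them as fixtures, and if they are not defined anywhere, collection errors out and VSCode reports discovery failure. A minimal conftest.py sketch with hypothetical values:

```python
# conftest.py -- hypothetical fixtures for the test above
import pytest

@pytest.fixture
def api_key() -> str:
    return "dummy-key"

@pytest.fixture
def url() -> str:
    return "https://api.example.com/?key="
```

With these in place (or with the parameters removed from the test function), running pytest --collect-only from the terminal shows whether collection itself succeeds before involving VSCode.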

VSCode test explorer stops discovering tests when I add an import to python code file

This Python code file works perfectly. But when I add either of the commented imports, the VSCode test feature gives "No tests discovered, please check the configuration settings for the tests." There are no other errors.
# import boto3
# import pymysql
import decimal
import datetime

def increment(x):
    return x + 1

def decrement(x):
    return x - 1
What is it that I don't understand about imports and the test feature that explains why these would break the test explorer?
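A likely cause (an assumption, not stated in the question) is that boto3 and pymysql are not installed in the interpreter VSCode uses for test discovery, so collection fails with an ImportError that the test explorer swallows. A quick sketch to see which interpreter is active and which of these modules it can actually import:

```python
import importlib.util
import sys

print("interpreter:", sys.executable)
for mod in ("boto3", "pymysql", "decimal", "datetime"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'available' if found else 'MISSING'}")
```

If a module shows MISSING here, installing it into that interpreter (or pointing VSCode's Python: Select Interpreter at the right environment) should restore discovery.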

Python, Flask print to console and log file simultaneously

I'm using Python 3.7.3 with Flask 1.0.2.
When running my app.py file without the following imports:
import logging
logging.basicConfig(filename='api.log',level=logging.DEBUG)
Flask will display relevant debug information to console, such as POST/GET requests and which IP they came from.
As soon as DEBUG logging is enabled, I no longer receive this output. I have tried running my application in debug mode:
app.run(host='0.0.0.0', port=80, debug=True)
But this produces the same results. Is there a way to have both console output and Python logging enabled? This might sound like a silly request, but I would like to use the console for demonstration purposes while having the log file available for troubleshooting.
Found a solution:
import logging
from flask import Flask

app = Flask(__name__)

logger = logging.getLogger('werkzeug')       # grabs underlying WSGI logger
handler = logging.FileHandler('test.log')    # creates handler for the log file
logger.addHandler(handler)                   # adds handler to the werkzeug WSGI logger

@app.route("/")
def index():
    logger.info("Here's some info")
    return "Hello World"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
Other examples:
# logs to console and log file
logger.info("Some text for console and log file")

# prints the exception and logs it to file
try:
    ...  # code that may raise, e.g. parsing a POST body
except Exception as ue:
    logger.error("Unexpected Error: malformed JSON in POST request, check key/value pair at: ")
    logger.error(ue)
Source:
https://docstrings.wordpress.com/2014/04/19/flask-access-log-write-requests-to-file/
If link is broken:
You may be confused because adding a handler to Flask’s app.logger doesn’t catch the output you see in the console like:
127.0.0.1 - - [19/Apr/2014 18:51:26] "GET / HTTP/1.1" 200 -
This is because app.logger is for Flask and that output comes from the underlying WSGI module, Werkzeug.
To access Werkzeug’s logger we must call logging.getLogger() and give it the name Werkzeug uses. This allows us to log requests to an access log using the following:
logger = logging.getLogger('werkzeug')
handler = logging.FileHandler('access.log')
logger.addHandler(handler)
# Also add the handler to Flask's logger for cases
# where Werkzeug isn't used as the underlying WSGI server.
# This wasn't required in my case, but can be uncommented as needed
# app.logger.addHandler(handler)
You can of course add your own formatting and other handlers.
Flask has a built-in logger that can be accessed via app.logger. It is just an instance of the standard library logging.Logger class, which means you can use it as you normally would the basic logger. The documentation for it is here.
To get the built-in logger to write to a file, you have to add a logging.FileHandler to it. Setting debug=True in app.run starts the development server but does not change the log level to debug, so you need to set the log level to logging.DEBUG manually.
Example:
import logging
from flask import Flask

app = Flask(__name__)

handler = logging.FileHandler("test.log")  # Create the file handler
app.logger.addHandler(handler)             # Add it to the built-in logger
app.logger.setLevel(logging.DEBUG)         # Set the log level to debug

@app.route("/")
def index():
    app.logger.error("Something has gone very wrong")
    app.logger.warning("You've been warned")
    app.logger.info("Here's some info")
    app.logger.debug("Meaningless debug information")
    return "Hello World"

app.run(host="127.0.0.1", port=8080)
If you then look at the log file, it should contain all four lines, and the console will show them as well.

aiohttp 1->2 with gunicorn unable to use timeout context manager

My small aiohttp 1 app was working fine until I started preparing to deploy to Heroku. Heroku forces me to use gunicorn. It wasn't working with aiohttp 1 (some strange errors), so after a small migration to aiohttp 2 I started the app with:
gunicorn app:application --worker-class aiohttp.GunicornUVLoopWebWorker
This works until I request a view that calls a class's async method containing:
with async_timeout.timeout(self.timeout):
    res = await self.method()
which raises RuntimeError: Timeout context manager should be used inside a task.
The only thing that changed since the last successful version is that I'm now using gunicorn. I'm on Python 3.5.2, uvloop 0.8.0, and gunicorn 19.7.1.
What is wrong and how can I fix this?
Update:
The problem is that I have a LOOP variable that stores a newly created event loop. It is defined in settings.py, so the code below is evaluated before the aiohttp.web.Application is created:
LOOP = uvloop.new_event_loop()
asyncio.set_event_loop(LOOP)
and this causes the error when using the async_timeout.timeout context manager.
Code to reproduce the error:
# test.py
import asyncio

import async_timeout
import uvloop
from aiohttp import web

asyncio.set_event_loop(uvloop.new_event_loop())

class A:
    async def am(self):
        with async_timeout.timeout(timeout=1):
            await asyncio.sleep(0.5)

class B:
    a = A()

    async def bm(self):
        await self.a.am()

b = B()

async def hello(request):
    await b.bm()
    return web.Response(text="Hello, world")

app = web.Application()
app.router.add_get('/', hello)

if __name__ == '__main__':
    web.run_app(app, port=5000, host='localhost')
Then just run gunicorn test:app --worker-class aiohttp.GunicornUVLoopWebWorker (the same error occurs when using GunicornWebWorker with a uvloop event loop, or any other combination).
Solution:
I fixed this by calling asyncio.get_event_loop whenever I need the event loop, instead of storing it at import time.
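The idea can be sketched with the standard library alone (asyncio.wait_for stands in for async_timeout.timeout here, and asyncio.run for the gunicorn worker; the original fix called asyncio.get_event_loop at the point of use). The key point is that the timeout is created inside a running task, so it binds to whatever loop is actually running rather than a cached module-level LOOP:

```python
import asyncio

class A:
    async def am(self):
        # asyncio.wait_for binds to the loop of the currently running task,
        # so no module-level LOOP variable is needed
        return await asyncio.wait_for(asyncio.sleep(0.1, result="done"), timeout=1)

async def handler():
    return await A().am()

print(asyncio.run(handler()))  # done
```

Because nothing touches the event loop at import time, the same code works no matter which loop the worker class installs.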
