Python 3.7 - Logger not showing any log on GCP LOG Viewer - python-3.x

I've made a decorator named "logger" and I'm using it to show logs on the console.
In Python 2.7 everything works fine, but in my Python 3.7 code it doesn't print any log in the GCP Log Viewer. (My code is deployed on the GCP App Engine standard environment.)
I've tried setting the root logger level, but it doesn't work:
logging.getLogger().setLevel(logging.INFO)
Here's my simple logger decorator
def logger(func):
    def wrapper(*args, **kwargs):
        logging.info('INSIDE %s PARAMETERS: %s', func.__name__, args)
        res = func(*args, **kwargs)
        logging.info('OUTSIDE %s OUTPUT: %s', func.__name__, res)
        return res
    return wrapper
Even a plain logging.info statement is not working.
I've tried every logging level, but nothing gets printed in the GCP Log Viewer:
logging.info("Text info")
logging.debug("Text debug")
logging.warning("Text warning")
logging.error("Text error")
logging.critical("Text critical")
Here is the log viewer

I deployed the following app to App Engine:
In app.yaml:
runtime: python37
In requirements.txt:
Flask==1.0.2
In main.py:
from flask import Flask
import logging

logging.getLogger().setLevel(logging.INFO)

app = Flask(__name__)

@app.route('/')
def hello():
    logging.info("Text info")
    logging.debug("Text debug")
    logging.warning("Text warning")
    logging.error("Text error")
    logging.critical("Text critical")
    return 'Hello World!'
When I look at the Stackdriver logs for the application after a request, I see the INFO, WARNING, ERROR and CRITICAL log messages as expected (DEBUG is filtered out by the INFO level).
Is it possible you're looking somewhere else for the logs? From the App Engine Dashboard, you can go to
Services > [your service] > Diagnose > Tools > Logs
to find the logs for your service.
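If messages still don't show up, it's worth confirming locally that the root logger actually emits INFO records once a level and a handler are set; a minimal, self-contained sketch (the StringIO stream is just illustrative):

```python
import io
import logging

# Attach an explicit handler so records have somewhere to go, then lower
# the root logger's threshold to INFO -- setLevel alone is not enough if
# no handler is configured.
stream = io.StringIO()
root = logging.getLogger()
root.addHandler(logging.StreamHandler(stream))
root.setLevel(logging.INFO)

logging.info("Text info")
logging.debug("Text debug")  # filtered out: DEBUG < INFO

print(stream.getvalue().strip())  # Text info
```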


Python Azure Function does not install 'extras' requirements

I have an Azure Function that depends on extras of a specific library; however, it seems that Functions is ignoring the extras specified in the requirements.txt.
I have specified this in the requirements.txt:
functown>=2.0.0
functown[jwt]>=2.0.0
This should install additional dependencies (python-jose in this case), but the extras seem to be ignored by Azure Functions:
No module named 'jose'. Please check the requirements.txt file for the missing module.
I also tested this locally and can confirm that with a regular pip install -r requirements.txt the extras are picked up and python-jose is indeed installed (which I can verify with import jose).
Are there special settings to be set in Azure Functions, or is this a bug?
Update 1:
In particular, I want to install the dependencies of an extra of a Python library (defined here and reqs here), which works perfectly when setting up a local environment on my system, so I assume it is not a Python or requirements problem. However, it does not work when deploying to Azure Functions, which leads me to assume there is an inherent problem with Azure Functions picking up extras requirements.
I have reproduced this on my end and got the results below.
Test Case 1:
With the functown Python Package, I have tested in Azure Functions Python Version 3.9 Http Trigger with the below Code:
__init__.py:
import logging
from logging import Logger
from functown import ErrorHandler
import azure.functions as func

@ErrorHandler(debug=True, enable_logger=True)
def main(req: func.HttpRequest, logger: Logger, **kwargs) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    a = 3
    b = 5
    if a > b:
        raise ValueError("something went wrong")
    else:
        print("a is less than b")
    return func.HttpResponse("Hello Pravallika, This HTTP triggered function executed successfully.", status_code=200)
requirements.txt:
azure-functions
functown
Test Case 2:
This is with the Python-Jose and Jose Libraries in Azure Functions Python v3.9 Http Trigger:
__init__.py:
import logging
from logging import Logger
from functown import ErrorHandler
import azure.functions as func
from jose import jwt

@ErrorHandler(debug=True, enable_logger=True)
def main(req: func.HttpRequest, logger: Logger, **kwargs) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    # Sample code test with functown packages in an Azure Functions Python HTTP trigger
    a = 3
    b = 5
    if a > b:
        raise ValueError("something went wrong")
    else:
        print("a is less than b")
    # Sample code test with jose / python-jose packages in an Azure Functions Python HTTP trigger
    token = jwt.encode({'key': 'value'}, 'secret', algorithm='HS256')
    # u'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJ2YWx1ZSJ9.FG-8UppwHaFp1LgRYQQeS6EDQF7_6-bMFegNucHjmWg'
    print(token)
    jwt.decode(token, 'secret', algorithms=['HS256'])  # {u'key': u'value'}
    return func.HttpResponse("Hello Pravallika, This HTTP triggered function executed successfully.", status_code=200)
requirements.txt:
azure-functions
functown
jose
Python-jose
Code samples were taken from the reference docs.
For your error, I suggest you:
Check how you imported modules such as jose and functown in your code. I have seen a similar issue in SO #61061435, where users gave code snippets showing how to import the jose packages; the snippets above show this in practice.
Make sure you have a virtual environment set up in your Azure Functions Python project for running the Python functions.
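As a quick sanity check from inside the deployed function, one can probe whether an extra's dependency actually landed in the environment (the helper name below is illustrative, not part of any library):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# e.g. log these at startup to see whether the extra was installed
print(is_installed("json"))  # True: the stdlib is always importable
print(is_installed("jose"))  # True only if python-jose was actually installed
```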

VSCode pytest test discovery failed?

For some reason, VSCode is not picking up my "test_*" file.
Here is the code:
import pytest
import requests
import responses

@responses.activate
def test_simple(api_key: str, url: str):
    """Test"""
    responses.add(responses.GET, url=url + api_key,
                  json={"error": "not found"}, status=404)
    resp = requests.get(url + api_key)
    assert resp.json() == {"error": "not found"}
I've tried using CTRL+SHIFT+P and configuring the tests, but this also fails; it says Test discovery failed and Test discovery error, please check the configuration settings for the tests.
In my settings, the only thing related to pytest is "python.testing.pytestEnabled": true.
What is going on here? My file starts with "test_" and so does the function within it.
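Worth noting: pytest treats the test's api_key and url parameters as fixtures, so something has to define them or collection errors out. A hypothetical conftest.py sketch (all values are placeholders, not from the question):

```python
# conftest.py -- hypothetical fixtures matching the test's parameters
import pytest

@pytest.fixture
def api_key():
    return "dummy-key"  # placeholder value

@pytest.fixture
def url():
    return "https://api.example.com/?key="  # placeholder value
```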

Python, Flask print to console and log file simultaneously

I'm using python 3.7.3, with flask version 1.0.2.
When running my app.py file without the following imports:
import logging
logging.basicConfig(filename='api.log',level=logging.DEBUG)
Flask will display relevant debug information to console, such as POST/GET requests and which IP they came from.
As soon as DEBUG logging is enabled, I no longer receive this output. I have tried running my application in debug mode:
app.run(host='0.0.0.0', port=80, debug=True)
But this produces the same results. Is there a way to have both console output, and python logging enabled? This might sound like a silly request, but I would like to use the console for demonstration purposes, while having the log file present for troubleshooting.
Found a solution:
import logging
from flask import Flask

app = Flask(__name__)

logger = logging.getLogger('werkzeug')     # grabs underlying WSGI logger
handler = logging.FileHandler('test.log')  # creates handler for the log file
logger.addHandler(handler)                 # adds handler to the werkzeug WSGI logger

@app.route("/")
def index():
    logger.info("Here's some info")
    return "Hello World"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
Other examples:
# logs to console, and log file
logger.info("Some text for console and log file")

# prints exception, and logs to file
try:
    ...  # some request handling that may raise
except Exception as ue:
    logger.error("Unexpected Error: malformed JSON in POST request, check key/value pair at: ")
    logger.error(ue)
Source:
https://docstrings.wordpress.com/2014/04/19/flask-access-log-write-requests-to-file/
If link is broken:
You may be confused because adding a handler to Flask’s app.logger doesn’t catch the output you see in the console like:
127.0.0.1 - - [19/Apr/2014 18:51:26] "GET / HTTP/1.1" 200 -
This is because app.logger is for Flask and that output comes from the underlying WSGI module, Werkzeug.
To access Werkzeug’s logger we must call logging.getLogger() and give it the name Werkzeug uses. This allows us to log requests to an access log using the following:
logger = logging.getLogger('werkzeug')
handler = logging.FileHandler('access.log')
logger.addHandler(handler)
# Also add the handler to Flask's logger for cases
# where Werkzeug isn't used as the underlying WSGI server.
# This wasn't required in my case, but can be uncommented as needed
# app.logger.addHandler(handler)
You can of course add your own formatting and other handlers.
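As a sketch of that last point, a Formatter can be attached to any handler before the handler is added (the format string and logger name here are illustrative; a StringIO stands in for the log file so the example is self-contained):

```python
import io
import logging

logger = logging.getLogger("access-demo")  # stand-in for the 'werkzeug' logger
logger.setLevel(logging.INFO)

buffer = io.StringIO()                     # stands in for access.log
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

logger.info("GET / 200")
print(buffer.getvalue().strip())  # INFO access-demo GET / 200
```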
Flask has a built-in logger that can be accessed using app.logger. It is just an instance of the standard library logging.Logger class which means that you are able to use it as you normally would the basic logger. The documentation for it is here.
To get the built-in logger to write to a file, you have to add a logging.FileHandler to the logger. Setting debug=True in app.run, starts the development server, but does not change the log level to debug. As such, you'll need to set the log level to logging.DEBUG manually.
Example:
import logging
from flask import Flask

app = Flask(__name__)

handler = logging.FileHandler("test.log")  # Create the file handler
app.logger.addHandler(handler)             # Add it to the built-in logger
app.logger.setLevel(logging.DEBUG)         # Set the log level to debug

@app.route("/")
def index():
    app.logger.error("Something has gone very wrong")
    app.logger.warning("You've been warned")
    app.logger.info("Here's some info")
    app.logger.debug("Meaningless debug information")
    return "Hello World"

app.run(host="127.0.0.1", port=8080)
If you then look at the log file, it should have all 4 lines printed out in it and the console will also have the lines.

Spin up a local flask server for testing with pytest

I have the following problem.
I'd like to run tests against the local Flask server before deploying to production. I use pytest for that. My conftest.py looks like this for the moment:
import pytest
from toolbox import Toolbox
import subprocess

def pytest_addoption(parser):
    """Add option to pass --testenv=local to the pytest CLI command"""
    parser.addoption(
        "--testenv", action="store", default="exodemo", help="my option: type1 or type2"
    )

@pytest.fixture(scope="module")
def testenv(request):
    return request.config.getoption("--testenv")

@pytest.fixture(scope="module")
def testurl(testenv):
    if testenv == 'local':
        return 'http://localhost:5000/'
    else:
        return 'https://api.domain.com/'
This allows me to test the production API by typing pytest, and a local Flask server by typing pytest --testenv=local.
This code works flawlessly.
My problem is that I have to manually start the local Flask server from the terminal each time I want to test locally, like this:
source ../pypyenv/bin/activate
python ../app.py
Now I wanted to add a fixture that starts a server process in the background at the beginning of the tests and shuts it down after testing has finished. After a lot of research and testing, I still cannot get it to work. These are the lines I added to conftest.py:
@pytest.fixture(scope="module", autouse=True)
def spinup(testenv):
    if testenv == 'local':
        cmd = ['../pypyenv/bin/python', '../app.py']
        p = subprocess.Popen(cmd, shell=True)
        yield
        p.terminate()
    else:
        pass
The errors I get are from the requests package, saying that the connection was refused:
E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: /login (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
/usr/lib/python3/dist-packages/requests/adapters.py:437: ConnectionError
This tells me the Flask server behind app.py is not online. Any suggestions? I am open to more elegant alternatives.
For local testing the Flask test_client is a more elegant solution. See the docs on Testing. You can create a fixture for the test_client and create test requests with that:
@pytest.fixture
def app():
    app = create_app()
    yield app
    # teardown here

@pytest.fixture
def client(app):
    return app.test_client()
And use it like this:
def test_can_login(client):
    response = client.post('/login', data={'username': 'username', 'password': 'password'})
    assert response.status_code == 200
If the only problem are the manual steps, maybe consider a bash script that does your manual setup for you and after that executes pytest.
I am using the following for this purpose, so that the testing configuration is also preserved in the test server:
import os
import tempfile

import pytest

@pytest.fixture(scope="session")
def app():
    db_fd, db_path = tempfile.mkstemp()
    app = create_app({
        'TESTING': True,
        'DATABASE': db_path
    })
    yield app
    os.close(db_fd)
    os.unlink(db_path)
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@pytest.fixture
def server(app):
    @app.route('/shutdown', methods=('POST',))
    def shutdown():
        shutdown_server()
        return 'Shutting down server ...'

    import threading
    t = threading.Thread(target=app.run)
    t.start()
    yield
    import requests
    requests.post('http://localhost:5000/shutdown')
References
https://flask.palletsprojects.com/en/1.1.x/tutorial/tests/
How do I terminate a flask app that's running as a service?
How to stop flask application without using ctrl-c
With a bash script (thanks @ekuusela) I finally succeeded in doing what I wanted.
I added a fixture that calls the bash script spinserver.sh in a new terminal window. This works in Ubuntu; the command differs between environments (see Execute terminal command from python in new terminal window? for other environments).
@pytest.fixture(scope="session", autouse=True)
def client(testenv):
    if testenv != 'local':
        pass
    else:
        p = subprocess.Popen(['gnome-terminal', '-x', './spinserver.sh'])
        time.sleep(3)
    yield
Here is the very simple bash script:
#!/bin/bash
cd ..
source pypyenv/bin/activate
python app.py
The sleep command is necessary because the server takes some time to initialize.
Don't forget to make your bash script executable (chmod u+x spinserver.sh).
I tried to do a teardown after yield, but p.kill does not really close the window. This is acceptable for me, as it does not matter if I have to manually close a terminal window, and I can even see Flask debugging output if necessary.
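A possible refinement of the fixed time.sleep(3): poll until the server's port actually accepts connections. The function name and timeout values below are illustrative, not from the original answer:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 10.0) -> bool:
    """Return True once host:port accepts TCP connections, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the server is up; close immediately.
            with socket.create_connection((host, port), timeout=0.5):
                return True
        except OSError:
            time.sleep(0.1)  # not up yet; retry shortly
    return False

# in the fixture, time.sleep(3) could become:
# assert wait_for_port("localhost", 5000), "server never came up"
```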

aiohttp 1->2 with gunicorn unable to use timeout context manager

My small aiohttp 1 app was working fine until I started preparing to deploy to Heroku servers. Heroku forces me to use gunicorn. It wasn't working with aiohttp 1 (some strange errors), and after a small migration to aiohttp 2 I started the app with the command
gunicorn app:application --worker-class aiohttp.GunicornUVLoopWebWorker
and it works, until I request a view that calls a class's async method containing
with async_timeout.timeout(self.timeout):
    res = await self.method()
which raises a RuntimeError: Timeout context manager should be used inside a task error.
The only thing that changed since the last successful version is the fact that I'm using gunicorn. I'm using Python 3.5.2, uvloop 0.8.0 and gunicorn 19.7.1.
What is wrong and how can I fix this?
UPD
The problem is that I have a LOOP variable that stores a newly created event loop. This variable is defined in the settings.py file, so the code below gets evaluated before the aiohttp.web.Application is created:
LOOP = uvloop.new_event_loop()
asyncio.set_event_loop(LOOP)
and this causes the error when using the async_timeout.timeout context manager.
Code to reproduce the error:
# test.py
from aiohttp import web
import uvloop
import asyncio
import async_timeout

asyncio.set_event_loop(uvloop.new_event_loop())

class A:
    async def am(self):
        with async_timeout.timeout(timeout=1):
            await asyncio.sleep(0.5)

class B:
    a = A()

    async def bm(self):
        await self.a.am()

b = B()

async def hello(request):
    await b.bm()
    return web.Response(text="Hello, world")

app = web.Application()
app.router.add_get('/', hello)

if __name__ == '__main__':
    web.run_app(app, port=5000, host='localhost')
Then just run gunicorn test:app --worker-class aiohttp.GunicornUVLoopWebWorker (the same error occurs when using GunicornWebWorker with a uvloop event loop, or any other combination).
Solution:
I fixed this by calling asyncio.get_event_loop() whenever I need to use the event loop, instead of creating and storing one at import time.
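A minimal sketch of that fix, using the stdlib asyncio.wait_for as a stand-in for async_timeout.timeout (asyncio.run is used here for brevity; it requires Python 3.7+, while the original setup was 3.5):

```python
import asyncio

async def method_with_timeout():
    # The timeout machinery binds to the loop of the *running* task, so no
    # event loop is created or stored at module import time.
    return await asyncio.wait_for(asyncio.sleep(0.1, result="done"), timeout=1)

result = asyncio.run(method_with_timeout())
print(result)  # done
```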
