I am starting gunicorn with the --paster option for running Pyramid.
gunicorn -w 1 --paster development.ini
gunicorn's own messages show up fine on the console, for example:
2014-02-20 22:38:50 [44201] [INFO] Starting gunicorn 18.0
2014-02-20 22:38:50 [44201] [INFO] Listening at: http://0.0.0.0:6543 (44201)
2014-02-20 22:38:50 [44201] [INFO] Using worker: sync
However the log messages in my Pyramid app are not showing up.
If I use pserve development.ini, which uses waitress as the WSGI server, the log messages show up on the console just fine.
My development.ini includes a pretty vanilla logging configuration section.
[loggers]
keys = root, apipython

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = INFO
handlers = console

[logger_apipython]
level = DEBUG
handlers =
qualname = apipython

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic

[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
I am at a loss as to why the logs are not showing up when I use gunicorn.
Do not use pserve with gunicorn; it is deprecated and will most likely be removed in one of the next versions.
Gunicorn has a "logconfig" setting; just set it to your config via a command-line argument:
gunicorn -w 1 --paster development.ini --log-config development.ini
or in the same config:
[server:main]
use = egg:gunicorn#main
logconfig = %(here)s/development.ini
It happens because the "pserve" command not only starts the server and loads the application, it also sets up logging, while "gunicorn --paster" just loads the application. To fix it, you can explicitly set up logging in your application:
from pyramid.config import Configurator
from pyramid.paster import setup_logging

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application. """
    setup_logging(global_config['__file__'])  # reads the logging sections of the ini file
    config = Configurator(settings=settings)
    # Configure application
    return config.make_wsgi_app()
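Under the hood, setup_logging just hands the ini file to logging.config.fileConfig, so the [loggers]/[handlers]/[formatters] sections from the question are exactly what it consumes. For completeness, a sketch of how the app factory is usually wired up in the same file (the egg name here is hypothetical):

[app:main]
use = egg:myproject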
Or, as you pointed out in a comment, change the server in the config file and use the "pserve" command:
[server:main]
use = egg:gunicorn#main
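With that server section in place, a plain pserve invocation sets up logging from the ini and then hands the application to gunicorn:

pserve development.ini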
Related
I have written code that adds logs using the logging module in Python. When I run the code through SonarQube, it flags the following hotspot:
Make sure that this logger's configuration is safe.
Python code:
from logging.config import fileConfig
import logging

from alembic import context  # in Alembic's env.py the context object comes from here

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

logger = logging.getLogger("alembic.env")
class DefaultConfig:
    DEVELOPMENT = False
    DEBUG = False
    TESTING = False
    LOGGING_LEVEL = "DEBUG"
    CSRF_ENABLED = True
Please help me resolve this hotspot. And one more question: is it mandatory to look into low-priority hotspots?
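For what it's worth, this message is SonarQube's "Configuring loggers is security-sensitive" hotspot: it asks you to review that the configuration cannot be influenced by untrusted input, not necessarily to change the code, and low-priority hotspots likewise only need that review. If you want the configuration to be fully explicit and easy to audit, one option is a hard-coded dictConfig; this is a minimal sketch (not a drop-in replacement for the Alembic env.py above, where fileConfig deliberately reads alembic.ini):

import logging.config

# Explicit, reviewable logging configuration; adjust names and levels as needed
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "generic": {"format": "%(asctime)s %(levelname)s [%(name)s] %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "generic",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
    "loggers": {
        "alembic.env": {"level": "DEBUG"},
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger("alembic.env")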
I'm using Python 3.7.3 with Flask version 1.0.2.
When running my app.py file without the following imports:
import logging
logging.basicConfig(filename='api.log',level=logging.DEBUG)
Flask will display relevant debug information in the console, such as POST/GET requests and which IP they came from.
As soon as DEBUG logging is enabled, I no longer receive this output. I have tried running my application in debug mode:
app.run(host='0.0.0.0', port=80, debug=True)
But this produces the same results. Is there a way to have both console output, and python logging enabled? This might sound like a silly request, but I would like to use the console for demonstration purposes, while having the log file present for troubleshooting.
Found a solution:
import logging
from flask import Flask
app = Flask(__name__)
logger = logging.getLogger('werkzeug') # grabs underlying WSGI logger
handler = logging.FileHandler('test.log') # creates handler for the log file
logger.addHandler(handler) # adds handler to the werkzeug WSGI logger
@app.route("/")
def index():
    logger.info("Here's some info")
    return "Hello World"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
Other Examples:
# logs to console, and log file
logger.info("Some text for console and log file")

# prints exception, and logs to file
try:
    result = handle_post_request()  # hypothetical operation that may raise
except Exception as ue:
    logger.error("Unexpected Error: malformed JSON in POST request, check key/value pair at: ")
    logger.error(ue)
Source:
https://docstrings.wordpress.com/2014/04/19/flask-access-log-write-requests-to-file/
If the link is broken:
You may be confused because adding a handler to Flask’s app.logger doesn’t catch the output you see in the console like:
127.0.0.1 - - [19/Apr/2014 18:51:26] "GET / HTTP/1.1" 200 -
This is because app.logger is for Flask and that output comes from the underlying WSGI module, Werkzeug.
To access Werkzeug’s logger we must call logging.getLogger() and give it the name Werkzeug uses. This allows us to log requests to an access log using the following:
logger = logging.getLogger('werkzeug')
handler = logging.FileHandler('access.log')
logger.addHandler(handler)
# Also add the handler to Flask's logger for cases
# where Werkzeug isn't used as the underlying WSGI server.
# This wasn't required in my case, but can be uncommented as needed
# app.logger.addHandler(handler)
You can of course add your own formatting and other handlers.
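For instance, a formatter can be attached so each access-log line carries a timestamp; a small sketch (the format string is arbitrary):

import logging

logger = logging.getLogger('werkzeug')
handler = logging.FileHandler('access.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))  # timestamp each request line
logger.addHandler(handler)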
Flask has a built-in logger that can be accessed using app.logger. It is just an instance of the standard library logging.Logger class, which means that you can use it as you normally would the basic logger. Its documentation is here.
To get the built-in logger to write to a file, you have to add a logging.FileHandler to the logger. Setting debug=True in app.run starts the development server, but does not change the log level to debug. As such, you'll need to set the log level to logging.DEBUG manually.
Example:
import logging
from flask import Flask
app = Flask(__name__)
handler = logging.FileHandler("test.log") # Create the file logger
app.logger.addHandler(handler) # Add it to the built-in logger
app.logger.setLevel(logging.DEBUG) # Set the log level to debug
@app.route("/")
def index():
    app.logger.error("Something has gone very wrong")
    app.logger.warning("You've been warned")
    app.logger.info("Here's some info")
    app.logger.debug("Meaningless debug information")
    return "Hello World"

app.run(host="127.0.0.1", port=8080)
If you then look at the log file, it should contain all four lines, and the console will show them as well.
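If you also want the application's own records mirrored to the console explicitly, a StreamHandler can be added next to the FileHandler in the example above; a minimal sketch (depending on Flask's default handler you may see some lines twice):

console_handler = logging.StreamHandler()  # writes to stderr
app.logger.addHandler(console_handler)     # the same records now go to both test.log and the console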
I'd like to run a simple test (run a task) first via RabbitMQ and, once this is set up correctly, encapsulate it in Docker and run it from there.
My structure looks like so:
- rabbitmq_docker
  - test_celery
    - __init__.py
    - celeryapp.py
    - celeryconfig.py
    - run_tasks.py
    - tasks.py
  - docker-compose.yml
  - dockerfile
  - requirements.txt
celeryconfig.py
## List of modules to import when celery starts
CELERY_IMPORTS = ['test_celery.tasks',] # Required to import module containing tasks
## Message Broker (RabbitMQ) settings
CELERY_BROKER_URL = "amqp://guest@localhost//"
CELERY_BROKER_PORT = 5672
CELERY_RESULT_BACKEND = 'rpc://'
celeryapp.py
from celery import Celery
app = Celery('test_celery')
app.config_from_object('test_celery.celeryconfig', namespace='CELERY')
__init__.py
from .celeryapp import app as celery_app
run_tasks.py
from tasks import reverse
from celery.utils.log import get_task_logger
LOGGER = get_task_logger(__name__)
if __name__ == '__main__':
    async_result = reverse.delay("rabbitmq")
    LOGGER.info(async_result.get())
tasks.py
from test_celery.celeryapp import app
@app.task(name='tasks.reverse')
def reverse(string):
    return string[::-1]
I run celery -A test_celery worker --loglevel=info from the rabbitmq_docker directory. Then, in a separate window, I trigger reverse.delay("rabbitmq") in the Python console after importing the required module; this works. But when I try to trigger the reverse function via run_tasks.py, i.e. python test_celery/run_tasks.py, I get:
Traceback (most recent call last):
  File "test_celery/run_tasks.py", line 1, in <module>
    from tasks import reverse
  File "/Users/my_mbp/Software/rabbitmq_docker/test_celery/tasks.py", line 1, in <module>
    from test_celery.celeryapp import app
ModuleNotFoundError: No module named 'test_celery'
What I don't get is why this traceback is not thrown when the function is called directly from the Python console. Could anyone help me out here? I'd eventually like to start up Docker and just run the tests automatically (without going into the Python console).
The problem is simply that your module is not on the Python path.
These should help:
Specify PYTHONPATH to point to the directory containing your test_celery package.
Always run your Python code from the directory where your test_celery package is located.
Or alternatively reorganise your imports, as sketched below...
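For instance, a sketch of that last option: run_tasks.py can use the package-absolute import and be started as a module, which puts the rabbitmq_docker directory on sys.path just like the interactive console session did:

# run_tasks.py
from test_celery.tasks import reverse  # absolute import instead of "from tasks import reverse"
from celery.utils.log import get_task_logger

LOGGER = get_task_logger(__name__)

if __name__ == '__main__':
    async_result = reverse.delay("rabbitmq")
    LOGGER.info(async_result.get())

Then run it from the rabbitmq_docker directory as python -m test_celery.run_tasks.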
I am running a script on an EC2 instance as a job (through Task Scheduler) which creates its own log file. On my local machine it runs perfectly fine and creates the file, but on EC2 I am not able to see the file at all.
Here is the sample code:
import logging
import logging.handlers

def setup_logging(logger, logfile):
    logger.setLevel(logging.INFO)
    handler = logging.handlers.RotatingFileHandler(
        logfile, maxBytes=(1048576 * 5), backupCount=7)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)

logfile = 'one_time_loader'
logger = logging.getLogger()
setup_logging(logger, logfile)

for i in range(0, 1000):
    logger.info(i)
Kindly help me to resolve it.
You'll need to import Boto3 from the AWS SDK (https://aws.amazon.com/sdk-for-python/). Since you're running it on an EC2 instance, you'll need to make calls against the Resource APIs. See the docs for more information.
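For reference, a minimal sketch of a call against the EC2 resource API (this assumes boto3 is installed and the instance has credentials and a region configured; it is illustrative only, not specific to the logging problem above):

import boto3

ec2 = boto3.resource('ec2')  # credentials/region come from the instance environment

# List instances as a smoke test of the resource API
for instance in ec2.instances.all():
    print(instance.id, instance.state['Name'])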
I use Scrapy 1.1.0, and I have 5 spiders in the "spiders" folder.
In every spider I try to use the Python 3 logging module, and the code structure looks like this:
import logging
import scrapy
# other imports

class ExampleSpider(scrapy.Spider):
    name = 'special'

    def __init__(self):
        # other initializations

        # set up logging
        self.log = logging.getLogger('special')
        self.log.setLevel(logging.DEBUG)
        logFormatter = logging.Formatter('%(asctime)s %(levelname)s: %(message)s')

        # file handler
        fileHandler = logging.FileHandler(LOG_PATH)  # LOG_PATH is defined elsewhere
        fileHandler.setLevel(logging.DEBUG)
        fileHandler.setFormatter(logFormatter)
        self.log.addHandler(fileHandler)

    # other functions
Every spider has the same structure. When I run these spiders and check the log files, they do exist, but their size is always 0 bytes.
The other question is that when I run one spider, it always generates two or more log files: I run one spider, and it generates a.log and b.log.
Any answers would be appreciated.
You can set the log file via the LOG_FILE setting in settings.py or via the command-line argument --logfile FILE, e.g. scrapy crawl myspider --logfile myspider.log, as described in the official docs.
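For example, a per-spider variant using custom_settings, which avoids attaching hand-rolled handlers in __init__ (the file name here is arbitrary):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'special'
    custom_settings = {
        'LOG_FILE': 'special.log',  # Scrapy's logging output goes here
        'LOG_LEVEL': 'DEBUG',
    }

    def parse(self, response):
        # prefer the spider's built-in logger over custom handlers
        self.logger.info('parsed %s', response.url)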