Stop submodules logging - running script from batch file - python-3.x

I have a Python script with a CLI argument parser (based on argparse).
I am calling it from a batch file:
set VAR1=arg_1
set VAR2=arg_2
python script.py --arg1 %VAR1% --arg2 %VAR2%
Within script.py I create a logger:
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
This script uses chromedriver, selenium and requests to automate some clicking and moving between web pages.
When running from within PyCharm (configured so that arg_1 and arg_2 are passed to the script) everything is great - I get log messages from my logger only.
When I run the batch file, I get a bunch of logging messages from chromedriver or requests (I think).
I have tried:
@echo off at the start of the batch file.
Setting the level on the root logger.
Getting the logging logger dictionary and setting each logger to WARNING - based on this question.
None of these work and I keep getting logging messages from submodules - ONLY when run from a batch file.
Anybody know how to fix this?

You can use the following configuration options to do this:
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
})
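If that alone is not enough, here is a fuller sketch (the formatter, handler and levels are assumptions, not part of the original answer) of how the same dictConfig call can silence loggers created by imported modules while keeping your own logger at DEBUG; note that logging.getLogger(__name__) in script.py resolves to '__main__' when the script is run directly:
import logging.config

logging.config.dictConfig({
    'version': 1,
    # Drop any loggers that selenium/requests/urllib3 created before this call
    'disable_existing_loggers': True,
    'formatters': {
        'default': {'format': '%(asctime)s %(name)s %(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'default'},
    },
    # Third-party loggers created later fall back to the root level
    'root': {'level': 'WARNING', 'handlers': ['console']},
    'loggers': {
        '__main__': {'level': 'DEBUG'},  # the script's own logger
    },
})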

Related

how to output python celery standard log to both console and logfile

I have a question about how to send the celery log, in its default format, to both the console (stdout) and a logfile.
Log default format: "[%(asctime)s: %(levelname)s/%(processName)s] %(message)s"
Task log default format: "[%(asctime)s: %(levelname)s/%(processName)s][%(task_name)s(%(task_id)s)] %(message)s"
I start up the celery app using:
celery_app = Celery()
celery_app.start(argv=["celery", "worker", "-l", "info"])
This only sends the full celery log to the console (stdout).
When I do:
celery_app = Celery()
celery_app.start(argv=["celery", "worker", "-l", "info", "--logfile='./tasks.log'"])
This only sends the full celery log to the logfile (tasks.log).
How can I send the full celery log to both the console and the logfile at the same time?
I tried using logging.config.dictConfig(config) to set up both a StreamHandler and a FileHandler that write to the console and the logfile. That approach doesn't let me include the task_id and task_name in the default celery log format, because those fields require the TaskFormatter class.
The standard way of doing this is to hook into either celery.signals.setup_logging or celery.signals.after_setup_task_logger and add an additional logging handler in your signal callback function. As an aside, even if you use dictConfig, you can still add the TaskFormatter to your logging config.
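A minimal sketch of the signal approach (the file path ./tasks.log and the handler choice are assumptions): start the worker with -l info so the default handler keeps writing to the console, and attach an extra FileHandler with TaskFormatter in the after_setup_task_logger callback; celery.signals.after_setup_logger can be handled the same way for the worker's own log.
import logging

from celery import Celery
from celery.app.log import TaskFormatter
from celery.signals import after_setup_task_logger

celery_app = Celery()

@after_setup_task_logger.connect
def add_task_file_handler(logger, **kwargs):
    # Mirror the default task log format quoted above, but also write it to a file
    handler = logging.FileHandler('./tasks.log')
    handler.setFormatter(TaskFormatter(
        "[%(asctime)s: %(levelname)s/%(processName)s]"
        "[%(task_name)s(%(task_id)s)] %(message)s"))
    logger.addHandler(handler)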

Why when I start uvicorn in my FastAPI service does my configuration method run twice?

I have written a service using fastapi and uvicorn. There is a main in my service that starts uvicorn (see below). In that main, the first thing I do is load the configuration settings, and some INFO output prints the settings as the configuration is loaded. When I start my service, the configuration loading method seems to run twice.
# INITIALIZE
if __name__ == "__main__":
    # Load the config once at bootstrap time. This outputs the string "Loading configuration settings..."
    config = CdfAuthConfig()
    print("Loaded Configuration")
    # Create FastAPI object
    app = FastAPI()
    # Start uvicorn
    uvicorn.run(app, host="127.0.0.1", port=5050)
The output when I run the service looks like:
Loading configuration settings...
Loading configuration settings...
Loaded Configuration
Why is the CdfAuthConfig class being instantiated twice? It obviously has something to do with the uvicorn.run call.
I had a similar setup and this behaviour made me curious, so I ran some tests, and now I can probably see why.
Your if __name__ == "__main__": block is only reached once; this is a fact.
How can you test this? Add the following line before your if:
print(__name__)
If you run your code as is, but with that line added, it will print:
__main__ # in the first run
Then uvicorn will start your program again, and it will print something like:
__mp_main__ # after uvicorn starts your code again
And right after, it will also print:
app # since this is the argument you gave to uvicorn
If you want to avoid that, you should call uvicorn from the command line instead, like:
uvicorn main:app --reload --host 0.0.0.0 --port 5000 # assuming main.py is your file name
uvicorn re-imports your code because you are calling it from inside the code itself. A workaround would be to have the uvicorn call in a separate file (see the sketch below), or, as I said, just use the command line.
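A minimal sketch of that separate-file workaround (the file name run.py is an assumption): let main.py define app = FastAPI() and load the configuration at module level, and keep only the uvicorn call in the entry point, passing the app as an import string so uvicorn imports main.py itself.
# run.py - assumed separate entry point next to main.py
import uvicorn

if __name__ == "__main__":
    # "main:app" tells uvicorn to import main.py and pick up the `app` object
    uvicorn.run("main:app", host="127.0.0.1", port=5050)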
If you don't want to type the command with all the arguments every time, you can wrap it in a small script (app_start.sh).
I hope this helps you understand a little bit better.

logging module failed to create file while running through pytest

I am using the logging config below to create a log file. It works perfectly fine outside of pytest, but when run through pytest it fails to create the log file.
logging.basicConfig(filename='test.log', filemode='a',format='%(message)s',level=logging.INFO)
My guess is that this config is overridden by pytest's own logging configuration, which is why the file is never created.
Please suggest how I can apply the logging config above and still get the log file when running through pytest.

How to get logs from inside the container executed using DockerOperator?(Airflow)

I'm facing logging issues with DockerOperator.
I'm running a python script inside a docker container using DockerOperator, and I need Airflow to surface the logs from that script. Airflow marks the job as success, but the script inside the container is failing, and I have no clue what is going on because I cannot see the logs properly. Is there a way to set up logging for DockerOperator apart from setting the tty option to True as suggested in the docs?
It looks like you can have logs pushed to XComs, but it's off by default. First, you need to pass xcom_push=True for it to start sending at least the last line of output to XCom. Then, additionally, you can pass xcom_all=True to send all output to XCom, not just the last line.
Perhaps not the most convenient place to put debug information, but it's pretty accessible in the UI: either in the XCom tab when you click into a task, or on the page where you can list and filter XComs (under Browse).
Source: https://github.com/apache/airflow/blob/1.10.10/airflow/operators/docker_operator.py#L112-L117 and https://github.com/apache/airflow/blob/1.10.10/airflow/operators/docker_operator.py#L248-L250
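A minimal sketch of passing those flags (the image name and command are assumptions, and the import path matches Airflow 1.10; in Airflow 2 the do_xcom_push argument from BaseOperator replaces xcom_push):
from airflow.operators.docker_operator import DockerOperator

run_script = DockerOperator(
    task_id='run_script',
    image='__container__:latest',  # assumed image name
    command='python test.py',      # assumed command
    xcom_push=True,  # push container output to XCom (last line only by default)
    xcom_all=True,   # push every line of output instead of just the last one
    dag=dag,
)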
Instead of DockerOperator you can use client.containers.run and then do the following:
import docker
from airflow import DAG
from airflow.decorators import task

with DAG(dag_id='dag_1',
         default_args=default_args,  # default_args assumed to be defined elsewhere in this file
         schedule_interval=None,
         tags=['my_dags']) as dag:

    @task(task_id='task_1')
    def start_task(**kwargs):
        # get the docker client from the environment
        client = docker.from_env()
        # run the container
        response = client.containers.run(
            # The container you wish to call
            image='__container__:latest',
            # The command to run inside the container
            command="python test.py",
            version='auto',
            auto_remove=True,
            stdout=True,
            stderr=True,
            tty=True,
            detach=True,
            remove=True,
            ipc_mode='host',
            network_mode='bridge',
            # Passing the GPU access
            device_requests=[
                docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])
            ],
            # Give the proper system volume mount point
            volumes=[
                'src:/src',
            ],
            working_dir='/src'
        )
        # Stream the container's stdout/stderr back into the Airflow task log
        output = response.attach(stdout=True, stream=True, logs=True)
        for line in output:
            print(line.decode())
        return str(response)

    test = start_task()
Then in your test.py script (in the docker container) you have to do the logging using the standard Python logging module:
import logging
logger = logging.getLogger("airflow.task")
logger.info("Log something.")

mikrotik python3 API call from celery

I use the Python3 API of MikroTik to create some backup files.
When celery executes the python script, the process doesn't complete and the backup file is not created. I attach a screenshot so you can see the output from the API.
Any suggestion? Please advise.
When I run the python script directly it works fine, but when the script runs through celery I get this output. So I examined the API code and added time.sleep(5) after the line with apiros.writeSentence(inputsentence), and now it works fine. It seems that the API call returns before the backup process has finished, and the output is sent to /dev/null.
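A minimal sketch of that workaround (the sentences variable stands in for however the input sentences are built in the apiros example script; the five-second delay is simply the value that worked here):
import time

for inputsentence in sentences:
    apiros.writeSentence(inputsentence)
    # Give RouterOS time to finish writing the backup before the celery task returns
    time.sleep(5)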
