Django Logging - Unable to set propagate to False - python-3.x

I'm getting duplicate entries in my logs. I've looked around, and it seems I have to set 'propagate': False in my logger configuration, which I have already done. However, when I print out logger.propagate it returns True. I even tried to set logger.propagate = False manually, but it still returns True, and I'm still receiving duplicate entries in my logs.
What might be the cause of the problem?
import logging

from rest_framework.generics import ListAPIView

logger = logging.getLogger(__name__)
logger.propagate = False


class PostsListAPIView(ListAPIView):
    def get_queryset(self):
        # Getting both twice
        logger.error('Something went wrong!')
        logger.error(logger.propagate)  # Returns True
        queryset = ...
        return queryset
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {
            "format": "{levelname} {message}",
            "style": "{",
        }
    },
    "handlers": {
        "console": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "simple",
        },
    },
    "loggers": {
        "app": {
            "level": "DEBUG",
            "handlers": ["console"],
            "propagate": False,
        }
    },
}
I also tried setting "disable_existing_loggers": True, but it has no effect.

As for the duplicate log entries, I found out that the cause is the get_queryset method being called twice in order to render the forms, since the Browsable API is enabled.
However, I still have no idea why logger.propagate returns True.
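(One detail worth noting about the True reading, sketched below under the assumption that the view lives in a module such as app/views.py: the LOGGING entry named "app" only sets propagate on the logger called exactly "app", whereas getLogger(__name__) in a submodule returns a child logger such as "app.views", whose own propagate attribute stays True. A child propagating into the already-configured "app" logger does not by itself produce duplicate output.)

import logging.config

logging.config.dictConfig({
    "version": 1,
    "loggers": {
        "app": {"level": "DEBUG", "propagate": False},
    },
})

parent = logging.getLogger("app")       # the logger the LOGGING entry configures
child = logging.getLogger("app.views")  # what getLogger(__name__) returns in app/views.py

print(parent.propagate)  # False -- set by dictConfig
print(child.propagate)   # True  -- the child keeps its own default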

Related

Databricks API - Instance Pool - How to update an existing job to use instance pool instead?

I am trying to update a batch of jobs to use some instance pools with the Databricks API. When I call the update endpoint, the job just does not update: the call reports success, but when I check the job, it has not changed.
What am I doing wrong?
What I did to update the job: I used the get endpoint with the job_id to retrieve the job settings, then updated the resulting data with the values I needed and made the call to update the job:
'custom_tags': {'ResourceClass': 'Serverless'},
'driver_instance_pool_id': 'my-pool-id',
'driver_node_type_id': None,
'instance_pool_id': 'my-other-pool-id',
'node_type_id': None
I used this documentation, https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsUpdate
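(For reference, a minimal sketch of that get-then-update flow, assuming the requests library; the host, token, and ids are placeholders. The full settings payload being sent is shown below.)

import requests

HOST = "https://<workspace>.cloud.databricks.com"              # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# 1. Fetch the current job definition.
job = requests.get(
    f"{HOST}/api/2.1/jobs/get",
    headers=HEADERS,
    params={"job_id": 123123123123},
).json()

# 2. Point the job cluster at the instance pools.
cluster = job["settings"]["job_clusters"][0]["new_cluster"]
cluster["instance_pool_id"] = "my-other-pool-id"
cluster["driver_instance_pool_id"] = "my-pool-id"
cluster["node_type_id"] = None
cluster["driver_node_type_id"] = None

# 3. Send the partial update -- this is the call that did not take effect.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=HEADERS,
    json={"job_id": job["job_id"], "new_settings": job["settings"]},
)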
Here is my payload:
{
    "created_time": 1672165913242,
    "creator_user_name": "email#email.com",
    "job_id": 123123123123,
    "run_as_owner": true,
    "run_as_user_name": "email#email.com",
    "settings": {
        "email_notifications": {
            "no_alert_for_skipped_runs": false,
            "on_failure": [
                "email1#email.com",
                "email2#email.com"
            ]
        },
        "format": "MULTI_TASK",
        "job_clusters": [
            {
                "job_cluster_key": "the_cluster_key",
                "new_cluster": {
                    "autoscale": {
                        "max_workers": 4,
                        "min_workers": 2
                    },
                    "aws_attributes": {
                        "availability": "SPOT_WITH_FALLBACK",
                        "ebs_volume_count": 0,
                        "first_on_demand": 1,
                        "instance_profile_arn": "arn:aws:iam::XXXXXXXXXX:instance-profile/instance-profile",
                        "spot_bid_price_percent": 100,
                        "zone_id": "us-east-1a"
                    },
                    "cluster_log_conf": {
                        "s3": {
                            "canned_acl": "bucket-owner-full-control",
                            "destination": "s3://some-bucket/log/log_123123123/",
                            "enable_encryption": true,
                            "region": "us-east-1"
                        }
                    },
                    "cluster_name": "",
                    "custom_tags": {
                        "ResourceClass": "Serverless"
                    },
                    "data_security_mode": "SINGLE_USER",
                    "driver_instance_pool_id": "my-driver-pool-id",
                    "enable_elastic_disk": true,
                    "instance_pool_id": "my-worker-pool-id",
                    "runtime_engine": "PHOTON",
                    "spark_conf": {...},
                    "spark_env_vars": {...},
                    "spark_version": "..."
                }
            }
        ],
        "max_concurrent_runs": 1,
        "name": "my_job",
        "schedule": {...},
        "tags": {...},
        "tasks": [{...},{...},{...}],
        "timeout_seconds": 79200,
        "webhook_notifications": {}
    }
}
I tried using the update endpoint and reading the docs for more information, but I found nothing related to the issue.
I finally got it: I was using the partial update endpoint and found that it does not work for the whole job payload. So I changed the call to use the full update endpoint (reset), and it worked.
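(Concretely, the fix amounts to posting the full, edited settings object to the jobs/reset endpoint instead of jobs/update. A minimal sketch, with placeholder host, token, and job id:)

import requests

HOST = "https://<workspace>.cloud.databricks.com"              # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder


def reset_job(job_id: int, new_settings: dict) -> None:
    """Replace a job's settings wholesale via jobs/reset (unlike the partial jobs/update)."""
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/reset",
        headers=HEADERS,
        json={"job_id": job_id, "new_settings": new_settings},
    )
    resp.raise_for_status()


# Usage: pass the full "settings" object fetched from jobs/get, with the pool ids edited in.
# reset_job(123123123123, edited_settings)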

My VS Code flickers when I save. I turned on black, flake8, and formatOnSave. Why does it flicker, and how do I stop it?

I have the following workspace settings for my Django project in VS Code:
{
    "folders": [
        {
            "path": "."
        }
    ],
    "settings": {
        "python.pythonPath": "/Users/kim/.pyenv/versions/3.7.7/bin/python3.7",
        "files.exclude": {
            "**/.classpath": true,
            "**/.project": true,
            "**/.settings": true,
            "**/.factorypath": true
        },
        "editor.tabSize": 4,
        "[javascript]": {
            "editor.tabSize": 2
        },
        "[json]": {
            "editor.tabSize": 2
        },
        "[markdown]": {
            "editor.tabSize": 2
        },
        "javascript.format.insertSpaceAfterOpeningAndBeforeClosingNonemptyParenthesis": true,
        "python.linting.flake8Enabled": true,
        "python.linting.pylintArgs": ["--enable=unused-import", "--enable=W0614"],
        "python.formatting.provider": "black",
        "[python]": {
            "editor.formatOnSave": true,
            "editor.codeActionsOnSave": {
                "source.organizeImports": true
            }
        },
        "workbench.iconTheme": "vscode-icons",
        "editor.formatOnSave": true
    }
}
You can see how it flickers as I press Cmd+S to save the file in this GIF. Why does this flickering happen? I could understand it happening once, but it flickers back and forth, as if VS Code were formatting on save between two different formats and couldn't make up its mind.
How do I properly solve this issue?
OK, I found the issue and the solution.
Basically, the issue is that two formatters were fighting over how to sort the imports.
I used the answer in this GitHub comment:
https://github.com/microsoft/vscode-python/issues/6933#issuecomment-543059396
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports.python": true
}
},
"python.formatting.blackArgs": ["--line-length", "88"],
"python.sortImports.args": [
"--multi-line=3",
"--trailing-comma",
"--force-grid-wrap=0",
"--use-parentheses",
"--line-width=88",
],
Notice I changed organizeImports to organizeImports.python.
I also added "python.formatting.blackArgs": ["--line-length", "88"] so the line length stays consistent. That setting was suggested in another issue for the same problem; on its own it didn't work, so I combined it with the sortImports args.

How to save log file into subdirectory using dictConfig configuration?

I have the following config in my Python 3.6 application:
config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": ("[%(asctime)s] %(levelname)s - %(threadName)s - %(name)s - %(message)s")
        },
    },
    "handlers": {
        "default": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
            "level": DEFAULT_LOG_LEVEL,
        },
        "fh": {
            "class": "logging.handlers.RotatingFileHandler",
            "formatter": "standard",
            "level": FILE_LOG_LEVEL,
            "filename": "myfile.log",
            "maxBytes": 1024 * 1000,  # 1 MB
            "backupCount": 20
        }
    },
    "loggers": {
        "": {
            "handlers": ["default", "fh"],
            "level": DEFAULT_LOG_LEVEL,
            "propagate": True
        }
    }
}
This currently works; however, it saves all the .log files to the current working directory (where the executable is on Windows).
What I'd like to do is have it save the logs to the "logs" directory, which sits at the same level as the executable and already exists.
My thought was to modify the "filename" property in the log configuration, but I'm not sure how that impacts my script files that use:
logger = logging.getLogger(__name__)
Would that need to be modified as well to get it to log to the correct location?
Desired structure:

app.exe
logs/
    myfile.log
After testing, my initial thought turned out to do what I need: I simply put the directory name in front of the file name in the config's "filename" property. No modification was needed to the scripts' logger line.
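(In other words, only the handler's filename changes. A minimal sketch, with a literal level standing in for the FILE_LOG_LEVEL constant and an optional guard in case the directory is ever missing:)

import os

LOG_DIR = "logs"
os.makedirs(LOG_DIR, exist_ok=True)  # no-op when the directory already exists

fh = {
    "class": "logging.handlers.RotatingFileHandler",
    "formatter": "standard",
    "level": "DEBUG",                                  # stand-in for FILE_LOG_LEVEL
    "filename": os.path.join(LOG_DIR, "myfile.log"),   # -> logs/myfile.log
    "maxBytes": 1024 * 1000,  # 1 MB
    "backupCount": 20,
}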

Logging RotatingFileHandler not rotating

I have implemented a custom RotatingFileHandler:
from logging.handlers import RotatingFileHandler


class FreezeAwareFileHandler(RotatingFileHandler):
    def emit(self, record):
        try:
            msg = self.format(record)
            stream = self.stream
            stream.write(msg)
            stream.write('\n')
            self.flush()
        except (KeyboardInterrupt, SystemExit):  # pragma: no cover
            raise
        except:
            self.handleError(record)
And I have this configuration JSON file (I have also tried YAML, and configuring the handler through its class methods):
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "standard": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S"
        }
    },
    "handlers": {
        "freeze_aware_file_handler": {
            "class": "Logging.FreezeAwareFileHandler",
            "formatter": "standard",
            "level": "INFO",
            "filename": "logs\\MyLog.log",
            "maxBytes": 1024,
            "backupCount": 10,
            "encoding": "utf8"
        }
    },
    "loggers": {
        "my_module": {
            "level": "INFO",
            "handlers": ["freeze_aware_file_handler"],
            "propagate": "no"
        }
    },
    "root": {
        "level": "INFO",
        "handlers": ["freeze_aware_file_handler"]
    }
}
And this is the code I use for initializing:
import json
import logging.config
import os

if os.path.exists(path):
    with open(path, 'rt') as f:
        config_json = json.load(f)
    logging.config.dictConfig(config_json)

logger = logging.getLogger("my_handler")
Logging to the specified file works normally, but it never rotates.
Does anyone know why I'm seeing this behavior?
I'm using Python 3.5.
It turned out the problem was really simple: when overriding emit on a rotating file handler, you need to explicitly call the base class's shouldRollover and doRollover methods.
def emit(self, record):
    try:
        if self.shouldRollover(record):
            self.doRollover()
        # ... any custom actions here ...
    except:
        self.handleError(record)
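(Putting the two pieces together, a minimal version of the handler with the rollover check folded into the overridden emit might look like the sketch below; it is not the poster's exact code.)

from logging.handlers import RotatingFileHandler


class FreezeAwareFileHandler(RotatingFileHandler):
    def emit(self, record):
        try:
            # The base RotatingFileHandler only rotates inside its own emit,
            # so an override that bypasses it must run these checks itself.
            if self.shouldRollover(record):
                self.doRollover()
            msg = self.format(record)
            self.stream.write(msg)
            self.stream.write('\n')
            self.flush()
        except (KeyboardInterrupt, SystemExit):  # pragma: no cover
            raise
        except Exception:
            self.handleError(record)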

Algolia filter by sub key value

I have data where one record looks like this:
{
    "creatorUsername": "mbalex99",
    "description": "For Hikers and All the Lovers Alike!",
    "imageUrl": "https://s3.amazonaws.com/edenmessenger/uploads/28C03B77-E3E9-4D33-A433-6522C0480C16.jpg",
    "isPrivate": true,
    "name": "Nature Lovers ",
    "roomId": "-KILq0nBN8wHQuEjMYRF",
    "usernames": {
        "bannon": true,
        "loveless": true,
        "mbalex99": true,
        "terra": true
    },
    "objectID": "-KILq0nBN8wHQuEjMYRF"
}
I can't seem to find a way to search for records where usernames has a key equal to mbalex99. Is that possible?
This is indeed not possible with Algolia. You can only filter by value.
However, you can definitely add an array containing the keys of your object and filter by this attribute:
"usernames": {
"bannon": true,
"loveless": true,
"mbalex99": true,
"terra": true
},
"usernameList": ["bannon", "loveless", "mbalex99", "terra"]
// ...
and
// At query time:
{ "facetFilters": "usernameList:mbalex99" }
