Is RotatingFileHandler thread-safe in Python 3.8?

In a project I am using RotatingFileHandler for logs. It is configured with a maximum size of 5 MB and a backup count of 2. The log handler is used by different threads. The program crashes after a while with the following error:
"File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py",
line 70, in emit
self.doRollover() File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 70, in emit
self.doRollover() File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 171, in doRollover
self.rotate(self.baseFilename, dfn) File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 171, in doRollover
self.rotate(self.baseFilename, dfn) File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 111, in rotate
os.rename(source, dest) File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 111, in rotate
os.rename(source, dest) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process"
I think it happens because the logger tries to rename the file while it is still open in another thread. Any suggestions are appreciated. Thanks.

Are they actually operating in different threads? Or are they different processes?
As the docs explain:
logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python
Those same docs also outline some possible solutions to this problem, such as using a multiprocessing Lock, which I won't replicate here.
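If it does turn out to be multiple processes, one pattern the logging cookbook describes is to funnel all records through a queue so that a single process owns the rotating file. A minimal sketch of that approach, reusing the 5 MB size and backup count of 2 from the question (the file name app.log is made up):

import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # Worker processes log through the queue instead of touching the file.
    root = logging.getLogger()
    root.addHandler(logging.handlers.QueueHandler(queue))
    root.setLevel(logging.INFO)
    root.info("hello from %s", multiprocessing.current_process().name)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    # Only the listener writes to app.log, so doRollover() can never race
    # with an open handle held by another process.
    file_handler = logging.handlers.RotatingFileHandler(
        "app.log", maxBytes=5 * 1024 * 1024, backupCount=2)
    listener = logging.handlers.QueueListener(queue, file_handler)
    listener.start()
    workers = [multiprocessing.Process(target=worker, args=(queue,))
               for _ in range(3)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    listener.stop()

If the writers really are plain threads in a single process, this shouldn't be necessary: each handler's emit() runs under a per-handler lock, and WinError 32 during rotation is more often a sign of a second process (or an antivirus/indexer) holding the file open.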

Related

Hydrapaper will not open - need help interpreting errors

I'm running Ubuntu 20.04.3 LTS and until recently, I used Hydrapaper for my dual monitor setup. It's not imperative that I be able to have separate wallpapers but I am being driven kind of mad by the errors I'm seeing when I try to open Hydrapaper from the terminal. This is what I get:
michael@michael-Inspiron-7790-AIO:~$ flatpak run org.gabmus.hydrapaper
Traceback (most recent call last):
File "/app/lib/python3.9/site-packages/hydrapaper/__main__.py", line 206, in do_command_line
self.do_activate()
File "/app/lib/python3.9/site-packages/hydrapaper/__main__.py", line 146, in do_activate
self.window = HydraPaperAppWindow()
File "/app/lib/python3.9/site-packages/hydrapaper/app_window.py", line 44, in __init__
self.monitors_flowbox = HydraPaperMonitorsFlowbox()
File "/app/lib/python3.9/site-packages/hydrapaper/monitors_flowbox.py", line 132, in __init__
self.populate()
File "/app/lib/python3.9/site-packages/hydrapaper/monitors_flowbox.py", line 151, in populate
HydraPaperMonitorsFlowboxItem(m), -1
File "/app/lib/python3.9/site-packages/hydrapaper/monitors_flowbox.py", line 71, in __init__
self.set_picture()
File "/app/lib/python3.9/site-packages/hydrapaper/monitors_flowbox.py", line 94, in set_picture
pixbuf = GdkPixbuf.Pixbuf.new_from_file_at_scale(
gi.repository.GLib.GError: gdk-pixbuf-error-quark: Couldn’t recognize the image file format for file “/home/michael/.var/app/org.gabmus.hydrapaper/cache/org.gabmus.hydrapaper/thumbnails//a5debe6ea02641a70325dc008910a85c61765e55906c29dd338e9f63506378a4.png” (3)
And as I said, Hydrapaper won't open. It appears in my topbar for a few seconds then disappears. Can anyone suggest a fix? Thanks in advance.
I had this exact same issue. What fixed it for me was going to the file mentioned at the end of the error and deleting it; HydraPaper worked perfectly after that.
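If more than one cached thumbnail is corrupt, it may be easier to clear the whole cache than to hunt files down one by one. A small sketch (the cache path comes from the error message above; HydraPaper regenerates thumbnails on the next start):

from pathlib import Path

# Cache location taken from the GError in the question.
cache = Path.home() / ".var/app/org.gabmus.hydrapaper/cache/org.gabmus.hydrapaper/thumbnails"
for thumb in cache.glob("*.png"):
    thumb.unlink()  # delete corrupt cached thumbnails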

Django cookiecutter with PostgreSQL setup on Ubuntu 20.04 can't migrate

I installed Django cookiecutter on Ubuntu 20.04 with PostgreSQL. When I try to run migrations against the database, I get this error:
python manage.py migrate
Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 361, in execute
    self.check()
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 387, in check
    all_issues = self._run_checks(
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 64, in _run_checks
    issues = run_checks(tags=[Tags.database])
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/registry.py", line 72, in run_checks
    new_errors = check(app_configs=app_configs)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/database.py", line 9, in check_database_backends
    for conn in connections.all():
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 216, in all
    return [self[alias] for alias in self]
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 213, in __iter__
    return iter(self.databases)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/utils/functional.py", line 80, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 147, in databases
    self._databases = settings.DATABASES
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 79, in __getattr__
    self._setup(name)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 66, in _setup
    self._wrapped = Settings(settings_module)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 176, in __init__
    raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
I followed all the instructions in the cookiecutter docs and ran createdb. What is wrong?
Python has a great many libraries, and to keep things simple and reusable, modules call each other. So first of all, don't be scared by such a big error: it is only a traceback of the error, as one piece of code calls the next, which calls the next. To debug any such problem, it's important to look at the first and last .py file names. In your case, the nesting in the traceback flows like this:
[Traceback flowchart]
So, the key problem for you is: "The SECRET_KEY setting must not be empty."
I would recommend putting the secret key under the "config/.env" file, as mentioned here:
https://wemake-django-template.readthedocs.io/en/latest/pages/template/django.html#secret-settings-in-production
Initially, you will find the SECRET_KEY inside the settings.py file of the project folder, but it needs to live in the .env file in a production/live environment. And NEVER post the SECRET_KEY of a live environment on GitHub or even here, as it's a security risk.
Your main problem is very clear in the logs.
You need to set the SECRET_KEY environment variable and give it a value; that should get you past this error message. It might throw another error if some other configuration is not set properly.
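For illustration, a minimal sketch of wiring the key in with django-environ, which cookiecutter-based templates commonly use (the variable name DJANGO_SECRET_KEY and the config/.env location are assumptions; match them to your template):

# config/settings.py (sketch)
import environ

env = environ.Env()
env.read_env("config/.env")  # load variables from the .env file
SECRET_KEY = env("DJANGO_SECRET_KEY")  # raises ImproperlyConfigured if unset

The config/.env file would then contain a single line such as DJANGO_SECRET_KEY=some-long-random-string; django.core.management.utils.get_random_secret_key() can generate a suitable value. Never commit that file.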

Registering and downloading a fastText .bin model fails with Azure Machine Learning Service

I have a simple RegisterModel.py script that uses the Azure ML Service SDK to register a fastText .bin model. This completes successfully and I can see the model in the Azure Portal UI (though I cannot see which model files are in it). I then want to download the model (DownloadModel.py) and use it for testing purposes; however, it throws an error in the model.download method (tarfile.ReadError: file could not be opened successfully) and produces a 0-byte rjtestmodel8.tar.gz file.
I then use the Azure Portal's Add Model and select the same .bin model file, and it uploads fine. Downloading it with the DownloadModel.py script below also works fine, so I am assuming something is not correct in the register script.
Here are the 2 scripts and the stacktrace - let me know if you can see anything wrong:
RegisterModel.py
import azureml.core
from azureml.core import Workspace, Model
ws = Workspace.from_config()
model = Model.register(workspace=ws,
                       model_name='rjSDKmodel10',
                       model_path='riskModel.bin')
DownloadModel.py
# Works when downloading the UI Uploaded .bin file, but not the SDK registered .bin file
import os
import azureml.core
from azureml.core import Workspace, Model
ws = Workspace.from_config()
model = Model(workspace=ws, name='rjSDKmodel10')
model.download(target_dir=os.getcwd(), exist_ok=True)
Stacktrace
Traceback (most recent call last):
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\lib\python\ptvsd\__main__.py", line 432, in main
run()
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\lib\python\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "...\.conda\envs\DoC\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "...\.conda\envs\DoC\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "...\.conda\envs\DoC\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "...\\DownloadModel.py", line 21, in <module>
model.download(target_dir=os.getcwd(), exist_ok=True)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 712, in download
file_paths = self._download_model_files(sas_to_relative_download_path, target_dir, exist_ok)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 658, in _download_model_files
file_paths = self._handle_packed_model_file(tar_path, target_dir, exist_ok)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 670, in _handle_packed_model_file
with tarfile.open(tar_path) as tar:
File "...\.conda\envs\DoC\lib\tarfile.py", line 1578, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
Environment
riskModel.bin is 6 MB
AMLS 1.0.60
Python 3.7
Working locally with Visual Studio Code
The Azure Machine Learning service SDK has a bug with how it interacts with Azure Storage, which causes it to upload corrupted files if it has to retry uploading.
A couple of workarounds:
The bug was introduced in the 1.0.60 release. If you downgrade to AzureML-SDK 1.0.55, the code should fail when there are issues uploading instead of silently corrupting data.
It's possible that the retry is being triggered by the low timeout values that the AzureML-SDK defaults to. You could investigate changing the timeout in site-packages/azureml/_restclient/artifacts_client.py
This bug should be fixed in the next release of the AzureML-SDK.
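In the meantime, a cheap way to detect the silent corruption is to compare checksums of the local model and a freshly downloaded copy. A sketch (the downloaded/ directory is made up):

import hashlib

def sha256(path):
    # Hash in chunks so a large model never has to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A mismatch means the registered copy was corrupted during upload.
print(sha256("riskModel.bin") == sha256("downloaded/riskModel.bin"))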

How to use the paralleldots API in App Engine?

I want to check text similarity using the paralleldots API in App Engine, but when I set the API key in App Engine using
paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXXXXXXXXX")
App Engine gives this error:
INFO 2019-03-17 10:43:59,852 module.py:835] default: "GET / HTTP/1.1" 500 -
INFO 2019-03-17 10:46:47,548 client.py:777] Refreshing access_token
ERROR 2019-03-17 10:46:50,931 wsgi.py:263]
Traceback (most recent call last):
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/ulti72/Desktop/koda/main.py", line 26, in <module>
paralleldots.set_api_key("7PR8iwo42DGFB8qpLjpUGJPqEQHU322lqTDkgaMrX7I")
File "/home/ulti72/Desktop/koda/lib/paralleldots/config.py", line 13, in set_api_key
with open('settings.cfg', 'w') as configfile:
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/stubs.py", line 278, in __init__
raise IOError(errno.EROFS, 'Read-only file system', filename)
IOError: [Errno 30] Read-only file system: 'settings.cfg'
The paralleldots API seems to want to save a settings.cfg file to the local filesystem in response to that call, which is not allowed in the 1st generation standard environment; in the 2nd generation it is only allowed for files under the /tmp filesystem.
The local development server was designed for the 1st generation standard environment and enforces the restriction with that error. It has limited support for the 2nd generation environment; see Python 3.7 Local Development Server Options for new app engine apps.
Things to try:
check if specifying the location of settings.cfg is supported and, if so, make it reside under /tmp (see the sketch after this list). Maybe the local development server allows that, or you could switch to some local development method other than the development server.
check if saving the settings via an already-open file handler is supported and, if so, use one obtained from the Cloud Storage client library, something along these lines: How to zip or tar a static folder without writing anything to the filesystem in python?
check if set_api_key() supports some other method of persisting the API key than saving the settings to a file
check if it's possible to specify the API key for every subsequent call so you don't have to persist it using set_api_key() (maybe using a common wrapper function for convenience)
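A sketch of the first idea: since the library opens settings.cfg with a relative path, changing the working directory to /tmp before setting the key may be enough on a 2nd generation runtime (assumptions: the library keeps using a relative path, and nothing else in your app depends on the working directory):

import os
import paralleldots

# /tmp is the only writable filesystem in the 2nd gen standard environment,
# so make the library's relative 'settings.cfg' land there.
os.chdir("/tmp")
paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXXXXXXXXX")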

How to get credentials for an Azure Linux VM (Kubernetes cluster) on a machine different from where it was created?

I need to gain access to a Kubernetes cluster in Azure from a Windows Server 2016 machine. I did not create the cluster, but I am assigned as Global Admin on the Azure account. I have already logged in to the Azure account successfully, but not yet to the server. I have already installed the kubectl CLI on the machine.
Now I need to access the cluster.
I have .kube/config, .ssh/id_rsa and .ssh/id_rsa.pub inside my C:\Users\Administrator folder. I have tried ssh -i ~/.ssh/id_rsa kubectluser@ourDNSname and was able to get in, so my private key is good. But I don't want to work inside the VM; my working directory is supposed to be on the Windows Server 2016 machine. I should be able to run kubectl get nodes and have it return a table of the 3 VMs.
This is what happens instead (again, SSH to the VM works, and I can run kubectl commands without problems inside the VM):
az acs kubernetes get-credentials --resource-group=myRGroup --name=myClusterName
returns
Authentication failed.
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\main.py", line 36, in main
    cmd_result = APPLICATION.execute(args)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\application.py", line 216, in execute
    result = expanded_arg.func(params)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 377, in __call__
    return self.handler(*args, **kwargs)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 620, in _execute_command
    reraise(*sys.exc_info())
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 602, in _execute_command
    result = op(client, **kwargs) if client else op(**kwargs)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\acs\custom.py", line 776, in k8s_get_credentials
    _k8s_get_credentials_internal(name, acs_info, path, ssh_key_file)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\acs\custom.py", line 797, in _k8s_get_credentials_internal
    '.kube/config', path_candidate, key_filename=ssh_key_file)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\acs\acs_client.py", line 72, in secure_copy
    ssh.connect(host, username=user, pkey=pkey, sock=proxy)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\client.py", line 416, in connect
    look_for_keys, gss_auth, gss_kex, gss_deleg_creds, t.gss_host,
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\client.py", line 701, in _auth
    raise saved_exception
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\client.py", line 678, in _auth
    self._transport.auth_publickey(username, key))
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\transport.py", line 1447, in auth_publickey
    return self.auth_handler.wait_for_response(my_event)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\auth_handler.py", line 223, in wait_for_response
    raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
kubectl get nodes
returns
You must be logged in to the server
I can't use kubectl create or kubectl set image deployment because of this.
What do I need to do? What information do I need from the person and machine who/which created the cluster?
Edit:
I have .kube/config, .ssh/id_rsa and .ssh/id_rsa.pub inside my
C:\Users\Administrator folder.
The default path to the SSH key file is ~\.ssh\id_rsa. On Windows, we should specify the path explicitly, like this:
C:\Users\jason\.ssh>az acs kubernetes get-credentials --resource-group=jasonk8s --name jasonk8s --ssh-key-file C:\Users\jason\.ssh\k8s
Merged "jasontest321mgmt" as current context in C:\Users\jason\.kube\config
C:\Users\jason\.ssh>kubectl.exe get nodes
NAME STATUS ROLES AGE VERSION
k8s-agent-c99b4149-0 Ready agent 7m v1.7.7
k8s-master-c99b4149-0 Ready master 8m v1.7.7
In your scenario, please try this command to get credentials:
az acs kubernetes get-credentials --resource-group=myRGroup --name=myClusterName --ssh-key-file C:\Users\Administrator\.ssh\id_rsa
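If authentication still fails after pointing --ssh-key-file at the right key, it can help to confirm that the key file itself loads, independently of the CLI; the traceback shows az acs uses paramiko underneath. A quick check (assumption: an unencrypted RSA key):

import paramiko

# Raises PasswordRequiredException for passphrase-protected keys and
# SSHException for malformed files; a printed fingerprint means the
# file itself is readable and well-formed.
key = paramiko.RSAKey.from_private_key_file(r"C:\Users\Administrator\.ssh\id_rsa")
print(key.get_fingerprint().hex())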
