Conflicting logging with Python pid package - python-3.x

I've been working on a Python daemon that uses the pid package. After initially logging directly to the console, I wanted to switch to file logging using Python's logging module. This is where I ran into the problem.
I have start/stop functions to manage the daemon:
import os
import sys
import time
import signal
import lockfile
import logging
import logging.config
import daemon
from pid import PidFile
from mpmonitor.monitor import MempoolMonitor

curr_dir = os.path.dirname(os.path.abspath(__file__))   # (assumed; defined elsewhere in the original)
pid_file = os.path.join(curr_dir, "mpmonitor.pid")      # (assumed; defined elsewhere in the original)

# logging.config.fileConfig(fname="logging.conf", disable_existing_loggers=False)
# log = logging.getLogger("mpmonitor")

def start():
    print("Starting Mempool Monitor")

    _pid_file = PidFile(pidname="mpmonitor.pid", piddir=curr_dir)
    with daemon.DaemonContext(stdout=sys.stdout,
                              stderr=sys.stderr,
                              stdin=sys.stdin,
                              pidfile=_pid_file):
        # Start the monitor:
        mpmonitor = MempoolMonitor()
        mpmonitor.run()

def stop():
    print("\n{}\n".format(pid_file))
    try:
        with open(pid_file, "r") as f:
            content = f.read()
    except FileNotFoundError:
        print("WARNING - PID file not found, cannot stop daemon.\n({})".format(pid_file))
        sys.exit()

    print("Stopping Mempool Monitor")
    # log.info("Stopping Mempool Monitor")
    pid = int(content)
    os.kill(pid, signal.SIGTERM)
    sys.exit()
which works as you would expect it to. (Note that the logging code is commented out.)
Uncommenting the logging code breaks everything, and some pretty random stuff happens. The error message (trimmed down, since the full traceback "looks like spam"):
--- Logging error ---
OSError: [Errno 9] Bad file descriptor
Call stack:
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 77, in setup
self.logger.debug("%r entering setup", self)
Message: '%r entering setup'
Arguments: (<pid.PidFile object at 0x7fc8faa479e8>,)
--- Logging error ---
OSError: [Errno 9] Bad file descriptor
Call stack:
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 170, in create
self.logger.debug("%r create pidfile: %s", self, self.filename)
Message: '%r create pidfile: %s'
Arguments: (<pid.PidFile object at 0x7fc8faa479e8>, '/home/leilerg/python/mempoolmon/mpmonitor.pid')
Traceback (most recent call last):
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 136, in inner_check
pid = int(pid_str)
ValueError: invalid literal for int() with base 10: 'DEBUG - 2020-04-'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 139, in inner_check
raise PidFileUnreadableError(exc)
pid.PidFileUnreadableError: invalid literal for int() with base 10: 'DEBUG - 2020-04-'
--- Logging error ---
Traceback (most recent call last):
OSError: [Errno 9] Bad file descriptor
Call stack:
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 197, in close
self.logger.debug("%r closing pidfile: %s", self, self.filename)
Message: '%r closing pidfile: %s'
Arguments: (<pid.PidFile object at 0x7fc8faa479e8>, '/home/leilerg/python/mempoolmon/mpmonitor.pid')
The random stuff I was referring to: the file mpmonitor.pid no longer contains a PID, but rather some attempted log/error messages:
user@mylaptor:mempoolmon$ cat mpmonitor.pid
DEBUG - 2020-04-05 10:52:55,676 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> entering setup
DEBUG - 2020-04-05 10:52:55,678 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> create pidfile: /home/leilerg/python/mempoolmon/mpmonitor.pid
DEBUG - 2020-04-05 10:52:55,678 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> check pidfile: /home/leilerg/python/mempoolmon/mpmonitor.pid
DEBUG - 2020-04-05 10:52:55,678 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> closing pidfile: /home/leilerg/python/mempoolmon/mpmonitor.pid
To me this looks like the pid logfile got confused with the PID file somehow. This is odd as I explicitly set disable_existing_loggers=False.
Any ideas?
If relevant, I'm on the latest Linux Mint. I also posted the question on the pid project GitHub, as I suspect this is a bug.

The problem has since been solved on the project's GitHub page, in issue 31.
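For anyone landing here before reading that issue: the symptoms are consistent with python-daemon closing every open file descriptor when it detaches. The descriptor backing the logging FileHandler gets closed (hence the repeated OSError: [Errno 9] Bad file descriptor), and the pidfile is later opened on the same descriptor number, so stray log writes land inside it. If that is the cause, a minimal sketch of a fix (keep_fds is my own name) is to hand the handler descriptors to DaemonContext via its files_preserve argument:

import logging
import logging.config
import daemon
from pid import PidFile

logging.config.fileConfig(fname="logging.conf", disable_existing_loggers=False)
log = logging.getLogger("mpmonitor")

def start():
    # Collect the descriptors of any file-based logging handlers so the
    # daemon does not close them when it detaches (assumption: this is
    # the same failure mode described in issue 31).
    keep_fds = [h.stream.fileno() for h in logging.getLogger().handlers
                if hasattr(h, "stream")]
    _pid_file = PidFile(pidname="mpmonitor.pid", piddir=curr_dir)
    with daemon.DaemonContext(pidfile=_pid_file, files_preserve=keep_fds):
        mpmonitor = MempoolMonitor()
        mpmonitor.run()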

Related

pika.exceptions.ChannelClosedByBroker: (405, "RESOURCE_LOCKED - cannot obtain exclusive access to locked queue 'q1' in vhost '/'. It could be

I wrote a receiver as shown below:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
ch = connection.channel()
ch.exchange_declare(exchange='ex1', exchange_type='fanout')
ch.queue_declare(queue='q1', exclusive=True)
ch.queue_bind(exchange='ex1', queue='q1')
print("waiting for log")

def callback(ch, method, properties, body):
    print(f'This is from receiver {body}')

ch.basic_consume(queue='q1', on_message_callback=callback, auto_ack=True)
ch.start_consuming()
and I got this error when running it:
Traceback (most recent call last):
File "receiver2.py", line 7, in <module>
ch.queue_declare(queue='q1',exclusive=True)
File "/train/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 2506, in queue_declare
self._flush_output(declare_ok_result.is_ready)
File "/train/lib/python3.8/site-packages/pika/adapters/blocking_connection.py", line 1339, in _flush_output
raise self._closing_reason # pylint: disable=E0702
pika.exceptions.ChannelClosedByBroker: (405, "RESOURCE_LOCKED - cannot obtain exclusive access to locked queue 'q1' in vhost '/'. It could be originally declared on another connection or the exclusive property value does not match that of the original declaration.")
I removed the queue name (q1) in queue_declare, changing
ch.queue_declare(queue='q1', exclusive=True)
to
ch.queue_declare(queue='', exclusive=True)
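With an empty queue name, RabbitMQ generates a unique name for the exclusive queue, so it no longer collides with the q1 that another connection already holds. The generated name can be read back from the declare result and used for both the binding and the consume call; a minimal sketch of the full receiver under that change:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
ch = connection.channel()
ch.exchange_declare(exchange='ex1', exchange_type='fanout')

# Let the broker pick a unique name for this exclusive queue.
result = ch.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue

ch.queue_bind(exchange='ex1', queue=queue_name)

def callback(ch, method, properties, body):
    print(f'This is from receiver {body}')

ch.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
ch.start_consuming()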

starting celery worker from python script results in error - "click.exceptions.UsageError: No such command" using celery==5.1.2

Here is my cw_manage_integration/psa_integration/api_service/sync_config/__init__.py:
from celery import Celery
from kombu import Queue
from psa_integration.celery_config import QUEUE, USER, MAX_PRIORITIES_SUPPORT_AT_TIME

BROKER = "amqp://{0}:{1}@{2}/xyz".format("abc", "pqrst", "x.x.x.x")

APP = Celery(
    "sync service",
    broker=BROKER,
    backend='rpc://',
    include=["psa_integration.sync_service.alert_sync.alert",
             "psa_integration.sync_service.tenant_sync.tenant",
             "psa_integration.sync_service.alert_sync.update_status"]
)

APP.conf.task_queues = [
    Queue(QUEUE, queue_arguments={'x-max-priority': MAX_PRIORITIES_SUPPORT_AT_TIME}),
]
Below is cw_manage_integration/start_service.py:
"""Scrip to start Sync service via Celery."""
from psa_integration.utils.logger import *
from psa_integration import sync_service
from psa_integration.celery_config import CELERY_CONCURRENCY
APP = sync_service.APP
try:
APP.start(["__init__.py", "worker", "-c", str(CELERY_CONCURRENCY)])
except Exception as scheduler_exception:
logging.exception("Exception occurred while starting services. Exception = {}".format(scheduler_exception))
When I run python3 start_service.py with celery==4.4.5, it works fine and starts the celery workers.
But when the same start_service.py is run with celery==5.1.2, it throws the error below:
>python3 start_service.py
MainProcess INFO 2021-07-07 16:27:42,725 all_logs 79 : started
MainProcess INFO 2021-07-07 16:27:42,725 all_logs 80 : log file name: /home/sdodmane/PycharmProjects/cw_manage_integration1/cw_manage_integration/psa_integration/logs/worker_2021-07-07.log
MainProcess INFO 2021-07-07 16:27:42,725 all_logs 81 : Level: 4
Traceback (most recent call last):
  File "/home/sdodmane/.local/lib/python3.8/site-packages/click_didyoumean/__init__.py", line 34, in resolve_command
    return super(DYMMixin, self).resolve_command(ctx, args)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1188, in resolve_command
    ctx.fail('No such command "%s".' % original_cmd_name)
  File "/usr/lib/python3/dist-packages/click/core.py", line 496, in fail
    raise UsageError(message, self)
click.exceptions.UsageError: No such command "__init__.py".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "start_service.py", line 10, in <module>
    APP.start(["__init__.py", "worker", "-c", str(CELERY_CONCURRENCY)])
  File "/usr/local/lib/python3.8/dist-packages/celery/app/base.py", line 371, in start
    celery.main(args=argv, standalone_mode=False)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1132, in invoke
    cmd_name, cmd, args = self.resolve_command(ctx, args)
  File "/home/sdodmane/.local/lib/python3.8/site-packages/click_didyoumean/__init__.py", line 42, in resolve_command
    raise click.exceptions.UsageError(error_msg, error.ctx)
click.exceptions.UsageError: No such command "__init__.py".
I am not able to work out what changed between celery==4.4.5 and celery==5.1.2 in this context.
Please help me solve this problem.
The start method is broken in the current (5.1.2) release.
It has been fixed (https://github.com/celery/celery/pull/6825/files), but the fix has not been released yet; hopefully the next release, v5.1.3, will include it.
I had a similar problem and was able to fix it with the following change:
# celery==5.1.1
APP.start()
# celery==5.2.6
import sys
APP.start(argv=sys.argv[1:])
For you, that may mean removing __init__.py from your list of args:
APP.start(["worker", "-c", str(CELERY_CONCURRENCY)])

python 3.5.4 compiled by Pyinstaller, the builtin print() statement throws OSError: [Errno 22] Invalid argument

I have a program.exe file that is compiled with pyinstaller version 3.5 by running the command pyinstaller --log-level DEBUG program.py. program.py includes several print(string_variables) statements.
# The code below runs 3 times per second, as it keeps receiving image data
# over a TCP socket.
log_text(time_now() + "Sent " + connCmd, log_to_disk=False)
# ...
log_text(time_now() + "Written to cur_image.jpg", log_to_disk=False)
# ...

# project01\logger.py
def log_text(self, text, log_to_disk=True):
    log_text = time_now() + str(text)
    try:
        print(log_text)
    except:
        self.log_to_txt(format_exc())
        self.log_to_txt("Print statement raised error while printing '" + log_text + "'")
    if log_to_disk:
        self.log_to_txt(log_text)

def log_to_txt(self, content):
    try:
        with open(self.log_path, "a") as f:
            f.write(content + "\n")
    except Exception as e:
        pass  # ... not shown, as no exception has ever been caught here.
The errors look like this:
Traceback (most recent call last):
File "project01\logger.py", line 36, in log_text
OSError: [Errno 22] Invalid argument
Print statement raised error while printing '09/03/2020 09:22:28: Took 2 loops to receive the current image.'
09/03/2020 09:22:37: Received a TCP command signal.
09/03/2020 09:22:43: Received a TCP command signal
Traceback (most recent call last):
File "project01\logger.py", line 36, in log_text
OSError: [Errno 22] Invalid argument
Print statement raised error while printing '09/03/2020 09:22:45: Written to cur_image.jpg'
Traceback (most recent call last):
File "project01\logger.py", line 36, in log_text
OSError: [Errno 22] Invalid argument
Print statement raised error while printing '09/03/2020 09:22:59: Sent BBBB'
The same OSError: [Errno 22] Invalid argument occurs about once every 20 seconds, and this continues for several hours; then it doesn't happen for another couple of hours, then it starts again for several hours, and so on.
I could not find a way to deliberately make a print() statement throw OSError: [Errno 22] Invalid argument. It couldn't be because of any of the string variables passed into print(), right? After all, every one of them could be logged to disk. Any idea what could have caused the error?
Try using an absolute path instead of a relative path:
/home/user name/folder/project01/logger.py
On Ubuntu, using absolute paths works; on Windows, I have the same problem.
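For what it's worth, one reproducible way print() can fail in a frozen (PyInstaller) Windows build is sys.stdout being absent or backed by an invalid console handle. If that is what is happening here, a defensive wrapper that falls back to a file (safe_print and fallback.log are my own, hypothetical names) would at least keep the logger from depending on the console:

import sys

def safe_print(text, log_file="fallback.log"):
    # sys.stdout can be None, or its handle can be invalid, in a frozen
    # windowed app; both are plausible sources of OSError from print().
    try:
        if sys.stdout is None:
            raise OSError("stdout is not available")
        print(text)
        sys.stdout.flush()
    except OSError:
        with open(log_file, "a") as f:
            f.write(text + "\n")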

'NoneType' object has no attribute 'open_session'

I wrote a script to connect to an SFTP server in Python, but it shows the error below and I do not understand it. Please help me fix the bug.
import pysftp

cnopts = pysftp.CnOpts()
cnopts.hostkeys = None

with pysftp.Connection(host="127.0.0.1", username="new34", password="password", cnopts=cnopts) as srv:
    print("connection successful")

# Get the directory and file listing
data = srv.listdir()
srv.put("testfile.txt")

# Closes the connection
srv.close()

# Prints out the directories and files, line by line
for i in data:
    print(i)
It shows the following error:
C:\Users\Rohan\PycharmProjects\untitled1\venv\Scripts\python.exe C:/Users/Rohan/PycharmProjects/untitled1/yu.py
C:\Users\Rohan\PycharmProjects\untitled1\venv\lib\site-packages\pysftp\__init__.py:61: UserWarning: Failed to load HostKeys from C:\Users\Rohan\.ssh\known_hosts. You will need to explicitly load HostKeys (cnopts.hostkeys.load(filename)) or disableHostKey checking (cnopts.hostkeys = None).
warnings.warn(wmsg, UserWarning)
Traceback (most recent call last):
connection successful
File "C:/Users/Rohan/PycharmProjects/untitled1/yu.py", line 10, in <module>
data = srv.listdir()
File "C:\Users\Rohan\PycharmProjects\untitled1\venv\lib\site-packages\pysftp\__init__.py", line 591, in listdir
self._sftp_connect()
File "C:\Users\Rohan\PycharmProjects\untitled1\venv\lib\site-packages\pysftp\__init__.py", line 205, in _sftp_connect
self._sftp = paramiko.SFTPClient.from_transport(self._transport)
File "C:\Users\Rohan\PycharmProjects\untitled1\venv\lib\site-packages\paramiko\sftp_client.py", line 164, in from_transport
chan = t.open_session(
AttributeError: 'NoneType' object has no attribute 'open_session'
Process finished with exit code 1
Your code has an indentation issue. Try this:
with pysftp.Connection(host="127.0.0.1", username="new34", password="password", cnopts=cnopts) as srv:
    print("connection successful")
    # Get the directory and file listing
    data = srv.listdir()
    srv.put("testfile.txt")
The with statement automatically closes the connection; there is no need to close it explicitly.
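As an aside, the UserWarning earlier in the output points to a safer alternative to cnopts.hostkeys = None: load a known_hosts file explicitly so the server's key is actually verified. A sketch, assuming such a file exists at the path mentioned in the warning and contains an entry for 127.0.0.1:

import pysftp

cnopts = pysftp.CnOpts()
# Verify the server against an explicit known_hosts file instead of
# disabling host key checking entirely (assumes the file exists).
cnopts.hostkeys.load(r"C:\Users\Rohan\.ssh\known_hosts")

with pysftp.Connection(host="127.0.0.1", username="new34",
                       password="password", cnopts=cnopts) as srv:
    data = srv.listdir()
    srv.put("testfile.txt")

for i in data:
    print(i)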

Error while initializing Ray on an EC2 master node

I am using Ray to run a parallel loop on an Ubuntu 14.04 cluster on AWS EC2. The following Python 3 script works well on my local machine with just 4 workers (imports and local initializations left out):
ray.init()  # initialize Ray

@ray.remote
def test_loop(n):
    c = tests[n, 0]
    tout = 100
    rc = -1
    with tmp.TemporaryDirectory() as path:   # Create a temporary directory,
        for files in filelist:               # then copy in all of the
            sh.copy(files, path)             # files.
        txtfile = path + '/inputf.txt'       # Create the external
        fileId = open(txtfile, 'w')          # data input text file,
        s = 'Number = ' + str(c) + "\n"      # write the test number,
        fileId.write(s)
        fileId.close()                       # close the external parameter file,
        os.chdir(path)                       # and change working directory.
        try:                                 # Try running the simulation:
            rc = sp.call('./simulation.run', timeout=tout, stdout=sp.DEVNULL,
                         stderr=sp.DEVNULL, shell=True)  # (must use .call for timeout)
            outdat = sio.loadmat('outputf.dat')  # get the output data struct
            rt_Data = outdat.get('rt_Data')      # extract simulation output
            err = float(rt_Data[-1])             # use final value of error
        except:                              # If the system fails to execute,
            err = deferr                     # use the failure default.
        # end try
        if (err <= 0) or (err > deferr) or (rc != 0):
            err = deferr                     # Catch other types of failure
    return err

if __name__ == '__main__':
    result = ray.get([test_loop.remote(n) for n in range(0, ntest)])
    print(result)
The unusual bit here is that the simulation.run has to read in a different test number from an external text file when it runs. The file name is the same for all iterations of the loop, but the test number is different.
I launched an EC2 cluster using Ray, with the number of CPUs available equal to n (I am trusting that Ray will not default to multi-threading). Then I had to copy the filelist (which includes the Python script) from my local machine to the master node using rsync, because I couldn't do this from the config (see recent question: "Workers not being launched on EC2 by Ray"). Then ssh into that node, and run the script. The result is a file-finding error:-
~$ python3 test_small.py
2019-04-29 23:39:27,065 WARNING worker.py:1337 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes.
2019-04-29 23:39:27,065 INFO node.py:469 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-04-29_23-39-27_3897/logs.
2019-04-29 23:39:27,172 INFO services.py:407 -- Waiting for redis server at 127.0.0.1:42930 to respond...
2019-04-29 23:39:27,281 INFO services.py:407 -- Waiting for redis server at 127.0.0.1:47779 to respond...
2019-04-29 23:39:27,282 INFO services.py:804 -- Starting Redis shard with 0.21 GB max memory.
2019-04-29 23:39:27,296 INFO node.py:483 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-04-29_23-39-27_3897/logs.
2019-04-29 23:39:27,296 INFO services.py:1427 -- Starting the Plasma object store with 0.31 GB memory using /dev/shm.
(pid=3917) sh: 0: getcwd() failed: No such file or directory
2019-04-29 23:39:44,960 ERROR worker.py:1672 -- Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 909, in _process_task
self._store_outputs_in_object_store(return_object_ids, outputs)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 820, in _store_outputs_in_object_store
self.put_object(object_ids[i], outputs[i])
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 375, in put_object
self.store_and_register(object_id, value)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 309, in store_and_register
self.task_driver_id))
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 238, in get_serialization_context
_initialize_serialization(driver_id)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 1148, in _initialize_serialization
serialization_context = pyarrow.default_serialization_context()
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/pyarrow_files/pyarrow/serialization.py", line 326, in default_serialization_context
register_default_serialization_handlers(context)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/pyarrow_files/pyarrow/serialization.py", line 321, in register_default_serialization_handlers
_register_custom_pandas_handlers(serialization_context)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/pyarrow_files/pyarrow/serialization.py", line 129, in _register_custom_pandas_handlers
import pandas as pd
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/__init__.py", line 42, in <module>
from pandas.core.api import *
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/core/api.py", line 10, in <module>
from pandas.core.groupby import Grouper
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/core/groupby.py", line 49, in <module>
from pandas.core.frame import DataFrame
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 74, in <module>
from pandas.core.series import Series
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/core/series.py", line 3042, in <module>
import pandas.plotting._core as _gfx # noqa
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/plotting/__init__.py", line 8, in <module>
from pandas.plotting import _converter
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/pandas/plotting/_converter.py", line 7, in <module>
import matplotlib.units as units
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 1060, in <module>
rcParams = rc_params()
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 892, in rc_params
fname = matplotlib_fname()
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 736, in matplotlib_fname
for fname in gen_candidates():
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 725, in gen_candidates
yield os.path.join(six.moves.getcwd(), 'matplotlibrc')
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
The problem then seems to repeat for all the other workers, and it finally gives up:
AttributeError: module 'pandas' has no attribute 'core'
This error is unexpected and should not have happened. Somehow a worker
crashed in an unanticipated way causing the main_loop to throw an exception,
which is being caught in "python/ray/workers/default_worker.py".
2019-04-29 23:44:08,489 ERROR worker.py:1672 -- A worker died or was killed while executing task 000000002d95245f833cdbf259672412d8455d89.
Traceback (most recent call last):
File "test_small.py", line 82, in <module>
result=ray.get([test_loop.remote(n) for n in range(0,ntest)])
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/worker.py", line 2184, in get
raise value
ray.exceptions.RayWorkerError: The worker died unexpectedly while executing this task.
I suspect that I am not initializing Ray correctly. I tried with ray.init(redis_address="172.31.50.149:6379") - which was the redis address given when the cluster was formed, but the error was more or less the same. I also tried starting Ray on the master (in case it needed starting):-
~$ ray start --redis-address 172.31.50.149:6379 #Start Ray
2019-04-29 23:46:20,774 INFO services.py:407 -- Waiting for redis server at 172.31.50.149:6379 to respond...
2019-04-29 23:48:29,076 INFO services.py:412 -- Failed to connect to the redis server, retrying.
....etc.
Installing pandas and matplotlib on the master node seems to have solved the problem. Ray now initializes successfully.
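One further observation from the log: the (pid=3917) sh: 0: getcwd() failed line suggests a worker process was left in a working directory that no longer exists, which can happen because test_loop calls os.chdir() into a TemporaryDirectory that is deleted when the with block exits. A sketch of the remote function that keeps the worker's own cwd intact by passing cwd= to subprocess instead (names follow the original script):

@ray.remote
def test_loop(n):
    c = tests[n, 0]
    tout = 100
    rc = -1
    err = deferr
    with tmp.TemporaryDirectory() as path:
        for files in filelist:
            sh.copy(files, path)
        with open(path + '/inputf.txt', 'w') as fileId:
            fileId.write('Number = ' + str(c) + '\n')
        try:
            # Run the simulation *inside* the temp dir without os.chdir(),
            # so the worker's global working directory is never invalidated.
            rc = sp.call('./simulation.run', timeout=tout, cwd=path,
                         stdout=sp.DEVNULL, stderr=sp.DEVNULL, shell=True)
            outdat = sio.loadmat(path + '/outputf.dat')
            err = float(outdat.get('rt_Data')[-1])
        except Exception:
            err = deferr
    if (err <= 0) or (err > deferr) or (rc != 0):
        err = deferr
    return err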
