Python logging call crashes at usesTime - python-3.x

Once every 50-100 calls to the logger, the program crashes with the following trace messages:
Mar 20 07:10:14 service.bash[7693]: Fatal Python error: Cannot recover from stack overflow.
Mar 20 07:10:14 service.bash[7693]: Current thread 0x76fa3010 (most recent call first):
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 381 in usesTime
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 537 in usesTime
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 569 in format
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 831 in format
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 981 in emit
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 856 in handle
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 1488 in callHandlers
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 1426 in handle
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 1416 in _log
Mar 20 07:10:14 service.bash[7693]: File "/usr/lib/python3.5/logging/__init__.py", line 1280 in info
Mar 20 07:10:14 service.bash[7693]: File "*****************_app/base.py", line 63 in log_this
Any idea what could be causing this crash?
Similar logging calls elsewhere in the program don't crash it.
Here is the stack of the calls made to the logger:
self.info("cs={} miso={} mosi{} clk{}".format( self.csPin, self.misoPin, self.mosiPin, self.clkPin))
|
self.log_this("info", msg)
|
self.log.info(msg)
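The body of log_this isn't shown here; from the call chain it presumably amounts to something like this (a hypothetical sketch, not the actual base.py code):
def log_this(self, level, msg):
    # Hypothetical wrapper: dispatch to the class-level logger by
    # level name ("info", "debug", ...), matching the chain above.
    getattr(self.log, level)(msg)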
The logger is set up in the base class initialization routine in the following way:
import logging
import logging.handlers

# Global logger is declared as a class attribute
cls.log = logging.getLogger(cls.args["app"])

# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.handlers.RotatingFileHandler(
    cls.args["--log"],
    maxBytes=(10**6) * int(cls.args["--logsize"]),  # CLI is in MB
    backupCount=1)

if cls.args["--debug"]:
    cls.log.setLevel(logging.DEBUG)
    c_handler.setLevel(logging.DEBUG)
    f_handler.setLevel(logging.DEBUG)
else:
    cls.log.setLevel(logging.INFO)
    c_handler.setLevel(logging.INFO)
    f_handler.setLevel(logging.INFO)

# Create formatters and add them to the handlers
c_format = logging.Formatter('%(name)s: %(levelname)s: %(message)s')
f_format = logging.Formatter('%(asctime)s: %(name)s: %(levelname)s: %(message)s')
c_handler.setFormatter(c_format)
f_handler.setFormatter(f_format)

# Add handlers to the logger
cls.log.addHandler(c_handler)
cls.log.addHandler(f_handler)
cls.log.info("Logger initialized")
Thank you.

Are you using Windows and switching to a UTF-8 locale within Python? There is a Python bug in this specific scenario (similar issue: https://bugs.python.org/issue36792). Here's a minimal code sample that reproduces the bug on my machine:
import locale
import logging
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s'))
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.info(1)
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
logging.info(2)
Output:
2019-10-25 21:20:15,657
Process finished with exit code -1073740940 (0xC0000374)
If you are unsure whether this is related to your problem, try adding the following line before your first call to the logging module and see if it solves the problem:
locale.setlocale(locale.LC_ALL, 'en_US')
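Put together, a minimal sketch of the workaround might look like this (assuming the bug above is in play; 'en_US' stands in for whatever non-UTF-8 locale your system provides):
import locale
import logging

# Set a non-UTF-8 locale before any handler formats %(asctime)s.
locale.setlocale(locale.LC_ALL, 'en_US')

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
logging.getLogger().setLevel(logging.INFO)
logging.getLogger().addHandler(handler)
logging.info("logging works after setlocale")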

Related

AttributeError: 'super' object has no attribute 'S'

Still learning and new to code. I am working on (really playing around with and learning from) a self-motivated project to try to log stock HALTS cleanly. I am able to get the symbols I want, and I can even trigger the status message when a HALT occurs by streaming.
However, when a status message occurs, I cannot parse the object correctly and cannot figure out why.
To start, I scan for symbols and make a list. I then create a for loop to subscribe to status messages for those symbols:
for symbol in symbols:
    stream.subscribe_statuses(on_status, symbol)
My handler, on_status, is where I think I am having the problem.
async def on_status(status):
    symbol = status.symbol
    status_message = status.status_message
    logging.info("==============")
    logging.info(f"{symbol} | {status_message}")
    logging.info("==============")
When a status message is streamed I receive an error:
Sep 27 10:27:12 error during websocket communication: 'super' object has no attribute 'S'
Sep 27 10:27:12 Traceback (most recent call last):
Sep 27 10:27:12 File "/app/.heroku/python/lib/python3.10/site-packages/alpaca_trade_api/stream.py", line 254, in _run_forever
Sep 27 10:27:12 await self._consume()
Sep 27 10:27:12 File "/app/.heroku/python/lib/python3.10/site-packages/alpaca_trade_api/stream.py", line 130, in _consume
Sep 27 10:27:12 await self._dispatch(msg)
Sep 27 10:27:12 File "/app/.heroku/python/lib/python3.10/site-packages/alpaca_trade_api/stream.py", line 381, in _dispatch
Sep 27 10:27:12 await handler(self._cast(msg_type, msg))
Sep 27 10:27:12 File "/app/tading_halts.py", line 1184, in on_status
Sep 27 10:27:12 logging.info(f"{symbol} | {status_message}")
Sep 27 10:27:12 File "/app/.heroku/python/lib/python3.10/site-packages/alpaca_trade_api/entity_v2.py", line 135, in __getattr__
Sep 27 10:27:12 return super().__getattr__(self._reversed_mapping[key])
Sep 27 10:27:12 File "/app/.heroku/python/lib/python3.10/site-packages/alpaca_trade_api/entity.py", line 149, in __getattr__
Sep 27 10:27:12 return getattr(super(), key)
Sep 27 10:27:12 AttributeError: 'super' object has no attribute 'S'
Now, if I just use {status}, I do get the StatusV2 object, but I cannot figure out how to parse the symbol and the message out of it. The object comes back as this:
Sep 27 10:00:09 | StatusV2({ 'reason_code': '',
Sep 27 10:00:09 'reason_message': '',
Sep 27 10:00:09 'status_code': 'T',
Sep 27 10:00:09 'status_message': 'Trading Resumption',
Sep 27 10:00:09 'symbol': 'ATXI',
Sep 27 10:00:09 'tape': 'C',
Sep 27 10:00:09 'timestamp': 1664287209393738868})
Any help would greatly be appreciated as I continue to learn.
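For what it's worth, alpaca_trade_api entities usually wrap a plain dict; if this one exposes it as _raw (an assumption worth verifying against the installed version), the fields could be read without going through the failing __getattr__ mapping:
import logging

async def on_status(status):
    # Sketch: read from the underlying dict instead of attribute access,
    # since the dump above shows long keys like 'symbol' are present.
    raw = getattr(status, '_raw', {})
    symbol = raw.get('symbol')
    status_message = raw.get('status_message')
    logging.info(f"{symbol} | {status_message}")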

Spark/Kafka error after updating Spark 2.4 to Spark 3.1

I tried to upgrade from Spark 2.4.1 to Spark 3.1.2 on JupyterHub (PySpark, Python 3.7, Debian Linux) with Kafka. I also updated Kafka from 2.4.1 to 2.8, but that does not seem to be the problem. I checked the dependencies at https://spark.apache.org/docs/latest/ and they seem fine so far.
For Spark 2.4.1 I used these additional jars in the Spark jars directory:
slf4j-api-1.7.26.jar
unused-1.0.0.jar
lz4-java-1.6.0.jar
kafka-clients-2.3.0.jar
spark-streaming-kafka-0-10_2.11-2.4.3.jar
spark-sql-kafka-0-10_2.11-2.4.3.jar
For Spark 3.1.2 I updated these jars and added some new ones; the other files, like unused, already existed:
spark-sql-kafka-0-10_2.12-3.1.2.jar
spark-streaming-kafka-0-10_2.12-3.1.2.jar
spark-streaming-kafka-0-10-assembly_2.12-3.1.2.jar
spark-token-provider-kafka-0-10_2.12-3.1.2.jar
kafka-clients-2.8.0.jar
I stripped my PySpark code down to the following, which works with Spark 2.4.1 but not with Spark 3.1.2:
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.sql.utils import AnalysisException
import datetime

# configuration of target db
db_target_url = "jdbc:mysql://localhost/test"
db_target_properties = {"user": "john", "password": "doe"}

# create spark session
spark = SparkSession.builder.appName("live1").getOrCreate()
spark.conf.set('spark.sql.caseSensitive', True)

# create schema for the json iba data
schema_tww_vs = T.StructType([
    T.StructField("[22:8]", T.DoubleType()),
    T.StructField("[1:3]", T.DoubleType()),
    T.StructField("Timestamp", T.StringType())])

# create dataframe representing the stream and take the json data into a usable df structure
d = spark.readStream \
    .format("kafka").option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "test_so") \
    .load() \
    .selectExpr("timestamp", "cast (value as string) as json") \
    .select("timestamp", F.from_json("json", schema_tww_vs).alias("struct")) \
    .selectExpr("timestamp", "struct.*")

# add timestamp of this spark processing
d = d.withColumn("time_spark", F.current_timestamp())
d1 = d.withColumnRenamed('[1:3]', 'signal1') \
    .withColumnRenamed('[22:8]', 'ident_orig') \
    .withColumnRenamed('timestamp', 'time_kafka') \
    .withColumnRenamed('Timestamp', 'time_source')
d1 = d1.withColumn("ident", F.round(d1["ident_orig"]).cast('integer'))
d4 = d1.where("signal1 > 3000")
d4a = d4.withWatermark("time_kafka", "1 second") \
    .groupby('ident', F.window('time_kafka', "5 second")) \
    .agg(
        F.count("*").alias("count"),
        F.min("time_kafka").alias("time_start"),
        F.round(F.avg("signal1"), 1).alias('signal1_avg'))

# remove the column "window" since this struct (with start and stop time) cannot be written to the db
d4a = d4a.drop('window')
d8a = d4a.select('time_start', 'ident', 'count', 'signal1_avg')

# write the dataframe into the database using the streaming mode
def write_into_sink(df, epoch_id):
    df.write.jdbc(table="test_so", mode="append", url=db_target_url, properties=db_target_properties)

query_write_sink = d8a.writeStream \
    .foreachBatch(write_into_sink) \
    .trigger(processingTime="1 seconds") \
    .start()
Some of the errors are:
java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericKeyedObjectPoolConfig
Jul 22 15:41:22 run [847]: #011at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$.<init>(KafkaDataConsumer.scala:623)
Jul 22 15:41:22 run [847]: #011at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$.<clinit>(KafkaDataConsumer.scala)
…
jupyterhub-start.sh[847]: 21/07/22 15:41:22 ERROR TaskSetManager: Task 0 in stage 2.0 failed 1 times; aborting job
Jul 22 15:41:22 run [847]: 21/07/22 15:41:22 ERROR MicroBatchExecution: Query [id = 5d2a70aa-1463-48f3-a4a6-995ceef22891, runId = d1f856b5-eb0c-4635-b78a-d55e7ce81f2b] terminated with error
Jul 22 15:41:22 run [847]: py4j.Py4JException: An exception was raised by the Python Proxy. Return Message: Traceback (most recent call last):
Jul 22 15:41:22 run [847]: File "/opt/anaconda/envs/env1/lib/python3.7/site-packages/py4j/java_gateway.py", line 2451, in _call_proxy
Jul 22 15:41:22 run [847]: return_value = getattr(self.pool[obj_id], method)(*params)
Jul 22 15:41:22 run [847]: File "/opt/spark/python/pyspark/sql/utils.py", line 196, in call
Jul 22 15:41:22 run [847]: raise e
Jul 22 15:41:22 run [847]: File "/opt/spark/python/pyspark/sql/utils.py", line 193, in call
Jul 22 15:41:22 run [847]: self.func(DataFrame(jdf, self.sql_ctx), batch_id)
Jul 22 15:41:22 run [847]: File "<ipython-input-10-d40564c31f71>", line 3, in write_into_sink
Jul 22 15:41:22 run [847]: df.write.jdbc(table="test_so", mode="append", url=db_target_url, properties=db_target_properties)
Jul 22 15:41:22 run [847]: File "/opt/spark/python/pyspark/sql/readwriter.py", line 1445, in jdbc
Jul 22 15:41:22 run [847]: self.mode(mode)._jwrite.jdbc(url, table, jprop)
Jul 22 15:41:22 run [847]: File "/opt/anaconda/envs/env1/lib/python3.7/site-packages/py4j/java_gateway.py", line 1310, in __call__
Jul 22 15:41:22 run [847]: answer, self.gateway_client, self.target_id, self.name)
Jul 22 15:41:22 run [847]: File "/opt/spark/python/pyspark/sql/utils.py", line 111, in deco
Jul 22 15:41:22 run [847]: return f(*a, **kw)
Jul 22 15:41:22 run [847]: File "/opt/anaconda/envs/env1/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
Jul 22 15:41:22 run [847]: format(target_id, ".", name), value)
Jul 22 15:41:22 run [847]: py4j.protocol.Py4JJavaError: An error occurred while calling o101.jdbc.
Jul 22 15:41:22 run [847]: : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 200) (master executor driver): java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericKeyedObjectPoolConfig
Do you have any idea what causes this error?
As devesh said, there was one jar file missing:
commons-pool2-2.8.0.jar, which can be downloaded from https://mvnrepository.com/artifact/org.apache.commons/commons-pool2/2.8.0
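As an alternative to copying jars by hand, the same dependencies can usually be pulled in with spark.jars.packages when the session is built (a sketch; the coordinates follow the Maven entries above, and spark-sql-kafka normally drags commons-pool2 in transitively):
from pyspark.sql import SparkSession

# Sketch: let Spark resolve the Kafka connector and commons-pool2 from
# Maven instead of placing jars in the Spark directory manually.
spark = (SparkSession.builder.appName("live1")
         .config("spark.jars.packages",
                 "org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,"
                 "org.apache.commons:commons-pool2:2.8.0")
         .getOrCreate())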

Installed Anaconda and when I run pandas it won't work. Is there a fix for this? Do I need to uninstall and reinstall for this to work?

OSError Traceback (most recent call last)
<ipython-input-2-7dd3504c366f> in <module>
----> 1 import pandas as pd
~\AppData\Roaming\Python\Python38\site-packages\pandas\__init__.py in <module>
9 for dependency in hard_dependencies:
10 try:
---> 11 __import__(dependency)
12 except ImportError as e:
13 missing_dependencies.append(f"{dependency}: {e}")
~\AppData\Roaming\Python\Python38\site-packages\numpy\__init__.py in <module>
136
137 # Allow distributors to run custom init code
--> 138 from . import _distributor_init
139
140 from . import core
~\AppData\Roaming\Python\Python38\site-packages\numpy\_distributor_init.py in <module>
24 # NOTE: would it change behavior to load ALL
25 # DLLs at this path vs. the name restriction?
---> 26 WinDLL(os.path.abspath(filename))
27 DLL_filenames.append(filename)
28 if len(DLL_filenames) > 1:
~\Anaconda3\lib\ctypes\__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error,
winmode)
379
380 if handle is None:
--> 381 self._handle = _dlopen(self._name, mode)
382 else:
383 self._handle = handle
OSError: [WinError 193] %1 is not a valid Win32 application
I tried to "import pandas as pd" and this is the output in the jupyter notebook. what does the traceback(most recent call last) mean? Also why are there arrows pointing at specific lines?

shutil.copystat() fails inside Docker on Azure

The failing code runs inside a Docker container based on the python:3.6-stretch Debian image.
It happens while Django moves a file from one Docker volume to another.
When I test on macOS 10, it works without error. Here, the Docker containers are started with docker-compose and use regular Docker volumes on the local machine.
Deployed into Azure (AKS - Kubernetes on Azure), moving the file succeeds but copying the stats fails with the following error:
File "/usr/local/lib/python3.6/site-packages/django/core/files/move.py", line 70, in file_move_safe
copystat(old_file_name, new_file_name)
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: '/some/path/file.pdf'
The volumes on Azure are persistent volume claims with ReadWriteMany access mode.
Now, copystat is documented as:
copystat() never returns failure.
https://docs.python.org/3/library/shutil.html
My questions are:
Is this a "bug" because the documentation says that it should "never return failure"?
Can I savely try/except this error because the file in question is moved (it only fails later on, while trying to copy the stats)
Can I change something about the Azure settings that fix this? (probably not)
Here some small test on the machine in Azure itself:
root:/media/documents# ls -al
total 267
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 .
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 ..
-rwxrwxrwx 1 1000 1000 136479 Jul 31 16:48 orig.pdf
-rwxrwxrwx 1 1000 1000 136479 Jul 31 15:29 testfile
root:/media/documents# lsattr
--S-----c-jI------- ./orig.pdf
--S-----c-jI------- ./testfile
root:/media/documents# python
Python 3.6.6 (default, Jul 17 2018, 11:12:33)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shutil
>>> shutil.copystat('orig.pdf', 'testfile')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>> shutil.copystat('orig.pdf', 'testfile', follow_symlinks=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>>
The following solution is a hotfix. It would have to be applied to any method that calls copystat directly or indirectly (or any shutil method that produces an ignorable errno.ENOSYS).
from unittest import mock

if hasattr(os, 'listxattr'):
    LOGGER.warning('patching listxattr to avoid ERROR 38 (errno.ENOSYS)')
    # avoid "ERROR 38 function not implemented on Azure"
    with mock.patch('os.listxattr', return_value=[]):
        file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
else:
    file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
file_field.save is the Django method that calls the shutil code in question. It's the last location in my code before the error.
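As a narrower alternative for the try/except question above, a wrapper along these lines could tolerate only this specific failure (a sketch; copystat_tolerant is a name introduced here, not Django or stdlib API):
import errno
import shutil

def copystat_tolerant(src, dst):
    # Copy permission bits and times, but swallow only the
    # "function not implemented" error raised by the xattr calls.
    try:
        shutil.copystat(src, dst)
    except OSError as exc:
        if exc.errno != errno.ENOSYS:
            raise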

crontab Failed to import the site module

I want crontab to run a Python script for a public welfare project.
I can successfully run the script in PyCharm.
When I run it with crontab, there is an error.
Environment: macOS, Python 3.5
After I type 'crontab -e', it shows this:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/Users/yy/anaconda/bin/python3:/Users/yy/anaconda/bin/
32 14 * * * PATH=$PATH:/Users/yy/anaconda/bin/ cd /Users/yy/PycharmProjects/selenium_test/ && /Users/yy/anaconda/bin/python3 /Users/yy/PycharmProjects/selenium_test/selenium_test.py >> /Users/yy/PycharmProjects/selenium_test/log.txt
I got the following error in /var/mail/username:
From yy@YY.local Thu Jun 8 14:32:00 2017
Return-Path: <yy@YY.local>
X-Original-To: yy
Delivered-To: yy@YY.local
Received: by YY.local (Postfix, from userid 501)
id A7F1F38FFFCC; Thu, 8 Jun 2017 14:32:00 -0500 (CDT)
From: yy@YY.local (Cron Daemon)
To: yy@YY.local
Subject: Cron <yy@YY> PATH=$PATH:/Users/yy/anaconda/bin/ cd /Users/yy/PycharmProjects/selenium_test/ && /Users/yy/anaconda/bin/python3 /Users/yy/PycharmProjects/selenium_test/selenium_test.py >> /Users/yy/PycharmProjects/selenium_test/log.txt
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/Users/yy/anaconda/bin/python3:/Users/yy/anaconda/bin/>
X-Cron-Env: <LOGNAME=yy>
X-Cron-Env: <USER=yy>
X-Cron-Env: <HOME=/Users/yy>
Message-Id: <20170608193200.A7F1F38FFFCC@YY.local>
Date: Thu, 8 Jun 2017 14:32:00 -0500 (CDT)
Failed to import the site module
Traceback (most recent call last):
File "/Users/yy/anaconda/lib/python3.5/site.py", line 567, in <module>
main()
File "/Users/yy/anaconda/lib/python3.5/site.py", line 550, in main
known_paths = addsitepackages(known_paths)
File "/Users/yy/anaconda/lib/python3.5/site.py", line 327, in addsitepackages
addsitedir(sitedir, known_paths)
File "/Users/yy/anaconda/lib/python3.5/site.py", line 206, in addsitedir
addpackage(sitedir, name, known_paths)
File "/Users/yy/anaconda/lib/python3.5/site.py", line 162, in addpackage
for n, line in enumerate(f):
File "/Users/yy/anaconda/lib/python3.5/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 127: ordinal not in range(128)
I spent two hours on this error, but no solutions work.
Please help.
Thanks!
I use Python 3.5, so the default encoding is UTF-8. The UnicodeDecodeError is strange...
The problem is about encoding.
The default for Python 3.5 is UTF-8.
However, some of the packages I installed ship files that contain non-ASCII bytes, which site.py reads with the ASCII codec under cron.
I solved the problem by modifying the site.py file at /Users/user_name/anaconda/lib/python3.5/site.py.
Line 158:
f = open(fullname, "r") -> f = open(fullname, "rb")
Line 163:
if line.startswith("#"): -> if line.startswith(b"#"):
Line 166:
if line.startswith(("import ", "import\t")): -> if line.startswith((b"import ", b"import\t")):
Line 170:
dir, dircase = makepath(sitedir, line) -> dir, dircase = makepath(sitedir, str(line))
I don't think it's a good idea to modify site.py in Anaconda...
But this fixes the problem.
Hope it will be helpful.
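A less invasive alternative (an assumption based on the traceback: cron runs with a plain C/ASCII locale, so Python decodes those files as ASCII) would be to export a UTF-8 locale in the crontab itself instead of patching site.py, e.g.:
SHELL=/bin/sh
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/Users/yy/anaconda/bin
32 14 * * * cd /Users/yy/PycharmProjects/selenium_test/ && /Users/yy/anaconda/bin/python3 selenium_test.py >> log.txt 2>&1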
