I have a model whose records belong to a bunch. Each record within one bunch should have its own unique version, assigned in the order the records were created in the DB.
To me, the best way to do this is with transactions, but I ran into a problem with parallel execution of the transaction blocks. When I remove the transaction.atomic() block, everything runs, but the versions are not updated correctly after execution.
I wrote some test code to exercise concurrent incrementing of a record's version in the database:
import random
from time import sleep
from multiprocessing import Process

def _save_instance(instance):
    time = random.randint(1, 50)
    sleep(time / 1000)
    instance.text = str(time)
    instance.save()

def _parallel():
    instances = MyModel.objects.all()
    # clear versions
    print('-- clear old numbers -- ')
    instances.update(version=None)
    processes = []
    for instance in instances:
        p = Process(target=_save_instance, args=(instance,))
        processes.append(p)
    print('-- launching -- ')
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    sleep(1)
    ...
    # assertions to check that versions are correct within one bunch
    print('parallel Ok!')
The save() method on MyModel is defined like this:
...
def save(self, *args, **kwargs) -> None:
    with transaction.atomic():
        if not self.version and self.banch_id:
            max_number = MyModel.objects.filter(
                banch_id=self.banch_id
            ).aggregate(max_number=models.Max('version'))['max_number']
            self.version = max_number + 1 if max_number else 1
        super().save(*args, **kwargs)
When I run my test code on a random number of records (30-300), I get an error:
django.db.utils.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
After that, every process hangs, and I can only stop the script with a KeyboardInterrupt.
Full process stack trace:
Process Process-14:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/local/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/app/scripts/test_concurrent_saving.py", line 17, in _save_instance
instance.save()
File "/app/apps/incident/models.py", line 385, in save
).aggregate(max_number=models.Max('version'))['max_number']
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 384, in aggregate
return query.get_aggregation(self.db, kwargs)
File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/query.py", line 503, in get_aggregation
result = compiler.execute_sql(SINGLE)
File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1152, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/raven/contrib/django/client.py", line 123, in execute
return real_execute(self, sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
What is the reason for this behavior?
I will be grateful for any help or advice!
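For anyone hitting the same thing, here is a sketch of the two fixes I would try (both are my assumptions, not confirmed in the post). First, each forked Process inherits the parent's already-open PostgreSQL socket, and several processes talking over one socket is exactly the kind of thing that produces "server closed the connection unexpectedly"; closing inherited connections at the start of each worker makes Django reconnect per process. Second, the max(version) + 1 read is racy even inside atomic(); row locks serialize the writers. Model and field names are taken from the question:

from django.db import connections, models, transaction

def _save_instance(instance):
    # Runs in a forked child: close connections inherited from the
    # parent so Django opens a dedicated connection for this process.
    connections.close_all()
    instance.save()

class MyModel(models.Model):
    # ... fields as in the question ...

    def save(self, *args, **kwargs):
        with transaction.atomic():
            if not self.version and self.banch_id:
                # select_for_update() locks the matched rows until the
                # transaction commits, so two writers cannot both read
                # the same maximum. (An empty bunch still races: there
                # is nothing to lock until the first row exists.)
                last = (
                    MyModel.objects
                    .select_for_update()
                    .filter(banch_id=self.banch_id)
                    .exclude(version__isnull=True)
                    .order_by('-version')
                    .first()
                )
                self.version = last.version + 1 if last else 1
            super().save(*args, **kwargs)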
I'm trying to get an instance of Netbox set up. I'm at the step where I need to create a superuser.
As per the documentation, I'm running source /opt/netbox/venv/bin/activate
and confirmed I'm in the venv,
followed by python3 manage.py createsuperuser
What I get in response is
`You have 167 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, sessions, social_django, taggit, tenancy, users, virtualization, wireless.
Run 'python manage.py migrate' to apply them.
Traceback (most recent call last):
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "auth_user" does not exist
LINE 1: ...user"."is_active", "auth_user"."date_joined" FROM "auth_user...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/netbox/netbox/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/core/management/init.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/netbox/venv/lib/python3.10/site-packages/django/core/management/init.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 88, in execute
return super().execute(*args, **options)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 109, in handle
default_username = get_default_username(database=database)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/contrib/auth/management/init.py", line 163, in get_default_username
auth_app.User._default_manager.db_manager(database).get(
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/query.py", line 646, in get
num = len(clone)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/query.py", line 376, in len
self._fetch_all()
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/query.py", line 1867, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/query.py", line 87, in iter
results = compiler.execute_sql(
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1398, in execute_sql
cursor.execute(sql, params)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/utils.py", line 91, in exit
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "auth_user" does not exist
LINE 1: ...user"."is_active", "auth_user"."date_joined" FROM "auth_user...`
Originally I was getting an error with my authorized users where I had forgotten to put the value in quotes. I fixed that, and this was the next error to come out.
I found the line in question, but I'm just not sure how I should change it to get this command to run successfully.
See this part of your output:
You have 167 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, sessions, social_django, taggit, tenancy, users, virtualization, wireless. Run 'python manage.py migrate' to apply them.
Try applying your Django migrations as prompted:
python manage.py migrate
This will install the necessary database tables where your new superuser will be stored.
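Concretely, the sequence would be (paths taken from the traceback above; adjust to your install):
source /opt/netbox/venv/bin/activate
cd /opt/netbox/netbox
python3 manage.py migrate
python3 manage.py createsuperuser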
We're trying to execute a Dask DAG as part of an Airflow scheduled job (Airflow really only being used here as the scheduler). I'm using the example from Dask's own docs, as it produces the same error.
Pickling has been disabled, since that seemed an obvious solution, but it has no effect.
We're also using a DaskExecutor with Airflow, which could be part of the issue here.
So, at this point we're at a bit of a loss on how to get this working, since docs and prior work are light on this.
from dask.distributed import Client, get_client, secede, rejoin

def fib(n):
    if n < 2:
        return n
    client = get_client()
    a_future = client.submit(fib, n - 1)
    b_future = client.submit(fib, n - 2)
    secede()
    a, b = client.gather([a_future, b_future])
    rejoin()
    return a + b
@dag(default_args=default_args,
     schedule_interval="0 21 * * *",
     tags=['test'])
def dask_future():
    @task
    def go_to_the_future():
        print("Doing fibonacci calculation")
        # these features require the dask.distributed scheduler
        client = Client(DASK_CLUSTER_IP)
        future = client.submit(fib, 10)
        result = future.result()
        print(result)  # prints "55"

    go_to_the_future()

dask_future = dask_future()
However, we get this error:
[2022-04-13, 15:26:00 CEST] {taskinstance.py:1270} INFO - Marking task as UP_FOR_RETRY. dag_id=dask_future, task_id=go_to_the_future, execution_date=20220413T132555, start_date=20220413T132559, end_date=20220413T132600
[2022-04-13, 15:26:00 CEST] {standard_task_runner.py:88} ERROR - Failed to execute job 368 for task go_to_the_future
Traceback (most recent call last):
File "/data/venv/lib64/python3.9/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/data/venv/lib64/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/data/venv/lib64/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/data/venv/lib64/python3.9/site-packages/airflow/cli/commands/task_command.py", line 292, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/data/venv/lib64/python3.9/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/data/venv/lib64/python3.9/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/data/venv/lib64/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/data/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 1332, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/data/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 1458, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/data/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 1514, in _execute_task
result = execute_callable(context=context)
File "/data/venv/lib64/python3.9/site-packages/airflow/decorators/base.py", line 134, in execute
return_value = super().execute(context)
File "/data/venv/lib64/python3.9/site-packages/airflow/operators/python.py", line 151, in execute
return_value = self.execute_callable()
File "/data/venv/lib64/python3.9/site-packages/airflow/operators/python.py", line 162, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/data/airflow/dags/dask_future.py", line 77, in go_to_the_future
result = future.result()
File "/data/venv/lib64/python3.9/site-packages/distributed/client.py", line 279, in result
raise exc.with_traceback(tb)
File "/data/venv/lib64/python3.9/site-packages/distributed/protocol/pickle.py", line 66, in loads
return pickle.loads(x)
ModuleNotFoundError: No module named 'unusual_prefix_876615abe2868d0d67afd03ccd8a249a86eb44dc_dask_future'
[2022-04-13, 15:26:00 CEST] {local_task_job.py:154} INFO - Task exited with return code 1
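In case it helps: one common explanation for this class of failure (an inference from the module name in the error, not something the post confirms) is that Airflow imports each DAG file under a mangled module name like unusual_prefix_..., so any function defined inside the DAG file pickles with that module path, and the Dask workers then cannot import it. Defining fib in a regular module that is installed on both the Airflow workers and every Dask worker avoids shipping that reference. A sketch, with fib_tasks as a hypothetical package name:

# fib_tasks.py -- a real importable module, installed (e.g. via pip) on
# the Airflow workers and on every Dask worker, so the pickled
# reference fib_tasks.fib can be resolved cluster-wide.
from dask.distributed import get_client, secede, rejoin

def fib(n):
    if n < 2:
        return n
    client = get_client()
    a_future = client.submit(fib, n - 1)
    b_future = client.submit(fib, n - 2)
    secede()
    a, b = client.gather([a_future, b_future])
    rejoin()
    return a + b

The DAG file would then import it instead of defining it inline: from fib_tasks import fib.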
I'm getting the error below when trying to get all the Urls for which the limited field is False. I've tried deleting the migrations and creating new ones, but I still get the same error.
These are the installed dependencies:
asgiref==3.4.1
backports.zoneinfo==0.2.1
Django==4.0
django-cors-headers==3.10.1
django-shell-plus==1.1.7
djangorestframework==3.13.1
djongo==1.3.6
dnspython==2.1.0
gunicorn==20.1.0
pymongo==3.12.1
python-dotenv==0.19.2
pytz==2021.3
sqlparse==0.2.4
tzdata==2021.5
models.py:
from datetime import timedelta
from djongo import models

class Urls(models.Model):
    _id = models.ObjectIdField()
    record_id = models.IntegerField(unique=True)
    hash = models.CharField(unique=True, max_length=1000)
    long_url = models.URLField()
    created_at = models.DateField(auto_now_add=True)
    expires_in = models.DurationField(default=timedelta(days=365 * 3))
    expires_on = models.DateField(null=True, blank=True)
    limited = models.BooleanField()
    exhausted = models.BooleanField()
query:
Urls.objects.filter(limited=False)
error:
Traceback (most recent call last):
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 857, in parse
return handler(self, statement)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 933, in _select
return SelectQuery(self.db, self.connection_properties, sm, self._params)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 116, in __init__
super().__init__(*args)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 62, in __init__
self.parse()
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 152, in parse
self.where = WhereConverter(self, statement)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/converters.py", line 27, in __init__
self.parse()
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/converters.py", line 119, in parse
self.op = WhereOp(
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/operators.py", line 476, in __init__
self.evaluate()
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/operators.py", line 465, in evaluate
op.evaluate()
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/operators.py", line 258, in evaluate
self.rhs.negate()
AttributeError: 'NoneType' object has no attribute 'negate'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/cursor.py", line 51, in execute
self.result = Query(
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 784, in __init__
self._query = self.parse()
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/sql2mongo/query.py", line 885, in parse
raise exe from e
djongo.exceptions.SQLDecodeError:
Keyword: None
Sub SQL: None
FAILED SQL: SELECT "short_urls"."_id", "short_urls"."record_id", "short_urls"."hash", "short_urls"."long_url", "short_urls"."created_at", "short_urls"."expires_in", "short_urls"."expires_on", "short_urls"."limited", "short_urls"."exhausted" FROM "short_urls" WHERE NOT "short_urls"."limited" LIMIT 21
Params: ()
Version: 1.3.6
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/cursor.py", line 59, in execute
raise db_exe from e
djongo.database.DatabaseError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/models/query.py", line 256, in __repr__
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/models/query.py", line 262, in __len__
self._fetch_all()
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/models/query.py", line 1354, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/models/query.py", line 51, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1202, in execute_sql
cursor.execute(sql, params)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 99, in execute
return super().execute(sql, params)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/mnt/c/Users/pranjal/Desktop/urlhash/env/lib/python3.8/site-packages/djongo/cursor.py", line 59, in execute
raise db_exe from e
django.db.utils.DatabaseError
I would need to see your views.py where you do that query, Urls.objects.filter(limited=False). But first, perhaps the query is not finding any Urls objects that match the criteria limited=False, and so the query returns None, which has no attributes. Second, I'm not familiar with negate; I don't see it in your model. Perhaps if I can see the views.py where you make that query, I will understand better.
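A possible workaround worth trying (my assumption, not verified against djongo 1.3.6): the FAILED SQL shows djongo failing on the WHERE NOT "short_urls"."limited" clause that Django emits for filter(limited=False). An __in lookup makes Django emit "limited" IN (%s) instead, which sidesteps the NOT that djongo's converter cannot parse:

# Hypothetical workaround: avoid the NOT clause djongo chokes on by
# asking for the explicit value instead.
urls = Urls.objects.filter(limited__in=[False])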
I am using the following libraries:
from binance.websockets import BinanceSocketManager
from twisted.internet import reactor
To create a websocket to fetch data from an API (Binance API) and print the price of bitcoin in 1s intervals:
conn_key = bsm.start_symbol_ticker_socket('BTCUSDT', asset_price_stream)
Every time there is a new update, the function asset_price_stream is executed, with the new data as parameter. This works (I can, for example, simply print the data to the console in asset_price_stream).
Now, I want to share this data with an asyncio function. At the moment, I am using a janus queue for this.
In asset_price_stream:
price_update_queue_object.queue.sync_q.put_nowait({msg['s']: msg['a']})
and in the asynchronous thread:
async def price_updater(update_queue):
    while True:
        priceupdate = await update_queue.async_q.get()
        print(priceupdate)
        update_queue.async_q.task_done()
I am using the sync_q interface in asset_price_stream and the async_q interface in the asyncio function. If I use the async_q interface also in asset_price_stream, I get an error:
Unhandled Error
Traceback (most recent call last):
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/log.py", line 85, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/context.py", line 83, in callWithContext
return func(*args, **kw)
--- <exception caught here> ---
File "/home/elias/.local/lib/python3.7/site-packages/twisted/internet/posixbase.py", line 687, in _doReadOrWrite
why = selectable.doRead()
File "/home/elias/.local/lib/python3.7/site-packages/twisted/internet/tcp.py", line 246, in doRead
return self._dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/internet/tcp.py", line 251, in _dataReceived
rval = self.protocol.dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/protocols/tls.py", line 324, in dataReceived
self._flushReceiveBIO()
File "/home/elias/.local/lib/python3.7/site-packages/twisted/protocols/tls.py", line 290, in _flushReceiveBIO
ProtocolWrapper.dataReceived(self, bytes)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/protocols/policies.py", line 107, in dataReceived
self.wrappedProtocol.dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/twisted/websocket.py", line 290, in dataReceived
self._dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1207, in _dataReceived
self.consumeData()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1219, in consumeData
while self.processData() and self.state != WebSocketProtocol.STATE_CLOSED:
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1579, in processData
fr = self.onFrameEnd()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1704, in onFrameEnd
self._onMessageEnd()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/twisted/websocket.py", line 318, in _onMessageEnd
self.onMessageEnd()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 628, in onMessageEnd
self._onMessage(payload, self.message_is_binary)
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/twisted/websocket.py", line 321, in _onMessage
self.onMessage(payload, isBinary)
File "/home/elias/.local/lib/python3.7/site-packages/binance/websockets.py", line 31, in onMessage
self.factory.callback(payload_obj)
File "dollarbot.py", line 425, in asset_price_stream
price_update_queue_object.queue.async_q.put_nowait({msg['s']: msg['a']})
File "/home/elias/.local/lib/python3.7/site-packages/janus/__init__.py", line 438, in put_nowait
self._parent._notify_async_not_empty(threadsafe=False)
File "/home/elias/.local/lib/python3.7/site-packages/janus/__init__.py", line 158, in _notify_async_not_empty
self._call_soon(task_maker)
File "/home/elias/.local/lib/python3.7/site-packages/janus/__init__.py", line 60, in checked_call_soon
self._loop.call_soon(callback, *args)
File "/usr/lib/python3.7/asyncio/base_events.py", line 690, in call_soon
self._check_thread()
File "/usr/lib/python3.7/asyncio/base_events.py", line 728, in _check_thread
"Non-thread-safe operation invoked on an event loop other "
builtins.RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
When using sync_q, it works so far: the async thread can print the price updates. But sometimes it simply gets stuck, and I don't know why. The API is still delivering data (I have double-checked), but it stops arriving at the async thread through the queue. I have no idea why this happens, and it is not always reproducible (I can run the same code without modifications five times in a row; four times it works and once it doesn't). The interesting thing is: as soon as I press CTRL-C to abort the program, the data immediately arrives at the queue when it didn't before (in the few seconds while the program is waiting to shut all asyncio threads down). So I have the feeling that something in the sync-async queue communication is racing with something else, and that something gets aborted when I press CTRL-C.
So my question is: how can I improve this procedure? Is there a better way to send data to an asyncio thread from a twisted.internet websocket than janus.Queue()? I tried some different things (accessing a global object, for example), but I cannot acquire an asyncio.Lock() from asset_price_stream, as it is not an async function, so that would not be thread safe.
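One alternative that avoids janus entirely (a minimal sketch, under the assumption that you can capture the asyncio loop object at startup): asyncio's own loop.call_soon_threadsafe is the documented thread-safe way to schedule work on a loop from another thread, and unlike call_soon it also wakes a sleeping loop. The Twisted-side callback then touches the loop through that single call:

import asyncio

price_queue = None
main_loop = None

async def price_updater():
    # Create the queue and capture the loop from inside the running
    # asyncio thread, so both are bound to the correct event loop.
    global price_queue, main_loop
    price_queue = asyncio.Queue()
    main_loop = asyncio.get_running_loop()
    while True:
        priceupdate = await price_queue.get()
        print(priceupdate)

def asset_price_stream(msg):
    # Runs on the Twisted reactor thread. call_soon_threadsafe is the
    # one loop method documented as safe to call across threads.
    if main_loop is None:
        return  # drop updates until the asyncio side is ready
    main_loop.call_soon_threadsafe(
        price_queue.put_nowait, {msg['s']: msg['a']}
    )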
I am using Gremlin Server. I have a big data set and I am performing Gremlin paging. The following is a sample of the query:
query = """g.V().both().both().count()"""
data = execute_query(query)
for x in range(0,int(data[0]/10000)+1):
print(x*10000, " - ",(x+1)*10000)
query = """g.V().both().both().range({0}*10000, {1}*10000)""".format(x,x+1)
data = execute_query(query)
def execute_query(query):
"""query execution"""
The above query is working fine. For pagination I have to know the range where to stop execution of the query, and to get that range I first have to fetch the count and pass it to the for loop. Is there any other way to do pagination in Gremlin?
Pagination is required because the query fails when fetching 100k records in a single execution, e.g. g.V().both().both().count().
If we don't use pagination, it gives me the following error:
ERROR:tornado.application:Uncaught exception, closing connection.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f3e1c409ae8>)
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/ioloop.py", line 604, in _run_callback
ret = callback()
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
Traceback (most recent call last):
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 59, in <module>
data = execute_query(query)
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 53, in execute_query
results = future_results.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/resultset.py", line 81, in cb
f.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 398, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/connection.py", line 77, in _receive
self._protocol.data_received(data, self._results)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
this line repeats 100 times File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
This question is largely answered here, but I'll add some more commentary.
Your approach to pagination is really expensive, as I'm not aware of any graph that will optimize that particular traversal; you're basically iterating the same data many times over. You iterate it once for the count(), then you iterate the first 10,000; for the second 10,000 you iterate the first 10,000 followed by the second 10,000; for the third 10,000 you iterate the first 20,000 followed by the third 10,000; and so on...
I'm not sure if there is more to your logic, but what you have looks like a form of "batching" to get smaller bunches of results. There isn't much need to do it that way, as Gremlin Server already does that for you internally: were you to just send g.V().both().both(), Gremlin Server would batch up the results according to the resultIterationBatchSize configuration option.
Anyway, there isn't really a better way to make paging work that I am aware of beyond what was explained in the other question that I mentioned.
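To illustrate the batching point, a minimal gremlin-python sketch (the endpoint and traversal source name are illustrative assumptions; resultIterationBatchSize itself is configured in gremlin-server.yaml):

from gremlin_python.driver import client

# Endpoint and traversal source name are assumptions for illustration.
gremlin_client = client.Client('ws://localhost:8182/gremlin', 'g')

# Send the bare traversal: Gremlin Server streams the result back in
# batches of resultIterationBatchSize (64 by default) on its own, so
# no manual range() paging is needed just to keep responses small.
result_set = gremlin_client.submit('g.V().both().both()')
results = result_set.all().result()  # gathers the streamed batches
gremlin_client.close()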