I am using the following libraries:
from binance.websockets import BinanceSocketManager
from twisted.internet import reactor
to create a websocket that fetches data from an API (the Binance API) and prints the price of Bitcoin at 1-second intervals:
conn_key = bsm.start_symbol_ticker_socket('BTCUSDT', asset_price_stream)
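For completeness, the surrounding setup looks roughly like this (a sketch; the credentials are placeholders):
from binance.client import Client
from binance.websockets import BinanceSocketManager

client = Client('api_key', 'api_secret')  # placeholder credentials
bsm = BinanceSocketManager(client)
conn_key = bsm.start_symbol_ticker_socket('BTCUSDT', asset_price_stream)
bsm.start()  # runs the Twisted reactor in a background thread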
Every time there is a new update, the function asset_price_stream is executed with the new data as a parameter. This works (I can, for example, simply print the data to the console in asset_price_stream).
Now, I want to share this data with an asyncio function. At the moment, I am using a janus queue for this.
In asset_price_stream:
price_update_queue_object.queue.sync_q.put_nowait({msg['s']: msg['a']})
and in the asynchronous thread:
async def price_updater(update_queue):
    while True:
        priceupdate = await update_queue.async_q.get()
        print(priceupdate)
        update_queue.async_q.task_done()
I am using the sync_q interface in asset_price_stream and the async_q interface in the asyncio function. If I use the async_q interface in asset_price_stream as well, I get an error:
Unhandled Error
Traceback (most recent call last):
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/log.py", line 85, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/python/context.py", line 83, in callWithContext
return func(*args, **kw)
--- <exception caught here> ---
File "/home/elias/.local/lib/python3.7/site-packages/twisted/internet/posixbase.py", line 687, in _doReadOrWrite
why = selectable.doRead()
File "/home/elias/.local/lib/python3.7/site-packages/twisted/internet/tcp.py", line 246, in doRead
return self._dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/internet/tcp.py", line 251, in _dataReceived
rval = self.protocol.dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/protocols/tls.py", line 324, in dataReceived
self._flushReceiveBIO()
File "/home/elias/.local/lib/python3.7/site-packages/twisted/protocols/tls.py", line 290, in _flushReceiveBIO
ProtocolWrapper.dataReceived(self, bytes)
File "/home/elias/.local/lib/python3.7/site-packages/twisted/protocols/policies.py", line 107, in dataReceived
self.wrappedProtocol.dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/twisted/websocket.py", line 290, in dataReceived
self._dataReceived(data)
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1207, in _dataReceived
self.consumeData()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1219, in consumeData
while self.processData() and self.state != WebSocketProtocol.STATE_CLOSED:
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1579, in processData
fr = self.onFrameEnd()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 1704, in onFrameEnd
self._onMessageEnd()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/twisted/websocket.py", line 318, in _onMessageEnd
self.onMessageEnd()
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/websocket/protocol.py", line 628, in onMessageEnd
self._onMessage(payload, self.message_is_binary)
File "/home/elias/.local/lib/python3.7/site-packages/autobahn/twisted/websocket.py", line 321, in _onMessage
self.onMessage(payload, isBinary)
File "/home/elias/.local/lib/python3.7/site-packages/binance/websockets.py", line 31, in onMessage
self.factory.callback(payload_obj)
File "dollarbot.py", line 425, in asset_price_stream
price_update_queue_object.queue.async_q.put_nowait({msg['s']: msg['a']})
File "/home/elias/.local/lib/python3.7/site-packages/janus/__init__.py", line 438, in put_nowait
self._parent._notify_async_not_empty(threadsafe=False)
File "/home/elias/.local/lib/python3.7/site-packages/janus/__init__.py", line 158, in _notify_async_not_empty
self._call_soon(task_maker)
File "/home/elias/.local/lib/python3.7/site-packages/janus/__init__.py", line 60, in checked_call_soon
self._loop.call_soon(callback, *args)
File "/usr/lib/python3.7/asyncio/base_events.py", line 690, in call_soon
self._check_thread()
File "/usr/lib/python3.7/asyncio/base_events.py", line 728, in _check_thread
"Non-thread-safe operation invoked on an event loop other "
builtins.RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
When using sync_q, it works so far: the async thread can print the price updates. But sometimes it simply gets stuck, and I don't know why. The API is still delivering data (I have double-checked), but it stops arriving at the async thread through the queue. I have no idea why this happens, and it is not always reproducible (I can run the same code without modifications five times in a row; four times it works, and once it doesn't). The interesting thing is: as soon as I press CTRL-C to abort the program, the data immediately arrives at the queue when it didn't before (in the few seconds while the program is waiting to shut all asyncio tasks down)! So I have the feeling that something in the sync-async queue communication is racing with something else that is aborted when I press CTRL-C.
So my question is: how can I improve this procedure? Is there a better way to send data to an asyncio task from a twisted.internet websocket than janus.Queue()? I tried a few other things (accessing a global object, for example), but I cannot acquire an asyncio.Lock() from asset_price_stream, as it is not an async function, so that approach would not be thread-safe.
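Would something like the following be cleaner? A sketch that skips janus and hands updates to the asyncio loop with loop.call_soon_threadsafe, the one event-loop method that is documented as safe to call from another thread (here, the Twisted reactor thread). It assumes main_loop and price_queue are created in the asyncio thread at startup:
import asyncio

price_queue = asyncio.Queue()
main_loop = asyncio.get_event_loop()  # captured in the asyncio thread at startup

def asset_price_stream(msg):
    # runs on the Twisted reactor thread: don't touch the queue directly,
    # schedule the put on the asyncio loop instead
    main_loop.call_soon_threadsafe(price_queue.put_nowait, {msg['s']: msg['a']})

async def price_updater():
    while True:
        price_update = await price_queue.get()
        print(price_update)
        price_queue.task_done()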
I have a model whose records belong to bunches. Each record in one bunch should have its own unique version (in order of creation of the records in the DB).
To me, the best way to do this is with transactions, but I ran into a problem with parallel execution of the transaction blocks. When I remove the transaction.atomic() block, everything works, but the versions are not updated after execution.
I wrote a small piece of code to test concurrent incrementing of a record's version in the database:
def _save_instance(instance):
    time = random.randint(1, 50)
    sleep(time / 1000)
    instance.text = str(time)
    instance.save()

def _parallel():
    instances = MyModel.objects.all()

    # clear version
    print('-- clear old numbers -- ')
    instances.update(version=None)

    processes = []
    for instance in instances:
        p = Process(target=_save_instance, args=(instance,))
        processes.append(p)

    print('-- launching -- ')
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    sleep(1)
    ...
    # assertions to check if versions are correct in one bunch
    print('parallel Ok!')
The save() method of MyModel is defined like this:
...
def save(self, *args, **kwargs) -> None:
    with transaction.atomic():
        if not self.number and self.banch_id:
            max_number = MyModel.objects.filter(
                banch_id=self.banch_id
            ).aggregate(max_number=models.Max('version'))['max_number']
            self.version = max_number + 1 if max_number else 1
        super().save(*args, **kwargs)
When I run my test code on a random number of records (30-300), I get an error:
django.db.utils.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
After that, every process hangs, and I can only stop the script with a KeyboardInterrupt.
Full process stack trace:
Process Process-14:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/local/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/app/scripts/test_concurrent_saving.py", line 17, in _save_instance
instance.save()
File "/app/apps/incident/models.py", line 385, in save
).aggregate(max_number=models.Max('version'))['max_number']
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 384, in aggregate
return query.get_aggregation(self.db, kwargs)
File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/query.py", line 503, in get_aggregation
result = compiler.execute_sql(SINGLE)
File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1152, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/raven/contrib/django/client.py", line 123, in execute
return real_execute(self, sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
What is the reason for this behavior?
I will be grateful for any help or advice!
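One thing I suspect (unconfirmed): multiprocessing.Process forks the parent, so every child inherits the parent's already-open psycopg2 connection, and concurrent use of that shared socket can corrupt the protocol stream until the server drops it. A sketch of the usual mitigation, reusing the names from above, is to close all connections in the parent before forking so that each child lazily opens a fresh one:
from multiprocessing import Process
from django.db import connections

def _parallel():
    instances = MyModel.objects.all()
    instances.update(version=None)
    processes = [Process(target=_save_instance, args=(i,)) for i in instances]

    # close inherited connections before forking: each child then opens
    # its own connection on first query instead of sharing the parent's
    connections.close_all()

    for p in processes:
        p.start()
    for p in processes:
        p.join()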
I need help figuring out where I have gone wrong. I followed https://pimylifeup.com/raspberry-pi-google-assistant/. The program asks me to input a request, but it doesn't output the answer through the speakers. I have attached what happens when I talk into the microphone (send a request).
(env) pi@raspberrypi:~ $ googlesamples-assistant-pushtotalk --project-id raspberry-pi--home1 --device-model-id raspberry-pi--home1-google-assistant-fr9vci
INFO:root:Connecting to embeddedassistant.googleapis.com
INFO:root:Using device model raspberry-pi--home1-google-assistant-fr9vci and device id 2c9c8f3a-6286-11ec-b862-b827ebae3b0b
Press Enter to send a new request...
INFO:root:Recording audio request.
INFO:root:Transcript of user request: "what".
INFO:root:Transcript of user request: "what time".
INFO:root:Transcript of user request: "what time is".
INFO:root:Transcript of user request: "what time is it".
INFO:root:Transcript of user request: "what time is it".
INFO:root:Transcript of user request: "what time is it".
INFO:root:End of audio request detected.
INFO:root:Stopping recording.
INFO:root:Transcript of user request: "what time is it".
INFO:root:Playing assistant response.
Traceback (most recent call last):
File "/home/pi/env/bin/googlesamples-assistant-pushtotalk", line 8, in <module>
sys.exit(main())
File "/home/pi/env/lib/python3.9/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/home/pi/env/lib/python3.9/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/home/pi/env/lib/python3.9/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/pi/env/lib/python3.9/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/home/pi/env/lib/python3.9/site-packages/googlesamples/assistant/grpc/pushtotalk.py", line 458, in main
continue_conversation = assistant.assist()
File "/home/pi/env/lib/python3.9/site-packages/tenacity/__init__.py", line 241, in wrapped_f
return self.call(f, *args, **kw)
File "/home/pi/env/lib/python3.9/site-packages/tenacity/__init__.py", line 329, in call
do = self.iter(result=result, exc_info=exc_info,
File "/home/pi/env/lib/python3.9/site-packages/tenacity/__init__.py", line 279, in iter
return fut.result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 433, in result
return self.__get_result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/pi/env/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in call
result = fn(*args, **kwargs)
File "/home/pi/env/lib/python3.9/site-packages/googlesamples/assistant/grpc/pushtotalk.py", line 154, in assist
self.conversation_stream.write(resp.audio_out.audio_data)
File "/home/pi/env/lib/python3.9/site-packages/googlesamples/assistant/grpc/audio_helpers.py", line 326, in write
buf = normalize_audio_buffer(buf, self.volume_percentage)
File "/home/pi/env/lib/python3.9/site-packages/googlesamples/assistant/grpc/audio_helpers.py", line 57, in normalize_audio_buffer
buf = arr.tostring()
AttributeError: 'array.array' object has no attribute 'tostring'
(env) pi@raspberrypi:~ $
I wouldn't qualify this as an answer; I've just been going through the same issue while installing the Assistant SDK on a Raspberry Pi. Unfortunately I haven't done much with Python, but I think the key is:
File "/home/pi/env/lib/python3.9/site-backages/googlesamples/assistant/grpc/audio_helpers.py", line 57, in normalize_audio_buffer
buf = arr.tostring()
AttributeError: 'array.array' object has no attribute 'tostring'
Possibly some type-casting issue, or a deprecated function? Tomorrow I'll roll back to an older Python and try again.
Diego.
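Following up on that guess: array.array.tostring() was deprecated for a long time and removed in Python 3.9, which is exactly the version in the traceback; tobytes() is the documented replacement. A minimal sketch of the corresponding one-line patch to the installed audio_helpers.py:
# in googlesamples/assistant/grpc/audio_helpers.py, normalize_audio_buffer():
# before: buf = arr.tostring()  # removed in Python 3.9
buf = arr.tobytes()  # returns the same bytes under the supported name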
from odps import ODPS
from odps import options
import csv
import os
from datetime import timedelta, datetime
options.sql.use_odps2_extension = True
options.tunnel.use_instance_tunnel = True
options.connect_timeout = 60
options.read_timeout = 130
options.retry_times = 7
options.chunk_size = 8192 * 2
odps = ODPS('id','secret','project', endpoint ='endpointUrl')
table = odps.get_table('eventTable')
def uploadFile(file):
    with table.open_writer(partition=None) as writer:
        with open(file, 'rt') as csvfile:
            rows = csv.reader(csvfile, delimiter='~')
            for final in rows:
                writer.write(final)
    # no explicit writer.close() needed: the with-block closes (and commits) the writer

uploadFile('xyz.csv')
Assume I pass a number of files to uploadFile one by one from a directory. I use this to connect to Alibaba Cloud from Python and migrate data into a MaxCompute table. When I run this code, the service stops either after working for a long time or at night, and it reports a Read Timed Out error at the line writer.write(final).
Error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 226, in _error_catcher
yield
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 301, in read
data = self._fp.read(amt)
File "/usr/lib/python3.5/http/client.py", line 448, in read
n = self.readinto(b)
File "/usr/lib/python3.5/http/client.py", line 488, in readinto
n = self.fp.readinto(b)
File "/usr/lib/python3.5/socket.py", line 575, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/models.py", line 660, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 344, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 311, in read
flush_decoder = True
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 231, in _error_catcher
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
requests.packages.urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='dt.odps.aliyun.com', port=80): Read timed out.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/dataUploader.py", line 66, in <module>
uploadFile('xyz.csv')
File "/dataUploader.py", line 53, in uploadFile
writer.write(final)
File "/usr/local/lib/python3.5/dist-packages/odps/models/table.py", line 643, in __exit__
self.close()
File "/usr/local/lib/python3.5/dist-packages/odps/models/table.py", line 631, in close
upload_session.commit(written_blocks)
File "/usr/local/lib/python3.5/dist-packages/odps/tunnel/tabletunnel.py", line 308, in commit
in self.get_block_list()])
File "/usr/local/lib/python3.5/dist-packages/odps/tunnel/tabletunnel.py", line 298, in get_block_list
self.reload()
File "/usr/local/lib/python3.5/dist-packages/odps/tunnel/tabletunnel.py", line 238, in reload
resp = self._client.get(url, params=params, headers=headers)
File "/usr/local/lib/python3.5/dist-packages/odps/rest.py", line 138, in get
return self.request(url, 'get', stream=stream, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/odps/rest.py", line 125, in request
proxies=self._proxy)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 608, in send
r.content
File "/usr/lib/python3/dist-packages/requests/models.py", line 737, in content
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
File "/usr/lib/python3/dist-packages/requests/models.py", line 667, in generate
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='dt.odps.aliyun.com', port=80): Read timed out.
packet_write_wait: Connection to 122.121.122.121 port 22: Broken pipe
This is the error that I got. Can you suggest what the problem is?
The read timeout is the timeout on waiting to read data: if the server fails to send another byte within the timeout window after the last one, a read timeout error is raised. In other words, the response did not arrive within the specified timeout period.
Here, the read timeout was set to 130 seconds, which is too little if your file is very large.
Please increase the timeout limit from 130 seconds to 500 seconds, i.e. change options.read_timeout = 130 to options.read_timeout = 500.
That should resolve your problem. At the same time, reduce the retry count from 7 to 3,
i.e. options.retry_times = 7 to options.retry_times = 3.
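In code, the suggested settings would be:
from odps import options

options.read_timeout = 500  # was 130; give slow tunnel responses more time
options.retry_times = 3     # was 7; fewer retries before surfacing the error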
This error is usually caused by a network issue.
Execute curl <endpoint URL> in a terminal. If it returns immediately with something like this:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchObject</Code>
<Message><![CDATA[Unknown http request location: /]]></Message>
<RequestId>5E5CC9526283FEC94F19DAAE</RequestId>
<HostId>localhost</HostId>
</Error>
then the endpoint URL is reachable. But if it hangs, you should check whether you are using the right endpoint URL.
Since MaxCompute (ODPS) has both public and private endpoints, this can be confusing at times.
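For example, a quick reachability check from Python against the tunnel host that appears in the traceback above (a sketch; substitute your own endpoint):
import requests

# dt.odps.aliyun.com is the tunnel host from the traceback; replace as needed
resp = requests.get('http://dt.odps.aliyun.com', timeout=10)
print(resp.status_code)
print(resp.text[:300])  # a reachable endpoint answers quickly, e.g. with the NoSuchObject XML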
I am using Gremlin Server with a big data set, and I am paging through results with Gremlin. The following is a sample of the query:
query = """g.V().both().both().count()"""
data = execute_query(query)
for x in range(0,int(data[0]/10000)+1):
print(x*10000, " - ",(x+1)*10000)
query = """g.V().both().both().range({0}*10000, {1}*10000)""".format(x,x+1)
data = execute_query(query)
def execute_query(query):
"""query execution"""
The above query works fine. For pagination, I have to know the range at which to stop executing the query, and to get that range I first have to fetch the count and pass it to the for loop. Is there another way to do pagination in Gremlin?
Pagination is required because fetching 100k results in a single execution fails, e.g. g.V().both().both().count()
If we don't use pagination, I get the following error:
ERROR:tornado.application:Uncaught exception, closing connection.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f3e1c409ae8>)
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/ioloop.py", line 604, in _run_callback
ret = callback()
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
Traceback (most recent call last):
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 59, in <module>
data = execute_query(query)
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 53, in execute_query
results = future_results.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/resultset.py", line 81, in cb
f.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 398, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/connection.py", line 77, in _receive
self._protocol.data_received(data, self._results)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
  ... (this frame repeats about 100 times)
This question is largely answered here, but I'll add a bit more commentary.
Your approach to pagination is really expensive, as I'm not aware of any graph that will optimize that particular traversal, and you're basically iterating all that data many times over. You do it once for the count(); then you iterate the first 10,000; then for the second 10,000, you iterate the first 10,000 followed by the second 10,000; on the third 10,000, you iterate the first 20,000 followed by the third 10,000; and so on...
I'm not sure if there is more to your logic, but what you have looks like a form of "batching" to get smaller bunches of results. There isn't much need to do it that way, as Gremlin Server is already doing that for you internally. If you were to just send g.V().both().both(), Gremlin Server would batch up the results per the resultIterationBatchSize configuration option.
Anyway, there isn't really a better way to make paging work that I am aware of beyond what was explained in the other question that I mentioned.
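For illustration, a minimal sketch of letting the server do the batching, assuming a Gremlin Server at localhost:8182 and the gremlinpython driver:
from gremlin_python.driver import client

# the server streams the response back in batches sized by the
# resultIterationBatchSize setting (default 64), so no client-side
# range() paging is needed for a single traversal
gremlin_client = client.Client('ws://localhost:8182/gremlin', 'g')
result_set = gremlin_client.submit('g.V().both().both()')
results = result_set.all().result()  # collects the streamed batches
gremlin_client.close()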
I'm running my tornado.testing.AsyncHTTPTestCase-based test cases and occasionally see this stack trace in the log. It appears to happen randomly, so the test suite sometimes runs into it and sometimes doesn't.
Is this a Tornado 3.2 bug, or am I supposed to handle this exception somehow?
It doesn't appear to affect any of my test results, but I'm not too happy about random exceptions being left around like this.
ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.wrapped at 0x10727d830>)
Traceback (most recent call last):
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/ioloop.py", line 477, in _run_callback
callback()
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/stack_context.py", line 331, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/stack_context.py", line 302, in wrapped
ret = fn(*args, **kwargs)
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/iostream.py", line 366, in wrapper
self._maybe_add_error_listener()
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/iostream.py", line 600, in _maybe_add_error_listener
self._add_io_state(ioloop.IOLoop.READ)
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/iostream.py", line 630, in _add_io_state
self.fileno(), self._handle_events, self._state)
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/ioloop.py", line 545, in add_handler
self._impl.register(fd, events | self.ERROR)
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/platform/kqueue.py", line 41, in register
self._control(fd, events, select.KQ_EV_ADD)
File "/Users/fredrik/.virtualenvs/project/lib/python3.3/site-packages/tornado/platform/kqueue.py", line 60, in _control
fd, filter=select.KQ_FILTER_READ, flags=flags))
OverflowError: can't convert negative int to unsigned
I haven't seen this before, but it looks like a socket object has been closed (changing its file descriptor to -1) while an IOStream is still trying to read from it. Are you doing any unusual cleanup in your tests or reaching into any IOStreams to access the socket object directly?
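To illustrate that failure mode (a standalone sketch, independent of the test suite in question): once a socket is closed, fileno() returns -1, and registering -1 with kqueue is exactly the negative-to-unsigned conversion that overflows:
import socket

s = socket.socket()
print(s.fileno())  # a real descriptor, e.g. 3
s.close()
print(s.fileno())  # -1 once closed; passing this to kqueue registration
                   # raises OverflowError: can't convert negative int to unsigned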