Gremlin paging for big dataset query - python-3.x

I am using Gremlin Server with a big dataset, and I am implementing paging in Gremlin. The following is a sample of the query:
query = """g.V().both().both().count()"""
data = execute_query(query)
for x in range(0,int(data[0]/10000)+1):
print(x*10000, " - ",(x+1)*10000)
query = """g.V().both().both().range({0}*10000, {1}*10000)""".format(x,x+1)
data = execute_query(query)
def execute_query(query):
"""query execution"""
The above query works fine, but for pagination I have to know the range at which to stop executing the query. To get that range I first have to fetch the count of the query and pass it to the for loop. Is there any other way to do pagination in Gremlin?
Pagination is required because the query fails when fetching 100k records in a single execution, e.g. g.V().both().both().count().
If I don't use pagination, I get the following error:
ERROR:tornado.application:Uncaught exception, closing connection.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f3e1c409ae8>)
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/ioloop.py", line 604, in _run_callback
ret = callback()
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
Traceback (most recent call last):
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 59, in <module>
data = execute_query(query)
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 53, in execute_query
results = future_results.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/resultset.py", line 81, in cb
f.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 398, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/connection.py", line 77, in _receive
self._protocol.data_received(data, self._results)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
(this line repeats 100 times)

This question is largely answered here, but I'll add some more commentary.
Your approach to pagination is really expensive, as I'm not aware of any graphs that will optimize that particular traversal, and you're basically iterating all of that data many times over. You do it once for the count(); then you iterate the first 10,000; then for the second 10,000 you iterate the first 10,000 followed by the second 10,000; on the third 10,000 you iterate the first 20,000 followed by the third 10,000; and so on...
I'm not sure if there is more to your logic, but what you have looks like a form of "batching" to get smaller bunches of results. There isn't much need to do it that way, as Gremlin Server already does that for you internally. Were you to just send g.V().both().both(), Gremlin Server would batch up the results according to the resultIterationBatchSize configuration option.
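As a rough illustration, here is a minimal sketch of leaning on that server-side streaming with the gremlin-python driver (the URL and traversal source name are placeholder assumptions, not values from the original post):
from gremlin_python.driver import client

gremlin_client = client.Client('ws://localhost:8182/gremlin', 'g')
# a single submit; the server streams the response back in
# resultIterationBatchSize-sized chunks rather than one giant message
result_set = gremlin_client.submit("g.V().both().both()")
results = result_set.all().result()  # aggregates the streamed batches client-side
gremlin_client.close()
Note that all() still accumulates everything in client memory, so for truly huge result sets some range()-based limiting may still be needed on top of this.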
Anyway, there isn't really a better way to make paging work that I'm aware of beyond what was explained in the other question I mentioned.

Related

Error when using joblib in python with undetected chromedriver

When I use this (self.links is a list of strings):
Parallel(n_jobs=2)(delayed(self.buybysize)(link) for link in self.links)
with this function:
def buybysize(self, link):
    browser = self.browser()
    # other commented stuff

def browser(self):
    options = uc.ChromeOptions()
    options.user_data_dir = self.user_data_dir
    options.add_argument(self.add_argument)
    driver = uc.Chrome(options=options)
    return driver
I get the error:
joblib.externals.loky.process_executor._RemoteTraceback:
Traceback (most recent call last):
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 436, in _process_worker
r = call_item()
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 288, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/home/Me/PycharmProjects/zalando_buy/Zalando.py", line 91, in buybysize
browser = self.browser()
File "/home/Me/PycharmProjects/zalando_buy/Zalando.py", line 38, in browser
driver = uc.Chrome(options=options)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/undetected_chromedriver/__init__.py", line 388, in __init__
self.browser_pid = start_detached(
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/undetected_chromedriver/dprocess.py", line 30, in start_detached
multiprocessing.Process(
File "/usr/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/externals/loky/backend/process.py", line 39, in _Popen
return Popen(process_obj)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/externals/loky/backend/popen_loky_posix.py", line 52, in __init__
self._launch(process_obj)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/externals/loky/backend/popen_loky_posix.py", line 157, in _launch
pid = fork_exec(cmd_python, self._fds, env=process_obj.env)
AttributeError: 'Process' object has no attribute 'env'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/Me/PycharmProjects/zalando_buy/Start.py", line 4, in <module>
class Start:
File "/home/Me/PycharmProjects/zalando_buy/Start.py", line 7, in Start
zalando.startshopping()
File "/home/Me/PycharmProjects/zalando_buy/Zalando.py", line 42, in startshopping
self.openlinks()
File "/home/Me/PycharmProjects/zalando_buy/Zalando.py", line 50, in openlinks
Parallel(n_jobs=2)(delayed(self.buybysize)(link) for link in self.links)
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/parallel.py", line 1056, in __call__
self.retrieve()
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/parallel.py", line 935, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/home/Me/PycharmProjects/zalando_buy/venv/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
return future.result(timeout=timeout)
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
AttributeError: 'Process' object has no attribute 'env'
Process finished with exit code 1
To me it looks like there are instabilities because undetected chromedriver may already use multiprocessing itself, but isn't there any way I can open multiple browsers with UC and process each iteration in parallel?
Edit: I debugged, and the error appears after trying to execute this line:
driver = uc.Chrome(options=options)
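Since the crash originates in loky's process launcher (fork_exec on a Process object missing the env attribute), one workaround worth trying, as an assumption rather than a confirmed fix, is joblib's threading backend, so that undetected_chromedriver's own multiprocessing is not re-wrapped by loky:
from joblib import Parallel, delayed

# inside the same class as buybysize()/browser():
def openlinks(self):
    # backend="threading" keeps the workers in this process, so loky's
    # patched Process (where the 'env' AttributeError is raised) is never used
    Parallel(n_jobs=2, backend="threading")(
        delayed(self.buybysize)(link) for link in self.links
    )
Chrome itself runs as separate OS processes, so threads are usually enough to drive several browsers concurrently.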

Creating an Expectation Suite with an Automated Profiler in Great Expectations

I am a newbie to Great Expectations and am trying to set it up, but I'm facing the issue below while creating an Expectation Suite with an automated profiler.
C:\Users\user\great_expectations>great_expectations --v3-api suite new
Using v3 (Batch Request) API
How would you like to create your Expectation Suite?
1. Manually, without interacting with a sample batch of data (default)
2. Interactively, with a sample batch of data
3. Automatically, using a profiler
: 3
A batch of data is required to edit the suite - let's help you to specify it.
Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\Scripts\great_expectations.exe\__main__.py", line 7, in <module>
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\great_expectations\cli\cli.py", line 190, in main
cli()
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\great_expectations\cli\suite.py", line 151, in suite_new
_suite_new_workflow(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\great_expectations\cli\suite.py", line 335, in _suite_new_workflow
raise e
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\great_expectations\cli\suite.py", line 268, in _suite_new_workflow
suite: ExpectationSuite = toolkit.get_or_create_expectation_suite(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\great_expectations\cli\toolkit.py", line 82, in get_or_create_expectation_suite
default_expectation_suite_name: str = get_default_expectation_suite_name(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\great_expectations\cli\toolkit.py", line 131, in get_default_expectation_suite_name
suite_name = f"batch-{BatchRequest(**batch_request).id}"
TypeError: BatchRequest.__init__() missing 1 required positional argument: 'data_asset_name'
C:\Users\user\great_expectations>
I had the same issue and, for me, the problem came from a badly configured data source. What I suggest you do is test your data source config and see how many data assets it finds:
from ruamel import yaml
import great_expectations as ge
context = ge.get_context()
datasource_config = {...}
context.test_yaml_config(yaml.dump(datasource_config))
When you run this, test_yaml_config will output a report on how many assets it found.
If it didn't find any, then you'll run into the issue you're describing when you try to create a suite on your data.
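If you need a starting point for that config, here is a minimal sketch of a filesystem/pandas datasource_config of the shape the v3 API expects (the datasource name, base_directory, and regex are placeholder assumptions, not values from the original post):
datasource_config = {
    "name": "my_filesystem_datasource",  # placeholder
    "class_name": "Datasource",
    "execution_engine": {"class_name": "PandasExecutionEngine"},
    "data_connectors": {
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetFilesystemDataConnector",
            "base_directory": "./data",  # placeholder: point at your files
            "default_regex": {
                # the capture group supplies the data_asset_name that the
                # BatchRequest in the traceback complained was missing
                "group_names": ["data_asset_name"],
                "pattern": r"(.*)\.csv",
            },
        },
    },
}
If the report shows zero data assets, fix base_directory or the regex before rerunning the suite new command.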

Cannot delete from datastore emulator

I have this simple piece of code:
from google.cloud import datastore
import requests

ds_c = datastore.Client(_http=requests.Session)

for entity in ds_c.query(kind='Kind').fetch():
    ds_c.delete(entity.key)
I am getting:
google.api_core.exceptions.ResourceExhausted: 429 Received message larger than max (4207799 vs. 4194304)
Full stack trace:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.RESOURCE_EXHAUSTED
details = "Received message larger than max (4207799 vs. 4194304)"
debug_error_string = "{"created":"@1589308786.798030838","description":"Received message larger than max (4207799 vs. 4194304)","file":"src/core/ext/filters/message_size/message_size_filter.cc","file_line":191,"grpc_status":8}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/vagrant/.local/bin/fab", line 8, in <module>
sys.exit(program.run())
File "/home/vagrant/.local/lib/python3.7/site-packages/invoke/program.py", line 384, in run
self.execute()
File "/home/vagrant/.local/lib/python3.7/site-packages/invoke/program.py", line 566, in execute
executor.execute(*self.tasks)
File "/home/vagrant/.local/lib/python3.7/site-packages/invoke/executor.py", line 129, in execute
result = call.task(*args, **call.kwargs)
File "/home/vagrant/.local/lib/python3.7/site-packages/invoke/tasks.py", line 127, in __call__
result = self.body(*args, **kwargs)
File "/vagrant/cyclone/fabfile.py", line 34, in clean_test_datastore
for entity in ds_c.query(kind=kind).fetch():
File "/usr/local/lib/python3.7/dist-packages/google/api_core/page_iterator.py", line 212, in _items_iter
for page in self._page_iter(increment=False):
File "/usr/local/lib/python3.7/dist-packages/google/api_core/page_iterator.py", line 249, in _page_iter
page = self._next_page()
File "/usr/local/lib/python3.7/dist-packages/google/cloud/datastore/query.py", line 537, in _next_page
self._query.project, partition_id, read_options, query=query_pb
File "/usr/local/lib/python3.7/dist-packages/google/cloud/datastore_v1/gapic/datastore_client.py", line 384, in run_query
request, retry=retry, timeout=timeout, metadata=metadata
File "/usr/local/lib/python3.7/dist-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/usr/local/lib/python3.7/dist-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.ResourceExhausted: 429 Received message larger than max (4207799 vs. 4194304)
The easiest workaround would be to fetch only the keys you are looking for. Note that keys_only() mutates the query in place rather than returning it, so it cannot be chained:
query = ds_c.query(kind='Kind')
query.keys_only()
for entity in query.fetch():
    ds_c.delete(entity.key)
It's probably a good idea to add a limit to your fetch as well, to reduce the size of each response, e.g. query.fetch(limit=500).
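For completeness, a minimal sketch combining both suggestions with batched deletes (the 500-per-call figure matches Datastore's per-commit mutation limit; the kind name is the placeholder from the question):
from google.cloud import datastore

ds_c = datastore.Client()
query = ds_c.query(kind='Kind')
query.keys_only()  # only __key__ comes back, keeping each response small

while True:
    entities = list(query.fetch(limit=500))
    if not entities:
        break
    ds_c.delete_multi([e.key for e in entities])  # one batched commit per page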

PyTorch Getting Started example not working

I followed the "Deep Learning with PyTorch: A 60 Minute Blitz" tutorial in the Getting Started section of the PyTorch website, downloaded the code for "Training a Classifier" at the bottom of the page, and ran it, but it's not working for me. I'm using the CPU version of PyTorch, if that makes a difference. I'm new to Python and am basically learning it for PyTorch. Here's the error message (Ctrl+K isn't formatting it for me; I think the editing interface is different for the first few posts, or it could just be my browser):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Anonymous\PycharmProjects\pytorchHelloWorld\train_network.py", line 100, in <module>
dataiter = iter(trainloader)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "C:/Users/Anonymous/PycharmProjects/pytorchHelloWorld/train_network.py", line 100, in <module>
dataiter = iter(trainloader)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
The error is likely due to DataLoader multiprocessing on Windows, since the tutorial uses num_workers=2. The Python 3 documentation shares some guidelines on this:
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
You can either set num_workers=0 or wrap your code in if __name__ == '__main__':
# Safe DataLoader multiprocessing with Windows
if __name__ == '__main__':
    # Code to load the data with num_workers > 1
Check this reply on the PyTorch forum for more details, and this issue on GitHub.
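As a self-contained sketch of that guard pattern (the tensor dataset here is a stand-in, not the tutorial's CIFAR-10 code):
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # stand-in data: 100 samples of 3 features with binary labels
    dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
    # num_workers=2 spawns worker processes; on Windows those workers
    # re-import this module, which is why the __main__ guard is required
    trainloader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
    features, labels = next(iter(trainloader))
    print(features.shape, labels.shape)

if __name__ == '__main__':
    main()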

python-telegram-bot doesn't work on Python 3.x

I'm trying to make a program that uses python-telegram-bot, and I need to use Python 3 for that. But when I try to launch it using Python 3, I get some error I don't quite understand. The same happens when I use the built-in examples. Could somebody explain what it means? The precise output of the program follows.
2016-06-29 06:17:44,260 - telegram.ext.updater - ERROR - unhandled exception
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/telegram/ext/updater.py", line 105, in _thread_wrapper
target(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/telegram/ext/updater.py", line 216, in _start_polling
self._bootstrap(bootstrap_retries, clean=clean, webhook_url='')
File "/usr/local/lib/python3.4/dist-packages/telegram/ext/updater.py", line 320, in _bootstrap
self.bot.setWebhook(webhook_url=webhook_url, certificate=cert)
File "/usr/local/lib/python3.4/dist-packages/telegram/bot.py", line 121, in decorator
result = func(self, *args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/telegram/bot.py", line 1263, in setWebhook
result = request.post(url, data, timeout=kwargs.get('timeout'))
File "/usr/local/lib/python3.4/dist-packages/telegram/utils/request.py", line 174, in post
headers={'Content-Type': 'application/json'})
File "/usr/local/lib/python3.4/dist-packages/telegram/utils/request.py", line 100, in _request_wrapper
resp = _get_con_pool().request(*args, **kwargs)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 72, in request
**urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 135, in request_encode_body
**urlopen_kw)
TypeError: urlopen() got multiple values for keyword argument 'body'
Exception in thread updater:
Traceback (most recent call last):
File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.4/dist-packages/telegram/ext/updater.py", line 105, in _thread_wrapper
target(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/telegram/ext/updater.py", line 216, in _start_polling
self._bootstrap(bootstrap_retries, clean=clean, webhook_url='')
File "/usr/local/lib/python3.4/dist-packages/telegram/ext/updater.py", line 320, in _bootstrap
self.bot.setWebhook(webhook_url=webhook_url, certificate=cert)
File "/usr/local/lib/python3.4/dist-packages/telegram/bot.py", line 121, in decorator
result = func(self, *args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/telegram/bot.py", line 1263, in setWebhook
result = request.post(url, data, timeout=kwargs.get('timeout'))
File "/usr/local/lib/python3.4/dist-packages/telegram/utils/request.py", line 174, in post
headers={'Content-Type': 'application/json'})
File "/usr/local/lib/python3.4/dist-packages/telegram/utils/request.py", line 100, in _request_wrapper
resp = _get_con_pool().request(*args, **kwargs)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 72, in request
**urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 135, in request_encode_body
**urlopen_kw)
TypeError: urlopen() got multiple values for keyword argument 'body'
2016-06-29 06:17:45,248 - telegram.ext.dispatcher - CRITICAL - stopping due to exception in another thread
For this to work, install urllib3>=1.10 from the package manager; the traceback shows the older distribution-packaged urllib3 in /usr/lib/python3/dist-packages being picked up.
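A quick sanity check (a sketch) to confirm which urllib3 the interpreter actually imports after upgrading:
import urllib3
print(urllib3.__version__, urllib3.__file__)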
