Optimising HTTP Requests/Browser Throttling Requests?

I currently have an application that trades virtual items and is making at least 40 CFHTTP requests per second to the host's server.
The issue I'm encountering is that it takes 400ms or more for my CFHTTP call to return a response, which means my application misses out on 99% of the deals it finds, as there are lots of other competing applications out there that get a faster response.
I've struggled to find a cause and/or a solution, so I wrote a script in both CF and C# that makes 10 HTTP requests, timing each one, which produced the following response times:
In CF using the following browsers:
IE9: 384, 444, 302, 570, 535, 317, 510, 349, 357, 467 - Average 423.5ms
Firefox 27.0.1: 354, 587, 291, 480, 437, 304, 537, 322, 286, 652 - Average 425ms
Chrome: 300, 328, 328, 639, 285, 259, 348, 291, 299, 414 - Average 349.7ms
In C# Console Application:
597, 43, 96, 52, 44, 305, 67, 91, 54, 270 - Average 161.9ms
As you can see, there is a big performance difference when making an HttpWebRequest in a C# console application, which makes me think that perhaps the CFHTTP requests are being throttled. Or could it maybe be something to do with the browsers?
Any help would be greatly appreciated!

I don't have enough reputation to comment, so I'll ask here.
Have you tried using the Java classes to make the HTTP calls?
obj = CreateObject("java", "org.apache.commons.httpclient.HttpClient");
get = CreateObject("java", "org.apache.commons.httpclient.methods.GetMethod");
header = CreateObject("java", "org.apache.commons.httpclient.Header");
obj.init();
tmp = get.init("http://google.com");
res = obj.executeMethod(tmp);
return res.response;
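One thing worth trying for throughput: create the HttpClient once and reuse it for every request (only the GetMethod changes per call). After releaseConnection() the client can keep the underlying connection to the same host alive, which should save connection setup cost at 40 requests per second.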

Related

Dagster - Tail process exception # compute logs

I am new to using Dagster. After much tinkering, I managed to create a partitioned pipeline.
However, when I tried to run >10 backfills using the Dagit UI, I encountered the error below.
Additionally, I have 5 ops but only 2 managed to run to completion and the remaining 3 ops were skipped; this resulted in the UI displaying success even though the run should have failed.
I do not encounter this issue if I run < 5 backfills in one go.
Any kind souls able to help on this? I will try to provide more info if necessary.
Not sure if it has anything to do with dagster.yaml, but I did include this section:
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 3
Exception:
Exception: Timed out waiting for tail process to start
File "o:\sin\ca1e\04_data science stuff\virtual environments\dagsterpipeline\lib\site-packages\dagster\core\execution\plan\execute_plan.py", line 96, in inner_plan_execution_iterator
stack.close()
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 533, in close
self.__exit__(None, None, None)
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 525, in __exit__
raise exc_details[1]
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 510, in __exit__
if cb(*exc_details):
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "o:\sin\ca1e\04_data science stuff\virtual environments\dagsterpipeline\lib\site-packages\dagster\core\storage\compute_log_manager.py", line 70, in watch
yield
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "o:\sin\ca1e\04_data science stuff\virtual environments\dagsterpipeline\lib\site-packages\dagster\core\storage\local_compute_log_manager.py", line 52, in _watch_logs
yield
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "o:\sin\ca1e\04_data science stuff\virtual environments\dagsterpipeline\lib\site-packages\dagster\core\execution\compute_logs.py", line 31, in mirror_stream_to_file
yield pids
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "o:\sin\ca1e\04_data science stuff\virtual environments\dagsterpipeline\lib\site-packages\dagster\core\execution\compute_logs.py", line 75, in tail_to_stream
yield pids
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "o:\sin\ca1e\04_data science stuff\virtual environments\dagsterpipeline\lib\site-packages\dagster\core\execution\compute_logs.py", line 104, in execute_windows_tail
raise Exception("Timed out waiting for tail process to start")

Getting MultiValueDictKeyError on Django while trying to receive a Stripe hook

I am getting a django.utils.datastructures.MultiValueDictKeyError while trying to receive a Stripe signal when a subscription's charge fails.
Here is the traceback:
Traceback (most recent call last):
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/sentry_sdk/integrations/django/views.py", line 67, in sentry_wrapped_callback
return callback(request, *args, **kwargs)
File "/usr/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/zebra/views.py", line 38, in webhooks
request_data = json.loads(request.POST["request_data"])
File "/home/aditya/dev/cn/pmx_env/lib/python3.7/site-packages/django/utils/datastructures.py", line 80, in __getitem__
raise MultiValueDictKeyError(key)
django.utils.datastructures.MultiValueDictKeyError: 'request_data'
[13/Nov/2021 09:36:29] "POST /zebra/webhooks/ HTTP/1.1" 500 121292
I am using
django 2.2
Python 3.7.12
Thanks in advance for any solutions and suggestions.
There is no need for json.loads; you can just use the request.POST.get() method like this:
request_data = request.POST.get("request_data")
This will return a None value if there is no request_data key in the dictionary.
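For illustration, here is a minimal sketch of a webhook view built around that idea (the view name is hypothetical; this is not the zebra handler itself):
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def stripe_webhook(request):  # hypothetical view name
    # .get() returns None instead of raising MultiValueDictKeyError
    request_data = request.POST.get("request_data")
    if request_data is None:
        # nothing to parse; acknowledge with a 2xx so Stripe does not keep retrying
        return HttpResponse(status=200)
    # ... process request_data here ...
    return HttpResponse(status=200)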

googletrans Translate() not working on Spyder but works on Colab

I am using the googletrans Translator on offline data in a local repo:
translator = Translator()
translations = []
for element in df['myText']:
    translations.append(translator.translate(element).text)
df['translations'] = translations
On Google Colab it works fine (20 mins), but on my machine it runs for 30 mins and then stops with a ReadTimeout error:
File "<ipython-input-9-2209313a9a78>", line 4, in <module>
translations.append(translator.translate(element).text)
File "C:\Anaconda3\lib\site-packages\googletrans\client.py", line 182, in translate
data = self._translate(text, dest, src, kwargs)
File "C:\Anaconda3\lib\site-packages\googletrans\client.py", line 83, in _translate
r = self.client.get(url, params=params)
File "C:\Anaconda3\lib\site-packages\httpx\_client.py", line 763, in get
timeout=timeout,
File "C:\Anaconda3\lib\site-packages\httpx\_client.py", line 601, in request
request, auth=auth, allow_redirects=allow_redirects, timeout=timeout,
File "C:\Anaconda3\lib\site-packages\httpx\_client.py", line 621, in send
request, auth=auth, timeout=timeout, allow_redirects=allow_redirects,
File "C:\Anaconda3\lib\site-packages\httpx\_client.py", line 648, in send_handling_redirects
request, auth=auth, timeout=timeout, history=history
File "C:\Anaconda3\lib\site-packages\httpx\_client.py", line 684, in send_handling_auth
response = self.send_single_request(request, timeout)
File "C:\Anaconda3\lib\site-packages\httpx\_client.py", line 719, in send_single_request
timeout=timeout.as_dict(),
File "C:\Anaconda3\lib\site-packages\httpcore\_sync\connection_pool.py", line 153, in request
method, url, headers=headers, stream=stream, timeout=timeout
File "C:\Anaconda3\lib\site-packages\httpcore\_sync\connection.py", line 78, in request
return self.connection.request(method, url, headers, stream, timeout)
File "C:\Anaconda3\lib\site-packages\httpcore\_sync\http11.py", line 62, in request
) = self._receive_response(timeout)
File "C:\Anaconda3\lib\site-packages\httpcore\_sync\http11.py", line 115, in _receive_response
event = self._receive_event(timeout)
File "C:\Anaconda3\lib\site-packages\httpcore\_sync\http11.py", line 145, in _receive_event
data = self.socket.read(self.READ_NUM_BYTES, timeout)
File "C:\Anaconda3\lib\site-packages\httpcore\_backends\sync.py", line 62, in read
return self.sock.recv(n)
File "C:\Anaconda3\lib\contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Anaconda3\lib\site-packages\httpcore\_exceptions.py", line 12, in map_exceptions
raise to_exc(exc) from None
ReadTimeout: The read operation timed out
My machine: 16 GB RAM (i5 + NVIDIA)
Google Colab RAM: 0.87 GB/12.72 GB
# Data Size
len(df) : 1800
Not sure why it doesn't run on my local machine; I have worked on heavier datasets before.
I am using Python 3 (Spyder 4.0).
I'm having some problems with this translation as well... It appears that the error you're getting has nothing to do with your machine, but with the request to the API timing out. Try passing a Timeout object from the httpx library to the Translator constructor. Something like this:
import httpx
timeout = httpx.Timeout(5) # 5 seconds timeout
translator = Translator(timeout=timeout)
You can change the 5 to another value if needed. It has solved the problem for me so far.
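If the calls still time out occasionally on a slower connection, one option is to wrap the translation in a small retry helper (a sketch assuming googletrans 3.x and the df from the question; the helper name, retry count and wait time are arbitrary):
import time
import httpx
from googletrans import Translator

translator = Translator(timeout=httpx.Timeout(5))  # 5-second timeout, as above

def translate_with_retry(text, retries=3, wait=2):
    """Hypothetical helper: retry a translation a few times before giving up."""
    for attempt in range(retries):
        try:
            return translator.translate(text).text
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(wait)

translations = [translate_with_retry(element) for element in df['myText']]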

Gremlin paging for big dataset query

I am using Gremlin Server; I have a big data set and I am performing Gremlin paging. The following is a sample of the query:
query = """g.V().both().both().count()"""
data = execute_query(query)
for x in range(0,int(data[0]/10000)+1):
print(x*10000, " - ",(x+1)*10000)
query = """g.V().both().both().range({0}*10000, {1}*10000)""".format(x,x+1)
data = execute_query(query)
def execute_query(query):
"""query execution"""
The above query works fine. For pagination, I have to know the range at which to stop executing the query; to get that range I first have to fetch the count and pass it to the for loop. Is there any other way to do pagination in Gremlin?
Pagination is required because it fails when fetching 100k records in a single query, e.g. g.V().both().both().count().
If we don't use pagination, it gives me the following error:
ERROR:tornado.application:Uncaught exception, closing connection.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f3e1c409ae8>)
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/ioloop.py", line 604, in _run_callback
ret = callback()
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 554, in wrapper
return callback(*args)
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 343, in wrapped
raise_exc_info(exc)
File "<string>", line 3, in raise_exc_info
File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 314, in wrapped
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 807, in _on_frame_data
self._receive_frame()
File "/usr/local/lib/python3.5/dist-packages/tornado/websocket.py", line 697, in _receive_frame
self.stream.read_bytes(2, self._on_frame_start)
File "/usr/local/lib/python3.5/dist-packages/tornado/iostream.py", line 312, in read_bytes
assert isinstance(num_bytes, numbers.Integral)
File "/usr/lib/python3.5/abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "/usr/lib/python3.5/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
Traceback (most recent call last):
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 59, in <module>
data = execute_query(query)
File "/home/rgupta/Documents/BitBucket/ecodrone/ecodrone/test2.py", line 53, in execute_query
results = future_results.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/resultset.py", line 81, in cb
f.result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 398, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/connection.py", line 77, in _receive
self._protocol.data_received(data, self._results)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
self.data_received(data, results_dict)
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
File "/usr/local/lib/python3.5/dist-packages/gremlin_python/driver/protocol.py", line 100, in data_received
(this line repeats 100 times)
This question is largely answered here, but I'll add some more comments.
Your approach to pagination is really expensive, as I'm not aware of any graph that will optimize that particular traversal, and you're basically iterating all that data many times over: you do it once for the count(), then you iterate the first 10000; for the second 10000 you iterate the first 10000 followed by the second 10000; on the third 10000 you iterate the first 20000 followed by the third 10000, and so on...
I'm not sure if there is more to your logic, but what you have looks like a form of "batching" to get smaller bunches of results. There isn't much need to do it that way, as Gremlin Server is already doing that for you internally. Were you to just send g.V().both().both(), Gremlin Server would batch up the results according to the resultIterationBatchSize configuration option.
Anyway, there isn't really a better way to make paging work that I am aware of beyond what was explained in the other question that I mentioned.
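For illustration, here is a minimal gremlin-python sketch of simply submitting the full traversal and letting the server stream the results back in batches (the host and port are assumptions):
from gremlin_python.driver import client

# connect to Gremlin Server (host/port are assumptions for this sketch)
gremlin_client = client.Client('ws://localhost:8182/gremlin', 'g')
try:
    # submit the whole traversal; the server streams the results back in
    # batches sized by resultIterationBatchSize, so no manual range() paging
    results = gremlin_client.submit('g.V().both().both()').all().result()
    print(len(results))
finally:
    gremlin_client.close()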

Error 403 Forbidden - After a corrupted uninstall of Live_Chat module

I have been testing Odoo for about 2 weeks now. My site was kind of up and running, but when I tried to uninstall the Live_chat module everything seemed to break apart... I have quite a lot of data on my test_odoo_machine, so it would be greatly appreciated if someone could help me out.
Instead of installing Odoo on Ubuntu or Windows like most other people, I wanted to try something different, so I went with the Docker version of Odoo. Everything was working fine until I tried to uninstall the live_chat module; then everything stopped working, and I have been searching and searching for a solution without luck.
Could someone please help me? Here is the log:
2017-07-24 10:53:14,202 1 INFO edh_vietnam werkzeug: 172.17.0.1 - - [24/Jul/2017 10:53:14] "GET / HTTP/1.1" 500 -
2017-07-24 10:53:14,210 1 ERROR edh_vietnam werkzeug: Error on request:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi
execute(self.server.app)
File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute
application_iter = app(environ, start_response)
File "/usr/lib/python2.7/dist-packages/odoo/service/server.py", line 246, in app
return self.app(e, s)
File "/usr/lib/python2.7/dist-packages/odoo/service/wsgi_server.py", line 184, in application
return application_unproxied(environ, start_response)
File "/usr/lib/python2.7/dist-packages/odoo/service/wsgi_server.py", line 170, in application_unproxied
result = handler(environ, start_response)
File "/usr/lib/python2.7/dist-packages/odoo/http.py", line 1306, in __call__
return self.dispatch(environ, start_response)
File "/usr/lib/python2.7/dist-packages/odoo/http.py", line 1280, in __call__
return self.app(environ, start_wrapped)
File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 588, in __call__
return self.app(environ, start_response)
File "/usr/lib/python2.7/dist-packages/odoo/http.py", line 1471, in dispatch
result = ir_http._dispatch()
File "/usr/lib/python2.7/dist-packages/odoo/addons/website/models/ir_http.py", line 191, in _dispatch
url_lang and request.website_multilang and request.lang != request.website.default_lang_code)
File "/usr/lib/python2.7/dist-packages/odoo/fields.py", line 869, in __get__
self.determine_value(record)
File "/usr/lib/python2.7/dist-packages/odoo/fields.py", line 971, in determine_value
record._prefetch_field(self)
File "/usr/lib/python2.7/dist-packages/odoo/models.py", line 3056, in _prefetch_field
result = records.read([f.name for f in fs], load='_classic_write')
File "/usr/lib/python2.7/dist-packages/odoo/models.py", line 2996, in read
self._read_from_database(stored, inherited)
File "/usr/lib/python2.7/dist-packages/odoo/models.py", line 3124, in _read_from_database
cr.execute(query_str, params)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 141, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 218, in execute
res = self._obj.execute(query, params)
ProgrammingError: column website.channel_id does not exist
LINE 1: ...d","website"."social_youtube" as "social_youtube","website"....
2017-07-24 10:53:14,484 1 INFO edh_vietnam odoo.sql_db: bad query: SELECT "website"."company_id" as "company_id","website"."social_linkedin" as "social_linkedin","website"."id" as "id","website"."compress_html" as "compress_html","website"."google_analytics_key" as "google_analytics_key","website"."create_uid" as "create_uid","website"."cdn_activated" as "cdn_activated","website"."default_lang_id" as "default_lang_id","website"."website_form_enable_metadata" as "website_form_enable_metadata","website"."name" as "name","website"."write_uid" as "write_uid","website"."social_youtube" as "social_youtube","website"."channel_id" as "channel_id","website"."social_twitter" as "social_twitter","website"."domain" as "domain","website"."user_id" as "user_id","website"."cdn_filters" as "cdn_filters","website"."social_facebook" as "social_facebook","website"."cdn_url" as "cdn_url","website"."create_date" as "create_date","website"."social_github" as "social_github","website"."default_lang_code" as "default_lang_code","website"."write_date" as "write_date","website"."social_googleplus" as "social_googleplus" FROM "website"
WHERE "website".id IN (1) ORDER BY "website"."id"
