Gunicorn + Pyramid 'OPTIONS' method has stopped working

I have an app server and an api server running with gunicorn; both are Pyramid apps.
The browser loads HTML and static files from the app server and then talks to the api server directly.
The browser needs to do a file upload, which is a cross-domain POST, so it sends a preflight OPTIONS request first. This has always worked. In the view that handles that OPTIONS request I set:
response.headers['Access-Control-Allow-Origin'] = origin
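The preflight view is essentially just the following (a simplified sketch; the view name and the extra Access-Control-* headers are placeholders, only the Access-Control-Allow-Origin line is taken from the real code):
from pyramid.response import Response

def upload_options_view(request):
    # Answer the cross-domain preflight so the browser will send the real POST.
    origin = request.headers.get('Origin', '')
    response = Response()
    response.headers['Access-Control-Allow-Origin'] = origin
    response.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS'
    response.headers['Access-Control-Allow-Headers'] = 'Content-Type'
    return response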
The issue is that, on my local machine, the OPTIONS call to the api server has started timing out. The first time it fails, this error shows up:
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/workers/sync.py", line 87, in handle
req = six.next(parser)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/parser.py", line 39, in __next__
self.mesg = self.mesg_class(self.cfg, self.unreader, self.req_count)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 152, in __init__
super(Request, self).__init__(cfg, unreader)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 49, in __init__
unused = self.parse(self.unreader)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 167, in parse
line, rbuf = self.read_line(unreader, buf, self.limit_request_line)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 217, in read_line
self.get_data(unreader, buf)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 155, in get_data
data = unreader.read()
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/unreader.py", line 38, in read
d = self.chunk()
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/unreader.py", line 65, in chunk
return self.sock.recv(self.mxchunk)
error: [Errno 4] Interrupted system call
2013-05-27 11:37:17 [19097] [INFO] Handling signal: winch
2013-05-27 11:37:17,533 ERROR [gunicorn.error][MainThread] Error processing request.
After that it stops working altogether. All other calls still work, and the OPTIONS call used to work; the same code runs fine on our test servers. I am not sure what is broken or why, and it has left me frustrated.

Related

Pandas - Suppress call to _validate_archive? - Python 3.9.13/Pandas 1.4.2/Openpyxl 3.0.10

I have been digging through questions and answers about the BadZipFile exception raised when calling read_excel() with the openpyxl engine. I looked at my error stack and dug into the Python source, and it looks like zipfile.py is being very strict when validating an archive: it looks for an EOCD (end of central directory) signature in my XLSX file.
When unzipping an archive whose EOCD cannot be found or validated, unzip on Linux is supposed to report an error, but I am not seeing one, so I am unsure whether the EOCD is there and correct (does anyone know of a tool to check?).
However, from looking through my stack (below) I am examining what happens in openpyxl/reader/excel.py. At line 67 the _validate_archive function is defined, and I am wondering about its check for a "file-like object".
My use case is an AWS Lambda function with an HTTP endpoint. I POST an Excel file to the endpoint (I am testing with Postman, selecting my Excel file as the binary body of the request). The function needs to handle both CSV and XLSX, so I include a custom header with the original file name, split the filename, look at the extension, and call either read_csv or read_excel. read_csv works great.
Either way, the file arrives as base64. For an XLSX file, Pandas handles this fine right up until _validate_archive. What I am unsure of is how the "file like object" check at line 76...
is_file_like = hasattr(filename, 'read')
... interacts with the type in which the base64 body is handed over. I have tried a straight string (event["body"]), a bytes() object, a BytesIO object, and a StringIO object, all resulting in the same BadZipFile exception.
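For concreteness, here is a simplified sketch of the variants I am trying (whether and where the base64 decode belongs is exactly the part I am unsure about; the decode step below is my assumption, not something I know is right):
import base64
import io

import pandas as pd

def lambda_handler(event, context):
    body = event["body"]                               # straight string (base64 text)
    as_bytes = bytes(body, "utf-8")                    # bytes() object
    as_bytes_io = io.BytesIO(base64.b64decode(body))   # BytesIO over the decoded bytes
    as_string_io = io.StringIO(body)                   # StringIO over the raw string
    # All four variants have ended in the same BadZipFile for me so far; the
    # BytesIO one is the case I would expect hasattr(filename, 'read') to accept.
    return pd.read_excel(as_bytes_io, engine='openpyxl', index_col=0)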
So... is it possible in Pandas/openpyxl to suppress the validation of the archive? I want to call read_excel() without the archive being validated and see what happens.
My error stack:
Error: (<class 'zipfile.BadZipFile'>, BadZipFile('File is not a zip file'), <traceback object at 0x7f1019589dc0>)
<class 'zipfile.BadZipFile'>
File is not a zip file
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 20, in lambda_handler
    inv = pd.read_excel( bufferedString, engine='openpyxl', index_col=0 )
  File "/opt/python/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/opt/python/pandas/io/excel/_base.py", line 457, in read_excel
    io = ExcelFile(io, storage_options=storage_options, engine=engine)
  File "/opt/python/pandas/io/excel/_base.py", line 1419, in __init__
    self._reader = self._engines[engine](self._io, storage_options=storage_options)
  File "/opt/python/pandas/io/excel/_openpyxl.py", line 525, in __init__
    super().__init__(filepath_or_buffer, storage_options=storage_options)
  File "/opt/python/pandas/io/excel/_base.py", line 518, in __init__
    self.book = self.load_workbook(self.handles.handle)
  File "/opt/python/pandas/io/excel/_openpyxl.py", line 536, in load_workbook
    return load_workbook(
  File "/opt/python/openpyxl/reader/excel.py", line 315, in load_workbook
    reader = ExcelReader(filename, read_only, keep_vba,
  File "/opt/python/openpyxl/reader/excel.py", line 124, in __init__
    self.archive = _validate_archive(fn)
  File "/opt/python/openpyxl/reader/excel.py", line 96, in _validate_archive
    archive = ZipFile(filename, 'r')
  File "/var/lang/lib/python3.9/zipfile.py", line 1264, in __init__
    self._RealGetContents()
  File "/var/lang/lib/python3.9/zipfile.py", line 1331, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

Websocket error: "Handshake status 403 Forbidden"

I have a problem with code I wrote with the help of this documentation: https://exchange.blockchain.com/api/#introduction.
The code should send messages, receive messages, and then work with the received messages, three times per day (morning, noon and evening).
However, the code only seems to work sometimes, for up to 6 days (a record, yay!), and then it stops working and I get this error output:
Traceback (most recent call last):
  File "<ipython-input-1-052db9d33d34>", line 1, in <module>
    runfile('Pathtofile/File.py', wdir='workingdirectory')
  File "C:\Users\iamdabest\Downloads\latexNpython\WinPython64-3.6.8.0\python-3.6.8.amd64\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)
  File "C:\Users\orami?\Downloads\latexNpython\WinPython64-3.6.8.0\python-3.6.8.amd64\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "Pathtofile/File.py", line 235, in <module>
    main()
  File "Pathtofile/File.py", line 229, in main
    trigger(updater.bot)
  File "Pathtofile/File.py", line 171, in trigger
    y = price_func()[0]
  File "Pathtofile/File.py", line 46, in price_func
    ws = create_connection(url, **options)
  File "C:\Users\name\Downloads\latexNpython\WinPython64-3.6.8.0\python-3.6.8.amd64\lib\site-packages\websocket\_core.py", line 606, in create_connection
    websock.connect(url, **options)
  File "C:\Users\name\Downloads\latexNpython\WinPython64-3.6.8.0\python-3.6.8.amd64\lib\site-packages\websocket\_core.py", line 253, in connect
    self.handshake_response = handshake(self.sock, *addrs, **options)
  File "C:\Users\name\Downloads\latexNpython\WinPython64-3.6.8.0\python-3.6.8.amd64\lib\site-packages\websocket\_handshake.py", line 57, in handshake
    status, resp = _get_resp_headers(sock)
  File "C:\Users\name\Downloads\latexNpython\WinPython64-3.6.8.0\python-3.6.8.amd64\lib\site-packages\websocket\_handshake.py", line 143, in _get_resp_headers
    raise WebSocketBadStatusException("Handshake status %d %s", status, status_message, resp_headers)
WebSocketBadStatusException: Handshake status 403 Forbidden
Please keep in mind that I was never taught this stuff by a trained professional; all the knowledge I have comes from playing around with Python (plotting and analyzing data) and the help of the internet.
I don't really understand why it works for a few days and then suddenly stops.
My questions:
I actually have no problem with the code not giving an output, as long as it keeps running. So how can I tell the code to 'ignore' the error and keep going, or automatically restart if the error occurs? I tried simple things like an if statement or try/except, but I wasn't successful (see the sketch after the code below for the kind of thing I mean).
Does the error mean the authentication between server and client failed, or what does it mean?
Is there another workaround? I already read somewhere about using another websocket library (websockets), but I wasn't able to get it working.
Here is a minimal working example of my code:
from websocket import create_connection
import json

options = {}
options['origin'] = 'https://exchange.blockchain.com'
url = "wss://ws.prod.blockchain.info/mercury-gateway/v1/ws"

def price_func():
    ws = create_connection(url, **options)
    msg = '{"token": "mysecretAPItoken", "action": "subscribe", "channel": "auth"}'  # here I subscribe
    ws.send(msg)
    msg = '{"action": "subscribe", "channel": "balances"}'  # here I ask for the json messages
    ws.send(msg)
    result1 = json.loads(ws.recv())  # first answer is the confirmation of connecting
    result2 = json.loads(ws.recv())  # save the received answer to work with it later
    msg = '{"token": "mysecretAPItoken", "action": "unsubscribe", "channel": "balances"}'  # unsubscribing; not sure if I need to, but it seems cleaner - if something goes wrong it unsubscribes
    ws.send(msg)
    msg = '{"token": "mysecretAPItoken", "action": "unsubscribe", "channel": "auth"}'  # unsubscribing; same as 3 lines above
    ws.send(msg)
    ws.close()
    return result2['balances'][0]['rate'], result2['balances'][1]['rate']  # gives me the prices

x = price_func()[0]
print('Price of Shitcoin is around ' + str(round(x, 2)))
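To make the first question concrete, this is roughly the kind of wrapper I have been attempting (a sketch only; the retry count and wait time are arbitrary, and I have not gotten something like this to behave reliably):
import time
from websocket import WebSocketException  # parent class of WebSocketBadStatusException

def price_func_with_retry(retries=3, wait_seconds=60):
    # Call price_func(), swallow handshake failures, wait and try again a few times.
    for attempt in range(retries):
        try:
            return price_func()
        except WebSocketException as err:
            print('attempt %d failed: %s' % (attempt + 1, err))
            time.sleep(wait_seconds)
    raise RuntimeError('price_func() failed %d times in a row' % retries)

x = price_func_with_retry()[0]
print('Price of Shitcoin is around ' + str(round(x, 2)))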
Thx for any help.

Snowflake connector.connect breaks with an error message that tells me nothing

I installed the snowflake connector via the command: pip3 install snowflake-connector-python[pandas]==2.3.3 asn1crypto==1.3.0 --user
I attempted to connect via:
from snowflake import connector

con = connector.connect(
    host='.snowflakecomputing.com',
    user='THE USER I USE FOR LOGGING IN TO MY TRIAL ACCOUNT',
    password='THE PASSWORD I USE FOR LOGGING IN TO MY TRIAL ACCOUNT',
    account='zka81761.us-east-1',
    warehouse='COMPUTE_WH',
    database='DEMO_DB',
    schema='PUBLIC',
    protocol='https',
    port=443)
When I execute the above code, it just hangs for several minutes and then I get an error:
snowflake.connector.errors.OperationalError: 250003: Failed to execute request: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
The longer version is:
File "tests/integration_tests/data_sources/test_snowflake_ds.py", line 6, in test_snowflake_ds
ds = SnowflakeDS(query='SELECT * FROM HEALTHCARE_COSTS', host='.snowflakecomputing.com', user='GEORGE3D6', password='a passwordd', account='zka81761.us-east-1', warehouse='COMPUTE_WH', database='DEMO_DB', schema='PUBLIC', protocol='https', port=443)
File "/home/george/mindsdb_native/mindsdb_native/libs/data_types/data_source.py", line 13, in __init__
df, col_map = self._setup(*args, **kwargs)
File "/home/george/mindsdb_native/mindsdb_native/libs/data_sources/snowflake_ds.py", line 21, in _setup
port=port)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/__init__.py", line 52, in Connect
return SnowflakeConnection(**kwargs)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 219, in __init__
self.connect(**kwargs)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 414, in connect
self.__open_connection()
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 613, in __open_connection
self._authenticate(auth_instance)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 839, in _authenticate
self.__authenticate(self.__preprocess_auth_instance(auth_instance))
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/connection.py", line 869, in __authenticate
session_parameters=self._session_parameters,
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/auth.py", line 209, in authenticate
socket_timeout=self._rest._connection.login_timeout)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/network.py", line 509, in _post_request
_include_retry_params=_include_retry_params)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/network.py", line 586, in fetch
**kwargs)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/network.py", line 676, in _request_exec_wrapper
conn, full_url, cause)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/network.py", line 706, in handle_invalid_certificate_error
'errno': ER_FAILED_TO_REQUEST,
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/errors.py", line 128, in errorhandler_wrapper
connection.errorhandler(connection, cursor, error_class, error_value)
File "/home/george/.local/lib/python3.7/site-packages/snowflake/connector/errors.py", line 90, in default_errorhandler
done_format_msg=error_value.get('done_format_msg'))
snowflake.connector.errors.OperationalError: 250003: Failed to execute request: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
This error message tells me nothing; any help would be appreciated.
According to the documentation on the Python API, the host field is no longer used, so try removing it. Also, even if it were used, you haven't enclosed it properly in quotes:
You have: host='.snowflakecomputing.com,
Should be: host='.snowflakecomputing.com',
First, I'd see if removing the host completely fixes your issue since it shouldn't be used anyway.
Googling the error, and the error message itself, suggests that the issue is due to the URL being too long, so I'd say the error comes down to the host not being enclosed properly.
I used to connect to Snowflake from Python like this:
import snowflake.connector as sf

sfconnection = sf.connect(
    user='THE USER I USE FOR LOGGING IN TO MY TRIAL ACCOUNT',
    password='THE PASSWORD I USE FOR LOGGING IN TO MY TRIAL ACCOUNT',
    account='zka81761.us-east-1',
    warehouse='COMPUTE_WH',
    database='DEMO_DB',
    schema='PUBLIC')
Apparently the docs are wrong and the host should now be the full URL (so, in my case zka81761.us-east-1.snowflakecomputing.com). That is to say, it should include the account.
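In other words, a sketch of the call that makes sense to me now, with the host spelled out in full (the credentials are placeholders for my trial account; host can also simply be dropped, as in the answer above):
from snowflake import connector

con = connector.connect(
    host='zka81761.us-east-1.snowflakecomputing.com',  # full hostname, including the account
    user='MY_TRIAL_USER',                              # placeholder
    password='MY_TRIAL_PASSWORD',                      # placeholder
    account='zka81761.us-east-1',
    warehouse='COMPUTE_WH',
    database='DEMO_DB',
    schema='PUBLIC',
    protocol='https',
    port=443)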

Rasa ReminderScheduled makes the program crash, error with timezone

I tried to use Rasa's ReminderScheduled as specified in the docs. I'm using Windows 10 with the Ubuntu subsystem to run the code. The code that calls the reminder is the following:
export_timeout = datetime.timedelta(seconds=30)

class ActionGiveListProducts(Action):
    def name(self):
        return 'action_give_list_products'

    def run(self, dispatcher, tracker, domain):
        s = getInfo("listeproduits")
        dispatcher.utter_message(s)
        return [ReminderScheduled("action_export_logs", datetime.datetime.now() + export_timeout)]
Executing this Action causes the following error:
Traceback (most recent call last):
File "bot.py", line 136, in <module>
run()
File "bot.py", line 108, in run
agent.handle_channel(ConsoleInputChannel())
File "/usr/local/lib/python3.5/dist-packages/rasa_core/agent.py", line 126, in handle_channel
processor.handle_channel(input_channel)
File "/usr/local/lib/python3.5/dist-packages/rasa_core/processor.py", line 60, in handle_channel
input_channel.start_sync_listening(self.handle_message)
File "/usr/local/lib/python3.5/dist-packages/rasa_core/channels/console.py", line 52, in start_sync_listening
self._record_messages(message_handler)
File "/usr/local/lib/python3.5/dist-packages/rasa_core/channels/console.py", line 45, in _record_messages
self.sender_id))
File "/usr/local/lib/python3.5/dist-packages/rasa_core/processor.py", line 83, in handle_message
self._predict_and_execute_next_action(message, tracker)
File "/usr/local/lib/python3.5/dist-packages/rasa_core/processor.py", line 262, in _predict_and_execute_next_action
dispatcher)
File "/usr/local/lib/python3.5/dist-packages/rasa_core/processor.py", line 312, in _run_action
self._schedule_reminders(events, dispatcher)
File "/usr/local/lib/python3.5/dist-packages/rasa_core/processor.py", line 296, in _schedule_reminders
replace_existing=True)
File "/usr/local/lib/python3.5/dist-packages/apscheduler/schedulers/base.py", line 413, in add_job
'trigger': self._create_trigger(trigger, trigger_args),
File "/usr/local/lib/python3.5/dist-packages/apscheduler/schedulers/base.py", line 907, in _create_trigger
return self._create_plugin_instance('trigger', trigger, trigger_args)
File "/usr/local/lib/python3.5/dist-packages/apscheduler/schedulers/base.py", line 892, in _create_plugin_instance
return plugin_cls(**constructor_kwargs)
File "/usr/local/lib/python3.5/dist-packages/apscheduler/triggers/date.py", line 20, in __init__
timezone = astimezone(timezone) or get_localzone()
File "/usr/local/lib/python3.5/dist-packages/apscheduler/util.py", line 86, in astimezone
'Unable to determine the name of the local timezone -- you must explicitly '
ValueError: Unable to determine the name of the local timezone -- you must explicitly
specify the name of the local timezone.
Please refrain from using timezones like EST to prevent problems with daylight saving time.
Instead, use a locale based timezone name (such as Europe/Helsinki).
I tried to set the timezone in the launch code as follows:
os.environ['TZ'] = 'Europe/London'
time.tzset()
but this didn't change anything. I also searched for other solutions, but found nothing relevant.
Does someone know what exactly causes this error and whether there is a way to eliminate it?
I assume you run this in a Linux environment (as I had the same error), so try the following:
Keep what you had in the Python code:
os.environ['TZ'] = 'Europe/London'
and also set the timezone in the OS:
sudo cp /usr/share/zoneinfo/Europe/London /etc/localtime
It worked for me.
The following works on the Ubuntu subsystem for Windows:
sudo cp /usr/share/zoneinfo/America/New_York /etc/localtime
TZ=America/New_York rasa x
You can also make the TZ environment variable permanent by adding export TZ=America/New_York in your ~/.bashrc file.

Paginated request using python-asana API

I am trying to export all tasks from all of my Asana workspaces using the python-asana API, but at some point it exits with the following error message.
Traceback (most recent call last):
File "export.py", line 56, in <module>
for index, task in enumerate(tasks):
File "build\bdist.win32\egg\asana\page_iterator.py", line 58, in items
File "build\bdist.win32\egg\asana\page_iterator.py", line 54, in next
File "build\bdist.win32\egg\asana\page_iterator.py", line 43, in __next__
File "build\bdist.win32\egg\asana\page_iterator.py", line 74, in get_next
File "build\bdist.win32\egg\asana\client.py", line 104, in get
File "build\bdist.win32\egg\asana\client.py", line 75, in request
asana.error.InvalidRequestError: Invalid Request: Your pagination token has expired.
I read that to solve this we need to make paginated requests, so I tried passing a limit to my request as follows:
tasks = client.tasks.find_all({'project' : project['id']}, limit=50)
But there was no difference: I was not getting any 'next_page' value, even though there were more than 50 tasks in the project.
So my question is:
How can I make paginated requests using the python-asana API? An explanation with an example would be best!
EDIT:
I am fetching the tasks as follows:
tasks = client.tasks.find_all({'project' : project['id']}, item_limit=1)
print "Tasks", tasks  # Prints generator object
for index, task in enumerate(tasks):
    complete_task = client.tasks.find_by_id(task["id"])
    print complete_task  # Prints complete task dictionary
Now my question is: where will I get the next_page content for the remaining tasks, and how do I access it?
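For reference, this is my current understanding of the two ways the client can paginate (a rough sketch based on the python-asana README; the option names page_size, iterator_type, full_payload and offset are as I understand them there, and I have not verified them against the exact version bundled in my egg):
# Option 1: let the iterator paginate by itself. page_size controls how many
# tasks are fetched per underlying request; the iterator requests the next
# page as the loop consumes the current one.
tasks = client.tasks.find_all({'project': project['id']}, page_size=50)
for task in tasks:
    complete_task = client.tasks.find_by_id(task["id"])
    print complete_task

# Option 2: handle pages manually. Asking for the raw payload should expose
# next_page, whose offset token is then passed back in on the follow-up call.
page = client.tasks.find_all({'project': project['id']},
                             limit=50, iterator_type=None, full_payload=True)
for task in page['data']:
    print task["id"]
next_page = page.get('next_page')  # None when there is nothing left
if next_page:
    page = client.tasks.find_all({'project': project['id']},
                                 limit=50, offset=next_page['offset'],
                                 iterator_type=None, full_payload=True)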
