In mlflow under Python, the get_latest_versions method fails with "405 Method Not Allowed", but the get_model_version method works fine - mlflow

For mlflow, the get_model_version(model_name, model_version) method works fine,
but the get_latest_versions method fails as shown below. The same model that I can retrieve by its version number is also assigned the stage 'Production', but I can't get it by specifying this stage. On the client side I have mlflow 1.24.0 (I tried 1.23.0 and 1.23.1 too) under Python 3.9.7 (tried 3.8.8 too) on Windows 10. I believe the server runs mlflow 1.18.
import mlflow
from mlflow.tracking import MlflowClient
...
mlflow.set_tracking_uri('https://fs-mlflow-dev.whatever.whatever.com/')
...
client = MlflowClient()
model_stage='Production'
_vers = client.get_latest_versions(model_name, [model_stage])
Traceback (most recent call last):
File "C:\Users\926304897\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\926304897\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\926304897\Documents\GitHub\CreditModeling\src\models\test_decision.py", line 92, in <module>
main(confidence_interval=conf_int)
File "C:\Users\926304897\Documents\GitHub\CreditModeling\src\models\test_decision.py", line 54, in main
cdm.load_model( model_stage=LOAD_STAGE)#model_version=27 )
File "C:\Users\926304897\Documents\GitHub\CreditModeling\src\models\decision_model.py", line 1128, in load_model
_vers = client.get_latest_versions(model_name, [model_stage])
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\tracking\client.py", line 2048, in get_latest_versions
return self._get_registry_client().get_latest_versions(name, stages)
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\tracking\_model_registry\client.py", line 149, in get_latest_v
ersions
return self.store.get_latest_versions(name, stages)
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\store\model_registry\rest_store.py", line 197, in get_latest_v
ersions
response_proto = self._call_endpoint(GetLatestVersions, req_body, call_all_endpoints=True)
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\store\model_registry\rest_store.py", line 61, in _call_endpoin
t
return call_endpoints(self.get_host_creds(), endpoints, json_body, response_proto)
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\utils\rest_utils.py", line 251, in call_endpoints
return call_endpoint(host_creds, endpoint, method, json_body, response_proto)
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\utils\rest_utils.py", line 240, in call_endpoint
response = verify_rest_response(response, endpoint)
File "C:\Users\926304897\AppData\Local\JetBrains\PyCharmCE2021.3\demo\PyCharmLearningProject\venv\lib\site-packages\mlflow\utils\rest_utils.py", line 175, in verify_rest_response
raise MlflowException("%s. Response body: '%s'" % (base_msg, response.text))
mlflow.exceptions.MlflowException: API request to endpoint /api/2.0/mlflow/registered-models/get-latest-versions failed with error code 405 != 200. Response body: '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>

I had this same problem running the MLflow client at 1.23.1 and noticed the server was at 1.14. I upgraded the server to 1.26 and the problem went away.
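If upgrading the server isn't an option right away, a possible client-side workaround (a hedged sketch of my own, not part of the answer above; model_name and model_stage are the variables from the question) is to resolve the stage with search_model_versions and filter locally:
import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Fetch all registered versions of the model and filter by stage on the client side.
candidates = client.search_model_versions(f"name='{model_name}'")
staged = [mv for mv in candidates if mv.current_stage == model_stage]
# Pick the highest version number among the matching versions, if any exist.
latest = max(staged, key=lambda mv: int(mv.version)) if staged else None
Whether this avoids the 405 depends on which endpoints the older server exposes, so treat it as something to try rather than a guaranteed fix.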
I have a model registered as TestModel (v1) and flagged as Production.
I have an S3 back-end with credentials set as ENV vars along with the following:
MLFLOW_ARTIFACT_LOCATION
MLFLOW_S3_ENDPOINT_URL
MLFLOW_TRACKING_URI
import mlflow
model = mlflow.pyfunc.load_model(model_uri="models:/TestModel/1") # Works
model = mlflow.pyfunc.load_model(model_uri="models:/TestModel/Production") # Works
model = mlflow.pyfunc.load_model(model_uri="models:/TestModel/Staging") # Fails as expected with:
# mlflow.exceptions.MlflowException: No versions of model with name 'TestModel' and stage 'Staging' found

Related

Why can't I upload a file to Dropbox through a proxy?

The urllib3 library installed in my OS:
pip list |grep urllib
urllib3 1.25.11
I want to upload a local file to Dropbox through a proxy:
import dropbox
access_token = "xxxxxx"
file_from = "local_file"
file_to = "/directory_in_dropbox"
proxyDict = {
    "http": "http://127.0.0.1:8123",
    "https": "https://127.0.0.1:8123"
}
mysesh = dropbox.create_session(1,proxyDict)
dbx = dropbox.Dropbox(access_token,session=mysesh)
with open(file_from, 'rb') as f:
    dbx.files_upload(f.read(), file_to)
It encounters these errors:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/base.py", line 3208, in files_upload
r = self.request(
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 326, in request
res = self.request_json_string_with_retry(host,
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 476, in request_json_string_with_retry
return self.request_json_string(host,
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 589, in request_json_string
r = self._session.post(url,
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 590, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 966, in _prepare_proxy
conn.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 500, in _connect_tls_proxy
return ssl_wrap_socket(
File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 453, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 495, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.9/ssl.py", line 997, in _create
raise ValueError("check_hostname requires server_hostname")
ValueError: check_hostname requires server_hostname
Writing the proxy dict as below doesn't help either:
proxyDict = {
    "http": "http://127.0.0.1:8123",
    "https": "http://127.0.0.1:8123"
}
The proxy 127.0.0.1:8123 works fine; I can download resources from the web through a proxy with the youtube-dl command:
youtube-dl --proxy http://127.0.0.1:8118 $url
Updated for Paulo's advice:
Updated for Markus' advice:
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
ssl.SSLContext.verify_mode = property(lambda self: ssl.CERT_NONE, lambda self, newval: None)
import dropbox
access_token = "xxxxxxxx"
file_from = "/home/debian/sample.sql"
file_to = "/mydoc"
proxyDict = {
    "http": "http://127.0.0.1:8123",
    "https": "https://127.0.0.1:8123"
}
mysesh = dropbox.create_session(1,proxyDict)
dbx = dropbox.Dropbox(access_token,session=mysesh)
with open(file_from, 'rb') as f:
    dbx.files_upload(f.read(), file_to)
It encounters the error below:
/home/debian/.local/lib/python3.9/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host '127.0.0.1'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
warnings.warn(
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/base.py", line 3208, in files_upload
r = self.request(
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 326, in request
res = self.request_json_string_with_retry(host,
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 476, in request_json_string_with_retry
return self.request_json_string(host,
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 596, in request_json_string
self.raise_dropbox_error_for_resp(r)
File "/home/debian/.local/lib/python3.9/site-packages/dropbox/dropbox_client.py", line 639, in raise_dropbox_error_for_resp
raise AuthError(request_id, err)
dropbox.exceptions.AuthError: AuthError('xxxxxxxxxxxxxxxxxxxxxx', AuthError('invalid_access_token', None))
Update for Life is complex's advice:
I tried many times to get mysesh = dropbox.create_session(1,proxyDict) to work correctly.
I decided to look at the code for dropbox-sdk-python and noted that it is calling requests.Session(), so I decided to use that instead of dropbox.create_session().
import requests
from dropbox import Dropbox
from dropbox.files import WriteMode
access_token = "my_access_token"
file_from = 'test.docx'
file_to = '/test.docx'
# https://free-proxy-list.net
proxyDict = {
    "http": "http://50.218.57.65:80",
    "https": "https://83.229.73.175:80"
}
s = requests.Session()
s.proxies = proxyDict
dbx = Dropbox(access_token, session=s)
with open(file_from, 'rb') as f:
    file_content = f.read()
dbx.files_upload(f=file_content, path=file_to, mode=WriteMode.overwrite, mute=False)
Here is a screenshot of the file being written to DropBox.
I have tried this code with multiple proxy servers and it works each time.
Tldr;
So far, my understanding is it may be:
Misuse of urllib
Bad HTTPS certificates
Solution (maybe)
urllib format
If I remember well, urllib changed its proxy format at some point from
proxyDict = {
    'http':'8.88.888.8:8888',
    'https':'8.88.888.8:8888'
}
to
proxyDict = {
    'https': 'https://8.88.888.8:8888',
    'http': 'http://8.88.888.8:8888',
}
Have you tried both formats?
You must have a problem with one of these:
your proxy not forwarding some stuff the right way, or
your access token being wrong, or
the Dropbox app having the wrong permissions set,
because this code (which is basically what you have in your question - even without disabling the SSL certificate check!) works just fine with my access token put into the environment variable DROPBOX_ACCESS_TOKEN.
import dropbox
import sys
import os
DROPBOX_ACCESS_TOKEN = os.getenv('DROPBOX_ACCESS_TOKEN')
def uploadFile(fromFilePath, toFilePath):
    proxy = '127.0.0.1:3128'  # locally installed squid proxy server
    proxyDict = {
        "http": "http://" + proxy,
        "https": "http://" + proxy  # connection to proxy is http!!
    }
    session = dropbox.create_session(1, proxyDict)
    client = dropbox.Dropbox(DROPBOX_ACCESS_TOKEN, session)
    client.files_upload(open(fromFilePath, "rb").read(), toFilePath)
    print("Done uploading {} to {}".format(fromFilePath, toFilePath))

if __name__ == "__main__":
    uploadFile(sys.argv[1], sys.argv[2])
Be aware though, that the access token - once it is generated - has the permissions that were in effect at the time of token generation. If you change the app's permissions AFTER generating the token, the token will still have the original permissions!
EDIT: It looks like the Dropbox API is clever enough to NOT use the proxy if it can reach the target directly. Thus this code works with ANYTHING you put into the proxyDict, and it is not at all clear whether the code would still work if it really had to go through the proxy. I am working on verifying that and will update the answer accordingly.
Update: I installed squid on my MacBook and used http://127.0.0.1:3128 as the proxy in the above code, but the logs showed the code never even tried to go through the proxy. But once I set the environment variables http_proxy and https_proxy to "http://127.0.0.1:3128", the request WOULD go through the proxy and proceed successfully. So... either there is something going on that I don't fully understand, or the Dropbox API has some problem with the proxy definitions in the create_session call. Time to look at the API source code, I guess...
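As a rough sketch of that environment-variable route (my own illustration, not code from the answer; requests-based sessions read http_proxy/https_proxy by default, so the Dropbox SDK's underlying session should inherit them):
import os

# Set the proxy environment variables before the Dropbox client makes any request;
# requests honours them by default (trust_env is True on its sessions).
os.environ["http_proxy"] = "http://127.0.0.1:3128"
os.environ["https_proxy"] = "http://127.0.0.1:3128"

import dropbox

dbx = dropbox.Dropbox(os.getenv("DROPBOX_ACCESS_TOKEN"))
dbx.files_upload(open("local_file", "rb").read(), "/directory_in_dropbox")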
Thanks to Life is complex's code, I added the permission on Files and folders,
re-generated the Dropbox token, and executed the same code (nothing changed) with the new token - done!
It had nothing to do with the proxy settings, just the Dropbox settings!

Object has no attribute 'Client' error when trying to connect to BigQuery table in Google Cloud Function

I am trying to connect to a BigQuery table from a google cloud function but my function returns the following error:
Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging Details:
500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
When I inspect the logs, I see the following error:
Traceback (most recent call last):
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py", line 99, in view_func
return function(request._get_current_object())
File "/workspace/main.py", line 72, in entry
client = bigquery.Client()
AttributeError: 'function' object has no attribute 'Client'
I don't understand why the attribute 'Client' doesn't exist. I believe I have imported bigquery correctly. Here is my entry point function with the simple test script:
import os
import json
from google.cloud import bigquery
def entry(request):
    client = bigquery.Client()
    query_job = client.query("""SELECT MAX(day) FROM `name of project and bigquery table`""")
    for i in query_job:
        print(i[0])
    return "ok"
Here is my requirements.txt file
google-cloud-bigquery
sqlalchemy==1.4.37
pandas==1.4.2
PyMySQL==1.0.2
Any pointers would be greatly appreciated.
UPDATE
I had another function within my cloud function which was called bigquery. I think this caused a conflict with my import of bigquery in the third line of my code. I believe this is the root cause of my issue.
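For illustration (a hypothetical reconstruction of the clash described above, not the actual cloud function code): a helper defined as def bigquery(...) rebinds the name from the imported module to a function, which is exactly what the AttributeError complains about; renaming the helper, or importing the module under an alias, avoids it.
from google.cloud import bigquery as bq  # alias keeps the module name safe from local clashes

def bigquery(request):      # a local function with this name would shadow a plain `bigquery` import
    return "something else"

def entry(request):
    client = bq.Client()    # still works, because the module is bound to `bq`
    query_job = client.query("SELECT MAX(day) FROM `name of project and bigquery table`")
    for row in query_job:
        print(row[0])
    return "ok"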

Why do request info logs generate in Quart but not Hypercorn?

I am trying to enable all requests to be logged (to a centralized logging system) in a Quart microservice. However, this only occurs when running directly in Python; when running under Hypercorn it only logs major events and errors.
Running from PyCharm does generate the logs (to console, and to centralized log):
# TryLogging.py
import logging
from quart import Quart
app = Quart(__name__)
app.logger.setLevel(logging.INFO)
#app.route("/")
def callme():
return "I'm alive!"
#app.route("/fake_fail")
def failme():
raise Exception("Fake exception")
if __name__ == "__main__":
app.run()
generates console logs:
* Serving Quart app 'TryLogging'
* Environment: production
* Please use an ASGI server (e.g. Hypercorn) directly in production
* Debug mode: False
* Running on http://127.0.0.1:5000 (CTRL + C to quit)
[2022-01-10 15:55:48,323] Running on http://127.0.0.1:5000 (CTRL + C to quit)
[2022-01-10 15:55:50,080] 127.0.0.1:63560 GET / 1.1 200 10 4515
[2022-01-10 15:55:54,480] 127.0.0.1:63560 GET /fake_fail 1.1 500 290 1999
[2022-01-10 15:55:54,478] ERROR in app: Exception on request GET /fake_fail
Traceback (most recent call last):
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1489, in handle_request
return await self.full_dispatch_request(request_context)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1514, in full_dispatch_request
result = await self.handle_user_exception(error)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 964, in handle_user_exception
raise error
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1512, in full_dispatch_request
result = await self.dispatch_request(request_context)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1557, in dispatch_request
return await self.ensure_async(handler)(**request_.view_args)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\utils.py", line 66, in _wrapper
result = await loop.run_in_executor(
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\brend\Documents\GitHub\ms-abs-boundaries\src\TryLogging.py", line 15, in failme
raise Exception("Fake exception")
Exception: Fake exception
However, when running through Hypercorn in a terminal (as it is launched in production) and calling the endpoint from a browser:
(ms-abs-boundaries) PS C:\Users\brend\Documents\GitHub\ms-abs-boundaries\src> hypercorn --bind 127.0.0.1:5008 TryLogging.py
[2022-01-10 15:56:42 +1100] [37772] [INFO] Running on http://127.0.0.1:5008 (CTRL + C to quit)
[2022-01-10 15:56:48,075] ERROR in app: Exception on request GET /fake_fail
Traceback (most recent call last):
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1489, in handle_request
return await self.full_dispatch_request(request_context)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1514, in full_dispatch_
request
result = await self.handle_user_exception(error)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 964, in handle_user_exc
eption
raise error
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1512, in full_dispatch_
request
result = await self.dispatch_request(request_context)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\app.py", line 1557, in dispatch_reque
st
return await self.ensure_async(handler)(**request_.view_args)
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\site-packages\quart\utils.py", line 66, in _wrapper
result = await loop.run_in_executor(
File "C:\Users\brend\miniconda3\envs\ms-abs-boundaries\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\brend\Documents\GitHub\ms-abs-boundaries\src\TryLogging.py", line 15, in failme
raise Exception("Fake exception")
Exception: Fake exception
Only the exception is logged; the successful request is not logged.
How can I enable all requests (including other arbitrary info log events) to be logged when running in Hypercorn?
Hypercorn version: 0.13.2
Quart version: 0.16.2
NOTE: it needs to write to an external log system, not local logging file, and that external log is configured in the real version of the app. But getting it to show in the console is enough for testing.
Logs emitted by your application code would still be logged. What differs when you move from running with Quart to Hypercorn is that you no longer have the quart.serving logger which produced those messages you referred to.
To get similar behavior with Hypercorn, you can configure it to direct the access log to stdout by setting accesslog to -.
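For example (a sketch of one way to apply that setting, assuming the module name and bind address from the question; not code from the answer): on the hypercorn command line the corresponding flag is --access-logfile -, and programmatically the Config object exposes the same accesslog setting.
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

from TryLogging import app  # the Quart app from the question

config = Config()
config.bind = ["127.0.0.1:5008"]
config.accesslog = "-"   # "-" sends access log lines to stdout
config.errorlog = "-"    # keep startup/error messages visible as well

if __name__ == "__main__":
    asyncio.run(serve(app, config))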

IB-Insync only place Stk order once (will not execute additional buy or sell) - Posted solution generates an error for me

I'm using an existing set of scripts to connect tradingview with interactive brokers (ib.insync).
Tradingview sends a webhook with JSON text, which is received by a flask web app and redis server. The latter is scanned by an ib.insync python script and an order is generated.
These applications were created by a programmer at the below github repository and I am very grateful.
Hacking the markets
All individual components of the program work just fine, but IB only triggers the first order upon receiving the webhook containing JSON.
After that, even though the JSON messages are received, IB does not trigger any additional trades.
There is a solution posted on Stack Overflow which suggests using ib.qualifyContracts, but this produces the below error message for me:
{'type': 'message', 'pattern': None, 'channel': b'tradingview', 'data':
b'{\n "passphrase": "########",\n "time": "2021-09-05T19:49:00Z",\n
"ticker": "AAPL",\n "bar": {\n "time": "2021-09-05T19:48:00Z",\n
"open": 126.35,\n "high": 128.02,\n "low": 126.02,\n
"close": 127.75,\n "volume": 12345\n },\n "strategy": {\n
"position_size": 1,\n "order_action": "BUY",\n
"order_contracts": 1,\n "order_price": 128.50,\n "order_id":
"Close entry(s) order long",\n "market_position": "long",\n
"market_position_size": 1,\n "prev_market_position": "flat",\n
"prev_market_position_size": 0\n }\n}'}
Traceback (most recent call last):
File "C:\Users\smana\PycharmProjects\TV_IB\venv\brokerappMO.py", line 31, in <module>
asyncio.run(run_periodically(1, check_messages))
File "C:\Program Files\Python39\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Program Files\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
return future.result()
File "C:\Users\smana\PycharmProjects\TV_IB\venv\brokerappMO.py", line 29, in run_periodically
await asyncio.gather(asyncio.sleep(interval), periodic_function())
File "C:\Users\smana\PycharmProjects\TV_IB\venv\brokerappMO.py", line 23, in check_messages
ib.qualifyContracts(stock)
File "C:\Users\smana\AppData\Roaming\Python\Python39\site-packages\ib_insync\ib.py", line 553, in qualifyContracts
return self._run(self.qualifyContractsAsync(*contracts))
File "C:\Users\smana\AppData\Roaming\Python\Python39\site-packages\ib_insync\ib.py", line 310, in _run
return util.run(*awaitables, timeout=self.RequestTimeout)
File "C:\Users\smana\AppData\Roaming\Python\Python39\site-packages\ib_insync\util.py", line 332, in run
result = loop.run_until_complete(task)
File "C:\Program Files\Python39\lib\asyncio\base_events.py", line 618, in run_until_complete
self._check_running()
File "C:\Program Files\Python39\lib\asyncio\base_events.py", line 578, in _check_running
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
Process finished with exit code 1
Here is the ib_insync application:
import redis, json
from ib_insync import *
import asyncio, time, random
# connect to Interactive Brokers
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)
# connect to Redis and subscribe to tradingview messages
r = redis.Redis(host='localhost', port=6379, db=0)
p = r.pubsub()
p.subscribe('tradingview')
async def check_messages():
    print(f"{time.time()} - checking for tradingview webhook messages")
    message = p.get_message()
    if message is not None and message['type'] == 'message':
        print(message)
        message_data = json.loads(message['data'])
        stock = Stock(message_data['ticker'], 'SMART', 'USD')
        order = MarketOrder(message_data['strategy']['order_action'],
                            message_data['strategy']['order_contracts'])
        trade = ib.placeOrder(stock, order)

async def run_periodically(interval, periodic_function):
    while True:
        await asyncio.gather(asyncio.sleep(interval), periodic_function())

asyncio.run(run_periodically(1, check_messages))
ib.run()
I would love any input into getting ib_insync to run properly and execute whenever a JSON message is sent. If ib.qualifyContracts() is indeed the solution, any insight into why it doesn't work would be helpful.
Thank you.
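One hedged guess based on the traceback above (not a verified fix): the RuntimeError comes from calling the synchronous ib.qualifyContracts() inside check_messages(), which already runs inside asyncio.run()'s event loop; the traceback shows qualifyContracts() just wraps qualifyContractsAsync(), so awaiting the async variant directly avoids the nested run_until_complete() call. A minimal sketch of check_messages() rewritten that way:
async def check_messages():
    message = p.get_message()
    if message is not None and message['type'] == 'message':
        message_data = json.loads(message['data'])
        stock = Stock(message_data['ticker'], 'SMART', 'USD')
        # await the async variant instead of calling ib.qualifyContracts(stock)
        await ib.qualifyContractsAsync(stock)
        order = MarketOrder(message_data['strategy']['order_action'],
                            message_data['strategy']['order_contracts'])
        trade = ib.placeOrder(stock, order)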

Google Cloud aiohttp error when connecting from compute engine to database in Python

I'm trying to connect my Compute Engine to a MySQL database, both hosted through Google Cloud. The program to obtain data works in the Compute Engine, but when I try to store the information into the DB I get the following error:
Traceback (most recent call last):
File "main.py", line 81, in <module>
db="products"
File "/home/jasper_wijnhoven/supermarktspider/googlec/cloud/sql/connector/connector.py", line 95, in connect
return icm.connect(driver, timeout, **kwargs)
File "/home/jasper_wijnhoven/supermarktspider/googlec/cloud/sql/connector/instance_connection_manager.py", line 328, in connect
connection = connect_future.result(timeout)
File "/usr/lib/python3.7/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/jasper_wijnhoven/supermarktspider/googlec/cloud/sql/connector/instance_connection_manager.py", line 356, in _connect
instance_data: InstanceMetadata = await self._current
File "/home/jasper_wijnhoven/supermarktspider/googlec/cloud/sql/connector/instance_connection_manager.py", line 251, in _get_instance_data
metadata, ephemeral_cert = await asyncio.gather(metadata_task, ephemeral_task)
File "/home/jasper_wijnhoven/supermarktspider/googlec/cloud/sql/connector/refresh_utils.py", line 88, in _get_metadata
resp = await client_session.get(url, headers=headers, raise_for_status=True)
File "/home/jasper_wijnhoven/.local/lib/python3.7/site-packages/aiohttp/client.py", line 625, in _request
resp.raise_for_status()
File "/home/jasper_wijnhoven/.local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 1005, in raise_for_status
headers=self.headers,
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('hidden for obvious reasons')
I'm not sure what's wrong, since when I try to connect through the URL myself I get a 401 error. I followed the steps and whitelisted my CE IP for connection with the DB. Can someone give me a pointer?
Here's the code used to set up the connector:
connection = connector.connect(
    "single-router-309308:europe-west4:supermarkt-database",
    "mysql-connector",
    user="root",
    password="hidden again :)",
    db="products"
)
Check to make sure the service account you are using has the correct permissions:
It should have the Cloud SQL client role or higher
The Cloud SQL Admin API should be enabled
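One quick, hedged way to confirm which identity the Compute Engine code is actually running as before checking those two points (my own snippet, not part of the answer):
import google.auth

# On a Compute Engine VM this returns the attached service account's credentials
# and the project ID the connector will use.
credentials, project = google.auth.default()
print("project:", project)
print("service account:", getattr(credentials, "service_account_email", credentials))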
