Cloud Run connect to Cloud SQL module error (Python 3.x)

I keep getting the following error when trying to connect Cloud Run to Cloud SQL. Any idea?
__import__("pg8000") ModuleNotFoundError: No module named 'pg8000'
import pandas as pd
import sqlalchemy
import datetime
import requests
from urllib.parse import urlencode
import warnings
from flask import Flask
import os
import google

db_user = os.environ.get("DB_USER")
db_pass = os.environ.get("DB_PASS")
db_name = os.environ.get("DB_NAME")
cloud_sql_connection_name = os.environ.get("CLOUD_SQL_CONNECTION_NAME")

db = sqlalchemy.create_engine(
    # Equivalent URL:
    # postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_name>/.s.PGSQL.5432
    sqlalchemy.engine.url.URL(
        drivername='postgres+psycopg2',
        username=db_user,
        password=db_pass,
        database=db_name,
        query={
            'unix_sock': '/cloudsql/{}/.s.PGSQL.5432'.format(
                cloud_sql_connection_name)
        }
    ),
    # ... Specify additional properties here.
    # ...
)

You need to install one of the supported database drivers. Note the mismatch in your example: the error shows SQLAlchemy trying to import pg8000, yet your drivername is 'postgres+psycopg2'. If you want to use postgres+pg8000, you need to install the pg8000 package; otherwise, based on your example, you actually need to install psycopg2.
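For illustration, a minimal sketch of the pg8000 variant, assuming pg8000 is pinned in the service's requirements.txt so it gets installed into the Cloud Run image at build time (the environment variable names are the ones from the question):
import os
import sqlalchemy

# The drivername and the installed package must match:
# 'postgres+pg8000' imports pg8000, 'postgres+psycopg2' imports psycopg2.
db = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL(
        drivername='postgres+pg8000',
        username=os.environ.get("DB_USER"),
        password=os.environ.get("DB_PASS"),
        database=os.environ.get("DB_NAME"),
        query={
            'unix_sock': '/cloudsql/{}/.s.PGSQL.5432'.format(
                os.environ.get("CLOUD_SQL_CONNECTION_NAME"))
        }
    )
)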

Related

ModuleNotFoundError: No module named 'pandas.lib'

from ggplot import mtcars
While importing the mtcars dataset from ggplot in a Jupyter notebook I got this error.
My system is Windows 10 and I've already reinstalled and upgraded pandas (I also used --user in the installation command), but it didn't work out. Is there any other way to get rid of this error?
\Anaconda3\lib\site-packages\ggplot\stats\smoothers.py in
2 unicode_literals)
3 import numpy as np
----> 4 from pandas.lib import Timestamp
5 import pandas as pd
6 import statsmodels.api as sm
ModuleNotFoundError: No module named 'pandas.lib'
I just tried out a fix; I hope it works for others as well. I changed this:
from pandas.lib import Timestamp
to this:
from pandas._libs import Timestamp
since the module now lives under C:\Users\HP\Anaconda3\Lib\site-packages\pandas\_libs (note the leading underscore in _libs).
Also, I changed this:
date_types = (
    pd.tslib.Timestamp,
    pd.DatetimeIndex,
    pd.Period,
    pd.PeriodIndex,
    datetime.datetime,
    datetime.time
)
to this:
date_types = (
    pd._tslib.Timestamp,
    pd.DatetimeIndex,
    pd.Period,
    pd.PeriodIndex,
    datetime.datetime,
    datetime.time
)
Before that, I edited C:\Users\HP\Anaconda3\Lib\site-packages\ggplot\util.py to make the same date_types change in util.py. This got rid of the error I mentioned in my question.
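As a side note, a minimal sketch of the same tuple using only public pandas API (pd.Timestamp has long been exposed at the top level), which avoids re-patching private modules on every upgrade:
import datetime
import pandas as pd

# Same date_types tuple, but via the public pd.Timestamp instead of
# the private pd.tslib / pd._tslib paths.
date_types = (
    pd.Timestamp,
    pd.DatetimeIndex,
    pd.Period,
    pd.PeriodIndex,
    datetime.datetime,
    datetime.time,
)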

module 'subprocess' has no attribute '_subprocess'

I'm using Python 3.7 on Windows 7.
When I tried to run this line: suinfo.dwFlags |= subprocess._subprocess.STARTF_USESHOWWINDOW
this error occurs: module 'subprocess' has no attribute '_subprocess'
import os
import sqlite3
import subprocess
import time
import re
from django.core.mail import send_mail
from django.http import HttpResponse
suinfo = subprocess.STARTUPINFO()
suinfo.dwFlags |= subprocess._subprocess.STARTF_USESHOWWINDOW
How to deal with that?
There is no such thing as subprocess._subprocess in Python 3 (that was a private Windows helper module in Python 2); the constant sits directly under subprocess:
suinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
See the docs: https://docs.python.org/3/library/subprocess.html#subprocess.STARTF_USESHOWWINDOW
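For completeness, a minimal sketch of the Windows-only usage (subprocess.STARTUPINFO, STARTF_USESHOWWINDOW, and SW_HIDE are all part of the Python 3 standard library on Windows; notepad.exe is just an arbitrary example program):
import subprocess

# Configure the child process to start with a hidden window.
suinfo = subprocess.STARTUPINFO()
suinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
suinfo.wShowWindow = subprocess.SW_HIDE

subprocess.run(["notepad.exe"], startupinfo=suinfo)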

Using Prometheus with Connexion - ValueError: Duplicated timeseries in CollectorRegistry

I get the following error message when using Prometheus with Connexion on Python 3.6/3.7:
ValueError: Duplicated timeseries in CollectorRegistry: {'app_request_processing_seconds_sum', 'app_request_processing_seconds_count', 'app_request_processing_seconds_created', 'app_request_processing_seconds'}
#!/usr/bin/env python3
from gevent import monkey  # noqa
# monkey.patch_all()  # noqa
import json
import os
import connexion
import datetime
import logging
from connexion import NoContent
from prometheus_client import Summary, Counter

logger = logging.getLogger(__name__)

REQUEST_TIME = Summary('app_request_processing_seconds', 'time spent processing')
REQUEST_COUNTER = Counter('app_request_count', 'number of requests')

# REQUEST_TIME.time()
def get_health():
    try:
        'Hello'
    except Exception:
        return connexion.problem(503, "Service Unavailable", "Unhealthy")
    else:
        return "Healthy"

logging.basicConfig(level=logging.INFO)
app = connexion.App(__name__)
app.add_api("swagger.yaml")

if __name__ == "__main__":
    # run our standalone gevent server
    app.run(port=8080, server="gevent")
There is a swagger.yaml that is identical to:
https://github.com/hjacobs/connexion-example-redis-kubernetes/blob/master/swagger.yaml
Any help would be great.
As a guess, you have named your file app.py. What happens is that when the swagger file is loaded, the handler is specified as app.get_health:
paths:
  /health:
    get:
      operationId: app.get_health
And it loads app.py (a second time) to import the get_health() function.
The reason is that the main file is loaded as the __main__ module and thus gets loaded a second time; see this other question for more information.
Therefore, you end up defining your Prometheus metrics twice, which doesn't sit well with the collector.
The simplest solution is to rename your entry-point file and implement the handlers in a file named app.py, so the module holding the metrics is imported only once; a minimal sketch of that split follows.
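A minimal sketch of the split (the file names are the answer's suggestion, not a requirement; the operationId in swagger.yaml must point at whichever module actually defines the handlers):
# app.py - handlers and metrics; imported exactly once via operationId: app.get_health
from prometheus_client import Summary, Counter

REQUEST_TIME = Summary('app_request_processing_seconds', 'time spent processing')
REQUEST_COUNTER = Counter('app_request_count', 'number of requests')

def get_health():
    return "Healthy"

# main.py - entry point; being run as __main__ no longer re-imports the metrics
import connexion

app = connexion.App(__name__)
app.add_api("swagger.yaml")

if __name__ == "__main__":
    app.run(port=8080, server="gevent")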

JPype error: import jpype ModuleNotFoundError: No module named 'jpype'

I installed JPype correctly and the installation succeeded, but when I run my refactor.py from the command prompt I get the error in the title.
I hope you can help me solve this problem.
I should also point out that I am a beginner with Python 3.
Here is my code:
import urllib.request
import os
import tempfile
import sys
import fileinput
import logging
import jpype

logging.basicConfig(filename="ERROR.txt", level=logging.ERROR)
try:
    logging.debug('we are in the main try loop')
    jpype.startJVM("C:/Users/user/AppData/Local/Programs/Python/Python36/ClassWithTest.java", "-ea")
    test_class = jpype.JClass("ClassWithTest")
    a = testAll()
    file_java_class = open("OUTPUT.txt", "w")
    file_java_class.write(a)
except Exception as e1:
    logging.error(str(e1))
jpype.shutdownJVM()
The startJVM() function takes the path to the JVM shared library, which looks like C:\\Program Files\\Java\\jdk-10.0.2\\bin\\server\\jvm.dll, not the path to a .java source file. You can use the getDefaultJVMPath() function to get the JVM path on your PC, so you can start the JVM this way:
jpype.startJVM(jpype.getDefaultJVMPath(), "-ea")
Hope that helps!
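A minimal corrected sketch, with the assumptions labeled: it assumes ClassWithTest has been compiled to ClassWithTest.class, that its directory is placed on the JVM classpath, and that testAll() is a static method of that class (none of this is shown in the question):
import jpype

# Start the JVM from its real location, not from a .java source file.
jpype.startJVM(
    jpype.getDefaultJVMPath(),
    "-ea",
    "-Djava.class.path=C:/Users/user/AppData/Local/Programs/Python/Python36",
)

ClassWithTest = jpype.JClass("ClassWithTest")
result = ClassWithTest.testAll()  # assumed to be a static method

with open("OUTPUT.txt", "w") as f:
    f.write(str(result))

jpype.shutdownJVM()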

Google speech long_running_recognizer does not work when fired through celery task

The code below works when fired as a script using Python. But when I encapsulate the same code in a Celery task and try to execute it, it does not work. The Celery task prints the line before long_running_recognize but not the one after the operation; it seems to get stuck at the long_running_recognize call when executing as a Celery task.
#!/usr/bin/env python3
import speech_recognition as sr
import json
import sqlalchemy
import io
import os

# Imports the Google Cloud client library
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
from sqlalchemy import create_engine

# Instantiates a client
client = speech.SpeechClient()

audio = speech.types.RecognitionAudio(uri='gs://<bucket_name>/<audio_file>')
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    language_code='en-US')

print('FIRING GOOGLE SPEECH RECOGNITION')
# Detects speech in the audio file
operation = client.long_running_recognize(config, audio)
print('Waiting for operation to complete...')
response = operation.result(timeout=600)

outfile = open('test.txt', 'w')
for result in response.results:
    for alternative in result.alternatives:
        print('=' * 20)
        outfile.write('Transcript: {}'.format(alternative.transcript))
        outfile.write('=' * 20)
        outfile.write("Confidence: {}".format(alternative.confidence))
        print('Transcript: {}'.format(alternative.transcript))
        print(alternative.confidence)
outfile.close()
I had the same issue today, though it had worked recently. I rolled back to these pinned requirements in pip and that solved the problem:
google-api-python-client==1.6.4
google-auth==1.1.1
google-cloud-core==0.27.1
google-cloud-speech==0.29.0
google-gax==0.15.15
googleapis-common-protos==1.5.2
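If the rollback doesn't seem to take effect, it is worth confirming that the Celery worker actually runs in the environment you pinned; a minimal sketch using pkg_resources (which ships with setuptools):
import pkg_resources

# Print the versions the current interpreter actually sees;
# run this inside the Celery worker's environment.
for pkg in ("google-cloud-speech", "google-gax", "google-cloud-core"):
    print(pkg, pkg_resources.get_distribution(pkg).version)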
