I have written an HTTP trigger that takes the ECG database name and record number as arguments, reads the record from Cloud Storage, calculates the parameters, and writes them to Firestore. I observed a very strange thing: the code crashes without indicating the reason in the console. This is all I get in the console:
Function execution took 58017 ms, finished with status: 'crash'
It generally stops when it reads the record from Cloud Storage. I am using the MIT-BIH database records stored in Cloud Storage.
from google.cloud import storage
from flask import escape
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
import numpy as np
import os
from pathlib import Path
from os import listdir
from os.path import isfile, join
from random import randint
import wfdb  # needed for wfdb.rdrecord / wfdb.rdann below
from wfdb import processing  # needed for processing.gqrs_detect below
def GCFname(request):
    recordno = request.args['recordno']
    database = request.args['database']

    client = storage.Client()
    bucket = client.get_bucket('bucket_name')

    # it crashes here
    record = wfdb.rdrecord(recordno, channels=[0], pb_dir='mitdb')
    sig = record.p_signal[:, 0]
    test_qrs = processing.gqrs_detect(record.p_signal[:, 0], fs=record.fs)
    ann_test = wfdb.rdann(recordno, 'atr', pb_dir='mitdb')

    ## Calculate Parameters

    cred = credentials.ApplicationDefault()
    firebase_admin.initialize_app(cred, {
        'projectId': 'project_name',
    })

    db = firestore.client()
    doc_ref = db.collection('xyz').document(database).collection('abc').document(recordno)
    doc_ref.set({
        u'fieldname': fieldvalue
    })
I have deployed it using gcloud:
gcloud functions deploy GCFname --runtime python37 --trigger-http --allow-unauthenticated --timeout 540s
But when I use the same URL again after some time, it works. What could be the reason for this? It is definitely not a timeout issue.
It is difficult to see why the Cloud Function is crashing without the logs, as it could be crashing for many reasons. Currently, there is an open bug about Cloud Functions crashing before flushing log entries or writing any error trace to Stackdriver. You can follow the issue tracker here for updates: https://issuetracker.google.com/155215191
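In the meantime, one workaround worth trying (a sketch only, and assuming the failure is a Python exception rather than an out-of-memory kill, in which case this will not help): wrap the call that appears to crash in a try/except and log the traceback explicitly, so the error text reaches Cloud Logging before the instance is torn down.

import logging
import traceback

def GCFname(request):
    recordno = request.args['recordno']
    try:
        # The read that appears to trigger the crash
        record = wfdb.rdrecord(recordno, channels=[0], pb_dir='mitdb')
    except Exception:
        # Log the full traceback ourselves so it is flushed to Cloud Logging
        logging.error(traceback.format_exc())
        return 'failed to read record %s' % recordno, 500
    # ... rest of the processing as in the question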
I have a Python script that creates a CSV file and loads it into an Azure container. I want to run it as an Azure Function, but it is failing once I deploy it to Azure.
The code works fine in Google Colab (code here minus the connection string). It also works fine when I run it locally as an Azure function via the CLI (func start command).
Here is the code from the __init__.py file that is deployed:
import logging
import azure.functions as func
import numpy as np
import pandas as pd
from datetime import datetime
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, __version__
import tempfile
def main(mytimer: func.TimerRequest) -> None:
    logging.info('Python trigger function.')

    temp_path = tempfile.gettempdir()
    dateTimeObj = datetime.now()
    timestampStr = dateTimeObj.strftime("%d%b%Y%H%M%S")
    filename = f"{timestampStr}.csv"

    df = pd.DataFrame(np.random.randn(5, 3), columns=['Column1', 'Column2', 'Colum3'])
    df.to_csv(f"{temp_path}{filename}", index=False)

    blob = BlobClient.from_connection_string(
        conn_str="My connection string",
        container_name="container2",
        blob_name=filename)

    with open(f"{temp_path}{filename}", "rb") as data:
        blob.upload_blob(data)
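(As a side note, and not necessarily the cause of the error below: tempfile.gettempdir() returns the directory without a trailing separator, so the f-string concatenation above writes the file next to the temp directory rather than inside it. A small sketch of the safer form, reusing the same df, filename and blob objects from above:)

import os
import tempfile

# Join the temp directory and file name explicitly instead of concatenating strings
csv_path = os.path.join(tempfile.gettempdir(), filename)
df.to_csv(csv_path, index=False)
with open(csv_path, "rb") as data:
    blob.upload_blob(data)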
The deployment to Azure is successful, but the function fails in Azure. When I look in the Azure Functions portal, the output from the Code + Test menu says:
{
"statusCode": 413,
"message": "request entity too large"
}
The CSV file produced by the script is 327B - so tiny. Besides that error message I can't see any good information on what is causing the failure. Can anyone suggest a solution / way forward?
Here are the contents of the requirements file:
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-storage-blob==12.8.1
logging
numpy==1.19.3
pandas==1.3.0
DateTime==4.3
backports.tempfile==1.0
azure-storage-blob==12.8.1
Here is what I have tried.
This article suggests the issue is linked to CORS (Cross-Origin Resource Sharing). However, adding https://functions.azure.com to my CORS allowed domains in the Resource sharing menu in the storage settings of my storage account didn't solve the problem (I also republished the Azure Function).
Any help, or suggestions would be appreciated.
I tried reproducing the issue with all the information you have provided and got the same 404 error, as shown below:
Later, after adding the value * in CORS, we can get rid of this issue; below are the fixed screenshot and CORS values:
CORS:
Test/Run:
Also, we can add the following domain names to avoid the 404 error:
https://functions.azure.com
https://functions-staging.azure.com
This will also fix the issue.
I am referring to the aws-secretsmanager-caching-python documentation and trying to cache the secret retrieved from Secrets Manager. However, for some reason, I always get a timeout without any helpful errors to troubleshoot this further. I am able to retrieve the secrets properly if I retrieve them from Secrets Manager directly (without caching).
My main function in the Lambda function looks like this:
import botocore
import botocore.session
from aws_secretsmanager_caching import SecretCache, SecretCacheConfig
from cacheSecret import getCachedSecrets
def lambda_handler(event, context):
    result = getCachedSecrets()
    print(result)
and I have created cacheSecret.py as follows:
from aws_secretsmanager_caching import SecretCache
from aws_secretsmanager_caching import InjectKeywordedSecretString, InjectSecretString
cache = SecretCache()
@InjectKeywordedSecretString(secret_id='my_secret_name', cache=cache, secretKey1='keyname1', secretKey2='keyname2')
def getCachedSecrets(secretKey1, secretKey2):
    print(secretKey1)
    print(secretKey2)
    return secretKey1
In the above code, my_secret_name is the name of the secret created in Secrets Manager, and secretKey1 and secretKey2 are the secret key names which have string values.
Error:
{
"errorMessage": "2021-03-31T15:29:08.598Z 01f5ded3-7658-4zb5-ae66-6f300098a6e47 Task timed out after 3.00 seconds"
}
Can someone please suggest what needs to be fixed in the above to make this work? Also, I am not sure where to define the secret name and secret key names in case we don't use decorators.
The function is hitting the default Lambda timeout, which is only 3 seconds; the first, uncached call to Secrets Manager can easily take longer than that. The timeout can be raised in the function configuration: https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html
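For the follow-up question about doing this without decorators: the cache object can be queried directly. A minimal sketch (assuming the secret named my_secret_name stores a JSON string containing keyname1), based on the library's documented SecretCache / get_secret_string API:

import json
import botocore.session
from aws_secretsmanager_caching import SecretCache, SecretCacheConfig

# Build the client and cache once, outside the handler, so they are reused on warm invocations
client = botocore.session.get_session().create_client('secretsmanager')
cache = SecretCache(config=SecretCacheConfig(), client=client)

def lambda_handler(event, context):
    # get_secret_string returns the raw SecretString for the given secret id
    secret = json.loads(cache.get_secret_string('my_secret_name'))
    return secret['keyname1']

Remember to raise the function timeout above the 3-second default as well, since the first (uncached) fetch still goes over the network.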
I am trying to write logs to Cloud Logging from Python applications by using the Cloud Logging API client library, with an "execution ID" that is the same as Google's default value.
logger setup:
from google.cloud import logging
from google.cloud.logging.resource import Resource
log_client = logging.Client()
# This is the resource type of the log
log_name = 'cloudfunctions.googleapis.com%2Fcloud-functions'
# Inside the resource, nest the required labels specific to the resource type
res = Resource(type="cloud_function",
               labels={
                   "function_name": "my-function",
                   "region": "asia-east2"
               })
logger = log_client.logger(log_name.format("my-project"))
write log:
logger.log_struct({"message": request.remote_addr}, resource=res, severity='INFO')
It's currently not possible to do this using purely the Cloud Functions Framework itself, but you can try to extract the execution ID from the request itself by using the following:
request.headers.get('function-execution-id')
I found an issue on the Cloud Functions GitHub tracking the implementation of a native way to get those values; you can follow that thread for updates, if you'd like.
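Building on that, one way to attach the extracted ID to the structured entries from the question (a sketch, assuming the logger and res objects defined above; the execution_id label key is an assumption chosen to mirror the label used by the platform's own function logs):

execution_id = request.headers.get('function-execution-id')
logger.log_struct(
    {"message": request.remote_addr},
    resource=res,
    severity='INFO',
    labels={'execution_id': execution_id},  # assumed label key, adjust as needed
)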
I had the same issue using an older version of google-cloud-logging. I was able to get this working using the default Python logging module. In a Cloud Function running Python 3.8 with google-cloud-logging==2.5.0, the executionId is correctly logged with the logs, as well as the severity, within Stackdriver.
main.py:
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.get_default_handler()
client.setup_logging()
# Imports Python standard library logging
import logging
def hello_world(req):
    # Emits the data using the standard logging module
    logging.info('info')
    logging.warning('warn')
    logging.error('error')
    return ""
requirements.txt:
google-cloud-logging==2.5.0
Triggering this cloud function results in the following in stackdriver:
I'm developing a web application in Flask, using GAE.
My issue is: every time my application tries to log, I get multiple entries in the log file:
log viewer.
My dbconnection class only imports a default logger class that I created and calls unexpected_error_log() to write a log entry whenever needed.
My logger class:
import logging
from google.cloud import logging as cloudlogging
class LoggerDB:
    def __init__(self):
        log_client = cloudlogging.Client()
        log_handler = log_client.get_default_handler()
        self.cloud_logger = logging.getLogger("cloudLogger")
        self.cloud_logger.setLevel(logging.INFO)
        self.cloud_logger.addHandler(log_handler)

    def unexpected_error_log(self, name, error="Unhandled Exception"):
        self.cloud_logger.error("Unexpected Error on %s: %s", name, error)
Code when executed:
def insertVenda(self, venda):
    try:
        query = "xxxxx"
        self.cursor.execute(query)
        self.connection.commit()
        return "Success"
    except Exception as error:
        self.logger.unexpected_error_log(__name__, error)
        self.connection.rollback()
        return "Error"
I suspect that gunicorn/app logging is duplicating my logs, but I don't know how to handle this case.
Has anyone had the same problem?
I am struggling with this at the moment too. My suspicion is that if you include something like this:
# Imports Python standard library logging
import logging
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.get_default_handler()
client.setup_logging()
I get logs OK, but with multiple duplicates.
If I omit it, I just get single stdout print statements going to the Stackdriver logs.
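A plausible explanation for both reports (an assumption, not something confirmed in this thread): each record is handled twice, once by the handler attached to the named logger and again when it propagates up to the root logger, which App Engine/gunicorn or setup_logging() has already instrumented. A minimal sketch of one way to stop that, reusing the logger setup from the question:

import logging
from google.cloud import logging as cloudlogging

log_client = cloudlogging.Client()
log_handler = log_client.get_default_handler()

cloud_logger = logging.getLogger("cloudLogger")
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(log_handler)
# Keep records from bubbling up to the root logger, whose own handlers
# would otherwise emit each entry a second time.
cloud_logger.propagate = False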
I would like to send more expressive log entries to Stackdriver Logging from my App Engine standard Python 3 app.
By following the official documentation I was able to send my logs to Stackdriver, and it seems that the timestamp is parsed correctly.
But I'm missing the severity levels. In addition, I see no way to link the logs for a certain request together into an operation, something the Java logging seems to do out of the box.
For reference here is my code:
import logging
import os
from flask import Flask
from google.cloud import logging as glog
app = Flask(__name__)
log_client = glog.Client(os.getenv('GOOGLE_CLOUD_PROJECT'))
# Attaches a Google Stackdriver logging handler to the root logger
log_client.setup_logging()
@app.route('/_ah/push-handlers/cloudbuild', methods=['POST'])
def pubsub_push_handle():
    logging.info("stdlib info")
    logging.warning("stdlib warn")
    logging.error("stdlib error")
Logs resulting in Stackdriver:
As you can see, the timestamps and messages are available, while the severity is strangely missing and everything gets classified as 'Any'.
Can someone point me in the right direction, or is this level of integration not yet available?
Thanks for your time!
Carsten
You need to create your own logger and add the google-cloud-logging default handler to it:
import logging
from flask import Flask
from google.cloud import logging as cloudlogging
log_client = cloudlogging.Client()
log_handler = log_client.get_default_handler()
cloud_logger = logging.getLogger("cloudLogger")
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(log_handler)
app = Flask(__name__)
@app.route('/_ah/push-handlers/cloudbuild', methods=['POST'])
def pubsub_push_handle():
    cloud_logger.info("info")
    cloud_logger.warning("warn")
    cloud_logger.error("error")
    return 'OK'
Produces: