Problem with Google Cloud Live Logging with Python

I want to live-log some events from the project I am working on, and after searching I found the Google Cloud Logging API.
After reading different guides, I created a project and enabled the Logging API.
Then I tried testing it with the code below.
# Imports the Google Cloud client library
import google.cloud.logging
from google.oauth2 import service_account
# Imports the Python standard library logging module
# (note: don't also do `from google.cloud import logging`,
# as that would shadow the standard library module)
import logging
# Instantiates a client with explicit service-account credentials
credentials = service_account.Credentials.from_service_account_file('my_path_to_service_account_json')
client = google.cloud.logging.Client(project='name_of_my_project', credentials=credentials)
# Connects the logger to the root logging handler; by default this captures
# all logs at INFO level and higher
client.setup_logging()
# The data to log
text = 'Hello, world!'
# Emits the data using the standard logging module
logging.warning(text)
It executed successfully, and I went into the Logs Viewer in the Google Cloud Console to check the logs.
Nothing was there, although requests were showing up in the API Overview.
Do you have any idea what is going wrong? Is it something in the code snippet above, or did I miss something in the Google Cloud configuration?
Thanks in advance.

I finally managed to solve this problem.
The problem wasn't the code, but the fact that I hadn't selected "Global" as the resource in the Logs Viewer.
When I chose "Global", I saw all my logs.
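If you want to double-check from code instead of the UI, here is a small sketch (assuming the same client object as in the question) that lists recent entries stored under the global resource:
# Print recent log entries stored under the "global" resource
for entry in client.list_entries(filter_='resource.type="global"'):
    print(entry.severity, entry.payload)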

Related

Logging for Azure Function in python with SEQ

I'm working on an Azure Function (durable function) that implements an HTTP trigger. All it does is wait for an HTTP call from the backend that shares a link to a blob storage object (an image) so it can be processed by the function. I need to implement a reliable logging solution using SEQ, which is used for other projects in our company (mostly .NET).
Using some official documentation from here, all I'm receiving in the SEQ console is a stream of unstructured events, and it's hard to tell where and when the processing starts, how much time it took, etc. It makes troubleshooting impossible.
With .NET projects we were using Serilog, which allows you to write so-called enrichers and filters, so you can structure the logs and capture the information that is really needed, including call performance (e.g. elapsed time). I don't see anything even close to that available for Python 3. Can anyone suggest where to start? What's the best approach to capture the events I'm looking for?
Thanks.
Ok guys, here's the answer:
You need to install the library called seqlog via requirements.txt.
In the Python script where you plan to use the logger, import the respective namespace, i.e. import logging.
Define the SEQ configuration in a JSON file (something like the example below):
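(A hypothetical seq.config.json; the key names are my assumption, and the URL and API key are placeholders.)
{
    "SEQ_SERVER_URL": "https://my-seq-server:5341",
    "SEQ_API_KEY": "my-api-key",
    "SEQ_LOG_LEVEL": "INFO"
}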
In the __init__.py, load the SEQ config:
import json

# Load the SEQ settings from the JSON config file
with open('./seq.config.json', 'r') as f:
    seq_config = json.load(f)
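Then hand those settings to seqlog. A minimal sketch, assuming the hypothetical key names from the JSON above (seqlog.log_to_seq is seqlog's documented entry point):
import logging
import seqlog

# Route the root logger's records to the SEQ server
seqlog.log_to_seq(
    server_url=seq_config["SEQ_SERVER_URL"],
    api_key=seq_config["SEQ_API_KEY"],
    level=logging.INFO,
    override_root_logger=True,
)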
Use the logging object to stream the logs to SEQ:
logging.info("[" + obj.status + "] >> Data has been processed!")
Enjoy the logs posted to the SEQ console.
P.S. If you're debugging locally, point the server URL in seq.config.json at http://localhost plus the port, instead of the remote console address.
Hope this info helps someone.

Trying to get some trace information from a Python Google Cloud Function into Cloud Trace

I have a couple of Cloud Functions that make remote calls out to a 3rd-party API, and I would like to be able to gather latency metrics in Cloud Trace for those calls.
I'm trying to find a barebones working piece of example code that I can build off of. The only one I found is at https://medium.com/faun/tracing-python-cloud-functions-a17545586359
It is essentially:
import requests

from opencensus.trace import tracer as tracer_module
from opencensus.trace.exporters import stackdriver_exporter
from opencensus.trace.exporters.transports.background_thread \
    import BackgroundThreadTransport

PROJECT_ID = 'test-trace'

# instantiate trace exporter
exporter = stackdriver_exporter.StackdriverExporter(
    project_id=PROJECT_ID, transport=BackgroundThreadTransport)

def main_fun(data, context):
    tracer = tracer_module.Tracer(exporter=exporter)
    with tracer.span(name='get_token'):
        print('Getting Token')
        authtoken = get_token(email, password)  # email/password defined elsewhere
        print('Got Token')

def get_token(email, password):
    # Make some request out to an API and get a token
    return accesstoken
There are no errors and everything works as intended, except that no trace shows up in Cloud Trace or Stackdriver.
Am I doing something wrong here? Does anyone have some simple code that works for Cloud Trace within a Cloud Function?
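One thing worth checking (an assumption, not a confirmed fix): Cloud Functions can freeze the process as soon as the function returns, so an exporter using BackgroundThreadTransport may never get a chance to flush its buffered spans. A minimal sketch swapping in the synchronous transport, assuming a recent opencensus release where the Stackdriver exporter lives in the opencensus-ext-stackdriver package:
from opencensus.common.transports.sync import SyncTransport
from opencensus.ext.stackdriver.trace_exporter import StackdriverExporter
from opencensus.trace import tracer as tracer_module

PROJECT_ID = 'test-trace'

# Export each span synchronously before the function returns,
# instead of relying on a background thread that may be frozen
exporter = StackdriverExporter(project_id=PROJECT_ID, transport=SyncTransport)

def main_fun(data, context):
    tracer = tracer_module.Tracer(exporter=exporter)
    with tracer.span(name='get_token'):
        pass  # call out to the third-party API here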

Log severity levels are always "Any" on Google App Engine standard

Log severity levels always show "Any" on the Google App Engine standard environment. Here is the snippet I am using.
import logging

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
handler.setFormatter(CustomFormatter())  # CustomFormatter: my own logging.Formatter subclass
setup_logging(handler)
logging.getLogger("name").setLevel(logging.INFO)
This scenario is known, and it looks like there's already a feature request; it seems to come from an internal limitation. I've found this comment related to logging in GAE:
Bubbling up the application log severity in the request logs is not feasible in App Engine second generation.
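That limitation concerns the parent request log. The individual app log entries still carry their own severity, so you can filter on those in the Logs Viewer. A minimal sketch to verify this, assuming the google-cloud-logging client library (log_level is the documented argument for the minimum level forwarded):
import logging
import google.cloud.logging

client = google.cloud.logging.Client()
# Attach the Cloud Logging handler to the root logger;
# log_level controls the minimum severity that gets forwarded
client.setup_logging(log_level=logging.INFO)

logging.info("visible at severity INFO")
logging.warning("visible at severity WARNING")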

Google Cloud Console - Flask - main.py vs package

OK, so I have been through some tutorials to get a Flask app onto Google Cloud, which is fine.
I have also been through the Flask tutorial to build a flaskr blog:
http://flask.pocoo.org/docs/1.0/tutorial/
It occurred to me that a sensible thing to do would be to create a database (MySQL in my case) on Google and then modify the code so that it uses that. This is fine, and I can get it to work on my local machine.
However, now that I am coming to deploying it, I have hit a problem.
The Google Cloud tutorials tend to use a Flask app that is initiated in a single file such as main.py, e.g.:
from flask import Flask, render_template
app = Flask(__name__)
....
The Flask tutorial mentioned above uses a package and puts the create_app() code in the __init__.py file, and at present I cannot get this to start in the same way (see the sample code).
from flask import Flask

def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev'
    )
    # ... rest of the factory from the tutorial ...
    return app
Are there some adjustments that I need to make to something like the app.yaml file to get it to recognise the flaskr package, or do I need to rewrite the whole thing so that it uses a main.py file?
I feel that this is one of those points in time where I could really pick up a bad habit. What, in general, is the preferred way to write Flask apps on Google Cloud?
I am using the standard environment in Google App Engine.
Thanks for your advice.
Mark
Since you have an application factory, you can create the app anywhere. Just create it in main.py, since this is what App Engine expects:
from my_package import create_app
app = create_app()
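For completeness, a minimal sketch of the app.yaml that pairs with this layout, assuming the Python 3 standard environment (the runtime version is an assumption):
runtime: python39  # assumed runtime version

# No entrypoint needed: by default App Engine runs gunicorn with main:app,
# which is exactly the module and object created above
handlers:
- url: /.*
  script: auto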

Serving Flask-RESTPlus on https server

I am relatively new to Python, and I created a microservice using flask-restplus.
It works fine on my computer and on the dev server served over HTTP.
I don't have control over where the microservice gets deployed. In this case it seems to be behind a load balancer (not sure of the details), served over HTTPS.
The actual error given by the browser: Can't read from server. It may not have the appropriate access-control-origin settings.
When I check the network tab in the developer tools, I see it fails to load swagger.json, because it requests it using:
http://hostname/api/swagger.json, instead of https.
I have been googling and ran into discussions of this issue.
This seemed to be a fix that could work without me having to change the library or the configuration on the server.
However, I still couldn't get it to work.
This is what I have:
In the API file:
api_blueprint = Blueprint('api', __name__, url_prefix='/api')
api = Api(api_blueprint, doc='/doc/', version='1.0', title='My api',
description="My api")
In the main app file:
from flask import Flask
from werkzeug.contrib.fixers import ProxyFix
from lib.api import api_blueprint
app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app)
app.register_blueprint(api_blueprint)
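As a side check (an assumption about the setup, not something the error confirms): ProxyFix only helps if the load balancer actually sends X-Forwarded-Proto, and in newer Werkzeug releases the import path has moved. A sketch of the newer form:
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix  # new location in Werkzeug >= 0.15

app = Flask(__name__)
# Trust one proxy hop for X-Forwarded-For and X-Forwarded-Proto so that
# externally generated URLs (like the swagger.json link) use https
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1)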
I also tried adding:
app.config['SERVER_URL'] = 'http://testfsdf.co.za'  # but it doesn't look like it is being considered
I am using flask-restplus==0.9.2.
Any solution will be appreciated, as long as I don't need to make configuration changes on the container where the service will be deployed
(I am OK with setting environment variables), i.e. the service needs to be self-contained. And if there is a version of flask-restplus that I can install with pip that already has a fix, that would be appreciated too.
Thanks a lot, guys.
Override the Api class with the _scheme='https' option in the specs_url property.
from flask import url_for
from flask_restplus import Api

class MyApi(Api):
    @property
    def specs_url(self):
        """Monkey patch for HTTPS"""
        # Use http on the local dev server (port 5000), https everywhere else
        scheme = 'http' if '5000' in self.base_url else 'https'
        return url_for(self.endpoint('specs'), _external=True, _scheme=scheme)

api = MyApi(api_blueprint, doc='/doc/', version='1.0', title='My api',
            description="My api")
The solution above works like a charm. There are a couple of things you should check.
Before applying the fix, make sure in your Chrome developer tools -> Network tab that whenever you reload the page (on the HTTPS server) that shows the Swagger UI, you get a mixed-content error for the swagger.json request.
The solution in the post above fixes the issue when deployed on an HTTPS server, but locally it might cause problems. For that you can use the environment-variable trick.
Set a custom environment variable (or use any variable that is already there) on your HTTPS server while deploying your app. Check for the existence of that environment variable before applying the solution, to make sure your app is running on the HTTPS server.
Now when you run the app locally, this hack won't be applied and swagger.json will be served over HTTP, while on your server it will be served via HTTPS. The implementation might look similar to this:
import os

from flask import Flask, url_for
from flask_restplus import Api

app = Flask(__name__)

if os.environ.get('CUSTOM_ENV_VAR'):
    # Only patch specs_url when the marker variable is present (i.e. on the https server)
    @property
    def specs_url(self):
        return url_for(self.endpoint('specs'), _external=True, _scheme='https')

    Api.specs_url = specs_url

api = Api(app)
