When using the Python client API for Google Cloud Scheduler, I always get the above error message for some reason. I also tried starting the parent path without the leading slash, but got the same result.
Any hint is much appreciated!
import os
from google.cloud import scheduler_v1

def gcloudscheduler(data, context):
    current_folder = os.path.dirname(os.path.abspath(__file__))
    abs_auth_path = os.path.join(current_folder, 'auth.json')
    os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = abs_auth_path
    response = scheduler_v1.CloudSchedulerClient().create_job(data["parent"], data["job"])
    print(response)
I used the following parameters:
{"job": {
"pubsub_target": {
"topic_name": "trade-tests",
"attributes": {
"attrKey": "attrValue"
}
},
"schedule": "* * * * *"
},
"parent": "/projects/my-project-id/locations/europe-west1"
}
The problem was actually not the parent parameter but the incorrect format of the topic name: it should have been projects/my-project-id/topics/trade-tests, even though the error message suggests the resource name should start with a slash. That format is in line with the API docs here and here.
The problem was just that the error message didn't say which resource name the error was about.
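For reference, a corrected set of parameters might look like this (a sketch based on the fix above; the project ID and topic name are the placeholders from the question, and the parent likewise follows the projects/.../locations/... format without a leading slash):
{"job": {
    "pubsub_target": {
        # Fully qualified topic name, without a leading slash
        "topic_name": "projects/my-project-id/topics/trade-tests",
        "attributes": {
            "attrKey": "attrValue"
        }
    },
    "schedule": "* * * * *"
},
"parent": "projects/my-project-id/locations/europe-west1"
}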
I am attempting to use Google's Cloud Natural Language API with Google Colab.
I started by following Google's simple example: https://cloud.google.com/natural-language/docs/samples/language-entity-sentiment-text#language_entity_sentiment_text-python
So, my Colab notebook was literally just one code cell:
from google.cloud import language_v1
client = language_v1.LanguageServiceClient()
text_content = 'Grapes are good. Bananas are bad.'
# Available types: PLAIN_TEXT, HTML
type_ = language_v1.types.Document.Type.PLAIN_TEXT
# Optional. If not specified, the language is automatically detected.
# For list of supported languages:
# https://cloud.google.com/natural-language/docs/languages
language = "en"
document = {"content": text_content, "type_": type_, "language": language}
# Available values: NONE, UTF8, UTF16, UTF32
encoding_type = language_v1.EncodingType.UTF8
response = client.analyze_entity_sentiment(request = {'document': document, 'encoding_type': encoding_type})
That resulted in several error messages, which I seemed to resolve, mostly with the help of this SO post, by slightly updating the code as follows:
from google.cloud import language_v1
client = language_v1.LanguageServiceClient()
text_content = 'Grapes are good. Bananas are bad.'
# Available types: PLAIN_TEXT, HTML
type_ = language_v1.types.Document.Type.PLAIN_TEXT
# Optional. If not specified, the language is automatically detected.
# For list of supported languages:
# https://cloud.google.com/natural-language/docs/languages
language = "en"
#document = {"content": text_content, "type_": type_, "language": language} ## "type_" is not valid???
document = {"content": text_content, "type": type_, "language": language}
# Available values: NONE, UTF8, UTF16, UTF32
#encoding_type = language_v1.EncodingType.UTF8 ## Does not seem to work
encoding_type = "UTF8"
#response = client.analyze_entity_sentiment(request = {'document': document, 'encoding_type': encoding_type}) ## remove request
response = client.analyze_entity_sentiment( document = document, encoding_type = encoding_type )
This, after 10 excruciating minutes, results in the following error:
_InactiveRpcError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
72 try:
---> 73 return callable_(*args, **kwargs)
74 except grpc.RpcError as exc:
11 frames
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x7f68cee39a90>)"
debug_error_string = "{"created":"#1648840699.964791285","description":"Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x7f68cee39a90>)","file":"src/core/lib/security/credentials/plugin/plugin_credentials.cc","file_line":91,"grpc_status":14}"
>
The above exception was the direct cause of the following exception:
ServiceUnavailable Traceback (most recent call last)
ServiceUnavailable: 503 Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x7f68cee39a90>)
The above exception was the direct cause of the following exception:
RetryError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/six.py in raise_from(value, from_value)
RetryError: Deadline of 600.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7f68cedb69e0>, document {
type: PLAIN_TEXT
content: "Grapes are good. Bananas are bad."
language: "en"
}
encoding_type: UTF8
, metadata=[('x-goog-api-client', 'gl-python/3.7.13 grpc/1.44.0 gax/1.26.3 gapic/1.2.0')]), last exception: 503 Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x7f68cee39a90>)
Can you please help me with this simple "Hello world!" for Cloud Natural Language with Google Colab?
My hunch is that I need to create a service account and somehow provide that key file to Colab, like this SO answer. If so, can you hold my hand a little more and tell me how I would implement that in Colab (vs. running locally)? I am new to Colab.
This appears to have worked:
Start by creating a service account, generating a key file, and saving the JSON file locally. https://console.cloud.google.com/iam-admin/serviceaccounts
(I would still love to know which roles, if any, I should select when creating the service account: "Grant this service account access to project".)
Cell 1: Upload a JSON file with my service account keys
from google.colab import files
uploaded = files.upload()
Cell 2:
from google.oauth2 import service_account
from google.cloud import language_v1
client = language_v1.LanguageServiceClient.from_service_account_json("my-super-important-gcp-key-file.json")
Cell 3:
text_content = 'Grapes are good. Bananas are bad.'
type_ = language_v1.types.Document.Type.PLAIN_TEXT
language = "en"
document = {"content": text_content, "type": type_, "language": language}
encoding_type = "UTF8"
response = client.analyze_entity_sentiment( document = document, encoding_type = encoding_type )
response
Here is the output:
entities {
  name: "Grapes"
  type: OTHER
  salience: 0.8335162997245789
  mentions {
    text {
      content: "Grapes"
    }
    type: COMMON
    sentiment {
      magnitude: 0.800000011920929
      score: 0.800000011920929
    }
  }
  sentiment {
    magnitude: 0.800000011920929
    score: 0.800000011920929
  }
}
entities {
  name: "Bananas"
  type: OTHER
  salience: 0.16648370027542114
  mentions {
    text {
      content: "Bananas"
      begin_offset: 17
    }
    type: COMMON
    sentiment {
      magnitude: 0.699999988079071
      score: -0.699999988079071
    }
  }
  sentiment {
    magnitude: 0.699999988079071
    score: -0.699999988079071
  }
}
language: "en"
I am certain that I have just violated all sorts of security protocols, so please, I welcome any advice on how to improve this process.
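One slightly tidier variant (just a sketch, not vetted security advice): point GOOGLE_APPLICATION_CREDENTIALS at the uploaded file so the client library picks the key up implicitly, then delete the key from the Colab VM once the client has been created. The filename is the placeholder from above.
import os
from google.cloud import language_v1

# Assumes the key file was uploaded via files.upload() in Cell 1
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "my-super-important-gcp-key-file.json"
client = language_v1.LanguageServiceClient()  # reads the key via the env var

# The credentials are loaded at client creation, so the file can
# typically be removed from the VM afterwards
os.remove("my-super-important-gcp-key-file.json")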
I am using Django REST Framework for my back end and want to override the default error messages for fields.
My current code looks like this:
class DeckSerializer(serializers.ModelSerializer):
    class Meta:
        model = Product
        fields = (
            "id",
            "title",
            "get_absolute_url",
            "description",
            "price",
            "image",
            "category_id",
            "category",
        )
        extra_kwargs = {
            'title': {"error_messages": {"required": "Title cannot be empty"}},
            'image': {"error_messages": {"required": "Image cannot be empty"}},
        }
After writing these two kwargs I realised I would just be repeating something that could be solved in code.
By default, the serializer validation returns this when a field is missing: {"title": "This field is required."}.
Is there any way I can override the current message so it directly displays name_of_the_field + my_message? Example: {"title": "Title is required"}.
I am not looking for how to write a custom error message for a single field; I'm looking for how to write generic custom messages for every field that is, for example, missing or null.
We can achieve this by writing a custom exception handler.
Here is what a custom response might look like:
{
    "status_code": 400,
    "type": "ValidationError",
    "message": "Bad request syntax or unsupported method",
    "errors": [
        "username: This field may not be null.",
        "email: This field may not be null.",
        "ticket number: This field may not be null."
    ]
}
We have to create a file, exception_handler.py, in our project directory with the code that follows; I use a utils package for this kind of purpose. You can put this code anywhere you like, but I prefer to keep it in a separate file dedicated to this purpose.
from http import HTTPStatus

from rest_framework import exceptions
from rest_framework.views import Response, exception_handler


def api_exception_handler(exception: Exception, context: dict) -> Response:
    """Custom API exception handler."""
    # Call REST framework's default exception handler first,
    # to get the standard error response.
    response = exception_handler(exception, context)

    # Only alter the response when it's a validation error.
    if not isinstance(exception, exceptions.ValidationError):
        return response

    # It's a validation error, so there should be a serializer on the view.
    # (This assumes the view exposes get_serializer_class(), as DRF's
    # generic views and viewsets do.)
    view = context.get("view", None)
    serializer = view.get_serializer_class()()

    # Flatten the error dict into "label: message" strings.
    errors_list = []
    for key, details in response.data.items():
        if key in serializer.fields:
            label = serializer.fields[key].label
            for message in details:
                errors_list.append("{}: {}".format(label, message))
        elif key == "non_field_errors":
            for message in details:
                errors_list.append(message)
        else:
            for message in details:
                errors_list.append("{}: {}".format(key, message))

    # Use the descriptions of the HTTPStatus class as the error message.
    http_code_to_message = {v.value: v.description for v in HTTPStatus}

    status_code = response.status_code
    error_payload = {
        "status_code": status_code,
        "type": "ValidationError",
        "message": http_code_to_message[status_code],
        "errors": errors_list,
    }

    # Overwrite the default exception_handler response data.
    response.data = error_payload
    return response
The main idea comes from here, but I changed it to fit my needs; change it as you see fit.
Don't forget to set it as your default exception handler in your settings.py file:
REST_FRAMEWORK["EXCEPTION_HANDLER"] = "utils.exception_handler.api_exception_handler"
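To see the handler in action, here is a minimal hypothetical serializer/view pair (all names invented) whose validation errors would be flattened into a payload like the sample above:
from rest_framework import generics, serializers

class TicketSerializer(serializers.Serializer):
    # The labels are what the handler prints in front of each message
    username = serializers.CharField(label="username")
    email = serializers.EmailField(label="email")
    ticket_number = serializers.IntegerField(label="ticket number")

class TicketCreateView(generics.CreateAPIView):
    serializer_class = TicketSerializer

# POSTing an empty body to this view now returns errors such as
# "username: This field is required." in the custom payload.
# (Only the validation path is exercised here; a real view would also
# implement create()/save() on the serializer for valid data.)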
I am trying to create a Strava webhook subscription to receive events from users who have authorised my Strava application. I was able to successfully create a subscription using the code in this Strava tutorial. However, I don't understand JavaScript, so I can't adapt the code to my needs. I am therefore trying to replicate its functionality in Python using Django.
My basic approach has been to follow the setup outlined in the Django section of this webpage, then replace the code in the file views.py with the code below. I have tried to make the code function as similarly as possible to the JavaScript function in the tutorial linked above. However, I'm very new to web applications, so I have taken shortcuts / 'gone with what works without understanding why' in several places.
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.clickjacking import xframe_options_exempt
import json

@csrf_exempt
@xframe_options_exempt
def example(request):
    if request.method == "GET":
        verify_token = "STRAVA"
        str_request = str(request)
        try:
            mode = str_request[str_request.find("hub.mode=") + 9 : len(str_request) - 2]
            token = str_request[str_request.find("hub.verify_token=") + 17 : str_request.find("&hub.challenge")]
            challenge = str_request[str_request.find("hub.challenge=") + 14 : str_request.find("&hub.mode")]
        except:
            return HttpResponse('Could not verify. Mode, token or challenge not valid.')
        if (mode == 'subscribe' and token == verify_token):
            resp = json.dumps({"hub.challenge": challenge}, separators=(',', ':'))
            return HttpResponse(resp, content_type='application/json; charset=utf-8')
        else:
            return HttpResponse('Could not verify mode or token.')
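As an aside, Django already parses the query string into request.GET, so the same verification can be sketched without slicing str(request). This is an illustration, not the tutorial's code; the view name and the 'STRAVA' token are taken from the question:
from django.http import HttpResponse, JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def example(request):
    if request.method == "GET":
        verify_token = "STRAVA"
        mode = request.GET.get("hub.mode")
        token = request.GET.get("hub.verify_token")
        challenge = request.GET.get("hub.challenge")
        if mode == "subscribe" and token == verify_token:
            # Echo the challenge back as JSON with a 200 status
            return JsonResponse({"hub.challenge": challenge})
        return HttpResponse("Could not verify mode or token.")
    return HttpResponse(status=200)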
The Strava documentation says that the callback URL must respond to a GET request within two seconds with a status of 200 and an echo of the hub.challenge JSON string, which my function seems to do. Yet when I try to create a POST request equivalent to the one below:
$ curl -X POST https://www.strava.com/api/v3/push_subscriptions \
-F client_id=[MY-CLIENT-ID] \
-F client_secret=[MY-CLIENT-SECRET] \
-F 'callback_url=http://[MY-IP-ADDRESS]:8000/webhooks/example/' \
-F 'verify_token=STRAVA'
I get the following response:
{
    "message": "Bad Request",
    "errors": [
        {
            "resource": "PushSubscription",
            "field": "callback url",
            "code": "not verifiable"
        }
    ]
}
Does anyone have any idea what might be going wrong?
P.S. Please let me know if there's anything I can do to make this example more reproducible. I don't really understand this area well enough to know whether I'm leaving out some crucial info!
I've set up a Python script that will take certain BigQuery tables from one dataset, clean them with a SQL query, and add the cleaned tables to a new dataset. This script works correctly. I want to set it up as a cloud function that triggers at midnight every day.
I've also used Cloud Scheduler to send a message to a Pub/Sub topic at midnight every day. I've verified that this works correctly. I am new to Pub/Sub, but I followed the tutorial in the documentation and managed to set up a test cloud function that prints out "hello world" when it gets a push notification from Pub/Sub.
However, my issue is that when I try to combine the two and automate my script, I get a log message saying that the execution crashed:
Function execution took 1119 ms, finished with status: 'crash'
To help you understand what I'm doing, here is the code in my main.py:
# Global libraries
import base64

# Local libraries
from scripts.one_minute_tables import helper

def one_minute_tables(event, context):
    # Log out the message that triggered the function
    print("""This Function was triggered by messageId {} published at {}
    """.format(context.event_id, context.timestamp))
    # Get the message from the event data
    name = base64.b64decode(event['data']).decode('utf-8')
    # If it's the message for the daily midnight schedule, execute function
    if name == 'midnight':
        helper.format_tables('raw_data', 'table1')
    else:
        pass
For the sake of convenience, this is a simplified version of my Python script:
# Global libraries
from google.cloud import bigquery
import os

# Log in to BigQuery by providing credentials
credential_path = 'secret.json'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credential_path

def format_tables(dataset, list_of_tables):
    # Initialize the client
    client = bigquery.Client()
    # Loop through the list of tables
    for table in list_of_tables:
        # Create the query object
        script = f"""
            SELECT *
            FROM {dataset}.{table}
        """
        # Call the API
        query = client.query(script)
        # Wait for the job to finish
        results = query.result()
        # Print
        print('Data cleaned and updated in table: {}.{}'.format(dataset, table))
This is my folder structure (screenshot omitted).
And my requirements.txt file has only one entry in it: google-cloud-bigquery==1.24.0
I'd appreciate your help in figuring out what I need to fix to run this script with the pubsub trigger without getting a log message that says the execution crashed.
EDIT: Based on the comments, this is the log of the function crash:
{
    "textPayload": "Function execution took 1078 ms, finished with status: 'crash'",
    "insertId": "000000-689fdf20-aee2-4900-b5a1-91c34d7c1448",
    "resource": {
        "type": "cloud_function",
        "labels": {
            "function_name": "one_minute_tables",
            "region": "us-central1",
            "project_id": "PROJECT_ID"
        }
    },
    "timestamp": "2020-05-15T16:53:53.672758031Z",
    "severity": "DEBUG",
    "labels": {
        "execution_id": "x883cqs07f2w"
    },
    "logName": "projects/PROJECT_ID/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
    "trace": "projects/PROJECT_ID/traces/f391b48a469cbbaeccad5d04b4a704a0",
    "receiveTimestamp": "2020-05-15T16:53:53.871051291Z"
}
The problem comes from the list_of_tables parameter. You call your function like this:
if name == 'midnight':
    helper.format_tables('raw_data', 'table1')
and then format_tables iterates over the string 'table1', which yields the characters 't', 'a', 'b', ... instead of table names.
Pass a list instead, and it should work:
if name == 'midnight':
    helper.format_tables('raw_data', ['table1'])
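If you want the helper to tolerate both calling styles, here is a small defensive sketch (my variation, not part of the original answer):
def format_tables(dataset, list_of_tables):
    # Accept a single table name as well as a list of names; iterating
    # directly over the string 'table1' would yield 't', 'a', 'b', ...
    if isinstance(list_of_tables, str):
        list_of_tables = [list_of_tables]
    for table in list_of_tables:
        print(f"Would clean {dataset}.{table}")  # BigQuery logic goes here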
I have an HTTP-triggered, Consumption-plan Azure Function that I want to keep warm by POSTing an empty payload to it regularly.
I am doing this with a scheduled function with the following configuration:
__init__.py
import os
import datetime
import logging
import azure.functions as func
import urllib.parse, urllib.request, urllib.error

def main(mytimer: func.TimerRequest) -> None:
    try:
        url = f"https://FUNCTIONNAME.azurewebsites.net/api/predictor?code={os.environ['CODE']}"
        request = urllib.request.Request(url, {})
        response = urllib.request.urlopen(request)
    except urllib.error.HTTPError as e:
        message = e.read().decode()
        if message == "expected outcome":
            pass
        else:
            logging.info(f"Error: {message}")
function.json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */9 5-17 * * 1-5"
    }
  ]
}
When I inspect my logs, they are filled with HTML. Here is a snippet of the HTML:
...
<h1>Server Error</h1>
...
<h2>502 - Web server received an invalid response while acting as a gateway or proxy server.</h2>
<h3>There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.</h3>
Running the logic of __init__.py locally works fine. What might be wrong here?
Hmm... That is weird. Looks like the response wasn't able to route to the correct instance I guess.
BTW, I believe you could simply have the timer-triggered function in the same function app as the one you want to keep warm; that function really doesn't have to do anything, either (see the sketch below).
Also, you might want to take a look at Azure Functions Premium, which supports having pre-warmed instances. Note that this is still in preview.
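To illustrate the same-app suggestion, a minimal keep-warm timer can be a no-op (just a sketch; the timer schedule in function.json would stay the same):
import logging
import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    # Simply being invoked keeps the function app's instance warm when
    # this timer runs in the same app as the HTTP-triggered function.
    logging.info("Keep-warm tick")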