google-cloud-pubsub TypeError: state.topic.publish is not a function - node.js

[Screenshot: the Node-RED error shown in the terminal]
I am working in the IoT sphere and want to push messages to Google Pub/Sub, but every time I run Node-RED I get the following error:
25 Dec 18:40:49 - [error] [google-cloud-pubsub out:b2451409.071148] TypeError: state.topic.publish is not a function
As source code, I used the pub/sub contribution on GitHub:
https://github.com/GoogleCloudPlatform/node-red-contrib-google-cloud/blob/master/pubsub.js
The code seems to work fine with credentials: it creates a new topic in Google Cloud when the topic is not yet present. However, the message is never published to the topic, and the error above arises when messages are sent repeatedly at a fixed interval.
Does anyone know how to solve this problem?

I think you've been using an older version of the pubsub API:
const topic = pubsub.topic('YOUR-TOPIC-NAME')
topic.publish(yourData, callback)
The new API as documented here (https://cloud.google.com/pubsub/docs/publisher) looks like this:
const topic = pubsub.topic('YOUR-TOPIC-NAME')
const publisher = topic.publisher()
// dataBuffer is a Buffer; dataJSON is an optional object of message attributes
publisher.publish(dataBuffer, dataJSON, callback)
Hope this fixes your problem.

Related

How to increase the AWS lambda to lambda connection timeout or keep the connection alive?

I am using the boto3 Lambda client to invoke a lambda_S from a lambda_M. My code looks something like this:
cfg = botocore.config.Config(
    retries={'max_attempts': 0},
    read_timeout=840,
    connect_timeout=600,
    # region_name="us-east-1"  # also tried with this included
)
lambda_client = boto3.client('lambda', config=cfg)  # even tried without config
invoke_response = lambda_client.invoke(
    FunctionName=lambda_name,
    InvocationType='RequestResponse',
    Payload=json.dumps(request)
)
Lambda_S is supposed to run for about 6 minutes, and I want lambda_M to stay alive to get the response back from lambda_S, but lambda_M times out after logging a CloudWatch message like
"Failed to connect to proxy URL: http://aws-proxy..."
I searched and found advice like: configure your HTTP client, SDK, firewall, proxy, or operating system to allow long connections with timeout or keep-alive settings. But the issue is that I have no idea how to do any of this with Lambda. Any help is highly appreciated.
I would approach this a bit differently. Lambdas charge you by the second, so in general you should avoid waiting in them. One way to do that is to create an SNS topic and use it as the messenger to trigger another Lambda (see the sketch after the workflow below).
Workflow goes like this.
SNS-A -> triggers Lambda-A
SNS-B -> triggers lambda-B
So if your lambda-B wants to send something to lambda-A for processing and needs the results back, then from lambda-B you send a message to the SNS-A topic and quit.
SNS-A triggers lambda-A, which does its work and at the end sends a message to SNS-B.
SNS-B triggers lambda-B.
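For illustration, the hand-off from one Lambda to the next can be a single publish call. A minimal boto3 sketch; the topic ARN and payload shape here are assumptions:
import json
import boto3

sns = boto3.client('sns')

def handler(event, context):
    # Publish the work request to SNS-A and return immediately,
    # instead of waiting on a synchronous Lambda-to-Lambda invoke.
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:123456789012:sns-a',  # hypothetical ARN
        Message=json.dumps({'work': event}),
    )
    return {'status': 'queued'}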
AWS has example documentation on what policies you should put in place; here is one.
I don't know how you are automating the deployment of native assets like SNS and Lambda; assuming you will use CloudFormation:
you create your AWS::Lambda::Function,
you create your AWS::SNS::Topic,
and in its definition you add a 'Subscription' property and point it at your lambda.
So in our example, your SNS-A will have a subscription defined for lambda-A.
Lastly, you grant SNS permission to trigger the lambda: AWS::Lambda::Permission.
When these three are in place, you are all set to send messages to the SNS topic, which will now be able to trigger the lambda.
You will find SO answers to questions on how to do this in CloudFormation (example), but you can also read the AWS CloudFormation documentation.
If you are not worried about automating this and just want to test manually, then the aws-cli is your friend; the same wiring can also be done with a couple of API calls, as sketched below.
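A rough boto3 equivalent of the subscription and permission steps described above; the topic and function ARNs are placeholders:
import boto3

sns = boto3.client('sns')
lam = boto3.client('lambda')

topic_arn = 'arn:aws:sns:us-east-1:123456789012:sns-a'  # hypothetical
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:lambda-a'  # hypothetical

# Subscribe lambda-A to the SNS-A topic.
sns.subscribe(TopicArn=topic_arn, Protocol='lambda', Endpoint=function_arn)

# Grant SNS permission to invoke lambda-A (the AWS::Lambda::Permission part).
lam.add_permission(
    FunctionName=function_arn,
    StatementId='AllowSNSInvoke',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn=topic_arn,
)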

Google Pub/Sub - No event data found from local function after published message to topic

I'm using the Functions Framework with Python alongside Google Cloud Pub/Sub Emulator. I'm having issues with an event triggered from a published message to a topic, where there's no event data found for the function. See more details below.
Start the Pub/Sub emulator under http://localhost:8085 with project_id local-test.
Spin up a function with signature-type: http under http://localhost:8006.
Given a background cloud function with signature-type: event:
Topic is created as test-topic.
Function is spun up under http://localhost:8007.
Create push subscription test-subscription for test-topic with endpoint http://localhost:8007 (a minimal sketch of this setup follows below).
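For reference, a sketch of that emulator setup in Python, assuming the project ID, ports, and names above:
import os
from google.cloud import pubsub_v1

# Point the clients at the emulator rather than the real service.
os.environ['PUBSUB_EMULATOR_HOST'] = 'localhost:8085'

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path('local-test', 'test-topic')
sub_path = subscriber.subscription_path('local-test', 'test-subscription')

publisher.create_topic(request={'name': topic_path})

# Push subscription that forwards published messages to the event function.
subscriber.create_subscription(
    request={
        'name': sub_path,
        'topic': topic_path,
        'push_config': {'push_endpoint': 'http://localhost:8007'},
    }
)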
When I publish a message to test-topic from http://localhost:8006 via a POST request in Postman, I get back a 200 response confirming the message was published successfully. The function behind http://localhost:8007 is executed as an event, as shown in the functions-framework logs. However, there is no actual data in event when debugging the triggered function.
Has anyone encountered this? Any ideas/suggestions? Perhaps this is the cause: #23 Functions Framework does not work with the Pub/Sub emulator.
Modules installed: functions-framework==2.1.1, google-cloud-pubsub==2.2.0
Python version: 3.8.8
I'll close this post, since the issue is an actual bug that was reported last year.
Update: As a workaround until this bug is fixed, I copied the code below locally into functions_framework/__init__.py, inside the view_func nested function within the _event_view_func_wrapper function.
if 'message' in event_data:
    if 'data' not in event_data:
        message = event_data['message']
        event_data['data'] = {
            'data': message.get('data'),
            'attributes': message.get('attributes')
        }

Write messages with ordering_key to Google PubSub using Apache Beam and Python

I am trying to write Google Pub/Sub messages with an ordering_key to a topic using Apache Beam (https://cloud.google.com/pubsub/docs/publisher). Although Pub/Sub's ordering_key is a beta feature, I am able to publish such messages using the normal Pub/Sub client library, so I would expect to be able to do the same in Apache Beam. However, nothing seems to be available for it in the Apache Beam Python library.
I have been trying to override beam.io.WriteToPubSub (by changing _to_proto_str) to write a protobuf message with an ordering_key (https://cloud.google.com/pubsub/docs/reference/rpc/google.pubsub.v1#google.pubsub.v1.PubsubMessage). In the end the message should look like this:
"data": string,
"attributes": {
string: string,
...
},
"messageId": string,
"publishTime": string,
"ordering_key": string
}
sdks.python.apache_beam.io.gcp.pubsub.PubsubMessage._to_proto_str
def _to_proto_str(self):
    msg = pubsub.types.pubsub_pb2.PubsubMessage()
    msg.data = self.data
    for key, value in iteritems(self.attributes):
        msg.attributes[key] = value
    msg.ordering_key = self.ordering_key
    return msg.SerializeToString()
However, the ordering_key seems to disappear when I look at the messages that end up in my topic. In the worst case, I guess I could use the Pub/Sub client to publish the messages instead.
But it would be better if someone could point me in the right direction for this kind of change. Contributors to the Apache Beam project must have done something similar, because they included a feature to change attributes on Pub/Sub messages a while ago.
Update: Apache Beam 2.24.0 depends on an old version of the Pub/Sub client library, I think because they want to keep supporting Python 2 a bit longer. But that may come to an end around 7 Oct (at least for Google: they stop supporting Python 2 for Dataflow after that date). Everyone else may need to wait for a version after 2.24.0.
As a workaround, I have successfully installed the newest Pub/Sub client library on top of Apache Beam 2.24.0 and created a new custom Pub/Sub IO as a DoFn (you just need to override the setup method and create a Publisher client there). I am now able to publish messages with an ordering key. I am not sure whether anything broke because of my change, but it is fine for demo purposes.
def setup(self):
    publisher_options = pubsub_v1.types.PublisherOptions(
        enable_message_ordering=True
    )
    self.publisher = pubsub_v1.PublisherClient(
        publisher_options=publisher_options,
        batch_settings=pubsub_v1.types.BatchSettings()
    )
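To round out the sketch, a matching process method might look like the following; the element shape and the TOPIC path here are assumptions:
def process(self, element):
    # Assumes elements arrive as (ordering_key, payload_bytes) tuples and
    # TOPIC is a full topic path like 'projects/<project>/topics/<topic>'.
    ordering_key, payload = element
    future = self.publisher.publish(TOPIC, payload, ordering_key=ordering_key)
    future.result()  # block until the publish is acknowledged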

How should one properly handle opening and closing of Azure Service Bus/Topic Clients in an Azure Function?

I'm not sure of the proper way to manage the lifespans of the various clients needed to interact with Azure Service Bus. From my understanding there are three different but similar clients to manage: a ServiceBusClient, a Topic/Queue/Subscription client, and then a Sender of some sort. In my case it's a TopicClient and a Sender. Should I close the sender after every message? After a certain amount of downtime? And the same for the others? I feel like I should keep the ServiceBusClient open until the function is entirely complete, so that probably carries over to the TopicClient as well. There are just so many ways to skin this one, I'm not sure where to draw the line. I'm pretty sure it's not this extreme:
function sendMessage(message: SendableMessageInfo) {
    let client = createServiceBusClientFromConnectionString(connectionString);
    let tClient = createTopicClient(client);
    const sender = tClient.createSender();
    sender.send(message);
    sender.close();
    tClient.close();
    client.close();
}
But leaving everything open all the time seems like a memory leak waiting to happen. Should I handle this all through error handling? Try-catch, then close everything in a finally block?
I could also just use the Azure Function binding, correct me if I'm wrong:
const productChanges: AzureFunction = async function (context: Context, products: product[]): Promise<void> {
    context.bindings.product_changes = [];
    for (let product of products) {
        if (product.updated) {
            let message = this.createMessage(product);
            context.bindings.product_changes.push(message);
        }
    }
    context.done();
};
I can't work out from the docs or source which would be better (both in terms of performance and finances) for an extremely high throughput Topic (at surge, ~100,000 requests/sec).
Any advice would be appreciated!
In my opinion, we'd better use the Azure binding or make the client static, rather than creating the client on every invocation. If we use the Azure binding, we don't have to worry about closing the sender; making the client static works too. Both solutions perform well, and there is no cost difference between the two (you can refer to this page for Service Bus pricing: https://azure.microsoft.com/en-us/pricing/details/service-bus/). A sketch of the static-client idea follows. Hope this helps.
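For illustration only, a minimal sketch of the "create once, reuse across invocations" pattern, shown here with the azure-servicebus Python SDK; the connection-string setting and topic name are assumptions, and the same idea applies in the Node.js SDK:
import os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Created once per worker process and reused across invocations,
# instead of opening and closing a client for every message.
_client = ServiceBusClient.from_connection_string(os.environ['SERVICEBUS_CONNECTION'])
_sender = _client.get_topic_sender(topic_name='product-changes')  # hypothetical topic

def send_message(body: str) -> None:
    _sender.send_messages(ServiceBusMessage(body))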
I know this is a late reply, but I'll try to explain the concepts behind the clients below in case someone lands here looking for answers.
Version 1
ServiceBusClient (maintains the connection)
 |_ TopicClient
     |_ Sender (sender link)
Version 7
ServiceBusClient (maintains the connection)
 |_ ServiceBusSender (sender link)
In both version 1 and version 7 of the @azure/service-bus SDK, when you use the sendMessages method (or the equivalent send method) for the first time, a connection is created on the ServiceBusClient if there was none, and a new sender link is created.
The sender link remains active for a while and is cleared on its own (by the SDK) if there is no activity. Even if it has been closed due to inactivity, a subsequent send call, even after a long wait, will work just fine, since it creates a new sender link.
Once you're done using the ServiceBusClient, you can close the client; any internal senders and receivers are closed along with it if they were not already closed individually.
The latest version 7.0.0 of @azure/service-bus has been released recently.
@azure/service-bus - 7.0.0
Samples for 7.0.0
Guide to migrate from @azure/service-bus v1 to v7

Phantomjscloud not working with aws lambda nodejs

Creating an AWS Lambda function felt painful: I was able to deploy the same microservice easily with Google Cloud Functions, but when I ported it from GCF to Lambda, with some changes in the handler function (such as the context in AWS Lambda), and deployed the project .zip, it started throwing the unknown error shown below. The Lambda function works fine in the local environment:
{
  "errorMessage": "callback called with Error argument, but there was a problem while retrieving one or more of its message, name, and stack"
}
The logs show a syntax error in the parent script where the code begins, but there is no syntax error in index.js, which I confirmed by running node index.js. Anyway, I have attached the code snippet of index.js at the bottom.
START RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d Version: $LATEST
Syntax error in module 'index': SyntaxError
END RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d
I started narrowing down the piece of software causing the problem: I removed all the dependencies and added them back one by one, uploading the zip and running the Lambda each time, and finally found the culprit. It is phantomjscloud that is causing the problem.
When I include const phantomJsCloud = require('phantomjscloud'), it throws that error, even though my node_modules does include the phantomjscloud module. Are there any known glitches between AWS Lambda and phantomjscloud? I have no clue how to solve this; feel free to ask for any information you feel I have missed.
Here is the code that works well without const phantomJsCloud = require('phantomjscloud'):
global.async = require('async');
global.ImageHelpers = require('./services/ImageHelpers');
global.SimpleStorage = require('./services/SimpleStorage');
global.uuid = require('uuid');
global.path = require('path');
const phantomJsCloud = require('phantomjscloud')
const aadhaarController = require('./controllers/Aadhaar')
exports.handler = (event, context) => {
    // TODO implement
    aadhaarController.generateAadhaarCard(event, context);
};
[Screenshot: error message from the AWS Lambda function when phantomjscloud is included]
AWS Lambda used Node version 4.3, for which phantomjscloud was not supported; that is why it worked only on Google Cloud Functions, whose runtime environment was Node 6.9.2. This has now been fixed by the author. If you are seeing this answer, you might be using some other Node version that phantomjscloud does not support; raising a GitHub issue solved the problem.
