AWS Lambda: CloudwatchEvent.disableRule() doesn't work within Lambda but works locally - node.js

I'm trying to disable the Cloudwatch Event Rule that is triggering a Lambda, within the Lambda itself, so that it doesn't keep running when it's not necessary to do so.
I have a separate Lambda that calls enableRule to enable the rule, which seemed to work fine. The rule is not associated with the function that is doing the enableRule. EDIT: Turns out enableRule doesn't work in Lambda either.
However, this Lambda that's supposed to disable it isn't working.
Both functions already have Cloudwatch and CloudwatchEvent Full Access rights in their roles.
var cloudwatchEvents = new AWS.CloudWatchEvents();
var params = {
    Name: cloudwatchEventRuleName
};
console.log("this message will show up");
cloudwatchEvents.disableRule(params, function (err, data) {
    console.log("but this message never appears when it runs via Lambda for some reason!");
    if (err)
        console.log(err, err.stack);
    else
        console.log(data);
});
console.log("and this message will also show up");
The middle console.log in the callback never fires at all if I run this through Lambda. It works perfectly locally, however.
I even printed the cloudwatchEventRuleName to check for typos, but the rule name seems right. It's like Lambda is just outright skipping the disableRule call altogether for whatever reason.

So apparently, years later, setting up VPCs still haunts me.
Yep, it's a VPC configuration thing.
I could swear that the Subnet that the Lambda function was using had a route table that pointed to a properly set up Network Interface with a NAT Gateway Instance.
Out of curiosity, I tried making the route table entry of 0.0.0.0/0 point to the instance (i-#####) rather than the network interface (eni-######).
After I pressed Save in the Route Table, it automatically transformed into eni-######, similar to what I already had it set up...
Except this time the function actually started working now.
I have no idea what kind of magic AWS does so that associating an instance is not the same as associating a network interface, even though the former transformed into the same ID anyway, but whatever.
So for anyone encountering this same problem: always remember to double-check if your function actually has access to the internet to use the AWS APIs.
EDIT: Also another thing: I had to make sure that enableRule and disableRule were both awaited, because the AWS requests may never actually be sent if the handler returns before the request completes. So we turned the call into a promise just so we could await it:
try {
    await cloudwatchEvents.disableRule(params).promise().then((result) => console.log(result));
} catch (error) {
    console.log("Error when disabling rule!", error);
}
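Putting both fixes together, the pattern can be sketched as a factory that builds an async handler around the awaited disableRule call. This is a minimal sketch, not the poster's actual code: the factory shape and the rule name are mine, and the client is passed in as a parameter (in Lambda you would pass `new AWS.CloudWatchEvents()`).

```javascript
// Sketch of the awaited-disableRule pattern. The CloudWatchEvents client is
// injected so the handler can be exercised without AWS credentials; the
// factory name and rule name are illustrative, not from the original post.
function makeDisableHandler(cloudwatchEvents, ruleName) {
    return async function handler(event) {
        const params = { Name: ruleName };
        try {
            // .promise() turns the SDK request into a promise we can await,
            // so the handler does not return before the request completes.
            const result = await cloudwatchEvents.disableRule(params).promise();
            console.log('Rule disabled:', result);
            return result;
        } catch (error) {
            console.log('Error when disabling rule!', error);
            throw error;
        }
    };
}

module.exports = { makeDisableHandler };
```

In the Lambda itself this would be wired up as `exports.handler = makeDisableHandler(new AWS.CloudWatchEvents(), cloudwatchEventRuleName);` — and remember the function still needs a route to the internet (NAT) to reach the CloudWatch Events API.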

Related

SAP Cloud SDK for javascript using the destination

I have followed the tutorial and built the basic CF-based Node.js application to display all BusinessPartners from my S/4HANA on-premise destination.
function getAllBusinessPartners(): Promise<BusinessPartner[]> {
    return BusinessPartner.requestBuilder()
        .getAll()
        .execute({
            destinationName: 'MockServer'
        });
}
Destination is configured with the Virtual host from cloud connector.
But after deploying to Cloud Foundry, I get the following error for the GET request:
{"message":"Service of type destination is not supported! Consider providing your own transformation function when calling destinationForServiceBinding, like this:\n destinationServiceForBinding(yourServiceName, { serviceBindingToDestination: yourTransformationFunction });","level":"warn","custom_fields":{"package":"core","messageContext":"destination-accessor"},"logger":"sap-cloud-sdk-logger","timestamp":"2020-03-09T18:15:41.856Z","msg":"Service of type destination is not supported! Consider providing your own transformation function when calling destinationForServiceBinding, like this:\n destinationServiceForBinding(yourServiceName, { serviceBindingToDestination: yourTransformationFunction });","written_ts":1583777741856,"written_at":"2020-03-09T18:15:41.856Z"}
The application is already bound to the Destination service as well.
Can someone help me here? What went wrong? Or is the approach to using destinations different in the new version of the Cloud SDK?
After a lot of attempts, I have made this work.
My observations:
The Connectivity service is also required to be bound when using an on-premise S/4 backend.
There were no errors in the log. I made some modifications to the code to use async/await:
async function getAllBusinessPartners(): Promise<BusinessPartner[]> {
    return await BusinessPartner.requestBuilder()
        .getAll()
        .execute({
            destinationName: 'MockServer'
        });
}
After this modification, when I hit the GET request, it gave me the following error:
"Failed to get business partners - get request to http://s4h-scc-basic:500/sap/opu/odata/sap/API_BUSINESS_PARTNER/sap/opu/odata/sap/API_BUSINESS_PARTNER failed!"
I noticed that the suffix after http://domain:port appears twice: once from what I gave in the destination, and once added automatically by the VDM.
Ideally, this error should have been thrown even before adding async/await.
After removing the suffix from the destination, it started to work.
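The duplicated-path failure above can be reproduced with plain string concatenation: the VDM always appends the OData service path for the business-partner service to the destination URL, so if the destination URL already ends with that path, the request URL contains it twice. A rough sketch (the join logic is simplified; the host and port are taken from the error message above):

```javascript
// The VDM appends the OData service path to whatever the destination URL is.
const servicePath = '/sap/opu/odata/sap/API_BUSINESS_PARTNER';

// Simplified stand-in for how the SDK composes the request URL.
function buildRequestUrl(destinationUrl) {
    return destinationUrl.replace(/\/$/, '') + servicePath;
}

// Wrong: the destination URL already contains the service path,
// so the path ends up in the request URL twice.
console.log(buildRequestUrl('http://s4h-scc-basic:500' + servicePath));

// Right: the destination URL is just protocol://host:port.
console.log(buildRequestUrl('http://s4h-scc-basic:500'));
```

This is why the fix is to keep the destination URL down to protocol, host, and port, and let the SDK add the service path.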
If your request really does error, what you posted here from your logs is most likely not the reason for the failure. We are aware that this message is confusing and will improve it (https://github.com/SAP/cloud-sdk/pull/32).
Can you check whether there are more errors in your logs? Based on the code you posted and the setup you described, this should work. Do you have a binding to the XSUAA service?

How to test advanced interactions when developing an Alexa Skill

I am developing an Alexa Skill, and I am struggling a bit in understanding if I setup everything in the best way possible to debug while developing.
Right now I am developing locally using Node.js, uploading to the cloud when ready, and testing all the responses to intents using the Service Simulator in the Test section of the developer console.
I find the process a bit slow but working... But still, I have two questions:
1) Is there a way to avoid the process of uploading to the cloud?
And most importantly: 2) How do I test advanced interactions, for example multi-step ones, in the console? How, for example, do I test triggering the response to an intent, but then asking the user for confirmation (Yes/No)? Right now the only way of doing that is using the actual device.
Any improvement is highly appreciated.
Like @Tom suggested - take a look at bespoken.tools for testing skills locally.
Also, the Alexa Command Line Interface was recently released and it has some command line options you might look into.
For example, the 'api invoke-skill' command lets you invoke the skill locally via the command line (or script) so you don't have to use the service simulator. Like this...
$ask api invoke-skill -s $SKILL_ID -f $JSON --endpoint-region $REGION --debug
Here is a quick video I did that introduces the ASK CLI. It doesn't specifically cover testing but it will provide a quick intro.
https://youtu.be/p-zlSdixCZ4
Hope that helps.
EDIT: Had another thought for testing locally. If you're using node and Lambda functions, you can call the index.js file from another local .js file (example: test.js) and pass in the event data and context. Here is an example:
// path to the Lambda index.js file
var lambdaFunction = require('../lambda/custom/index.js');
// JSON representing the event - just copy from the service simulator
var event = require('./events/GetUpdateByName.json');
var context = {
    'succeed': function (data) {
        console.log(JSON.stringify(data, null, '\t'));
    },
    'fail': function (err) {
        console.log('context.fail occurred');
        console.log(JSON.stringify(err, null, '\t'));
    }
};
function callback(error, data) {
    if (error) {
        console.log('error: ' + error);
    } else {
        console.log(data);
    }
}
// call the lambda function
lambdaFunction.handler(event, context, callback);
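If you want to drive the same local invocation from an async test runner such as Mocha, the succeed/fail context and the callback can be wrapped in a promise. This is a sketch of my own making, not part of the answer above; the commented file paths reuse the illustrative ones from that example.

```javascript
// Wrap a Lambda-style handler(event, context, callback) in a promise so it
// can be awaited from a local test script or a Mocha test.
function invokeHandler(handler, event) {
    return new Promise(function (resolve, reject) {
        var context = {
            succeed: resolve,
            fail: reject
        };
        handler(event, context, function (error, data) {
            if (error) reject(error);
            else resolve(data);
        });
    });
}

// Usage (paths illustrative, as in the example above):
// var lambdaFunction = require('../lambda/custom/index.js');
// var event = require('./events/GetUpdateByName.json');
// invokeHandler(lambdaFunction.handler, event).then(console.log, console.error);
```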
Here's how I'm testing multi-step interactions locally.
I'm using a 3rd party, free, tool called BSTAlexa:
http://docs.bespoken.tools/en/latest/api/classes/bstalexa.html
It emulates Amazon's role in accepting requests, feeding them to your skill, and maintaining the state of the interactions.
So I start my test script by configuring BSTAlexa - pointing it to my skill config (e.g. intents) and to a local instance of my skill (in my case I'm giving it a local URL).
Then I feed BSTAlexa a sequence of textual requests and verify that I'm getting back the expected responses. And I put all this in a Mocha script.
It works quite well.
Please find the answers below (answering in reverse order):
You can test multi-step interactions using the simulator (echosim.io), but each time you have to press and hold the mic button (or hold the space bar). For example, if you ask Alexa something with Echosim and Alexa responds asking you to confirm with yes/no, you have to press and hold the mic button again to give the confirmation.
You can automate the Lambda deployment process. Please see this link:
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
It would also be good to write complete unit tests so that you can test your logic before uploading to Lambda. This helps reduce the number of Lambda deployments as well.

Can't figure out how to get write access to work

I'm talking to the Nest Cloud API from Node.js using the firebase node module. I'm using an accessToken that I got from https://api.home.nest.com/oauth2/access_token, and this seems to work: my Nest user account got prompted to "accept" this request, which I did, and my app is listed on the Nest Account "Works with Nest" page, so all looks good. I use this accessToken in a call to authWithCustomToken, and that works; my Node.js application requested read/write permission (see https://developers.nest.com/products/978ea6e2-c301-4dff-8b38-f63d80757162). Reading the Nest thermostat properties from https://developer-api.nest.com/devices/thermostats/[deviceid] works, but when I try to write to hvac_mode like this:
this.firebaseRef = new Firebase("https://developer-api.nest.com");
this.myNestThermostat = this.firebaseRef.child("devices/thermostats/"+deviceId);
this.myNestThermostat.set("{'hvac_mode': 'off'}", function (error) { ... }
and this always returns:
FIREBASE WARNING: set at /devices/thermostats/fwxNBtjaok6KZJbSXhf2azuBmGSvkcjK failed: No write permission(s) for field(s): /devices/thermostats/fwxNBtjaok6KZJbSXhf2azuBmGSvkcjK
(where the deviceId is what I see when I enumerate my devices, I only have one, so I'm pretty sure it is correct).
Any ideas?
Well, isn't that always the case, I found an answer already. Turns out if I create a firebase reference to the property itself like this:
var propertyRef = this.myNestThermostat.child(name);
Then the following succeeds:
propertyRef.set(value, function (error) { ...
The firebase documentation was misleading on this because it led me to believe I could write this:
this.myNestThermostat.set("{'hvac_mode': 'off'}", function (error) { ... }
which technically should have worked (note, though, that the call above passes a string rather than an object, so Firebase would have tried to store a literal string), but I guess that would mean I'd need write access on the whole of this.myNestThermostat, which I don't. Tricky.
Anyway I'm happy because it works now, yay! Firebase + nodejs rocks!
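The working pattern above can be condensed into a small helper that writes one field at a time through a child reference. This is a sketch of my own; the helper name is made up, and the reference can be any Firebase-style ref like this.myNestThermostat.

```javascript
// Write a single field through its own child reference. The Nest API appears
// to grant write permission per field (the error message lists "field(s)"),
// so setting the child works where setting the whole thermostat does not.
function setThermostatProperty(thermostatRef, name, value, done) {
    var propertyRef = thermostatRef.child(name);
    // Pass the raw value ('off'), not a JSON-ish string like "{'hvac_mode': 'off'}".
    propertyRef.set(value, done);
}

// Usage: setThermostatProperty(this.myNestThermostat, 'hvac_mode', 'off', cb);
```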

How to use Lambda and API Gateway to trigger a custom event?

I'm trying to understand how the AWS API Gateway works with Lambda. What I want to do is quite simple:
When I submit a basic form on a locally hosted web page, this simple action should invoke a Lambda function.
I know I need to use AWS API Gateway to accomplish this, and I've read some tutorials online, but I can't figure out how to start a Lambda function after a custom event.
Thanks for any help.
It is easier to understand if you work backwards. First, make your custom event handler. Amazon provides a good overview of what you need to do here:
http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html
If you need more of a kick in the right direction, LithosTech has a well-written guide to handling FORM POST events in Lambda here:
http://lithostech.com/2015/10/aws-lambda-example-contact-form-handler/
At its simplest level, you're going to have a function that takes an event parameter and does something with its values:
var AWS = require('aws-sdk');
exports.handler = function(event, context) {
    console.log('Received event:', JSON.stringify(event, null, 2));
    // TODO: Do something with event.name, event.email, event.*, ...
};
After you make this function in a .js file, upload it using the Lambda Web console - you can do it entirely from the command line, but it's easier to use the Web interface when you're first starting out. The biggest benefit to doing it this way is that during the creation process, you'll be asked if you want to make an API Gateway endpoint for the function - say yes! This will automatically create a suitable entry for you and give you the details. Drop those in your form and you're off to the races!

Checking availability of address when binding in ZMQJS

I'm working with an automated system written in Node.js that creates nodes on the fly across the cloud, connecting them by means of the ZMQ binding for Node.js. Sometimes I get the error Error: Address already in use, which is my bad because I have a bug somewhere. I would like to know if it's possible with the Node.js binding of ZMQ to check the availability of an address before binding to it.
It's not really what I was searching for, but in the end I decided to go for the "simple" solution and use a try-catch block to check if there is an error when binding to a host:port. In practice this is what I do:
try {
    receiver.bindSync("tcp://" + host + ":" + port);
}
catch (e) {
    console.log(e);
}
Which is stupid and straightforward. I was looking for a more accurate way to do this (for example, as mentioned in the question, a function to check the availability of the address rather than catching the error).
