Checking availability of address when binding in ZMQJS - node.js

I'm working with an automated system written in Node.js that creates nodes on the fly across the cloud, connecting them by means of the ZMQ binding for Node.js. Sometimes I get the error Error: Address already in use, which is my fault because I have a bug somewhere. I would like to know whether it's possible, with the Node.js binding of ZMQ, to check the availability of an address before binding to it.

It's not really what I was searching for, but in the end I decided to go for the "simple" solution and use a try-catch block to check if there is an error when binding to a host:port. In practice this is what I do:
try {
  receiver.bindSync("tcp://" + host + ":" + port);
} catch (e) {
  console.log(e);
}
This is crude but straightforward. I was looking for a more accurate way to do it (for example, as mentioned in the question, a function to check the availability of the address rather than catching the error).
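If a rough pre-check is enough, one option is to probe the port with Node's built-in net module before handing it to ZMQ. This is a minimal sketch under that assumption; note there is still a race window between the check and the actual bind, so keeping the try/catch around bindSync is still advisable:

var net = require('net');

// Probe host:port by briefly listening on it with a plain TCP server.
// If listening succeeds, the address is currently free and we release it
// again so ZMQ can bind to it afterwards.
function checkAddressAvailable(host, port, callback) {
  var probe = net.createServer();
  probe.once('error', function () {
    callback(false); // e.g. EADDRINUSE
  });
  probe.once('listening', function () {
    probe.close(function () {
      callback(true);
    });
  });
  probe.listen(port, host);
}

checkAddressAvailable(host, port, function (available) {
  if (available) {
    receiver.bindSync("tcp://" + host + ":" + port);
  }
});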

Related

AWS Lambda: using CloudwatchEvent.disableRule() doesn't work within Lambda but works locally

I'm trying to disable the CloudWatch Event Rule that triggers a Lambda, from within the Lambda itself, so that it doesn't keep running when it's not necessary.
I have a separate Lambda that calls enableRule to enable the rule, which seems to work fine. The rule is not associated with the function that is calling enableRule. EDIT: It turns out enableRule doesn't work in Lambda either.
However, the Lambda that's supposed to disable the rule isn't working.
Both functions already have CloudWatch and CloudWatch Events full-access rights in their roles.
var AWS = require('aws-sdk');
var cloudwatchEvents = new AWS.CloudWatchEvents();

var params = {
  Name: cloudwatchEventRuleName
};

console.log("this message will show up");
cloudwatchEvents.disableRule(params, function (err, data) {
  console.log("but this message never appears when it runs via Lambda for some reason!");
  if (err)
    console.log(err, err.stack);
  else
    console.log(data);
});
console.log("and this message will also show up");
console.log("and this message will also show up");
The middle console.log never runs at all when I run this through Lambda. It works perfectly on my local machine, however.
I even printed cloudwatchEventRuleName to check for typos, but the rule name looks right. It's as if the function is outright skipping the disableRule call altogether for whatever reason.
So apparently, years later, setting up VPCs still haunts me.
Yep, it's a VPC configuration thing.
I could have sworn that the subnet the Lambda function was using had a route table that pointed to a properly set up network interface with a NAT Gateway instance.
Out of curiosity, I tried making the 0.0.0.0/0 route table entry point to the instance (i-#####) rather than the network interface (eni-######).
After I pressed Save in the route table, it automatically transformed into eni-######, similar to what I already had set up...
Except this time the function actually started working.
I have no idea what kind of magic AWS does such that associating an instance is not the same as associating a network interface, even though the former transformed into the same ID anyway, but whatever.
So for anyone encountering this same problem: always remember to double-check that your function actually has internet access so it can reach the AWS APIs.
EDIT: One more thing: I had to make sure that the enableRule and disableRule calls were both awaited, because the AWS requests may not actually be sent if the handler returns before the request completes. So we turned the call into a promise just so we could await it:
try {
  await cloudwatchEvents.disableRule(params).promise().then((result) => console.log(result));
} catch (error) {
  console.log("Error when disabling rule!", error);
}
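For context, a minimal sketch of how this might look inside an async handler; the rule name here is a placeholder, not taken from the original post:

const AWS = require('aws-sdk');
const cloudwatchEvents = new AWS.CloudWatchEvents();

exports.handler = async (event) => {
  const params = { Name: 'my-rule-name' }; // placeholder rule name
  try {
    // Awaiting the promise keeps the handler alive until the API call finishes.
    const result = await cloudwatchEvents.disableRule(params).promise();
    console.log(result);
  } catch (error) {
    console.log("Error when disabling rule!", error);
  }
};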

SAP Cloud SDK for JavaScript using the destination

I have followed the tutorial and built a basic CF-based Node.js application to display all BusinessPartners from my S/4HANA on-premise destination.
function getAllBusinessPartners(): Promise<BusinessPartner[]> {
  return BusinessPartner.requestBuilder()
    .getAll()
    .execute({
      destinationName: 'MockServer'
    });
}
The destination is configured with the virtual host from the cloud connector.
But after deploying to Cloud Foundry, I get the following error for the GET request:
{"message":"Service of type destination is not supported! Consider providing your own transformation function when calling destinationForServiceBinding, like this:\n destinationServiceForBinding(yourServiceName, { serviceBindingToDestination: yourTransformationFunction });","level":"warn","custom_fields":{"package":"core","messageContext":"destination-accessor"},"logger":"sap-cloud-sdk-logger","timestamp":"2020-03-09T18:15:41.856Z","msg":"Service of type destination is not supported! Consider providing your own transformation function when calling destinationForServiceBinding, like this:\n destinationServiceForBinding(yourServiceName, { serviceBindingToDestination: yourTransformationFunction });","written_ts":1583777741856,"written_at":"2020-03-09T18:15:41.856Z"}
The application is already bound to the Destination service as well.
Can someone help me figure out what went wrong? Or is the approach to using destinations different in the new version of the Cloud SDK?
After a lot of attempts, I have made this work.
My observations:
The Connectivity service also needs to be bound when using an on-premise S/4 backend.
There were no errors in the log; I made certain modifications in the code to use async/await:
async function getAllBusinessPartners(): Promise<BusinessPartner[]> {
  return await BusinessPartner.requestBuilder()
    .getAll()
    .execute({
      destinationName: 'MockServer'
    });
}
After this modification, when I hit the GET request, it gave me the following error:
"Failed to get business partners - get request to http://s4h-scc-basic:500/sap/opu/odata/sap/API_BUSINESS_PARTNER/sap/opu/odata/sap/API_BUSINESS_PARTNER failed!"
I could see that the path suffix after http://domain:port appears twice: once from what I entered in the destination, and once added automatically by the VDM.
Ideally, this error should have been thrown even before adding async/await.
After removing the suffix from the destination, it started to work.
If your request really does error, what you posted here from your logs is most likely not the reason for the failure. We are aware that this message is confusing and will improve it (https://github.com/SAP/cloud-sdk/pull/32).
Can you check whether there are more errors in your logs? Based on the code you posted and the setup you described, this should work. Do you have a binding to the XSUAA service?

Does Golang's net.LookupHost() use all DNS servers in "/etc/resolv.conf"?

I have an application written in Go that uses this function, and it keeps failing to resolve a DNS name. I can resolve the name on the server just fine using other applications, but not with the Go-based one that uses this function.
When in doubt, "Use the source, Luke". Reading dnsclient_unix.go reveals that it iterates over all configured servers.
But mind the note:
// If answer errored for rcodes dnsRcodeSuccess or dnsRcodeNameError,
// it means the response in msg was not useful and trying another
// server probably won't help. Return now in those cases.
// TODO: indicate this in a more obvious way, such as a field on DNSError?

How to handle Node.js app errors to prevent crashing

I am new to Node and what I would call real server-side programming (vs. PHP). I was setting up a user database with MongoDB, Mongoose, and a simple Mongoose user plugin that came with a schema and password handling. You can add validation to Mongoose fields like so:
schema.path('email').validate(function (email) {
  if (this.skipValidation) return true
  return email.trim().length
}, 'Please provide a valid email')
(This is not my code.) I noticed, though, that when I passed an invalid or blank email, .trim() failed and the entire server crashed. This is very worrisome to me because things like this don't happen in your good ol' WAMP stack. If you have a bug, 99.9% of the time it's just that one browser request that is affected.
Now that I am delving into lower-level programming, do I have to be paranoid about every incoming variable to a simple function? Is there a tried-and-true error-handling system I should follow?
Just check that the variable is set before calling trim on it, for example:
if (!email) {
  return false;
}
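Applied to the validator from the question, that guard would look something like this (a sketch that keeps the original plugin's structure):

schema.path('email').validate(function (email) {
  if (this.skipValidation) return true
  if (!email) return false             // guard against undefined/null before calling trim
  return email.trim().length > 0
}, 'Please provide a valid email')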
And if you want to run your app forever, use PM2 instead.
If you are interested in keeping it running forever, read this interesting post: http://devo.ps/blog/goodbye-node-forever-hello-pm2/
You may consider using forever to keep your Node.js program running. Even if it crashes, it restarts automatically and the error is logged as well.
Note: Although you could actually catch all exceptions to prevent Node.js from crashing, doing so is not recommended.
One of our strategies is to make use of Node.js Domain to handle errors - http://nodejs.org/api/domain.html
You should set up an error-logging module like Winston; once configured, it produces useful error/exception logs.
Have a look at this answer for how to catch errors within your Node implementation; it is specific to Express but still relevant.
Once you catch exceptions, you prevent unexpected crashes.
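As a rough illustration of the Express-specific approach mentioned above (a sketch, not the linked answer's exact code):

var express = require('express');
var app = express();

// ...regular routes go here...

// Error-handling middleware: the four-argument signature tells Express
// this middleware handles errors thrown or passed via next(err).
app.use(function (err, req, res, next) {
  console.error(err.stack);          // a logger like Winston could be plugged in here
  res.status(500).send('Something went wrong');
});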

Is it possible to replace or alias core modules in Node.js?

I'm currently working on a project where one of the core Node.js modules (dns) does not behave the way I need it to. I've found a module that seems like it would work as a replacement: https://github.com/tjfontaine/node-dns. However, the code using the DNS module is several layers down from the application code I've written. I'm using the Request module (https://github.com/mikeal/request) which makes HTTP requests and uses several core modules to do so. The Request module does not seem to be using the DNS module directly, but I'm assuming one of those core modules is calling the DNS module.
Is there a way I can tell Node to use https://github.com/tjfontaine/node-dns whenever require('dns') is called?
Yes, and you should not.
require.cache is an extremely dangerous thing. It can cause memory leaks if you do not know what you are doing, and cache mismatches, which are potentially worse. Most attempts to change core modules can also result in unintentional side effects (such as discoverability failures with DNS).
You can create a user-space require with something like https://github.com/bmeck/node-module-system; however, this faces the same dangers, it is just not directly tied to core.
My suggestion would be to wrap your require('dns').resolve with require('async').memoize, but be aware that DNS discoverability may fall over.
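A minimal sketch of that suggestion, assuming the async library is installed; it caches lookups per hostname rather than replacing the dns module:

var dns = require('dns');
var async = require('async');

// Memoize dns.resolve so repeated lookups for the same hostname hit the cache.
var cachedResolve = async.memoize(dns.resolve.bind(dns));

cachedResolve('example.com', function (err, addresses) {
  if (err) return console.error(err);
  console.log(addresses);
});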
For better or worse, I've implemented module whitelists before by doing something like what is demonstrated below. In your case, it ought to be possible to explicitly check for the dns module name and delegate everything else to the original require(). However, this implementation assumes that you have full control of when and how your own code is executed.
var vm = require('vm');

var _require = constructMyOwnRequire(/* intercept 'dns' and require something else */);
var sandbox = Object.freeze({
  /* directly import all other globals like setTimeout, setInterval, etc. */
  require: Object.freeze(_require)
});

try {
  vm.runInContext(YOUR_SCRIPT, Object.freeze(vm.createContext(sandbox)));
} catch (exception) {
  /* stuff */
}
Not really.
require is a core variable that is local to each module, so you can't stub it, because Node will give the untouched require variable to the loaded module.
You could run those things using the vm module, but you would have to write too much code for a "simple workaround" (pass all needed variables to the request module, stub the ones needed to work properly, etc., etc.).
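For completeness, one hook people sometimes use for this kind of interception is Node's internal Module._load. It is undocumented and may break between Node versions, so treat the following as a fragile sketch rather than a recommendation; the replacement package name below is a placeholder:

var Module = require('module');
var originalLoad = Module._load;

// Redirect require('dns') to a replacement module; everything else
// falls through to the original loader. Internal API: use with caution.
Module._load = function (request, parent, isMain) {
  if (request === 'dns') {
    return originalLoad.call(this, 'dns-replacement', parent, isMain); // placeholder package name
  }
  return originalLoad.apply(this, arguments);
};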
