How to push console.log from a Node.js application to Elasticsearch

I am using the elastic-apm-node package to send APM data to Elastic from a Node.js app; I can call something like apm.captureError(error) to send an error to Elastic.
Is there a way I can send a simple console.log() to Elastic?
I basically need functionality that lets me track every function that gets called in my app:
function functionA () {
  console.log("Function A called, send to Elastic");
  doSomething();
}

function doSomething () {
  console.log("Function doSomething called, send to Elastic");
}
I found some examples using Elastic Filebeat, but I think this should be achievable using APM, since the connection between the app and Elastic is already established.

APM libraries like that (from any provider; it's the same with New Relic, Datadog, etc.) work by instrumenting (basically, wrapping) functions in common libraries like Express and Koa, so they automatically pick up function calls such as route handlers, but they won't pick up anything else unrelated. And APM is for performance (and error) monitoring; logs are really a separate concern, which is why there's a separate solution for them.
But you could do it with custom spans, by overriding console.log or writing a wrapper. For example:
const apm = require('elastic-apm-node')

const elasticLog = (data) => {
  // startSpan returns null when there is no active transaction
  const span = apm.startSpan(data)
  if (span) span.end()
}

const logger = process.env.NODE_ENV === 'production' ? elasticLog : console.log

function foo () {
  logger('In foo')
  bar()
}

function bar () {
  logger('In bar')
}
As for automatically detecting the names of called functions, that's tricky. And running the logger on every function that's called, without actually calling the logger yourself, is pretty much impossible in this sort of context.
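If you do want to pick up the caller's name without hardcoding it, one option is to parse a stack trace. A minimal sketch, assuming V8's stack-frame format (which is not a stable, documented interface); callerName is a helper introduced here for illustration, and foo is a variant of the foo above:

const callerName = () => {
  // frame 0 is "Error", frame 1 is callerName itself, frame 2 is the caller
  const frame = new Error().stack.split('\n')[2] || ''
  const match = /at (\S+)/.exec(frame)
  return match ? match[1] : '<anonymous>'
}

function foo () {
  logger(`In ${callerName()}`) // logs "In foo" without hardcoding the name
}

Creating an Error and splitting its stack on every log call is comparatively expensive, so this belongs outside hot paths.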

Related

Firebase Functions: How to maintain 'app-global' API client?

How can I achieve an 'app-wide' global variable that is shared across Cloud Function instances and function invocations? I want to create a truly 'global' object that is initialized only once for the lifetime of all my functions.
Context:
My app's entire backend is Firestore + Firebase Cloud Functions. That is, I use a mix of background (Firestore) triggers and HTTP functions to implement backend logic. Additionally, I rely on a 3rd-party location service to continually listen to location updates from sensors. I want just a single instance of the client on which to subscribe to these updates.
The problem is that Firebase/Google Cloud Functions are stateless, meaning that function instances don't share memory/objects/state. If I call functionA, functionB, and functionC, there are going to be at least 3 instances of locationService clients created, each listening separately to the 3rd-party service, so we end up with duplicate invocations of the location API callback.
Sample code:
// index.js
const functions = require("firebase-functions");
exports.locationService = require('./location_service');
this.locationService.initClient();
// define callable/HTTP functions & Firestore triggers
...
and
// location_service.js
var tracker = require("third-party-tracker-js");

const self = (module.exports = {
  initClient: function () {
    tracker.initialize('apiKey')
      .then((client) => {
        client.setCallback(async function (payload) {
          console.log("received location update: ", payload);
          // process the payload ...
          // with multiple function instances running at once, we receive as
          // many callbacks for each location update
        });
        client.subscribeProject()
          .then((subscription) => {
            subscription.subscribe()
              .then((subscribeMsg) => {
                console.log("subscribed to project with message: ", subscribeMsg); // success
              });
            // subscription.unsubscribe(); // ??? at what point should we unsubscribe?
          })
          .catch((err) => {
            throw err;
          });
      })
      .catch((err) => {
        throw err;
      });
  },
});
I realize what I'm trying to do is roughly equivalent to implementing a daemon in a single-process environment, and it appears that serverless environments like Firebase/Google Cloud Functions aren't designed to support this need because each instance runs as its own process. But I'd love to hear any contrary ideas and possible workarounds.
Another idea...
Inspired by this related SO post and the official GCF docs on stateless functions, I thought about using Firestore to persist a tracker value that allows us to conditionally initialize the API client. Roughly like this:
// read value from db; only initialize the client if there's no valid subscription
let locSubscriberActive = await getSubscribeStatusFromDb();
if (!locSubscriberActive) {
  this.locationService.initClient();
}
// in `location_service.js`, call setSubscribeStatusToDb() to set the flag to true
// when we call subscribe(); reset it when we get terminated
The problem I face: at what point do I unset/reset that value? Intuitively, I would do so the moment the function instance that initialized the client gets recycled/killed. However, it appears that it is not possible to know when a Firebase Cloud Functions instance is terminated; I searched everywhere but couldn't find docs on how to detect such an event.
What you're trying to do is not at all supported in Cloud Functions. It's important to realize that there may be any number of server instances allocated for each deployed function. That's how Cloud Functions scales up and down to match the load on the function in a cost-effective way. These instances might be terminated at any time for any reason. You have no indication when an instance terminates.
Also, instances are not capable of performing any computation when they are idle. CPU resources are clamped down after a function terminates, and are spun up again when the next function is invoked on that instance. You can't have any "daemon" code running when a function is not actively being invoked. I don't know what your locationService does, but it is certainly doing nothing at all after a function terminates, regardless of how it terminated.
For any sort of long-running or daemon-like code, Cloud Functions is not a suitable product. You should instead consider also using another product that lets you run code 24/7 without disruption. App Engine and Compute Engine are viable alternatives, and you will have to think carefully about if and how you want their server instances to scale with load.
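For what it's worth, the pattern Cloud Functions does support is per-instance reuse via a lazily initialized module-scope variable. A minimal sketch, reusing the tracker from the question's code; note this yields one client per instance, not the single app-wide subscriber the question asks for:

// survives warm invocations on the same instance only; each new instance
// still creates its own client, so duplicate callbacks are not eliminated
let client;

async function getClient() {
  if (!client) {
    client = await tracker.initialize('apiKey');
  }
  return client;
}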

Handle message before event ws nodejs

I'm using ws version 7.4.0 and I would like to display a console log or perform operations between the moment the client sends a message to the server and the moment the server fires the message event.
To represent it:
webserver.on('example', function callback(msg){console.log(msg);}); //act before the call of callback
client------server---[here]---callback
The only way I see right now would be to use a "root" function before the callback of all my events, like this:
function callback(msg){console.log(msg);}
webserver.on('example', function root(msg) {console.log('example msg'); callback(msg);});
I don't know if this is a real and/or good solution; I really want to write a clean and organized application.
Could someone give me some advice or a real solution? Thank you.
You could make a wrapper for all of your callbacks, like so:
function makeCallback(fn) {
  return function (msg) {
    if (!environment.prod) console.log(msg);
    fn(msg);
  };
}

var myCallback = makeCallback(function (msg) {
  // something
});

webserver.on('example', myCallback);
Or, I think, the better solution is to stream the requests into your stdout, although I don't know the implications of using this method.
I also want to address the naming of your WebSocket server. Even though a WebSocket server is technically a web server, it only speaks the WebSocket protocol, so naming it webserver could be misleading. I would recommend following the naming used in the ws docs: wss.
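Putting both points together, a minimal sketch of the wrapper applied at connection time, using the wss naming from the ws docs (ws ~7.x assumed; the handler body is illustrative):

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  // every message handler passes through the shared pre-processing wrapper
  ws.on('message', makeCallback(function (msg) {
    // handle the message itself here
  }));
});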

Send request progress to client side via nodejs and express

I am using the contentful-export library in my Express app, like so:
const express = require('express');
const app = express();
...
app.get('/export', (req, res, next) => {
  const contentfulExport = require('contentful-export');
  const options = {
    ...
  };
  contentfulExport(options).then((result) => {
    res.send(result);
  });
});
Now this does work, but the method takes a while and only reports status/progress messages to the Node console; I would like to keep the user updated as well. Is there a way I can send the Node console progress messages to the client?
This is my first time using Node/Express, so any help would be appreciated. I'm not sure if this already has an answer, since I'm not entirely sure what to call it.
Looking at the documentation for contentful-export, I don't think this is possible. The way this usually works in Node is that you have an object (contentfulExport in this case), you call a method on this object, and the same object is also an EventEmitter. This way you'd get a hook to react to fired events.
// pseudo code
someLibrary.on('someEvent', (event) => { /* do something */ })
someLibrary.doLongRunningTask()
  .then(/* ... */)
This is not documented for contentful-export so I assume that there is no way to hook into the log messages that are sent to the console.
Your question has another tricky angle, however. In the code you shared there is a single endpoint (/export). If you would like to display updates or show progress, you'd probably need a second endpoint giving information about the progress of your long-running task (which you cannot get from contentful-export, though).
The way this is usually handled is that you kick off the long-running task via one HTTP endpoint and then use another endpoint that serves info via polling or a WebSocket connection.
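A minimal sketch of that kick-off/polling pattern, assuming the Express app from the question; the in-memory jobs store and route names are illustrative (and only work within a single process), and since contentful-export exposes no progress hook, the status can only move from running to done or failed:

const jobs = {}; // illustrative in-memory job store

app.post('/export', (req, res) => {
  const id = Date.now().toString();
  jobs[id] = { status: 'running' };
  contentfulExport(options)
    .then((result) => { jobs[id] = { status: 'done', result }; })
    .catch((err) => { jobs[id] = { status: 'failed', error: err.message }; });
  res.status(202).send({ id }); // reply immediately with a job id to poll
});

app.get('/export/:id/status', (req, res) => {
  res.send(jobs[req.params.id] || { status: 'unknown' });
});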
Sorry that I can't give a proper solution, but due to the limitations of contentful-export I don't think there is a clean/easy way to show the progress of the exported data.
Hope that helps. :)

AWS Lambda Dynamic DB Switching Singleton (Node)

I'm trying to take advantage of db connection reuse in Lambda, by keeping the code outside of the handler.
For example - something like:
import dbconnection from './connection'

const handler = (event, context, callback) => {
  // use dbconnection
}
The issue is that I don't know which database to connect to until I do a lookup to see where they should be connecting. In my specific case I have customer=foo in a query param, and then I can look up that foo should connect to database1.
So what I need to do is something like this :
const dbconnection = require('./connection')('database1')
The way it is now, I need to do this in every handler method, which is expensive.
Is there some way I can pull the query parameter, look up my database and set it / switch it globally within the Lambda execution context?
I've tried this:
import dbconnection from './connection'

const handler = (event, context, callback) => {
  const client = dbconnection.setDatabase('database1')
}
// ./connection.js
setDatabase(database) {
  if (this.currentDatabase !== database) {
    // connect to a different database
    this.currentDatabase = database;
  }
}
Everything works locally with sls offline but doesn't work through the AWS Lambda execution context. Thoughts?
You can either hardcode the database (or provide it via an environment variable) or you can't. If you can, pull the connection out of the handler and it will not be executed on each invocation. If you can't, as you have mentioned, then what you are trying to do is make Lambda stateful. Lambda was designed to be stateless, and AWS intentionally doesn't expose specifics about the underlying containers, precisely so that you don't start doing something like what you are trying to do now: introducing state.
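That said, a common middle ground is to cache connections per database name in module scope, so warm invocations on the same container reuse whatever has already been opened. A minimal sketch; createConnection and lookupDatabase are hypothetical placeholders for your driver's factory and your customer-to-database lookup:

const connections = {}; // survives warm invocations on this container

async function getConnection(database) {
  if (!connections[database]) {
    connections[database] = await createConnection(database); // hypothetical factory
  }
  return connections[database];
}

exports.handler = async (event) => {
  const customer = event.queryStringParameters.customer;
  const database = await lookupDatabase(customer); // hypothetical lookup
  const client = await getConnection(database);
  // use client ...
};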

Implementing isRequestFromAssistant in Node.js on actions-on-google project fulfillment

I am having trouble implementing the isRequestFromAssistant method to verify requests to my fulfillment webhook. Using Node.js, I instantiate the following variables at the start of my index.js file:
const App = require('actions-on-google').ApiAiApp;
const app = new App({ request, response });
I then use "app" with the .ask and .tell and other methods throughout my functions.
The code I see in the docs for implementing isRequestFromAssistant is:
const app = new ActionsSdkApp({ request, response });
app.isRequestFromAssistant('my-project-id')
  .then(() => {
    app.ask('Hey there, thanks for stopping by!');
  })
  .catch((err) => {
    response.status(400).send();
  });
If I leave out the first line and use my existing app variable, created with ApiAiApp instead of ActionsSdkApp, it doesn't work. If I create new variables App1 and app1 using ActionsSdkApp and change the above code to app1.isRequestFromAssistant, it also doesn't work. I have tried other variations with no luck.
When I say it doesn't work, I mean I receive a 500 Internal Server Error when I call it. I am currently hosting it with ngrok. I am still a beginner with Node.js, although I have managed to get the other 700 lines of code working just fine, learning mostly from Google searches and reading these forums.
You have a few things going on here which, individually or in combination, may be causing the problem.
First - make sure you have the most recent version of the actions-on-google library. The isRequestFromAssistant() function was added in version 1.6.0, I believe.
Second - Make sure you're creating the right kind of App instance. If you're using Dialogflow (formerly API.AI), you should be creating it with something like
const App = require('actions-on-google').DialogflowApp;
const app = new App( {request, response} );
or
const { DialogflowApp } = require('actions-on-google');
const app = new DialogflowApp( {request, response} );
(They both do the same thing, but you'll see both forms in documentation.) You should switch to DialogflowApp from ApiAiApp (which your example uses) to reflect the new name, but the old form has been retained.
If you're using the Actions SDK directly (not using Dialogflow / API.AI), then you should be using the ActionsSdkApp object, something like
const { ActionsSdkApp } = require('actions-on-google');
const app = new ActionsSdkApp({request: request, response: response});
(Again, you'll see variants on this, but they're all fundamentally the same.)
Third - Make sure you're using the right function that matches the object you're using. The isRequestFromAssistant() function is only if you are using the Actions SDK.
If you are using Dialogflow, the corresponding function is isRequestFromDialogflow(). The parameters are different, however, since it requires you to set confirmation information as part of your Dialogflow configuration.
Finally - If you're getting a 500 error, then check your logs (or the output from stderr) for the node.js server that is running. Typically there will be an error message there that points you in the right direction. If not - posting that error message as part of your StackOverflow question is always helpful.
Set the secure (randomly generated) auth header and key values in the Dialogflow Fulfillment page, then in Node.js:
if (app.isRequestFromDialogflow("replace_with_key", "replace_with_value")) {
  console.log("Request came from dialogflow!");
  // rest of bot
} else {
  console.log("Request did not come from dialogflow!");
  response.status(400).send();
}
Also see: https://developers.google.com/actions/reference/nodejs/DialogflowApp#isRequestFromDialogflow
