I have a Node.js (Express) app running on Heroku.
I need to keep a short history of logs (up to a day or so).
I need to be able to dig into the stack trace of any errors being thrown.
I have been able to solve the first two with Loggly, but I can't find any solution that will show me the stack trace of exceptions that have been thrown.
I'd recommend using the node-airbrake module in your app, connecting either to the hosted web service or to your own self-hosted server instance (using the errbit gem) as the notification backend.
How to set things up depends on which way you want to go.
To keep it simple, I'd recommend starting with the 30-day free trial of the hosted service and, if you're happy with it, setting up your own Errbit instance.
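A minimal sketch of the wiring, assuming node-airbrake is installed and 'your-api-key' is replaced with a real project key (double-check the middleware name against the module's README):

// error reporting with node-airbrake -- a sketch, not a drop-in config
var airbrake = require('airbrake').createClient('your-api-key');
airbrake.handleExceptions(); // report uncaught exceptions, with stack traces

var express = require('express');
var app = express();

// ... your routes ...

// report any error passed to next(err) from route handlers
app.use(airbrake.expressHandler());

app.listen(process.env.PORT || 3000);

Every reported error shows up in the Airbrake/Errbit UI with its full stack trace, which covers the third requirement.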
I have a Fastify web server that exposes a bunch of RESTful endpoints. I created my project using fastify-cli and I also use the CLI to run the server locally. I notice that whenever I rapidly call my APIs, I get the following error in my terminal:
You'll notice that I call the same API repeatedly. I don't get any errors in the first two or three calls, but when I call the same API a third or fourth time, the server process crashes and automatically restarts a few seconds later.
Since the logs are sparse, I can't figure out what might be causing this issue, and I wonder if it is an issue with the CLI itself. I'd appreciate any help from other Fastify users.
This is a known bug in the CLI's watch feature (there is a cli issue tracking it); it is not fixed yet.
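Until it's fixed, a possible workaround is to run the server without the watch flag (a sketch, assuming a standard fastify-cli project with app.js as the entry point):

fastify start app.js

instead of fastify start -w app.js, restarting manually while the bug is open.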
In Python, there is an option to inject trace details (trace_id, span_id) into application logs using an environment variable (export OTEL_PYTHON_LOG_CORRELATION=true). Is there something similar in Node.js or Go?
I couldn't find any auto-instrumentation for tracing in Go (we can manually instrument by creating spans and wrappers for handlers), so I don't expect an answer for Go, but Node might have a solution.
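For what it's worth, in Node the OpenTelemetry log correlation is handled by logger-specific instrumentations rather than a single environment variable. A minimal sketch using the pino instrumentation from the OpenTelemetry JS contrib packages (verify package names and versions against the current docs):

// tracing.js -- a sketch, assuming @opentelemetry/sdk-node,
// @opentelemetry/instrumentation-pino and pino are installed
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { PinoInstrumentation } = require('@opentelemetry/instrumentation-pino');

const sdk = new NodeSDK({
  instrumentations: [new PinoInstrumentation()],
});
sdk.start();

// pino loggers created after this point get trace_id, span_id and
// trace_flags injected into records written while a span is active
const pino = require('pino');
const logger = pino();
logger.info('this record carries trace context when a span is active');

There are equivalent instrumentations for winston and bunyan, and the @opentelemetry/auto-instrumentations-node bundle pulls them in when preloaded with node --require.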
I have a Node.js app that is intended to run all the time. It is a core process that does some specific work for each of our clients.
Today, that app is fed the client data through a config (JSON) file on startup; every time we need to add/remove a client or change a client's data, we edit the JSON and kill/restart the app.
I came up with the idea of building another Node.js app that would work as a CLI to send instructions to the main app. I don't know where to start; is there an existing solution for this?
I thought of using sockets to connect the app with the CLI, but that feels like overkill.
What's the proper way of controlling a Node.js app from another Node.js app?
I know this question may be vague and leaves room for discussion and opinions, but I don't know where to start looking; all my searches give me Express.js or forever/pm2 articles.
You could try storing your data in a Redis DB and subscribing to changes from the Node applications. That way you update Redis, and the configuration changes are picked up automatically from within the Node application.
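A minimal sketch of that idea using the node-redis client's pub/sub API (channel name and payload shape are illustrative; assumes the redis npm package, v4+):

// app.js -- the long-running app subscribes to configuration changes
const { createClient } = require('redis');

async function main() {
  const subscriber = createClient();
  await subscriber.connect();
  await subscriber.subscribe('client-config', (message) => {
    const change = JSON.parse(message);
    // apply the change to the in-memory client list, no restart needed
    console.log('config change received:', change);
  });
}
main();

// cli.js -- the CLI publishes a change instead of editing the JSON file
const { createClient } = require('redis');

async function send(change) {
  const publisher = createClient();
  await publisher.connect();
  await publisher.publish('client-config', JSON.stringify(change));
  await publisher.quit();
}
send({ op: 'add', client: { id: 42, name: 'acme' } });

This also keeps the client data itself in Redis rather than in a file, so the app can rebuild its state from there after a restart.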
Another method is to emit events from your CLI and have your app listen for them and update its configuration as required; since Node's EventEmitter doesn't cross process boundaries, this still needs some IPC channel between the two processes.
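A minimal sketch of that using a Unix domain socket as the channel (socket path and message shape are illustrative):

// app.js -- the main app listens for instructions on a local socket
const net = require('net');

const server = net.createServer((conn) => {
  conn.on('data', (buf) => {
    const instruction = JSON.parse(buf.toString());
    console.log('instruction received:', instruction);
    // add/remove/update a client here instead of restarting the app
  });
});
server.listen('/tmp/core-app.sock');

// cli.js -- the CLI connects, sends one instruction, then exits
const net = require('net');

const conn = net.createConnection('/tmp/core-app.sock', () => {
  conn.end(JSON.stringify({ op: 'remove', clientId: 42 }));
});

A Unix socket avoids opening a network port, which keeps this lighter-weight than the socket setup the question worried was overkill.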
I've been asked to configure an ELK stack in order to manage logs from several applications of a certain client. I have the given stack hosted and working on a Red Hat 7 server (I followed this cookbook), and a test instance in a virtual machine with Ubuntu 16.04 (following this other cookbook), but I've hit a roadblock and cannot seem to get through it. Kibana is rather new to me, and maybe I don't fully understand the way it works. In addition, the client's most important application is managed with JHipster, another tool I am not familiar with.
Up until now, everything I've found about JHipster and Logstash tells me to install the full ELK stack using Docker (which I haven't done, and would rather avoid in order to keep the configuration I've already made). Kibana deployed through that method comes with a preconfigured dashboard tuned for displaying the information the application sends natively once logging is activated in application.yml with logstash: enabled: true.
So... my questions would be: Can I get that preconfigured JHipster dashboard imported into my preexisting Kibana deployment? Where is the data logged by the application stored? Can I expect a humanly readable format? Is there any other way of testing that the configuration works, given that I don't have any traffic going through the test instance into the VM?
Since that JHipster app is not the only one I care about, I also want dashboards and inputs displayed for other applications, most probably using Filebeat.
Any reference to useful information is appreciated.
Yes, you can. Take a look at this repository: https://github.com/jhipster/jhipster-console/tree/master/jhipster-console
The Kibana exports (in JSON format) are stored in that repository, along with a load.sh script.
The script adds the configuration by posting it via the API. As you can infer, dashboards already present are not affected by this, so you can keep your existing configuration.
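If you'd rather do the import by hand, here is a sketch of the same idea from Node, assuming a Kibana 4/5-era setup where saved objects live in Elasticsearch's .kibana index (host, document type, and id are illustrative; newer Kibana versions expose a dedicated saved-objects import API instead):

// import-dashboard.js -- a sketch, Node 18+ for the global fetch
const fs = require('fs');

const dashboard = JSON.parse(fs.readFileSync('dashboard.json', 'utf8'));

fetch('http://localhost:9200/.kibana/dashboard/jhipster-dashboard', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(dashboard),
})
  .then((res) => res.json())
  .then((body) => console.log('import result:', body))
  .catch((err) => console.error('import failed:', err));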
I am running a Node.js + AngularJS application on a cloud server using the MEAN stack. The application terminates every hour or sooner.
I have a few thoughts and would like someone to tell me which might be the cause.
I SSH in as root and start the service using this command:
NODE_ENV=production PORT=80 grunt serve:dist
Do I need forever to run this properly?
Should I use a server user (similar to Apache's) to run the application?
If yes, how do I do this?
We do not have a deployment engineer on our team, and it is annoying not being able to keep the app running on the server after developing it. Please help diagnose the problem.
If you don't want to use a deployment service (MS Azure, AWS, Heroku, etc.), which would probably be a lot easier, then yes, you would have to use something like forever to restart your server every time it crashes. It's really odd that your app terminates after an hour, though; it would be helpful if you could describe why that's happening.
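For example, with forever installed globally (npm install -g forever), and assuming your grunt serve:dist task ultimately just runs a Node entry script (the path below is illustrative):

NODE_ENV=production PORT=80 forever start server/app.js

forever then restarts the process whenever it exits, and forever list / forever logs let you inspect what is running and why it died.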