As a developer using Zipkin in a Docker container on my local dev box, I would like to clear out the traces from time to time to prevent build-up and to make it easier to find what I am looking for.
Is there an easy way to clear all traces?
The default config stores traces in memory, so restarting the Zipkin instance should clear all the traces. Otherwise, if it's configured to use a database, you can connect to the database and manually truncate the tables containing the traces.
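To illustrate the restart approach: with the default in-memory store, restarting the container (e.g. `docker restart zipkin`, assuming the container is named "zipkin") wipes everything, and Zipkin's `GET /api/v2/traces` should then return an empty JSON array. A minimal sketch of checking that condition:

```python
import json

# After a restart, GET /api/v2/traces on the Zipkin instance should return
# an empty JSON array. This helper just checks a response body for that.
def traces_cleared(body: str) -> bool:
    return json.loads(body) == []

print(traces_cleared("[]"))                      # empty store -> True
print(traces_cleared('[[{"traceId": "a1"}]]'))   # traces remain -> False
```

Against a live instance you would feed it the body of `http://localhost:9411/api/v2/traces` (port 9411 is Zipkin's default).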
I'm trying to get a few metrics from a Cassandra node that has a Cassandra Exporter running on it (https://github.com/criteo/cassandra_exporter/). I don't want to go into the details, but using Prometheus is not an option at this time.
I'd like to access the data with HTTP requests or something similar. With a simple HTTP GET I can access all the cached information, but I would like to do more sophisticated operations on it, such as filtering for certain messages. Is there a way to do this? I could not find any information on it. Or do I have to get the entire log and then do the filtering operations on my local machine?
I'm using the jmx-exporter tag because cassandra-exporter used to be a fork of it and I couldn't find a more fitting tag.
I would suggest using Telegraf + Jolokia.
It is easy to set up and it will expose the metrics via HTTP.
I wrote a post about it (in my case I saved the results into InfluxDB and used them in Grafana), which might be useful:
cassandra-performance-monitoring-by-using-jolokia-agent-telegraf-influxdb-and-grafana
Using Prometheus exporters without the Prometheus server itself is a perfectly valid approach if you don't care about historical data and just want an immediate snapshot of the metrics (the state of the system), or to record a short period manually.
One tool you might look at is Metricat (https://metricat.dev/): it lets you filter by metric and record how metrics change over a period of interest.
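Since a Prometheus exporter's /metrics endpoint is plain text with one sample per line, client-side filtering is also straightforward without any extra tooling. A minimal sketch (the metric names in the sample are hypothetical; cassandra_exporter's real names will differ):

```python
from urllib.request import urlopen  # only needed for the live fetch below

# Filter the exposition-format text for lines starting with a given metric
# name prefix. Comment lines start with "#" and won't match a metric prefix.
def filter_metrics(text: str, prefix: str) -> list[str]:
    return [line for line in text.splitlines() if line.startswith(prefix)]

sample = (
    "# HELP cassandra_stats Cassandra metrics\n"
    'cassandra_stats{name="read_latency"} 42.0\n'
    "jvm_memory_used_bytes 123456.0\n"
)
print(filter_metrics(sample, "cassandra_stats"))

# Against a live exporter (host and port are assumptions):
# text = urlopen("http://localhost:8080/metrics").read().decode()
# print(filter_metrics(text, "cassandra_stats"))
```

This is the "get the whole payload and filter locally" option from the question; the exporter itself doesn't offer server-side filtering as far as I know.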
I've been working with Arango for a few months now within a local, single-node development environment that regularly gets restarted for maintenance reasons. About 5 or 6 times now my development database has become corrupted after a controlled restart of my system. When it occurs, the corruption is subtle in that the Arango daemon seems to start ok and the database structurally appears as expected through the web interface (collections, documents are there). The problems have included the Foxx microservice system failing to upload my validated service code (generic 500 service error) as well as queries using filters not returning expected results (damaged indexes?). When this happens, the only way I've been able to recover is by deleting the database and rebuilding it.
I'm looking for advice on how to debug this issue - such as what to look for in log files, server configuration options that may apply, etc. I've read most of the development documentation, but only skimmed over the deployment docs, so perhaps there's an obvious setting I'm missing somewhere to adjust reliability/resilience? (this is a single-node local instance).
Thanks for any help/advice!
Please note that issues like this should rather be discussed on GitHub.
I'm trying to debug why the remote caching doesn't work for my use case.
I wanted to inspect the cache entries related to bazel, but realized that I don't really know and can't find what map names are used.
I found one, "hazelcast-build-cache", which seems to keep some of the build and test actions. I've set up a listener to see what gets put there, but I can't see any of the successful actions.
For example, I run a test and want to verify that the success of this test gets cached remotely. I have no idea how to do this. I would like to know either how to find this out, or what map names I can inspect in Hazelcast to do so.
Hazelcast Management Center can show you all the maps/caches that you create or that get created in the cluster, how data is distributed, etc. You can also make use of the various listener types within Hazelcast: EntryListener, MapListener, etc.
Take a look at the documentation:
http://docs.hazelcast.org/docs/3.9/manual/html-single/index.html#management-center
http://docs.hazelcast.org/docs/3.9/manual/html-single/index.html#distributed-events
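You can also enumerate the map names programmatically. A hedged sketch using the Hazelcast Python client (hazelcast-python-client); the connection lines are commented out so the snippet stays self-contained, and the sample names are hypothetical apart from "hazelcast-build-cache", which is mentioned in the question:

```python
# With a running cluster you would enumerate the distributed objects:
# import hazelcast
# client = hazelcast.HazelcastClient(cluster_members=["localhost:5701"])
# names = [obj.name for obj in client.get_distributed_objects()]
names = ["hazelcast-build-cache", "__vertx.subs", "some-other-map"]

# Narrow the list down to maps that look build/cache related.
bazel_maps = [n for n in names if "build" in n or "cache" in n]
print(bazel_maps)
```

Once you know the names, you can attach an EntryListener to each candidate map to watch what Bazel actually writes.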
I've been asked to configure an ELK stack in order to manage logs from several applications for a certain client. I have the stack hosted and working on a Red Hat 7 server (I followed this cookbook), and a test instance in a virtual machine with Ubuntu 16.04 (I followed this other cookbook), but I've hit a roadblock and cannot seem to get through it. Kibana is rather new to me and maybe I don't fully understand the way it works. In addition, the client's most important application is a JHipster-managed application, another tool I am not familiar with.
Up until now, everything I've found about JHipster and Logstash tells me to install the full ELK stack using Docker (which I haven't done, and would rather avoid in order to keep the configuration I've already made), so that the Kibana deployed through that method comes with a preconfigured dashboard tuned for displaying the information that the application sends with its native configuration, activated in application.yml with logstash: enabled: true.
So... my questions would be: Can I get that preconfigured JHipster dashboard imported into my preexisting Kibana deployment? Where is the data logged by the application stored? Can I expect a human-readable format? Is there any other way to test that the configuration is working, since I don't have any traffic going through the test instance into the VM?
Since that JHipster app is not the only one I care about, I want dashboards and inputs from other applications to be displayed as well, most probably using Filebeat.
Any reference to useful information is appreciated.
Yes, you can. Take a look at this repository: https://github.com/jhipster/jhipster-console/tree/master/jhipster-console
The Kibana exports (in JSON format) are stored in the repository, along with a load.sh script.
The script adds the configuration by posting it via the API. As a result, your existing dashboards are not affected, so you can keep your current configuration.
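For reference, what a script like load.sh effectively does is push each exported JSON object into Elasticsearch's .kibana index over HTTP. A hedged sketch of building such a request (the endpoint layout assumes a Kibana 4/5-era setup; newer Kibana versions have a dedicated saved-objects import API instead, and the object id and body below are hypothetical):

```python
import json
from urllib.request import Request

# Build a PUT request that would store one exported saved object (dashboard,
# visualization, search, ...) in the .kibana index.
def kibana_import_request(es_url, obj_type, obj_id, body):
    url = f"{es_url}/.kibana/{obj_type}/{obj_id}"
    return Request(url, data=json.dumps(body).encode("utf-8"),
                   headers={"Content-Type": "application/json"},
                   method="PUT")

req = kibana_import_request("http://localhost:9200", "dashboard",
                            "jhipster-dashboard", {"title": "JHipster"})
print(req.get_method(), req.full_url)
# To actually send it you would pass req to urllib.request.urlopen(req).
```

Because the import only adds documents, it leaves whatever dashboards you already have in place.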
What is the recommended workflow to debug Foxx applications?
I am currently working on a pretty big application, and it seems to me I am doing something wrong, because the way I am proceeding does not seem maintainable at all:
1. Make your changes in the Foxx app (e.g. new endpoints).
2. Upload your Foxx app to ArangoDB.
3. Test your changes (e.g. trigger API calls).
4. Check the logs to see if something went wrong.
5. Go to 1.
I experienced great time savings by shifting more of the development workflow to the terminal client arangosh. Especially when debugging more complex endpoints, you can isolate queries and functions and debug each individually in the terminal. When done debugging, you merge your code back into the Foxx app and mount it. Require modules as you would in Foxx, and just pass variables as arguments to your functions or queries.
You can use arangosh either directly from the terminal or via the embedded terminal in the ArangoDB frontend.
You may also save some time switching to dev mode, which allows you to have changes in your code directly reflected in the mounted app without fetching, mounting and unmounting each time.
That additional flexibility costs some performance, so make sure to switch back to production mode once your Foxx app is ready for deployment.
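The isolate-the-query idea can also be sketched outside arangosh, e.g. with the python-arango driver (the collection name, filter, and credentials below are hypothetical): keep the AQL as a plain string with bind parameters, iterate on it ad hoc, and only then wire it into the Foxx endpoint.

```python
# The query under development, kept separate from the Foxx app so it can be
# run and debugged on its own. @@collection / @status are AQL bind parameters.
query = """
FOR doc IN @@collection
    FILTER doc.status == @status
    RETURN doc._key
"""
bind_vars = {"@collection": "orders", "status": "open"}

# Live run against a local server (commented out to keep this self-contained):
# from arango import ArangoClient
# db = ArangoClient().db("_system", username="root", password="")
# print(list(db.aql.execute(query, bind_vars=bind_vars)))

print(sorted(bind_vars))  # the bind parameters the query expects
```

Once the query behaves as intended, pasting it into the endpoint is a mechanical step, which keeps the upload/test/check-logs loop short.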
When developing a Foxx App, I would suggest using the development mode. This also helps a lot with debugging, as you have faster feedback. This works as follows:
Start arangod with the dev-app-path option like this: arangod --javascript.dev-app-path /PATH/TO/FOXX_APPS /PATH/TO/DB, where the path to foxx apps is the folder that contains a database folder that contains your foxx apps sorted by database. More information can be found here.
Make your changes; there is no need to deploy the app or anything. The app now automatically reloads on every request. Change, try out, change, try out...
There are currently no debugging capabilities. We are planning to add more support for unit testing of Foxx apps in the near future, so you can have a more TDD-like workflow.