How to enable logging/tracing with Axum?

I'm learning Axum and would like to add logging to a service I have put together, but unfortunately I cannot get it to work.
What have I done so far?
I added tower-http = { version = "0.3.5", features = ["trace"] } to Cargo.toml, and in the definition of the service I have this:
use tower_http::trace::TraceLayer;
let app = Router::new()
    .route("/:name/path", axum::routing::get(handler))
    .layer(TraceLayer::new_for_http());
But when I start the application and make requests to the endpoint, nothing gets logged.
Is there any configuration step I could have missed?
** Edit 1 **
As suggested in the comment, I added the tracing-subscriber crate:
tracing-subscriber = { version = "0.3" }
and updated the code as follows:
use tower_http::trace::TraceLayer;
tracing_subscriber::fmt().init();
let app = Router::new()
    .route("/:name/path", axum::routing::get(handler))
    .layer(TraceLayer::new_for_http());
And yet, still no log output.
** Edit 2 **
OK, so what finally worked was
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
and then initialize as follows:
tracing_subscriber::registry()
    .with(tracing_subscriber::fmt::layer())
    .init();
Even though this gets me the log output, I cannot explain why. So perhaps someone who understands how these crates work can give an explanation, which I can accept as the answer to the question.

You can get up and running quickly with the tracing-subscriber crate:
tracing_subscriber::fmt()
    .with_max_level(tracing::Level::DEBUG)
    .init();
The difference between the attempts above is simply a matter of defaults. TraceLayer logs at the DEBUG level by default. A subscriber built with fmt() is configured with a default INFO level filter, so TraceLayer's DEBUG events are filtered out, while registry().with(..).init() does not configure a level filter at all, so everything gets through.
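If you want to control verbosity at runtime instead of hard-coding a level, a common pattern is to add an EnvFilter driven by the RUST_LOG environment variable. A minimal sketch, assuming tracing-subscriber is built with its env-filter feature:

use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter};

tracing_subscriber::registry()
    // e.g. RUST_LOG=tower_http=debug,my_app=debug
    .with(EnvFilter::try_from_default_env().unwrap_or_else(|_| "debug".into()))
    .with(tracing_subscriber::fmt::layer())
    .init();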
You can also change the behavior of TraceLayer by using the customizable on_* methods.
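For example, here is a sketch using tower-http's DefaultMakeSpan and DefaultOnResponse to emit the request span and the response line at INFO instead of DEBUG (not tested against your exact versions):

use tower_http::trace::{DefaultMakeSpan, DefaultOnResponse, TraceLayer};
use tracing::Level;

let app = Router::new()
    .route("/:name/path", axum::routing::get(handler))
    .layer(
        TraceLayer::new_for_http()
            .make_span_with(DefaultMakeSpan::new().level(Level::INFO))
            .on_response(DefaultOnResponse::new().level(Level::INFO)),
    );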
See also:
How to use the tracing library? for more introductory tracing configurations
How to log and filter requests with Axum/Tokio? to help reduce the noise

Can't load Features.Diagnostics

I'm creating a web client for joining Teams meetings with the ACS Calling SDK.
I'm having trouble loading the diagnostics API. Microsoft provides this page:
https://learn.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/call-diagnostics
You are supposed to get the diagnostics this way:
const callDiagnostics = call.api(Features.Diagnostics);
This does not work.
I am loading the Features like this:
import { Features } from '@azure/communication-calling'
A statement console.log(Features) shows only these four features:
DominantSpeakers: (...)
Recording: (...)
Transcription: (...)
Transfer: (...)
Where are the Diagnostics??
User Facing Diagnostics
For anyone, like me, looking now...
At the time of writing, using the latest version of the @azure/communication-calling SDK, the documented solution still doesn't work:
const callDiagnostics = call.api(Features.Diagnostics);
call.api is undefined.
TL;DR
However, once the call is instantiated, this allows you to subscribe to changes:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
userFacingDiagnostics.media.on("diagnosticChanged", (diagnosticInfo) => {
    console.log(diagnosticInfo);
});

userFacingDiagnostics.network.on("diagnosticChanged", (diagnosticInfo) => {
    console.log(diagnosticInfo);
});
This isn't documented in the latest version, but is under this alpha version.
Whether this will continue to work is anyone's guess ¯\_(ツ)_/¯
Accessing Pre-Call APIs
Confusingly, this doesn't currently work using the specified version, despite the docs saying it will...
Features.PreCallDiagnostics is undefined.
This is actually what I was looking for, but I can get what I want by setting up a test call and asking for the latest values, like this:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
console.log(userFacingDiagnostics.media.getLatest())
console.log(userFacingDiagnostics.network.getLatest())
Hope this helps :)
The User Facing Diagnostics API is currently only available in the Public Preview and npm beta packages. I confirmed this with a quick test comparing the 1.1.0 and beta packages.
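If you want to try the beta, you have to install it explicitly. Something like the line below should do it, though the exact dist-tag is an assumption on my part, so check the package's current tags on npm first:

npm install @azure/communication-calling@next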
Check the following link:
https://github.com/Azure-Samples/communication-services-web-calling-tutorial/
Features are imported from the @azure/communication-calling package, for example:
const {
    Features
} = require('@azure/communication-calling');

How to forward messages to Sentry with a clean scope (no runtime information)

I'm forwarding alert messages from an AWS Lambda function to Sentry using the sentry_sdk in Python.
The problem is that even if I use scope.clear() before capture_message(), the events I receive in Sentry are enriched with information about the runtime environment where the message is captured (the AWS Lambda Python environment), which in this scenario is completely unrelated to the actual alert I'm forwarding.
My Code:
sentry_sdk.init(dsn, environment="name-of-stage")

with sentry_sdk.push_scope() as scope:
    # Unfortunately this does not get rid of lambda-specific context information.
    scope.clear()
    # Here I set relevant information, which works just fine.
    scope.set_tag("priority", "high")
    result = sentry_sdk.capture_message("mymessage")
The behaviour does not change if I pass scope as an argument to capture_message().
The tag I set manually is being transmitted just fine. But I also receive information about the Python runtime, so either scope.clear() does not behave like I expect it to OR capture_message() gathers additional information itself.
Can someone explain how to only capture the information I'm actively assigning to the scope with set_tag and similar functions, and suppress everything else?
Thank you very much
While I didn't find an explanation for the behaviour, I was able to solve my problem (even though it's a little bit hacky).
The solution was to use the sentry before_send hook in the init step like so:
sentry_sdk.init(dsn, environment="test", before_send=cleanup_event)

with sentry_sdk.push_scope() as scope:
    sentry_sdk.capture_message(message, scope=scope)

# When using Sentry from Lambda, don't forget to flush, otherwise messages can get lost.
sentry_sdk.flush()
Then it gets a little bit ugly in the cleanup_event function. I basically iterate over the keys of the event and remove the ones I do not want to show up. Since some keys hold objects and some (like "tags") hold a list of [key, value] entries, this was quite a hassle.
from contextlib import suppress

KEYS_TO_REMOVE = {
    "platform": [],
    "modules": [],
    "extra": ["sys.argv"],
    "contexts": ["runtime"],
}
TAGS_TO_REMOVE = ["runtime", "runtime.name"]

def cleanup_event(event, hint):
    for k, v in KEYS_TO_REMOVE.items():
        with suppress(KeyError):
            if v:
                for i in v:
                    del event[k][i]
            else:
                del event[k]
    # Rebuild the list instead of calling remove() while iterating,
    # which would skip entries.
    event["tags"] = [t for t in event["tags"] if t[0] not in TAGS_TO_REMOVE]
    return event
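A quick local check with a hypothetical, stripped-down event (real Sentry events carry many more fields):

event = {
    "platform": "python",
    "extra": {"sys.argv": ["bootstrap"]},
    "tags": [["runtime", "CPython 3.9"], ["priority", "high"]],
}
print(cleanup_event(event, hint=None))
# -> {'extra': {}, 'tags': [['priority', 'high']]}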

How to use pystemd to control systemd timedated ntp service?

I'm working on a Python app that needs to get the NTPSynchronized parameter from systemd-timedated. I'd also like to be able to start and stop the NTP service by using the SetNTP method.
To communicate with timedated over d-bus I have been using this as reference: https://www.freedesktop.org/wiki/Software/systemd/timedated/
I previously got this working with dbus-python, but have since learned that this library has been deprecated. I tried the dbus_next package, but it does not support Python 3.5, which I need.
I came across the pystemd package, but I am unsure whether it can be used to do what I want. The only documentation I have been able to find is this example (https://github.com/facebookincubator/pystemd), and I cannot figure out how to use it with systemd-timedated.
Here is the code I have that works with dbus-python:
import dbus

BUS_NAME = 'org.freedesktop.timedate1'
IFACE = 'org.freedesktop.timedate1'

bus = dbus.SystemBus()
timedate_obj = bus.get_object(BUS_NAME, '/org/freedesktop/timedate1')

# Get synchronization value
is_sync = timedate_obj.Get(BUS_NAME, 'NTPSynchronized', dbus_interface=dbus.PROPERTIES_IFACE)

# Turn off NTP
timedate_obj.SetNTP(False, False, dbus_interface=IFACE)
Here's what I have so far with pystemd, but I don't think I'm accessing it in the right way:
from pystemd.systemd1 import Unit
unit = Unit(b'systemd-timesyncd.service')
unit.load()
# Try to access properties
prop = unit.Properties
prop.NTPSynchronized
Running that I get:
AttributeError: 'SDInterface' object has no attribute 'NTPSynchronized'
I have a feeling that either the service I entered is wrong, or the way I'm accessing properties is wrong, or even both are wrong.
Any help or advice is appreciated.
Looking at the source code, it appears that the pystemd.systemd1 Unit object has a default destination of "org.freedesktop.systemd1" plus the service name (https://github.com/facebookincubator/pystemd/blob/master/pystemd/systemd1/unit.py).
This is not what I want, because I am trying to access "org.freedesktop.timedate1".
So instead I instantiated its base class SDObject from pystemd.base (https://github.com/facebookincubator/pystemd/blob/master/pystemd/base.py).
The following code allowed me to get the sync status of NTP:
from pystemd.base import SDObject

obj = SDObject(
    destination=b'org.freedesktop.timedate1',
    path=b'/org/freedesktop/timedate1',
    bus=None,
    _autoload=False
)
obj.load()

is_sync = obj.Properties.Get('org.freedesktop.timedate1', 'NTPSynchronized')
print(is_sync)
Not sure if this is what the library author intended, but hey it works!
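The question also asked about SetNTP. Assuming pystemd exposes each D-Bus interface as an attribute named after the interface's last dotted component (the same way org.freedesktop.DBus.Properties shows up as obj.Properties above), a call like the following might work; I haven't verified it:

# Untested assumption: the org.freedesktop.timedate1 interface should be
# exposed as obj.timedate1, analogous to obj.Properties above.
obj.timedate1.SetNTP(False, False)  # arguments: (use_ntp, interactive)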

Proper error logging for node applications

I am developing an Express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
    -- routes.js
    -- index.js
    -- models
        -- model_1.js
        -- model_2.js
APIs in index.js are the only exposed APIs and they work as a gateway for all requests coming for this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors properly, with enough context that some person/script can take the necessary action.
Figure out the reason for the failure.
There will be dedicated teams owning each service, so I should be able to differentiate the error logs of each service so that they can be aggregated and forwarded to the concerned person.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem I am facing is that I can't maintain correlation between logs. For example, if a request comes in and travels through five functions, and each function logs something, I can't relate those logs to each other.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
Another option is to use something like verror and do the logging only at the entry point of the service/module, so that the whole context is contained in the log. This approach looks OK for logging errors, but it can't help with info and debug logs, which help me a lot during development and testing.
To differentiate between the error logs, I am going to create:
A dedicated logger for each service with log level error.
An application-wide generic logger for info and debug purposes.
Is this the correct approach?
What would be the best way to achieve all these requirements in the simplest way?
I'd recommend you use a logger, and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder, which is where I'd place libraries):
logger.js
const Log = require('12factor-log');

module.exports = (params) => {
    return new Log(params);
};
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});

// ...
log.info("Something happened here");
// ...

try {
    // ...
} catch (error) {
    const message = `Error doing x, y, z with value ${val}`;
    log.error(message);
    throw new Error(message);
}
Then handle the error gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As for correlating logs: as you can see in the output above, each line includes the hostname of the machine it's running on, as well as the module name and severity level. You can import this JSON into Logstash and load it into Elasticsearch, which stores the JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
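If you do want per-request correlation without threading a child logger through every function, one lightweight pattern is to stamp each request with an id in an Express middleware and include that id in every log statement. A sketch (the requestId field name is my own choice, and crypto.randomUUID() needs Node 14.17+):

const crypto = require('crypto');

// Attach a correlation id to every incoming request; reuse an upstream
// X-Request-Id header if a proxy already set one.
app.use((req, res, next) => {
    req.requestId = req.headers['x-request-id'] || crypto.randomUUID();
    res.setHeader('X-Request-Id', req.requestId);
    next();
});

// Later, anywhere you have the request in scope:
// log.info(`[${req.requestId}] Something happened here`);

You can then filter on that id in Elasticsearch to see every line a single request produced.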
Logging is complex and many people have worked on it, so I would suggest not building it yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let logger1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let logger2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
...
logger1.info('Here is a log message, that is on line 23.');
logger1.verbose('Does not show');
logger2.verbose('Shows because log2 is verbose logging');
logger2.setAspect('IO', true);
logger2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
logger1.throwError('Logs this as both a line of logging, and throws the error with the same message');
logger1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library (https://www.getgrackle.com/analytics_and_tracking).
It logs errors and traffic to your database.

Masking part of URL in jetty request log

I would like to log all HTTP requests in Jetty, which is well documented, but I can't find any resources on how to mask some of the arguments.
E.g.:
json/users/detail?id=dsgrw543
should be logged as:
json/users/detail?id=********
or similar.
The main motivation is that I could hand those logs over for analytics without worrying that the privacy of our users could be compromised. Ideally this would happen on-line, without batch processing or other scripts.
Please note that I use other authentication mechanisms (cookies, all write methods are POST, etc.) and I can't change the existing URLs.
So far my only idea is to implement it as a class on top of NCSARequestLog:
http://download.eclipse.org/jetty/stable-7/apidocs/org/eclipse/jetty/server/NCSARequestLog.html
What are the better ways of doing that?
You could write a simple util like this and use it wherever you are logging:
Enumeration<String> s = request.getParameterNames();
String q = "";
while (s.hasMoreElements()) {
    q += s.nextElement() + "=***&";
}
String url = request.getServletPath() + (q.length() > 0 ? "?" + q : "");
or maybe request.getRequestURI() in place of getServletPath().
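If you instead go the NCSARequestLog route from the question, a small helper along these lines could mask the values in an already-formatted URL before it is written to the log. A sketch (maskQuery is a hypothetical name, untested):

// Replace every query-string value with asterisks, keeping parameter names.
static String maskQuery(String uriWithQuery) {
    int q = uriWithQuery.indexOf('?');
    if (q < 0) {
        return uriWithQuery; // no query string, nothing to mask
    }
    String masked = uriWithQuery.substring(q + 1).replaceAll("=[^&]*", "=********");
    return uriWithQuery.substring(0, q + 1) + masked;
}

// maskQuery("json/users/detail?id=dsgrw543") -> "json/users/detail?id=********"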
