Add Sentry Log4j2 appender at runtime - log4j

I've been browsing previous threads about adding Log4j2 appenders at runtime but none of them really seem to fit my scenario.
We have a long-running Flink job packaged into a fat JAR that we essentially submit to a running Flink cluster. We want to forward error logs to Sentry. Conveniently enough, Sentry provides a Log4j2 appender that I want to use, but all attempts to get it working with Log4j2 have failed -- I'm going a bit crazy over this (I've spent days on it).
Since Flink (which also uses Log4j2) provides a set of default logging configurations that take precedence over any configuration files we bundle in our JAR, I'm essentially left with attempting to configure the appender at runtime, to see whether that will register the appender and forward the LogEvents to it.
As a side note: I attempted to override the Flink-provided configuration file (to essentially add the appender directly into the Log4j2.properties file), but Flink fails to load the plugin due to a missing dependency - io.sentry.IHub - which doesn't make sense, since the examples and Sentry docs don't mention any dependencies other than the Log4j-related ones, which already exist on the classpath.
I've followed the example in the Log4j docs, Programmatically Modifying the Current Configuration after Initialization, but the logs are not getting through to Sentry.
SentryLog4j.scala
package com.REDACTED.thoros.config

import io.sentry.log4j2.SentryAppender
import org.apache.logging.log4j.Level
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.core.LoggerContext
import org.apache.logging.log4j.core.config.AppenderRef
import org.apache.logging.log4j.core.config.Configuration
import org.apache.logging.log4j.core.config.LoggerConfig

object SentryLog4j2 {
  val SENTRY_LOGGER_NAME = "Sentry"
  val SENTRY_BREADCRUMBS_LEVEL: Level = Level.ALL
  val SENTRY_MINIMUM_EVENT_LEVEL: Level = Level.ERROR
  val SENTRY_DSN =
    "REDACTED"

  def init(): Unit = {
    // scalafix:off
    val loggerContext: LoggerContext =
      LogManager.getContext(false).asInstanceOf[LoggerContext]
    val configuration: Configuration = loggerContext.getConfiguration

    val sentryAppender: SentryAppender = SentryAppender.createAppender(
      SENTRY_LOGGER_NAME,
      SENTRY_BREADCRUMBS_LEVEL,
      SENTRY_MINIMUM_EVENT_LEVEL,
      SENTRY_DSN,
      false,
      null
    )
    sentryAppender.start()
    configuration.addAppender(sentryAppender)

    // Creating a new dedicated logger for Sentry
    val ref: AppenderRef =
      AppenderRef.createAppenderRef("Sentry", null, null)
    val refs: Array[AppenderRef] = Array(ref)

    val loggerConfig: LoggerConfig = LoggerConfig.createLogger(
      false,
      Level.ERROR,
      "org.apache.logging.log4j",
      "true",
      refs,
      null,
      configuration,
      null
    )
    loggerConfig.addAppender(sentryAppender, null, null)
    configuration.addLogger("org.apache.logging.log4j", loggerConfig)

    println(configuration.getAppenders)
    loggerContext.updateLoggers()
    // scalafix:on
  }
}
Then I invoke SentryLog4j2.init() in the Main module:
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.Logger
import org.apache.logging.log4j.core.LoggerContext
import org.apache.logging.log4j.core.config.Configuration

object Main {
  val logger: Logger = LogManager.getLogger()

  sys.env.get("ENVIRONMENT") match {
    case Some("dev") | Some("staging") | Some("production") =>
      SentryLog4j2.init()
    case _ => SentryLog4j2.init() // <-- this was only added during debugging
  }

  def main(args: Array[String]): Unit = {
    logger.error("test") // this does not forward the logevent to the appender
  }
}
I think I somehow need to register the appender with the LoggerConfig that the root logger uses, so that all logger.error statements are propagated to the configured Sentry appender?
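For clarity, this is roughly what I have in mind -- an untested sketch that reuses the sentryAppender created in init() above and attaches it to the root LoggerConfig rather than to a new named logger:

import org.apache.logging.log4j.Level
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.core.LoggerContext
import org.apache.logging.log4j.core.config.LoggerConfig

// Untested sketch: attach the already-started sentryAppender to the root
// LoggerConfig so that every logger.error(...) call should reach it.
val loggerContext: LoggerContext =
  LogManager.getContext(false).asInstanceOf[LoggerContext]
val rootLoggerConfig: LoggerConfig =
  loggerContext.getConfiguration.getRootLogger
rootLoggerConfig.addAppender(sentryAppender, Level.ERROR, null)
loggerContext.updateLoggers()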
Greatly appreciate any guidance with this!

Although this isn't an answer to how you get Log4j2 and the SentryAppender to work together, for anyone else stumbling on this problem I'll briefly explain what I did to get the Sentry integration working.
What I eventually decided to do was drop the SentryAppender and use the raw Sentry client instead, adding a wrapper class that exposes the typical debug, info, warn and error methods. For the warn-and-above methods, I also send the log event to Sentry.
This is essentially the only way I got this to work within a Flink cluster.
See example below:
sealed trait LoggerLike {
  type LoggerFn = (String, Option[Object]) => Unit

  val debug: LoggerFn
  val info: LoggerFn
  val warn: LoggerFn
  val error: LoggerFn
}

trait LazyLogging {
  @transient
  protected lazy val logger: CustomLogger =
    CustomLogger.getLogger(getClass.getName, enableSentry = true)
}

final class CustomLogger(slf4JLogger: Logger) extends LoggerLike {...your implementation...}
Then for each class/object (in Scala, at least), you just extend the LazyLogging trait to get a logger instance.
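For illustration, here is a minimal sketch of what the elided implementation could look like -- not the exact code from my project. The enableSentry constructor parameter and the call to Sentry.captureMessage are assumptions, and it presumes Sentry.init has already been called with your DSN at startup:

import io.sentry.Sentry
import org.slf4j.{Logger, LoggerFactory}

// Hypothetical sketch only: logs through SLF4J and, for warn/error,
// additionally forwards the message to Sentry via the raw client.
final class CustomLogger(slf4JLogger: Logger, enableSentry: Boolean) extends LoggerLike {
  val debug: LoggerFn = (msg, _) => slf4JLogger.debug(msg)
  val info: LoggerFn = (msg, _) => slf4JLogger.info(msg)
  val warn: LoggerFn = (msg, _) => {
    slf4JLogger.warn(msg)
    if (enableSentry) Sentry.captureMessage(msg) // forward warn+ events to Sentry
  }
  val error: LoggerFn = (msg, _) => {
    slf4JLogger.error(msg)
    if (enableSentry) Sentry.captureMessage(msg)
  }
}

object CustomLogger {
  def getLogger(name: String, enableSentry: Boolean): CustomLogger =
    new CustomLogger(LoggerFactory.getLogger(name), enableSentry)
}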

Related

Groovy code to read rabbitMQ working on Windows, not working on Linux

Need: Read from rabbitMQ with AMQPS
Problem: ConsumeAMQP is not working, so I'm using a Groovy script that works on Windows but not on Linux. The error message is:
groovy.lang.MissingMethodException: No signature of method: com.rabbitmq.client.ConnectionFactory.setUri() is applicable for argument types: (String) values: [amqps://user:xxxxxxxXXXxxxx@c-s565c7-ag77-etc-etc-etc.mq.us-east-1.amazonaws.com:5671/virtualhost]
Possible solutions: getAt(java.lang.String), every(), every(groovy.lang.Closure)
Troubleshooting:
Developed code in Python to test from my machine using the pika lib, and it works with the amqps URL. It reads from RabbitMQ with no connection issues.
Put the Python code on the NiFi server (1.15.3) machine, installed Python and the pika lib, and executed it on the command line; it works on the server and reads from RabbitMQ.
Developed Groovy code to test from my Windows Apache NiFi (1.15.3) and it works; it reads from RabbitMQ on the client system.
Copied the code (copy-paste) to the NiFi server and uploaded the .jar lib as well; not working, with the error message above. Created a Groovy file and executed the code; not working.
Can anyone help me?
NOTE: I want to use Groovy code to output the results to the flowfile.
@Grab('com.rabbitmq:amqp-client:5.14.2')
import com.rabbitmq.client.*
import org.apache.commons.io.IOUtils
import java.nio.charset.*

// -- Define connection
def ConnectionFactory factory = new ConnectionFactory();
factory.setUri('amqps://user:password@a-r5t60-etc-etc-etc.mq.us-east-1.amazonaws.com:5671/virtualhost');
factory.useSslProtocol();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

// -- Waiting for messages
boolean noAck = false;
int count = 0;
while (count < 10) {
    GetResponse response = channel.basicGet("db-user-q", noAck)
    if (response != null) {
        byte[] body = response.getBody()
        long deliveryTag = response.getEnvelope().getDeliveryTag()
        def msg = new String(body, "UTF-8")
        channel.basicAck(response.envelope.deliveryTag, false)

        def flowFile = session.create()
        flowFile = session.putAttribute(flowFile, 'myAttr', msg)
        session.transfer(flowFile, REL_SUCCESS);
    }
    count++;
}
channel.close();
connection.close();
The following code is suspect:
def ConnectionFactory factory = new ConnectionFactory();
You don't need both def and a type ConnectionFactory. Just change it to this:
ConnectionFactory factory = new ConnectionFactory()
You don't need the semicolon either. The keyword def is used for dynamic typing situations (or laziness), and specifying the type (i.e. ConnectionFactory) is for static typing situations. You can't have both; it's either dynamic or static typing. I suspect the Groovy VM is confused about what type the object is, hence why it can't figure out whether setUri exists or not.

Log4j in file Spark

I am trying to add logging to my project. For that, I'm using Log4j and putting the configuration in the code itself, without using a properties file, as shown below.
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.EnhancedPatternLayout;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Teste {

    static Logger log = Logger.getLogger(Teste.class.getName());

    public static void configError() {
        EnhancedPatternLayout layout = new EnhancedPatternLayout();
        String conversionPattern = "%d{ISO8601}{GMT+1} %-5p %m%n";
        layout.setConversionPattern(conversionPattern);

        String fileError = "C:/ProducerError.log";

        // creates console appender
        ConsoleAppender consoleAppender = new ConsoleAppender();
        consoleAppender.setLayout(layout);
        consoleAppender.activateOptions();

        // creates file appender
        FileAppender fileAppender = new FileAppender();
        fileAppender.setFile(fileError);
        fileAppender.setLayout(layout);
        fileAppender.activateOptions();

        // configures the root logger
        Logger rootLogger = Logger.getRootLogger();
        rootLogger.setLevel(Level.ERROR);
        rootLogger.addAppender(consoleAppender);
        rootLogger.addAppender(fileAppender);

        log.error("Error teste");
        rootLogger.removeAllAppenders();
    }
}
I wanted to do the same thing but in a Spark job. I tried the same approach but it doesn't output anything. How does Spark logging work? Can't I configure it in the code like I did before? I have a Dockerfile with spark-submit, but I didn't want to mess with that code.
Provide the config file path when submitting the Spark job: -Dlog4j.configuration=path/to/log4j.properties
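For example (a sketch; the class name, JAR name, and properties path are placeholders to adjust for your setup), the property can be passed to both the driver and the executors via spark-submit:

# Hypothetical invocation; adjust --class, the JAR, and the log4j.properties path
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/path/to/log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/path/to/log4j.properties" \
  --class Teste teste-app.jar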

Using Node-Geocoder with Typescript

I am trying to use Node-Geocoder in my typescript application, using the DefinitelyTyped type definitions found here. What I am wondering is how I am supposed to pass my configuration options to the Node-Geocoder instance. I would think you use it similar to how you use the library in Javascript, by passing the options to the constructor of the Geocoder object. However, that is giving me an error stating that the constructor does not take any arguments.
import DomainServiceInterface from "./../DomainServiceInterface";
import NodeGeocoder from "node-geocoder";
import LocationServiceInterface from "./LocationServiceInterface";

export default class LocationService implements DomainServiceInterface, LocationServiceInterface {
    private geocoder: NodeGeocoder.Geocoder;

    constructor() {
        const options: NodeGeocoder.BaseOptions = {
            provider: "google",
            // other options
        };

        this.geocoder = new NodeGeocoder.Geocoder(options);
    }

    // other methods here
}
I attempted to look this up. However, all tutorials and content I could find related to Node-Geocoder are in Javascript.
What am I doing wrong?
Fair warning, I've never used the node-geocoder package.
If you look at the type definitions, the export is a function, not a class. (Technically declaration merging is used to export a function and a namespace.) That function takes Options and instantiates a Geocoder instance.
That said, you would create a Geocoder like this.
import * as NodeGeocoder from 'node-geocoder';
const options: NodeGeocoder.Options = {
    provider: "google",
};
const geoCoder = NodeGeocoder(options);

Getting this error while running cordapp on linux server "net.corda.core.CordaRuntimeException"

I get the following error: net.corda.core.CordaRuntimeException: java.io.NotSerializableException: com.example.state.TradeState was not found by the node, check the Node containing the CorDapp that implements com.example.state.TradeState is loaded and on the Classpath.
I am running the CorDapp as a systemd service. Here is the image of the error and my node directory structure.
This might be an issue caused by multiple constructors. When you overload constructors in Corda (Java version), you need to put @ConstructorForDeserialization on the constructor that has more parameters. You also need to manually create all the getters (for database Hibernate).
Here is an example: https://github.com/corda/samples-java/blob/master/Accounts/tictacthor/contracts/src/main/java/com/tictacthor/states/BoardState.java
@ConstructorForDeserialization
public BoardState(UniqueIdentifier playerO, UniqueIdentifier playerX,
                  AnonymousParty me, AnonymousParty competitor,
                  boolean isPlayerXTurn, UniqueIdentifier linearId,
                  char[][] board, Status status) {
    this.playerO = playerO;
    this.playerX = playerX;
    this.me = me;
    this.competitor = competitor;
    this.isPlayerXTurn = isPlayerXTurn;
    this.linearId = linearId;
    this.board = board;
    this.status = status;
}

Apache Spark: saveAsTextFile not working correctly in Stand Alone Mode

I wrote a simple Apache Spark (1.2.0) Java program to import a text file and then write it to disk using saveAsTextFile. But the output folder either has no content (just the _SUCCESS file) or at times has incomplete data (data from just half of the tasks).
When I do rdd.count() on the RDD, it shows the correct number, so I know the RDD was constructed correctly; it is just the saveAsTextFile method that is not working.
Here is the code:
/* SimpleApp.java */
import java.util.List;
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "/tmp/READ_ME.txt"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> logData = sc.textFile(logFile);
        logData.saveAsTextFile("/tmp/simple-output");
        System.out.println("Lines -> " + logData.count());
    }
}
This is because you're saving to a local path. Are you running multiple machines? If so, each worker is saving to its own /tmp directory. Sometimes the driver executes a task, so you get part of the result locally. Really, you don't want to mix distributed mode and local file systems.
You can try code like the below (for example):
JavaSparkContext sc = new JavaSparkContext("local or your network IP", "Application name");
JavaRDD<String> lines = sc.textFile("Path Of Your File", numberOfPartitions);
System.out.println("Lines -> " + lines.count());
This prints the number of lines contained in the file.
