Has anyone been able to connect Spark Job Server to HDInsight? - azure-hdinsight

HDInsight used to support Spark Job Server (https://github.com/Huachao/azure-content/blob/master/articles/hdinsight/hdinsight-apache-spark-job-server.md), but it is no longer supported. Does anyone still connect to HDInsight via SJS? I am trying to connect by embedding SJS in an identical Spark distribution and running it as a service, pointing it at the HDInsight cluster via the configuration below. Thanks.
spark {
  master = yarn
  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 4

  jobserver {
    port = 8090
    context-per-jvm = true
    daorootdir = "/tmp/spark-jobserver"

    binarydao {
      class = spark.jobserver.io.HdfsBinaryDAO
    }

    metadatadao {
      class = spark.jobserver.io.MetaDataSqlDAO
    }

    sqldao {
      # Slick database driver, full classpath
      slick-driver = slick.jdbc.MySQLProfile
      # JDBC driver, full classpath
      jdbc-driver = com.mysql.jdbc.Driver

      jdbc {
        url = "jdbc:mysql://x.x.x.x/spark_jobserver"
        user = "xxxxx"
        password = "xxxx"
      }

      dbcp {
        maxactive = 20
        maxidle = 10
        initialsize = 10
      }
    }
  }

  flyway.locations = "db/mysql/migration"

  # predefined Spark contexts
  contexts {
    my-low-latency-context {
      num-cpu-cores = 1       # Number of cores to allocate. Required.
      memory-per-node = 512m  # Executor memory per node, -Xmx style, e.g. 512m, 1G, etc.
    }
    # define additional contexts here
  }

  # Universal context configuration. These settings can be overridden, see README.md
  context-settings {
    # Choose a port that is free on your system; the next 16 port numbers
    # (depending on the max retries for submitting the job) should also be free
    spark.driver.port = 32456  # important

    # Where the spark-assembly jar is located in HDFS
    spark.yarn.jar = "hdfs://hadoopHDFSCluster/spark_jars/spark-assembly-1.6.0-hadoop2.6.0.jar"  # important

    # The YARN queue the job is submitted to
    #spark.yarn.queue = root.myYarnQueue

    num-cpu-cores = 2       # Number of cores to allocate. Required.
    memory-per-node = 512m  # Executor memory per node, -Xmx style, e.g. 512m, 1G, etc.

    # In case the Spark distribution should be accessed from HDFS
    # (as opposed to being installed on every mesos slave)
    # spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"

    # URIs of jars to be loaded into the classpath for this context.
    # Either a string list, or a single string of comma-separated URIs.
    # dependent-jar-uris = ["file:///some/path/present/in/each/mesos/slave/somepackage.jar"]

    # Settings to pass to the SparkConf as-is go here in passthrough,
    # such as hadoop connection settings that don't use the "spark." prefix
    passthrough {
      #es.nodes = "192.1.1.1"
    }
  }

  # This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
  home = "/usr/hdp/current/spark2-client"
}
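For reference, the kind of job I use to verify that a context actually starts against the HDInsight YARN cluster is roughly the sketch below (hypothetical class name, legacy spark.jobserver.SparkJob API; treat it as an untested sketch rather than my exact code):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

// Trivial job: if this runs, the jobserver context was created on YARN successfully.
object ConnectivityCheckJob extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid
  override def runJob(sc: SparkContext, jobConfig: Config): Any =
    sc.parallelize(1 to 100).sum()
}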

Related

Add Sentry Log4j2 appender at runtime

I've been browsing previous threads about adding Log4j2 appenders at runtime, but none of them really seem to fit my scenario.
We have a long-running Flink job packaged into a fat JAR that we essentially submit to a running Flink cluster. We want to forward error logs to Sentry. Conveniently, Sentry provides a Log4j2 appender that I want to use, but all attempts to get Log4j2 working have failed -- I'm going a bit crazy about this (I've spent days on it).
Since Flink (which also uses Log4j2) provides a set of default logging configurations that take precedence over any configuration files we bundle in our JAR, I'm essentially left with configuring the appender at runtime to see if that will register the appender and forward LogEvents to it.
As a side note, I attempted to override the Flink-provided configuration file (essentially adding the appender directly to the log4j2.properties file), but Flink fails to load the plugin due to a missing dependency, io.sentry.IHub, which doesn't make sense since the examples and Sentry docs don't mention any dependencies beyond the Log4j-related ones that already exist on the classpath.
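(For reference, the properties-based override I attempted looked roughly like the sketch below; the attribute names are my reading of the Sentry Log4j2 docs and the DSN is a placeholder.)

packages = io.sentry.log4j2

appender.sentry.type = Sentry
appender.sentry.name = Sentry
appender.sentry.dsn = <redacted>

rootLogger.appenderRef.sentry.ref = Sentry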
I've followed the example in the Log4j docs, Programmatically Modifying the Current Configuration after Initialization, but the logs are not getting through to Sentry.
SentryLog4j.scala
package com.REDACTED.thoros.config

import io.sentry.log4j2.SentryAppender
import org.apache.logging.log4j.Level
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.core.LoggerContext
import org.apache.logging.log4j.core.config.AppenderRef
import org.apache.logging.log4j.core.config.Configuration
import org.apache.logging.log4j.core.config.LoggerConfig

object SentryLog4j2 {
  val SENTRY_LOGGER_NAME = "Sentry"
  val SENTRY_BREADCRUMBS_LEVEL: Level = Level.ALL
  val SENTRY_MINIMUM_EVENT_LEVEL: Level = Level.ERROR
  val SENTRY_DSN = "REDACTED"

  def init(): Unit = {
    // scalafix:off
    val loggerContext: LoggerContext =
      LogManager.getContext(false).asInstanceOf[LoggerContext]
    val configuration: Configuration = loggerContext.getConfiguration

    val sentryAppender: SentryAppender = SentryAppender.createAppender(
      SENTRY_LOGGER_NAME,
      SENTRY_BREADCRUMBS_LEVEL,
      SENTRY_MINIMUM_EVENT_LEVEL,
      SENTRY_DSN,
      false,
      null
    )
    sentryAppender.start()
    configuration.addAppender(sentryAppender)

    // Creating a new dedicated logger for Sentry
    val ref: AppenderRef = AppenderRef.createAppenderRef("Sentry", null, null)
    val refs: Array[AppenderRef] = Array(ref)
    val loggerConfig: LoggerConfig = LoggerConfig.createLogger(
      false,
      Level.ERROR,
      "org.apache.logging.log4j",
      "true",
      refs,
      null,
      configuration,
      null
    )
    loggerConfig.addAppender(sentryAppender, null, null)
    configuration.addLogger("org.apache.logging.log4j", loggerConfig)

    println(configuration.getAppenders)
    loggerContext.updateLoggers()
    // scalafix:on
  }
}
Then SentryLog4j2.init() is invoked in the Main module.
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.Logger
import org.apache.logging.log4j.core.LoggerContext
import org.apache.logging.log4j.core.config.Configuration

object Main {
  val logger: Logger = LogManager.getLogger()

  sys.env.get("ENVIRONMENT") match {
    case Some("dev") | Some("staging") | Some("production") =>
      SentryLog4j2.init()
    case _ => SentryLog4j2.init() // <-- this was only added during debugging
  }

  def main(args: Array[String]): Unit = {
    logger.error("test") // this does not forward the logevent to the appender
  }
}
I think I somehow need to register the appender with the LoggerConfig that the root logger uses, so that all logger.error statements are propagated to the configured Sentry appender?
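Something like the sketch below is what I have in mind (untested; it assumes the Log4j2 core Configuration.getRootLogger API and reuses sentryAppender and loggerContext from the init() code above):

// Untested sketch: attach the already-started appender to the root LoggerConfig
// instead of a dedicated "org.apache.logging.log4j" logger, then refresh the loggers.
val rootLoggerConfig: LoggerConfig = configuration.getRootLogger
rootLoggerConfig.addAppender(sentryAppender, Level.ERROR, null) // forward ERROR and above
loggerContext.updateLoggers()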
Greatly appreciate any guidance with this!
This isn't an answer to how you get Log4j2 and the SentryAppender to work, but for anyone else stumbling on this problem, I'll briefly explain what I did to get the Sentry integration working.
What I eventually decided to do was drop the SentryAppender and use the raw Sentry client instead, adding a wrapper class that exposes the typical debug, info, warn and error methods. For the warn-and-above methods, the wrapper also sends the log event to Sentry.
This is essentially the only way I got this to work within a Flink cluster.
See example below:
sealed trait LoggerLike {
  type LoggerFn = (String, Option[Object]) => Unit

  val debug: LoggerFn
  val info: LoggerFn
  val warn: LoggerFn
  val error: LoggerFn
}

trait LazyLogging {
  @transient
  protected lazy val logger: CustomLogger =
    CustomLogger.getLogger(getClass.getName, enableSentry = true)
}

final class CustomLogger(slf4JLogger: Logger) extends LoggerLike { ...your implementation... }
Then, for each class/object (in Scala at least), you just extend the LazyLogging trait to get a logger instance.
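As a rough illustration of the idea (not my exact code: the constructor shape, the getLogger factory, and the Sentry calls below are a hypothetical sketch assuming the io.sentry Sentry.captureMessage API on top of an SLF4J logger):

import io.sentry.{Sentry, SentryLevel}
import org.slf4j.{Logger, LoggerFactory}

// Hypothetical sketch: log through SLF4J as usual and, for warn-and-above,
// also forward the message to Sentry via the raw client.
final class CustomLogger(slf4JLogger: Logger, enableSentry: Boolean) extends LoggerLike {
  private def send(level: SentryLevel, msg: String): Unit =
    if (enableSentry) Sentry.captureMessage(msg, level)

  val debug: LoggerFn = (msg, _) => slf4JLogger.debug(msg)
  val info: LoggerFn  = (msg, _) => slf4JLogger.info(msg)
  val warn: LoggerFn  = (msg, _) => { slf4JLogger.warn(msg); send(SentryLevel.WARNING, msg) }
  val error: LoggerFn = (msg, _) => { slf4JLogger.error(msg); send(SentryLevel.ERROR, msg) }
}

object CustomLogger {
  def getLogger(name: String, enableSentry: Boolean): CustomLogger = {
    // Assumes the DSN comes from the environment; guard so init only runs once.
    if (enableSentry && !Sentry.isEnabled()) Sentry.init()
    new CustomLogger(LoggerFactory.getLogger(name), enableSentry)
  }
}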

Unable to get environment variable in Azure Batch VM

Here is my Azure Batch configuration, which I am trying to create from the Microsoft tutorials here and here.
I am trying to get the environment variables defined here:
CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: PoolId,
    targetDedicatedComputeNodes: PoolNodeCount,
    virtualMachineSize: PoolVMSize,
    virtualMachineConfiguration: vmConfiguration);

// Specify the application and version to install on the compute nodes
pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference {
        ApplicationId = "7Zip",
        Version = "19.00" }
};

// Commit the pool so that it's created in the Batch service. As the nodes join
// the pool, the specified application package is installed on each.
await pool.CommitAsync();

CloudJob job = batchClient.JobOperations.CreateJob();
job.Id = JobId;
job.PoolInformation = new PoolInformation { PoolId = PoolId };
await job.CommitAsync();

string taskId = "blendertask01";
string commandLine = @"cmd /c echo %AZ_BATCH_APP_PACKAGE_7Zip%";
CloudTask blenderTask = new CloudTask(taskId, commandLine);
batchClient.JobOperations.AddTask(JobId, blenderTask);
I am expecting the output of cmd /c echo %AZ_BATCH_APP_PACKAGE_7Zip% to give me the path where I can find my 7Zip application so that I can install it; however, I don't get that.
Instead I get the literal string %AZ_BATCH_APP_PACKAGE_7Zip%.
Instead of using AZ_BATCH_APP_PACKAGE_7Zip, we should also specify the version, so it becomes %AZ_BATCH_APP_PACKAGE_7Zip#19.00%\7z1900-x64.exe.
This information is not clearly documented; I discovered it after a few attempts.
Note also that these environment variables are not visible when you log on to the node with a remote account.

Too many open files in Spark aborting Spark job

In my application I am reading 40 GB of text data spread across 188 files.
I split these files and create one XML file per line in Spark using a pair RDD.
For 40 GB of input this creates many millions of small XML files, and that is my requirement.
All works fine, but when Spark saves the files to S3 it throws an error and the job fails.
Here is the exception I get:
Caused by: java.nio.file.FileSystemException: /mnt/s3/emrfs-2408623010549537848/0000000000: Too many open files
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
    at java.nio.file.Files.newByteChannel(Files.java:361)
    at java.nio.file.Files.createFile(Files.java:632)
    at com.amazon.ws.emr.hadoop.fs.files.TemporaryFiles.create(TemporaryFiles.java:70)
    at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.openNewPart(MultipartUploadOutputStream.java:493)
    ... 21 more

ApplicationMaster host: 10.97.57.198
ApplicationMaster RPC port: 0
queue: default
start time: 1542344243252
final status: FAILED
tracking URL: http://ip-10-97-57-234.tr-fr-nonprod.aws-int.thomsonreuters.com:20888/proxy/application_1542343091900_0001/
user: hadoop

Exception in thread "main" org.apache.spark.SparkException: Application application_1542343091900_0001 finished with failed status
And this as well:
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception:
Please reduce your request rate. (Service: Amazon S3; Status Code: 503; Error Code: SlowDown;
Request ID: D33581CA9A799F64; S3 Extended Request ID:
/SlEplo+lCKQRVVH+zHiop0oh8q8WqwnNykK3Ga6/VM2HENl/eKizbd1rg4vZD1BZIpp8lk6zwA=),
S3 Extended Request ID: /SlEplo+lCKQRVVH+zHiop0oh8q8WqwnNykK3Ga6/VM2HENl/eKizbd1rg4vZD1BZIpp8lk6zwA=
Here is my code that does this:
import org.apache.spark.{SparkConf, SparkContext}

object TestAudit {
  def main(args: Array[String]) {
    val inputPath = args(0)
    val output = args(1)
    val noOfHashPartitioner = args(2).toInt

    //val conf = new SparkConf().setAppName("AuditXML").setMaster("local")
    val conf = new SparkConf().setAppName("AuditXML")
    val sc = new SparkContext(conf)
    val input = sc.textFile(inputPath)

    val pairedRDD = input.map(row => {
      val split = row.split("\\|")
      val fileName = split(0)
      val fileContent = split(1)
      (fileName, fileContent)
    })

    import org.apache.hadoop.io.NullWritable
    import org.apache.hadoop.io.compress.GzipCodec // needed for classOf[GzipCodec] below
    import org.apache.spark.HashPartitioner
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

    // Write each value into the file named by its key (one output file per key)
    class RddMultiTextOutputFormat extends MultipleTextOutputFormat[Any, Any] {
      override def generateActualKey(key: Any, value: Any): Any = NullWritable.get()
      override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
        key.asInstanceOf[String]
    }

    pairedRDD
      .partitionBy(new HashPartitioner(10000))
      .saveAsHadoopFile(
        "s3://a205381-tr-fr-development-us-east-1-trf-auditabilty//AUDITOUTPUT",
        classOf[String], classOf[String], classOf[RddMultiTextOutputFormat], classOf[GzipCodec])
  }
}
I even tried reducing the number of HashPartitioner partitions, but that does not work either.
Every process on a Unix system is limited in the number of files it may hold open (the number of file descriptors). Because your data is large and is partitioned into many sub-files (internally in Spark), your process hits this limit and errors out.
You can increase the number of file descriptors for each user as follows:
Edit the file /etc/security/limits.conf and add (or modify):
* hard nofile 500000
* soft nofile 500000
root hard nofile 500000
root soft nofile 500000
This sets the nofile limit (number of file descriptors) to 500000 for every user as well as for the root user.
The changes are applied after a restart.
You can also set the number of file descriptors for a specific process by setting LimitNOFILE. For example, if you use YARN to run Spark jobs and the YARN daemons are started with systemd, you can add LimitNOFILE=128000 to the YARN systemd units (ResourceManager and NodeManager) to set those processes' file-descriptor limit to 128000, as sketched below.
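A minimal sketch of such a drop-in, assuming a systemd-managed NodeManager unit (the unit and file names below are illustrative and vary by distribution), followed by systemctl daemon-reload and a restart of the service:

# /etc/systemd/system/hadoop-yarn-nodemanager.service.d/limits.conf  (hypothetical path)
[Service]
LimitNOFILE=128000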
related articles:
3 Methods to Change the Number of Open File Limit in Linux
Limits on the number of file descriptors

How can I use a JSR-203 file system provider with Apache Spark?

We'd like to use the HDFS NIO.2 file system provider in a Spark job. However, we've run into the classpath issue with file system providers: they have to be on the system classpath to be usable through the Paths.get(URI) API. As a result, the provider is not found even when it is included in the JAR files supplied to spark-submit.
Here's the spark-submit command:
spark-submit --master local["*"] \
    --jars target/dependency/jimfs-1.1.jar,target/dependency/guava-16.0.1.jar \
    --class com.basistech.tc.SparkFsTc \
    target/spark-fs-tc-0.0.1-SNAPSHOT.jar
And here's the job class, which fails with 'file system not found.'
public final class SparkFsTc {
    private SparkFsTc() {
        //
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("File System Test Case");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> logData = sc.parallelize(Collections.singletonList("foo"));
        System.out.println(logData.getNumPartitions());
        logData.mapPartitions(itr -> {
            FileSystem fs = Jimfs.newFileSystem();
            Path path = fs.getPath("/root");
            URI uri = path.toUri();
            Paths.get(uri); // expect this to go splat.
            return null;
        }).collect();
    }
}
Is there some mechanism to persuade Spark to add the FS provider to the appropriate classpath?
Readers should note that file system providers are special. If you read the code in the JRE, you will see:
ServiceLoader<FileSystemProvider> sl = ServiceLoader
    .load(FileSystemProvider.class, ClassLoader.getSystemClassLoader());
They have to be in 'the system class loader'. They are not found locally.
This thing would work fine if I acquired the FileSystem object reference myself instead of using Paths.get(URI).
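To make that concrete, here is a rough sketch (written in Scala for brevity; the Java equivalent is mechanical) of resolving paths through the FileSystem reference you already hold, so the installed-provider lookup inside Paths.get never happens:

import com.google.common.jimfs.Jimfs
import java.nio.file.{Files, Path}

val fs = Jimfs.newFileSystem()       // provider class comes from the job's own jars
val root: Path = fs.getPath("/root")
Files.createDirectories(root)        // fine: no lookup through the system class loader

Alternatively, spark.driver.extraClassPath / spark.executor.extraClassPath prepend entries to the JVM classpath itself, which may be enough to make the provider visible to the system class loader; I have not verified this for executors.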

Apache Spark: saveAsTextFile not working correctly in standalone mode

I wrote a simple Apache Spark (1.2.0) Java program to import a text file and then write it to disk using saveAsTextFile. But the output folder either has no content (just the _SUCCESS file) or at times has incomplete data (data from just half of the tasks).
When I do an rdd.count() on the RDD, it shows the correct number, so I know the RDD was constructed correctly; it is just the saveAsTextFile method that is not working.
Here is the code:
/* SimpleApp.java */
import java.util.List;
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "/tmp/READ_ME.txt"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> logData = sc.textFile(logFile);

        logData.saveAsTextFile("/tmp/simple-output");
        System.out.println("Lines -> " + logData.count());
    }
}
This is because you're saving to a local path. Are you running on multiple machines? If so, each worker is saving to its own /tmp directory. Sometimes the driver executes a task, so you get part of the result locally. Really, you don't want to mix distributed mode and local file systems.
You can try code like the following (for example):
JavaSparkContext sc = new JavaSparkContext("local or your network IP", "Application name");
JavaRDD<String> lines = sc.textFile("Path Of Your File", numPartitions); // numPartitions: your number of partitions
System.out.println("Lines -> " + lines.count());
This then prints the number of lines contained in the file.
