Build Failure in StormCrawler 1.16 - stormcrawler

I am using StormCrawler 1.16, Apache Storm 1.2.3, Maven 3.6.3 and JDK 1.8.
I created the project using the archetype command below:
mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-elasticsearch-archetype -DarchetypeVersion=LATEST
When I run mvn clean package, I get this error:
/crawler$ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------------< com.storm:crawler >--------------------------
[INFO] Building crawler 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ crawler ---
[INFO] Deleting /home/ubuntu/crawler/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ crawler ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 4 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ crawler ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/ubuntu/crawler/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/ubuntu/crawler/src/main/java/com/cnf/245/ESCrawlTopology.java:[19,16] ';' expected
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.407 s
[INFO] Finished at: 2020-06-29T20:40:46Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile
(default-compile) on project crawler: Compilation failure
[ERROR] /home/ubuntu/crawler/src/main/java/com/cnf/245/ESCrawlTopology.java:[19,16] ';' expected
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
I haven't edited the pom.xml file.
Here is the content of the ESCrawlTopology.java file:
package com.cnf.245;
import org.apache.storm.metric.LoggingMetricsConsumer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;
import com.digitalpebble.stormcrawler.ConfigurableTopology;
import com.digitalpebble.stormcrawler.Constants;
import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt;
import com.digitalpebble.stormcrawler.bolt.URLFilterBolt;
import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
import com.digitalpebble.stormcrawler.elasticsearch.bolt.DeletionBolt;
import com.digitalpebble.stormcrawler.elasticsearch.bolt.IndexerBolt;
import com.digitalpebble.stormcrawler.elasticsearch.metrics.MetricsConsumer;
import com.digitalpebble.stormcrawler.elasticsearch.metrics.StatusMetricsBolt;
import com.digitalpebble.stormcrawler.elasticsearch.persistence.AggregationSpout;
import com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt;
import com.digitalpebble.stormcrawler.spout.FileSpout;
import com.digitalpebble.stormcrawler.util.ConfUtils;
import com.digitalpebble.stormcrawler.util.URLStreamGrouping;
/**
 * Dummy topology to play with the spouts and bolts on ElasticSearch
 */
public class ESCrawlTopology extends ConfigurableTopology {

    public static void main(String[] args) throws Exception {
        ConfigurableTopology.start(new ESCrawlTopology(), args);
    }

    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        int numWorkers = ConfUtils.getInt(getConf(), "topology.workers", 1);
        if (args.length == 0) {
            System.err.println("ESCrawlTopology seed_dir file_filter");
            return -1;
        }
        // set to the real number of shards ONLY if es.status.routing is set to
        // true in the configuration
        int numShards = 1;
        builder.setSpout("filespout", new FileSpout(args[0], args[1], true));
        Fields key = new Fields("url");
        builder.setBolt("filter", new URLFilterBolt())
                .fieldsGrouping("filespout", Constants.StatusStreamName, key);
        builder.setSpout("spout", new AggregationSpout(), numShards);
        builder.setBolt("status_metrics", new StatusMetricsBolt())
                .shuffleGrouping("spout");
        builder.setBolt("partitioner", new URLPartitionerBolt(), numWorkers)
                .shuffleGrouping("spout");
        builder.setBolt("fetch", new FetcherBolt(), numWorkers)
                .fieldsGrouping("partitioner", new Fields("key"));
        builder.setBolt("sitemap", new SiteMapParserBolt(), numWorkers)
                .localOrShuffleGrouping("fetch");
        builder.setBolt("parse", new JSoupParserBolt(), numWorkers)
                .localOrShuffleGrouping("sitemap");
        builder.setBolt("indexer", new IndexerBolt(), numWorkers)
                .localOrShuffleGrouping("parse");
        builder.setBolt("status", new StatusUpdaterBolt(), numWorkers)
                .fieldsGrouping("fetch", Constants.StatusStreamName, key)
                .fieldsGrouping("sitemap", Constants.StatusStreamName, key)
                .fieldsGrouping("parse", Constants.StatusStreamName, key)
                .fieldsGrouping("indexer", Constants.StatusStreamName, key)
                .customGrouping("filter", Constants.StatusStreamName,
                        new URLStreamGrouping());
        builder.setBolt("deleter", new DeletionBolt(), numWorkers)
                .localOrShuffleGrouping("status", Constants.DELETION_STREAM_NAME);
        conf.registerMetricsConsumer(MetricsConsumer.class);
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class);
        return submit("crawl", conf, builder);
    }
}
I put com.cnf.245 as the groupId and crawler as the artifactId.
Can someone please explain what causes this error?

Can you please paste the content of ESCrawlTopology.java? Did you set com.cnf.245 as the package name?
The template class gets rewritten during the execution of the archetype, with the package name substituted; it could be that the value you set broke the template.
EDIT: you can't use a bare number as a package segment in Java: each segment must be a valid identifier, and identifiers cannot start with a digit, which is why the compiler stops at the package declaration and reports ';' expected. See Using numbers as package names in Java.
Use a different package name and groupId.
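As a minimal sketch of the rule (crawler245 is just a hypothetical replacement segment, not a name taken from the question), the difference shows up directly in the package declaration:

// package com.cnf.245;        // invalid: "245" starts with a digit, so it is not a legal Java identifier
package com.cnf.crawler245;    // valid: the segment starts with a letter (digits are allowed afterwards)

public class PackageNameDemo {
    public static void main(String[] args) {
        System.out.println("compiles fine");
    }
}

Regenerating the project from the archetype with a groupId/package that follows this rule should let mvn clean package get past the compile phase.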

Related

Error: Can't resolve 'fs' and Error: Can't resolve 'path' in '.\node_modules\bindings' after libxmljs is added in angular 13 project

My Angular project started giving an error during build after the addition of the libxmljs package. The strange thing is that the error points to the folder \node_modules\bindings instead of \node_modules\libxmljs.
The only code changes are:
"libxmljs": "^0.19.7" in package.json
and
let xsdDoc = libxmljs.parseXml(xsdFile); in myproject.component.ts
My environment -
Node v18.0.0
NPM 8.6.0
No Angular CLI; it's a Maven project.
Error log -
...
[INFO] ./node_modules/bindings/bindings.js:4:9-22 - Error: Module not found: Error: Can't resolve 'fs' in 'D:\my-frontend\node_modules\bindings'
[INFO]
[INFO] ./node_modules/bindings/bindings.js:5:11-26 - Error: Module not found: Error: Can't resolve 'path' in 'D:\my-frontend\node_modules\bindings'
[INFO]
[INFO] BREAKING CHANGE: webpack < 5 used to include polyfills for node.js core modules by default.
[INFO] This is no longer the case. Verify if you need this module and configure a polyfill for it.
[INFO]
[INFO] If you want to include a polyfill, you need to:
[INFO] - add a fallback 'resolve.fallback: { "path": require.resolve("path-browserify") }'
[INFO] - install 'path-browserify'
[INFO] If you don't want to include a polyfill, you can use an empty module like this:
[INFO] resolve.fallback: { "path": false }
[INFO]
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 07:51 min
...
Troubleshooting:
Tried "browser": { "fs": false, "path": false } with no luck.
Tried webpack.config.js but got the same issue.
Is this a bug in libxmljs, and should I find another way to validate XMLs?
It seems to be a node_modules/bindings-specific bug.
I resolved the issue by switching to Java, using javax.xml.
Sample code for a WAR-based app:
@PostMapping("/xml/validate")
public ResourceEntity<Boolean> validateXmlWithXSD(HttpServletRequest request,
        @RequestHeader(value = "Authorization", required = true) String token,
        int xsdVersion, @RequestBody ProjectXml processIn) {
    //step- initializations
    ...
    try {
        //step- session validation
        ...
        //step- xsd access from webapp
        String xsdPath = "/assets/xsds/ProjectXsd." + xsdVersion + ".xsd";
        InputStream xsdStream = request.getSession().getServletContext().getResourceAsStream(xsdPath);
        if (xsdStream == null)
            throw new NullPointerException("Null xsd stream.");
        //step- xml validation against xsd
        Source source = new StreamSource(xsdStream);
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI).newSchema(source);
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new ByteArrayInputStream(processIn.getProcessXML())));
        isValidated = true;
    } catch (SAXException | IOException | NullPointerException e) {
        ...
    }
    //step- return result
}
In the case of a Spring Boot JAR app:
//step- xsd access from target/classes
String xsdPath = "/assets/xsds/ProjectXsd." + xsdVersion + ".xsd";
InputStream xsdStream = new ClassPathResource(xsdPath).getInputStream();
and in Maven I configured:
<resources>
    <resource>
        <directory>${project.build.outputDirectory}/assets/runtimeXsds</directory>
    </resource>
</resources>
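For reference, here is a minimal, self-contained sketch of the same javax.xml.validation flow outside of Spring (schema.xsd and data.xml are placeholder file names, not paths from the answer above):

import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class XsdValidationDemo {
    public static void main(String[] args) {
        try {
            // build a Schema object from the XSD
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new File("schema.xsd")));
            // validate the XML document against the schema
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("data.xml")));
            System.out.println("XML is valid against the XSD");
        } catch (SAXException e) {
            // thrown when the document does not conform to the schema (or the XSD itself is invalid)
            System.out.println("Validation failed: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("Could not read input: " + e.getMessage());
        }
    }
}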

Prisma: getting "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found" error

I'm getting errors related to name 'default' and stage 'default' when initializing a new Prisma project.
Steps to reproduce:
Follow all the steps from the official guide strictly.
Get the com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found error when running prisma deploy.
Get this error when performing a simple query from http://localhost:4466/graphql:
Query:
query {
  user {
    id
    name
  }
}
Response:
{
  "errors": [
    {
      "message": "Project not found: 'graphql_default'",
      "code": 3016,
      "requestId": "local:cjzs556h5000f0754vc6k36qd"
    }
  ]
}
Versions:
Connector: MongoDB
Prisma Server: 1.34.6
prisma CLI: prisma/1.34.6 (darwin-x64) node-v10.16.3
OS: OS X Mojave - 10.14.6
Logs from Docker:
$ docker logs hello-world_prisma_1
No log level set, defaulting to INFO.
[INFO] Cluster created with settings {hosts=[mongo:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
[INFO] Exception in monitor thread while connecting to server mongo:27017
Exception opening socket
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.AsynchronousSocketChannelStream$OpenCompletionHandler.failed(AsynchronousSocketChannelStream.java:272)
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:128)
at sun.nio.ch.Invoker$2.run(Invoker.java:218)
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)
... 1 more
[INFO] Initializing workers...
[INFO] Obtaining exclusive agent lock...
[INFO] Obtaining exclusive agent lock... Successful.
[INFO] Successfully started 1 workers.
[INFO] No server chosen by com.mongodb.async.client.ClientSessionHelper$1@70a6c292 from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=mongo:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]}. Waiting for 30000 ms before timing out
[INFO] Opened connection [connectionId{localValue:2, serverValue:1}] to mongo:27017
[INFO] Monitor thread successfully connected to server with description ServerDescription{address=mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 13]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=16638401}
Server running on :4466
[INFO] Opened connection [connectionId{localValue:3, serverValue:2}] to mongo:27017
[INFO] Deployment worker initialization complete.
[Warning] Management authentication is disabled. Enable it in your Prisma config to secure your server.
{"key":"error/handled","requestId":"local:cjzs54qg500020754mbbzqni9","payload":{"exception":"com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found","query":"\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ","variables":"{\"name\":\"default\",\"stage\":\"default\"}","code":"4000","stack_trace":"com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:288)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)","message":"No service with name 'default' and stage 'default' found"}}
[Debug] Initializing deployment worker for default_default
[Debug] Scheduling deployment for project default_default
[INFO] Opened connection [connectionId{localValue:4, serverValue:3}] to mongo:27017
[Debug] Applied migration for project default_default
Formatted [Warning]:
{
"key": "error/handled",
"requestId": "local:cjzs54qg500020754mbbzqni9",
"payload": {
"exception": "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found",
"query": "\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ",
"variables": "{\"name\":\"default\",\"stage\":\"default\"}",
"code": "4000",
"stack_trace": "com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:288)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)",
"message": "No service with name 'default' and stage 'default' found"
}
}
Formatted "query":
query($name: String! $stage: String!) {
  project(name: $name stage: $stage) {
    name
    stage
  }
}
Formatted "variables":
{ "name":"default", "stage":"default" }
Formatted stack trace:
com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)
scala.Option.getOrElse(Option.scala:121)
com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)
scala.util.Success.$anonfun$map$1(Try.scala:251)
scala.util.Success.map(Try.scala:209)
scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
P.S.: It was actually running flawlessly a few days ago, but today I can't manage to make it work again!

Metrics with Prometheus issue using JHipster 6.0.1 (The elements [jhipster.metrics.prometheus.enabled] were left unbound.)

I'm getting an error running my JHipster application with Prometheus configuration for metrics.
I use the configuration from the official website:
https://www.jhipster.tech/monitoring/
In my application-dev.yml I have:
metrics:
  prometheus:
    enabled: true
And my class for auth is:
@Configuration
@Order(1)
@ConditionalOnProperty(prefix = "jhipster", name = "metrics.prometheus.enabled")
public class BasicAuthConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .antMatcher("/management/prometheus/**")
            .authorizeRequests()
            .anyRequest().hasAuthority(AuthoritiesConstants.ADMIN)
            .and()
            .httpBasic().realmName("jhipster")
            .and()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and().csrf().disable();
    }
}
2019-06-25 12:22:52.693 INFO 13260 --- [ restartedMain] com.ex.App : The following profiles are active: dev,swagger
2019-06-25 12:22:55.170 WARN 13260 --- [ restartedMain] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'undertowServletWebServerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/ServletWebServerFactoryConfiguration$EmbeddedUndertow.class]: Initialization of bean failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webConfigurer' defined in file [/home/eclipse-workspace/back_docker/target/classes/com/ex/config/WebConfigurer.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'io.github.jhipster.config.JHipsterProperties': Could not bind properties to 'JHipsterProperties' : prefix=jhipster, ignoreInvalidFields=false, ignoreUnknownFields=false; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'jhipster' to io.github.jhipster.config.JHipsterProperties
2019-06-25 12:22:55.188 ERROR 13260 --- [ restartedMain] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target [Bindable@7585af55 type = io.github.jhipster.config.JHipsterProperties, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(ignoreInvalidFields=false, ignoreUnknownFields=false, value=jhipster, prefix=jhipster)]] failed:
Property: jhipster.metrics.prometheus.enabled
Value: true
Origin: class path resource [config/application-dev.yml]:128:22
Reason: The elements [jhipster.metrics.prometheus.enabled] were left unbound.
Action:
Update your application's configuration
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.679 s
[INFO] Finished at: 2019-06-25T12:22:55+02:00
[INFO] ------------------------------------------------------------------------
I changed my JHipster project from microservice application to microservice gateway and it solved this issue.
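For background on why the binding fails rather than the unknown key simply being ignored: the error above shows JHipsterProperties is bound with ignoreUnknownFields=false, so any property under the jhipster prefix that has no matching field is rejected at startup. A minimal sketch of that mechanism under the same assumption, with purely illustrative class and property names (not JHipster's own):

// registered via @EnableConfigurationProperties(DemoProperties.class) in a Spring Boot 2.x app
@org.springframework.boot.context.properties.ConfigurationProperties(prefix = "demo", ignoreUnknownFields = false)
public class DemoProperties {

    private final Metrics metrics = new Metrics();

    public Metrics getMetrics() {
        return metrics;
    }

    public static class Metrics {
        // binds demo.metrics.enabled; there is no "prometheus" sub-object here,
        // so setting demo.metrics.prometheus.enabled=true makes startup fail with
        // "The elements [demo.metrics.prometheus.enabled] were left unbound."
        private boolean enabled;

        public boolean isEnabled() {
            return enabled;
        }

        public void setEnabled(boolean enabled) {
            this.enabled = enabled;
        }
    }
}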

Getting SparkFlumeProtocol and EventBatch not found errors when building Spark 1.6.2 on CentOS7

I was trying to build Spark 1.6.2 on CentOS7 and ran into the error below:
[error] /home/pateln16/spark-1.6.2/external/flume-sink/src/main/scala/org/apache/spark/streaming/flume/sink/SparkAvroCallbackHandler.scala:45: not found: type SparkFlumeProtocol
[error] val transactionTimeout: Int, val backOffInterval: Int) extends SparkFlumeProtocol with Logging {
[error] ^
[error] /home/pateln16/spark-1.6.2/external/flume-sink/src/main/scala/org/apache/spark/streaming/flume/sink/SparkAvroCallbackHandler.scala:70: not found: type EventBatch
[error] override def getEventBatch(n: Int): EventBatch = {
[error] ^
[error] /home/pateln16/spark-1.6.2/external/flume-sink/src/main/scala/org/apache/spark/streaming/flume/sink/TransactionProcessor.scala:80: not found: type EventBatch
[error] def getEventBatch: EventBatch = {
[error] ^
[error] /home/pateln16/spark-1.6.2/external/flume-sink/src/main/scala/org/apache/spark/streaming/flume/sink/SparkSinkUtils.scala:25: not found: type EventBatch
[error] def isErrorBatch(batch: EventBatch): Boolean = {
[error] ^
[error] /home/pateln16/spark-1.6.2/external/flume-sink/src/main/scala/org/apache/spark/streaming/flume/sink/SparkAvroCallbackHandler.scala:85: not found: type EventBatch
[error] new EventBatch("Spark sink has been stopped!", "", java.util.Collections.emptyList())
[error] ^
[warn] Class org.jboss.netty.channel.ChannelFactory not found - continuing with a stub.
[warn] Class org.jboss.netty.channel.ChannelFactory not found - continuing with a stub.
[warn] Class org.jboss.netty.channel.ChannelPipelineFactory not found - continuing with a stub.
[warn] Class org.jboss.netty.handler.execution.ExecutionHandler not found - continuing with a stub.
[warn] Class org.jboss.netty.channel.ChannelFactory not found - continuing with a stub.
[warn] Class org.jboss.netty.handler.execution.ExecutionHandler not found - continuing with a stub.
[warn] Class org.jboss.netty.channel.group.ChannelGroup not found - continuing with a stub.
[error] /home/pateln16/spark-1.6.2/external/flume-sink/src/main/scala/org/apache/spark/streaming/flume/sink/SparkSink.scala:86: not found: type SparkFlumeProtocol
[error] val responder = new SpecificResponder(classOf[SparkFlumeProtocol], handler.get)
I met the same problem on Spark 2.0.0. I think the reason is that the file 'external\flume-sink\src\main\avro\sparkflume.avdl' is not compiled properly.
The problem can be resolved by:
Download Apache Avro: http://avro.apache.org/docs/current/gettingstartedjava.html
I downloaded all the jar files into the folder 'C:\Downloads\avro'.
Go to the folder 'external\flume-sink\src\main\avro'.
Compile sparkflume.avdl to Java files:
java -jar C:\Downloads\avro\avro-tools-1.8.1.jar idl sparkflume.avdl > sparkflume.avpr
java -jar C:\Downloads\avro\avro-tools-1.8.1.jar compile -string protocol sparkflume.avpr ..\scala
Recompile your projects.

Using Sparksql and SparkCSV with SparkJob Server

I am trying to build a JAR of a simple Scala application which uses SparkCSV and Spark SQL to create a DataFrame from a CSV file stored in HDFS, and then run a simple query to return the max and min of a specific column in the CSV file.
I get an error when I use the sbt command to create the JAR, which I will later curl to the job server's /jars folder and execute from a remote machine.
Code:
import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.SparkContext._
import org.apache.spark._
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

object sparkSqlCSV extends SparkJob {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[4]").setAppName("sparkSqlCSV")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is " + results)
  }

  override def validate(sc: sqlContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: sqlContext, config: Config): Any = {
    val value = "com.databricks.spark.csv"
    val ControlDF = sqlContext.load(value, Map("path" -> "hdfs://mycluster/user/Test.csv", "header" -> "true"))
    ControlDF.registerTempTable("Control")
    val aggDF = sqlContext.sql("select max(DieX) from Control")
    aggDF.collectAsList()
  }
}
Error:
[hduser@ptfhadoop01v spark-jobserver]$ sbt ashesh-jobs/package
[info] Loading project definition from /usr/local/hadoop/spark-jobserver/project
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/usr/local/hadoop/spark-jobserver/)
[info] scalastyle using config /usr/local/hadoop/spark-jobserver/scalastyle-config.xml
[info] Processed 2 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 9 ms
[success] created output: /usr/local/hadoop/spark-jobserver/ashesh-jobs/target
[warn] Credentials file /home/hduser/.bintray/.credentials does not exist
[info] Updating {file:/usr/local/hadoop/spark-jobserver/}ashesh-jobs...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] scalastyle using config /usr/local/hadoop/spark-jobserver/scalastyle-config.xml
[info] Processed 5 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 1 ms
[success] created output: /usr/local/hadoop/spark-jobserver/job-server-api/target
[info] Compiling 2 Scala sources and 1 Java source to /usr/local/hadoop/spark-jobserver/ashesh-jobs/target/scala-2.10/classes...
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:8: object sql is not a member of package org.apache.spark
[error] import org.apache.spark.sql.SQLContext
[error] ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:14: object sql is not a member of package org.apache.spark
[error] val sqlContext = new org.apache.spark.sql.SQLContext(sc)
[error] ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:25: not found: type sqlContext
[error] override def runJob(sc: sqlContext, config: Config): Any = {
[error] ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:21: not found: type sqlContext
[error] override def validate(sc: sqlContext, config: Config): SparkJobValidation = {
[error] ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:27: not found: value sqlContext
[error] val ControlDF = sqlContext.load(value,Map("path"->"hdfs://mycluster/user/Test.csv","header"->"true"))
[error] ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:29: not found: value sqlContext
[error] val aggDF = sqlContext.sql("select max(DieX) from Control")
[error] ^
[error] 6 errors found
[error] (ashesh-jobs/compile:compileIncremental) Compilation failed
[error] Total time: 10 s, completed May 26, 2016 4:42:52 PM
[hduser@ptfhadoop01v spark-jobserver]$
I guess the main issue is that it's missing the dependencies for SparkCSV and Spark SQL, but I have no idea where to place the dependencies before compiling the code with sbt.
I am issuing the following command to package the application; the source code is placed under the "ashesh_jobs" directory:
[hduser@ptfhadoop01v spark-jobserver]$ sbt ashesh-jobs/package
I hope someone can help me resolve this issue. Can you tell me in which file I can specify the dependency, and in what format?
The following link has more information on creating other contexts: https://github.com/spark-jobserver/spark-jobserver/blob/master/doc/contexts.md
Also, you need job-server-extras.
Add the library dependency in build.sbt:
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.2"
