Metrics with Prometheus issue using JHipster 6.0.1 (The elements [jhipster.metrics.prometheus.enabled] were left unbound.)

I'm getting an error running my JHipster application with Prometheus configuration for metrics.
I use the configuration from the official website:
https://www.jhipster.tech/monitoring/
In my application-dev.yml I have:
metrics:
    prometheus:
        enabled: true
And my class for auth is:
@Configuration
@Order(1)
@ConditionalOnProperty(prefix = "jhipster", name = "metrics.prometheus.enabled")
public class BasicAuthConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .antMatcher("/management/prometheus/**")
            .authorizeRequests()
            .anyRequest().hasAuthority(AuthoritiesConstants.ADMIN)
            .and()
            .httpBasic().realmName("jhipster")
            .and()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and().csrf().disable();
    }
}
2019-06-25 12:22:52.693 INFO 13260 --- [ restartedMain] com.ex.App : The following profiles are active: dev,swagger
2019-06-25 12:22:55.170 WARN 13260 --- [ restartedMain] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'undertowServletWebServerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/ServletWebServerFactoryConfiguration$EmbeddedUndertow.class]: Initialization of bean failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webConfigurer' defined in file [/home/eclipse-workspace/back_docker/target/classes/com/ex/config/WebConfigurer.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'io.github.jhipster.config.JHipsterProperties': Could not bind properties to 'JHipsterProperties' : prefix=jhipster, ignoreInvalidFields=false, ignoreUnknownFields=false; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'jhipster' to io.github.jhipster.config.JHipsterProperties
2019-06-25 12:22:55.188 ERROR 13260 --- [ restartedMain] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target [Bindable#7585af55 type = io.github.jhipster.config.JHipsterProperties, value = 'provided', annotations = array<Annotation>[#org.springframework.boot.context.properties.ConfigurationProperties(ignoreInvalidFields=false, ignoreUnknownFields=false, value=jhipster, prefix=jhipster)]] failed:
Property: jhipster.metrics.prometheus.enabled
Value: true
Origin: class path resource [config/application-dev.yml]:128:22
Reason: The elements [jhipster.metrics.prometheus.enabled] were left unbound.
Action:
Update your application's configuration
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.679 s
[INFO] Finished at: 2019-06-25T12:22:55+02:00
[INFO] ------------------------------------------------------------------------

I changed my JHipster project from a microservice application to a microservice gateway, and that solved the issue.
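Note that the binding fails because JHipsterProperties is bound with ignoreUnknownFields=false (visible in the log above), i.e. the jhipster prefix of the original application type simply has no metrics.prometheus.enabled property to bind. For reference, here is how the monitoring docs nest the key (a sketch; the surrounding keys of the generated application-dev.yml are omitted):

jhipster:
    metrics:
        prometheus:
            enabled: true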

Related

How to consume 2 Azure Event Hubs in Spring Cloud Stream

I want to consume messages from the following two connection strings:
Endpoint=sb://region1.servicebus.windows.net/;SharedAccessKeyName=abc;SharedAccessKey=123;EntityPath=my-request
Endpoint=sb://region2.servicebus.windows.net/;SharedAccessKeyName=def;SharedAccessKey=456;EntityPath=my-request
It is very simple with the Java API:
EventHubConsumerAsyncClient client = new EventHubClientBuilder()
    .connectionString("Endpoint=sb://region1.servicebus.windows.net/;SharedAccessKeyName=abc;SharedAccessKey=123;EntityPath=my-request")
    .buildAsyncConsumerClient();
However, how do I make this work in the YAML file using Spring Cloud Stream (equivalent to the Java code above)? I tried all the tutorials I found online and none of them works.
spring:
  cloud:
    stream:
      function:
        definition: consumeRegion1;consumeRegion2
      bindings:
        consumeRegion1-in-0:
          destination: my-request
          binder: eventhub1
        consumeRegion2-in-0:
          destination: my-request
          binder: eventhub2
      binders:
        eventhub1:
          type: eventhub
          default-candidate: false
          environment:
            spring:
              cloud:
                azure:
                  eventhub:
                    connection-string: Endpoint=sb://region1.servicebus.windows.net/;SharedAccessKeyName=abc;SharedAccessKey=123;EntityPath=my-request
        eventhub2:
          type: eventhub
          default-candidate: false
          environment:
            spring:
              cloud:
                azure:
                  eventhub:
                    connection-string: Endpoint=sb://region2.servicebus.windows.net/;SharedAccessKeyName=def;SharedAccessKey=456;EntityPath=my-request
@Bean
public Consumer<Message<String>> consumeRegion1() {
    return message -> {
        System.out.printf(message.getPayload());
    };
}

@Bean
public Consumer<Message<String>> consumeRegion2() {
    return message -> {
        System.out.printf(message.getPayload());
    };
}
<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>azure-spring-cloud-stream-binder-eventhubs</artifactId>
    <version>2.5.0</version>
</dependency>
Error log:
2021-10-14 21:12:26.760 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.integration.config.IntegrationManagementConfiguration' of type [org.springframework.integration.config.IntegrationManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-10-14 21:12:26.882 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationChannelResolver' of type [org.springframework.integration.support.channel.BeanFactoryChannelResolver] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-10-14 21:12:26.884 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationDisposableAutoCreatedBeans' of type [org.springframework.integration.config.annotation.Disposables] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-10-14 21:12:29.587 WARN 1 --- [ main] a.s.c.a.e.AzureEventHubAutoConfiguration : Can't construct the EventHubConnectionStringProvider, namespace: null, connectionString: null
2021-10-14 21:12:29.611 INFO 1 --- [ main] a.s.c.a.e.AzureEventHubAutoConfiguration : No event hub connection string provided.
2021-10-14 21:12:30.290 INFO 1 --- [ main] c.a.s.i.eventhub.impl.EventHubTemplate : Started EventHubTemplate with properties: {checkpointConfig=CheckpointConfig{checkpointMode=RECORD, checkpointCount=0, checkpointInterval=null}, startPosition=LATEST}
2021-10-14 21:12:32.934 INFO 1 --- [ main] c.f.c.c.BeanFactoryAwareFunctionRegistry : Can't determine default function definition. Please use 'spring.cloud.function.definition' property to explicitly define it.
As the log says, use the spring.cloud.function.definition property.
Refer to the docs.
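A minimal sketch of that change, keeping the bindings and binders from the question (note the definition moves from spring.cloud.stream.function to spring.cloud.function):

spring:
  cloud:
    function:
      definition: consumeRegion1;consumeRegion2
    stream:
      bindings:
        consumeRegion1-in-0:
          destination: my-request
          binder: eventhub1
        consumeRegion2-in-0:
          destination: my-request
          binder: eventhub2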

Packaging as ERROR jar, method pageableParameterBuilderPlugin

2020-12-25 13:51:37.470 WARN 6770 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'pageableParameterBuilderPlugin' defined in class path resource [io/github/jhipster/config/apidoc/SwaggerPluginsAutoConfiguration$SpringPagePluginConfiguration.class]: Unsatisfied dependency expressed through method 'pageableParameterBuilderPlugin' parameter 0; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'springfox.documentation.schema.TypeNameExtractor' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
2020-12-25 13:51:38.213 ERROR 6770 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
APPLICATION FAILED TO START
Description:
Parameter 0 of method pageableParameterBuilderPlugin in io.github.jhipster.config.apidoc.SwaggerPluginsAutoConfiguration$SpringPagePluginConfiguration required a bean of type 'springfox.documentation.schema.TypeNameExtractor' that could not be found.
Action:
Consider defining a bean of type 'springfox.documentation.schema.TypeNameExtractor' in your configuration.
I did everything according to the instructions, but the jar will not build. I tried inserting different dependencies (https://springfox.github.io/springfox/docs/current/), but nothing works for me.
https://www.jhipster.tech/production/#build
In webpack.prod.js or webpack.common.js:
new HtmlWebpackPlugin({
    ...
    base: '/jhipster/'
})
Update src/main/webapp/swagger-ui/index.html:
var urls = [];
axios.get("/swagger-resources").then(function (response) {
    response.data.forEach(function (resource) {
        urls.push({
            "name": resource.name,
            "url": "/jhipster" + resource.location
        });
    });
});

Build Failure in StormCrawler 1.16

I am using StormCrawler 1.16, Apache Storm 1.2.3, Maven 3.6.3 and JDK 1.8.
I created the project using the archetype command below:
mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-elasticsearch-archetype -DarchetypeVersion=LATEST
When I run the mvn clean package command, I get this error:
/crawler$ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------------< com.storm:crawler >--------------------------
[INFO] Building crawler 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ crawler ---
[INFO] Deleting /home/ubuntu/crawler/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ crawler ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 4 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ crawler ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/ubuntu/crawler/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/ubuntu/crawler/src/main/java/com/cnf/245/ESCrawlTopology.java:[19,16] ';' expected
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.407 s
[INFO] Finished at: 2020-06-29T20:40:46Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project crawler: Compilation failure
[ERROR] /home/ubuntu/crawler/src/main/java/com/cnf/245/ESCrawlTopology.java:[19,16] ';' expected
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
I haven't edited the pom.xml file.
Here is the content of the ESCrawlTopology.java file:
package com.cnf.245;

import org.apache.storm.metric.LoggingMetricsConsumer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

import com.digitalpebble.stormcrawler.ConfigurableTopology;
import com.digitalpebble.stormcrawler.Constants;
import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt;
import com.digitalpebble.stormcrawler.bolt.URLFilterBolt;
import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
import com.digitalpebble.stormcrawler.elasticsearch.bolt.DeletionBolt;
import com.digitalpebble.stormcrawler.elasticsearch.bolt.IndexerBolt;
import com.digitalpebble.stormcrawler.elasticsearch.metrics.MetricsConsumer;
import com.digitalpebble.stormcrawler.elasticsearch.metrics.StatusMetricsBolt;
import com.digitalpebble.stormcrawler.elasticsearch.persistence.AggregationSpout;
import com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt;
import com.digitalpebble.stormcrawler.spout.FileSpout;
import com.digitalpebble.stormcrawler.util.ConfUtils;
import com.digitalpebble.stormcrawler.util.URLStreamGrouping;

/**
 * Dummy topology to play with the spouts and bolts on ElasticSearch
 */
public class ESCrawlTopology extends ConfigurableTopology {

    public static void main(String[] args) throws Exception {
        ConfigurableTopology.start(new ESCrawlTopology(), args);
    }

    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();

        int numWorkers = ConfUtils.getInt(getConf(), "topology.workers", 1);

        if (args.length == 0) {
            System.err.println("ESCrawlTopology seed_dir file_filter");
            return -1;
        }

        // set to the real number of shards ONLY if es.status.routing is set to
        // true in the configuration
        int numShards = 1;

        builder.setSpout("filespout", new FileSpout(args[0], args[1], true));

        Fields key = new Fields("url");

        builder.setBolt("filter", new URLFilterBolt())
                .fieldsGrouping("filespout", Constants.StatusStreamName, key);

        builder.setSpout("spout", new AggregationSpout(), numShards);

        builder.setBolt("status_metrics", new StatusMetricsBolt())
                .shuffleGrouping("spout");

        builder.setBolt("partitioner", new URLPartitionerBolt(), numWorkers)
                .shuffleGrouping("spout");

        builder.setBolt("fetch", new FetcherBolt(), numWorkers)
                .fieldsGrouping("partitioner", new Fields("key"));

        builder.setBolt("sitemap", new SiteMapParserBolt(), numWorkers)
                .localOrShuffleGrouping("fetch");

        builder.setBolt("parse", new JSoupParserBolt(), numWorkers)
                .localOrShuffleGrouping("sitemap");

        builder.setBolt("indexer", new IndexerBolt(), numWorkers)
                .localOrShuffleGrouping("parse");

        builder.setBolt("status", new StatusUpdaterBolt(), numWorkers)
                .fieldsGrouping("fetch", Constants.StatusStreamName, key)
                .fieldsGrouping("sitemap", Constants.StatusStreamName, key)
                .fieldsGrouping("parse", Constants.StatusStreamName, key)
                .fieldsGrouping("indexer", Constants.StatusStreamName, key)
                .customGrouping("filter", Constants.StatusStreamName,
                        new URLStreamGrouping());

        builder.setBolt("deleter", new DeletionBolt(), numWorkers)
                .localOrShuffleGrouping("status", Constants.DELETION_STREAM_NAME);

        conf.registerMetricsConsumer(MetricsConsumer.class);
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class);

        return submit("crawl", conf, builder);
    }
}
I put com.cnf.245 in the groupId and crawler in the artifactId.
Can someone please explain what causes this error?
Can you please paste the content of ESCrawlTopology.java? Did you set com.cnf.245 as the package name?
The template class gets rewritten during the execution of the archetype with the package name substituted; it could be that the value you set broke the template.
EDIT: you can't use numbers in package names in Java. See Using numbers as package names in java.
Use a different package name and groupId.
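A minimal illustration (the replacement segment c245 is hypothetical; any segment that is a valid Java identifier works):

// Invalid: '245' starts with a digit, so it is not a valid Java identifier
// package com.cnf.245;

// Valid: every package segment begins with a letter (or _ or $)
package com.cnf.c245;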

Prisma: getting "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found" error

I'm getting errors related to name 'default' and stage 'default' when initializing a new Prisma project.
Steps to reproduce:
Follow all the steps from the official guide strictly
Get the com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found error when running prisma deploy
Get this error when performing a simple query from http://localhost:4466/graphql:
Query:
query {
  user {
    id
    name
  }
}
Response:
{
  "errors": [
    {
      "message": "Project not found: 'graphql_default'",
      "code": 3016,
      "requestId": "local:cjzs556h5000f0754vc6k36qd"
    }
  ]
}
Versions:
Connector: MongoDB
Prisma Server: 1.34.6
prisma CLI: prisma/1.34.6 (darwin-x64) node-v10.16.3
OS: OS X Mojave - 10.14.6
Logs from Docker:
$ docker logs hello-world_prisma_1
No log level set, defaulting to INFO.
[INFO] Cluster created with settings {hosts=[mongo:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
[INFO] Exception in monitor thread while connecting to server mongo:27017
Exception opening socket
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.AsynchronousSocketChannelStream$OpenCompletionHandler.failed(AsynchronousSocketChannelStream.java:272)
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:128)
at sun.nio.ch.Invoker$2.run(Invoker.java:218)
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)
... 1 more
[INFO] Initializing workers...
[INFO] Obtaining exclusive agent lock...
[INFO] Obtaining exclusive agent lock... Successful.
[INFO] Successfully started 1 workers.
[INFO] No server chosen by com.mongodb.async.client.ClientSessionHelper$1#70a6c292 from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=mongo:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]}. Waiting for 30000 ms before timing out
[INFO] Opened connection [connectionId{localValue:2, serverValue:1}] to mongo:27017
[INFO] Monitor thread successfully connected to server with description ServerDescription{address=mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 13]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=16638401}
Server running on :4466
[INFO] Opened connection [connectionId{localValue:3, serverValue:2}] to mongo:27017
[INFO] Deployment worker initialization complete.
[Warning] Management authentication is disabled. Enable it in your Prisma config to secure your server.
{"key":"error/handled","requestId":"local:cjzs54qg500020754mbbzqni9","payload":{"exception":"com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found","query":"\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ","variables":"{\"name\":\"default\",\"stage\":\"default\"}","code":"4000","stack_trace":"com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:288)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)","message":"No service with name 'default' and stage 'default' found"}}
[Debug] Initializing deployment worker for default_default
[Debug] Scheduling deployment for project default_default
[INFO] Opened connection [connectionId{localValue:4, serverValue:3}] to mongo:27017
[Debug] Applied migration for project default_default
Formatted [Warning]:
{
  "key": "error/handled",
  "requestId": "local:cjzs54qg500020754mbbzqni9",
  "payload": {
    "exception": "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found",
    "query": "\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ",
    "variables": "{\"name\":\"default\",\"stage\":\"default\"}",
    "code": "4000",
    "stack_trace": "com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:288)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)",
    "message": "No service with name 'default' and stage 'default' found"
  }
}
Formatted "query":
query($name: String! $stage: String!) {
  project(name: $name stage: $stage) {
    name
    stage
  }
}
Formatted "variables":
{ "name":"default", "stage":"default" }
Formatted stack trace:
com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)
scala.Option.getOrElse(Option.scala:121)
com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)
scala.util.Success.$anonfun$map$1(Try.scala:251)
scala.util.Success.map(Try.scala:209)
scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
P.S.: It was actually running flawlessly a few days ago, but today I can't manage to make it work again!

javax.faces.context.FacesContextFactory Exception while embedding Undertow with JSF application

I am trying to build a simple JSF web application using an embedded Undertow server.
Gradle dependencies for the project:
dependencies {
    compile group: 'io.undertow', name: 'undertow-core', version: '1.4.0.CR3'
    compile group: 'io.undertow', name: 'undertow-servlet', version: '1.4.0.CR3'
    compile group: 'javax', name: 'javaee-api', version: '7.0'
    compile group: 'org.glassfish', name: 'javax.faces', version: '2.2.11'
}
Sample Undertow Server Code:
public class HelloWorldServer {

    public static void main(final String[] args) throws ServletException {
        DeploymentInfo servletContainer = Servlets.deployment()
                .setClassLoader(HelloWorldServer.class.getClassLoader())
                .setDeploymentName("helloWorld.war")
                .setContextPath("")
                .addServlet(Servlets.servlet("javax.faces.webapp.FacesServlet", FacesServlet.class)
                        .addMappings("/faces/*", "/javax.faces.resource/*")
                        .setLoadOnStartup(1));
        DeploymentManager manager = Servlets.defaultContainer().addDeployment(servletContainer);
        manager.deploy();
        HttpHandler servletHandler = manager.start();
        PathHandler path = Handlers.path(Handlers.redirect(""))
                .addPrefixPath("/", servletHandler);
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(path)
                .build();
        server.start();
    }
}
When I start the server, the following error comes up:
Mar 07, 2017 6:04:49 PM javax.faces.FactoryFinder$FactoryManager copyInjectionProviderFromFacesContext
SEVERE: Unable to obtain InjectionProvider from init time FacesContext. Does this container implement the Mojarra Injection SPI?
Mar 07, 2017 6:04:49 PM javax.faces.FactoryFinder$FactoryManager getFactory
SEVERE: Application was not properly initialized at startup, could not find Factory: javax.faces.context.FacesContextFactory. Attempting to find backup.
Exception in thread "main" java.lang.IllegalStateException: Could not find backup for factory javax.faces.context.FacesContextFactory.
at javax.faces.FactoryFinder$FactoryManager.getFactory(FactoryFinder.java:1135)
at javax.faces.FactoryFinder.getFactory(FactoryFinder.java:379)
at javax.faces.webapp.FacesServlet.init(FacesServlet.java:350)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:239)
at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:133)
at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:541)
at HelloWorldServer.main(HelloWorldServer.java:24)
Finally I found a solution.
For the JSF bootstrap process we must add two additional init parameters to the Undertow DeploymentInfo class:
com.sun.faces.forceLoadConfiguration = TRUE
com.sun.faces.expressionFactory = com.sun.el.ExpressionFactoryImpl
We must also add two extra dependencies besides JSF:
el-api
el-impl
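Applied to the deployment above, that looks roughly like this (a sketch; DeploymentInfo.addInitParameter sets servlet context init parameters, and the exact EL coordinates below are an assumption: any artifact providing com.sun.el.ExpressionFactoryImpl should do):

DeploymentInfo servletContainer = Servlets.deployment()
        .setClassLoader(HelloWorldServer.class.getClassLoader())
        .setDeploymentName("helloWorld.war")
        .setContextPath("")
        // force Mojarra to load its configuration even though no ServletContainerInitializer ran
        .addInitParameter("com.sun.faces.forceLoadConfiguration", "true")
        // point Mojarra at a concrete ExpressionFactory implementation
        .addInitParameter("com.sun.faces.expressionFactory", "com.sun.el.ExpressionFactoryImpl")
        .addServlet(Servlets.servlet("javax.faces.webapp.FacesServlet", FacesServlet.class)
                .addMappings("/faces/*", "/javax.faces.resource/*")
                .setLoadOnStartup(1));

And the two extra Gradle dependencies (versions are hypothetical):

compile group: 'javax.el', name: 'el-api', version: '2.2'
compile group: 'org.glassfish.web', name: 'el-impl', version: '2.2'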
