I'm a newbie to JHipster. I created a gateway project with MongoDB, and started the JHipster Registry and MongoDB in Docker. There is no other microservice. When I debugged the gateway project, I found the error below:
Caused by: java.lang.IllegalArgumentException: Database name must not be empty
at org.springframework.util.Assert.hasText(Assert.java:168)
at org.springframework.data.mongodb.core.SimpleMongoDbFactory.<init>(SimpleMongoDbFactory.java:142)
at org.springframework.data.mongodb.core.SimpleMongoDbFactory.<init>(SimpleMongoDbFactory.java:93)
at org.springframework.data.mongodb.config.AbstractMongoConfiguration.mongoDbFactory(AbstractMongoConfiguration.java:114)
at com.xx.cloud.demo.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$65a29278.CGLIB$mongoDbFactory$6(<generated>)
at com.xx.cloud.demo.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$65a29278$$FastClassBySpringCGLIB$$7563f63.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:356)
at com.xx.cloud.demo.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$65a29278.mongoDbFactory(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 182 common frames omitted
I put a breakpoint inside the getDatabaseName() method of the DatabaseConfiguration.java class and debugged getwayApp.java. I found that all the properties of mongoProperties are null except the URI, whose value is 'mongodb://localhost/test'. I configured the data block in application-dev.yml like this:
data:
    cassandra:
        contactPoints: localhost
        protocolVersion: V4
        compression: LZ4
        keyspaceName: gateway
        repositories:
            enabled: false
        mongodb:
            host: localhost
            port: 27017
            uri: mongodb://localhost:27017
            database: jhipsterMongodbSampleApplication
but the result was the same as when I didn't configure it at all.
How and where should I configure MongoDB in the gateway? Thanks in advance.
You put the mongodb block under cassandra; reduce its indentation so that it sits directly under data (i.e. spring.data), as a sibling of cassandra.
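A minimal sketch of the corrected block, reusing the values from the question (only the indentation of mongodb changes):

data:
    cassandra:
        contactPoints: localhost
        protocolVersion: V4
        compression: LZ4
        keyspaceName: gateway
        repositories:
            enabled: false
    mongodb:
        host: localhost
        port: 27017
        uri: mongodb://localhost:27017
        database: jhipsterMongodbSampleApplication

If the gateway does not actually use Cassandra, the cassandra block can most likely be removed altogether.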
Also, don't forget that for a production-like deployment your application properties should be defined in the JHipster Registry's central configuration.
I am trying to generate a report using Cucumber-JVM 6.11.0, and it works fine on my machine when I put these properties in junit-platform.properties:
cucumber.publish.enabled=true
cucumber.plugin=pretty, json:build/reports/cucumber/report.json
cucumber.junit-platform.naming-strategy=long
However, when I run it on Jenkins, I get a ConnectException during publication:
java.lang.RuntimeException: java.net.ConnectException: Connection timed out (Connection timed out)
at io.cucumber.core.plugin.MessageFormatter.writeMessage(MessageFormatter.java:36)
at io.cucumber.core.eventbus.AbstractEventPublisher.send(AbstractEventPublisher.java:51)
at io.cucumber.core.eventbus.AbstractEventBus.send(AbstractEventBus.java:12)
at io.cucumber.core.runtime.SynchronizedEventBus.send(SynchronizedEventBus.java:47)
at io.cucumber.core.runtime.CucumberExecutionContext.emitTestRunFinished(CucumberExecutionContext.java:102)
at io.cucumber.core.runtime.CucumberExecutionContext.finishTestRun(CucumberExecutionContext.java:74)
at io.cucumber.junit.platform.engine.CucumberEngineExecutionContext.finishTestRun(CucumberEngineExecutionContext.java:98)
at io.cucumber.junit.platform.engine.CucumberEngineDescriptor.after(CucumberEngineDescriptor.java:37)
at io.cucumber.junit.platform.engine.CucumberEngineDescriptor.after(CucumberEngineDescriptor.java:10)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
...
Caused by: java.net.ConnectException: Connection timed out (Connection timed out)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at java.base/sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1963)
at java.base/sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1958)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1957)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1525)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:329)
at io.cucumber.core.plugin.UrlOutputStream.getResponseBody(UrlOutputStream.java:111)
at io.cucumber.core.plugin.UrlOutputStream.sendRequest(UrlOutputStream.java:83)
I tried different combinations of properties, and the error starts happening the moment I enable publishing, even with only:
cucumber.publish.enabled=true
I cannot find the default behavior in the documentation. Once publishing is enabled, where does the report get published by default? Does it really try to upload it over HTTP? (I guess the proxy is not configured when running on Jenkins, while it is picked up when running on my machine, hence the different behavior.)
Also, why do I still get this error when I simply try to write the HTML or JSON report to disk?
When you enable report publishing, Cucumber uploads the test results to its cloud reporting service and gives you a unique URL that you (or anyone you share that link with) can use to access your report.
The report self-destructs after 24 hours. You can find more details on the official Cucumber blog.
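If you only want the HTML or JSON report written to disk, without any upload, a junit-platform.properties along these lines should avoid the network call entirely (the output paths are just examples):

cucumber.publish.enabled=false
cucumber.publish.quiet=true
cucumber.plugin=pretty, html:build/reports/cucumber/report.html, json:build/reports/cucumber/report.json
cucumber.junit-platform.naming-strategy=long

The html: and json: plugins only write locally; it is the publish feature that opens an HTTP connection to the reporting service, which is presumably what times out behind the Jenkins proxy.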
I have been trying a Spark 2.4 deployment on Kubernetes and want to establish a secured RPC communication channel between the driver and the executors. I was using the following configuration parameters as part of spark-submit:
spark.authenticate true
spark.authenticate.secret good
spark.network.crypto.enabled true
spark.network.crypto.keyFactoryAlgorithm PBKDF2WithHmacSHA1
spark.network.crypto.saslFallback false
The driver and executors were not able to communicate over the secured channel and threw the following errors:
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:281)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:65)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
... 4 more
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Unknown challenge message.
at org.apache.spark.network.crypto.AuthRpcHandler.receive(AuthRpcHandler.java:109)
at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:181)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:103)
at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
Can someone guide me on this?
Disclaimer: I do not have a very deep understanding of Spark's internals, so be careful when using the workaround described below.
AFAIK, Spark 2.4.0 does not support RPC authentication/encryption for Kubernetes.
There is a ticket that is already fixed and will likely be released in a future Spark version: https://issues.apache.org/jira/browse/SPARK-26239
The problem is that Spark executors open a connection to the driver, and the configuration is only sent over that connection. However, the executor creates that connection using the default config plus any system properties starting with "spark.".
For reference, here is the place where executor opens the connection: https://github.com/apache/spark/blob/5fa4384/core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala#L201
In theory, setting spark.executor.extraJavaOptions=-Dspark.authenticate=true -Dspark.network.crypto.enabled=true ... should help, but the driver checks that no Spark parameters are set in extraJavaOptions.
However, there is a workaround (a little bit hacky): you can set spark.executorEnv.JAVA_TOOL_OPTIONS=-Dspark.authenticate=true -Dspark.network.crypto.enabled=true .... Spark does not check this parameter, but the JVM picks up that environment variable and adds the options to its system properties.
Also, instead of using JAVA_TOOL_OPTIONS to pass the secret, I would recommend using spark.executorEnv._SPARK_AUTH_SECRET=<secret>.
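Putting the pieces together, the spark-submit configuration might look roughly like this (a sketch only; the secret value is a placeholder and the driver-side settings are kept from the question):

spark.authenticate                      true
spark.authenticate.secret               good
spark.network.crypto.enabled            true
spark.executorEnv.JAVA_TOOL_OPTIONS     -Dspark.authenticate=true -Dspark.network.crypto.enabled=true
spark.executorEnv._SPARK_AUTH_SECRET    good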
I am new to Cassandra. I am working on Ubuntu with the latest Cassandra and Java 1.7 on a PC. I installed it on a single PC without any cloud or server network (the replication factor was one). I can use CQL fine, but when I try the same thing from Java, the code fails.
Here is the code:
Cluster cluster;
Session session;
cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect("casslinks");
if (!tableExists(linktable, session)) {
    createCassTable(linktable, session);
}
for (String url : urls) {
    insertCassUrl(url, crawledUrl, session, linktable);
}
Here is the error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:65)
at com.example.GetSoogrData.insertUrls(GetSoogrData.java:596)
at com.example.GetSoogrData.main(GetSoogrData.java:1112)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
I used the debugger and saw that the session line failed. I assume the IP address is wrong, but I am unsure how to test for the correct IP address or how to define the session variable so that it connects correctly.
I have been using MongoDB, and 127.0.0.1 works for that.
Does anyone have any ideas?
You are probably running your code with missing dependencies.
The DataStax Java driver requires a few additional jars; the SLF4J library, for which the exception is thrown, is one of them.
Look here to see what jars you need:
http://www.datastax.com/documentation/developer/java-driver/2.0/java-driver/reference/settingUpJavaProgEnv_r.html
http://www.datastax.com/documentation/developer/java-driver/2.0/common/drivers/introduction/driverDependencies_r.html
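If you build with Maven, for example, declaring the driver together with an SLF4J binding along these lines typically pulls in everything that is needed (the versions are illustrative):

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.0.2</version>
</dependency>
<!-- Any SLF4J binding will do; slf4j-simple is the smallest. -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.7</version>
</dependency>

If you manage jars by hand instead, make sure slf4j-api (plus a binding such as slf4j-simple or logback-classic) is on the classpath next to the driver jar.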
The setup:
Web server:
- Apache Tomcat
- RESTful web services
- DataStax Java driver 2.0
Database:
- 2-node Cassandra 2.0.7.31 cluster
- replicas = 1
Problem
After sending a set of 1500 requests more than three times, I got the following error in the Tomcat log:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
Observation
Once Tomcat reaches this state, all further requests meet the same fate: the driver is no longer able to send my insert requests to Cassandra.
After running netstat, I found that all the TCP connections between the web server and Cassandra were in the TIME_WAIT state.
What could be the reason? Why is the DataStax driver not able to return connections to the pool, and why is Cassandra holding on to all the connections from its clients?
Thanks in advance.
The number of connections kept increasing because a new session was being created for each request. After switching to a single shared session, it is working fine.
// Build the Cluster once and reuse a single Session for all requests.
PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions.setCoreConnectionsPerHost(HostDistance.LOCAL,
        poolingOptions.getMaxConnectionsPerHost(HostDistance.LOCAL));

builder = new Cluster.Builder()
        .addContactPoints("192.168.114.42")
        .withPoolingOptions(poolingOptions)
        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(100L));
cluster = builder.build();
session = cluster.connect("demodb");
The driver now maintains 17-26 connections regardless of the number of transactions.
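For reference, a minimal sketch of keeping one shared Cluster/Session for the whole application (the class name and contact point are illustrative; adapt them to your project):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Hypothetical holder: create the Cluster and Session once at startup and
// reuse the same Session from every request handler.
public final class CassandraSessions {

    private static final Cluster CLUSTER = Cluster.builder()
            .addContactPoints("192.168.114.42")
            .build();
    private static final Session SESSION = CLUSTER.connect("demodb");

    private CassandraSessions() {
    }

    public static Session get() {
        return SESSION;
    }
}

Closing the Cluster on application shutdown (for example from a ServletContextListener) releases the pooled connections cleanly.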
I've configured the HTTP Connector in server.xml, adding some SSL settings. I tried to set keyAlias to the name of the alias for a certain certificate (not the private key entry of the keystore). Then, when I start JBoss, I get something like:
2012-04-12 17:01:37,236 ERROR [org.apache.coyote.http11.Http11Protocol] Error initializing endpoint
java.io.IOException: Alias name <somealias> do not indetify a key entry
I'm new to SSL configuration and core web security concepts as well. Thanks for your patience.
Edit: the complete stack trace follows:
at org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:412)
at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:378)
at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:135)
at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:497)
at org.apache.tomcat.util.net.JIoEndpoint.start(JIoEndpoint.java:514)
at org.apache.coyote.http11.Http11Protocol.start(Http11Protocol.java:203)
at org.apache.catalina.connector.Connector.start(Connector.java:1146)
at org.jboss.web.tomcat.service.JBossWeb.startConnectors(JBossWeb.java:601)
at org.jboss.web.tomcat.service.JBossWeb.handleNotification(JBossWeb.java:638)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.jboss.mx.notification.NotificationListenerProxy.invoke(NotificationListenerProxy.java:153)
at $Proxy46.handleNotification(Unknown Source)
at org.jboss.mx.util.JBossNotificationBroadcasterSupport.handleNotification(JBossNotificationBroadcasterSupport.java:127)
at org.jboss.mx.util.JBossNotificationBroadcasterSupport.sendNotification(JBossNotificationBroadcasterSupport.java:108)
at org.jboss.system.server.ServerImpl.sendNotification(ServerImpl.java:916)
at org.jboss.system.server.ServerImpl.doStart(ServerImpl.java:497)
at org.jboss.system.server.ServerImpl.start(ServerImpl.java:362)
at org.jboss.Main.boot(Main.java:200)
at org.jboss.Main$1.run(Main.java:508)
at java.lang.Thread.run(Thread.java:662)
It looks like you have not imported your key properly. I'd recommend you review your steps against these two documents:
http://docs.jboss.org/jbossweb/3.0.x/ssl-howto.html
A shorter version is here:
http://www.agentbob.info/agentbob/79-AB.html
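As a quick sanity check (keystore path, alias and password are placeholders), you can inspect what kind of entry the alias actually points to:

keytool -list -v -keystore /path/to/server.keystore -alias somealias -storepass changeit

The connector needs the alias to be a private key entry; if the output shows "Entry type: trustedCertEntry" rather than "PrivateKeyEntry", the alias only holds a certificate, and importing the key and certificate properly (as described in the links above) should resolve the error.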