I'm using Solr 5.2.1 with Jetty. In my logs (solr.log) there's an error with the title
"org.apache.solr.core.CoreContainer; Error creating core [dosweb]: Could not load conf for core project : Error loading solr config from solrconfig.xml",
and I understand it's caused by a misconfiguration for this Solr version. It is showing me this exception: Caused by: java.lang.IllegalArgumentException: Illegal parameter 'termIndexInterval'
What should I do to fix it?
Thanks a lot.
Not enough data.
It seems like your core is misconfigured for 5.2.1.
I'd suggest that you first try a reference configuration core, or an empty reference core, make sure that works, and then merge the config from the dosweb core into the newly created core in steps.
This way you will find the part of the configuration that causes the issue.
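For reference, on 5.2.1 such a clean reference core with the stock configuration can be created straight from the shell script that ships with the distribution (the core name here is arbitrary):

    bin/solr create -c reference_core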
EDIT -
After you added information (still not enough; a snippet of the offending configuration would have been good), I've been able to track down the bug that corresponds to the issue you're getting.
https://issues.apache.org/jira/browse/SOLR-6560
Essentially, the termIndexInterval setting is now not only deprecated, but can't be set in a standard way at all. From what I understand from the bug, it's also not needed anymore.
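A hedged sketch of the fix, assuming a typical solrconfig.xml layout (the element name comes straight from the error message; 128 is just the old default value): locate the offending element, usually under indexConfig, or under mainIndex/indexDefaults in configs migrated from older versions, and delete it outright:

    <indexConfig>
      <!-- This element triggers "Illegal parameter 'termIndexInterval'" on 5.x
           (SOLR-6560); remove it rather than changing its value. -->
      <termIndexInterval>128</termIndexInterval>
    </indexConfig>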
I'm trying to initialize a local Hybris 2205.3 installation and I'm getting the following error:
ERROR [hybrisHTTP27] [HacInitUpdateFacade] Failed to initialize
java.lang.IllegalArgumentException: Property 'http://javax.xml.XMLConstants/property/accessExternalSchema' is not recognized.
I'm using Oracle JDK 17.0.4.1; initialization was triggered from the HAC.
I've also tried adding the following property to tomcat.generaloptions, which didn't help:
"-Djavax.xml.accessExternalSchema=all"
Any pointers on how to fix this? Or do you need more information?
It could be that one of the custom jars is an older version and is conflicting with the OOB one. In one of our projects we faced a similar issue and found that the xerces.jar version in the custom code was a lower version.
Both jars, i.e. the OOB one and the custom one, contained the XMLConstants class, but the lower-version jar used in the custom code didn't have accessExternalSchema as a class variable, and it was being picked up by the system on startup because of the conflict.
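If it helps others debugging the same kind of conflict, here is a minimal hedged diagnostic (run it inside the affected runtime, for example from the HAC Groovy console or a throwaway test) that prints where the class from the error message is actually loaded from:

    import javax.xml.XMLConstants;

    public class WhichJar {
        public static void main(String[] args) {
            // A null code source usually means the class came from the JDK itself;
            // a path into your project means a bundled jar is shadowing it.
            java.security.CodeSource src =
                    XMLConstants.class.getProtectionDomain().getCodeSource();
            System.out.println(src == null
                    ? "loaded from the JDK runtime"
                    : "loaded from " + src.getLocation());
        }
    }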
Oracle JDK is not supported anymore; you need to try with SapMachine 17.0.
The complete system requirements by version are here.
We resolved the issue while staying on Oracle JDK 17. We faced the same issue while upgrading from 2105 to 2211. It is caused by a jar dependency conflict. We tried the solution given above, but it didn't work for us. We researched further and found the issue was a conflicting dependency on xerces. We also found there is a xerces-2.12-orbeon jar in the OOTB code. We did two things to resolve the issue:
1. We updated the classpath in our custom code to point at xerces-2.12-orbeon (the OOTB jar).
2. We had to modify some of our code base, as we were using xerces for Base64 encryption and decryption for SSO; a sketch of that change follows below.
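If xerces was only being used for Base64, a hedged alternative (assuming plain Base64 data, as is typical for SSO tokens) is the JDK's own java.util.Base64, which removes the xerces dependency from the code entirely; the input string below is just a placeholder:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class Base64Migration {
        public static void main(String[] args) {
            String original = "saml-assertion-or-token"; // placeholder input

            // Replaces org.apache.xerces.impl.dv.util.Base64.encode(byte[])
            String encoded = Base64.getEncoder()
                    .encodeToString(original.getBytes(StandardCharsets.UTF_8));

            // Replaces org.apache.xerces.impl.dv.util.Base64.decode(String)
            byte[] decoded = Base64.getDecoder().decode(encoded);

            System.out.println(encoded);
            System.out.println(new String(decoded, StandardCharsets.UTF_8));
        }
    }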
Please let me know if you have any questions; I will try my best to respond in time.
Regards,
Abhijit Das
Please find the link here:
https://answers.sap.com/questions/13781195/hybris-2211-upgradation-error.html?childToView=13818282
This is a very strange error that took a while to figure out. We have multiple Jenkins Linux slaves that were building a library. When the library was tested from one slave, all was well. When it was tested from the other, it hit a runtime error where it tried to call a method signature that did not exist. After lots of testing, I was able to determine that the order in which the class files were added to the jar determined whether the tests would work or not. Does anyone know where to begin trying to remedy this? Is this a bug in Groovy classloading? Java classloading? Any ideas are appreciated.
I took the approach that the error state was in fact the correct state and started looking at the application config in detail (I'm on the build team, so I was not familiar with this app), and found a bad Spring configuration object. The issue dealt with the order in which classes were processed by the annotation scanner. If one class was processed first, it would give a bad config, which gave a bad app state, which caused the error. The other build servers were building the jar with the bad class later in the jar, which caused a different state to be loaded that essentially ignored the bad file. This is why the order in which classes were added to the jar mattered. I was able to correct the bad config and solve the error permanently.
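For anyone trying to reproduce this kind of diagnosis, a small hedged sketch (the jar path is a placeholder) that prints a jar's entries in stored order, so the output can be diffed between a "good" and a "bad" build:

    import java.util.jar.JarFile;

    public class JarEntryOrder {
        public static void main(String[] args) throws Exception {
            // Entries are listed in the order they were added to the jar,
            // which is the order most scanners encounter them.
            try (JarFile jar = new JarFile(args.length > 0 ? args[0] : "app.jar")) {
                jar.stream().forEach(entry -> System.out.println(entry.getName()));
            }
        }
    }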
I am trying to query Cassandra using Apache Drill. The only connector I could find is here:
http://www.confusedcoders.com/bigdata/apache-drill/sql-on-cassandra-querying-cassandra-via-apache-drill
However, this does not build; it fails with an artifact-not-found error. I also had another developer who is more versed in these tools take a stab at it, but he also had no luck.
I tried contacting the developer of the plugin I referenced, but the blog does not work and won't let me post comments. Has anyone gotten this plugin to work (if so, how?), or is there another plugin or method I can use to connect Apache Drill to Cassandra? If anyone could show me how to connect and execute a simple SQL query, that would be much appreciated.
I looked at the latest Cassandra storage plugin patch and the latest Apache Drill source. The Drill code has changed, and the patch can no longer be applied.
I then manually took the patch apart (it is mostly diff output). Most of the patch was new classes, which I could easily add to the latest Drill source tree. Most of the other updates were easy to insert into the current source. There were two specific classes that required some minor code modifications/extensions. I rebuilt the distribution from the modified source and installed the Drill servers on a 3-node cluster. The Cassandra schema failed to initialize properly, throwing a null pointer exception in one of the new classes. This leads me to believe that the (latest) modified storage plugin is incompatible with the latest version of Cassandra. Since the author of the original storage plugin is unreachable and no one else is stepping up to support the code, this is a dead horse. Beat it if you must.
I was the author of the patch, written a year back. I could not get it merged into Drill then, and later got occupied with other stuff :(
With so many changes to Drill internals, I am not sure what amount of welding would be needed at this point to get it working. Please use the code just as a reference for writing a Drill storage plugin.
I have added this banner on top of the blog post to save fellow developers' hours.
I don't know if anyone is still interested in this topic, but I've been experimenting with this plugin and got it to work with Drill 1.18-SNAPSHOT. Here is a link to my branch with this code: 1. My plan is to submit this as a PR for Drill, but it still needs some work. This code will successfully query Cassandra 3.11.5 (the latest stable version).
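For the "simple SQL query" part of the original question: once a working Cassandra storage plugin is registered, querying goes through Drill's normal interfaces. A hedged sketch using Drill's JDBC driver (requires the drill-jdbc jar on the classpath; the plugin name cassandra, keyspace test_ks, and table users are all placeholders for your own setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DrillCassandraQuery {
        public static void main(String[] args) throws Exception {
            // Connects directly to a drillbit; adjust host/port for your cluster.
            String url = "jdbc:drill:drillbit=localhost:31010";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT * FROM cassandra.test_ks.users LIMIT 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }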
I've been playing with the jHipster Yeoman generator for the past week, and I'm trying to get my application working with Atomikos for JTA/XA transactions. I'm running into a number of problems, which is to be expected since I'm new to Spring Boot and to a number of the other components in the jHipster stack.
I have been using the example found here as my starting point for configuring Atomikos. I've implemented everything described there, replacing HikariCP entirely.
At the moment I have eliminated Metrics and Liquibase from my configuration, as they were giving me problems and I wanted to get the basics working before adding them back in. However, I'm now hitting a Hibernate issue.
Hibernate is complaining that the second-level cache is used but hibernate.cache.region.factory_class is not given. The factory_class setting is specified in the configuration, and I'm not able to figure out what I'm missing.
Has anyone managed to get Atomikos (or maybe Bitronix) working with this stack?
I've managed to get this working. For some reason I had to explicitly set hibernate.cache.use_second_level_cache to false. I'm not sure why this is required, given that I am not setting any second-level cache flags anywhere that I can see.
Nevertheless, it's working now.
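For anyone who lands here with the same symptom, a sketch of where that flag can live in a jHipster-style app, assuming Hibernate settings are passed through Spring Boot's spring.jpa.properties pass-through (verify the prefix against your own configuration class):

    # application.yml
    spring:
      jpa:
        properties:
          hibernate.cache.use_second_level_cache: false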
My problem is this exception:
Caused by: <openjpa-2.1.1-r422266:1148538 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, but the following listed types were not enhanced at build time or at class load time with a javaagent: "
I'm trying to get a very simple Java application with JSF and JPA running, but there seems to be a problem with the enhancement of my entities. As far as I know, OpenJPA tries to enhance my entities, which are listed in the persistence.xml, at runtime, but there is no agent available to do this. The keyword for this is "enhancing at runtime", right?
I thought the enhancement would be done automatically by the application server at deployment? How can I configure this?
My exact environment:
Glassfish 3.1.1
Derby, which is integrated in Glassfish
OpenJPA 2.1.1
Mojarra JSF 2.1.3
Update #1:
After some comments I've added the following lines to my persistence.xml:
<property name="openjpa.DynamicEnhancementAgent" value="false"/>
<property name="openjpa.RuntimeUnenhancedClasses" value="supported" />
It works now, but OpenJPA throws this warning:
SEVERE: 52 myApp WARN [http-thread-pool-8080(5)] openjpa.Enhance - Creating subclass for "[class myApp.model.entities.AbstractEntity, class myApp.model.entities.Post]".
This means that your application will be less efficient and will consume more memory than it would if you ran the OpenJPA enhancer. Additionally, lazy loading will not be available for one-to-one and many-to-one persistent attributes in types using field access; they will be loaded eagerly instead.
I think this can't be the solution.
Update #2:
Following fvu's answer, I've tried defining the -javaagent JVM parameter in domain.xml and via the web admin console. After a restart, the problem appeared again.
Update #3:
Following update #2, I've played around a bit. An error should be thrown when the -javaagent parameter is used but the file is missing, right? Yes, there it is:
Waiting for domain1 to start .Command start-domain failed.
Error starting domain domain1.
The server exited prematurely with exit code 1.
Before it died, it produced the following output:
Error occurred during initialization of VM
agent library failed to init: instrument
Error opening zip file or JAR manifest missing : /tmp/openjpa.jar
If I copy the agent to this location, this error doesn't appear, but OpenJPA still can't enhance my entities!
If you're still having issues... I'd highly recommend biting the bullet and setting up build-time enhancement. You'll be much happier in the long run if you get that going.
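A hedged sketch of what build-time enhancement looks like with Maven; the plugin coordinates and version are from memory for the OpenJPA 2.1.x era and may need adjusting, and the includes pattern just mirrors the package from the warning above:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>openjpa-maven-plugin</artifactId>
      <version>1.2</version>
      <executions>
        <execution>
          <id>enhancer</id>
          <phase>process-classes</phase>
          <goals>
            <goal>enhance</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- Enhance the compiled entity classes right after compilation. -->
        <includes>**/model/entities/*.class</includes>
        <addDefaultConstructor>true</addDefaultConstructor>
        <enforcePropertyRestrictions>true</enforcePropertyRestrictions>
      </configuration>
    </plugin>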
A couple of ideas:
Add the Java agent for enhancement to GF's JVM options; see this link for an example of how to install a javaagent, and OpenJPA's doc 5.2.3, "Enhancing at Runtime". That emulates enhancer activation in desktop apps as closely as possible, IMO. A sketch of the option follows after these ideas.
However, when I read chapter 5.2.4 of the OpenJPA docs, it seems OpenJPA might be capable of picking up the correct enhancer automatically. Try copying openjpa.jar to the domain's library directory, and check what happens after a server restart.
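A hedged sketch for the first idea (the jar path is an example; point it at wherever your OpenJPA jar actually lives, since the /tmp/openjpa.jar error above shows what happens when the file is missing):

    <!-- inside the java-config element of domain.xml -->
    <jvm-options>-javaagent:/opt/glassfish3/glassfish/lib/openjpa-2.1.1.jar</jvm-options>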