Getting NoSuchAlgorithmException while using TLSv1.2 in Java 1.4 - Bouncy Castle

I am trying to hit a web service which supports TLSv1.2. I am using Java 1.4, which does not support TLSv1.2.
Now someone told me that Bouncy Castle (BC) could solve my problem.
But does it somehow work as a drop-in replacement for an SSLEngine?
Is this possible with BC?
What do I have to do to get a working SSLEngine (for use with TLSv1 in a non-blocking I/O scenario) without such low restrictions on the DH prime size?
What I tried:
Security.addProvider(new BouncyCastleProvider());
This alone seems not to solve the problem.
So instead of
SSLContext.getInstance("TLSv1"); // which works, alas, only with small DH keys
I tried calling the following:
SSLContext.getInstance("TLSv1","BC");
SSLContext.getInstance("TLS","BC");
SSLContext.getInstance("TLSv1.2","BC");
SSLContext.getInstance("ssl","BC");
However, all of them throw NoSuchAlgorithmException.

I could solve this by using the bctls lib, but unfortunately it doesn't seem to have a version for Java 1.4.
The only version that I could find on Bouncy Castle's website and in the Maven Repository is bctls-jdk15on-157 (for Java >= 1.5).
Anyway, if an upgrade of your Java version is possible, you just need to add this jar to your project and use the org.bouncycastle.jsse.provider.BouncyCastleJsseProvider class (I've used Java 1.7 for this test):
// add the JSSE provider
Security.addProvider(new BouncyCastleJsseProvider());
// tests
SSLContext.getInstance("TLSv1.1", BouncyCastleJsseProvider.PROVIDER_NAME);
SSLContext.getInstance("TLSv1.2", BouncyCastleJsseProvider.PROVIDER_NAME);
SSLContext.getInstance("TLSv1", BouncyCastleJsseProvider.PROVIDER_NAME);
All tests above run without error.
Checking all the SSL protocols supported:
SSLContext context = SSLContext.getInstance("TLSv1", BouncyCastleJsseProvider.PROVIDER_NAME);
System.out.println(Arrays.toString(context.getSupportedSSLParameters().getProtocols())); // [TLSv1.1, TLSv1, TLSv1.2]
The output is:
[TLSv1.1, TLSv1, TLSv1.2]
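If you then want to actually use that context for an HTTPS call, a minimal sketch could look like the following (the URL is just a placeholder, and the default key/trust material is assumed to be sufficient):

import java.net.URL;
import java.security.Security;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;

public class BcTlsClient {
    public static void main(String[] args) throws Exception {
        // register BC as the JCE provider and BCJSSE as the JSSE provider
        Security.addProvider(new BouncyCastleProvider());
        Security.addProvider(new BouncyCastleJsseProvider());

        // ask BCJSSE for a TLSv1.2 context and initialize it with defaults
        SSLContext context = SSLContext.getInstance("TLSv1.2",
                BouncyCastleJsseProvider.PROVIDER_NAME);
        context.init(null, null, null);

        // use it for a single HTTPS request (the URL is a placeholder)
        HttpsURLConnection connection = (HttpsURLConnection)
                new URL("https://example.com/").openConnection();
        connection.setSSLSocketFactory(context.getSocketFactory());
        System.out.println("HTTP status: " + connection.getResponseCode());
    }
}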

Related

Liferay OSGi bundle deploy with FirebirdSQL jdbc driver error

I am new to Liferay 7.x and am having trouble with, I suspect, OSGi.
I am trying to write a DB Authenticator which just checks for users in a separate DB. The DB is a FirebirdSQL DB.
I set the dependency in build.gradle like this:
compileInclude group: 'org.firebirdsql.jdbc', name: 'jaybird', version: '4.0.9.java11'
The error I get when the bundle tries to deploy is:
2023-02-14 01:52:59.128 ERROR [fileinstall-directory-watcher][DirectoryWatcher:1173] Unable to start bundle: file:/home/me/Documents/IdeaProjects/liferay/labsys-authentication/bundles/osgi/modules/com.myapp.intranet.auth-1.0.0.jar
com.liferay.portal.kernel.log.LogSanitizerException: org.osgi.framework.BundleException: Could not resolve module: com.myapp.intranet.auth [1591]_ Unresolved requirement: Import-Package: com.sun.jna_ [Sanitized]
at org.eclipse.osgi.container.Module.start(Module.java:444) ~[org.eclipse.osgi.jar:?]
at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:428) ~[org.eclipse.osgi.jar:?]
at com.liferay.portal.file.install.internal.DirectoryWatcher._startBundle(DirectoryWatcher.java:1156) [bundleFile:?]
at com.liferay.portal.file.install.internal.DirectoryWatcher._startBundles(DirectoryWatcher.java:1189) [bundleFile:?]
at com.liferay.portal.file.install.internal.DirectoryWatcher._startAllBundles(DirectoryWatcher.java:1130) [bundleFile:?]
at com.liferay.portal.file.install.internal.DirectoryWatcher._process(DirectoryWatcher.java:1041) [bundleFile:?]
at com.liferay.portal.file.install.internal.DirectoryWatcher.run(DirectoryWatcher.java:247) [bundleFile:?]
I have looked at
https://liferay.dev/blogs/-/blogs/osgi-module-dependencies and
https://liferay.dev/blogs/-/blogs/gradle-compile-vs-compileonly-vs-compileinclude
and tried option 1 (adding the DB driver to Tomcat's lib dir), but that still did not seem to work (in that case, the driver can't be found).
I'm just not sure how to include the Firebird JDBC driver in an OSGi bundle... or if I have to add any transitive dependencies (and if so, how do I know what they are and how do I best add them).
Just wondering if anyone has deployed a Firebird JDBC driver in a Liferay service app.
The important part of the error is "Unresolved requirement: Import-Package: com.sun.jna_ [Sanitized]". Jaybird itself doesn't provide OSGi metadata. I have no experience with OSGi, but I guess that, due to the lack of metadata, it scans the class files and notices that Jaybird uses JNA and that there is no dependency providing JNA. In practice this is an optional dependency of Jaybird (you only need it if you use native or embedded connections, which are not the default), but OSGi isn't aware of that and requires that you declare it.
Adding the dependency with compileInclude 'net.java.dev.jna:jna:5.5.0' to your build.gradle should do the trick.
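For reference, the build.gradle dependencies block would then look something like this (a sketch only; versions are the ones mentioned above, and the comments are mine):

dependencies {
    // package Jaybird inside the OSGi bundle
    compileInclude group: 'org.firebirdsql.jdbc', name: 'jaybird', version: '4.0.9.java11'
    // satisfy the Import-Package: com.sun.jna requirement that Jaybird's class files pull in
    compileInclude 'net.java.dev.jna:jna:5.5.0'
}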
(NOTE: This answer is based on my earlier comment and the comment by cfnz)

Not able to connect to Cassandra from my Spring Boot application, throwing exception: could not reach any contact point [closed]

I added a config file to set the contact point programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
        .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
        .withConfigLoader(
            DriverConfigLoader.programmaticBuilder()
                .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                .withString(DefaultDriverOption.AUTH_PROVIDER_USER_NAME, username)
                .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                .build())
        //.addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
        .build();
    return session;
}
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]: Factory method 'cassandraSession' threw exception with message: Since you provided explicit contact points, the local DC must be explicitly set (see basic.load-balancing-policy.local-datacenter in the config, or set it programmatically with SessionBuilder.withLocalDatacenter). Current contact points are: Node(endPoint=/127.0.0.1:9042, hostId=0323221f-9a0f-ec92-ea4a-c1472c2a8b94, hashCode=39075b16)=datacenter1. Current DCs in this cluster are: datacenter1
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:171) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:648) ~[spring-beans-6.0.2.jar:6.0.2]
... 89 common frames omitted
Caused by: java.lang.IllegalStateException: Since you provided explicit contact points, the local DC must be explicitly set (see basic.load-balancing-policy.local-datacenter in the config, or set it programmatically with SessionBuilder.withLocalDatacenter). Current contact points are: Node(endPoint=/127.0.0.1:9042, hostId=0323221f-9a0f-ec92-ea4a-c1472c2a8b94, hashCode=39075b16)=datacenter1. Current DCs in this cluster are: datacenter1
at com.datastax.oss.driver.internal.core.loadbalancing.helper.MandatoryLocalDcHelper.discoverLocalDc(MandatoryLocalDcHelper.java:91) ~[java-driver-core-4.11.4-yb-1-RC1.jar:na]
at com.datastax.oss.driver.internal.core.loadbalancing.DefaultLoadBalancingPolicy.discoverLocalDc(DefaultLoadBalancingPolicy.java:119) ~[java-driver-core-4.11.4-yb-1-RC1.jar:na]
at
This is the application.yml file:
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
But the application is still pointing towards localhost, even though I've explicitly mentioned the contact points and local DC.
The logs of the staging environment are:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Thanks for the question!
I'll try to provide some pointers that might help you identify the problem but it should be noted that you appear to have some non-standard elements to your application. Specifically I note that "java-driver-core-4.11.4-yb-1-RC1.jar" isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into in a moment. I also don't recognize the configuration file you cite above. Could you provide some more detail on how your app is configured? At first blush it looked as though you might be using spring-data-cassandra but there wasn't any mention of it in your stack trace... so perhaps you're using some kind of custom configuration code?
As to your specific question: my guess is that you might have a Java driver configuration file in your staging environment which is providing a default value for "datastax-java-driver.basic.contact-points". The 4.x Java driver is configured via the Lightbend Config library. Most relevant to our case, it searches for a set of configuration files with various default names on the classpath; these files are then merged together to generate the config passed to the driver. So if you have an application.conf in staging which specifies some contact points and is on the classpath, the code you cite above would run fine in your local environment but fail in staging.
To validate this, create an application.conf file in your local environment in src/main/resources (or somewhere else that's explicitly included in the classpath) and give it the following contents:
datastax-java-driver {
  basic {
    contact-points: ["127.0.0.1:9042"]
  }
}
If you then re-run the app in your local environment you should see the error there as well.
Note that the core Java driver JAR already includes a reference.conf file which serves as a default configuration. Here's where the part about a custom JAR figures in; because you're not using a standard DataStax Java driver JAR, I don't know if you're using the standard reference.conf file defined within that JAR. It's possible that the contact points are defined in that file, although if that were the case I'd expect you to already be seeing the error in any environment where you use that JAR.
One final note: the Java driver should be fine with IPv6 addresses. The issue described above isn't related to IPv6; it's entirely a function of how you're using the Java driver's configuration mechanism.
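Also, since the exception message explicitly suggests SessionBuilder.withLocalDatacenter, here is a minimal sketch of setting the contact point and local DC directly on the builder (the host, port, credentials, keyspace and DC name are placeholders, not values from your environment; imports are the same as in your snippet above):

@Bean(destroyMethod = "close")
public CqlSession session() {
    // placeholders: replace host, port, credentials, keyspace and DC with your real values
    return CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("cassandra-host", 9042))
            .withLocalDatacenter("datacenter1")
            .withAuthCredentials("username", "password")
            .withKeyspace("my_keyspace")
            .build();
}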
Hopefully some of the above is helpful!

Alfresco solr/search stops working after installing records management

I am using alfresco 5.2.3 enterprise with solr6 search services.
Everything works fine when I deploy our application custom code inside the alfresco-platform jar and alfresco-share jar.
Now, when I install the Alfresco Records Management AMP files, the search stops working. I am not able to search for even a single document or folder.
RM amp version: alfresco-rm-enterprise-repo-2.7.0.amp and alfresco-rm-enterprise-share-2.7.0.amp
There are three different instances: repo (where alfresco.war sits), share (where share.war and ADF sits) and index server (where indexes are maintained).
I install alfresco-rm-enterprise-repo-2.7.0.amp on the repo, and alfresco-rm-enterprise-share-2.7.0.amp on share, and restart the servers. The RM installation is successful without any errors, but search is not working at all after this.
Is it possible that after the RM installation some indexes are corrupted and we need to reindex? Could that resolve this issue?
NOTE: The versions of alfresco and RM are already in the supported stack as per the alfresco documentation link: https://docs.alfresco.com/5.2/concepts/supported-platforms-ACS.html
Any help would be appreciated.
Finally, the problem is resolved.
The keystore and truststore certificate files were the culprit.
New keystore and truststore files had to be generated, because communication between ACS and the index server was not happening, resulting in a GetModelsDiff 403 error in the logs.
Additionally, we ensured the following settings were put up in ACS and index server files:
ACS alfresco-global.properties:
alfresco.host=alfresco-dev-repo.domain.com
alfresco.port=443
alfresco.protocol=https
share.host=alfresco-dev-repo.domain.com
share.port=443
share.protocol=https
db.ssl_params=&useSSL=true&requireSSL=true&verifyServerCertificate=true&trustCertificateKeyStoreUrl=file:///opt/alfresco-content-services/alf_data/keystore/ssl.truststore&trustCertificateKeyStoreType=JCEKS&trustCertificateKeyStorePassword=kT9X6oe68t
db.url=jdbc:mysql://${db.host}/${db.name}?${db.params}${db.ssl_params}
index.subsystem.name=solr6
dir.keystore=${dir.root}/keystore
solr.host=alfresco-dev-index.domain.com
solr.port.ssl=8983
solr.port=80
solr.secureComms=https
#ssl encryption
encryption.ssl.keystore.location=${dir.keystore}/ssl.keystore
encryption.ssl.keystore.type=JCEKS
encryption.ssl.keystore.keyMetaData.location=${dir.keystore}/ssl-keystore-passwords.properties
encryption.ssl.truststore.location=${dir.keystore}/ssl.truststore
encryption.ssl.truststore.type=JCEKS
encryption.ssl.truststore.keyMetaData.location=${dir.keystore}/ssl-truststore-passwords.properties
Solr Configuration:
solr.in.sh file:
SOLR_PORT=8983
SOLR_SSL_KEY_STORE=/opt/alfresco-search-services/solrhome/keystore/ssl.keystore
SOLR_SSL_KEY_STORE_PASSWORD=kT9X6oe68t
SOLR_SSL_TRUST_STORE=/opt/alfresco-search-services/solrhome/keystore/ssl.truststore
SOLR_SSL_TRUST_STORE_PASSWORD=kT9X6oe68t
SOLR_SSL_NEED_CLIENT_AUTH=true
SOLR_SSL_WANT_CLIENT_AUTH=false
alfresco core > solrcore.properties AND archive core > solrcore.properties
alfresco.secureComms=https
data.dir.root=/opt/alfresco-search-services/solrhome/
alfresco.port.ssl=8443
alfresco.encryption.ssl.keystore.passwordFileLocation=ssl-keystore-passwords.properties
alfresco.encryption.ssl.truststore.passwordFileLocation=ssl-truststore-passwords.properties
alfresco.baseUrl=/alfresco
alfresco.host=alfdevhostname.domain.com
alfresco.encryption.ssl.keystore.provider=
alfresco.encryption.ssl.truststore.type=JCEKS
alfresco.encryption.ssl.truststore.provider=
alfresco.encryption.ssl.keystore.type=JCEKS
alfresco.encryption.ssl.keystore.location=ssl.keystore
alfresco.port=80
alfresco.version=5.2.3
alfresco.encryption.ssl.truststore.location=ssl.truststore
No need to touch the files under this location:
/opt/alfresco-search-services/solrhome/templates/rerank/conf
And finally the most important part:
Latest/Updated Certificate files placed under:
/opt/alfresco-search-services/solrhome/keystore
And the same certificate files placed under:
/opt/alfresco-search-services/solrhome/alfresco/conf
and
/opt/alfresco-search-services/solrhome/archive/conf
and on ACS server:
/opt/alfresco-content-services/alf_data/keystore
On top of it, if the issue is still not getting resolved, you can try the following:
Set solr.secureComms=none in alf-global, and alfresco.secureComms=none in archive core and alfresco core, and restart both entities to see if the normal HTTP connection is working without SSL or HTTPS
Validate with the infra/network team whether the installed certificates are correct or not
Try pointing Alfresco and Solr directly at each other's IP addresses instead of host names, as the traffic might be coming through a load balancer
Try telnet to the Solr host from the Alfresco repo server, and also vice versa
Put -Djavax.net.debug=all under alfresco > tomcat/scripts/ctl.sh and see if you get any useful information
Check not just alfresco.log and solr.log; look in the access logs for 404 or 200 status responses, or curl on the Solr machine against the URL that is logged in the localhost-access logs
Avoid starting/stopping Solr with the root user; ideally there should be a dedicated user for Solr
Ideally certificates should be copied from Alfresco (alf_data/keystore) to the Solr server, not from Solr to the Alfresco server. But if that is not working, you can try the other way around.
The alfresco.host, share.host, alfresco.port, share.port in alf-global should match the properties in solrhome/alfresco/conf/solrcore.properties and solrhome/archive/conf/solrcore.properties
Try turning on debug statements on both the Alfresco repo side and the Solr side to capture any unknown or hidden exceptions/errors.
You can also check the solr-admin console page from browser and check the logs from there.
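As a complement to the telnet and -Djavax.net.debug checks above, a small standalone Java test can confirm that the keystore/truststore pair actually permits a TLS handshake with Solr. This is only a sketch: the host, port and paths are taken from the settings above, and the password is a placeholder you should adjust.

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class SolrHandshakeTest {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // placeholder, use your keystore password

        // load the same JCEKS keystore/truststore that alfresco-global.properties points to
        KeyStore keyStore = KeyStore.getInstance("JCEKS");
        keyStore.load(new FileInputStream("/opt/alfresco-content-services/alf_data/keystore/ssl.keystore"), password);
        KeyStore trustStore = KeyStore.getInstance("JCEKS");
        trustStore.load(new FileInputStream("/opt/alfresco-content-services/alf_data/keystore/ssl.truststore"), password);

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext context = SSLContext.getInstance("TLS");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        // attempt a mutual-TLS handshake against the Solr SSL port
        SSLSocketFactory factory = context.getSocketFactory();
        SSLSocket socket = (SSLSocket) factory.createSocket("alfresco-dev-index.domain.com", 8983);
        socket.startHandshake();
        System.out.println("Handshake OK with " + socket.getSession().getPeerPrincipal());
        socket.close();
    }
}

If the handshake fails here with the same PKIX or connection errors, the problem is in the certificates themselves rather than in the Alfresco or Solr configuration.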
I faced a similar issue on Alfresco 6.2.2 with alfresco-insight-engine 2.0.0. I hit several errors like the ones below, one by one, while changing the configuration:
If certificates do not match between ACS and Solr, or between ACS, Solr and AWS, or the generated certificates are incorrect, or the certificates are compatible only with a particular Java version, or the certificates were not added to the truststore correctly, then you may get:
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException
unable to find valid certification path to requested target
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
I checked that the configuration (certificate) was imported correctly at the AWS side, and no restriction was applied at the AWS side.
But finally I was able to resolve it with the following combination:
Alfresco side
Server.xml:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
SSLEnabled="true" maxThreads="150" scheme="https"
keystoreFile="/app/tomcat/keystores/ssl.keystore"
keystorePass="pwd" keystoreType="JCEKS"
secure="true" connectionTimeout="240000"
truststoreFile="/app/tomcat/keystores/ssl.truststore"
truststorePass="pwd" truststoreType="JCEKS"
clientAuth="false" sslProtocol="TLS" />
alfresco-global.properties:
index.subsystem.name=solr6
solr.secureComms=https
solr.port=8984
solr.port.ssl=8984
solr.host=domainname
alfresco.context=alfresco
alfresco.host=host
alfresco.port=8443
alfresco.protocol=https
#
share.context=share
share.host=host
share.port=8443
share.protocol=https
#ssl encryption
encryption.ssl.keystore.location=/app/tomcat/keystores/ssl.keystore
encryption.ssl.keystore.type=JCEKS
encryption.ssl.keystore.keyMetaData.location=/app/tomcat/keystores/ssl-keystore-passwords.properties
encryption.ssl.truststore.location=/app/tomcat/keystores/ssl.truststore
encryption.ssl.truststore.type=JCEKS
encryption.ssl.truststore.keyMetaData.location=/app/tomcat/keystores/ssl-truststore-passwords.properties
solr side
solr.in.sh
SOLR_SOLR_HOST=domainname
SOLR_ALFRESCO_HOST=domainname
SOLR_SSL_CUSTOM="-Dsolr.ssl.checkPeerName=false -Dsolr.allow.unsafe.resourceloading=true"
SOLR_OPTS="$SOLR_SSL_CUSTOM"
SOLR_PORT=8984
SOLR_HOST=domainname
SOLR_SSL_KEY_STORE=/app/alfresco-insight-engine/solrhome/keystore/ssl.repo.client.keystore
SOLR_SSL_KEY_STORE_PASSWORD=pwd
SOLR_SSL_KEY_STORE_TYPE=JCEKS
SOLR_SSL_TRUST_STORE=/app/alfresco-insight-engine/solrhome/keystore/ssl.repo.client.truststore
SOLR_SSL_TRUST_STORE_PASSWORD=pwd
SOLR_SSL_TRUST_STORE_TYPE=JCEKS
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=true
solrcore.properties (both cores)
alfresco.encryption.ssl.truststore.location=ssl.repo.client.truststore
alfresco.encryption.ssl.keystore.provider=
alfresco.encryption.ssl.truststore.type=JCEKS
alfresco.host=ip-10-233-4-126.ap-east-1.compute.internal
alfresco.encryption.ssl.keystore.location=ssl.repo.client.keystore
alfresco.encryption.ssl.truststore.provider=
alfresco.port.ssl=8443
alfresco.encryption.ssl.truststore.passwordFileLocation=ssl-truststore-passwords.properties
alfresco.port=8080
alfresco.encryption.ssl.keystore.type=JCEKS
alfresco.secureComms=https
alfresco.encryption.ssl.keystore.passwordFileLocation=ssl-keystore-passwords.properties
solrcore.properties (under rerank/conf)
alfresco.host=domainname
alfresco.port=8080
alfresco.port.ssl=8443
alfresco.secureComms=https
alfresco.encryption.ssl.keystore.type=JCEKS
alfresco.encryption.ssl.keystore.provider=
alfresco.encryption.ssl.keystore.location=ssl.repo.client.keystore
alfresco.encryption.ssl.keystore.passwordFileLocation=ssl-keystore-passwords.properties
alfresco.encryption.ssl.truststore.type=JCEKS
alfresco.encryption.ssl.truststore.provider=
alfresco.encryption.ssl.truststore.location=ssl.repo.client.truststore
alfresco.encryption.ssl.truststore.passwordFileLocation=ssl-truststore-passwords.properties
The alfresco keystore files (used/pointed to by Alfresco) are under /app/tomcat/keystores.
And solr keystore files (used/pointed to by solr) are under /app/alfresco-insight-engine/solrhome/keystore.
NOTE: We have also copied the Solr keystore files to the following locations: /app/alfresco-insight-engine/solrhome/alfresco/conf , /app/alfresco-insight-engine/solrhome/archive/conf , /app/alfresco-insight-engine/solrhome/templates/rerank/conf
NOTE: If it's just a certificate not added to truststore cacerts, then you can add the certificate to the cacerts using this link: Error - trustAnchors parameter must be non-empty
Other points which can be checked if above does not work:
Check if the Java version is a supported one (in the supported stack) and the certificates are correctly added to the truststore.
Check the Java version from Alfresco's admin summary page and verify that the certificates are added to the correct Java installation.
Check if the Solr host, port and SSL port are correctly picked up. Verify this location - http://domainname/alfresco/s/enterprise/admin/admin-searchservice - as the port might be picked up from here and might not match the one in the alfresco-global.properties file. In case of mismatching properties between alf-global and the admin-searchservice URL, you may get a "Connection refused" error in the Alfresco logs when Alfresco tries to connect to Solr.
If the JKS certificate type has become obsolete, try generating PKCS12 or JCEKS certificates.
When Solr is running on 8983 (http) as well as 8984 (https/ssl), you may get the error "Unsupported or unrecognized SSL message". Try stopping the one which is not used.
If the https 8984 Solr URL is not accessible from a browser, then try importing the correct certificate at AWS, and also try adding the following entry in the /app/alfresco-insight-engine/solr/server/etc/jetty-ssl.xml file: FALSE
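To double-check the point above about certificates actually landing in the truststore, a small sketch like the following lists what a given truststore contains (the path, store type and password are assumptions based on the configuration above; adjust them to your environment):

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

public class TruststoreCheck {
    public static void main(String[] args) throws Exception {
        // assumed path/type/password - adjust to your environment
        KeyStore trustStore = KeyStore.getInstance("JCEKS");
        trustStore.load(
                new FileInputStream("/app/alfresco-insight-engine/solrhome/keystore/ssl.repo.client.truststore"),
                "pwd".toCharArray());

        // print the subject and expiry date of every trusted certificate entry
        Enumeration<String> aliases = trustStore.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (trustStore.isCertificateEntry(alias)) {
                X509Certificate cert = (X509Certificate) trustStore.getCertificate(alias);
                System.out.println(alias + " -> " + cert.getSubjectDN() + ", expires " + cert.getNotAfter());
            }
        }
    }
}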

Deltaspike service layer catching PersistenceException unexpected behavior

I have a Java EE 8 web app using DeltaSpike with the core, data and JSF modules.
I also added a CDI bean (RegisterService, using @Transactional) which calls a UserRepository:
public void createNewUser(User newUser) {
    try {
        userRepository.save(newUser);
    } catch (PersistenceException ex) {
        throw new RegisterException("Error", ex); // it never reaches this point
    }
}
When the service layer calls the repository, I'm catching PersistenceException and rethrowing a service-layer exception, but the repository never throws PersistenceException, even if the primary ID is duplicated (I can only see the stack trace in the console output). Of course, I'm deliberately running into a constraint violation to try to exercise the expected flow.
I'm using a DeltaSpike exception handler, but I have not added a handler for PersistenceException.
Does somebody know what could be happening here?
This is a reproducible example: https://github.com/gdiazs/javaee8-fullstack
Since there are no tables in the DB, it will throw a PersistenceException.
I ran this example on 2 laptops.
Mine: OSX 10.14.6
JDK: OpenJDK Zulu 1.8
Eclipse and also Maven terminal execution
Work: OSX 10.14.6 (it works as I expect here)
JDK: Oracle 1.8
IntelliJ and also Maven terminal execution
So now I'm really confused; no idea why it works on just one.

TLSv1.2 on Jboss 5.1.0 GA using Java 6 and BouncyCastle

I'm facing a problem with a JBoss server and the HTTPS connector, running on Java 6.
I want my server to use only TLSv1.2 and the cipher suites "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" with its certificate.
I know that Java 6 does not support TLSv1.2, but I added the Bouncy Castle JCE and JSSE providers to the JDK (https://www.bouncycastle.org/latest_releases.html):
Added the JAR files (bcprov-jdk15on-159.jar and bctls-jdk15on-159.jar) to the path_to_jdk/jre/lib/ext folder
Edited the file path_to_jdk/jre/lib/security/java.security to add the lines:
security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider
security.provider.11=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider
The Java instruction SSLContext.getInstance("TLSv1.2"); no longer throws a NoSuchAlgorithmException if I test it in a small test class.
On Jboss :
Edited the file path_to_jboss/server/default/deploy/jbossweb.sar/server.xml to have:
<Connector protocol="HTTP/1.1" SSLEnabled="true"
port="8443" address="${jboss.bind.address}"
keystoreFile="${jboss.server.home.dir}/conf/jboss.pfx"
keystorePass="password" sslProtocols="TLSv1.2" maxThreads="170"/>
After that, JBoss is still offering only the SSLv3 and TLSv1 protocols for HTTPS connections.
Any solution?
Thanks
I believe the 'sslProtocols' attribute translates to a call to SSLParameters.setProtocols (later given effect by SSLSocket.setParameters), and doesn't affect the SSLContext.getInstance call. So you are still getting a SunJSSE SSLContext because you added BCJSSE at lower priority.
I suggest moving the BouncyCastleJsseProvider entry in java.security to a higher priority (than com.sun.net.ssl.internal.ssl.Provider).
Also in java.security you will need to set the default KMF type from SunX509 to PKIX (change the existing entry):
ssl.KeyManagerFactory.algorithm=PKIX
This is because BCJSSE currently only works with its own KMF implementation.
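To confirm that the provider reordering and the KMF change actually took effect (independently of JBoss), a small check using only standard JSSE calls can be run with the modified JDK; it prints which provider serves the defaults that JBoss will end up using:

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

public class ProviderCheck {
    public static void main(String[] args) throws Exception {
        // which provider wins for the generic "TLS" algorithm is decided by provider priority;
        // this should report BCJSSE once BouncyCastleJsseProvider is listed above SunJSSE
        SSLContext context = SSLContext.getInstance("TLS");
        System.out.println("SSLContext provider: " + context.getProvider().getName());

        // should report PKIX once ssl.KeyManagerFactory.algorithm has been changed
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        System.out.println("Default KMF: " + kmf.getAlgorithm()
                + " (provider " + kmf.getProvider().getName() + ")");
    }
}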
