Is TLS always enabled for Azure SQL Database?

I wrote some Java code that uses com.microsoft.sqlserver.jdbc.SQLServerDataSource to establish a JDBC connection to my Azure SQL Database. I found that whether or not I called "ds.setEncrypt(true);", the JDBC connection was encrypted with TLS (I used Wireshark to capture the TCP packets, and every packet is TLS regardless of the setting).
Why? I checked many official documents, but I couldn't find the answer.
Is TLS always enabled for Azure SQL Database? Is there official documentation that confirms it?
The question is: whether or not I call ds.setEncrypt(true), and even if I set it to "false", the TCP packets are encrypted with TLS. Why?
Below is the code I use to establish the JDBC connection:
public static Connection getConnectionObject() {
    SQLServerDataSource ds = new SQLServerDataSource();
    ds.setServerName("azuresqldbserver0821.database.windows.net");
    ds.setDatabaseName("azuresqldb0821");
    ds.setPortNumber(1433);
    ds.setUser("root0817");
    ds.setPassword("<YourStrong#Passw0rd>");
    ds.setEncrypt(false); // whether I call this or not, and even with "false", the TCP packets are still TLS
    ds.setTrustServerCertificate(true);
    Connection conn;
    try {
        conn = ds.getConnection();
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
    return conn;
}

When a client first attempts a connection to SQL Azure, it sends an initial connection request. Consider this a "pre-pre-connection" request. At this point the client does not know if TLS/SSL/encryption is required and waits for an answer from SQL Azure to determine whether TLS/SSL is indeed required throughout the session (not just the login sequence, but the entire connection session). A bit is set on the response indicating so. The client library then disconnects and reconnects armed with this information.
When you set the "Encrypt connection" setting on the connection string you skip the "pre-pre-connection": you prevent any proxy from turning off the encryption bit on the client side of the proxy, so man-in-the-middle style attacks are avoided.
When secure connections are needed, please enable the "Encrypt connection" setting.
In-transit encryption to Azure SQL is always enabled.
Transport Layer Security (TLS) was previously known as Secure Sockets Layer (SSL).
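For reference, a minimal sketch (server name and credentials are the placeholders from the question) of asking for encryption explicitly on the client side instead of relying on the server-forced TLS; note that setEncrypt takes a boolean in older mssql-jdbc releases and a String in newer ones, so adjust to your driver version:

import java.sql.Connection;
import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

public class SecureAzureSqlConnection {
    public static Connection getEncryptedConnection() throws Exception {
        SQLServerDataSource ds = new SQLServerDataSource();
        ds.setServerName("azuresqldbserver0821.database.windows.net");
        ds.setDatabaseName("azuresqldb0821");
        ds.setPortNumber(1433);
        ds.setUser("root0817");
        ds.setPassword("<YourStrong#Passw0rd>");
        // Request encryption explicitly so the client refuses an unencrypted session
        ds.setEncrypt(true);
        // Validate the server certificate instead of blindly trusting it
        ds.setTrustServerCertificate(false);
        // Check that the certificate was issued for the expected host
        ds.setHostNameInCertificate("*.database.windows.net");
        return ds.getConnection();
    }
}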

Related

Spring Boot app can't connect to Cassandra cluster, driver returning "AllNodesFailedException: Could not reach any contact point"

I've updated my Spring Boot to v3.0.0 and spring-data-cassandra to v4.0.0, which resulted in being unable to connect to the Cassandra cluster that is deployed in the staging environment; it runs on an IPv6 address and uses a different datacenter than DC1.
I've added a config class which accepts the local DC programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
            .withConfigLoader(
                    DriverConfigLoader.programmaticBuilder()
                            .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                            .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                            .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                            .build())
            //.addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
            .build();
    return session;
}
and this is my application.yml file
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
So locally I was able to connect to Cassandra (by default it points to localhost), but in the staging environment my application is not able to connect to that cluster.
Logs in my staging environment:
caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node (endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Network
You appear to have a networking issue. The driver can't connect to any of the nodes because they are unreachable from a network perspective as it states in the error message:
... AllNodesFailedException: Could not reach any contact point ...
You need to check that:
you have configured the correct IP addresses,
you have configured the correct CQL port, and
there is network connectivity between your app and the cluster.
Security
I also noted that you configured the driver to use SSL:
ssl: true
but I don't see where you've configured the certificate credentials, and this could explain why the driver can't initiate connections.
Check that the cluster has client-to-node encryption enabled. If it does then you need to prepare the client certificates and configure SSL on the driver.
Driver build
This post appears to be a duplicate of another question you posted but is now closed due to lack of clarity and details.
In that question it appears you are running a version of the Java driver not produced by DataStax, as pointed out by @absurdface:
Specifically I note that java-driver-core-4.11.4-yb-1-RC1.jar isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into ...
We are not aware of where this build came from and without knowing much about it, it could be the reason you are not able to connect to the cluster.
We recommend that you switch to one of the supported builds of the Java driver. Cheers!
A hearty +1 to everything @erick-ramirez mentioned above. I would also expand on his answers with an observation or two.
Normally spring-data-cassandra is used to automatically configure a CqlSession and make it available for injection (or for use in CqlTemplate etc.). That's what you'd normally be configuring with your application.yml file. But you're apparently creating the CqlSession directly in code, which means that spring-data-cassandra isn't involved... and therefore what's in your application.yml likely isn't being used.
This analysis strongly suggests that your CqlSession is not being configured to use SSL. My understanding is that your testing sequence went as follows:
Tested app locally on a local server, everything worked
Tested app against test environment, observed the errors above
If this sequence is correct and you have SSL enabled in your test environment but not on your local Cassandra instance, that could very easily explain the behaviour you're describing.
It would also explain the specific error you cite: "Lost connection to remote peer" indicates that something is unexpectedly killing your socket connection before any protocol messages are exchanged... and an SSL issue would cause almost exactly that behaviour.
I would recommend checking the SSL configuration for both servers involved in your testing. I would also suggest consulting the SSL-related documentation referenced by Erick above and confirm that you have all the relevant materials when building your CqlSession.
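As an aside on the spring-data-cassandra point above, here is a minimal sketch (the class and query are illustrative, not from the original post) of letting Boot auto-configure the CqlSession from application.yml and injecting it, instead of building it by hand:

import com.datastax.oss.driver.api.core.CqlSession;
import org.springframework.stereotype.Service;

@Service
public class KeyspaceInfoService {

    // Auto-configured by spring-data-cassandra from the application.yml properties
    // (contact points, local datacenter, ssl, credentials), so no manual
    // CqlSession.builder() call is needed.
    private final CqlSession session;

    public KeyspaceInfoService(CqlSession session) {
        this.session = session;
    }

    public String releaseVersion() {
        return session.execute("SELECT release_version FROM system.local")
                .one()
                .getString("release_version");
    }
}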
I added the certificate in my Spring application:
public CqlSession session() throws IOException, CertificateException, NoSuchAlgorithmException, KeyStoreException, KeyManagementException {
    // Load the cluster's CA certificate from the classpath
    Resource resource = new ClassPathResource("root.crt");
    InputStream inputStream = resource.getInputStream();
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    Certificate cert = cf.generateCertificate(inputStream);

    // Build a trust store containing that CA and an SSLContext that uses it
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null);
    keyStore.setCertificateEntry("ca", cert);
    trustManagerFactory.init(keyStore);
    SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
    sslContext.init(null, trustManagerFactory.getTrustManagers(), null);

    return CqlSession.builder()
            .withSslContext(sslContext)
            .addContactPoint(new InetSocketAddress(contactPoints, port))
            .withAuthCredentials(username, password)
            .withLocalDatacenter(localDatacenter)
            .withKeyspace(keyspace)
            .build();
}
So adding the certificate file to the CqlSession builder configuration helped me connect to the remote Cassandra cluster.

Spring Integration TcpNetServerConnectionFactory for Single Never Close TCP Connection

Does the configuration below ensure a single, never-closed TCP connection, and does it recover from network errors by closing and recreating the connection? Our use case is legacy and requires a single TCP connection that keeps reading and writing. Kindly suggest.
Also, I couldn't find a way to include/configure heartbeat messages in the IntegrationFlow DSL.
@Bean
public AbstractConnectionFactory tcpNetServerConnectionFactory() {
    var tcpNetServerConnectionFactory = new TcpNetServerConnectionFactory(port);
    tcpNetServerConnectionFactory.setLeaveOpen(true);
    tcpNetServerConnectionFactory.setSoTimeout(-1);
    tcpNetServerConnectionFactory.setSoKeepAlive(true);
    tcpNetServerConnectionFactory.setSoTcpNoDelay(true);
    tcpNetServerConnectionFactory.setSerializer(byteArrayLengthHeaderSerializer());
    tcpNetServerConnectionFactory.setDeserializer(byteArrayLengthHeaderSerializer());
    return tcpNetServerConnectionFactory;
}
Yes, but the recovery is done by the client creating a new connection, not the server.
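To illustrate that client-side recovery, here is a minimal sketch (the host, port, and channel name are assumed placeholders, not from the answer): a client connection factory combined with "client mode" keeps a single outbound connection open and re-creates it after a failure.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.ip.tcp.TcpReceivingChannelAdapter;
import org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;

@Configuration
public class TcpClientConfig {

    @Bean
    public AbstractClientConnectionFactory tcpNetClientConnectionFactory() {
        // "server.example.com" and 1234 are placeholders for the real server address
        TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory("server.example.com", 1234);
        factory.setSingleUse(false);      // reuse one connection for all messages
        factory.setSoKeepAlive(true);
        return factory;
    }

    @Bean
    public TcpReceivingChannelAdapter tcpIn(AbstractClientConnectionFactory factory) {
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(factory);
        adapter.setClientMode(true);      // open the connection eagerly and reconnect when it drops
        adapter.setRetryInterval(10_000); // wait 10 seconds between reconnection attempts
        adapter.setOutputChannelName("fromTcp");
        return adapter;
    }
}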

Spring Integration Tcp project

I have a project, part of which uses a TCP connection; the case is as per below.
We have two clients, client 1 and client 2; these are conveyor belts, so if we receive data on client 1's input we should send the reply to client 2's output, and vice versa. I'm sure we can do it using Spring Integration TCP and probably gateways. Am I approaching TCP integration correctly in this case?
I don't have a code implementation yet, but I have started to put something together.
Sounds like you are implementing a chat (or similar user-to-user) communication.
No, gateways won't help you here.
You need to have a TcpReceivingChannelAdapter and TcpSendingMessageHandler connected to the same AbstractServerConnectionFactory. The TcpSendingMessageHandler is registered as a TcpSender with that connection factory, and all the sending connections are stored in the Map<String, TcpConnection> connections. When we produce a message to this MessageHandler, it tries to consult that registry like this:
private void handleMessageAsServer(Message<?> message) {
    // We don't own the connection, we are asynchronously replying
    String connectionId = message.getHeaders().get(IpHeaders.CONNECTION_ID, String.class);
    TcpConnection connection = null;
    if (connectionId != null) {
        connection = this.connections.get(connectionId);
    }
    if (connection != null) {
So, on the receiving side (TcpReceivingChannelAdapter and its sub-flow) you need to ensure that you set a proper IpHeaders.CONNECTION_ID header, so that the so-called reply is eventually produced to the desired client.
You can probably react to the TcpConnectionOpenEvent via an @EventListener and register some business key with the connectionId for future correlation (see the sketch below). When you send a message you supply that target user's business key; in the TcpReceivingChannelAdapter sub-flow you take that business key, obtain the desired connectionId from your registry, and enrich it into the IpHeaders.CONNECTION_ID header for the automatic logic in the TcpSendingMessageHandler.
When a TcpConnectionCloseEvent happens you have to remove the respective entry from your custom registry.
Since TCP/IP comes without headers support, there is no out-of-the-box mechanism to implement such a correlation feature.
That said, TcpConnectionOpenEvent might not be enough for you, since there is no business info when a connection is established. Perhaps you would need to implement some handshake logic in the TcpReceivingChannelAdapter flow to distinguish a real message from the connection metadata used for registering in the custom registry.
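A minimal sketch of that registry idea (class and key names are illustrative, not from the answer): keep a businessKey -> connectionId map, maintain it from connection events, and look it up when enriching the IpHeaders.CONNECTION_ID header.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.context.event.EventListener;
import org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent;
import org.springframework.stereotype.Component;

@Component
public class TcpConnectionRegistry {

    // businessKey -> connectionId, populated by your handshake logic on the receiving side
    private final Map<String, String> connections = new ConcurrentHashMap<>();

    public void register(String businessKey, String connectionId) {
        connections.put(businessKey, connectionId);
    }

    public String connectionIdFor(String businessKey) {
        return connections.get(businessKey);
    }

    @EventListener
    public void onClose(TcpConnectionCloseEvent event) {
        // Drop any entry pointing at the closed connection
        connections.values().removeIf(id -> id.equals(event.getConnectionId()));
    }
}

Before a message reaches the TcpSendingMessageHandler, the sub-flow would call connectionIdFor(businessKey) and enrich the result into the IpHeaders.CONNECTION_ID header.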
See more info in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/ip.html#ip-correlation
It might be also better for your use-case to look into a WebSocket support: https://docs.spring.io/spring-integration/docs/current/reference/html/web-sockets.html#web-sockets

Excel 2016: Cannot query PostgreSQL database: Server certificate not accepted

I want to import some data into Excel 2016 from a PostgreSQL table. I have tried it by clicking "New Query" and selecting From Database -> From PostgreSQL Database:
But then I receive the following error:
Details: "TlsClientStream.ClientAlertException: CertificateUnknown: Server certificate was not accepted. Chain status: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
. The specified hostname was not present in the certificate.
at TlsClientStream.TlsClientStream.ParseCertificateMessage(Byte[] buf, Int32& pos)
at TlsClientStream.TlsClientStream.TraverseHandshakeMessages()
at TlsClientStream.TlsClientStream.GetInitialHandshakeMessages(Boolean allowApplicationData)
at TlsClientStream.TlsClientStream.PerformInitialHandshake(String hostName, X509CertificateCollection clientCertificates, RemoteCertificateValidationCallback remoteCertificateValidationCallback, Boolean checkCertificateRevocation)"
Any suggestions on how to solve this? Thank you so much in advance!
This error is indicative of a connection being made to the PostgreSQL db where the server's certificate cannot be validated by the client making a connection. This error only happens when the "Trust Server Certificate" is set to FALSE in the library Excel uses to connect to PostgreSQL (npgsql).
There are several ways that may work to address this, in the order I'd suggest trying them:
If there's an option hidden in Excel (perhaps under advanced options or similar) to set the 'Trust Server Certificate' parameter to True, then your connection will start working. If it allows you to specify an entire connection string, then this can be done in the connection string as well.
The database server has an SSL certificate, referenced in its postgresql.conf file. If you (or your DB administrator) can get that certificate and add it to your machine's trusted certificate store, the server certificate can then be validated (instructions will vary depending on your operating system).
I have finally found a workaround for my problem.
What you can do is to:
Install the current PostgreSQL ODBC driver from here
Follow the instructions from this video
With this, you can connect to your PostgreSQL database via ODBC.

SSL handshake error when connecting by websocket using encrypted connection

I use the Tyrus WebSocket implementation to connect to the server from my JavaFX application. When I try to establish a connection over SSL I get this error: javax.net.ssl.SSLException: SSL handshake error has occurred - more data needed for validating the certificate
I tried to use a dummy certificate and host verification as described in "Disable Certificate Validation in Java SSL Connections", but to no avail.
There is also not much information in the Tyrus documentation.
I simply don't know what to do!
P.S. For what it's worth, I managed to get around this issue by using the Grizzly client.
//final WebSocketContainer container = ContainerProvider.getWebSocketContainer();
final ClientManager client = ClientManager.createClient();
URI uri = URI.create(this.uri + "?" + System.currentTimeMillis());
session = client.connectToServer(this, uri);
It sounds like you need to install a certificate chain. I believe you can import the signing certificate using keytool -import. Have you set up the certificate store?
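For illustration, a minimal sketch (the trust store path, password, and endpoint URI are placeholders, and the SSL classes are assumed to come from the org.glassfish.tyrus.client package; adjust to your Tyrus version) of pointing the Tyrus ClientManager at a trust store that holds the server's certificate chain, created beforehand with keytool -import as suggested above:

// SslContextConfigurator, SslEngineConfigurator and ClientProperties are from org.glassfish.tyrus.client
final ClientManager client = ClientManager.createClient();

SslContextConfigurator sslConfig = new SslContextConfigurator();
sslConfig.setTrustStoreFile("/path/to/truststore.jks"); // placeholder: trust store holding the server's CA/chain
sslConfig.setTrustStorePassword("changeit");            // placeholder password

// client mode = true, no client auth; host name verification stays enabled
SslEngineConfigurator sslEngineConfigurator = new SslEngineConfigurator(sslConfig, true, false, false);
client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, sslEngineConfigurator);

Session session = client.connectToServer(this, URI.create("wss://example.com/ws")); // placeholder endpoint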
