When using the Data Factory connector to Snowflake, I consistently get the error message below. Does anyone have any idea how to fix this?
I am using an Azure-managed Integration Runtime.
ERROR [HY000] [Microsoft][Snowflake] (4) REST request for URL
https://xxxxxxx.east-us-2.azure.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?requestId=2fb149b1-5f57-47ad-a471-8a8db718336c&request_guid=25dcec4f-f680-4f18-b018-363084843708&databaseName=DEMO_DB&warehouse=COMPUTE_WH failed: CURLerror (curl_easy_perform() failed) - code=60 msg='SSL peer
certificate or SSH remote key was not OK'.
Activity ID: 376547c0-6604-454d-b881-544cb6e7811a.
Probably not a good idea, from a security perspective, to leave your account ID visible like this.
Anyway, the issue is most likely that you have misconfigured your connection: the domain is repeated in the URL (...snowflakecomputing.com.snowflakecomputing.com). The connector appends .snowflakecomputing.com for you, so the account name in the linked service should be just the account identifier (e.g. xxxxxxx.east-us-2.azure) without the domain suffix.
We are trying to connect to two keyspaces of Cassandra (3.x) in the same application with the same Kerberos credentials. The application is able to connect to one keyspace but not the other. Access to the keyspaces has been verified.
Error on connection:
2022-08-22 13:15:10,972 [cluster-reconnection-0] DEBUG c.d.d.c.ControlConnection [--]- [Control connection] error on 169.24.167.109:9042 connection, trying next host
javax.security.auth.login.LoginException: No LoginModules configured for CassandraJavaClient
at javax.security.auth.login.LoginContext.init(LoginContext.java:264)
at javax.security.auth.login.LoginContext.<init>(LoginContext.java:417)
The JAAS login configuration (which references the ticket cache) is:
CassandraJavaClient {
com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true ticketCache="/var//krb5cc_userlogin";
};
The same ticket cache file is used by the first connection, which succeeds, while the second connection fails. I am not even sure how to debug it (I tried remote debugging, but since the initial control connection is an async call, I was unable to get to the actual error).
We are using com.datastax.cassandra:cassandra-driver-core:jar:3.6.0
Any ideas or help to debug/resolve this would be highly appreciated.
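In case it helps reproduce the problem: "No LoginModules configured for CassandraJavaClient" usually means the JAAS configuration file was not picked up at all when the second connection is created. Below is a minimal sketch (not from the original post; the file path is a placeholder) for confirming that the CassandraJavaClient entry is visible to the JVM before the driver ever connects.
// Minimal sketch: verify the JAAS entry named "CassandraJavaClient" can be loaded.
// The config file path is a placeholder, not taken from the original post.
import javax.security.auth.login.LoginContext;

public class JaasCheck {
    public static void main(String[] args) throws Exception {
        // Both cluster connections rely on this property; if it is missing (or points
        // to the wrong file) when the second connection is built, LoginContext fails
        // with "No LoginModules configured for CassandraJavaClient".
        System.setProperty("java.security.auth.login.config", "/etc/cassandra/jaas.conf");
        LoginContext lc = new LoginContext("CassandraJavaClient");
        lc.login(); // uses the ticket cache referenced in the configuration
        System.out.println("Kerberos login OK for subject: " + lc.getSubject());
    }
}
If this small check fails in the same environment as the failing connection, the problem is in how the JAAS config is supplied to that JVM rather than in the driver itself.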
LocalStack, AWS, Terraform.
I am running terraform init/apply and getting an SSL error that I have spent two days of Google searches and many attempts trying to fix.
I really just want to disable SSL checks for aws-sdk-go (local environment only).
I have an /etc/hosts alias that points s3.amazonaws.com to a private-network LocalStack instance.
Here is the error. I would really appreciate any help on solving this.
2022-04-17T14:53:42.026-0600 [DEBUG] [aws-sdk-go] DEBUG: Send Request s3/GetObject failed, attempt 5/5, error RequestError: send request failed
caused by: Get "https://s3.amazonaws.com/osdubucket/terraform.tfstate": x509: certificate is valid for *.amplifyapp.localhost.localstack.cloud, *.cloudfront.localhost.localstack.cloud, *.execute-api.localhost.localstack.cloud, *.localhost.localstack.cloud, *.opensearch.localhost.localstack.cloud, *.s3.localhost.localstack.cloud, *.scm.localhost.localstack.cloud, localhost.localstack.cloud, not s3.amazonaws.com
I have a basic Pulsar app, and when I try to connect to Pulsar, I get this exception:
2021-03-10 14:38:26.107 WARN 7 --- [r-client-io-1-1] o.a.pulsar.client.impl.ConnectionPool : Failed to open connection to my-pulsar-server-ms-tls.domain.com:6651 : io.netty.channel.ConnectTimeoutException: connection timed out: my-pulsar-server-ms-tls.domain.com/10.80.13.38:6651
2021-03-10 14:38:26.212 WARN 7 --- [al-listener-3-1] o.a.pulsar.client.impl.PulsarClientImpl : [topic: persistent://myTenant/myNamespace/myTopic] Could not get connection while getPartitionedTopicMetadata -- Will try again in 100 ms
My Pulsar client is pretty basic:
PulsarClient.builder()
.serviceUrl(serviceUrl)
.authentication(AuthenticationFactory.token(authToken))
.tlsTrustCertsFilePath(serverCertificateFilePath.toString())
.enableTlsHostnameVerification(false)
.allowTlsInsecureConnection(false)
.build();
The producer is also pretty basic and looks like this:
pulsarClient.newProducer(Schema.STRING)
.topic(topic)
.create();
I've verified that the token and TLS cert are correct. I've also tried connecting a consumer from this same environment and got a similar exception, and I know that others with the same code are able to connect to the same Pulsar cluster from other environments. What is the issue?
Your connection is getting blocked by a firewall or network issue.
Verify that you can establish a connection to your endpoint my-pulsar-server-ms-tls.domain.com:6651 from your environment.
If you're able to run a network packet dump (like tcpdump), that should make it obvious if you're not able to establish a connection.
You can also try running curl my-pulsar-server-ms-tls.domain.com:6651: if you get any response back, you were able to reach the server; if you get Could not resolve host or a timeout, you are being blocked by the network configuration (such as a missing route) or a firewall.
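If you prefer to stay in Java, a minimal sketch of that reachability check could look like the following (host and port copied from the question; the timeout value is only illustrative):
// Minimal sketch: plain TCP reachability check for the Pulsar endpoint.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachabilityCheck {
    public static void main(String[] args) {
        String host = "my-pulsar-server-ms-tls.domain.com";
        int port = 6651;
        try (Socket socket = new Socket()) {
            // A 10-second timeout; if this also times out, the problem is the
            // network/firewall path, not the Pulsar client configuration.
            socket.connect(new InetSocketAddress(host, port), 10_000);
            System.out.println("TCP connection established to " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Could not connect: " + e);
        }
    }
}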
I'm trying to use the DocuSign API for an application that I'm running locally, and I see the following error:
"message":"Uncaught Error when executing a Single
Cause: com.docusign.esign.client.ApiException: Error while
requesting server, received a non successful HTTP code 400 with response Body:
'{"errorCode":"HTTPS_REQUIRED_FOR_CONNECT_LISTENER",
"message":"HTTPS required for Connect listener communication."}'
Description: com.docusign.esign.client.ApiException: Error while
requesting server, received a non successful HTTP code 400 with response Body:
'{"errorCode":"HTTPS_REQUIRED_FOR_CONNECT_LISTENER",
"message":"HTTPS required for Connect listener communication."}
I am behind a company proxy, but I have been able to use the API in the past and create envelopes without an issue, so I'm not sure how to address this. Any help would be greatly appreciated.
This change is discussed in the January release notes.
Connect can only be used with HTTPS listeners (customers' servers).
And note that the server must use a certificate that chains to a root cert in the Microsoft standard root cert list. (Self-signed certs won't work.) You can use a free cert from Let's Encrypt or a $15 cert from a reputable CA.
I'm sorry that this update caught you by surprise.
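If you want to sanity-check the listener before pointing Connect at it, here is a rough sketch in Java; the URL is hypothetical, and note that using the JDK's default trust store is only an approximation, since DocuSign validates against the Microsoft root certificate list rather than Java's.
// Minimal sketch: confirm the Connect listener URL serves a certificate that a
// standard trust store accepts. The URL below is a placeholder.
import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

public class ListenerTlsCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://connect-listener.example.com/docusign/webhook");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.connect(); // fails here if the TLS handshake is rejected (e.g. self-signed cert)
        for (Certificate cert : conn.getServerCertificates()) {
            if (cert instanceof X509Certificate) {
                System.out.println(((X509Certificate) cert).getSubjectX500Principal());
            }
        }
        conn.disconnect();
    }
}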
Getting this error now; it was working fine before:
Fatal error: Uncaught DocuSign\eSign\Client\ApiException: Error while requesting server, received a non successful HTTP code [400] with response Body: O:8:"stdClass":2:{s:9:"errorCode";s:35:"HTTPS_REQUIRED_FOR_CONNECT_LISTENER";s:7:"message";s:50:"HTTPS required for Connect listener communication.";}
I am debugging a cluster where the nodes are not coming online after deployment using an ARM template. I think the issue has something to do with the certificate.
I have the following events that might help in figuring out what the issue is:
SecurityUtil::GetX509SvrCredThumbprint(LocalMachine, My, FindByThumbprint:6a187334b4ba95589cd5ee733b9ca1c3499eab5f) failed: FABRIC_E_INVALID_CREDENTIALS
Unable to acquire ssl credentials: FABRIC_E_INVALID_CREDENTIALS
failed to set security settings to { provider=SSL protection=EncryptAndSign certType = 'cluster' store='LocalMachine/My' findValue='FindByThumbprint:6a187334b4ba95589cd5ee733b9ca1c3499eab5f' remoteCertThumbprints='6a187334b4ba95589cd5ee733b9ca1c3499eab5f' certChainFlags=40000000 isClientRoleInEffect=false claimBasedClientAuthEnabled=false }: FABRIC_E_INVALID_CREDENTIALS
Failed to set security on transport: FABRIC_E_INVALID_CREDENTIALS
federation open failed with FABRIC_E_INVALID_CREDENTIALS
Fabric Node open failed with error code = FABRIC_E_INVALID_CREDENTIALS
HostedService: _nt1vm_0 on node id 72e0ec579b75d9847ba5a43d6b365d7c terminated unexpectedly with code 7167 and process name Fabric.exe
The thumbprint matches the expected cert used in the template deployment.
The certificate was created in C# and stored in a secret with var certBase64 = Convert.ToBase64String(x509Certificate.Export(X509ContentType.Pkcs12)); and content type application/x-pkcs12.
I was using expired certificates, which caused this. :(
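For anyone who wants to rule this out quickly, below is a minimal sketch of checking the validity window of the exported PKCS#12 bundle; the file name and password are placeholders and this is just one way to inspect it, not part of the original deployment (on the node itself you could equally inspect the LocalMachine\My store).
// Minimal sketch: print and verify the validity window of each certificate in a
// PKCS#12 file. File name and password below are placeholders.
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Collections;

public class CertExpiryCheck {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("cluster-cert.pfx")) {
            ks.load(in, "changeit".toCharArray());
        }
        for (String alias : Collections.list(ks.aliases())) {
            X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
            System.out.println(alias + ": valid from " + cert.getNotBefore()
                    + " to " + cert.getNotAfter());
            cert.checkValidity(); // throws CertificateExpiredException if the cert has expired
        }
    }
}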