livy curl request error for Kerberos Cloudera Hadoop - apache-spark

Configured Livy server on kerberized CDH 5.10.x and it's running fine on port 8998, but a curl request gives the error below:
curl --negotiate -u : http://xxxxxxx:8998/sessions
Error 403
HTTP ERROR: 403
Problem accessing /sessions. Reason:
    GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
Powered by Jetty://
I'm unable to understand why the request is not passing through the Kerberos security layer.

This error indicates that your Kerberos ticket most likely doesn't exist or has expired.
Have you run kinit to create your Kerberos ticket?
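A quick way to confirm this on the machine where curl runs (the principal below is a placeholder):
# Show the current ticket cache; an empty cache or an expired ticket means
# curl --negotiate has nothing to present for SPNEGO.
klist
# If there is no valid ticket, obtain one (placeholder principal):
kinit user@EXAMPLE.COM
# Confirm the ticket and its expiry time.
klist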

For testing purposes, can you kinit as the hdfs user? (You can find the keytab on a machine with HDFS roles - NameNode, DataNode - under /var/run/cloudera-scm-agent/process/hdfs/hdfs.keytab.)
kinit -kt hdfs.keytab hdfs/hostname@REALM
Or kinit as your own user: kinit user@REALM
And then try
curl --negotiate -u : -X GET -H "Content-Type: application/json" http://xxxx.xxxx:8998/sessions
To create a pyspark session:
curl --negotiate -u : -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" http://xxxxx:8998/sessions
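As a rough end-to-end check once the ticket is in place (the host and the session id are placeholders; the endpoints are the standard Livy REST API paths):
# Create a pyspark session and note the id in the JSON response.
curl --negotiate -u : -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" http://livy-host:8998/sessions
# Poll the session until its state goes from "starting" to "idle" (assuming id 0).
curl --negotiate -u : http://livy-host:8998/sessions/0
# Delete the session when finished.
curl --negotiate -u : -X DELETE http://livy-host:8998/sessions/0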

Related

troubleshoot sporadic SSL "bad record MAC" exception in influxdb

We recently updated our influxdb configuration (to reduce SWEET32 issues) as follows:
[http]
auth-enabled = true
pprof-enabled = false
flux-enabled = true
https-enabled = true
https-certificate = "client.crt"
https-private-key = "client.key"
[tls]
min-version = "tls1.2"
max-version = "tls1.3"
# https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29
#
# Can this be configured more cleanly?
# strict-ciphers didn't work / or not sure on where to configure it
ciphers = [ "TLS_AES_128_GCM_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA"
]
Anyway, the configuration seemed to be running fine for a while, and then suddenly yesterday our influxdb became inaccessible. Grafana started throwing 502 errors, and the following curl command:
curl --fail --silent --show-error -k -u grafana_user:<redacted> -G "https://10.0.67.1:8086/query?db=metrics" --data-urlencode "q=select LAST(value) from /^some.metric*/ where time > now() - 1m"
failed with: curl: (7) Failed to connect to 10.0.67.1 port 8086: Connection refused
On restarting the VM, of course, everything worked again. On checking the logs, the error reported by influxdb was:
http: TLS handshake error from 10.0.67.6:38084: local error: tls: bad record MAC
How could this be debugged, and what could be the possible ways to fix this?
Using influxdb: 1.8.10
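One way to narrow this down the next time it happens is to check, outside of Grafana, whether the port is accepting connections at all and which TLS version and cipher are actually negotiated (host and port are the ones from the question; the -brief and -tls1_3 options assume OpenSSL 1.1.1 or newer):
# Check whether anything is listening and what a TLS 1.2 handshake negotiates.
openssl s_client -connect 10.0.67.1:8086 -tls1_2 -brief < /dev/null
# Repeat for TLS 1.3 to see which protocol clients are likely to land on.
openssl s_client -connect 10.0.67.1:8086 -tls1_3 -brief < /dev/null
# Re-run a simple query with verbose output to see where the handshake breaks.
curl -kv -G "https://10.0.67.1:8086/query?db=metrics" -u grafana_user:<redacted> --data-urlencode "q=SHOW DATABASES"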

Cannot connect to cluster with cqlsh using secure connect bundle

I am getting an error when I try to connect to a DataStax Cassandra instance.
bin/cqlsh -u admin -p PASSWORD -b BUNDLE_ZIP_PATH
Connection error: ('Unable to connect to any servers', \
{'xxx:xxx:xxx': ValueError('No host_id to create the SniEndPoint',)} \
)
Has anyone seen this error? This is a connection to a cloud-managed DataStax instance on IBM Cloud, and the connection used to work before.
The error is generated by the embedded Python driver that cqlsh uses to connect to clusters. It indicates that it couldn't get the host from the secure bundle.
The most likely cause is that the secure bundle you're using is corrupted, so I'd suggest downloading it from the source again. Cheers!
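Before re-downloading, a quick sanity check of the existing bundle can confirm whether it is corrupted (the file name is a placeholder; a valid secure connect bundle is a zip that contains, among other files, a config.json with the host to connect to):
# Test the zip for corruption or truncation.
unzip -t secure-connect-bundle.zip
# Print config.json and check that the host and port entries are present.
unzip -p secure-connect-bundle.zip config.json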

wildfly 25 quickstart ee-security

I can't get the ee-security quickstart to work with WildFly 25.0.1.
After sending the request:
curl -v http://localhost:8080/ee-security/secured -H 'X-Username:quickstartUser' -H 'X-Password:quickstartPwd1!'
I get this:
Caused by: java.io.IOException: ELY01177: Authorization failed.
at org.wildfly.security.jakarta.authentication@1.17.1.Final//org.wildfly.security.auth.jaspi.impl.JaspiAuthenticationContext$1.handleOne(JaspiAuthenticationContext.java:188)
at org.wildfly.security.jakarta.authentication@1.17.1.Final//org.wildfly.security.auth.jaspi.impl.JaspiAuthenticationContext$1.lambda$handle$0(JaspiAuthenticationContext.java:100)
at org.wildfly.security.jakarta.authentication@1.17.1.Final//org.wildfly.security.auth.jaspi.impl.SecurityActions.doPrivileged(SecurityActions.java:39)
at org.wildfly.security.jakarta.authentication@1.17.1.Final//org.wildfly.security.auth.jaspi.impl.JaspiAuthenticationContext$1.handle(JaspiAuthenticationContext.java:99)
What should I do?

Connecting ODBC to AzureDatabricks using Simba Driver

I am simply trying to set up an ODBC driver to connect to a Databricks cluster.
According to the MS documentation
https://learn.microsoft.com/en-us/azure/databricks/kb/bi/jdbc-odbc-troubleshooting
If you get a TTransport exception using the curl command, you have successfully reached the cluster and authenticated.
When I run...
curl https://adb-77180857967XXXXX.6.azuredatabricks.net:443/sql/protocolv1/o/7718085796704186/0910-172424-pizza885 -H "Authorization: Bearer XXXXX"
It does produce the error which indicates success...
Error 500 Server Error
HTTP ERROR 500
Problem accessing /cliservice. Reason:
    Server Error
Caused by:
javax.servlet.ServletException: org.apache.thrift.transport.TTransportException
When I test the connection from the ODBC driver I get the following error:
FAILED!
[Simba][ThriftExtension] (14) Unexpected response from server during a HTTP connection: Could not resolve host for client socket..
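Since curl reaches the workspace but the driver reports it cannot resolve the host, it is worth confirming, from the machine where the ODBC DSN is configured, that the exact hostname entered in the DSN resolves and is reachable (the hostname below is the redacted one from the curl test):
# Check that the workspace hostname resolves from the ODBC client machine.
nslookup adb-77180857967XXXXX.6.azuredatabricks.net
# Confirm the HTTPS endpoint is reachable on 443; any HTTP status (even 401/500)
# proves connectivity, whereas a resolution or connection error does not.
curl -v https://adb-77180857967XXXXX.6.azuredatabricks.net:443/ -o /dev/null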

Cassandra with ldap intergration

Using DataStax 5.1 Cassandra, I'm trying to integrate LDAP with it. I added the required parameters in dse.yaml and cassandra.yaml, but when I try to authenticate an LDAP user it keeps failing with the error below.
[root@ip-11.11.11.11 ~]# cqlsh -u 123456
Password:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': AuthenticationFailed('Failed to authenticate to 127.0.0.1: Error from server: code=0100 [Bad credentials] message="Failed to login. Please re-try."',)})
Here is the message from debug.log.
ERROR [Native-Transport-Requests-1] 2019-06-06 05:34:50,842 DefaultLdapConnectionFactory.java:68 - unable to bind connection: PROTOCOL_ERROR: The server will disconnect!
TRACE [Native-Transport-Requests-1] 2019-06-06 05:34:50,843 LdapUtils.java:577 - [ldap-fetch-user] ERROR - failed to fetch username: 123456
org.apache.directory.api.ldap.model.exception.LdapOperationException: PROTOCOL_ERROR: The server will disconnect!
at org.apache.directory.ldap.client.api.LdapNetworkConnection.startTls(LdapNetworkConnection.java:3986)
at org.apache.directory.ldap.client.api.LdapNetworkConnection.bindAsync(LdapNetworkConnection.java:1373)
at org.apache.directory.ldap.client.api.LdapNetworkConnection.bind(LdapNetworkConnection.java:1293)
at org.apache.directory.ldap.client.api.AbstractLdapConnection.bind(AbstractLdapConnection.java:130)
at org.apache.directory.ldap.client.api.AbstractLdapConnection.bind(AbstractLdapConnection.java:114)
It looks like it's not able to connect to LDAP, or not able to bind to it?
I am able to connect to AD using ldapsearch and get results for the user from the Cassandra node. I also imported the certificate into the keystore and referenced it in dse.yaml. Any pointers?
The issue was with the certificate chain; after using the correct intermediate certificates, binding with AD was successful.
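For reference, the chain the AD server presents can be inspected and the missing intermediates imported along these lines (hostname, file names, alias, and truststore path are placeholders; -starttls ldap assumes OpenSSL 1.1.1 or newer, matching the startTls call in the stack trace):
# Dump the full certificate chain offered by the AD/LDAP server over StartTLS.
openssl s_client -connect ad.example.com:389 -starttls ldap -showcerts < /dev/null
# Import each intermediate/root certificate into the truststore referenced in dse.yaml.
keytool -importcert -alias ad-intermediate -file intermediate.crt -keystore /path/to/truststore.jks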
