Kerberos SASL authentication using node-rdkafka - node.js

I am having an issue trying to connect to a Kafka instance that uses Kerberos SASL authentication with the node-rdkafka npm module, and the lack of documentation and examples out there is making the issue very difficult to solve. I am using the Alpine setup found here to pull in all of the necessary librdkafka and node-rdkafka dependencies and create an image. I am trying to run my container with the following configuration object for a node-rdkafka producer to connect to the Kafka instance:
'debug': 'all',
'client.id': 'kafka',
'metadata.broker.list': KAFKA_DOMAINS,
'sasl.kerberos.principal': '<id>@<REALM>',
'sasl.kerberos.keytab': '/path/to/<id>.keytab',
'sasl.kerberos.service.name': 'kafka',
'security.protocol': 'sasl_plaintext',
'sasl.kerberos.kinit.cmd': 'kinit -kt /path/to/<id>.keytab <id>@<REALM>',
'dr_cb': true,
'event_cb': true
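For reference, here is a minimal sketch of how a configuration like this is wired into a node-rdkafka producer; the broker list, keytab path, <id> and <REALM> are the same placeholders used above:

// Minimal sketch only: wiring the configuration above into a node-rdkafka producer.
const Kafka = require('node-rdkafka');

// Placeholder broker list; in the real setup this comes from KAFKA_DOMAINS.
const KAFKA_DOMAINS = 'broker1.example.com:9092,broker2.example.com:9092';

const producer = new Kafka.Producer({
  'debug': 'all',
  'client.id': 'kafka',
  'metadata.broker.list': KAFKA_DOMAINS,
  'sasl.kerberos.principal': '<id>@<REALM>',
  'sasl.kerberos.keytab': '/path/to/<id>.keytab',
  'sasl.kerberos.service.name': 'kafka',
  'security.protocol': 'sasl_plaintext',
  'sasl.kerberos.kinit.cmd': 'kinit -kt /path/to/<id>.keytab <id>@<REALM>',
  'dr_cb': true,
  'event_cb': true
});

// With 'debug': 'all' and 'event_cb': true, librdkafka emits its internal log lines
// (including the SASL/GSSAPI handshake) on 'event.log', which helps narrow down err -195.
producer.on('event.log', (log) => console.log(log.severity, log.fac, log.message));
producer.on('event.error', (err) => console.error('producer error:', err));
producer.on('ready', () => console.log('producer ready'));

producer.connect();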
When I try to run the container, it hangs up and then throws the following error:
[2019-02-05T17:09:59.637Z] ERROR: kafka/7 on 95c927d11411: broker transport failure (err.code=-195)
Error: Local: Broker transport failure
at Function.createLibrdkafkaError [as create] (/opt/rdkafka/node_modules/node-rdkafka/lib/error.js:261:10)
at /opt/rdkafka/node_modules/node-rdkafka/lib/client.js:339:28
Here is the output from the features and librdkafkaVersion log statements:
[2019-02-05T17:09:29.519Z] INFO: kafka/7 on 95c927d11411:
[ 'gzip',
'snappy',
'ssl',
'sasl',
'regex',
'lz4',
'sasl_gssapi',
'sasl_plain',
'sasl_scram',
'plugins' ]
[2019-02-05T17:09:29.521Z] INFO: kafka/7 on 95c927d11411: 0.11.5
What am I missing that prevents me from connecting? I know that the Java equivalent of librdkafka involves passing the REALM, KDC_HOST, and some .conf files (krb5 and jaas). Do I have to set up some environment variables or similar configuration files? I can find no examples or solutions.
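For context, librdkafka's GSSAPI support goes through the system Kerberos libraries rather than a jaas.conf, so the container typically still needs a usable /etc/krb5.conf (or KRB5_CONFIG pointing at one). A minimal sketch of such a file, with EXAMPLE.COM and kdc.example.com as placeholder realm and KDC:

# /etc/krb5.conf -- minimal sketch; realm and KDC hostnames are placeholders
[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_kdc = false
    dns_lookup_realm = false

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM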

Related

Databricks DBT Runtime Error, cannot connect to Database. Maybe an SSL error?

I have a custom Databricks instance with a domain name that points to an AWS Load Balancer. When I put that information in using either the HTTP instructions here or the Databricks cluster instructions here, I get this response in the DBT CLI:
Connection:
host: https://subdomain.domain.com
port: 443
cluster: 123456-stuff00003
endpoint: None
schema: default
organization: 0
16:40:39.470091 [debug] [MainThread]: Acquiring new spark connection "debug"
16:40:39.471632 [debug] [MainThread]: Using spark connection "debug"
16:40:39.472524 [debug] [MainThread]: On debug: select 1 as id
16:40:39.472953 [debug] [MainThread]: Opening a new connection, currently in state init
Connection test: [ERROR]
1 check failed:
dbt was unable to connect to the specified database.
The database returned the following error:
Runtime Error
Database Error
failed to connect
Unfortunately, DBT's debugging logs are terrible and I am not entirely sure why it is failing. I do know that when I connect to the cluster via IntelliJ I have to provide the CA file, the client certificate file, and the client key file, because I am using a self-signed SSL cert (unfortunately, the self-signed cert is required). Also, when defining my ~/.databrickscfg file I have to provide the setting insecure = true.
I encountered this issue recently and fixed it by installing the root certificates, i.e. by executing the "Install Certificates.command" script in the Python home directory used to run dbt.
Laurent
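For reference, with the python.org installer on macOS that script lives inside the Python application folder; a sketch of invoking it, where the version segment is a placeholder for whichever Python runs dbt:

# macOS, python.org installer; "3.x" is a placeholder for the Python version that runs dbt
open "/Applications/Python 3.x/Install Certificates.command"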

Integration of pubsubbeat with Elasticsearch

I am learning how to integrate Pub/Sub with Elasticsearch. There are various options, such as pubsubbeat, the Google_pubsub input plugin, and the Google Cloud Pub/Sub Output Plugin.
I am currently trying to use pubsubbeat and got stuck after running the command ./pubsubbeat -c pubsubbeat.yml -e -d "*" as suggested. The console log is as follows:
2019-05-23T14:42:19.949+0100 INFO instance/beat.go:468 Home path: [/home/amishra/pubsubbeat-linux-amd64] Config path: [/home/amishra/pubsubbeat-linux-amd64] Data path: [/home/amishra/pubsubbeat-linux-amd64/data] Logs path: [/home/amishra/pubsubbeat-linux-amd64/logs]
2019-05-23T14:42:19.949+0100 DEBUG [beat] instance/beat.go:495 Beat metadata path: /home/amishra/pubsubbeat-linux-amd64/data/meta.json
2019-05-23T14:42:19.949+0100 INFO instance/beat.go:475 Beat UUID: 4bd6119e-603a-426c-9d5b-6ac588bb000e
2019-05-23T14:42:19.949+0100 INFO instance/beat.go:213 Setup Beat: pubsubbeat; Version: 6.2.2
2019-05-23T14:42:19.949+0100 DEBUG [beat] instance/beat.go:230 Initializing output plugins
2019-05-23T14:42:19.949+0100 DEBUG [processors] processors/processor.go:49 Processors:
2019-05-23T14:42:19.952+0100 INFO pipeline/module.go:76 Beat name: allspark
2019-05-23T14:42:19.952+0100 INFO [PubSub: dev/elk-logstash-poc/logstash-poc] beater/pubsubbeat.go:54 config retrieved: &{Project:dev Topic:elk-logstash-poc CredentialsFile:/home/amishra/key/key.json Subscription:{Name:logstash-poc RetainAckedMessages:false RetentionDuration:5h0m0s} Json:{Enabled:false AddErrorKey:false}}
I also tried solution 2, but got the error below and haven't been able to resolve it yet:
io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl onError
WARNING: [io.grpc.internal.ManagedChannelImpl-1] Failed to resolve name. status=Status{code=UNAVAILABLE, description=Unable to resolve host pubsub.googleapis.com, cause=java.net.UnknownHostException: pubsub.googleapis.com
Any lead on how to make this work would be a great help.
The issue got resolved in an unexpected way, i.e. by installing and deploying on a fresh machine. The root cause is still unknown.
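Since the warning points at DNS resolution failing for pubsub.googleapis.com, a few standard host-level checks might have narrowed down the root cause (nothing plugin-specific is assumed here):

# Check that the host can resolve and reach the Pub/Sub endpoint
getent hosts pubsub.googleapis.com
nslookup pubsub.googleapis.com
curl -sI https://pubsub.googleapis.com/ | head -n 1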

Error reading from 192.168.1.164:44214: rpc error: code = Canceled desc = context canceled

I got this error when trying to connect peers running on different machines. I found this error in the Docker logs of the orderer. There is also an error in the Docker logs of peer2, which is running on a different machine:
Failed obtaining connection: Could not connect to any of the endpoints: [orderer.example.com:7050]
You can find the orderer.yaml file in the fabric-samples/config folder.
Going through the fields and their respective comments in orderer.yaml and core.yaml can help you understand how to configure the network (orderer/peer).
And here you can find the information related to TLS.
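To make that concrete, the TLS-related fields the answer points to look roughly like this in the sample orderer.yaml; the values shown are the fabric-samples defaults and are illustrative only:

# orderer.yaml (excerpt) -- General.TLS section, fabric-samples defaults
General:
    TLS:
        Enabled: false
        PrivateKey: tls/server.key
        Certificate: tls/server.crt
        RootCAs:
          - tls/ca.crt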

Puppet can't deactivate nodes

I'm using Puppet with PuppetDb. The two are connected and I can see PuppetDb update whenever I add or update a node.
But when I try to deactivate a node with puppet node deactivate nodeName I get back:
Warning: Error connecting to puppetdb on 8081 at route /pdb/cmd/v1?checksum=36a4313be5bac718badc45495f0266bf87c7a806&version=3&certname=v-hub-1.5659710c-33d5-45f2-a477-6ccf1357e1ac.local.dockerapp.io&command=deactivate_node, error message received was 'SSL_connect SYSCALL returned=5 errno=0 state=unknown state'. Failing over to the next PuppetDB server_url in the 'server_urls' list
Error: Failed to execute '/pdb/cmd/v1?checksum=36a4313be5bac718badc45495f0266bf87c7a806&version=3&certname=v-hub-1.5659710c-33d5-45f2-a477-6ccf1357e1ac.local.dockerapp.io&command=deactivate_node' on at least 1 of the following 'server_urls': https://puppetdb:8081
Error: undefined method `[]' for #<Puppet::Util::Log:0x00000003a15178>
Error: Try 'puppet help node deactivate' for usage
Any suggestions on how to debug this? I've tried deleting and regenerating the certificate with puppet cert generate puppetdb. As mentioned, when it comes to creating or updating nodes in PuppetDB there is no problem.
Puppetserver version: 2.7.2
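One way to start debugging is to hit PuppetDB's version endpoint directly with the master's own Puppet certificates, which isolates the SSL handshake from the deactivate command itself. A sketch assuming the default /etc/puppetlabs/puppet/ssl layout, where <certname> is a placeholder:

# Test the TLS handshake against PuppetDB using the Puppet master's certificates
curl -v \
  --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  --cert /etc/puppetlabs/puppet/ssl/certs/<certname>.pem \
  --key /etc/puppetlabs/puppet/ssl/private_keys/<certname>.pem \
  https://puppetdb:8081/pdb/meta/v1/version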

Spark Error: Error downloading resource: SSL connect error

I am attempting to run a job on a Spark cluster set up in Mesos. I can run a job if I copy the jar to the server and then use a file: URL, but I cannot get Spark to download the jar over https:. Every time I do, I get the error below in the stderr file.
I0226 00:11:05.618361 22652 logging.cpp:172] INFO level logging started!
I0226 00:11:05.618552 22652 fetcher.cpp:409] Fetcher Info: ...
I0226 00:11:05.619721 22652 fetcher.cpp:364] Fetching URI 'https://jenkins.company.com/nexus/...
I0226 00:11:05.619738 22652 fetcher.cpp:238] Fetching directly into the sandbox directory
I0226 00:11:05.619751 22652 fetcher.cpp:176] Fetching URI 'https://jenkins.company.com/nexus/...
I0226 00:11:05.619762 22652 fetcher.cpp:126] Downloading resource from 'https://jenkins.company.com/nexus/...
Failed to fetch 'https://jenkins.company.com/nexus/... ': Error downloading resource: SSL connect error
Failed to synchronize with slave (it's probably exited)
I am able to use wget to download the jar from the specified URL. I have also verified that the JDK on the server has the correct certificate for the Nexus server from which I am attempting to download the jar.
I am new to Spark and Mesos and any help resolving this issue would be greatly appreciated.
Did you specify your private Nexus repository with the --repositories flag when starting the application?
I personally never use encryption together with Spark, but from the docs it seems to be possible/necessary to configure it within Spark itself. I guess configuring it just for the JDK is not enough.
See
http://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management
http://spark.apache.org/docs/latest/configuration.html#encryption
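As an illustration of the --repositories suggestion: that flag only affects artifacts resolved via --packages, so the job's jar (or its dependencies) would need to be published to Nexus as Maven artifacts. A sketch with placeholder master URL, repository path, coordinates, and jar path:

# Sketch only: master URL, repository path, coordinates and jar path are placeholders
spark-submit \
  --master mesos://<mesos-master>:5050 \
  --repositories https://jenkins.company.com/nexus/<repo-path> \
  --packages com.example:some-dependency_2.11:1.0.0 \
  --class com.example.Main \
  /path/to/app.jar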
