Host resolution error while using node-rdkafka - node.js

I'm using node-rdkafka in a Node.js application. The consumer hangs indefinitely without pulling any messages from Kafka (it works on localhost).
It emits the error below:
{ Error: Local: Host resolution failure
origin: 'local',
message: 'host resolution failure',
code: -1,
errno: -1,
stack: 'Error: Local: Host resolution failure' }
The application works up to the point of receiving data from Kafka. The Kafka instance itself is fine, which I validated by producing and consuming messages with the console tools.
Any help with debugging why this is occurring is much appreciated.
Sample consumer code here - https://github.com/Blizzard/node-rdkafka/blob/master/examples/consumer-flow.md

This issue happens because your client and the broker are on different networks.
The simple hack is to add a hosts entry for the hostname used in advertised.listeners.
For example,
advertised.listeners=PLAINTEXT://kafka:9092
Then add an entry in /etc/hosts with your Kafka broker's IP. For example, if the broker's IP is 192.168.1.1:
192.168.1.1 kafka
You can use the kafkacat utility to check your broker's advertised address.
kafkacat -b kafka:9092 -L
It will return metadata about the brokers.
Check whether the returned broker address is reachable from your machine.
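As a quick reachability check (assuming the broker advertises itself as kafka:9092 as above, and you have netcat installed), you can test the advertised address directly from the client machine:
nc -vz kafka 9092
If name resolution or the TCP connection fails here, node-rdkafka will fail in the same way.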
For a better understanding of this issue, you can refer to https://www.confluent.io/blog/kafka-listeners-explained/

I had this exact problem when running Kafka locally using the quick start instructions from https://kafka.apache.org/quickstart
For me, adding the following two lines to config/server.properties before starting the Kafka server solved the issue:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
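For context, listeners controls which interface the broker binds to, while advertised.listeners is the address the broker hands back to clients in its metadata; if clients cannot resolve that address, you get exactly the Host resolution failure above. If the client runs on a different machine than the broker, a commonly used variant is (the hostname below is a placeholder for a name your clients can actually resolve):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://your-broker-hostname:9092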

Related

Could not get connection while getPartitionedTopicMetadata - io.netty.channel.ConnectTimeoutException: connection timed out

I have a basic Pulsar app, and when I try to connect to Pulsar, I get this exception:
2021-03-10 14:38:26.107 WARN 7 --- [r-client-io-1-1] o.a.pulsar.client.impl.ConnectionPool : Failed to open connection to my-pulsar-server-ms-tls.domain.com:6651 : io.netty.channel.ConnectTimeoutException: connection timed out: my-pulsar-server-ms-tls.domain.com/10.80.13.38:6651
2021-03-10 14:38:26.212 WARN 7 --- [al-listener-3-1] o.a.pulsar.client.impl.PulsarClientImpl : [topic: persistent://myTenant/myNamespace/myTopic] Could not get connection while getPartitionedTopicMetadata -- Will try again in 100 ms
My Pulsar client is pretty basic:
PulsarClient.builder()
    .serviceUrl(serviceUrl)
    .authentication(AuthenticationFactory.token(authToken))
    .tlsTrustCertsFilePath(serverCertificateFilePath.toString())
    .enableTlsHostnameVerification(false)
    .allowTlsInsecureConnection(false)
    .build();
The producer is also pretty basic and looks like this:
pulsarClient.newProducer(Schema.STRING)
    .topic(topic)
    .create();
I've verified that the token and TLS cert are correct. I've also tried connecting a consumer from this same environment and got a similar exception, and I know that others with the same code are able to connect to the same Pulsar cluster from other environments. What is the issue?
Your connection is getting blocked by a firewall or network issue.
Verify that you can establish a connection to your endpoint my-pulsar-server-ms-tls.domain.com:6651 from your environment.
If you're able to run a network packet dump (like tcpdump), that should make it obvious if you're not able to establish a connection.
You can also try running curl my-pulsar-server-ms-tls.domain.com:6651; if you get back some HTML, that means you were able to reach the server. However, if you get Could not resolve host, then you were blocked by the network configuration (such as a missing route) or a firewall.
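Since 6651 is typically Pulsar's TLS (pulsar+ssl) port, another quick check, if you have OpenSSL available, is to attempt a TLS handshake against it directly:
openssl s_client -connect my-pulsar-server-ms-tls.domain.com:6651
If this also times out, the problem is at the network/firewall level rather than in your Pulsar client code.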

"Error: Key not loaded" in h2o deployed through a K3s cluster, using python3 client

I can confirm the 3-replica cluster of h2o inside K3s is correctly deployed, as executing h2o.init(ip="x.x.x.x") in the Python 3 interpreter works as expected. I followed the instructions noted here: https://www.h2o.ai/blog/running-h2o-cluster-on-a-kubernetes-cluster/
Nevertheless, I had to modify the service.yaml and comment out the line that says clusterIP: None, as K3s was complaining about its inability to set the clusterIP to None. Even so, I can certify it is working correctly, and I am able to use an external IP to connect to the cluster.
If I try to load the dataset using the h2o cluster inside the K3s cluster, following the exact same steps described here http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html, this is the output that I get:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Key not loaded: Key<Frame> https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv
Request: POST /3/ParseSetup
data: {'check_header': '0', 'source_frames': '["https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv"]'}
The same error occurs if I use the h2o.upload_file("x.csv") method.
There is a clue about what may be happening here: Key not loaded: Key<Frame> while POSTing source frame through ParseSetup in H2O API call, but I am not using curl, and I cannot find any parameter that could help me overcome this issue: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/h2o.html?highlight=import_file#h2o.import_file
I need to use the Python client inside the same K3s cluster for different technical reasons, so I am not able to launch either Flow or Firebug to find out what may be happening.
I can confirm it is working correctly when I simply issue a h2o.init(), using the local Java instance.
UPDATE 1:
I have tried in different K3s clusters without success. I changed the service.yaml to a NodePort, and now this is the error traceback:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Job is missing
Request: GET /3/Jobs/$03010a2a016132d4ffffffff$_a2366be93ec99a78d7bc161de8c54d67
UPDATE 2:
I have tried using different services (NodePort, LoadBalancer, ClusterIP) and none of them work. I have also tried using Minikube with the official image, and with a custom image made by me, without success. I suspect this is something related to either h2o itself or the clustering between pods. I will keep digging; hopefully there will be some gold in it.
UPDATE 3:
I also found out that the blog post about running H2O in Docker (https://www.h2o.ai/blog/h2o-docker/) is really outdated, and the Dockerfile on GitHub does not work either (I changed it to uncomment the ENTRYPOINT section, without success): https://github.com/h2oai/h2o-3/blob/master/Dockerfile
Even so, I tried the custom image I built for h2o-k8s and it works seamlessly in pure Docker. I am wondering why it still does not work in K8s...
UPDATE 4:
I have tried modifying the environment variable called H2O_KUBERNETES_SERVICE_DNS without success.
In the meantime, the cluster became unavailable, that is, the readinessProbes would not complete successfully. No matter what I change now, it does not work.
I spun up a local K3d cluster to see what happened, and surprisingly the readinessProbes were not failing, using v3.30.0.6. I then started testing it with R instead of Python. I am glad I tried, because I may have pinpointed what was wrong: there is a version mismatch between the client and the server. So I updated the image accordingly, to v3.30.0.1.
But now, again, the readinessProbe is not working in my K3d cluster, so I am unable to test it.
It seems to be working now: R client version 3.30.0.1 with server version 3.30.0.1. I also tried Python client version 3.30.0.7 with server version 3.30.0.7 and it started working. Marvelous. The problem was caused by a version mismatch between the client and the server, as the Python client was updated to 3.30.0.7 while the latest server image for Docker was 3.30.0.6.
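For reference, here is a minimal sketch of the version check that would have caught this early (the IP and port are placeholders for wherever the K3s service exposes H2O):
import h2o

# Connect to the remote H2O cluster exposed by the K3s service
# (placeholder address; use your service IP / NodePort).
h2o.init(ip="x.x.x.x", port=54321)

# The client and server versions must match; a mismatch can surface as
# confusing server-side errors such as the ones above.
client_version = h2o.__version__
server_version = h2o.cluster().version
print("client:", client_version, "server:", server_version)
if client_version != server_version:
    raise RuntimeError("Version mismatch; install a matching client, "
                       "e.g. pip install h2o==" + server_version)

# With matching versions the import works again.
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")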

How to run a http server on EMR master node of a Spark application

I have a Spark streaming application (Spark 2.4.4) running on AWS EMR 5.28.0. In the driver application on the master node, besides setting up the Spark streaming job, I am also running an HTTP server (Akka HTTP 10.1.6) which can query the driver application for data. I bind to port 6161 as follows:
val bindingFuture: Future[ServerBinding] = Http().bindAndHandle(myapiroutes, "127.0.0.1", 6161)
try {
  bindingFuture.map { serverBinding =>
    log.info(s"AlertRestApi bound to ${serverBinding.localAddress}")
  }
} catch {
  case ex: Exception => {
    log.error(s"Failed to bind to 127.0.0.1:6161")
    system.terminate()
  }
}
then I start spark streaming:
ssc.start()
When I test this on local spark, I am able to access http://localhost:6161/myapp/v1/data and get data from spark streaming, everything is good so far.
However, when I run this application on AWS EMR, I cannot access port 6161. I ssh into the driver node and try to curl my URL, and it gives me this error message:
[hadoop#ip-xxx-xx-xx-x ~]$ curl http://xxx.xx.xx.x:6161/myapp/v1/data
curl: (7) Failed to connect to xxx.xx.xx.x port 6161: Connection refused
When I look at the log on the driver node, I do see the port is bound (why does the host show 0:0:0:0:0:0:0:0? I don't know; it is the same way in my dev testing, where it works, I see the same log and am able to access the URL):
20/04/13 16:53:26 INFO MyApp: MyRestApi bound to /0:0:0:0:0:0:0:0:6161
So my question is: what should I do so that I can access the API at port 6161 on the driver node? I realize the YARN resource manager may be involved, but I know nothing about YARN resource management that could point me to where to investigate.
Please help. Thanks.
Are you using 127.0.0.1 as the host name, or 0.0.0.0?
127.0.0.1 will work on your local system but not in AWS, because it is the loopback address and only accepts connections from the same host. In that case you need to use 0.0.0.0 as the host name in Http().bindAndHandle(...) so the server listens on all interfaces.
Also make sure the port is open and access is allowed from your IP. To do that, go to the inbound rules of your instance's security group and add port 6161 under a custom TCP rule, if not done already.
Let me know if this makes any difference
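If you prefer the command line, the equivalent security-group change looks roughly like this (the security group ID and source CIDR are placeholders for your own values):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6161 --cidr 203.0.113.0/24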

Kafka Zookeeper Security Authentication & Authorization (JAAS) Using SASL

Regarding Kafka-ZooKeeper security using DIGEST-MD5 authentication, I am trying to rotate/change the credentials/password in both the server (ZooKeeper) and client (Kafka) JAAS config files.
We have a cluster of 3 ZooKeeper nodes and 3 Kafka broker nodes with the JAAS configuration files below.
kafka.conf
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="password";
};
zookeeper.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="password";
};
To rotate, we do a rolling restart of the server (ZooKeeper) instances after updating the credential (password). Then, during the rolling restart of the clients (Kafka instances), after updating the same credential/password for the super user one at a time, we notice
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
these INFO-level messages in the server logs, which eventually result in an unclean shutdown and restart of the broker, impacting writes and reads for longer than expected. I have tried commenting out requireClientAuthScheme=sasl in the ZooKeeper zoo.cfg (https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication) to allow any client to authenticate to ZooKeeper, but with no success.
Also, as an alternative approach, I tried to update the credential/password in the JAAS config dynamically using sasl.jaas.config, and I get the same exception documented in this JIRA (reference: https://issues.apache.org/jira/browse/KAFKA-8010).
Does anyone have any suggestions? Thanks in advance.

WSO2 BAM with offset 1. Cassandra error

I have a problem on startup of the BAM server.
My machine has the IP 1.33.33.127 and the hostname "srv-lc-presen".
I have configured it using this document:
Monitoring and statistics.
I have modified the port offset in carbon.xml and set it to 1.
I've also modified master-datasources.xml and set
WSO2BAM_CASSANDRA_DATASOURCE url = jdbc:cassandra://srv-lc-presen:9161/EVENT_KS
WSO2BAM_UTIL_DATASOURCE url = jdbc:cassandra://srv-lc-presen:9161/BAM_UTIL_KS
I have tried with localhost, 1.33.33.127 and srv-lc-presen.
I always get the same error:
ERROR {me.prettyprint.cassandra.connection.HConnectionManager} - Could not start connection pool for host srv-lc-presen(1.33.33.127):9161
[2014-05-07 12:04:24,983] WARN {me.prettyprint.cassandra.connection.CassandraHostRetryService} - Downed srv-lc-presen(1.33.33.127):9161 host still appears to be down: Unable to open transport to srv-lc-presen(1.33.33.127):9161 , java.net.ConnectException: Connection refused
[2014-05-07 12:04:24,987] ERROR {org.wso2.carbon.bam.notification.task.internal.NotificationDispatchComponent} - All host pools marked down. Retry burden pushed out to client.
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:393)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:249)
at me.prettyprint.cassandra.service.ThriftCluster.addKeyspace(ThriftCluster.java:168)
at org.wso2.carbon.bam.datasource.utils.DataSourceUtils.createKeyspaceIfNotExist(DataSourceUtils.java:80)
at org.wso2.carbon.bam.datasource.utils.DataSourceUtils.getClusterKeyspaceFromRDBMSConfig(DataSourceUtils.java:92)
at org.wso2.carbon.bam.datasource.utils.DataSourceUtils.getClusterKeyspaceFromRDBMSDataSource(DataSourceUtils.java:96)
NEW information
I have tried to reconfigure and I cannot find the problem.
I see this error in the BAM console:
[2014-05-08 09:10:57,531] ERROR {me.prettyprint.cassandra.connection.HConnectionManager} - Could not start connection pool for host 1.33.33.127(1.33.33.127):9161
[2014-05-08 09:10:57,564] ERROR {org.wso2.carbon.bam.notification.task.internal.NotificationDispatchComponent} - All host pools marked down. Retry burden pushed out to client.
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:393)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:249)
at me.prettyprint.cassandra.service.ThriftCluster.addKeyspace(ThriftCluster.java:168)
at org.wso2.carbon.bam.datasource.utils.DataSourceUtils.createKeyspaceIfNotExist(DataSourceUtils.java:80)
at org.wso2.carbon.bam.datasource.utils.DataSourceUtils.getClusterKeyspaceFromRDBMSConfig(DataSourceUtils.java:92)
at org.wso2.carbon.bam.datasource.utils.DataSourceUtils.getClusterKeyspaceFromRDBMSDataSource(DataSourceUtils.java:96)
at org.wso2.carbon.bam.notification.task.internal.NotificationDispatchComponent.initRecordStore(NotificationDispatchComponent.java:72)
at org.wso2.carbon.bam.notification.task.internal.NotificationDispatchComponent.activate(NotificationDispatchComponent.java:64)
And this in the API Manager console:
[2014-05-08 09:14:52,096] ERROR - ReceiverGroup No receiver is reachable at reconnection, can't publish the events
[2014-05-08 09:14:55,102] ERROR - AsyncDataPublisher Reconnection failed for for tcp://1.33.33.127:7612/
If you are not using the notification feature, please start the server with this command (or edit wso2server.sh accordingly): sh wso2server.sh -Ddisable.notification.task
https://docs.wso2.org/display/BAM240/Notifications
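If you do use the notification feature, the Connection refused errors suggest that nothing is listening on port 9161 (Cassandra's 9160 plus the offset of 1). A quick check on the BAM host, assuming netstat is available, is:
netstat -an | grep 9161
which should show a LISTEN entry once the embedded Cassandra has started with the port offset applied.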
