Kafka Producer - org.apache.kafka.common.serialization.StringSerializer could not be found - apache-spark

I am creating a simple Kafka producer and consumer. I am using kafka_2.11-0.9.0.0. Here is my producer code.
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaProducerTest {

    public static String topicName = "test-topic-2";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(
                    topicName, Integer.toString(i), Integer.toString(i));
            System.out.println(producerRecord);
            producer.send(producerRecord);
        }
        producer.close();
    }
}
While starting the bundle I am facing the below error:
2016-05-20 09:44:57,792 | ERROR | nsole user karaf | ShellUtil | 44 - org.apache.karaf.shell.core - 4.0.3 | Exception caught while executing command
org.apache.karaf.shell.support.MultiException: Error executing command on bundles:
Error starting bundle162: Activator start error in bundle NewKafkaArtifact [162].
at org.apache.karaf.shell.support.MultiException.throwIf(MultiException.java:61)
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:69)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)[44:org.apache.karaf.shell.core:4.0.3]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]
Caused by: java.lang.Exception: Error starting bundle162: Activator start error in bundle NewKafkaArtifact [162].
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)[24:org.apache.karaf.bundle.core:4.0.3]
... 12 more
Caused by: org.osgi.framework.BundleException: Activator start error in bundle NewKafkaArtifact [162].
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2144)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)[24:org.apache.karaf.bundle.core:4.0.3]
... 12 more
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value org.apache.kafka.common.serialization.StringSerializer for configuration key.serializer: Class org.apache.kafka.common.serialization.StringSerializer could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:145)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:317)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:181)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at com.NewKafka.NewKafkaArtifact.KafkaProducerTest.main(KafkaProducerTest.java:25)[162:NewKafkaArtifact:0.0.1.SNAPSHOT]
at com.NewKafka.NewKafkaArtifact.StartKafka.start(StartKafka.java:11)[162:NewKafkaArtifact:0.0.1.SNAPSHOT]
at org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)[org.apache.felix.framework-5.4.0.jar:]
... 16 more
I have tried setting the key.serializer and value.serializer like below:
props.put("key.serializer",StringSerializer.class.getName());
props.put("value.serializer",StringSerializer.class.getName());
I also tried the following, but I am still getting the same error. What am I doing wrong here?
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

It's an issue with the version you are using. It was also suggested to move to version 0.8.2.2_1.
I suggest you adjust the version of Kafka you are using and give it a try.
Code-wise, I cross-checked many code samples on the Kafka dev list and it seems you have written it the right way. The workaround that was suggested for the class-loading error is to clear the thread context class loader first, i.e. Thread.currentThread().setContextClassLoader(null);

I found the reason by reading the Kafka client source code.
The Kafka client uses Class.forName(trimmed, true, Utils.getContextOrKafkaClassLoader()) to get the Class object and then create the instance. The key point is the class loader, which is specified by the last parameter. The implementation of Utils.getContextOrKafkaClassLoader() is:
public static ClassLoader getContextOrKafkaClassLoader() {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    if (cl == null)
        return getKafkaClassLoader();
    else
        return cl;
}
So, by default, the Class object of org.apache.kafka.common.serialization.StringSerializer is loaded by the application class loader. If your target class is not loaded by the application class loader (as in an OSGi container), this problem will happen.
To solve the problem, simply set the context class loader of the current thread to null before creating the KafkaProducer instance, like this:
Thread.currentThread().setContextClassLoader(null);
Producer<String, String> producer = new KafkaProducer<>(props);
I hope my answer lets you know what happened.

The issue appears to be with the class loader, as @Ram Ghadiyaram indicated in his answer. In order to get this working with kafka-clients 2.x, I had to do the following:
public Producer<String, String> createProducer() {
    ClassLoader original = Thread.currentThread().getContextClassLoader();
    Thread.currentThread().setContextClassLoader(null);
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    // ... etc ...
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    Thread.currentThread().setContextClassLoader(original);
    return producer;
}
This allows the system to continue loading additional classes with the original classloader. This was needed in Wildfly/JBoss (the specific app I'm working with is Keycloak).
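If producer construction can fail, a try/finally variant of the same idea (a sketch, not the original answer's code; BOOTSTRAP_SERVERS is the same placeholder constant as above) guarantees the original class loader is restored:

public Producer<String, String> createProducer() {
    ClassLoader original = Thread.currentThread().getContextClassLoader();
    Thread.currentThread().setContextClassLoader(null);
    try {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        // ... remaining producer properties ...
        return new KafkaProducer<>(props);
    } finally {
        // Restore the original context class loader even if the constructor throws.
        Thread.currentThread().setContextClassLoader(original);
    }
}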

Try using these props instead of yours:
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
Here is a full Kafka producer example:
import java.util.Properties;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FxDateProducer {

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("Enter topic name");
            return;
        }
        String topicName = args[0].toString();

        Properties props = new Properties();
        // Assign localhost id
        props.put("bootstrap.servers", "localhost:9092");
        // Set acknowledgements for producer requests.
        props.put("acks", "all");
        // If the request fails, the producer can automatically retry.
        props.put("retries", 0);
        // Specify buffer size in config
        props.put("batch.size", 16384);
        // Wait up to 1 ms before sending, to batch requests
        props.put("linger.ms", 1);
        // buffer.memory controls the total amount of memory available to the producer for buffering.
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>(topicName,
                    Integer.toString(i), Integer.toString(i)));
        }
        System.out.println("Message sent successfully");
        producer.close();
    }
}
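To verify that the messages arrive, a minimal consumer along these lines can be used (a sketch; the topic name, group id and bootstrap address are assumptions chosen to match the question):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FxDateConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic-2"));
            // Poll a few times and print whatever the producer wrote.
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}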

Recently I found the solution. Setting the thread context class loader to null resolved the issue for me. Thanks.
Thread.currentThread().setContextClassLoader(null);
Producer<String, String> producer = new KafkaProducer(props);

It happens because of a Kafka version issue. Make sure you use the correct Kafka version. The version that I used is kafka_2.12-1.0.1.
Also try using the below properties in your code. This fixed my issue.
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
Earlier I was using the below properties, which were causing the issue (note the lowercase 's' in Stringserializer):
//props.put("key.serializer","org.apache.kafka.common.serialization.Stringserializer");
//props.put("value.serializer","org.apache.kafka.common.serialization.Stringserializer");

Related

Message history not getting deserialized

I was trying to send message history via a Kafka-backed message channel, and I am getting an error like below:
Caused by: java.lang.IllegalArgumentException: Incorrect type specified for header 'history'. Expected [class org.springframework.integration.history.MessageHistory] but actual type is [class org.springframework.kafka.support.DefaultKafkaHeaderMapper$NonTrustedHeaderType]
at org.springframework.messaging.MessageHeaders.get(MessageHeaders.java:216)
at org.springframework.integration.history.MessageHistory.write(MessageHistory.java:96)
Environment:
Java version: JDK8
Kafka version: 3.1.0
Spring-boot-starter-integration: 2.6.2 (integration core:5.5.7)
The message is deserialized properly without message history, but not when message history is included.
Here is the configuration that I am setting:
Consumer:
public ConsumerFactory consumerFactory(String groupId, String clientId) {
    Properties consumerProperties = new Properties();
    consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    consumerProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, AUTO_OFFSET_RESET_CONFIG);
    consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    consumerProperties.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId);
    consumerProperties.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, MAX_POLL_INTERVAL_MS_CONFIG);
    consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, MAX_POLL_RECORDS_CONFIG);

    DefaultKafkaConsumerFactory defaultKafkaConsumerFactory = new DefaultKafkaConsumerFactory(consumerProperties);
    JsonDeserializer jsonDeserializer = new JsonDeserializer(GenericMessage.class, JacksonJsonUtils.messagingAwareMapper());
    jsonDeserializer.addTrustedPackages("*");
    defaultKafkaConsumerFactory.setValueDeserializer(jsonDeserializer);
    return defaultKafkaConsumerFactory;
}
Producer:
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> producerConfigMap = new HashMap<>();
    producerConfigMap.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    producerConfigMap.put(ProducerConfig.LINGER_MS_CONFIG, PRODUCER_LINGER_MS_CONFIG);
    producerConfigMap.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, PRODUCER_COMPRESSION_TYPE_CONFIG);
    producerConfigMap.put(ProducerConfig.BATCH_SIZE_CONFIG, PRODUCER_BATCH_SIZE_CONFIG);
    producerConfigMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    JsonSerializer<GenericMessage> jsonSerializer = new JsonSerializer(JacksonJsonUtils.messagingAwareMapper());
    DefaultKafkaProducerFactory defaultKafkaProducerFactory = new DefaultKafkaProducerFactory<>(producerConfigMap);
    defaultKafkaProducerFactory.setValueSerializer(jsonSerializer);
    return defaultKafkaProducerFactory;
}
ConcurrentKafkaListenerContainerFactory:
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        String groupId, String clientId) {
    DefaultKafkaHeaderMapper kafkaHeaderMapper = new DefaultKafkaHeaderMapper();
    kafkaHeaderMapper.addTrustedPackages("org.springframework.integration.history");

    MessagingMessageConverter messagingMessageConverter = new MessagingMessageConverter();
    messagingMessageConverter.setHeaderMapper(kafkaHeaderMapper);

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory(groupId, clientId));
    factory.setMessageConverter(messagingMessageConverter);
    return factory;
}
I have tried adding trusted packages in every possible way, but still, I am getting the above error.
Look at the exception again: org.springframework.kafka.support.DefaultKafkaHeaderMapper$NonTrustedHeaderType. It doesn't say that something is wrong with your (de)serializers. It is a DefaultKafkaHeaderMapper feature that bans that type for you.
You need to supply a DefaultKafkaHeaderMapper with addTrustedPackages("*") on the consumer side. If you use KafkaMessageDrivenChannelAdapter, see its setMessageConverter(MessageConverter messageConverter), which can be populated with a MessagingMessageConverter. And that one has an option, setHeaderMapper(KafkaHeaderMapper headerMapper), where you can set that DefaultKafkaHeaderMapper.
Please raise a GitHub issue, so we can add org.springframework.integration.history to the trusted packages of the default DefaultKafkaHeaderMapper in the KafkaMessageDrivenChannelAdapter.
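For illustration, a minimal sketch of the consumer-side wiring described above (the class and method names and the container argument are assumptions, not code from the original answer):

import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
import org.springframework.kafka.support.converter.MessagingMessageConverter;

public class HistoryAwareAdapterConfig {

    // Assumes a listener container built elsewhere (for example from the factory in the question).
    public KafkaMessageDrivenChannelAdapter<String, String> historyAwareAdapter(
            ConcurrentMessageListenerContainer<String, String> container) {

        // Trust the MessageHistory header type so it is restored instead of NonTrustedHeaderType.
        DefaultKafkaHeaderMapper headerMapper = new DefaultKafkaHeaderMapper();
        headerMapper.addTrustedPackages("org.springframework.integration.history");

        MessagingMessageConverter converter = new MessagingMessageConverter();
        converter.setHeaderMapper(headerMapper);

        KafkaMessageDrivenChannelAdapter<String, String> adapter =
                new KafkaMessageDrivenChannelAdapter<>(container);
        adapter.setMessageConverter(converter);
        return adapter;
    }
}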

Spring Kafka, manual committing in different threads with multiple acks?

I am trying to ack a Kafka message consumed via a batch listener in a separate thread, using @Async for the called method.
@KafkaListener(topics = "${topic.name}", containerFactory = "kafkaListenerContainerFactoryBatch", id = "${kafkaconsumerprefix}")
public void consume(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    records.forEach(record -> asynchttpCaller.posttoHttpsURL(record, ack));
}
and my @Async code is below, where KafkaConsumerException extends BatchListenerFailedException:
@Async
public void posttoHttpsURL(ConsumerRecord<String, String> record, Acknowledgment ack) {
    try {
        // post to http
        ack.acknowledge();
    } catch (Exception ex) {
        throw new KafkaConsumerException("Exception occured in sending via HTTPS", record);
    }
}
with the below configuration:
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxpollRecords);
    return props;
}

@Bean
public ConsumerFactory<Object, Object> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

/**
 * Batch Listener
 */
@Bean
@Primary
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactoryBatch(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        KafkaOperations<? extends Object, ? extends Object> template) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, consumerFactory());
    factory.setBatchListener(true);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);

    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    ExponentialBackOff fbo = new ExponentialBackOff();
    fbo.setMaxElapsedTime(maxElapsedTime);
    fbo.setInitialInterval(initialInterval);
    fbo.setMultiplier(multiplier);
    RecoveringBatchErrorHandler errorHandler = new RecoveringBatchErrorHandler(recoverer, fbo);
    factory.setBatchErrorHandler(errorHandler);

    factory.setConcurrency(setConcurrency);
    return factory;
}
This ack.acknowledge() acknowledges every record in the batch when AckMode is MANUAL_IMMEDIATE, and only acknowledges when all records succeed when AckMode is MANUAL.
The scenario I have is that some HTTP calls in the same batch succeed and some time out. If an errored message has a greater offset than a successful one, even the successful one is not acknowledged and gets duplicated.
I am not sure why BatchListenerFailedException always fails the whole batch even though I pass it the specific record that errored.
Any suggestions on how to implement this?
You should not process asynchronously because offsets could be committed out-of-sequence.
BatchListenerFailedException will only work if thrown on the listener thread.
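For illustration, a sketch of the synchronous shape the answer points to, where the failure is raised on the listener thread so the RecoveringBatchErrorHandler can commit the records before it (httpCaller.posttoHttpsURL is a placeholder for a synchronous version of the HTTPS call, not code from the original post):

@KafkaListener(topics = "${topic.name}", containerFactory = "kafkaListenerContainerFactoryBatch", id = "${kafkaconsumerprefix}")
public void consume(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    for (ConsumerRecord<String, String> record : records) {
        try {
            // Synchronous call on the listener thread.
            httpCaller.posttoHttpsURL(record);
        } catch (Exception ex) {
            // Thrown on the listener thread: the error handler commits the offsets of the
            // records before this one and retries/recovers from the failed record onwards.
            throw new BatchListenerFailedException("Exception occurred in sending via HTTPS", record);
        }
    }
    // Every record succeeded, so the whole batch can be acknowledged.
    ack.acknowledge();
}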

Register Java Class in Flink Cluster

I am running my fat JAR in a Flink cluster; it reads from Kafka and saves to Cassandra. The code is:
final Properties prop = getProperties();
final FlinkKafkaConsumer<String> flinkConsumer = new FlinkKafkaConsumer<>(
        kafkaTopicName, new SimpleStringSchema(), prop);
flinkConsumer.setStartFromEarliest();
final DataStream<String> stream = env.addSource(flinkConsumer);

DataStream<Person> sensorStreaming = stream.flatMap(new FlatMapFunction<String, Person>() {
    @Override
    public void flatMap(String value, Collector<Person> out) throws Exception {
        try {
            out.collect(objectMapper.readValue(value, Person.class));
        } catch (JsonProcessingException e) {
            logger.error("Json Processing Exception", e);
        }
    }
});
savePersonDetails(sensorStreaming);
env.execute();
and the Person POJO contains:
@Column(name = "event_time")
private Instant eventTime;
A codec is required on the Cassandra side to store Instant, as below:
final Cluster cluster = ClusterManager.getCluster(cassandraIpAddress);
cluster.getConfiguration().getCodecRegistry().register(InstantCodec.instance);
When I run it standalone it works fine, but when I run it on a local cluster it throws an error as below:
Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [timestamp <-> java.time.Instant]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:679)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:526)
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:506)
at com.datastax.driver.core.CodecRegistry.access$200(CodecRegistry.java:140)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:211)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:208)
I read the below document about registering serializers:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/custom_serializers.html
but InstantCodec is a third-party one. How can I register it?
I solved the problem. There was a LocalDateTime being emitted, and when I converted it with the same type, the above error occurred. I changed the type to java.util.Date and then it worked.
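For illustration, a sketch of the kind of change the answer describes for the Person POJO (the field name comes from the question; the annotation import and accessors are assumptions):

import java.util.Date;

import com.datastax.driver.mapping.annotations.Column;

public class Person {

    // java.util.Date maps to the Cassandra timestamp type with the driver's built-in codecs,
    // so no extra codec such as InstantCodec has to be registered on the cluster nodes.
    @Column(name = "event_time")
    private Date eventTime;

    public Date getEventTime() {
        return eventTime;
    }

    public void setEventTime(Date eventTime) {
        this.eventTime = eventTime;
    }
}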

Hazelcast IMap TTL Expiry

How can I invoke a method to sync the data to some DB or Kafka once the TTL set during the put method of the IMap class expires?
e.g. IMap.put(key, value, TTL, TimeUnit.SECONDS);
If the above TTL is set to, say, 10 seconds, I must call some store or some mechanism that syncs that key and value to the DB or Kafka in real time. As of now, when I tried the store method, it called the method immediately instead of after the 10-second wait.
You may add an EntryExpiredListener to your map config.
It feeds on two sources of expiration-based eviction: max-idle-seconds and time-to-live-seconds.
Example Listener class:
@Slf4j
public class MyExpiredEntryListener implements EntryExpiredListener<String, String>, MapListener {

    @Override
    public void entryExpired(EntryEvent<String, String> event) {
        log.info("entry Expired {}", event);
    }
}
You can add this config programmatically, or you may set the map config via an XML config file.
Example usage:
public static void main(String[] args) {
    Config config = new Config();
    MapConfig mapConfig = config.getMapConfig("myMap");
    mapConfig.setTimeToLiveSeconds(10);
    mapConfig.setEvictionPolicy(EvictionPolicy.RANDOM);

    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    IMap<String, String> map = hz.getMap("myMap");
    map.addEntryListener(new MyExpiredEntryListener(), true);

    for (int i = 0; i < 100; i++) {
        String uuid = UUID.randomUUID().toString();
        map.put(uuid, uuid);
    }
}
You will see the logs like below when running this implementation.
entry Expired EntryEvent{entryEventType=EXPIRED, member=Member [192.168.1.1]:5701 - ca76c6d8-abe0-4efe-a6a6-24330657675b this, name='myMap', key=70ee594c-ffea-4584-aefe-1148b9fcdf9f, oldValue=70ee594c-ffea-4584-aefe-1148b9fcdf9f, value=null, mergingValue=null}
Also, you can use other entry listeners according to your requirements.
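Since the goal in the question is to push the expired entry to Kafka, the listener body can forward the event instead of just logging it. A rough sketch, assuming a pre-configured KafkaProducer and a hypothetical topic name:

public class ExpiredEntryToKafkaListener implements EntryExpiredListener<String, String> {

    // Assumed to be configured elsewhere with bootstrap.servers and String serializers.
    private final KafkaProducer<String, String> producer;

    public ExpiredEntryToKafkaListener(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    @Override
    public void entryExpired(EntryEvent<String, String> event) {
        // On expiry the current value is null, so forward the old value that just expired.
        producer.send(new ProducerRecord<>("expired-entries", event.getKey(), event.getOldValue()));
    }
}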

EventHub 'send' in Java SDK hangs, send never occurs

I'm trying to use the sample code for sending a simple event to an Azure EventHub (https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send). It seems to go fine, I'm all configured, but when I get to the:
ehClient.sendSync(sendEvent);
Part of the code, it just hangs there and never gets past the send call. I'm using a personal computer and have no firewall running. Is there some networking configuration I have to make, maybe in Azure, to allow this simple send to occur? Has anyone had any luck making this work?
final ConnectionStringBuilder connStr = new ConnectionStringBuilder()
        .setNamespaceName("mynamespace")
        .setEventHubName("myeventhubname")
        .setSasKeyName("mysaskename")
        .setSasKey("mysaskey");

final Gson gson = new GsonBuilder().create();

final ExecutorService executorService = Executors.newSingleThreadExecutor();
final EventHubClient ehClient = EventHubClient.createSync(connStr.toString(), executorService);
print("Event Hub Client Created");

try {
    for (int i = 0; i < 100; i++) {
        String payload = "Message " + Integer.toString(i);
        byte[] payloadBytes = gson.toJson(payload).getBytes(Charset.defaultCharset());
        EventData sendEvent = EventData.create(payloadBytes);
        // HANGS HERE - NEVER GETS PAST THIS CALL
        ehClient.sendSync(sendEvent);
    }
} finally {
    ehClient.closeSync();
    executorService.shutdown();
}
Try using a different executor service, for example a work-stealing pool:
final ExecutorService executorService = Executors.newWorkStealingPool();
The below code should work for you:
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-eventhubs</artifactId>
<version>2.0.0</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-eventhubs-spark_2.11</artifactId>
<version>2.3.7</version>
</dependency>
import java.util.concurrent.{Executors, ScheduledExecutorService}

import com.google.gson.Gson
import com.microsoft.azure.eventhubs.{EventData, EventHubClient}

object callToPushMessage {

  private var executorService: ScheduledExecutorService = null

  def writeMsgToSink(message: PushMessage): Unit = {
    val connStr = ConnectionStringBuilder()
      .setNamespaceName("namespace")
      .setEventHubName("name")
      .setSasKeyName("policyname")
      .setSasKey("policykey").build

    // The executor handles all asynchronous tasks and is passed to the EventHubClient instance.
    // This enables the user to segregate their thread pool based on the workload.
    // This pool can then be shared across multiple EventHubClient instances.
    // The following code uses a single-thread executor, as there is only one EventHubClient instance
    // handling different flavors of ingestion to Event Hubs here.
    if (executorService == null) {
      executorService = Executors.newSingleThreadScheduledExecutor()
    }

    val ehclient = EventHubClient.createSync(connStr, executorService)
    try {
      val jsonMessage = new Gson().toJson(message, classOf[PushMessage])
      val eventData: EventData = EventData.create(jsonMessage.getBytes())
      ehclient.sendSync(eventData)
    } finally {
      ehclient.close()
      executorService.shutdown()
    }
  }
}
