EventHub 'send' in Java SDK hangs, send never occurs - azure

I'm trying to use the sample code for sending a simple event to an Azure Event Hub (https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send). Everything seems to go fine and I'm all configured, but when I get to the
ehClient.sendSync(sendEvent);
part of the code, it just hangs there and never gets past the sendSync call. I'm using a personal computer and have no firewall running. Is there some networking configuration I have to make, maybe in Azure, to allow this simple send to occur? Has anyone had any luck making this work?
final ConnectionStringBuilder connStr = new ConnectionStringBuilder()
        .setNamespaceName("mynamespace")
        .setEventHubName("myeventhubname")
        .setSasKeyName("mysaskename")
        .setSasKey("mysaskey");

final Gson gson = new GsonBuilder().create();
final ExecutorService executorService = Executors.newSingleThreadExecutor();
final EventHubClient ehClient = EventHubClient.createSync(connStr.toString(), executorService);
print("Event Hub Client Created");

try {
    for (int i = 0; i < 100; i++) {
        String payload = "Message " + Integer.toString(i);
        byte[] payloadBytes = gson.toJson(payload).getBytes(Charset.defaultCharset());
        EventData sendEvent = EventData.create(payloadBytes);

        // HANGS HERE - NEVER GETS PAST THIS CALL
        ehClient.sendSync(sendEvent);
    }
} finally {
    ehClient.closeSync();
    executorService.shutdown();
}

Try using a different executor service, for example a work-stealing pool:
final ExecutorService executorService = Executors.newWorkStealingPool();
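For completeness, here is a minimal sketch of that change applied to the loop from the question. It is a sketch rather than a verified fix: it assumes the same azure-eventhubs version and createSync overload the question already compiles against, and the class name and connection values are just the question's placeholders. Waiting on the CompletableFuture returned by send() with a bound also turns a silent hang into a TimeoutException you can log:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.microsoft.azure.eventhubs.ConnectionStringBuilder;
import com.microsoft.azure.eventhubs.EventData;
import com.microsoft.azure.eventhubs.EventHubClient;

public class SendWithWorkStealingPool {
    public static void main(String[] args) throws Exception {
        final ConnectionStringBuilder connStr = new ConnectionStringBuilder()
                .setNamespaceName("mynamespace")
                .setEventHubName("myeventhubname")
                .setSasKeyName("mysaskename")
                .setSasKey("mysaskey");

        // Work-stealing pool instead of the single-thread executor, as suggested above.
        final ExecutorService executorService = Executors.newWorkStealingPool();
        final EventHubClient ehClient = EventHubClient.createSync(connStr.toString(), executorService);
        try {
            for (int i = 0; i < 100; i++) {
                byte[] payloadBytes = ("Message " + i).getBytes(StandardCharsets.UTF_8);
                EventData sendEvent = EventData.create(payloadBytes);
                // send() returns a CompletableFuture; a bounded get() turns a silent
                // hang into a TimeoutException that can be logged and investigated.
                ehClient.send(sendEvent).get(60, TimeUnit.SECONDS);
            }
        } finally {
            ehClient.closeSync();
            executorService.shutdown();
        }
    }
}

If that timeout fires consistently, it points at connectivity from your machine to the namespace rather than at the client code itself.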

The code below should work for you.
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-eventhubs</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-eventhubs-spark_2.11</artifactId>
    <version>2.3.7</version>
</dependency>
import java.util.concurrent.{Executors, ScheduledExecutorService}

import com.google.gson.Gson
import com.microsoft.azure.eventhubs.{ConnectionStringBuilder, EventData, EventHubClient}

object callToPushMessage {
  private var executorService: ScheduledExecutorService = null

  def writeMsgToSink(message: PushMessage): Unit = {
    val connStr = new ConnectionStringBuilder()
      .setNamespaceName("namespace")
      .setEventHubName("name")
      .setSasKeyName("policyname")
      .setSasKey("policykey")
      .toString

    // The executor handles all asynchronous tasks and is passed to the EventHubClient instance.
    // This enables the user to segregate their thread pool based on the workload.
    // This pool can then be shared across multiple EventHubClient instances.
    // The following code uses a single-thread executor, as there is only one EventHubClient instance
    // handling different flavors of ingestion to Event Hubs here.
    if (executorService == null) {
      executorService = Executors.newSingleThreadScheduledExecutor()
    }

    val ehclient = EventHubClient.createSync(connStr, executorService)
    try {
      val jsonMessage = new Gson().toJson(message, classOf[PushMessage])
      val eventData: EventData = EventData.create(jsonMessage.getBytes())
      ehclient.sendSync(eventData)
    } finally {
      ehclient.closeSync()
      executorService.shutdown()
    }
  }
}

Related

AzurePageable breaks operation c#

I am having difficulties working with Azure Pageable, and I have these difficulties in several places.
What happens is, if I interact with an Azure Pageable and something goes wrong, the thread just doesn't return.
For example, yesterday I had too many requests to Azure App Configuration and the following piece of code would just hang.
// Get all the settings from Azure App Configuration
private static Dictionary<string, string> GetAllXYZSettings(ConfigurationClient client)
{
    var settingsSelector = new SettingSelector() { KeyFilter = "xyz:*" };
    var settings = client.GetConfigurationSettings(settingsSelector);
    Dictionary<string, string> config = new();

    foreach (dynamic setting in settings)
    {
        string settingValue = (string)setting.Value;
        if (!string.IsNullOrEmpty(settingValue))
        {
            config.Add(setting.Key, settingValue);
        }
    }
    return config;
}
I tried several things and wrapped everything in a try/catch, but my thread would just not return.
How should I read the config so I can do some proper error handling?
The same behaviour is also observed when reading the Service Bus queues; wrapping in try/catch didn't work.

WCF ChannelFactory use in multithread request timeout

I have a problem using WCF ChannelFactory in a multithreading environment. When I call a method from the ChannelFactory, I keep getting timeouts on my calls.
private static ChannelFactory<Foo> factory = null;
private static object lockObj = new object();
...
in my thread method:
Foo obj;
lock (lockObj)
{
    if (factory == null)
    {
        factory = new ChannelFactory<Foo>(basicBinding, new EndpointAddress(new Uri(u)));
    }
    obj = factory.CreateChannel();
}
obj.doSomething();
obj.Close();
...
When the code executes obj.doSomething(), I get a timeout exception and I don't understand why. Worse, sometimes the call passes without problems and I get the expected results.
I also noted that only 2 calls are made when the program executes.
OK, I do not understand why, but it seems that the BackgroundWorker we use to manage the threads caps the number of connections at 2. The Thread class, though, does not block the connections.

Azure Service Bus SessionHandler issue with partitioned queue

I ran into an issue with IMessageSessionAsyncHandlerFactory where new instances of IMessageSessionAsyncHandler are not created when the volume of writing goes down to 0 and then back up to a normal level.
To be more precise, I'm using SessionHandlerOptions with a value of 500 for MaxConcurrentSessions. This allows reading at a speed of more than 1k msg/s.
The queue I'm reading from is a partitioned queue.
The volume of messages in the queue is rather constant, but from time to time it gets down to 0. When the volume gets back to the normal level, the SessionFactory is not spawning any handlers so I'm not able to read messages anymore. It's like the sessions were not correctly recycled or are held into a sort of continuous waiting.
Here is the code for the factory registering:
private void RegisterHandler()
{
    var sessionHandlerOptions = new SessionHandlerOptions
    {
        AutoRenewTimeout = TimeSpan.FromMinutes(1),
        MessageWaitTimeout = TimeSpan.FromSeconds(1),
        MaxConcurrentSessions = 500
    };
    _queueClient.RegisterSessionHandlerFactoryAsync(new SessionHandlerFactory(_callback), sessionHandlerOptions);
}
The factory class:
public class SessionHandlerFactory : IMessageSessionAsyncHandlerFactory
{
    private readonly Action<BrokeredMessage> _callback;

    public SessionHandlerFactory(Action<BrokeredMessage> callback)
    {
        _callback = callback;
    }

    public IMessageSessionAsyncHandler CreateInstance(MessageSession session, BrokeredMessage message)
    {
        return new SessionHandler(session.SessionId, _callback);
    }

    public void DisposeInstance(IMessageSessionAsyncHandler handler)
    {
        var disposable = handler as IDisposable;
        disposable?.Dispose();
    }
}
And the handler:
public class SessionHandler : MessageSessionAsyncHandler
{
    private readonly Action<BrokeredMessage> _callback;

    public SessionHandler(string sessionId, Action<BrokeredMessage> callback)
    {
        SessionId = sessionId;
        _callback = callback;
    }

    public string SessionId { get; }

    protected override async Task OnMessageAsync(MessageSession session, BrokeredMessage message)
    {
        try
        {
            _callback(message);
        }
        catch (Exception ex)
        {
            Logger.Error(...);
        }
    }
}
I can see that the session handlers are closed and that the factories are disposed when the writing/reading is at a normal level. However, once the queue empties, no new session handlers are created. Is there a policy for allocating session IDs that forbids reallocating the same sessions after a period of inactivity?
Edit 1:
I'm adding two pictures to illustrate the behavior. When the writer is stopped and restarted, the running reader is not able to read as much as before, and the number of sessions created after that moment is also much lower than before.
When using IMessageSessionAsyncHandlerFactory to control how the IMessageSessionAsyncHandler instances are created, you could try to log the creation and destruction of all of your IMessageSessionAsyncHandler instances.
Based on your code, I created a console application to reproduce this issue on my side. Here is my code snippet for initializing the queue client and handling messages:
InitializeReceiver
static void InitializeReceiver(string connectionString, string queuePath)
{
    _queueClient = QueueClient.CreateFromConnectionString(connectionString, queuePath, ReceiveMode.PeekLock);
    var sessionHandlerOptions = new SessionHandlerOptions
    {
        AutoRenewTimeout = TimeSpan.FromMinutes(1),
        MessageWaitTimeout = TimeSpan.FromSeconds(5),
        MaxConcurrentSessions = 500
    };
    _queueClient.RegisterSessionHandlerFactoryAsync(new SessionHandlerFactory(OnMessageHandler), sessionHandlerOptions);
}
OnMessageHandler
static void OnMessageHandler(BrokeredMessage message)
{
    var body = message.GetBody<Stream>();
    dynamic recipeStep = JsonConvert.DeserializeObject(new StreamReader(body, true).ReadToEnd());
    lock (Console.Out)
    {
        Console.ForegroundColor = ConsoleColor.Cyan;
        Console.WriteLine(
            "Message received: \n\tSessionId = {0}, \n\tMessageId = {1}, \n\tSequenceNumber = {2}," +
            "\n\tContent: [ title = {3} ]",
            message.SessionId,
            message.MessageId,
            message.SequenceNumber,
            recipeStep.title);
        Console.ResetColor();
    }
    Task.Delay(TimeSpan.FromSeconds(3)).Wait();
    message.Complete();
}
In my test, the SessionHandler worked as expected when the volume of messages in the queue went from normal to zero and back from zero to normal for some time.
I also tried to leverage QueueClient.RegisterSessionHandlerAsync to test this issue and it works as well. Additionally, I found a GitHub sample about Service Bus sessions that you could refer to.

How to integration test Azure Web Jobs?

I have an ASP.NET Web API application with a supporting Azure WebJob whose functions are triggered by messages added to a storage queue by the API's controllers. Testing the Web API is simple enough using OWIN, but how do I test the WebJobs?
Do I run a console app in memory in the test runner? Execute the function directly (that wouldn't be a proper integration test though)? It is a continuous job, so the app doesn't exit. To make matters worse, Azure WebJob functions return void, so there's no output to assert.
There is no need to run a console app in memory. You can run a JobHost in the memory of your integration test:
var host = new JobHost();
You could use host.Call() or host.RunAndBlock(). You would need to point to an Azure storage account, as WebJobs are not supported on localhost.
It depends on what your function is doing, but you could manually add a message to a queue, add a blob, or whatever. You could assert by querying the storage where your WebJob wrote its result, etc.
While @boris-lipschitz is correct, when your job is continuous (as the OP says it is), you can't do anything after calling host.RunAndBlock().
However, if you run the host in a separate thread, you can continue with the test as desired, although you have to do some kind of polling at the end of the test to know when the job has run.
Example
Function to be tested (a simple copy from one blob to another, triggered by a created blob):
public void CopyBlob(
    [BlobTrigger("input/{name}")] TextReader input,
    [Blob("output/{name}")] out string output)
{
    output = input.ReadToEnd();
}
Test function:
[Test]
public void CopyBlobTest()
{
    var blobClient = GetBlobClient("UseDevelopmentStorage=true;");

    // Start host in separate thread
    var thread = new Thread(() =>
    {
        Thread.CurrentThread.IsBackground = true;
        var host = new JobHost();
        host.RunAndBlock();
    });
    thread.Start();

    // Trigger job by writing some content to a blob
    using (var stream = new MemoryStream())
    using (var stringWriter = new StreamWriter(stream))
    {
        stringWriter.Write("TestContent");
        stringWriter.Flush();
        stream.Seek(0, SeekOrigin.Begin);
        blobClient.UploadStream("input", "blobName", stream);
    }

    // Check every second for up to 20 seconds, to see if blob have been created in output and assert content if it has
    var maxTries = 20;
    while (maxTries-- > 0)
    {
        if (!blobClient.Exists("output", "blobName"))
        {
            Thread.Sleep(1000);
            continue;
        }
        using (var stream = blobClient.OpenRead("output", "blobName"))
        using (var streamReader = new StreamReader(stream))
        {
            Assert.AreEqual("TestContent", streamReader.ReadToEnd());
        }
        break;
    }
}
I've been able to simulate this really easily by simply doing the following, and it seems to work fine for me:
private JobHost _webJob;

[OneTimeSetUp]
public void StartupFixture()
{
    _webJob = Program.GetHost();
    _webJob.Start();
}

[OneTimeTearDown]
public void TearDownFixture()
{
    _webJob?.Stop();
}
Where the WebJob code looks like:
public class Program
{
    public static void Main()
    {
        var host = GetHost();
        host.RunAndBlock();
    }

    public static JobHost GetHost()
    {
        ...
    }
}

Kafka Producer - org.apache.kafka.common.serialization.StringSerializer could not be found

I am creating a simple Kafka producer and consumer. I am using kafka_2.11-0.9.0.0. Here is my producer code:
public class KafkaProducerTest {

    public static String topicName = "test-topic-2";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        Producer<String, String> producer = new KafkaProducer(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(
                    topicName, Integer.toString(i), Integer.toString(i));
            System.out.println(producerRecord);
            producer.send(producerRecord);
        }
        producer.close();
    }
}
While starting the bundle I am facing the below error:
2016-05-20 09:44:57,792 | ERROR | nsole user karaf | ShellUtil | 44 - org.apache.karaf.shell.core - 4.0.3 | Exception caught while executing command
org.apache.karaf.shell.support.MultiException: Error executing command on bundles:
Error starting bundle162: Activator start error in bundle NewKafkaArtifact [162].
at org.apache.karaf.shell.support.MultiException.throwIf(MultiException.java:61)
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:69)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)[44:org.apache.karaf.shell.core:4.0.3]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]
Caused by: java.lang.Exception: Error starting bundle162: Activator start error in bundle NewKafkaArtifact [162].
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)[24:org.apache.karaf.bundle.core:4.0.3]
... 12 more
Caused by: org.osgi.framework.BundleException: Activator start error in bundle NewKafkaArtifact [162].
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2144)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)[24:org.apache.karaf.bundle.core:4.0.3]
... 12 more
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value org.apache.kafka.common.serialization.StringSerializer for configuration key.serializer: Class org.apache.kafka.common.serialization.StringSerializer could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:145)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:317)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:181)[141:kafka-examples:1.0.0.SNAPSHOT-jar-with-dependencies]
at com.NewKafka.NewKafkaArtifact.KafkaProducerTest.main(KafkaProducerTest.java:25)[162:NewKafkaArtifact:0.0.1.SNAPSHOT]
at com.NewKafka.NewKafkaArtifact.StartKafka.start(StartKafka.java:11)[162:NewKafkaArtifact:0.0.1.SNAPSHOT]
at org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)[org.apache.felix.framework-5.4.0.jar:]
... 16 more
I have tried setting the key.serializer and value.serializer like below:
props.put("key.serializer",StringSerializer.class.getName());
props.put("value.serializer",StringSerializer.class.getName());
and also like this:
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
But I am still getting the same error. What am I doing wrong here?
It's an issue with the version you are using. Version 0.8.2.2_1 was also suggested, so I suggest you adjust the Kafka version you are using and give it a try.
Code-wise, I cross-checked many code samples on the Kafka dev list and it seems you have written it the right way, i.e.
Thread.currentThread().setContextClassLoader(null);
I found the reason by reading the Kafka client source code.
The Kafka client uses Class.forName(trimmed, true, Utils.getContextOrKafkaClassLoader()) to get the Class object and then create the instance. The key point is the classloader, which is specified by the last parameter. The implementation of Utils.getContextOrKafkaClassLoader() is:
public static ClassLoader getContextOrKafkaClassLoader() {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    if (cl == null)
        return getKafkaClassLoader();
    else
        return cl;
}
So, by default, the Class object for org.apache.kafka.common.serialization.StringSerializer is loaded by the application classloader. If your target class is not loaded by the application classloader, this problem will happen.
To solve the problem, simply set the context classloader of the current thread to null before creating the new KafkaProducer instance, like this:
Thread.currentThread().setContextClassLoader(null);
Producer<String, String> producer = new KafkaProducer(props);
I hope my answer lets you know what happened.
The issue appears to be with the classloader, as @Ram Ghadiyaram indicated in his answer. In order to get this working with kafka-clients 2.x, I had to do the following:
public Producer<String, String> createProducer() {
    ClassLoader original = Thread.currentThread().getContextClassLoader();
    Thread.currentThread().setContextClassLoader(null);

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    ... etc ...

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    Thread.currentThread().setContextClassLoader(original);
    return producer;
}
This allows the system to continue loading additional classes with the original classloader. This was needed in Wildfly/JBoss (the specific app I'm working with is Keycloak).
Try using these props instead of yours:
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
Here is a full Kafka producer example:
import java.util.Properties;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FxDateProducer {

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("Enter topic name");
            return;
        }
        String topicName = args[0].toString();

        Properties props = new Properties();
        // Assign localhost id
        props.put("bootstrap.servers", "localhost:9092");
        // Set acknowledgements for producer requests.
        props.put("acks", "all");
        // If the request fails, the producer can automatically retry
        props.put("retries", 0);
        // Specify buffer size in config
        props.put("batch.size", 16384);
        // Reduce the number of requests less than 0
        props.put("linger.ms", 1);
        // buffer.memory controls the total amount of memory available to the producer for buffering.
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int i = 0; i < 10; i++)
            producer.send(new ProducerRecord<String, String>(topicName,
                    Integer.toString(i), Integer.toString(i)));
        System.out.println("Message sent successfully");
        producer.close();
    }
}
Recently I found the solution. Setting the thread context classloader to null resolved the issue for me. Thanks.
Thread.currentThread().setContextClassLoader(null);
Producer<String, String> producer = new KafkaProducer(props);
It happens because of a Kafka version issue. Make sure you use the correct Kafka version. The version that I used is 'kafka_2.12-1.0.1'.
Also try using the properties below in your code. This fixed my issue:
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
Earlier I was using the properties below, which caused the issue (note the misspelled class name Stringserializer instead of StringSerializer):
//props.put("key.serializer","org.apache.kafka.common.serialization.Stringserializer");
//props.put("value.serializer","org.apache.kafka.common.serialization.Stringserializer");
