Is a gRPC channel thread-safe in C#, or more generally in any language that depends on the C core version?
For the following code:
1) Is the channel thread-safe?
2) Is the client thread-safe?
Channel channel = new Channel("127.0.0.1:50051", ChannelCredentials.Insecure);
Task task1 = Task.Factory.StartNew(() =>
{
    var client = new Greeter.GreeterClient(channel);
    String user = "you";
    var reply = client.SayHello(new HelloRequest { Name = user });
    Console.WriteLine("Greeting: " + reply.Message);
});
Task task2 = Task.Factory.StartNew(() =>
{
    var client = new Greeter.GreeterClient(channel);
    String user = "you";
    var secondReply = client.SayHelloAgain(new HelloRequest { Name = user });
    Console.WriteLine("Greeting: " + secondReply.Message);
});
task1.Wait();
task2.Wait();
channel.ShutdownAsync().Wait();
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
Yes, the channel and the client (the stub, in other languages' terminology) are both thread-safe in C#.
A channel is an abstraction of a long-lived connection to a remote server. Multiple client objects can reuse the same channel, even for different gRPC servers, as long as they share the same address (for example, with Envoy as a sidecar in Kubernetes, every server address is localhost:envoy-port).
The documentation explicitly states that "creating a channel is an expensive operation compared to invoking a remote call, so in general you should reuse a single channel for as many calls as possible." Here's its source code; you can see that its shared data can be safely accessed by multiple threads.
The client's base classes ClientBase<T> and ClientBase are both thread-safe as well, which you can verify from their source code.
So the client is thread-safe as long as you use the auto-generated one and any client-side interceptors you add are themselves thread-safe.
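To make this concrete, here is a minimal sketch (not from the original post, just reusing the Greeter service from the question) in which a single channel and even a single client instance are shared across several threads; both can be called concurrently:
// A single channel and a single shared stub, used from several threads at once.
var channel = new Channel("127.0.0.1:50051", ChannelCredentials.Insecure);
var client = new Greeter.GreeterClient(channel); // one shared client instance

var tasks = Enumerable.Range(0, 5)
    .Select(i => Task.Run(() =>
    {
        // Each call is independent; the shared client and channel handle the concurrency.
        var reply = client.SayHello(new HelloRequest { Name = "user-" + i });
        Console.WriteLine("Greeting " + i + ": " + reply.Message);
    }))
    .ToArray();

Task.WaitAll(tasks);
channel.ShutdownAsync().Wait();
Creating one client per thread, as in the question, is also fine; the important part is that the channel itself is reused.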
Related
I have a Microsoft Azure Service Bus connection string that I want to validate. I also want to validate that the topics and subscriptions exist. If the connection string, topics, or subscriptions are invalid, we can then take appropriate action.
The sample connection string can be like - TransportType=AmqpWebSockets;Endpoint=sb://myservicebus.servicebus.windows.net/;SharedAccessKeyName=sendkey;SharedAccessKey=MQHnx3voLhH/xVgoamX3KijzkZ0qb7U6oHTolj7LM9H=;
I tried ServiceBusClient but it does not have such support. Is there a way we can validate the connection string and the topics/subscriptions?
The client will validate that the connection string and entity names are well-formed, but there is no dedicated operation for testing its ability to communicate with Service Bus.
Connections and links are established lazily when the first network operation that requires them is invoked. There are two approaches that occur to me which would trigger network resource creation without being intrusive.
Create a message batch
In order to ensure that the batch size does not exceed the limits for a given resource, the ServiceBusSender must query the queue/topic. In the case where the client is misconfigured, it will throw.
// In a real scenario, you would want to create these as
// singletons and reuse them for the lifetime of the application.
await using var client = new ServiceBusClient("<< Connection String>>");
await using var sender = client.CreateSender("<< Queue/Topic >>");

var valid = true;

try
{
    using var batch = await sender.CreateMessageBatchAsync().ConfigureAwait(false);
}
catch (Exception ex)
    when (ex is ServiceBusException
        || ex is IOException
        || ex is SocketException)
{
    valid = false;
}
Peek a message
Peeking a message will not cause the message to be locked or otherwise impact receiving it.
await using var client = new ServiceBusClient("<< Connection String>>");
await using var receiver = client.CreateReceiver("<< Queue/Subscription >>");

var valid = true;

try
{
    _ = await receiver.PeekMessageAsync().ConfigureAwait(false);
}
catch (Exception ex)
    when (ex is ServiceBusException
        || ex is IOException
        || ex is SocketException)
{
    valid = false;
}
Interactions with each queue, topic, and subscription make use of a dedicated AMQP link, so each would need to be tested individually if you'd like to ensure that the name matches a known Service Bus entity.
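For example (a sketch, not from the original answer; the queue names below are hypothetical placeholders), you could run the peek probe against each entity you care about:
// Hypothetical queue names; substitute your actual entities. For a subscription,
// use the client.CreateReceiver(topicName, subscriptionName) overload instead.
var queuesToCheck = new[] { "queue-1", "queue-2" };

await using var client = new ServiceBusClient("<< Connection String>>");
var results = new Dictionary<string, bool>();

foreach (var queueName in queuesToCheck)
{
    await using var receiver = client.CreateReceiver(queueName);

    try
    {
        // Opens a dedicated AMQP link for this entity; throws if it is missing or unreachable.
        _ = await receiver.PeekMessageAsync().ConfigureAwait(false);
        results[queueName] = true;
    }
    catch (Exception ex)
        when (ex is ServiceBusException
            || ex is IOException
            || ex is SocketException)
    {
        results[queueName] = false;
    }
}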
I hope someone can clarify this for me:
I have 2 consumers in the same consumer group. It is my understanding that they should coordinate among themselves, but I am seeing that both consumers get all the messages. My code is pretty simple:
const { EventHubConsumerClient } = require("@azure/event-hubs");

const connectionString = "...";
const eventHubName = "my-hub-dev";
const consumerGroup = "processor";

async function main() {
    const consumerClient = new EventHubConsumerClient(consumerGroup, connectionString, eventHubName);
    const subscription = consumerClient.subscribe({
        processEvents: async (events, context) => {
            for (const event of events) {
                console.log(`Received event...`, event);
            }
        },
    });
}
If I run two instances of this consumer code and publish an event, both instances will receive the event.
So my questions are:
Am I correct in my understanding that only 1 consumer should receive the message?
Is there anything I am missing here?
To coordinate between multiple clients, the EventHubConsumerClient requires a CheckpointStore. You can pass one to the EventHubConsumerClient constructor when you instantiate it.
The @azure/eventhubs-checkpointstore-blob package uses Azure Storage blobs to store the metadata required to coordinate multiple consumers using the same consumer group. It also stores checkpoint data: you can call context.updateCheckpoint with an event, and if you stop and then start a new receiver, it will continue from the last checkpointed event in the partition that event was associated with.
There's a full sample using @azure/eventhubs-checkpointstore-blob here: https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventhub/eventhubs-checkpointstore-blob/samples/javascript/receiveEventsUsingCheckpointStore.js
Clarification: the Event Hubs service doesn't enforce a single owner for a partition when reading from a consumer group unless the client has specified an ownerLevel. The highest ownerLevel "wins". You can set this in the options bag you pass to subscribe, but if you want the CheckpointStore to handle coordination for you, it's best not to set it.
I am trying to convert an implementation that uses the .NET library from QueueClient.Create to MessagingFactory.CreateQueueClient, to get better control over the BatchFlushInterval and to allow multiple factories over multiple connections to increase send throughput, but I am running into roadblocks.
Right now we are creating QueueClients (they are maintained throughout the app) like this:
QueueClient.CreateFromConnectionString(address, queueName, ReceiveMode.PeekLock); // address is the connection string from the azure portal in the form of Endpoint=sb....
I am trying to change it to create a MessagingFactory in the class constructor, which will then be used to create the QueueClients:
messagingFactory = MessagingFactory.Create(address.Replace("Endpoint=", ""), mfs);

// later on, in another part of the class
messagingFactory.CreateQueueClient(queueName, ReceiveMode.PeekLock);
// error: Endpoint not found.
This throws the error Endpoint not found. If I don't replace the Endpoint= it won't even create the MessagingFactory. What is the proper way to handle this?
Notes:
address = Endpoint=sb://pmg-bus-mybus.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=somekey
As an aside, we have a process that is trying to push as many messages as possible to a queue and others reading it. The readers seem to easily keep up with the sender and I'm trying to maximize the send rate.
The address is the base address of the namespace you are connecting to (sb://yournamespace.servicebus.windows.net/), not the full connection string. For more information, please refer to MessagingFactory. The following is demo code:
var Address = "sb://yournamespace.servicebus.windows.net/"; // base address of the namespace you are connecting to

MessagingFactorySettings MsgFactorySettings = new MessagingFactorySettings
{
    NetMessagingTransportSettings = new NetMessagingTransportSettings
    {
        BatchFlushInterval = TimeSpan.FromSeconds(2)
    },
    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("RootManageSharedAccessKey", "balabala..."),
    OperationTimeout = TimeSpan.FromSeconds(30) // specify operation timeout (optional)
};

MessagingFactory messagingFactory = MessagingFactory.Create(Address, MsgFactorySettings);
var queue = messagingFactory.CreateQueueClient("queueName", ReceiveMode.PeekLock);
var message = queue.Receive(TimeSpan.Zero);
I'm trying to make a game that works with rooms, a lobby and such (imagine a chat app, except with additional checks and information storing).
Let's say, I have a module room.js
var EventEmitter = require('events');

class Room extends EventEmitter {
    constructor (id, name) {
        super();
        this.id = id;
        this.name = name;
        this.users = [];
    }
}

Room.prototype.addUser = function (user) {
    if (this.users.indexOf(user) === -1) {
        this.users.push(user);
        this.emit('user_joined', user);
    } else {
        /* error handling */
    }
};

module.exports = {
    Room: Room,
    byId: function (id) {
        // where should I look up?
    }
};
When running multiple Node instances, how can I get back exactly this object (with its events)? How can I access the events emitted by this object?
In a single instance of node, I would do something like:
var rooms = [];
var room = new Room(1234, 'test room');
room.on('user_joined', console.log);
rooms.push(room);
Also, I don't quite understand how Redis actually helps here (is it a replacement for EventEmitter?).
Regards.
EDIT: Would accept PM2 solutions too.
Instead of handling rooms in Node, you can replace them with channels in Redis.
When a new client wants to join a room, the NodeJS app returns it the ID of that room (that is to say, the name of the channel); the client then subscribes to the selected room (your client is directly connected to Redis).
You can use a Redis Set to manage the list of rooms.
In this scenario, you don't need any event emitter, and your node servers are stateless.
However, this means Redis would be exposed on the Internet (assuming your game is public), so you must activate Redis authentication. A major problem with this approach is that you have to give the server password to all clients, so it's definitely insecure.
Moreover, Redis's performance makes brute-force attacks feasible, so exposing it on the Internet is not recommended. That's why I think all communication should go through a Node instance, even if Redis is used as a backend.
To solve this, you can use socket.io to open sockets between Node and your clients, and make the Node instances (not the clients) subscribe to the Redis channel. When a message is published on the channel, send it to the client through the socket, and add a layer of authentication to ensure only valid clients connect to a given channel.
An event emitter is not required here: it's the Redis client itself that acts as an event emitter (as in this example based on ioredis).
My requirements are stated below:
I have to develop a wrapper service on top of a queue, so I was evaluating message queues such as ActiveMQ, Apollo and Kafka, and decided to proceed with ActiveMQ as it matches our use cases. The requirements are as follows:
1) A RESTful API through which different publishers will publish to a queue; the queue is selected based on a clientId.
2) Consumers will consume messages through a RESTful API, and will consume them in batches; for example, a consumer asks for something like "give me 10 messages from the queue".
The service should return 10 messages if 10 are available; if fewer (or zero) are available, it will send those accordingly. After receiving the messages, the client processes them and sends back an acknowledgement through a different RESTful URI. Upon receiving that acknowledgement, the MQService should commit or roll back the messages on the queue.
To do this in the MQService layer, I use a cache in which I keep the JMS connection and session objects until the acknowledgement is received or the TTL expires.
To retrieve messages in batches and send them back to the client, I created a multi-threaded consumer, so that for a 5-message batch request the service layer creates 5 threads, each with its own connection and session object (as described in ActiveMQ's "multiple consumers on a queue": http://activemq.apache.org/multiple-consumers-on-a-queue.html).
Basic use-case:
MQ(BROKER)[A] --> Wrapper(MQService)[B]-->Client [C]
Note: [B] is a RESTful service with a JMS consumer implemented in it. It keeps the connection and session objects in a cache.
[C] requests [B] to give it 3 messages.
[B] must fetch 3 messages, if available in the queue, wrap them in batchmsgFormat and send them to [C].
[C] processes the messages and sends a success/failed acknowledgement to [B] through the /send-ack URI.
Upon receiving the ack from [C], [B] commits the JMS session and closes the session and connection objects. It also evicts them from the cache.
The above workflow works fine when fetching a single message.
But the consumer hangs on JMS MessageConsumer.receive() when trying to fetch messages with multiple consumers using multithreading. ...
Here is the JMS consumer code in the MQService layer:
public BatchMessageFormat getConsumeMsg(final String clientId, final Integer batchSize) throws Exception {
    BatchMessageFormat batchmsgFormat = new BatchMessageFormat();
    List<MessageFormat> msgdetails = new ArrayList<MessageFormat>();
    List<Future<MessageFormat>> futuremsgdetails = new ArrayList<Future<MessageFormat>>();
    if (batchSize != null) {
        Integer msgCount = getMsgCount(clientId, batchSize);
        for (int batchconnect = 0; batchconnect < msgCount; batchconnect++) {
            FutureTask<MessageFormat> task = new FutureTask<MessageFormat>(new Callable<MessageFormat>() {
                @Override
                public MessageFormat call() throws Exception {
                    MessageFormat msg = consumeBatchMsg(clientId, batchSize);
                    return msg;
                }
            });
            futuremsgdetails.add(task);
            Thread t = new Thread(task);
            t.start();
        }
        for (Future<MessageFormat> msg : futuremsgdetails) {
            msgdetails.add(msg.get());
        }
        batchmsgFormat.setMsgDetails(msgdetails);
    }
    return batchmsgFormat;
}
Message fetching:
private MessageFormat consumeBatchMsg(String clientId, Integer batchSize) throws JMSException, IOException {
    MessageFormat msgFormat = new MessageFormat();
    Connection qC = ConnectionUtil.getConnection();
    qC.start();
    // Transacted session; the acknowledge-mode argument is ignored for transacted sessions.
    Session session = qC.createSession(true, Session.SESSION_TRANSACTED);
    Destination destination = createQueue(clientId, session);
    MessageConsumer consumer = session.createConsumer(destination);
    Message message = consumer.receive(2000);
    if (message != null && message instanceof TextMessage) {
        TextMessage textMessage = (TextMessage) message;
        msgFormat.setMessageID(textMessage.getJMSMessageID());
        msgFormat.setMessage(textMessage.getText());
        CacheObject cacheValue = new CacheObject();
        cacheValue.setConnection(qC);
        cacheValue.setSession(session);
        cacheValue.setJmsQueue(destination);
        MQCache.instance().add(textMessage.getJMSMessageID(), cacheValue);
    }
    consumer.close();
    return msgFormat;
}
Acknowledgement and session closing:
public String getACK(String clientId, String msgId, String ack) throws JMSException {
    if (MQCache.instance().get(msgId) != null) {
        Connection connection = MQCache.instance().get(msgId).getConnection();
        Session session = MQCache.instance().get(msgId).getSession();
        Destination destination = MQCache.instance().get(msgId).getJmsQueue();
        MessageConsumer consumer = session.createConsumer(destination);
        if (ack.equalsIgnoreCase("SUCCESS")) {
            session.commit();
        } else {
            session.rollback();
        }
        session.close();
        connection.close();
        MQCache.instance().evictCache(msgId);
        return "Accepted";
    } else {
        return "Rejected";
    }
}
Has anyone worked on a similar scenario, or can you please shed some light? Is there any other way to implement this batch message fetching as well as the client-failure handling?
Try setting the prefetch limit to 0, as below:
ConnectionFactory connectionFactory
= new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
I'll give a few pointers to help you code this logic better.
I'm assuming you are using pure JMS 1.1 as much as possible. Ensure that you have one place where you get the connection from the pool or create a connection. You need not do that inside a thread. You can do that outside. Sessions must be created inside a thread and shouldn't be shared. This will impact the logic in the function consumeBatchMsg().
Secondly, it's simpler to use one thread to consume all the messages of the given batchSize. I see that you are using a transacted session, so you can do one commit after getting all the messages of the batchSize.
If you really want to take the complicated route of having multiple consumers on a queue (probably slightly better performance), you can use Java's CountDownLatch or CyclicBarrier, initialized to batchSize, as the trigger. Once all the threads have received their messages, each thread can commit and close its own session. Never let a session instance escape the thread that created it.