The documentation for the connect method says:
Connects to the socket at the given remote address and returns immediately. The connection will be made asynchronously in the background.
But await does not seem to be applicable, as shown in their subscriber example.
subscriber.js
const zmq = require("zeromq")
async function run() {
const sock = new zmq.Subscriber
sock.connect("tcp://127.0.0.1:3000") //Happens async; can we await this?
sock.subscribe("kitty cats")
console.log("Subscriber connected to port 3000")
for await (const [topic, msg] of sock) {
console.log("received a message related to:", topic, "containing message:", msg)
}
}
run()
Also, what error(s) may be raised by the connect() method? I provided an 'obscene' port number, such as 8124000, to connect. I was hoping for some error messages to be raised.
Q : "what error(s) maybe raised by the connect() method?"
The Error(s) part
The ZeroMQ native API (unchanged since v2.1) distinguishes the following errors for this call:
EINVAL
The endpoint supplied is invalid.
EPROTONOSUPPORT
The requested transport protocol is not supported.
ENOCOMPATPROTO
The requested transport protocol is not compatible with the socket type.
ETERM
The ØMQ context associated with the specified socket was terminated.
ENOTSOCK
The provided socket was invalid.
EMTHREAD
No I/O thread is available to accomplish the task.
Yet what you actually observe depends on how zeromq.js re-wraps these native states, so the best next step is to re-read the wrapper's source code to see how these native API error states actually get handled inside the zeromq.js wrapper.
The remarks :
The following socket events can be generated. This list may be different depending on the ZeroMQ version that is used.
Note that the error event is avoided by design, since this has a special behaviour in Node.js causing an exception to be thrown if it is unhandled.
Other error names are adjusted to be as close to possible as other networking related event names in Node.js and/or to the corresponding ZeroMQ.js method call. Events (including any errors) that correspond to a specific operation are namespaced with a colon :, e.g. bind:error or connect:retry.
are nevertheless quite a warning, aren't they?
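In practice, connect() returning immediately means most connection-level problems will not surface as a thrown exception from the call itself; they are expected to show up on the socket's event observer instead. Below is a minimal sketch of watching those events, assuming the zeromq.js v6 Observer exposed as sock.events supports the EventEmitter-style .on() and the event names quoted above (e.g. connect, connect:retry); a grossly malformed endpoint, on the other hand, may still be rejected synchronously with an EINVAL-style error.
const zmq = require("zeromq")

async function run() {
  const sock = new zmq.Subscriber()

  // Connection-level activity is reported on the socket's event observer,
  // not as a return value of connect(). The event names here are the ones the
  // zeromq.js documentation quotes (connect, connect:retry, ...); verify them
  // against the wrapper version you actually run.
  sock.events.on("connect", details => {
    console.log("connected to", details.address)
  })
  sock.events.on("connect:retry", details => {
    console.log("retrying connection to", details.address)
  })

  // A grossly malformed endpoint (e.g. an out-of-range port) may still be
  // rejected synchronously, so a try/catch around connect() does no harm.
  try {
    sock.connect("tcp://127.0.0.1:8124000")
  } catch (err) {
    console.error("connect() rejected the endpoint:", err.message)
  }
}

run()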
The await part
The MCVE code (as-is) is unable to reproduce the live session, so it is best to adapt the MCVE code until it is runnable, and we can proceed further from there.
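If the goal is to effectively await the connection before proceeding, one hedged option is to wait for the first connect event, assuming the Observer exposed as sock.events provides an awaitable receive() that resolves with event objects carrying a type field (as its documentation suggests):
const zmq = require("zeromq")

// Hypothetical helper: resolves once the socket reports its first "connect"
// event, so the caller can await the connection being established.
async function waitForConnect(sock) {
  for (;;) {
    const event = await sock.events.receive()
    if (event.type === "connect") return event
  }
}

async function run() {
  const sock = new zmq.Subscriber()
  sock.connect("tcp://127.0.0.1:3000") // still returns immediately
  sock.subscribe("kitty cats")

  await waitForConnect(sock) // resolves once the TCP connection is up
  console.log("Subscriber connected to port 3000")

  for await (const [topic, msg] of sock) {
    console.log("received a message related to:", topic, "containing message:", msg)
  }
}

run()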
Related
I am trying to implement a pub/sub broker with ZeroMQ where it is possible to restrict clients from subscribing to prefixes they are not allowed to subscribe to. I found a tutorial that tries to achieve a similar thing using the ZMQ_XPUB_MANUAL option. With zeromq.js it is possible to set this option:
import * as zmq from "zeromq";
// ...
const socket = new zmq.XPublisher({ manual: true });
After setting this option I am able to receive the subscription messages by calling .receive() on this socket:
const [msg] = await socket.receive();
But I have no idea how to accept this subscription. Usually this is done by calling setSockOpt with ZMQ_SUBSCRIBE, but I don't know how to do this with zeromq.js.
Is there a way to call setSockOpt with zeromq.js or is there another way to accept a subscription?
Edit
I tried user3666197's suggestion to call setSockOpt directly, but I am not sure how to do this. Rather than doing that, I took another look in the sources and found this: https://github.com/zeromq/zeromq.js/blob/master/src/native.ts#L617
It seems like setSockOpt is exposed to the TypeScript side as protected methods of the Socket class. To try this out, I created my own class that inherits XPublisher and exposes an acceptSubscription method:
class CustomPublisher extends zmq.XPublisher {
constructor(options?: zmq.SocketOptions<zmq.XPublisher>) {
super(options);
}
public acceptSubscription(subscription: string | null): void {
// ZMQ_SUBSCRIBE has a value of 6
// reference:
// https://github.com/zeromq/libzmq/blob/master/include/zmq.h#L310
this.setStringOption(6, subscription);
}
}
This works like a charm! But do not forget to strip the first byte of the subscription messages, otherwise your client won't receive any messages since the prefix won't match.
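For completeness, a hedged sketch of how a publisher loop might use the class above (the isAllowed() policy check is purely illustrative): in XPUB subscription frames the first byte is 1 for subscribe and 0 for unsubscribe, and the remaining bytes are the prefix that has to be stripped before accepting.
// Continuing from the CustomPublisher class above.
// Hypothetical prefix policy: only allow subscriptions under "public.".
const isAllowed = (prefix: string): boolean => prefix.startsWith("public.");

async function run(): Promise<void> {
  const socket = new CustomPublisher({ manual: true });
  await socket.bind("tcp://127.0.0.1:3000");

  for await (const [frame] of socket) {
    // XPUB subscription frames: first byte 1 = subscribe, 0 = unsubscribe;
    // the remaining bytes are the requested prefix.
    const isSubscribe = frame[0] === 1;
    const prefix = frame.subarray(1).toString();

    if (isSubscribe && isAllowed(prefix)) {
      socket.acceptSubscription(prefix); // forwards ZMQ_SUBSCRIBE for this prefix
    }
  }
}

run().catch(console.error);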
Q : "Is there a way to call setSockOpt() with zeromq.js or is there another way to accept a subscription?"
So let me first mention that Somdoron has, beyond doubt and for ages, been a master of the ZeroMQ tooling.
Next comes the issue. The GitHub sources I was able to review so far seem to permit the ZMQ_XPUB socket archetype to process the native API ZMQ_XPUB_MANUAL setting (re-dressed as the manual property, an idiomatic shift), yet they present no method (visible to me so far) that actually lets the user follow the native API's explicit protocol of:
ZMQ_XPUB_MANUAL: change the subscription handling to manual... with manual mode subscription requests are not added to the subscription list. To add subscription the user need to call setsockopt() with ZMQ_SUBSCRIBE on XPUB socket. (from the ZeroMQ native API v4.3.2 documentation)
Blind-calling the Socket-inherited setSockOpt() method may prove me wrong, yet if successful it may be a way to inject the ZMQ_SUBSCRIBE / ZMQ_UNSUBSCRIBE subscription-management steps into an XPUB instance that has been switched into ZMQ_XPUB_MANUAL mode.
Please test it, and if it fails to work via this superclass-inherited method, the shortest remedy would be to raise that collision / conceptual shortcoming directly with the zeromq.js maintainers (it might be a work-in-progress item deeper in their v6+ refactoring backlog, so my fingers are crossed either way).
I am unable to get two users chatting to each other, despite reducing the complexity and stripping out the code that could have caused the issue.
I am able to emit to all connected sockets, so I have established it's not an issue with the emit/on structure but rather with the way I'm handling the private socket ids.
I have tried various ways of sending the private message to the correct socket id, including older approaches such as socket.to and the current way from the docs, io.to(sockid).emit('event', message); none of these variations have helped. I have logged the socket id on my Angular client side with console.log('THIS IS MY SOCKET ' + this.socket.id) and compared it to the value I have in Redis using redis-cli, and they match perfectly every time, which doesn't give me much to go on.
The problem arises here:
if (res === 1) {
    _active_users.get_client_key(recipient)
        .then(socket_id => {
            console.log('=======' + io.sockets.name)
            console.log('I am sending the message to: ' + recipient + ' and my socket id is ' + socket_id)
            // socket.to(socket_id).emit('incoming', "this is top secret")
            io.of('/chat').to(socket_id).emit('incoming', "this is top secret")
        })
        .catch(error => {
            console.log("COULD NOT RETRIEVE KEY: " + error)
        })
}
Here is the link to the pastebin with more context:
https://pastebin.com/fYPJSnWW
The classes I import are essentially just setters and getters for handling the socket id; you can think of them as a worker class that handles Redis actions.
Expected: To allow two clients to communicate based on just their socket ids.
Actual:
I am able to emit to all connected sockets and receive the expected results, but the problem arises when trying to send to a specific socket id, for an unknown reason.
The issue was coming from my front end... I hope nobody gets a headache like this! Here is what happened: when you're digging your own hole, you often don't realise how deep you've got yourself if you don't take the time to look around. I had two instances of the socket. I instantiated both, used one to connect and the other to send the message, which of course you cannot do if you want things to work properly. So what I did was create only one instance of the socket and pass that reference around wherever I needed it, i.e. sendMessage(username, socket) and getMessage(socket); a sketch of that single-instance setup follows the snippet below.
ngOnInit() {
    this.socket = io.connect('localhost:3600', {
        reconnection: true,
        reconnectionDelay: 1000,
        reconnectionDelayMax: 5000,
        reconnectionAttempts: Infinity
    });
}
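One way to keep a single instance is to wrap the socket in an injectable Angular service and share that. Below is only a sketch: the service name and the 'private message' event are illustrative, while 'incoming' matches the server-side emit above.
import { Injectable } from '@angular/core';
// import style depends on your socket.io-client version
import * as io from 'socket.io-client';

@Injectable({ providedIn: 'root' })
export class ChatSocketService {
  // One socket instance for the whole app; every component shares this
  // reference instead of creating its own connection.
  private socket = io.connect('localhost:3600', {
    reconnection: true,
    reconnectionDelay: 1000,
    reconnectionDelayMax: 5000,
    reconnectionAttempts: Infinity
  });

  // 'private message' is an illustrative event name for the server to route on.
  sendMessage(recipient: string, message: string): void {
    this.socket.emit('private message', { recipient, message });
  }

  getMessage(handler: (message: string) => void): void {
    this.socket.on('incoming', handler);
  }
}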
I'm currently writing a public REST service in Node.js that interfaces with a Postgres-database (using Sequelize) and a Redis cache instance.
I'm now looking into error handling and how to send informative and verbose error messages if something would happen with a request.
It struck me that I'm not quite sure how to handle internal server errors. What would be the appropriate way of dealing with this? Consider the following scenario:
I'm sending a post-request to an endpoint which in turn inserts the content to the database. However, something went wrong during this process (validation, connection issue, whatever). An error is thrown by the Sequelize-driver and I catch it.
I would argue that it is quite sensitive information (even if I remove the stack trace) and I'm not comfortable with exposing references of internal concepts (table-names, functions, etc.) to the client. I'd like to have a custom error for these scenarios that briefly describes the problem without giving away too detailed information.
Is the only way to approach this by mapping every "possible" error in the Sequelize driver to a generic one and sending that back to the client? Or how would you approach this?
Thanks in advance.
Errors are always caused by something. You should identify and intercept these causes before doing your database operation. Only cases that you think you've prepared for should reach the database operation.
If an unexpected error occurs, you should not send an informative error message for security reasons. Just send a generic error for unexpected cases.
Your code will look somewhat like this:
async databaseInsert(req, res) {
try {
if (typeof req.body.name !== 'string') {
res.status(400).send('Required field "name" was missing or malformed.')
return
}
if (problemCase2) {
res.status(400).send('Error message 2')
return
}
...
const result = await ... // database operation
res.status(200).send(result)
} catch (e) {
res.status(500).send(debugging ? e : 'Unexpected error')
}
}
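On the mapping question specifically: you don't have to enumerate every possible Sequelize error. A common pattern is an Express error-handling middleware that translates the few error classes you treat as client-caused into 4xx responses and collapses everything else into a generic 500. A rough sketch, assuming Express and Sequelize's exported ValidationError / UniqueConstraintError classes:
const { ValidationError, UniqueConstraintError } = require('sequelize')

// Express error-handling middleware: map known, client-caused errors to
// informative 4xx responses and hide everything else behind a generic 500.
function errorHandler(err, req, res, next) {
  // UniqueConstraintError extends ValidationError, so check it first.
  if (err instanceof UniqueConstraintError) {
    return res.status(409).send('A resource with these values already exists.')
  }
  if (err instanceof ValidationError) {
    // Field-level messages are usually safe to expose; table names and SQL are not.
    return res.status(400).send(err.errors.map(item => item.message).join('; '))
  }
  console.error(err) // full details stay in the server logs
  res.status(500).send('Unexpected error')
}

// register after all routes: app.use(errorHandler)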
I have already written a request method in Java that sends a request to a simple server. I wrote this simple server, and the connection is based on sockets. When the server has the answer for the request, it will send it automatically to the client. Now I want to write a new method that behaves as follows:
if the server does not answer after a fixed period of time, then I send a new request to the server using my request method
My problem is implementing this idea. I am thinking of launching a thread whenever the request method is executed. If this thread does not hear anything within a fixed period of time, then the request method should be executed again. But how can I listen on the same socket used between that client and server?
I am also asking if there is a simpler method that does not use threads.
Currently I am working on this idea:
1) send a request using my request method
2) launch a thread to listen on the socket
3) if (no answer) { go to (1) }
   else { exit }
I have some difficulties in step 3. How can I go to (1)?
You may be able to accomplish this with a single thread using a SocketChannel and a Selector; see also these tutorials on SocketChannel and Selector. The gist of it is that you'll use long-polling on the Selector to let you know when your SocketChannel(s) are ready to read/write/etc., using Selector#select(long timeout). (Note that a channel must be put into non-blocking mode before it can be registered with a Selector, even though the initial connect can be done in blocking mode.)
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80));
// a channel must be in non-blocking mode before it can be
// registered with a Selector
socketChannel.configureBlocking(false);

Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);

// returns the number of channels ready after 5000ms; if you have
// multiple channels attached to the selector then you may prefer
// to iterate through the SelectionKeys
if (selector.select(5000) > 0) {
    SocketChannel keyedChannel = (SocketChannel) key.channel();
    // read/write the SocketChannel
} else {
    // I think your best bet here is to close and reopen the Socket
    // or to reinstantiate a new socket - depends on your Request method
}
The Azure Service Bus supports a built-in retry mechanism which makes an abandoned message immediately visible for another read attempt. I'm trying to use this mechanism to handle some transient errors, but the message is made available immediately after being abandoned.
What I would like to do is make the message invisible for a period of time after it is abandoned, preferably based on an exponentially incrementing policy.
I've tried to set the ScheduledEnqueueTimeUtc property when abandoning the message, but it doesn't seem to have an effect:
var messagingFactory = MessagingFactory.CreateFromConnectionString(...);
var receiver = messagingFactory.CreateMessageReceiver("test-queue");
receiver.OnMessageAsync(async brokeredMessage =>
{
await brokeredMessage.AbandonAsync(
new Dictionary<string, object>
{
{ "ScheduledEnqueueTimeUtc", DateTime.UtcNow.AddSeconds(30) }
});
}
);
I've considered not abandoning the message at all and just letting the lock expire, but this would require having some way to influence how the MessageReceiver specifies the lock duration on a message, and I can't find anything in the API to let me change this value. In addition, it wouldn't be possible to read the delivery count of the message (and therefore make a decision for how long to wait before the next retry) until after the lock has already been acquired.
Can the retry policy in the Message Bus be influenced in some way, or can a delay be artificially introduced in some other way?
Careful here, because I think you are confusing the retry feature with the automatic Complete/Abandon mechanism for the OnMessage event-driven message handling. The built-in retry mechanism comes into play when a call to the Service Bus fails. For example, if you call to set a message as complete and that fails, then the retry mechanism would kick in. If you are processing a message and an exception occurs in your own code, that will NOT trigger a retry through the retry feature. Your question isn't explicit about whether the error comes from your own code or from attempting to contact the Service Bus.
If you are indeed after modifying the retry policy that applies when an error occurs while communicating with the Service Bus, you can modify the RetryPolicy that is set on the MessageReceiver itself. There is a RetryExponential which is used by default, as well as an abstract RetryPolicy you can create your own from.
What I think you are after is more control over what happens when you get an exception doing your processing, and you want to push off working on that message. There are a few options:
When you create your message handler you can set up OnMessageOptions. One of the properties is "AutoComplete". By default this is set to true, which means as soon as processing for the message is completed the Complete method is called automatically. If an exception occurs then Abandon is automatically called, which is what you are seeing. By setting AutoComplete to false you are required to call Complete on your own from within the message handler. Failing to do so will cause the message lock to eventually run out, which is one of the behaviors you are looking for.
So, you could write your handler so that if an exception occurs during your processing you simply do not call Complete. The message would then remain on the queue until its lock runs out and then would become available again. The standard dead-lettering mechanism applies, and after x number of tries it will be put into the dead-letter queue automatically.
A caution with handling it this way is that every type of exception will be treated the same. You really need to think about which types of exceptions should behave like this and whether you really want to push off processing or not. For example, if you are calling a third-party system during your processing and it gives you an exception you know is transient, great. If, however, it gives you an error that you know will be a big problem, then you may decide to do something else in the system besides just bailing on the message.
You could also look at the "Defer" method. This method will not allow the message to be processed off the queue unless it is specifically pulled by its sequence number. Your code would have to remember the sequence number value and pull it. This isn't quite what you described, though.
Another option is to move away from the OnMessage, event-driven style of processing messages. While it is very helpful, you don't get a lot of control over things. Instead, hook up your own processing loop and handle the abandon/complete on your own. You'll also need to deal with some of the threading/concurrent-call management that the OnMessage pattern gives you. This can be more work, but you have the ultimate in flexibility.
Finally, I believe the reason the call you made to AbandonAsync, passing the properties you wanted to modify, didn't work is that those properties refer to user metadata properties on the message, not system properties such as ScheduledEnqueueTimeUtc on the BrokeredMessage.
I actually asked this same question last year (implementation aside) with the three approaches I could think of looking at the API. #ClemensVasters, who works on the SB team, responded that using Defer with some kind of re-receive is really the only way to control this precisely.
You can read my comment to his answer for a specific approach to doing it where I suggest using a secondary queue to store messages that indicate which primary messages have been deferred and need to be re-received from the main queue. Then you can control how long you wait by setting the ScheduledEnqueueTimeUtc on those secondary messages to control exactly how long you wait before you retry.
I ran into a similar issue where our order picking system is legacy and goes into maintenance mode each night.
Using the ideas in this article (https://markheath.net/post/defer-processing-azure-service-bus-message), I created a custom property to track how many times a message has been resubmitted, manually dead-lettering the message after 10 tries. If the message is under 10 retries, the code clones the message, increments the custom property, and schedules the clone with a new enqueue time.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public class PickQueue
{
    // QUEUE_CONN_STRING and QUEUE_NAME are defined elsewhere (configuration)
    private readonly QueueClient queueClient;

    public PickQueue()
    {
        queueClient = new QueueClient(QUEUE_CONN_STRING, QUEUE_NAME);
    }

    public async Task QueueMessageAsync(int orderId)
    {
        string body = JsonConvert.SerializeObject(orderId);
        var message = new Message(Encoding.UTF8.GetBytes(body));
        await queueClient.SendAsync(message);
    }

    public async Task ReQueueMessageAsync(Message message, DateTime utcEnqueueTime)
    {
        // Read the retry counter from the custom property, defaulting to 0.
        int resubmitCount = message.UserProperties.TryGetValue("ResubmitCount", out var count)
            ? (int)count + 1
            : 1;

        if (resubmitCount > 10)
        {
            await queueClient.DeadLetterAsync(message.SystemProperties.LockToken);
        }
        else
        {
            // Schedule a clone carrying the incremented counter; the original
            // message still needs to be completed (via the handler's AutoComplete
            // or an explicit CompleteAsync) so only the delayed copy remains.
            Message clone = message.Clone();
            clone.UserProperties["ResubmitCount"] = resubmitCount;
            await queueClient.ScheduleMessageAsync(clone, utcEnqueueTime);
        }
    }
}
This question asks how to implement exponential backoff in Azure Functions. If you do not want to use the built-in RetryPolicy (only available when autoComplete = false), here's the solution I've been using:
public static async Task ExceptionHandler(IMessageSession MessageSession, string LockToken, int DeliveryCount)
{
if (DeliveryCount < Globals.MaxDeliveryCount)
{
var DelaySeconds = Math.Pow(Globals.ExponentialBackoff, DeliveryCount);
await Task.Delay(TimeSpan.FromSeconds(DelaySeconds));
await MessageSession.AbandonAsync(LockToken);
}
else
{
await MessageSession.DeadLetterAsync(LockToken);
}
}