Is retry handled by the Go client library for Azure Service Bus?

I'm referring to the client library at https://github.com/Azure/azure-service-bus-go. I was able to write a simple client which listens to a subscription and reads the messages. If I drop the network, after a few seconds I see the receive loop exit with the error message "context canceled". I was hoping the client library would provide some kind of retry mechanism and handle connection issues, server timeouts, and so on. Do we need to handle this ourselves? Also, do we get any errors we could identify and use for such a retry mechanism? Any sample code would be highly appreciated.
below is the sample code I tried (only including the vital parts).
err = subscription.Receive(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) error {
    fmt.Println(string(message.Data))
    return message.Complete(ctx)
}))
if err != nil {
    fmt.Println("FATAL: ", err)
    return
}

Unfortunately, the older library doesn't do a good job of retrying, so you'll need to wrap the code in your own retry loop.
func demoReceiveWithRecovery(ns *servicebus.Namespace) {
    parentCtx := context.TODO()
    for {
        q, err := ns.NewQueue(queueName)
        if err != nil {
            panic(err)
        }
        handler := servicebus.HandlerFunc(func(c context.Context, m *servicebus.Message) error {
            log.Printf("Received message")
            return m.Complete(c)
        })
        err = q.Receive(parentCtx, handler)
        // check for a potentially recoverable situation
        if err != nil && errors.Is(err, context.Canceled) {
            // This workaround distinguishes between the handler cancelling because the
            // library has disconnected vs. the parent context (passed in by you) being cancelled.
            if parentCtx.Err() != nil {
                // our parent context was cancelled, which cancelled the entire operation
                log.Printf("Cancelled by parent context")
                _ = q.Close(parentCtx)
                return
            }
            // cancelled internally due to an error; we can restart the queue client
            log.Printf("Error occurred, restarting")
            _ = q.Close(parentCtx)
            continue
        }
        // any other error (or a clean return): close the client and loop to restart.
        // Note: log with %v rather than err.Error(), since err can be nil here.
        log.Printf("Other error, closing client and restarting: %v", err)
        _ = q.Close(parentCtx)
    }
}
NOTE: This library was deprecated recently, although that happened after you posted your question. The new package, if you want to try it, is azservicebus, and there is a migration guide to make things easier: migrationguide.md
The new package should recover properly in this scenario for receiving. There is an open bug (#16695) in the new package, which I'm working on, to add recovery on the sending side.
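For reference, here is a minimal sketch of the same receive loop using the newer azservicebus package; connectionString and queueName are placeholders, and the panic-based error handling is only for brevity:

client, err := azservicebus.NewClientFromConnectionString(connectionString, nil)
if err != nil {
    panic(err)
}
receiver, err := client.NewReceiverForQueue(queueName, nil)
if err != nil {
    panic(err)
}
defer receiver.Close(context.TODO())
for {
    // ReceiveMessages blocks until messages arrive and recovers from
    // transient network failures internally.
    messages, err := receiver.ReceiveMessages(context.TODO(), 1, nil)
    if err != nil {
        panic(err)
    }
    for _, message := range messages {
        fmt.Println(string(message.Body))
        if err := receiver.CompleteMessage(context.TODO(), message, nil); err != nil {
            panic(err)
        }
    }
}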
If you want to ask further questions or have feature requests, you can submit them in the GitHub issues for https://github.com/Azure/azure-sdk-for-go.

Related

Stop channels when ws not able to connect

I have the following code which works OK. The issue is that when socket.Connect() fails to connect, I want to stop the process. I've tried with the following code, but it's not working, i.e. if the socket fails to connect the program still runs. What I want to happen is that if the connect fails, the process stops and the channel is closed. What am I missing here?
func run(appName string) (err error) {
    methodName := "run"
    done = make(chan bool)
    defer close(done)
    serviceURL, e := GetContext().getServiceURL(appName)
    if e != nil {
        err = errors.New("process failed: " + e.Error())
        LogDebug("Exiting %v func[err =%v]", methodName, err)
        return err
    }
    url := "wss://" + serviceURL + route
    socket := gowebsocket.New(url)
    addPass(&socket, user, pass)
    socket.OnConnectError = OnConnectErrorHandler
    socket.OnConnected = OnConnectedHandler
    socket.OnTextMessage = socketTextMessageHandler
    socket.OnDisconnected = OnDisconnectedHandler
    LogDebug("In %v func connecting to URL %v", methodName, url)
    socket.Connect()
    jsonBytes, e := json.Marshal(payload)
    if e != nil {
        err = errors.New("build process failed: " + e.Error())
        LogDebug("Exiting %v func[err =%v]", methodName, err)
        return err
    }
    jsonStr := string(jsonBytes)
    LogDebug("In %v Connecting to payload JSON is %v", methodName, jsonStr)
    socket.SendText(jsonStr)
    <-done
    LogDebug("Exiting %v func[err =%v]", methodName, err)
    return err
}
func OnConnectErrorHandler(err error, socket gowebsocket.Socket) {
    methodName := "OnConnectErrorHandler"
    LogDebug("Starting %v parameters [err = %v , socket = %v]", methodName, err, socket)
    LogInfo("Disconnected from server ")
    done <- true
}
The process should open one WebSocket connection for a process that runs about 60-90 seconds (like executing npm install), stream the logs of the process over the WebSocket until it finishes, and of course handle any issue that could happen, like a network problem or an error running the process.
So, @Slabgorb is correct - if you look here (https://github.com/sacOO7/GoWebsocket/blob/master/gowebsocket.go#L87) you will see that the OnConnectErrorHandler is called synchronously during the execution of your call to Connect(). The Connect() function doesn't kick off a separate goroutine to handle the websocket until after the connection is fully established and the OnConnected callback has completed. So when you try to write to the unbuffered channel done, you block the same goroutine that called into run() to begin with, and you deadlock yourself, because no goroutine will ever be able to read from the channel to unblock you.
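Here is a minimal, self-contained sketch (not using gowebsocket) that reproduces the deadlock: a callback invoked synchronously tries to send on an unbuffered channel that the same goroutine would only read from later.

package main

import "fmt"

func main() {
    done := make(chan bool) // unbuffered, like in the question

    // connect invokes its error callback synchronously, just as Connect() does
    connect := func(onError func(error)) {
        onError(fmt.Errorf("dial failed"))
    }

    connect(func(err error) {
        done <- true // blocks: the receive below can never be reached
    })

    <-done // the runtime reports "all goroutines are asleep - deadlock!"
}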
So you could go with his solution and turn it into a buffered channel, and that will work, but my suggestion would be not to write to a channel for this sort of one-time flag behavior, but use close signaling instead. Define a channel for each condition you want to terminate run(), and in the appropriate websocket handler function, close the channel when that condition happens. At the bottom of run(), you can select on all the channels, and exit when the first one closes. It would look something like this:
package main

import "errors"

func run(appName string) (err error) {
    // first, define one channel per socket-closing-reason (DO NOT defer close these channels.)
    connectErrorChan := make(chan struct{})
    successDoneChan := make(chan struct{})
    surpriseDisconnectChan := make(chan struct{})

    // next, wrap calls to your handlers in closures (https://gobyexample.com/closures)
    // that capture a reference to the channel each one cares about
    OnConnectErrorHandler := func(err error, socket gowebsocket.Socket) {
        MyOnConnectErrorHandler(connectErrorChan, err, socket)
    }
    OnDisconnectedHandler := func(err error, socket gowebsocket.Socket) {
        MyOnDisconnectedHandler(surpriseDisconnectChan, err, socket)
    }
    // ... declare any other handlers that might close the connection here

    // Do your setup logic here
    // serviceURL, e := GetContext().getServiceURL(appName)
    // . . .
    // socket := gowebsocket.New(url)
    socket.OnConnectError = OnConnectErrorHandler
    socket.OnConnected = OnConnectedHandler
    socket.OnTextMessage = socketTextMessageHandler
    socket.OnDisconnected = OnDisconnectedHandler

    // Prepare and send your message here...
    // LogDebug("In %v func connecting to URL %v", methodName, url)
    // . . .
    // socket.SendText(jsonStr)

    // now wait for one of your signalling channels to close.
    select { // this will block until one of the handlers signals an exit
    case <-connectErrorChan:
        err = errors.New("never connected :( ")
    case <-successDoneChan:
        socket.Close()
        LogDebug("mission accomplished! :) ")
    case <-surpriseDisconnectChan:
        err = errors.New("somebody cut the wires! :O ")
    }
    if err != nil {
        LogDebug("%v", err)
    }
    return err
}

// *Your* connect error handler takes an extra channel as a parameter
func MyOnConnectErrorHandler(done chan struct{}, err error, socket gowebsocket.Socket) {
    methodName := "OnConnectErrorHandler"
    LogDebug("Starting %v parameters [err = %v , socket = %v]", methodName, err, socket)
    LogInfo("Disconnected from server ")
    close(done) // signal we are done.
}
This has a few advantages:
1) You don't need to guess which callbacks happen in-process and which happen in background goroutines (and you don't have to make all your channels buffered 'just in case')
2) Selecting on the multiple channels lets you find out why you are exiting and maybe handle cleanup or logging differently.
Note 1: If you choose to use close signaling, you have to use a different channel for each source in order to avoid race conditions that might cause a channel to get closed twice from different goroutines (e.g. a timeout happens just as you get back a response, and both handlers fire; the second handler to close the same channel causes a panic). This is also why you don't want to defer close all the channels at the top of the function.
Note 2: Not directly relevant to your question, but -- you don't need to close every channel - once all the handles to it go out of scope, the channel will get garbage collected whether or not it has been closed.
Ok, what is happening is the channel is blocking when you try to add something to it. Try initializing the done channel with a buffer (I used 1) like this:
done = make(chan bool, 1)

"operation was canceled" error message upon reading a single document

I use golang's driver (github.com/arangodb/go-driver) to talk to ArangoDB, and this issue seems to happen sporadically, but often enough to be concerned about.
A request to read a single document fails with the error "operation was canceled". Not much to add here, except that it might be a timeout.
var contributor model.Contributor
_, err = col.ReadDocument(ctx, id, &contributor)
if err != nil {
    log.Errorf("cannot read contributor %s document: %v", id, err)
    return nil, err
}
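One way to narrow this down, as a sketch assuming the failure really is a context cancellation or an expired deadline: give each read its own timeout and inspect the error with the driver's helpers. The 30-second deadline and the three attempts below are illustrative values, not recommendations.

var contributor model.Contributor
for attempt := 1; ; attempt++ {
    // a fresh deadline per attempt, so one slow call can't poison the next
    readCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    _, err := col.ReadDocument(readCtx, id, &contributor)
    cancel()
    if err == nil {
        break
    }
    // driver.IsCanceled and driver.IsTimeout report context cancellation
    // and deadline expiry, respectively
    if attempt >= 3 || !(driver.IsCanceled(err) || driver.IsTimeout(err)) {
        log.Errorf("cannot read contributor %s document: %v", id, err)
        return nil, err
    }
}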

Error when retrieving message from Service Bus queue

I'm trying to pull messages from an Azure Service Bus queue using Go, but I get an error when running the code. Here is my code:
func Example_queue_receive() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    connectionString := "Endpoint=sb://{my_service_name}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey={my_shared_access_key_value}"
    // Create a client to communicate with a Service Bus Namespace.
    ns, err := servicebus.NewNamespace(servicebus.NamespaceWithConnectionString(connectionString))
    if err != nil {
        fmt.Println(err)
    }
    // Create a client to communicate with the queue.
    q, err := ns.NewQueue("MyQueueName")
    if err != nil {
        fmt.Println("FATAL: ", err)
    }
    err = q.ReceiveOne(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) servicebus.DispositionAction {
        fmt.Println(string(message.Data))
        return message.Complete()
    }))
    if err != nil {
        fmt.Println("FATAL: ", err)
    }
}
This is the error:
link detached, reason: *Error{Condition: amqp:not-found}
I searched for the error information in the GitHub repo, and I found the code ErrorNotFound MessageErrorCondition = "amqp:not-found", but there is no explanation for the error.
Comparing it with the exception types in C# from the official document Service Bus messaging exceptions, and based on my own testing, I believe it corresponds to MessagingEntityNotFoundException (the messaging entity could not be found).
In my environment (go version go1.11.3 windows/amd64), I ran similar code without an existing queue MyQueueName, and I got a similar error:
FATAL: unhandled error link xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx: status code 404 and description: The messaging entity 'sb://.servicebus.windows.net/MyQueueName' could not be found. TrackingId:f9fc309d-xxxx-xxxx-xxxx-8fccd694f266_G42, SystemTracker:.servicebus.windows.MyQueueName, Timestamp:2019-01-25T09:45:28
So I think the error means the queue MyQueueName in your code does not exist in your Azure Service Bus namespace; you should create it first before using it.
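If you prefer creating the queue from code rather than in the Azure portal, a sketch using the QueueManager from the same azure-service-bus-go package (assuming Put, which creates or updates the entity) would be:

qm := ns.NewQueueManager()
if _, err := qm.Put(ctx, "MyQueueName"); err != nil {
    fmt.Println("FATAL: could not create queue: ", err)
}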
Meanwhile, as @JerryLiu said, your code below has some mistakes.
err = q.ReceiveOne(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) servicebus.DispositionAction {
    fmt.Println(string(message.Data))
    return message.Complete()
}))
According to the godoc for azure-service-bus-go, the parameter of servicebus.HandlerFunc must be a function that returns error, not servicebus.DispositionAction as in your code.
And the message.Complete method should be passed a ctx parameter (a context object) and returns error, which does not match servicebus.DispositionAction. The message.CompleteAction method does return servicebus.DispositionAction, but it is not suitable in the receiving code.
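Applying both fixes, the receive call becomes:

err = q.ReceiveOne(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) error {
    fmt.Println(string(message.Data))
    return message.Complete(ctx)
}))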
Please refer to the godoc example Example (QueueSendAndReceive) to modify your code.

Akka.net Ask timeout when used in Azure WebJob

At work we have some code in an Azure WebJob where we use RabbitMQ.
The basic workflow is this
A message arrives on RabbitMQ Queue
We have a message handler for the incoming message
Within the message handler we start a top level (user) supervisor actor where we "ask" it to handle the message
The supervisor actor hierarchy is like this (diagram not shown here). The relevant top-level code is something like this (this is the WebJob code):
static void Main(string[] args)
{
    try
    {
        //Bootstrap akka IoC resolver well ahead of any actor usages
        new AutoFacDependencyResolver(ContainerOperations.Instance.Container, ContainerOperations.Instance.Container.Resolve<ActorSystem>());

        var system = ContainerOperations.Instance.Container.Resolve<ActorSystem>();
        var busQueueReader = ContainerOperations.Instance.Container.Resolve<IBusQueueReader>();
        var dateTime = ContainerOperations.Instance.Container.Resolve<IDateTime>();

        busQueueReader.AddHandler<ProgramCalculationMessage>("RabbitQueue", x =>
        {
            //This is code that gets called whenever we have a RabbitMQ message arrive
            try
            {
                //SupervisorActor is a singleton
                var supervisorActor = ContainerOperations.Instance.Container.ResolveNamed<IActorRef>("SupervisorActor");
                var actorMessage = new SomeActorMessage();
                var supervisorRunTask = supervisorActor.Ask(actorMessage, TimeSpan.FromMinutes(25));

                //we want to wait this guy out
                var supervisorRunResult = supervisorRunTask.GetAwaiter().GetResult();
                switch (supervisorRunResult)
                {
                    case CompletedEvent completed:
                    {
                        break;
                    }
                    case FailedEvent failed:
                    {
                        throw failed.Exception;
                    }
                }
            }
            catch (Exception ex)
            {
                _log.Error(ex, "Error found in Webjob");
                //throw it for the actual RabbitMqQueueReader Handler so message gets NACK
                throw;
            }
        });

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception ex)
    {
        _log.Error(ex, "Error found");
        throw;
    }
}
And this is the relevant IOC code (we are using Autofac + Akka.NET DI for Autofac)
builder.RegisterType<SupervisorActor>();

_actorSystem = new Lazy<ActorSystem>(() =>
{
    var akkaconf = ActorUtil.LoadConfig(_akkaConfigPath).WithFallback(ConfigurationFactory.Default());
    return ActorSystem.Create("WebJobSystem", akkaconf);
});

builder.Register<ActorSystem>(cont => _actorSystem.Value);

builder.Register(cont =>
{
    var system = cont.Resolve<ActorSystem>();
    return system.ActorOf(system.DI().Props<SupervisorActor>(), "SupervisorActor");
})
.SingleInstance()
.Named<IActorRef>("SupervisorActor");
The problem
So the code is working fine and doing what we want it to, apart from the Akka.NET "ask" timeout shown above in the WebJob code.
Annoyingly, this seems to work fine when I run the WebJob locally, where I can simulate an "ask" timeout by providing a supervisor actor that simply never responds with a message back to the sender.
That works perfectly on my machine, but when we run this code in Azure, we do NOT see a timeout for the "ask", even though one of our workflow runs exceeded the "ask" timeout by a mile.
I just don't know what could be causing this behavior. Does anyone have any ideas? Could there be some Azure-specific config value for the WebJob that I need to set?
The answer to this was to use the async Rabbit handlers, which apparently came out in v5.0 of the C# RabbitMQ client. The official docs still show the sync usage (sadly).
This article is quite good: https://gigi.nullneuron.net/gigilabs/asynchronous-rabbitmq-consumers-in-net/
Once we did this, all was good

go websockets eof

I'm trying to make a simple command forwarder to connect my home computer to a server I own, so that I can push commands to my server and my home PC gets them. Those commands are simple pause/resume for my downloader. My design is that, on the server, I run a hub instance, which exposes a "window" for passing commands and a "window" for the backend to pass those commands to my PC. I'm binding those two windows with a channel. When a client connects and sends a message to the hub, it gets streamed through a channel to the backend window and then to the real backend (on my home PC). When the backend responds to the backend window on the hub, the hub prints the result back to the client.
With this approach, only the first message passes through and works with my downloader. I have to reconnect the backend on my home PC to the hub each time I get a message to keep things working. I don't think that's the proper way to use websockets, so here I am. After one successful request (when the backend finishes its work and replies with the result), it loops forever with an EOF error.
The important parts of the code are:
main executable
hub handlers
backend connector
If you put the source in your GOPATH (I'm developing it against the tip version of Go to support modern websockets), compile it with go build gosab/cmd and run it with:
./cmd -mode="hub" hub
./cmd -mode="backend" --address="localhost:8082" backend
To pass messages to the hub, use this JavaScript:
var s = new WebSocket("ws://localhost:8082")
s.send("1 5")
So how do I handle it? Are channels a good way to communicate between two different requests?
I'm surprised you haven't received an answer to this.
What you need to do is something like the code below. When you receive an incoming websocket connection, a new goroutine is spawned for that connection. If you let that goroutine end, it'll disconnect the websocket client.
I'm making an assumption that you're not necessarily going to be running the client and server on the same computer. If you always are, then it'd be better to do the communication internally via channels or such instead of using websockets or a network port. I only mention this because I'm not completely sure what you're using this for. I just hope I answered the right part of your question.
package main

import (
    "flag"
    "fmt"
    "net/http"
    "os"
    "time"

    "golang.org/x/net/websocket" // successor path to the old code.google.com/p/go.net/websocket
)

type Message struct {
    RequestID      int
    Command        string
    SomeOtherThing string
    Success        bool
}

var mode *string = flag.String("mode", "<nil>", "Mode: server or client")
var address *string = flag.String("address", "localhost:8080", "Bind address:port")

func main() {
    flag.Parse()
    switch *mode {
    case "server":
        RunServer()
    case "client":
        RunClient()
    default:
        flag.Usage()
    }
}

func RunServer() {
    http.Handle("/", http.FileServer(http.Dir("www")))
    http.Handle("/server", websocket.Handler(WSHandler))
    fmt.Println("Starting Server")
    err := http.ListenAndServe(*address, nil)
    if err != nil {
        fmt.Printf("HTTP failed: %s\n", err.Error())
        os.Exit(1)
    }
}

func WSHandler(ws *websocket.Conn) {
    defer ws.Close()
    fmt.Println("Client Connected")
    for {
        var message Message
        err := websocket.JSON.Receive(ws, &message)
        if err != nil {
            fmt.Printf("Error: %s\n", err.Error())
            return
        }
        fmt.Println(message)
        // do something useful here...
        response := new(Message)
        response.RequestID = message.RequestID
        response.Success = true
        response.SomeOtherThing = "The hot dog left the castle as requested."
        err = websocket.JSON.Send(ws, response)
        if err != nil {
            fmt.Printf("Send failed: %s\n", err.Error())
            os.Exit(1)
        }
    }
}

func RunClient() {
    fmt.Println("Starting Client")
    ws, err := websocket.Dial(fmt.Sprintf("ws://%s/server", *address), "", fmt.Sprintf("http://%s/", *address))
    if err != nil {
        fmt.Printf("Dial failed: %s\n", err.Error())
        os.Exit(1)
    }
    incomingMessages := make(chan Message)
    go readClientMessages(ws, incomingMessages)
    i := 0
    for {
        select {
        case <-time.After(2 * time.Second):
            i++
            response := new(Message)
            response.RequestID = i
            response.Command = "Eject the hot dog."
            err = websocket.JSON.Send(ws, response)
            if err != nil {
                fmt.Printf("Send failed: %s\n", err.Error())
                os.Exit(1)
            }
        case message := <-incomingMessages:
            fmt.Println(message)
        }
    }
}

func readClientMessages(ws *websocket.Conn, incomingMessages chan Message) {
    for {
        var message Message
        err := websocket.JSON.Receive(ws, &message)
        if err != nil {
            fmt.Printf("Error: %s\n", err.Error())
            return
        }
        incomingMessages <- message
    }
}
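To try it out (assuming you save the file as main.go), run one instance in each mode; the address flag defaults to localhost:8080:

go run main.go -mode=server
go run main.go -mode=client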
