I use the Go driver (github.com/arangodb/go-driver) to talk to ArangoDB, and this issue seems to happen sporadically, but often enough to be concerned about.
A request to read a single document fails with the error "operation was canceled". Not much to add here, except that it might be a timeout.
var contributor model.Contributor
_, err = col.ReadDocument(ctx, id, &contributor)
if err != nil {
    log.Errorf("cannot read contributor %s document: %v", id, err)
    return nil, err
}
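If it is a timeout, one variant I might try (a sketch only, assuming the cancellation comes from a too-short or already-cancelled parent context) is giving the read its own explicit deadline:
// Sketch: derive a dedicated context for the read so an unrelated parent
// cancellation can't abort it; the 10-second budget is an arbitrary guess.
readCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

var contributor model.Contributor
if _, err := col.ReadDocument(readCtx, id, &contributor); err != nil {
    log.Errorf("cannot read contributor %s document: %v", id, err)
    return nil, err
}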
Related
Context: I'm writing a Terraform Provider
A typical pattern (#1) in resource implementations is for the Create/CreateContext function to return the result of the Read/ReadContext function at the end, so the Terraform State is filled in for all attributes.
Another typical pattern (#2) in resource implementations is for the Read/ReadContext function to remove the resource from the Terraform State when the remote system returns an error or status indicating that the remote resource no longer exists, by explicitly calling d.SetId("") and returning no error. If the remote system is not strongly read-after-write consistent (i.e., it is eventually consistent), this means resource creation can return no error and yet also produce no resource state.
HashiCorp's official tutorial endorses pattern #2:
When you create something in Terraform but delete it manually,
Terraform should gracefully handle it. If the API returns an error
when the resource doesn't exist, the read function should check to see
if the resource is available first. If the resource isn't available,
the function should set the ID to an empty string so Terraform
"destroys" the resource in state. The following code snippet is an
example of how this can be implemented; you do not need to add this to
your configuration for this tutorial.
if resourceDoesntExist {
    d.SetId("")
    return
}
Here's a minimal working code sample that demonstrates both patterns #1 and #2:
func resourceServiceThingCreate(d *schema.ResourceData, meta interface{}) error {
    /* ... */
    return resourceServiceThingRead(d, meta)
}

func resourceServiceThingRead(d *schema.ResourceData, meta interface{}) error {
    /* ... */
    output, err := conn.DescribeServiceThing(input)

    if !d.IsNewResource() && ErrCodeEquals(err, 404) {
        log.Printf("[WARN] {Service} {Thing} (%s) not found, removing from state", d.Id())
        d.SetId("")
        return nil
    }

    if err != nil {
        return fmt.Errorf("error reading {Service} {Thing} (%s): %w", d.Id(), err)
    }

    /* ... */
}
The problem is when I add an importer to the resource using
Importer: &schema.ResourceImporter{
    StateContext: schema.ImportStatePassthroughContext,
},
which basically calls resourceServiceThingRead(d, meta). If conn.DescribeServiceThing(input) returns 404 (e.g., the user made a typo and passed a wrong ID, causing the 404 status code), the TF provider prints no errors:
terraform import resource_foo.example-2 foo-1234
resource_foo.example-2: Importing from ID "foo-1234"...
resource_foo.example-2: Import prepared!
Prepared resource_foo for import
resource_foo.example-2: Refreshing state... [id=foo-1234]
Import successful!
# but the client actually received 404 error that it ignored
since this branch will be executed:
if !d.IsNewResource() && ErrCodeEquals(err, 404) {
    log.Printf("[WARN] {Service} {Thing} (%s) not found, removing from state", d.Id())
    d.SetId("")
    // prints no error and confuses the user
    return nil
}
instead of this one:
if err != nil { // print out & return a descriptive error }
Is there a way to mark an imported resource as new (or something similar) so that I can surface the real 404 error for a failed import attempt?
Otherwise it looks like we'll have to drop schema.ImportStatePassthroughContext and use a copy-pasted resourceServiceThingRead_2() that skips the d.SetId("") check, just to surface descriptive 404 errors for imports.
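For reference, the direction I'm experimenting with (a sketch only; Client is a hypothetical provider client type, and DescribeServiceThing/ErrCodeEquals are the placeholders from the snippet above) is a custom StateContext that checks existence up front and fails the import loudly:
Importer: &schema.ResourceImporter{
    StateContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
        conn := meta.(*Client) // hypothetical client type
        _, err := conn.DescribeServiceThing(d.Id())
        if ErrCodeEquals(err, 404) {
            // surface the 404 instead of silently importing nothing
            return nil, fmt.Errorf("cannot import {Service} {Thing} (%s): not found", d.Id())
        }
        if err != nil {
            return nil, fmt.Errorf("error reading {Service} {Thing} (%s): %w", d.Id(), err)
        }
        return []*schema.ResourceData{d}, nil
    },
},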
I have two smart contract definitions currently committed on the same channel. Both are written in Go and are fairly basic, only doing basic CRUD operations. However, I've noticed that key/value pairs written with one chaincode are unavailable to the other.
So, with go-audit I created the following record:
But when I then tried to perform a get operation on key ping with chaincode go-asset, the key was not found, and the following error was returned by the chaincode:
bad request: failed to invoke go-asset: Error: No valid responses from any peers. Errors: **someurl***, status=500, message=the asset ping does not exist.
This is the transaction that reads:
func (c *GoAssetContract) ReadGoAsset(ctx contractapi.TransactionContextInterface, goAssetID string) (*GoAsset, error) {
    exists, err := c.GoAssetExists(ctx, goAssetID)
    if err != nil {
        return nil, fmt.Errorf("could not read from world state. %s", err)
    } else if !exists {
        return nil, fmt.Errorf("the asset %s does not exist", goAssetID)
    }

    bytes, err := ctx.GetStub().GetState(goAssetID)
    if err != nil {
        return nil, fmt.Errorf("could not read from world state. %s", err)
    }

    goAsset := new(GoAsset)
    if err = json.Unmarshal(bytes, goAsset); err != nil {
        return nil, fmt.Errorf("could not unmarshal world state data to type GoAsset")
    }

    return goAsset, nil
}
and the GoAsset struct
// GoAsset stores a value
type GoAsset struct {
    Value string `json:"value"`
}
Shouldn't the world state be available to all chaincodes approved/committed on a channel?
Chaincodes deployed to the same channel are namespaced, so their keys remain specific to the chaincode that wrote them. What you see with two chaincodes deployed to the same channel is working as designed: they cannot see each other's keys.
However, a chaincode can consist of multiple distinct contracts, and in that case the contracts do have access to each other's keys, because they are still part of the same chaincode deployment.
You can have one chaincode invoke/query another chaincode on the same channel using the InvokeChaincode() API. The called chaincode can return keys/values to the caller. With this approach it is possible to embed all access-control logic in the called chaincode.
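For illustration, a minimal sketch of that cross-chaincode call (the chaincode name, function name, and channel name are all placeholders, not values from your setup):
// Sketch: from inside go-asset, query the go-audit chaincode on the same channel.
// args[0] is whatever function name the called chaincode exposes; shim is
// github.com/hyperledger/fabric-chaincode-go/shim.
args := [][]byte{[]byte("ReadKey"), []byte("ping")}
response := ctx.GetStub().InvokeChaincode("go-audit", args, "mychannel")
if response.Status != shim.OK {
    return nil, fmt.Errorf("failed to query go-audit: %s", response.Message)
}
value := response.Payload // bytes returned by the called chaincode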
I'm referring to the client library at https://github.com/Azure/azure-service-bus-go. I was able to write a simple client which listens to a subscription and reads the messages. If I drop the network, after a few seconds I can see the receive loop exit with the error message "context canceled". I was hoping the client library would provide some kind of retry mechanism and handle connection issues, server timeouts, etc. Do we need to handle this ourselves? Also, do we get any errors that we could identify and use for such a retry mechanism? Any sample code would be highly appreciated.
Below is the sample code I tried (only including the vital parts).
err = subscription.Receive(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) error {
    fmt.Println(string(message.Data))
    return message.Complete(ctx)
}))

if err != nil {
    fmt.Println("FATAL: ", err)
    return
}
Unfortunately, the older library doesn't do a good job of retrying, so you'll need to wrap the code in your own retry loop.
func demoReceiveWithRecovery(ns *servicebus.Namespace) {
    parentCtx := context.TODO()

    for {
        q, err := ns.NewQueue(queueName)
        if err != nil {
            panic(err)
        }

        handler := servicebus.HandlerFunc(func(c context.Context, m *servicebus.Message) error {
            log.Printf("Received message")
            return m.Complete(c)
        })

        err = q.Receive(parentCtx, handler)

        // check for a potentially recoverable situation
        if err != nil && errors.Is(err, context.Canceled) {
            // This workaround distinguishes between the handler cancelling because the
            // library has disconnected vs the parent context (passed in by you) being cancelled.
            if parentCtx.Err() != nil {
                // our parent context was cancelled, which cancelled the entire operation
                log.Printf("Cancelled by parent context")
                _ = q.Close(parentCtx)
                return
            }

            // cancelled internally due to an error; close and restart the queue
            log.Printf("Error occurred, restarting")
            _ = q.Close(parentCtx)
            continue
        }

        // any other error (or nil): close the client and restart
        log.Printf("Other error, closing client and restarting: %v", err)
        _ = q.Close(parentCtx)
    }
}
NOTE: This library was deprecated recently (after you posted your question). The new package is azservicebus if you want to try it, with a migration guide to make things easier: migrationguide.md.
The new package should recover properly in this scenario for receiving. There is an open bug (#16695) in the new package that I'm working on, to make the sending side do recovery as well.
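For reference, a receive loop in the new package looks roughly like this (a sketch; the connection string and queue name are placeholders, and error handling is reduced to panics for brevity):
// Sketch using github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus.
client, err := azservicebus.NewClientFromConnectionString(connectionString, nil)
if err != nil {
    panic(err)
}

receiver, err := client.NewReceiverForQueue(queueName, nil)
if err != nil {
    panic(err)
}
defer receiver.Close(context.TODO())

for {
    // ReceiveMessages blocks until messages arrive and recovers from
    // transient connection failures internally.
    messages, err := receiver.ReceiveMessages(context.TODO(), 10, nil)
    if err != nil {
        panic(err)
    }

    for _, message := range messages {
        log.Printf("Received: %s", string(message.Body))
        if err := receiver.CompleteMessage(context.TODO(), message, nil); err != nil {
            panic(err)
        }
    }
}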
If you want to ask further questions or have feature requests, you can submit those in the GitHub issues for https://github.com/Azure/azure-sdk-for-go.
I have the following code, which works OK. The issue is that when socket.Connect() fails to connect, I want to stop the process. I've tried with the code below, but it's not working; i.e., if the socket fails to connect, the program still runs.
What I want to happen is that if the connect fails, the process stops via the done channel. What am I missing here?
func run(appName string) (err error) {
    methodName := "run"
    done = make(chan bool)
    defer close(done)

    serviceURL, e := GetContext().getServiceURL(appName)
    if e != nil {
        err = errors.New("process failed: " + e.Error())
        LogDebug("Exiting %v func[err =%v]", methodName, err)
        return err
    }

    url := "wss://" + serviceURL + route
    socket := gowebsocket.New(url)
    addPass(&socket, user, pass)

    socket.OnConnectError = OnConnectErrorHandler
    socket.OnConnected = OnConnectedHandler
    socket.OnTextMessage = socketTextMessageHandler
    socket.OnDisconnected = OnDisconnectedHandler

    LogDebug("In %v func connecting to URL %v", methodName, url)
    socket.Connect()

    jsonBytes, e := json.Marshal(payload)
    if e != nil {
        err = errors.New("build process failed: " + e.Error())
        LogDebug("Exiting %v func[err =%v]", methodName, err)
        return err
    }

    jsonStr := string(jsonBytes)
    LogDebug("In %v Connecting to payload JSON is %v", methodName, jsonStr)
    socket.SendText(jsonStr)

    <-done

    LogDebug("Exiting %v func[err =%v]", methodName, err)
    return err
}

func OnConnectErrorHandler(err error, socket gowebsocket.Socket) {
    methodName := "OnConnectErrorHandler"
    LogDebug("Starting %v parameters [err = %v , socket = %v]", methodName, err, socket)
    LogInfo("Disconnected from server ")
    done <- true
}
The process should open one ws connection for a process that runs about 60-90 seconds (like executing npm install), stream the logs of the process via the web socket until it finishes, and of course handle issues that could come up, like a network problem or an error while running the process.
So, @Slabgorb is correct: if you look here (https://github.com/sacOO7/GoWebsocket/blob/master/gowebsocket.go#L87) you will see that OnConnectError is called synchronously during the execution of your call to Connect(). The Connect() function doesn't kick off a separate goroutine to handle the websocket until after the connection is fully established and the OnConnected callback has completed. So when you try to write to the unbuffered channel done, you block the same goroutine that called run() in the first place, and you deadlock yourself, because no goroutine will ever be able to read from the channel to unblock you.
So you could go with his solution and turn it into a buffered channel, and that will work. But my suggestion would be not to write to a channel for this sort of one-time flag behavior, and to use close signaling instead: define a channel for each condition that should terminate run(), and in the appropriate websocket handler, close the channel when that condition occurs. At the bottom of run(), select on all the channels and exit when the first one closes. It would look something like this:
package main

import "errors"

func run(appName string) (err error) {
    // first, define one channel per socket-closing-reason (DO NOT defer close these channels.)
    connectErrorChan := make(chan struct{})
    successDoneChan := make(chan struct{})
    surpriseDisconnectChan := make(chan struct{})

    // next, wrap calls to your handlers in a closure (https://gobyexample.com/closures)
    // that captures a reference to the channel you care about
    OnConnectErrorHandler := func(err error, socket gowebsocket.Socket) {
        MyOnConnectErrorHandler(connectErrorChan, err, socket)
    }
    OnDisconnectedHandler := func(err error, socket gowebsocket.Socket) {
        MyOnDisconnectedHandler(surpriseDisconnectChan, err, socket)
    }
    // ... declare any other handlers that might close the connection here

    // Do your setup logic here
    // serviceURL, e := GetContext().getServiceURL(appName)
    // . . .
    // socket := gowebsocket.New(url)
    socket.OnConnectError = OnConnectErrorHandler
    socket.OnConnected = OnConnectedHandler
    socket.OnTextMessage = socketTextMessageHandler
    socket.OnDisconnected = OnDisconnectedHandler

    // Prepare and send your message here...
    // LogDebug("In %v func connecting to URL %v", methodName, url)
    // . . .
    // socket.SendText(jsonStr)

    // now wait for one of your signalling channels to close.
    select { // this will block until one of the handlers signals an exit
    case <-connectErrorChan:
        err = errors.New("never connected :( ")
    case <-successDoneChan:
        socket.Close()
        LogDebug("mission accomplished! :) ")
    case <-surpriseDisconnectChan:
        err = errors.New("somebody cut the wires! :O ")
    }

    if err != nil {
        LogDebug("exiting with error: %v", err)
    }
    return err
}

// *Your* connect error handler will take an extra channel as a parameter
func MyOnConnectErrorHandler(done chan struct{}, err error, socket gowebsocket.Socket) {
    methodName := "OnConnectErrorHandler"
    LogDebug("Starting %v parameters [err = %v , socket = %v]", methodName, err, socket)
    LogInfo("Disconnected from server ")
    close(done) // signal we are done.
}
This has a few advantages:
1) You don't need to guess which callbacks happen in-process and which happen in background goroutines (and you don't have to make all your channels buffered 'just in case')
2) Selecting on the multiple channels lets you find out why you are exiting and maybe handle cleanup or logging differently.
Note 1: If you choose to use close signaling, you have to use different channels for each source in order to avoid race conditions that might cause a channel to get closed twice from different goroutines (e.g., a timeout fires just as you get back a response and both handlers run; the second handler to close the same channel causes a panic, as the sketch below shows). This is also why you don't want to defer close all the channels at the top of the function.
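To make that race concrete, a tiny sketch of the failure mode:
ch := make(chan struct{})
close(ch)
close(ch) // panic: close of closed channel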
Note 2: Not directly relevant to your question, but you don't need to close every channel; once all references to it go out of scope, the channel will be garbage collected whether or not it has been closed.
OK, what is happening is that the channel blocks when you try to add something to it. Try initializing the done channel with a buffer (I used 1), like this:
done = make(chan bool, 1)
I'm trying to pull messages from an Azure Service Bus queue using Go, but I get an error when running the code. Here is my code.
func Example_queue_receive() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    connectionString := "Endpoint=sb://{my_service_name}.servicebus.windows.net/;SharedAccessKeyName = RootManageSharedAccessKey;SharedAccessKey={my_shared_access_key_value}"

    // Create a client to communicate with a Service Bus Namespace.
    ns, err := servicebus.NewNamespace(servicebus.NamespaceWithConnectionString(connectionString))
    if err != nil {
        fmt.Println(err)
    }

    // Create a client to communicate with the queue.
    q, err := ns.NewQueue("MyQueueName")
    if err != nil {
        fmt.Println("FATAL: ", err)
    }

    err = q.ReceiveOne(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) servicebus.DispositionAction {
        fmt.Println(string(message.Data))
        return message.Complete()
    }))
    if err != nil {
        fmt.Println("FATAL: ", err)
    }
}
This is the error:
link detached, reason: *Error{Condition: amqp:not-found}
I searched for the error information in the GitHub repo, and I found the code ErrorNotFound MessageErrorCondition = "amqp:not-found", but there is no explanation for the error.
I compared it with the exception types in C# from the official document Service Bus messaging exceptions and with my own testing, and I think it corresponds to the case below.
In my environment (go version go1.11.3 windows/amd64), I ran similar code without an existing queue MyQueueName, and I got the similar error below.
FATAL: unhandled error link xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx: status code 404 and description: The messaging entity 'sb://.servicebus.windows.net/MyQueueName' could not be found. TrackingId:f9fc309d-xxxx-xxxx-xxxx-8fccd694f266_G42, SystemTracker:.servicebus.windows.MyQueueName, Timestamp:2019-01-25T09:45:28
So I think the error means the queue MyQueueName in your code does not exist in your Azure Service Bus namespace; you should create it first before using it.
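For example, you can create it from code with the queue manager before receiving (a sketch; the entity name is a placeholder):
// Sketch: ensure the queue exists before creating a receiving client.
qm := ns.NewQueueManager()
if _, err := qm.Put(ctx, "MyQueueName"); err != nil {
    fmt.Println("FATAL: ", err)
    return
}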
Meanwhile, as @JerryLiu said, your code below has some mistakes.
err = q.ReceiveOne(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) servicebus.DispositionAction {
    fmt.Println(string(message.Data))
    return message.Complete()
}))
According to the godoc for azure-service-bus-go, the parameter of servicebus.HandlerFunc must be a HandlerFunc, which is a function that returns error, not servicebus.DispositionAction as in your code.
And the method message.Complete should be passed a ctx parameter (a context object) and returns error, which does not match servicebus.DispositionAction. The message.CompleteAction method does return servicebus.DispositionAction, but it is not suitable in the message-receiving code.
Please refer to the godoc example Example (QueueSendAndReceive) to modify your code.
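Putting the two fixes together, the receiving part should look roughly like this (matching the pattern in that example):
err = q.ReceiveOne(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) error {
    fmt.Println(string(message.Data))
    return message.Complete(ctx) // Complete takes the context and returns error
}))
if err != nil {
    fmt.Println("FATAL: ", err)
}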