Notification feed following Flat feed isn't showing activities - getstream-io

I have a notification feed like NOTIFICATIONS:userID and I have a flat feed GLOBAL:domain.
The notification feed is set up to follow the flat feed, but when I push activities to the flat feed they are not coming through to the notification feed. I can't get them to come through either via the React components or by making the API calls directly. Activities added directly to the notification feed come through fine, but nothing from the flat feed does.
Is there anything I might have missed when setting up the feeds? I'm not sure why it isn't working.
Here's the code used to call getstream:
// AddNotification writes a feed notification to the provided feed.
func (c *Client) AddNotification(feedID, actor string, n *feed.Notification) error {
	keys := map[string]bool{}
	feeds := make([]stream.Feed, 0)
	for _, s := range n.Streams {
		if s == feed.STREAM_NONE {
			continue
		}
		if _, ok := keys[s.String()]; ok {
			continue
		}
		f, err := c.getstream.FlatFeed(s.String(), feedID)
		if err != nil {
			return errors.Wrapf(err, "failed to get feed %s", feedID)
		}
		keys[s.String()] = true
		feeds = append(feeds, f)
	}
	extra, err := getExtraFromString(n.Content)
	if err != nil {
		return errors.Wrap(err, "failed to marshal extra content")
	}
	appliesAt, err := time.FromProtoTS(n.GetAppliesAt())
	if err != nil {
		return errors.Wrap(err, "failed to cast applies at time")
	}
	activity := stream.Activity{
		Actor:     actor,
		Verb:      n.GetVerb(),
		Object:    n.GetObject(),
		Extra:     extra,
		ForeignID: n.GetIdempotentKey(),
		Time:      stream.Time{Time: appliesAt},
	}
	log.WithFields(log.Fields{
		"activity": activity,
		"feeds":    keys,
	}).Debug("sending request to stream.io")
	if err = c.getstream.AddToMany(activity, feeds...); err != nil {
		return errors.Wrap(err, "error while feeding to stream.io")
	}
	return nil
}
To explain the code a bit: we have a feed.Notification type that lets you specify what we've called "streams", which are just types that represent the feed slugs.
In this case, I'm using the GLOBAL:domain feed, which the user's NOTIFICATION:userID feed is set up to follow.

From the batch add docs:
Activities added using this method are not propagated to followers. That is, any other Feeds that follow the Feed(s) listed in the API call will not receive the new Activity.
So if you use the batch endpoint, you need to explicitly list every feed the activity should appear in. Alternatively, add the activity to each feed one by one with a regular add, which does fan out to followers.
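For illustration, a sketch of the per-feed alternative (assumptions noted in the comments):
// Hedged sketch, not from the original post: per-feed adds instead of the
// batch AddToMany call, so normal fan-out to followers applies. AddActivity's
// exact signature varies between stream-go client versions (some take a
// context.Context, some hang it off the concrete FlatFeed rather than the
// Feed interface); adjust to the client you actually use.
for _, f := range feeds {
	if _, err := f.AddActivity(activity); err != nil {
		return errors.Wrap(err, "error while adding activity to stream.io")
	}
}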

Related

Query for PageRanges of a Managed Disk Incremental Snapshot returns zero changes

We have a solution that takes incremental snapshots of all the disks of a virtual machine. Once a snapshot is created, the solution queries its page ranges to get the changed data.
The issue we are currently facing is that the page ranges come back empty even when it is the first snapshot for the disk and the disk has data. This happens intermittently for OS as well as data disks. Strangely, if a virtual machine has multiple disks, the page ranges return the expected information for some of them and come back empty for others.
We create the incremental disk snapshots as shown below, using the Azure SDK for Go. Here is sample code for the operations.
// Prepare azure snapshot api client
snapClient, err := armcompute.NewSnapshotsClient(subscriptionID, azureCred, nil)

// Configure snapshot parameters
snapParams := armcompute.Snapshot{
	Location: to.Ptr(location), // Snapshot location
	Name:     &snapshotName,
	Properties: &armcompute.SnapshotProperties{
		CreationData: &armcompute.CreationData{
			CreateOption:     to.Ptr(armcompute.DiskCreateOptionCopy),
			SourceResourceID: to.Ptr(getAzureIDFromName(constant.AzureDiskURI, subscriptionID, resourceGroupName, diskID, "")),
		}, // Disk ID for which the snapshot needs to be created
		Incremental: to.Ptr(true),
	},
}

// Create Disk Snapshot (Incremental)
snapPoller, err := snapClient.BeginCreateOrUpdate(ctx, resourceGroupName, snapshotName, snapParams, nil)
if err != nil {
	return nil, err
}
Once the snapshot has been created successfully, we collect the changed areas for the snapshot using the page ranges feature, as below.
// Grant Snapshot Access
resp, err := snapClient.BeginGrantAccess(ctx, resourceGroupName, snapshotId, armcompute.GrantAccessData{
	Access:            to.Ptr(armcompute.AccessLevelRead), // read access
	DurationInSeconds: to.Ptr(duration),                   // 1 hr
}, nil)
if err != nil {
	return "", err
}
grantResp, err := resp.PollUntilDone(ctx, nil)
if err != nil {
	return "", err
}
currentSnapshotSAS = grantResp.AccessSAS

// Create Page Blob Client
pageBlobClient, err := azblob.NewPageBlobClientWithNoCredential(currentSnapshotSAS, nil)
if err != nil {
	return nil, err
}
pageOption := &azblob.PageBlobGetPageRangesDiffOptions{}
pager := pageBlobClient.GetPageRangesDiff(pageOption)

// Gather Page Ranges for all the changed data
var pageRange []*azblob.PageRange
for pager.NextPage(ctx) {
	resp := pager.PageResponse()
	pageRange = append(pageRange, resp.PageRange...)
}

// Loop through page ranges and collect all changed data indexes
var changedAreaForCurrentIter int64
changedAreasString := ""
for _, page := range pageRange {
	length := (*page.End + 1) - *page.Start
	changedAreaForCurrentIter = changedAreaForCurrentIter + length
	changedAreasString = changedAreasString + strconv.FormatInt(*page.Start, 10) + ":" + strconv.FormatInt(length, 10) + ","
}
zap.S().Debug("Changed areas : [" + changedAreasString + "]")
It is this changed-areas string that intermittently comes back empty. We have compared the disk properties of the disks that succeed with those that fail and they are identical. There is no lock configured on the disks.
These are the SDK versions we are using:
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v3 v3.0.0
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.1.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.4.1
Can someone please provide pointers on what factors could make this problem appear intermittently?
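Not from the original post, but one hedged suggestion for narrowing down the intermittency: an incremental snapshot's background copy may still be in progress right after BeginCreateOrUpdate completes, and armcompute exposes a CompletionPercent property on the snapshot properties that reflects this. A rough sketch of waiting for the copy before granting access and reading page ranges (field and method names assumed from current armcompute versions):
// Hedged sketch (not the asker's code): wait until the incremental snapshot's
// background copy has finished before granting access and reading page ranges.
// Assumes imports of context, time and armcompute.
func waitForSnapshotCopy(ctx context.Context, snapClient *armcompute.SnapshotsClient,
	resourceGroupName, snapshotName string) error {
	for {
		resp, err := snapClient.Get(ctx, resourceGroupName, snapshotName, nil)
		if err != nil {
			return err
		}
		p := resp.Properties
		// Treat a missing CompletionPercent or a value of 100 as "copy done".
		if p == nil || p.CompletionPercent == nil || *p.CompletionPercent >= 100 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Second): // polling interval is an arbitrary choice
		}
	}
}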

Failed to fetch next results: QUERY_STATE_NEXT failed: transaction ID: XXXXX: no ledger context

I am trying to fetch data from the blockchain using a rich query in chaincode. I have written around 250,000 records to the blockchain and am trying to fetch them with a query. When I run the chaincode and check the peer logs, I see the error below.
Chaincode error in peer logs:
Failed to fetch next results: QUERY_STATE_NEXT failed: transaction ID: XXXXX: no ledger context
Here is my code:
queryStringsa := fmt.Sprintf("{\"selector\":{\"$and\":[{\"savesID\":{\"$ne\":\"%s\"}},{\"bankID\":{\"$eq\":\"%s\"}},{\"ytdSavedFlag\":{\"$ne\":\"%s\"}},{\"saveMonthYear\":{\"$eq\":\"%s\"}}]},\"use_index\":[\"_design/indexSavesDataReportDoc\",\"indexSavesDataReportName\"]}","null",bankidsave,"Yes",lastImportDatekey)
queryResultss11sa, errsav := getQueryResultForQueryString(stub, queryStringsa)
// getQueryResultForQueryString
func getQueryResultForQueryString(stub shim.ChaincodeStubInterface, queryString string) ([]byte, error) {
	_scbLogger.Infof("**********************************")
	_scbLogger.Infof("getQueryResultForQueryString queryString : " + queryString)
	_scbLogger.Infof("**********************************")
	resultsIterator, err := stub.GetQueryResult(queryString)
	if err != nil {
		_scbLogger.Error("Error Starting SCB-Efficiency Chaincode is " + err.Error())
		return nil, err
	}
	defer resultsIterator.Close()

	// buffer is a JSON array containing QueryRecords
	var buffer bytes.Buffer
	buffer.WriteString("[")
	bArrayMemberAlreadyWritten := false
	fmt.Println("resultsIterator : ", resultsIterator)
	for resultsIterator.HasNext() {
		queryResponse, err := resultsIterator.Next()
		//fmt.Println("queryResponse inside for next : ", queryResponse)
		if err != nil {
			fmt.Println("$$$$$$$$$$$ error in result iterator : ", err)
			return nil, err
		}
		// Add a comma before array members, suppress it for the first array member
		if bArrayMemberAlreadyWritten {
			buffer.WriteString(",")
		}
		buffer.WriteString("{\"Key\":")
		buffer.WriteString("\"")
		buffer.WriteString(queryResponse.Key)
		buffer.WriteString("\"")
		buffer.WriteString(", \"Record\":")
		// Record is a JSON object, so we write it as-is
		//fmt.Println("string(queryResponse.Value) : ", string(queryResponse.Value))
		buffer.WriteString(string(queryResponse.Value))
		buffer.WriteString("}")
		bArrayMemberAlreadyWritten = true
	}
	buffer.WriteString("]")
	//fmt.Printf("- getQueryResultForQueryString queryResult:\n%s\n", buffer.String())
	return buffer.Bytes(), nil
}
I have 5 different queries in the same function. Sometimes a few of the queries return results, and sometimes none of them do; instead I get the error above.
When I run the same queries in the CouchDB Fauxton UI I get the expected results. When I run the same function against a smaller number of records, the queries work properly without any errors.
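Not part of the original post, but for illustration: with result sets this large, one common mitigation in chaincode is to page through the query rather than pulling everything through a single iterator. A rough sketch using the shim's GetQueryResultWithPagination (available in Fabric 1.4+ shims; the function name here is hypothetical and the exact signature and metadata types may differ between shim versions; paginated queries must be run as read-only query transactions):
// Sketch only: paged variant of the helper above. Assumes the same imports
// (bytes, shim) as the original function.
func getQueryResultForQueryStringPaged(stub shim.ChaincodeStubInterface, queryString string) ([]byte, error) {
	const pageSize = int32(1000) // assumed page size; tune for your data
	bookmark := ""
	var buffer bytes.Buffer
	buffer.WriteString("[")
	first := true
	for {
		resultsIterator, meta, err := stub.GetQueryResultWithPagination(queryString, pageSize, bookmark)
		if err != nil {
			return nil, err
		}
		for resultsIterator.HasNext() {
			queryResponse, err := resultsIterator.Next()
			if err != nil {
				resultsIterator.Close()
				return nil, err
			}
			if !first {
				buffer.WriteString(",")
			}
			buffer.WriteString("{\"Key\":\"" + queryResponse.Key + "\", \"Record\":" + string(queryResponse.Value) + "}")
			first = false
		}
		resultsIterator.Close()
		// Stop once the last page has been consumed.
		if meta.FetchedRecordsCount < pageSize || meta.Bookmark == "" {
			break
		}
		bookmark = meta.Bookmark
	}
	buffer.WriteString("]")
	return buffer.Bytes(), nil
}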

GATT library directly reading characteristic values without iterating through services

I'm trying to use Go on a Raspberry Pi to query Bluetooth Low Energy devices. It's functional: I can connect to the device I want and iterate through the services and characteristics of the connected device. Now I'm trying to streamline things and read/write only the values I'm interested in, but it isn't working.
Code:
func onPeriphConnected(p gatt.Peripheral, err error) {
	fmt.Println("Connected")
	defer p.Device().CancelConnection(p)

	if err := p.SetMTU(500); err != nil {
		fmt.Printf("Failed to set MTU, err: %s\n", err)
	}

	batteryServiceId := gatt.MustParseUUID("180f")

	// Direct read attempt (not working)
	batterySerivce := gatt.NewService(batteryServiceId)
	batteryLevelUUID := gatt.MustParseUUID("2a19")
	batteryChar := gatt.NewCharacteristic(batteryLevelUUID, batterySerivce, gatt.Property(0x12), 0, 0)
	e, err := p.ReadCharacteristic(batteryChar)
	if err != nil {
		fmt.Printf("Failed to read battery level, err: %s\n", err)
	} else {
		fmt.Println(e)
	}

	// iterate services read (working)
	ss, err := p.DiscoverServices(nil)
	if err != nil {
		fmt.Printf("Failed to discover services, err: %s\n", err)
		return
	}
	for _, s := range ss {
		if s.UUID().Equal(batteryServiceId) {
			fmt.Println("Found the battery service")
			// Discover characteristics
			cs, err := p.DiscoverCharacteristics(nil, s)
			if err != nil {
				fmt.Printf("Failed to discover characteristics, err: %s\n", err)
				continue
			}
			for _, c := range cs {
				msg := "  Characteristic " + c.UUID().String()
				if len(c.Name()) > 0 {
					msg += " (" + c.Name() + ")"
				}
				msg += "\n    properties " + c.Properties().String()
				fmt.Println(msg)
				if (c.Properties() & gatt.CharRead) != 0 {
					b, err := p.ReadCharacteristic(c)
					if err != nil {
						fmt.Printf("Failed to read characteristic, err: %s\n", err)
						continue
					}
					fmt.Printf("    value %x\n", b)
				}
			}
		}
	}
}
Results:
Connected
[10 0 0 1]
Found the battery service
Characteristic 2a19 (Battery Level)
properties read notify
value 53
You can see that where I expect to get a hex value of 53, I'm instead getting the array [10 0 0 1]. I'm pretty new to Go, so I'm probably missing something here or just assembling my read incorrectly. Any pointers are much appreciated. Thanks!
A link to the appropriate documentation would be advisable. I don't know if this link is correct as there seem to be multiple different versions of package gatt.
Edit: see also https://godoc.org/github.com/paypal/gatt/examples/service#NewBatteryService at https://github.com/paypal/gatt/blob/master/examples/service/battery.go, which appears to show the right way of creating a battery service directly. Original answer below.
Having had a quick scan through said documentation, two things leap out at me:
The battery level ID is 2a19. The battery service ID is 180f. You use:
batteryServiceId := gatt.MustParseUUID("180f")
batterySerivce := gatt.NewService(batteryServiceId)
batteryChar := gatt.NewCharacteristic(batteryLevelUUID, batterySerivce,
gatt.Property(0x12), 0, 0)
e, err := p.ReadCharacteristic(batteryChar)
(I kept variable name spellings, but added a bit of white space to fit better on the StackOverflow display.) You never call NewDescriptor nor either of AddDescriptor or SetDescriptors on batteryChar. Are such calls required? I don't know; the documentation doesn't say. But the call that works uses DiscoverServices followed by DiscoverCharacteristics, which perhaps does create these (documented but undescribed) Descriptors. They look like they interpose themselves in value-oriented operations, so they might be critical.
Looking further at the code, after or instead of creating a characteristic directly, I think you do have to at least link the characteristic back into the service. The right call might be AddCharacteristic or SetCharacteristics. See this chunk of code in DiscoverCharacteristics.
(Minor.) gatt.Property(0x12) is definitely the wrong way to construct the constant. You probably should use:
gatt.CharRead | gatt.CharNotify
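As a further illustration (not from the original answer), one way to streamline the read without hand-building a Characteristic is to keep using discovery but filter it by UUID. This assumes the paypal/gatt Peripheral API accepts a []gatt.UUID filter where the question passes nil; a hedged sketch:
// Sketch only: filtered discovery instead of constructing the characteristic
// by hand, so the returned *Characteristic carries the handles the read needs.
batteryServiceId := gatt.MustParseUUID("180f")
batteryLevelUUID := gatt.MustParseUUID("2a19")

ss, err := p.DiscoverServices([]gatt.UUID{batteryServiceId})
if err != nil || len(ss) == 0 {
	fmt.Printf("Failed to discover battery service, err: %v\n", err)
	return
}
cs, err := p.DiscoverCharacteristics([]gatt.UUID{batteryLevelUUID}, ss[0])
if err != nil || len(cs) == 0 {
	fmt.Printf("Failed to discover battery level characteristic, err: %v\n", err)
	return
}
b, err := p.ReadCharacteristic(cs[0])
if err != nil {
	fmt.Printf("Failed to read battery level, err: %s\n", err)
} else {
	fmt.Printf("battery level %x\n", b)
}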

Hyperledger Fabric Go chaincode not working as expected: stores data on the ledger when seeded manually but not when storing via a function call

I am trying to store fund transfer records on Hyperledger Fabric. I have written the chaincode in Go. It works fine when I add data in the initLedger function, but when I call it from another function such as createTransfer (I will provide both pieces of code), the transaction shows as successful, yet when I retrieve the chain data the record does not appear in it.
The Transfer struct:
type Transfer struct {
	TransferID  string `json:"transferID"`
	FromAccount string `json:"fromAccount"`
	ToAccount   string `json:"toAcount"`
	Amount      string `json:"amount"`
}
This function writes data to the ledger; it works fine when I call it directly from the initLedger method:
func writeTransferToLedger(APIStub shim.ChaincodeStubInterface, transfers []Transfer) sc.Response {
	for i := 0; i < len(transfers); i++ {
		key := transfers[i].TransferID
		chkBytes, _ := APIStub.GetState(key)
		if chkBytes == nil {
			asBytes, _ := json.Marshal(transfers[i])
			err := APIStub.PutState(transfers[i].TransferID, asBytes)
			if err != nil {
				return shim.Error(err.Error())
			}
		} else {
			msg := "Transfer already exist" + key + " Failure---------------"
			return shim.Error(msg)
		}
	}
	return shim.Success([]byte("Write to Ledger"))
}
Calling the writeTransferToLedger method within the createTransfer function:
func (s *SmartContract) createTransfer(APIStub shim.ChaincodeStubInterface, args []string) sc.Response {
	if len(args) != 4 {
		return shim.Error("Incorrect Number of arguments for transfer func, Expecting 4")
	}
	transfers := []Transfer{{TransferID: args[0], FromAccount: args[1], ToAccount: args[2], Amount: args[3]}}
	writeTransferToLedger(APIStub, transfers)
	return shim.Success([]byte("stored:" + args[0] + args[1] + args[2] + args[3]))
}
When I call createTransfer from the Node SDK code, it executes successfully, but when I retrieve the data from the chaincode, nothing is returned. I expect it to work through the createTransfer function in the same way it works when writeTransferToLedger is called from initLedger.
Inside the initLedger method I create Transfer structs with the given data and call the writeTransferToLedger function; the code is given below:
transfer := []Transfer{
	{TransferID: "1233", FromAccount: "US_John_Doe_123", ToAccount: "UK_Alice_456", Amount: "200"},
	{TransferID: "231", FromAccount: "JPY_Alice_456", ToAccount: "UK_John_Doe", Amount: "3000"},
}
writeTransferToLedger(APIstub, transfer)
Thanks for your help. I have resolved the issue: I was calling the invoke function when trying to retrieve data from the ledger. I have to query the ledger instead to get the transfer data.
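For illustration only (not part of the original resolution), a minimal chaincode-side read sketch using the same shim API as above; the function name and argument layout are hypothetical, and the retrieval path would call it as a query/evaluate rather than an invoke:
// Sketch: read a single Transfer back by its TransferID using GetState.
func (s *SmartContract) queryTransfer(APIStub shim.ChaincodeStubInterface, args []string) sc.Response {
	if len(args) != 1 {
		return shim.Error("Incorrect number of arguments for queryTransfer, expecting 1 (TransferID)")
	}
	asBytes, err := APIStub.GetState(args[0])
	if err != nil {
		return shim.Error(err.Error())
	}
	if asBytes == nil {
		return shim.Error("Transfer not found: " + args[0])
	}
	return shim.Success(asBytes)
}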

How to search for a string in indexed Elasticsearch documents in Go?

I am writing a function in Go to search for a string in indexed Elasticsearch documents. I am using the Elasticsearch Go client elastic. For example, consider this Tweet object:
type Tweet struct {
	User     string
	Message  string
	Retweets int
}
And the search function is
func SearchProject() error {
	// Search with a term query
	termQuery := elastic.NewTermQuery("user", "olivere")
	searchResult, err := client.Search().
		Index("twitter").   // search in index "twitter"
		Query(&termQuery).  // specify the query
		Sort("user", true). // sort by "user" field, ascending
		From(0).Size(10).   // take documents 0-9
		Pretty(true).       // pretty print request and response JSON
		Do()                // execute
	if err != nil {
		// Handle error
		panic(err)
		return err
	}

	// searchResult is of type SearchResult and returns hits, suggestions,
	// and all kinds of other information from Elasticsearch.
	fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

	// Each is a convenience function that iterates over hits in a search result.
	// It makes sure you don't need to check for nil values in the response.
	// However, it ignores errors in serialization. If you want full control
	// over iterating the hits, see below.
	var ttyp Tweet
	for _, item := range searchResult.Each(reflect.TypeOf(ttyp)) {
		t := item.(Tweet)
		fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
	}

	// TotalHits is another convenience function that works even when something goes wrong.
	fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())

	// Here's how you iterate through results with full control over each step.
	if searchResult.Hits != nil {
		fmt.Printf("Found a total of %d tweets\n", searchResult.Hits.TotalHits)
		// Iterate through results
		for _, hit := range searchResult.Hits.Hits {
			// hit.Index contains the name of the index
			// Deserialize hit.Source into a Tweet (could also be just a map[string]interface{}).
			var t Tweet
			err := json.Unmarshal(*hit.Source, &t)
			if err != nil {
				// Deserialization failed
			}
			// Work with tweet
			fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
		}
	} else {
		// No hits
		fmt.Print("Found no tweets\n")
	}
	return nil
}
This search prints tweets by the user 'olivere'. But if I pass 'olive', the search does not work. How do I search for a string that is part of User/Message/Retweets?
And the indexing function looks like this:
func IndexProject(p *objects.ElasticProject) error {
	// Index a tweet (using JSON serialization)
	tweet1 := `{"user" : "olivere", "message" : "It's a Raggy Waltz"}`
	put1, err := client.Index().
		Index("twitter").
		Type("tweet").
		Id("1").
		BodyJson(tweet1).
		Do()
	if err != nil {
		// Handle error
		panic(err)
		return err
	}
	fmt.Printf("Indexed tweet %s to index %s, type %s\n", put1.Id, put1.Index, put1.Type)
	return nil
}
Output:
Indexed tweet 1 to index twitter, type tweet
Got document 1 in version 1 from index twitter, type tweet
Query took 4 milliseconds
Tweet by olivere: It's a Raggy Waltz
Found a total of 1 tweets
Found a total of 1 tweets
Tweet by olivere: It's a Raggy Waltz
Versions:
Go 1.4.2
Elasticsearch 1.4.4
Elasticsearch Go library: github.com/olivere/elastic
Could anyone help me with this? Thank you.
How you search and find data depends on your analyser - from your code it's likely that the standard analyser is being used (i.e. you haven't specified an alternative in your mapping).
The Standard Analyser will only index complete words. So to match "olive" against "olivere" you could either:
Change the search process
e.g. switch from a term query to a Prefix query or use a Query String query with a wildcard.
Change the index process
If you want to find strings within larger strings then look at using nGrams or Edge nGrams in your analyser.
multiQuery := elastic.NewMultiMatchQuery(
	term,
	"name", "address", "location", "email", "phone_number", "place", "postcode",
).Type("phrase_prefix")
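To make the first option concrete, here is a hedged sketch of a prefix query with the same olivere/elastic client. It mirrors the question's Query(&termQuery) style; in newer client versions the query builders return pointers, so the & would be dropped and Do takes a context:
// Sketch only: replace the exact-term query with a prefix query so "olive"
// matches "olivere".
prefixQuery := elastic.NewPrefixQuery("user", "olive")
searchResult, err := client.Search().
	Index("twitter").
	Query(&prefixQuery). // prefix match on the "user" field
	From(0).Size(10).
	Do()
if err != nil {
	return err
}
fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())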
