Mark GetStream activity as seen and read - node.js

I want to apply mark_seen and mark_read to a specific getstream.io activity, but the counter stays the same. Here is my code:
const notification = client.feed('NOTIFICATION', '1');
notification.get({ mark_seen: ['1111111-f22d-333-33-0000001f'] }); //getstream activity id
Update:
Here is my get stream response:
{
"results": [
{
"activities": [
{
"actor": "user:1",
"foreign_id": "POST:50",
"id": "00000000-0000-11e0-8000-800001be0000",
"object": "POST:50",
"origin": null,
"target": "USER:1",
"time": "2018-01-12T14:08:03.000000",
"verb": "CREATE"
}
],
"activity_count": 1,
"actor_count": 1,
"created_at": "2018-01-12T14:08:12.324882",
"group": "user:1_POST:50",
"id": "111111bb-f1a1-11e1-1111-111111111b11.user:1_POST:50",
"is_read": false,
"is_seen": false,
"updated_at": "2018-01-12T14:08:12.324882",
"verb": "CREATE"
}
],
"next": "",
"duration": "25.85ms",
"unseen": 3,
"unread": 3
}
I am using activity.id to mark my activity as read, but it's not working.
However, it does work (and decrements the counter) when I use the top-level id (the notification group id).
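For reference, here is a minimal sketch of marking by notification group id with the getstream Node client (the credentials are placeholders; the group id is the top-level id field from the response above):

const stream = require('getstream');

// Placeholder credentials: use your own key, secret and app id.
const client = stream.connect('YOUR_API_KEY', 'YOUR_API_SECRET', 'YOUR_APP_ID');
const notification = client.feed('NOTIFICATION', '1');

// mark_read / mark_seen accept either true (mark everything) or an array of
// notification group ids, i.e. the top-level "id" of each result group,
// not the id of the nested activity.
notification
  .get({ limit: 10, mark_read: ['111111bb-f1a1-11e1-1111-111111111b11.user:1_POST:50'] })
  .then((response) => console.log(response.unread, response.unseen))
  .catch(console.error);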

Sharepoint REST API - post comment on behalf of another user

There is a way to add comments to a SharePoint site using the REST API; it is explained, for example, at https://beaucameron.com/2021/01/18/add-comments-to-sharepoint-list-items-using-the-rest-api/.
But when I add a comment like this, it is added under my name, because the REST endpoint is accessed with an access token that is linked to my e-mail.
I'd like to migrate comments from one site to the other, and keep original authors.
Is there a way to post comments on behalf of other users?
I tried this POST body:
{
"__metadata": {
"type": "Microsoft.SharePoint.Comments.comment"
},
"text": "Some new comment",
"author": {
"__metadata": {
"type": "SP.Sharing.Principal"
},
"email": "AlexW#OnMicrosoft.com",
"id": 18,
"loginName": "i:0#.f|membership|alexw#onmicrosoft.com",
"name": "Alex Wilber",
"principalType": 1
}
}
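For context, the body above is posted with a call roughly like this (just a sketch, not my exact code; the endpoint comes from the response metadata below, and the token handling is an assumption):

// Rough sketch of how the comment body above gets posted. The list GUID and
// item id come from the response metadata below; fetch is the built-in
// Node 18+ fetch, and the access token is the one tied to my own account.
async function addComment(accessToken, commentBody) {
  const endpoint =
    "https://sharepoint.com/_api/web/lists('017dd808-5a37-4d65-89f9-b5ce994554b4')/GetItemById(1)/Comments";
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      Accept: "application/json;odata=verbose",
      "Content-Type": "application/json;odata=verbose",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify(commentBody),
  });
  return response.json();
}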
But still, the comment is posted under my name. The response looks like the following:
{
"d": {
"__metadata": {
"id": "https://sharepoint.com/_api/web/lists('017dd808-5a37-4d65-89f9-b5ce994554b4')/GetItemById(1)/Comments(15)",
"uri": "https://sharepoint.com/_api/web/lists('017dd808-5a37-4d65-89f9-b5ce994554b4')/GetItemById(1)/Comments(15)",
"type": "Microsoft.SharePoint.Comments.comment"
},
"likedBy": {
"__deferred": {
"uri": "https://sharepoint.com/_api/web/lists('017dd808-5a37-4d65-89f9-b5ce994554b4')/GetItemById(1)/Comments(15)/likedBy"
}
},
"replies": {
"__deferred": {
"uri": "https://sharepoint.com/_api/web/lists('017dd808-5a37-4d65-89f9-b5ce994554b4')/GetItemById(1)/Comments(15)/replies"
}
},
"author": {
"__metadata": {
"type": "SP.Sharing.Principal"
},
"email": "myName.mySurname#onmicrosoft.com",
"expiration": null,
"id": 12,
"isActive": true,
"isExternal": false,
"jobTitle": null,
"loginName": "i:0#.f|membership|myName.mySurname#onmicrosoft.com",
"name": "myName mySurname",
"principalType": 1,
"userId": null,
"userPrincipalName": null
},
"createdDate": "2022-05-24T08:40:19.0841947Z",
"id": "15",
"isLikedByUser": false,
"isReply": false,
"itemId": 1,
"likeCount": 0,
"listId": "017dd808-5a37-4d65-89f9-b5ce994554b4",
"mentions": null,
"parentId": "0",
"replyCount": 0,
"text": "Some new comment"
}
}
So still, I'm the author of the comment...

Why does the Stripe Checkout webhook send customer_email as null?

I am trying to get the customer email after a payment with Stripe's new Checkout interface. The JSON posted by the Stripe webhook always sends customer_email with a null value.
The Stripe Checkout page asks for the customer's email, so I don't understand why Stripe sends this value back as null.
However, the customer value is not null.
{
"id": "evt_1FItv8Kj5elW7ZcvEuY6",
"object": "event",
"api_version": "2019-03-14",
"created": 1568539286,
"data": {
"object": {
"id": "cs_test_123123123",
"object": "checkout.session",
"billing_address_collection": null,
"cancel_url": "https://www.example.fr/canceled",
"client_reference_id": null,
"customer": "cus_FoWzBx2yusHfs9",
"customer_email": null,
"display_items": [
{
"amount": 1000,
"currency": "eur",
"quantity": 1,
"sku": {
"id": "sku_1234567",
"object": "sku",
"active": true,
"attributes": {
"name": "Product test"
},
"created": 1568538814,
"currency": "eur",
"image": null,
"inventory": {
"quantity": null,
"type": "infinite",
"value": null
},
"livemode": false,
"metadata": {
},
"package_dimensions": null,
"price": 1000,
"product": "prod_FoWr00dX3",
"updated": 1568538814
},
"type": "sku"
}
],
"livemode": false,
"locale": null,
"mode": "payment",
"payment_intent": "pi_1FItj5elW70Z2",
"payment_method_types": [
"card"
],
"setup_intent": null,
"submit_type": null,
"subscription": null,
"success_url": "https://www.example.fr/success"
}
},
"livemode": false,
"pending_webhooks": 1,
"request": {
"id": null,
"idempotency_key": null
},
"type": "checkout.session.completed"
}
The email the customer entered is actually on the Customer object that the Checkout Session links to. [0] The customer_email field is something else (it's the field your code might have set to prefill an email into the Session).
So retrieve the Customer object from the API (cus_FoWzBx2yusHfs9) and check the email field there, or retrieve the Session object and expand the customer field.
[0] - https://stripe.com/docs/api/customers/object#customer_object-email
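A minimal sketch of both options with the stripe Node library (the secret key is a placeholder, and event is assumed to be the parsed checkout.session.completed payload shown above):

const stripe = require('stripe')('sk_test_...'); // placeholder secret key

async function getCustomerEmail(event) {
  const session = event.data.object; // the checkout.session from the webhook payload

  // Option 1: fetch the Customer the session points to and read its email.
  const customer = await stripe.customers.retrieve(session.customer); // cus_FoWzBx2yusHfs9
  console.log(customer.email);

  // Option 2: re-retrieve the session with the customer expanded.
  const expanded = await stripe.checkout.sessions.retrieve(session.id, {
    expand: ['customer'],
  });
  console.log(expanded.customer.email);

  return customer.email;
}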

Working with both Array and Record types in an Azure Stream Analytics query

I'm new to Azure Stream Analytics queries. My scenario is using Continuous Export to write Application Insights telemetry to Azure Blob storage and a Stream Analytics job to push the data from Blob storage to Power BI. My JSON file has both Array and Record types, as follows:
{
"request": [
{
"id": "|HLHUdGy4c3g=.556f8524_",
"name": "HEAD Todos/Index",
"count": 1,
"responseCode": 200,
"success": true,
"url": "http://todoapp20183001.azurewebsites.net/",
"urlData": {
"base": "/",
"host": "todoapp20183001.azurewebsites.net",
"hashTag": "",
"protocol": "http"
},
"durationMetric": {
"value": 973023,
"count": 1,
"min": 973023,
"max": 973023,
"stdDev": 0,
"sampledValue": 973023
}
}
],
"internal": {
"data": {
"id": "124c5c1c-0820-11e8-a590-d95f25fd3f7f",
"documentVersion": "1.61"
}
},
"context": {
"data": {
"eventTime": "2018-02-02T13:50:39.591Z",
"isSynthetic": false,
"samplingRate": 100
},
"cloud": {},
"device": {
"type": "PC",
"roleName": "todoapp20183001",
"roleInstance": "RD0003FF6D001A",
"screenResolution": {}
},
"user": {
"isAuthenticated": false
},
"session": {
"isFirst": false
},
"operation": {
"id": "HLHUdGy4c3g=",
"parentId": "HLHUdGy4c3g=",
"name": "HEAD Todos/Index"
},
"location": {
"clientip": "35.153.211.0",
"continent": "North America",
"country": "United States",
"province": "Virginia",
"city": "Ashburn"
},
"custom": {
"dimensions": [
{
"_MS.ProcessedByMetricExtractors": "(Name:'Requests', Ver:'1.0')"
}
]
}
}
}
Using the following query, I can get the expected output.
WITH Request AS
(
    SELECT
        context.location.country AS country,
        context.location.city AS city,
        GetArrayElement(request, 0) AS requests
    FROM FromBlob
)
SELECT country, city, requests.name
FROM Request
Now I need to count all the requests by city, but I cannot seem to get it done with COUNT() and GROUP BY. Is there a hint or reference I could look at for this case?
Here's an example that counts the number of requests every 5 minutes.
Note that I had to add a time component to GROUP BY, since your data is streaming data and you want the aggregate over a finite time window.
WITH Request AS
(
    SELECT
        context.location.country AS country,
        context.location.city AS city,
        GetArrayElement(request, 0) AS requests
    FROM iothub
)
SELECT country, city, COUNT(requests.name)
FROM Request
GROUP BY country, city, SlidingWindow(minute, 5)
Let me know if it works for you.
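If fixed, non-overlapping 5-minute buckets suit you better than a sliding window, the same aggregate can also be written with a tumbling window; this is just a sketch that keeps the rest of the query unchanged (FromBlob is the input alias from the question):

WITH Request AS
(
    SELECT
        context.location.country AS country,
        context.location.city AS city,
        GetArrayElement(request, 0) AS requests
    FROM FromBlob
)
SELECT country, city, COUNT(requests.name) AS requestCount
FROM Request
GROUP BY country, city, TumblingWindow(minute, 5)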

MongoDB create product summary collection

Say I have a product collection like this:
{
"_id": "5a74784a8145fa1368905373",
"name": "This is my first product",
"description": "This is the description of my first product",
"category": "34/73/80",
"condition": "New",
"images": [
{
"length": 1000,
"width": 1000,
"src": "products/images/firstproduct_image1.jpg"
},
...
],
"attributes": [
{
"name": "Material",
"value": "Synthetic"
},
...
],
"variation": {
"attributes": [
{
"name": "Color",
"values": ["Black", "White"]
},
{
"name": "Size",
"values": ["S", "M", "L"]
}
]
}
}
and a variation collection like this:
{
"_id": "5a748766f5eef50e10bc98a8",
"name": "color:black,size:s",
"productID": "5a74784a8145fa1368905373",
"condition": "New",
"price": 1000,
"sale": null,
"image": [
{
"length": 1000,
"width": 1000,
"src": "products/images/firstvariation_image1.jpg"
}
],
"attributes": [
{
"name": "Color",
"value": "Black"
},
{
"name": "Size",
"value": "S"
}
]
}
I want to keep the documents separate, and for easy browsing, searching, and faceted-search implementation I want to fetch all the data in a single query, but I don't want to do the join in my application code.
I know it's achievable using a third collection called summary that might look like this:
{
"_id": "5a74875fa1368905373",
"name": "This is my first product",
"category": "34/73/80",
"condition": "New",
"price": 1000,
"sale": null,
"description": "This is the description of my first product",
"images": [
{
"length": 1000,
"width": 1000,
"src": "products/images/firstproduct_image1.jpg"
},
...
],
"attributes": [
{
"name": "Material",
"value": "Synthetic"
},
...
],
"variations": [
{
"condition": "New",
"price": 1000,
"sale": null,
"image": [
{
"length": 1000,
"width": 1000,
"src": "products/images/firstvariation_image.jpg"
}
],
"attributes": [
"color=black",
"size=s"
]
},
...
]
}
The problem is, I don't know how to keep the summary collection in sync with the product and variation collections. I know it can be done using mongo-connector, but I'm not sure how to implement it.
Please help me; I'm still a beginner programmer.
You don't actually need to maintain a summary collection; it's redundant to store the product and variation summary in another collection.
Instead, you can use an aggregation pipeline with $lookup to left-outer-join product and variation using productID.
Aggregation pipeline:
db.products.aggregate([
    {
        $lookup: {
            from: "variation",
            localField: "_id",
            foreignField: "productID",
            as: "variations"
        }
    }
]).pretty()
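As a usage example, the same pipeline can be limited to a single product by matching on _id first (just a sketch; the _id value is the one from your sample product document):

db.products.aggregate([
    // Restrict the join to one product before the lookup.
    { $match: { _id: "5a74784a8145fa1368905373" } },
    {
        $lookup: {
            from: "variation",
            localField: "_id",
            foreignField: "productID",
            as: "variations"
        }
    }
]).pretty()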

Is context telemetry "grouped" during the sampling of request telemetry?

Is context telemetry "grouped" during the sampling of request telemetry?
For example, the data below contains a request which has a sample count of 10 ("count": 10), meaning that it is being used to represent 9 other "similar" requests; 90% of the telemetry has actually been discarded.
Does Application Insights only sample data together when the context data is exactly the same for the requests? For example, can I assume that the other 9 requests were also from 41.191.204.0 and have a custom field company of value 22f0141f-b3dc-53e1-86b8-dd0727c14497?
{
"request": [
{
"id": "bs6o2dRoL/Q=",
"name": "GET /api/resources",
"count": 10,
"responseCode": 200,
"success": true,
"url": "https://example.com/api/resources",
"urlData": {
"base": "/api/resources",
"host": "example.com",
"hashTag": "",
"protocol": "https"
},
"durationMetric": {
"value": 1073743.0,
"count": 11.0,
"min": 97613.0,
"max": 97613.0,
"stdDev": 0.0,
"sampledValue": 97613.0
}
}
],
"internal": {
"data": {
"id": "8cbd12ec-9780-11e6-b38b-c5e9335e7642",
"documentVersion": "1.61"
}
},
"context": {
"application": {
"version": "1.0.16286.5"
},
"data": {
"eventTime": "2016-10-21T11:21:16.942Z",
"isSynthetic": false,
"samplingRate": 9.09090909090909
},
"device": {
"type": "PC",
"osVersion": "Windows 10",
"roleInstance": "RD0003FF727A10",
"deviceName": "Other",
"deviceModel": "Other",
"browser": "Chrome",
"browserVersion": "Chrome 53.0",
},
"user": {
"isAuthenticated": false
},
"session": {
"isFirst": false
},
"operation": {
"id": "bs6o2dRoL/Q=",
"parentId": "bs6o2dRoL/Q=",
"name": "GET Resources/GetResourceAsync [id]"
},
"location": {
"clientip": "41.191.204.0",
"continent": "Africa",
"country": "South Africa",
"province": "Eastern Cape"
},
"custom": {
"dimensions": [
{
"company": "22f0141f-b3dc-53e1-86b8-dd0727c14497"
},
{
"factor": "100"
}
]
}
}
}
Application Insights does not group telemetry events based on the context, but based on the Operation ID. This is synchronized between the SDK sampling and the server side sampling to make sure you will be able to navigate between related page views and requests.
So if you want to make sure some events are grouped together in sampling, set their OperationId to be the same.
See the Application Insights documentation for full details on how it implements its sampling.
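As a rough sketch of that idea (assuming the Node.js applicationinsights SDK and its tagOverrides option; the instrumentation key and event names are placeholders), two events can be forced into the same sampling decision by sharing an operation id:

const appInsights = require('applicationinsights');
appInsights.setup('YOUR_INSTRUMENTATION_KEY').start();
const client = appInsights.defaultClient;

// Telemetry items that carry the same ai.operation.id are sampled in or out
// together, so related items stay navigable after sampling.
const operationId = 'bs6o2dRoL/Q=';
const tagOverrides = { [client.context.keys.operationId]: operationId };

client.trackEvent({ name: 'resources-requested', tagOverrides });
client.trackEvent({ name: 'resources-returned', tagOverrides });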
Hope this helps,
Asaf
