For our IoT solution we are trying to tackle a synchronization issue with the device twin.
In the normal situation the cloud is in charge: the cloud sets a desired property in the IoT Hub device twin, the device gets a notification, changes the property locally, and writes the reported property to signal that it is in sync.
But in our case the user of the device can also change properties locally. The reported property then changes and is out of sync with the desired property.
How should we handle this? Update the desired property? Leave it as is?
Another case is that properties can be deleted from both sides; see the attached picture.
Written use cases
Here is an example of the JSON twin:
"desired" : {
    "recipes" : {
        "recipe1" : {
            "uri" : "blob.name.csv",
            "version" : "1"
        },
        "recipe2" : {
            "uri" : "blob.name.csv",
            "version" : "1"
        },
        "recipe3" : {
            "uri" : "blob.name.csv",
            "version" : "1"
        }
    }
},
"reported" : {
    "recipes" : {
        "recipe1" : {
            "uri" : "blob.name.csv",
            "version" : "1"
        },
        "recipe2" : {
            "uri" : "blob.name.csv",
            "version" : "3"
        },
        "recipe3" : {
            "uri" : "blob.name.csv",
            "version" : "2"
        }
    }
}
I hope the question is clear. Thanks in advance.
Kind regards,
Marc
The approach to conflict resolution is specific to the business; it's not possible to define a universal rule. In some scenarios the user's intent is more important than the service's, and vice versa.
For instance, an employee working late wants an office temperature of 76F while the automatic building-management service wants a temperature of 70F out of hours; in this case the user wins (the desired property is discarded). In another example, an employee wants to enter the office building out of hours and turn on all the lights, but the building-management service won't allow it (while a building admin would be allowed instead), etc.
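As a rough sketch, such a business-specific policy can be encoded as a small reconciliation function. This is hypothetical illustration only, not part of the Azure IoT SDK; the `userWins` flag stands in for whatever business rule decides whose intent takes priority:

```javascript
// Hypothetical conflict resolution between a twin's desired and reported
// values. "userWins" encodes one possible policy: local user changes are
// back-propagated to the desired property instead of being overwritten.
function reconcile(desired, reported, userWins) {
  if (desired === reported) {
    return { value: desired, action: 'in-sync' };
  }
  return userWins
    ? { value: reported, action: 'update-desired' } // accept the local change
    : { value: desired, action: 'update-device' };  // cloud wins, push to device
}
```

In the temperature example above, `reconcile(70, 76, true)` would keep the employee's 76F and update the desired property, while `userWins = false` would push the service's 70F back to the device.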
Related
I am working with Microsoft 365 teams, channels and SharePoint Online sites/site collections. I have the Microsoft Graph API and the PnP.SharePoint PowerShell module at my disposal (among other APIs).
Private channels get their own SharePoint sites. I have the following data for the private channel:
> Get-PnPTeamsChannel -Team 9e0e388c-ad9e-40c4-a7f5-406060b175af | FL
Type :
Tabs :
TabResources :
Messages :
DisplayName : Private Channel
MembershipType : private
Description :
IsFavoriteByDefault :
Id : 19:2e708ed1ddee425fbcb6f509ea8d497c#thread.tacv2
Members :
I have the following data for the corresponding SharePoint site:
> Get-PnPTenantSite https://redacted.sharepoint.com/sites/Marketing-PrivateChannel | FL
AllowDownloadingNonWebViewableFiles : False
AllowEditing : True
AllowSelfServiceUpgrade : True
AnonymousLinkExpirationInDays : 0
BlockDownloadLinksFileType : WebPreviewableFiles
CommentsOnSitePagesDisabled : False
CompatibilityLevel : 15
ConditionalAccessPolicy : AllowFullAccess
DefaultLinkPermission : None
DefaultLinkToExistingAccess : False
DefaultSharingLinkType : None
DenyAddAndCustomizePages : Enabled
Description :
DisableAppViews : NotDisabled
DisableCompanyWideSharingLinks : NotDisabled
DisableFlows : NotDisabled
DisableSharingForNonOwnersStatus :
ExternalUserExpirationInDays : 0
GroupId : 00000000-0000-0000-0000-000000000000
HubSiteId : 00000000-0000-0000-0000-000000000000
InformationSegment :
IsHubSite : False
LastContentModifiedDate : 18.01.2022 07:44:15
LimitedAccessFileType : WebPreviewableFiles
LocaleId : 1031
LockIssue :
LockState : Unlock
OverrideTenantAnonymousLinkExpirationPolicy : False
OverrideTenantExternalUserExpirationPolicy : False
Owner : [REDACTED]
OwnerEmail : [REDACTED]
OwnerLoginName : [REDACTED]
OwnerName : [REDACTED]
ProtectionLevelName :
PWAEnabled : Disabled
RelatedGroupId : 9e0e388c-ad9e-40c4-a7f5-406060b175af
ResourceQuota : 300
ResourceQuotaWarningLevel : 200
ResourceUsageAverage : 0
ResourceUsageCurrent : 0
RestrictedToGeo : Unknown
SandboxedCodeActivationCapability : Disabled
SensitivityLabel :
SharingAllowedDomainList :
SharingBlockedDomainList :
SharingCapability : ExternalUserSharingOnly
SharingDomainRestrictionMode : None
ShowPeoplePickerSuggestionsForGuestUsers : False
SiteDefinedSharingCapability : ExternalUserSharingOnly
SocialBarOnSitePagesDisabled : False
Status : Active
StorageQuota : 26214400
StorageQuotaType :
StorageQuotaWarningLevel : 25574400
StorageUsageCurrent : 0
Template : TEAMCHANNEL#1
Title : Marketing - Private Channel
Url : https://redacted.sharepoint.com/sites/Marketing-PrivateChannel
WebsCount : 1
I cannot see anything sensible linking these two together.
I have found a way, but don't like it and don't know if it's reliable. I can make the following Graph API call:
https://graph.microsoft.com/v1.0/teams/9e0e388c-ad9e-40c4-a7f5-406060b175af/channels/19:2e708ed1ddee425fbcb6f509ea8d497c#thread.tacv2/filesFolder
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#teams('9e0e388c-ad9e-40c4-a7f5-406060b175af')/channels('19%3A2e708ed1ddee425fbcb6f509ea8d497c%40thread.tacv2')/filesFolder/$entity",
"id": "01C2YTDRBXDNT52I3PDBHYAWPR7R5ZFVHC",
"createdDateTime": "0001-01-01T00:00:00Z",
"lastModifiedDateTime": "2022-01-25T14:55:05Z",
"name": "Private Channel",
"webUrl": "https://redacted.sharepoint.com/sites/Marketing-PrivateChannel/Freigegebene%20Dokumente/Private%20Channel",
"size": 0,
"parentReference": {
"driveId": "b!oc1UdWZkGke6dY_CL8UCbH68JWN0VxpOmPzYFUC8hoMZ8jgTqQM5S5QHyRVAZnAR",
"driveType": "documentLibrary"
},
"fileSystemInfo": {
"createdDateTime": "2022-01-25T14:55:05Z",
"lastModifiedDateTime": "2022-01-25T14:55:05Z"
},
"folder": {
"childCount": 0
}
}
Now I could compare the returned webUrl of the filesFolder with the Url of the SharePoint site ("contains").
Is there another way or a better way? Thank you very much in advance.
You can use the tenant admin site to get the SharePoint URLs of the private channels. I wrote a description down in https://notdoneyet.blog/powershell/ms%20teams/2022/12/05/Find-private-channels.html
It's not entirely clear what you actually want to do here, but in simple terms, yes, you could match them up this way. It isn't really necessary, though: you can get the SharePoint Online site right from your filesFolder call. You have:
https://redacted.sharepoint.com/sites/Marketing-PrivateChannel/Freigegebene%20Dokumente/Private%20Channel
That follows a standard pattern:
https://[tenant].sharepoint.com/[something, but generally 'sites']/[the site part]/[document library name - note that this is language specific, like in your case it is in German]/[private channel name].
If you remove the last two parts (document library name and private channel name), you're left with the site URL. You can use a regex or something similar to grab that first part; another option is to remove the https:// prefix, split the string on /, and take the segment at index 2 as the site name.
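A minimal Node sketch of the splitting approach, assuming the webUrl always follows the `https://[tenant]/sites/[site]/...` layout shown above:

```javascript
// Derive the SharePoint site URL from the channel's filesFolder webUrl by
// keeping only the scheme, host, "sites" segment, and site name.
// Assumes the standard https://[tenant]/sites/[site]/... layout.
function siteUrlFromFilesFolder(webUrl) {
  return webUrl.split('/').slice(0, 5).join('/');
}

const folderUrl =
  'https://redacted.sharepoint.com/sites/Marketing-PrivateChannel/Freigegebene%20Dokumente/Private%20Channel';
// → 'https://redacted.sharepoint.com/sites/Marketing-PrivateChannel'
console.log(siteUrlFromFilesFolder(folderUrl));
```

The result can then be compared directly against the `Url` property returned by `Get-PnPTenantSite`.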
I am using the Firebase Realtime Database. I don't want to fetch all child nodes for a particular parent node; I am only interested in one particular child node, not its sibling nodes. Fetching all the sibling nodes increases my Firebase billing, as an extra XXX MB of data is fetched. I am using the Node.js admin library for fetching this.
Adding a sample JSON
{
"phone" : {
"shsjsj" : {
"battery" : {
"isCharging" : true,
"level" : 0.25999999046325684,
"updatedAt" : "2018-05-15 12:45:29"
},
"details" : {
"deviceCodeName" : "sailfish",
"deviceManufacturer" : "Google",
"deviceName" : "Google Pixel"
},
"downloadFiles" : {
"7bfb21ff683f8652ea390cd3a380ef53" : {
"uploadedAt" : 1526141772270
}
},
"token" : "cgcGiH9Orbs:APA91bHDT3mI5L3N62hqUT2LojqsC0IhwntirCd6x0zz1CmVBz6CqkrbC",
"uploadServer" : {
"createdAt" : 1526221336542
}
},
"hshssjjs" : {
"battery" : {
"isCharging" : true,
"level" : 0.25999999046325684,
"updatedAt" : "2018-05-15 12:45:29"
},
"details" : {
"deviceCodeName" : "sailfish",
"deviceManufacturer" : "Google",
"deviceName" : "Google Pixel"
},
"downloadFiles" : {
"7bfb21ff683f8652ea390cd3a380ef53" : {
"uploadedAt" : 1526141772270
}
},
"token" : "cgcGiH9Orbs:APA91bH_oC18U56xct4dRuyw9qhI5L3N62hqUT2LojqsC0IhwntirCd6x0zz1CmVBz6CqkrbC",
"uploadServer" : {
"createdAt" : 1526221336542
}
}
}
}
In the above sample JSON, I want to fetch phone -> $deviceId -> token for every device. Currently I fetch the whole phone object and then iterate over all the phone IDs to read each token. This spikes my database download usage and increases billing. I only need the token of each device; the token's siblings are unnecessary.
All queries to Realtime Database fetch everything under the location requested. There is no way to limit to certain children under that location. If you want only certain children at a location, but not everything under that location, you'll have to query for each one of them separately. Or, you can restructure or duplicate your data to support the specific queries you want to perform - duplication is common for nosql type databases.
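As a sketch of the duplication approach (the index path and helper below are hypothetical, not part of the Firebase SDK): keep a flat token index alongside the nested data, and have clients read only that index.

```javascript
// Build a flat {deviceId: token} index from the nested "phone" object.
// Writing this index to a separate path (e.g. /phoneTokens) means a client
// that only needs tokens downloads just this small map, not every device's
// battery/details/downloadFiles siblings.
function buildTokenIndex(phones) {
  const index = {};
  for (const [deviceId, device] of Object.entries(phones)) {
    if (device && typeof device.token === 'string') {
      index[deviceId] = device.token;
    }
  }
  return index;
}
```

The index must then be kept in sync whenever a token changes, for example with a multi-location update that writes both `/phone/$deviceId/token` and `/phoneTokens/$deviceId` in a single call.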
We upgraded from Graylog 2.1.3 to 2.3.2 and now receive this message repeatedly. Some parts of the UI load, but not Search or Streams. Alerts are still going out. Does anyone know how I can fix this? Rolling back does not seem to work at all.
Could not apply filter [StreamMatcher] on message <d8fa4293-dc7a-11e7-bc81-0a206782e8c1>:
java.lang.IllegalStateException: index set must not be null! (stream id=5a00a043a9b2c72984c581b6 title="My Streams")
What seems to have happened is that some streams did not get the "index_set_id" field added to their definition in the streams collection in Mongo. Here is an example of a bad one:
{
"_id" : ObjectId("5a1d6bb2a9b2c72984e24dc0"),
"creator_user_id" : "admin",
"matching_type" : "AND",
"description" : "EU2 Queue Prod",
"created_at" : ISODate("2017-11-28T13:59:14.546Z"),
"disabled" : false,
"title" : "EU2 Queue Prod",
"content_pack" : null
}
I was able to add the "index_set_id" : "59bb08b469d42f3bcfa6f18e" value and restore the streams:
{
"_id" : ObjectId("5a1d6bb2a9b2c72984e24dc0"),
"creator_user_id" : "admin",
"index_set_id" : "59bb08b469d42f3bcfa6f18e",
"matching_type" : "AND",
"description" : "EU2 Queue Prod",
"created_at" : ISODate("2017-11-28T13:59:14.546Z"),
"disabled" : false,
"title" : "EU2 Queue Prod",
"content_pack" : null
}
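That manual repair can be sketched as a pure function. The default index set id is the one from this installation; treat it as an example value, and apply the resulting documents with whatever Mongo tooling you prefer:

```javascript
// Return a copy of a stream document with index_set_id filled in when it is
// missing; documents that already have the field are returned unchanged.
function withIndexSet(stream, defaultIndexSetId) {
  if ('index_set_id' in stream) {
    return stream;
  }
  return { ...stream, index_set_id: defaultIndexSetId };
}
```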
I faced this issue too, with another version of Graylog in a Kubernetes environment.
I took the following actions to fix it:
From the Graylog UI, under the Streams menu, select More actions next to your stream (in your case: My Streams), click Edit stream, and select "Default index set" from the drop-down list.
Do this for all the available streams.
I am trying to solve a problem that occurs when inserting related nodes in Neo4j. Nodes are inserted by several threads using the standard save method of org.springframework.data.neo4j.repository.GraphRepository.
Sometimes the insertion fails when fetching a related node in order to define a relationship. The exception messages look like this: org.neo4j.graphdb.NotFoundException: '__type__' on http://neo4j:7474/db/data/relationship/105550.
Calling this URL from curl returns a JSON object which appears to have __type__ correctly defined, which suggests that the exception is caused by a race between inserting threads.
The method that originates the calls to the repository is annotated with @Neo4jTransactional. What atomicity and transaction isolation does @Neo4jTransactional guarantee? And how should I use it for multi-threaded insertions?
Update:
I have now been able to see this happening in the debugger. The code is trying to fetch the node at one end of this relationship, together with all its relationships. It throws an exception because the __type__ attribute is missing. This is the JSON initially returned:
{
"extensions" : { },
"start" : "http://localhost:7474/db/data/node/617",
"property" : "http://localhost:7474/db/data/relationship/533/properties/{key}",
"self" : "http://localhost:7474/db/data/relationship/533",
"properties" : "http://localhost:7474/db/data/relationship/533/properties",
"type" : "CONTAINS",
"end" : "http://localhost:7474/db/data/node/650",
"metadata" : {
"id" : 533,
"type" : "CONTAINS"
},
"data" : { }
}
A few seconds later, the same REST call returns this JSON:
{
"extensions" : { },
"start" : "http://localhost:7474/db/data/node/617",
"property" : "http://localhost:7474/db/data/relationship/533/properties/{key}",
"self" : "http://localhost:7474/db/data/relationship/533",
"properties" : "http://localhost:7474/db/data/relationship/533/properties",
"type" : "CONTAINS",
"end" : "http://localhost:7474/db/data/node/650",
"metadata" : {
"id" : 533,
"type" : "CONTAINS"
},
"data" : {
"__type__" : "ProductRelationship"
}
}
I can't understand why there is such a long delay between inserting the relationship and specifying the type. Why doesn't it all happen at once?
I am starting to develop an online football management game using Node.js and MongoDB, but I don't know whether I should use multiple collections or put everything in one. Example:
{
"_id" : ObjectId("5118ee01032016dc02000001"),
"country" : "Aruba",
"date" : "February 11th 2013, 3:11:29 pm",
"email" : "tadad#adadasdsd.com",
"name" : "test",
"pass" : "9WcFwIITRp0e82ca3c3b314a656bfb437553b1d013",
"team" : {
"name" : "teamname",
"logo" : "urltologo",
"color" : "color",
"players" : [{
"name" : "name",
"surname" : "surname",
"tackling" : 58,
"finishing" : 84,
"pace" : 51,
....
}, {
"name" : "name",
"surname" : "surname",
"start_age" : 19,
"tackling" : 58,
"finishing" : 84,
"pace" : 51,
...
}],
"stadium" : {
"name" : "stadium",
"capacity" : 50000,
"pic" : "http://urltopic",
....
}
}
}
or create different collections for users, fixtures, players, teams ? Or any other method ?
When I started with MongoDB, I went by the mantra of 'embed everything', which is exactly what you're doing above. However, there needs to be some consideration for sub-documents that can grow to be very large. You should think about how often you'll be updating a particular document or subdocument as well. For instance, your players are probably going to be updated on a regular basis, so you'd probably want to put them in their own collection for ease of use. Anyway, the flexibility of MongoDB makes it so that there's really no 'right' answer to this problem, but it may help you to refer to the docs on data modeling.
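To illustrate the split suggested above, here is a hypothetical sketch (collection layout, id scheme, and field names are made up) that normalizes the embedded document into separate user, team, and player documents linked by ids:

```javascript
// Normalize one embedded user document into separate documents destined for
// users, teams, and players collections, linked by generated ids.
// The id scheme here is illustrative only.
function normalize(userDoc) {
  const teamId = userDoc._id + '-team';
  const players = userDoc.team.players.map((p, i) => ({
    _id: teamId + '-p' + i,
    teamId,
    ...p,
  }));
  const team = {
    _id: teamId,
    ownerId: userDoc._id,
    name: userDoc.team.name,
    logo: userDoc.team.logo,
    color: userDoc.team.color,
    stadium: userDoc.team.stadium, // small and rarely updated: keep embedded
  };
  const user = {
    _id: userDoc._id,
    name: userDoc.name,
    email: userDoc.email,
    teamId,
  };
  return { user, team, players };
}
```

With this shape, frequent player updates touch only small player documents instead of rewriting one large user document.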
There is no hard and fast rule for designing schemas in Mongo. A lot depends on your application's data access patterns, the frequency of data access, and the relationships between different entities: how they shrink, grow, or change, and which of them stay intact. It is not feasible to give advice without knowing how your application is supposed to work. I recommend you consult a book such as MongoDB in Action, which has advice on how to design a schema in Mongo properly, taking into account application-specific requirements.