Hyperledger Fabric: removing an organisation from the channel configuration results in an error

Below is my use case:
I have two organisations, Org1 and Org2. When I remove Org2 from the channel configuration, everything works well and the organisation is removed from the channel. But when I remove Org1, which is at index 0 in the config JSON, and submit the update to the channel, it results in the error listed below. It seems I can only remove organisations in LIFO (last in, first out) order, but I want to be able to remove an organisation irrespective of the order in which it was added.
error applying config update to existing channel 'mychannel': initializing policymanager failed: policy Admins at path Channel/Application did not compile: identity index out of range, requested 1, but identities length is 1

It looks like you are not properly updating any policies that refer to the organization being removed.
When you remove an organization, you must remove the entire entry under Application.groups, for example Application.groups.Org1MSP.
You must also remove the organization from any policies under Application.policies, for example Application.policies.Admins.
Depending on the policy type, you may have one or more policies that look like the following (the example below is equivalent to OR('Org1MSP.admin', 'Org2MSP.admin')):
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "Org1MSP",
"role": "ADMIN"
},
"principal_classification": "ROLE"
},
{
"principal": {
"msp_identifier": "Org2MSP",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
},
{
"signed_by": 1
}
]
}
},
"version": 0
}
},
"version": "0"
}
It is not enough to remove the organization from the policy.value.identities array. The signed_by values in the policy.value.rule.n_out_of.rules array reference entries (by their array index) in the policy.value.identities array.
The error you are seeing suggests that you still have a policy somewhere with a signed_by value of 1, but the corresponding policy.value.identities array for that policy now only has a length of 1, so the only valid index is 0.
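For example, after removing Org1MSP, the policy above would have to be rewritten so that the remaining identity sits at index 0 and the rules reference only that index. A sketch of the corrected policy (your actual policies may differ):
"Admins": {
  "mod_policy": "Admins",
  "policy": {
    "type": 1,
    "value": {
      "identities": [
        {
          "principal": {
            "msp_identifier": "Org2MSP",
            "role": "ADMIN"
          },
          "principal_classification": "ROLE"
        }
      ],
      "rule": {
        "n_out_of": {
          "n": 1,
          "rules": [
            {
              "signed_by": 0
            }
          ]
        }
      },
      "version": 0
    }
  },
  "version": "0"
}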

Related

Azure Policy - GitHub Actions - Authoring Custom Policies

I'm testing out GitHub and GitHub Actions for doing policy as code for Azure. I have been successful in following Microsoft's tutorials, where you export the policy you want to manage from the Azure portal to GitHub. This works fine, and I'm able to edit and run the workflows to update Azure with changes to policies.
What I'd like to know is: can you create NEW policies in GitHub and push them to Azure? It seems that you need to first export a custom policy from Azure into GitHub before you can manage that policy. I say this because when I create a new policy and a workflow for that policy, I get the following error in GitHub from the workflow:
> Did not find any policies to create/update. No policy files match the
> given patterns or no changes were detected.
The policy I have in the folder is called "policy.json"
I also see:
> Error occured while reading policy in path :
> policies/global_tagging_policy. Error : Error: Path :
> policies/global_tagging_policy. Property id is missing from the policy
> definition. Please add id to the definition file.
That leads me to believe I need an ID prior to being able to push a policy, which says to me that Azure must have assigned one... I can't just make one up.
This is the policy I'm trying to push - just a tagging policy for testing. I don't have an ID in there; I read that you don't need to add one and that Azure would do it for you. Am I wrong?
{
  "properties": {
    "displayName": "test-policy",
    "description": "this is a test policy",
    "mode": "indexed",
    "parameters": {
      "tagName": {
        "type": "String",
        "metadata": {
          "displayName": "Tag Name",
          "description": "Name of the tag, such as 'environment'"
        }
      },
      "tagValue": {
        "type": "String",
        "metadata": {
          "displayName": "Tag Value",
          "description": "Value of the tag, such as 'production'"
        }
      }
    }
  },
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.Resources/subscriptions/resourceGroups"
        },
        {
          "field": "[concat('tags[', parameters('tagName'), ']')]",
          "exists": "false"
        }
      ]
    },
    "then": {
      "effect": "modify",
      "details": {
        "roleDefinitionIds": [
          "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
        ],
        "operations": [
          {
            "operation": "add",
            "field": "[concat('tags[', parameters('tagName'), ']')]",
            "value": "[parameters('tagValue')]"
          }
        ]
      }
    }
  }
}
This tripped me up too, so I did some exploring of the APIs and files. I've written about this in greater detail here.
To create a custom Policy, Initiative or Assignment file using GitHub Actions, you'll need to generate an id, name & type at the root of the JSON.
The name property needs to be unique at the scope you assign it; I use GUIDs for this, but you don't have to. Bear in mind that if you define/assign at the Management Group scope, the name needs to be 24 characters or less.
The type denotes the type of file; the options are:
Microsoft.Authorization/policyDefinitions --> Policies
Microsoft.Authorization/policySetDefinitions --> Initiatives
Microsoft.Authorization/policyAssignments --> Assignments
The id is a bit more complex, and is a concatenation of the name and type values with other values mixed in.
The prefix depends on the scope at which you want to define your Policy/Initiative/Assignment.
For Management Groups it would be:
/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000
Subscriptions would be:
/subscriptions/00000000-0000-0000-0000-000000000000
Resource Groups:
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
This is followed by /providers in all cases.
Next is the type value: whatever you've used for that, use it again here.
Finally the last segment of the id is the same value you've used for the name property.
In one line that is
/{scope}/providers/{type}/{name}
So as an example:
Policy Definition scoped at a Management Group
{
  "id": "/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyDefinitions/5f44e572-5d2d-4edf-9d61",
  "name": "5f44e572-5d2d-4edf-9d61",
  "type": "Microsoft.Authorization/policyDefinitions",
  "properties": {}
}
Policy Definition scoped at a Subscription
{
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyDefinitions/8e4a8c58-1938-4467-8698",
  "name": "8e4a8c58-1938-4467-8698",
  "type": "Microsoft.Authorization/policyDefinitions",
  "properties": {}
}
Initiative scoped at a Management Group
{
  "id": "/providers/Microsoft.Management/managementGroups/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policySetDefinitions/be09f23f-0252-4d8a-a805",
  "name": "be09f23f-0252-4d8a-a805",
  "type": "Microsoft.Authorization/policySetDefinitions",
  "properties": {}
}
Initiative scoped at a Subscription
{
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policySetDefinitions/8e4a8c58-1938-4467-8698",
  "name": "8e4a8c58-1938-4467-8698",
  "type": "Microsoft.Authorization/policySetDefinitions",
  "properties": {}
}
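Applying this to the tagging policy from the question, defined at the subscription scope, the file would gain the same three root-level properties (a sketch; the GUID and subscription ID below are placeholders, not values you must use):
{
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyDefinitions/11111111-1111-1111-1111-111111111111",
  "name": "11111111-1111-1111-1111-111111111111",
  "type": "Microsoft.Authorization/policyDefinitions",
  "properties": { ... }
}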

Can I index an array in a composite index in Azure Cosmos DB?

I have a problem indexing an array in Azure Cosmos DB
I am trying to save this indexing policy via the portal
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*"
    }
  ],
  "excludedPaths": [
    {
      "path": "/\"_etag\"/?"
    }
  ],
  "compositeIndexes": [
    [
      {
        "path": "/DeviceId",
        "order": "ascending"
      },
      {
        "path": "/TimeStamp",
        "order": "ascending"
      },
      {
        "path": "/Items/[]/Name/?",
        "order": "ascending"
      },
      {
        "path": "/Items/[]/DoubleValue/?",
        "order": "ascending"
      }
    ]
  ]
}
I get the error "Failed to update container DeviceEvents":
Message: {"code":"BadRequest","message":"Message: {"Errors":["The indexing path '\/Items\/[]\/Name\/?' could not be accepted, failed near position '8'."
It seems to be the array ([]) syntax that is being rejected.
On a side note, I am not sure that what I am doing makes sense at all, but I have a query that looks like this:
SELECT SUM(de0["DoubleValue"])
FROM root JOIN de0 IN root["Items"]
WHERE root["ApplicationId"] = 57 AND root["DeviceId"] = 126 AND root["TimeStamp"] >= "2021-02-21T17:55:29.7389397Z" AND de0["Name"] = "Use Case"
Here ApplicationId is the partition key, and the saved item looks like this:
{
  "id": "59ab9323-26ca-436f-8d29-e1ddd826f025",
  "DeviceId": 3,
  "ApplicationId": 3,
  "RawData": "640F7A000A00E30142000000",
  "TimeStamp": "2021-02-20T18:36:52.833174Z",
  "Items": [
    {
      "Name": "Battery Status",
      "StringValue": "Full",
      "DoubleValue": null
    },
    {
      "Name": "Use Case",
      "StringValue": null,
      "DoubleValue": 12
    },
    {
      "Name": "Battery Voltage",
      "StringValue": null,
      "DoubleValue": 3.962
    },
    {
      "Name": "Rain Gauge Count",
      "StringValue": null,
      "DoubleValue": 10
    }
  ],
  "_rid": "CgdVAO7B0DNkAAAAAAAAAA==",
  "_self": "dbs/CgdVAA==/colls/CgdVAO7B0DM=/docs/CgdVAO7B0DNkAAAAAAAAAA==/",
  "_etag": "\"61008771-0000-0d00-0000-603156c50000\"",
  "_attachments": "attachments/",
  "_ts": 1613846213
}
I need to aggregate over some of these items in the array, for example to get the MAX of a temperature reading or something similar (I am using "Use Case" for testing although it doesn't make sense). I reasoned that if all the data in the query were in a single composite index, the database would be able to do the aggregation without reading the documents themselves. However, I can't seem to add a composite index containing an array at all.
No, a composite index can't contain an array path; each composite path has to lead to a scalar value.
> Unlike with included or excluded paths, you can't create a path with
> the /* wildcard. Every composite path has an implicit /? at the end of
> the path that you don't need to specify. Composite paths lead to a
> scalar value and this is the only value that is included in the
> composite index.
Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy#composite-indexes
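In practice this means the composite index from the question can only keep its scalar paths; the array paths have to be dropped, and the aggregation over Items will fall back to the default /* range index. A version the portal should accept (a sketch based on the policy in the question):
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*"
    }
  ],
  "excludedPaths": [
    {
      "path": "/\"_etag\"/?"
    }
  ],
  "compositeIndexes": [
    [
      {
        "path": "/DeviceId",
        "order": "ascending"
      },
      {
        "path": "/TimeStamp",
        "order": "ascending"
      }
    ]
  ]
}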

Azure Cosmos DB Graph: How to run queries in Graph API when Indexing Policy is defined as Manual?

In Cosmos DB Graph, when I define the indexing policy as Automatic I am able to run queries, but when I update the indexing policy to Manual, define the indexing path (/label/?), and set the indexing mode to 'Consistent', the query does not fetch any data.
Let's say my first query (with the indexing policy set to Manual) is:
g.addV('Azure').property('name','Cerulean Software')
The result is:
[
  {
    "id": "0c14a00a-edf6-46b1-9e40-45cc37f750ea",
    "label": "Azure",
    "type": "vertex",
    "properties": {
      "name": [
        {
          "id": "f89ee2ee-74df-4256-a5d4-2b47eb526976",
          "value": "Cerulean Software"
        }
      ]
    }
  }
]
Now, my second query (with the indexing policy still set to Manual; see Edit #1 below) is:
g.V().hasLabel('Azure')
This second query does not fetch any result, even though there is a vertex labelled 'Azure' present in the graph.
What could be the possible reason behind this?
Edit #1: Manual Indexing Policy Before Change
"indexingPolicy": {
"automatic": false,
"excludedPaths": [],
"includedPaths": [
{
"path": "/*",
"indexes": [
{
"dataType": "Number",
"kind": "Range",
"precision": -1
},
{
"dataType": "String",
"kind": "Hash",
"precision": 3
}
]
},
{
"path": "/label/?",
"indexes": [
{
"dataType": "String",
"kind": "Hash",
"precision": 3
},
{
"dataType": "Number",
"kind": "Range",
"precision": -1
}
]
}
],
"indexingMode": "consistent"
},
Edit #2: Manual Indexing Policy After Change
"indexingPolicy": {
"automatic": false,
"excludedPaths": [],
"includedPaths": [
{
"path": "/*",
"indexes": [
{
"dataType": "Number",
"kind": "Range",
"precision": -1
},
{
"dataType": "String",
"kind": "Hash",
"precision": 3
}
]
},
{
"path": "/_isEdge/?",
"indexes": [
{
"dataType": "String",
"kind": "Hash",
"precision": 3
},
{
"dataType": "Number",
"kind": "Range",
"precision": -1
}
]
}
],
"indexingMode": "consistent"
},
With Cosmos, graph statements are not executed as traversals on the Azure side. The graph client actually translates Gremlin statements into Document SQL calls and then aggregates the results back to you on the client side. In the case of your statement g.V().hasLabel('Azure'), the call is actually translated to {"query":"SELECT N_2 FROM Node N_2 WHERE (IS_DEFINED(N_2._isEdge) = false AND (N_2.label = 'Azure'))"}
This can be verified through the use of a proxy such as Fiddler which will allow you to inspect the outbound calls from your machine.
The top level _isEdge property seems to be used across almost all Gremlin translated queries so I suspect that if you add that property to your indexing policy you should start to see the expected results.
EDIT:
I originally missed the part of your indexing policy that sets automatic: false. According to the Cosmos docs (under the heading "Opting in and opting out of indexing"):
> By default, all documents are automatically indexed, but you can
> choose to turn it off. When indexing is turned off, documents can be
> accessed only through their self-links or by queries using ID.
If you choose to run with indexing turned off, then the rest of your indexing policy is effectively meaningless, and queries that don't go directly by document ID will no longer work. Can you elaborate on what you're actually trying to accomplish here? There seems to be a bit of confusion. The indexing settings you've placed on /label and /_isEdge aren't even necessary, because they are the same as the settings you've put on /*, which is the default rule matching all paths.
Post what your goals are for your indexing strategy and I can try to make an appropriate recommendation, but you're definitely going to want to put automatic: true back into your policy.
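For reference, a policy with automatic indexing turned back on could look like this (a sketch in the same syntax as the policies above; with /* included, the extra /label/? and /_isEdge/? paths are unnecessary):
"indexingPolicy": {
  "automatic": true,
  "excludedPaths": [],
  "includedPaths": [
    {
      "path": "/*",
      "indexes": [
        {
          "dataType": "Number",
          "kind": "Range",
          "precision": -1
        },
        {
          "dataType": "String",
          "kind": "Hash",
          "precision": 3
        }
      ]
    }
  ],
  "indexingMode": "consistent"
}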

How to search through data with arbitrary amount of fields?

I have a web-form builder for science events. The event moderator creates a registration form with an arbitrary number of boolean, integer, enum, and text fields.
The created form is used to:
register a new member for the event;
search through registered members.
What is the best search tool for the second task (searching the members of an event)? Is Elasticsearch a good fit for this?
I wrote a post about how to index arbitrary data into Elasticsearch and then search it by specific fields and values - all without blowing up your index mapping.
The post is here: http://smnh.me/indexing-and-searching-arbitrary-json-data-using-elasticsearch/
In short, you will need to do the following steps to get what you want:
Create a special index described in the post (a minimal sketch of such a mapping is shown after these steps).
Flatten the data you want to index using the flattenData function:
https://gist.github.com/smnh/30f96028511e1440b7b02ea559858af4.
Create a document with the original and flattened data and index it into Elasticsearch:
{
  "data": { ... },
  "flatData": [ ... ]
}
Optional: use Elasticsearch aggregations to find which fields and types have been indexed.
Execute queries on the flatData object to find what you need.
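For step 1, a minimal sketch of what such an index mapping could look like (the exact mapping in the linked post may differ; the particular set of value_* fields here is an assumption based on the flattened documents shown below):
{
  "mappings": {
    "properties": {
      "data": {
        "type": "object",
        "enabled": false
      },
      "flatData": {
        "type": "nested",
        "properties": {
          "key": { "type": "keyword" },
          "type": { "type": "keyword" },
          "key_type": { "type": "keyword" },
          "value_string": {
            "type": "text",
            "fields": {
              "keyword": { "type": "keyword" }
            }
          },
          "value_long": { "type": "long" },
          "value_double": { "type": "double" },
          "value_boolean": { "type": "boolean" }
        }
      }
    }
  }
}
The nested type is what allows a key to be queried together with its own value within the same array element, as in the query further below.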
Example
Based on your original question, let's assume that the first event moderator created a form with the following fields to register members for the science event:
name string
age long
sex long - 0 for male, 1 for female
In addition to this data, the related event probably has some sort of id, let's call it eventId. So the final document could look like this:
{
  "eventId": "2T73ZT1R463DJNWE36IA8FEN",
  "name": "Bob",
  "age": 22,
  "sex": 0
}
Now, before we index this document, we will flatten it using the flattenData function:
flattenData(document);
This will produce the following array:
[
  {
    "key": "eventId",
    "type": "string",
    "key_type": "eventId.string",
    "value_string": "2T73ZT1R463DJNWE36IA8FEN"
  },
  {
    "key": "name",
    "type": "string",
    "key_type": "name.string",
    "value_string": "Bob"
  },
  {
    "key": "age",
    "type": "long",
    "key_type": "age.long",
    "value_long": 22
  },
  {
    "key": "sex",
    "type": "long",
    "key_type": "sex.long",
    "value_long": 0
  }
]
Then we will wrap this data in a document, as shown before, and index it.
Then the second event moderator creates another form that has a new field, a field with the same name and type, and also a field with the same name but a different type:
name string
city string
sex string - "male" or "female"
This event moderator decided that instead of having 0 and 1 for male and female, his form would allow choosing between two strings, "male" and "female".
Let's try to flatten the data submitted by this form:
flattenData({
  "eventId": "F1BU9GGK5IX3ZWOLGCE3I5ML",
  "name": "Alice",
  "city": "New York",
  "sex": "female"
});
This will produce the following data:
[
  {
    "key": "eventId",
    "type": "string",
    "key_type": "eventId.string",
    "value_string": "F1BU9GGK5IX3ZWOLGCE3I5ML"
  },
  {
    "key": "name",
    "type": "string",
    "key_type": "name.string",
    "value_string": "Alice"
  },
  {
    "key": "city",
    "type": "string",
    "key_type": "city.string",
    "value_string": "New York"
  },
  {
    "key": "sex",
    "type": "string",
    "key_type": "sex.string",
    "value_string": "female"
  }
]
Then, after wrapping the flattened data in a document and indexing it into Elasticsearch, we can execute complicated queries.
For example, to find members named "Bob" registered for the event with ID 2T73ZT1R463DJNWE36IA8FEN we can execute the following query:
{
  "query": {
    "bool": {
      "must": [
        {
          "nested": {
            "path": "flatData",
            "query": {
              "bool": {
                "must": [
                  { "term": { "flatData.key": "eventId" } },
                  { "match": { "flatData.value_string.keyword": "2T73ZT1R463DJNWE36IA8FEN" } }
                ]
              }
            }
          }
        },
        {
          "nested": {
            "path": "flatData",
            "query": {
              "bool": {
                "must": [
                  { "term": { "flatData.key": "name" } },
                  { "match": { "flatData.value_string": "bob" } }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
Elasticsearch automatically detects the field content in order to index it correctly, even if the mapping hasn't been defined previously. So yes, Elasticsearch suits these cases well.
However, you may want to fine-tune this behavior, or the default mapping applied by Elasticsearch may not correspond to what you need: in that case, take a look at the default mapping or, for even further control, at the dynamic templates feature.
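For example, a single dynamic template can force every newly detected string field to be indexed as a keyword instead of analyzed text (a minimal sketch, not tied to the form-builder schema):
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}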
If you let your end users decide the keys you store things in, you'll have an ever-growing mapping and cluster state, which is problematic.
This case and a suggested solution is covered in this article on common problems with Elasticsearch.
Essentially, you want to have everything that can possibly be user-defined as a value. Using nested documents, you can have a key-field and differently mapped value fields to achieve pretty much the same.

How do you partially-match IDs in CouchDB?

I have a set of ACLs in Couch and I want to create a view that matches applicable ones. So, given the data:
[
  {
    "_id": "/protected",
    "type": "valid-user"
  },
  {
    "_id": "/protected/group1",
    "type": "require group group1"
  },
  {
    "_id": "/protected/group1/public",
    "type": "public"
  },
  {
    "_id": "/protected/group2",
    "type": "require group group2"
  },
  {
    "_id": "/admin",
    "type": "require user admin"
  }
]
I'd like to create a view that'd allow me to pass in a string and have it find the "best" (that is to say the longest) match.
The best I've been able to do is to create a view that returns the ID split into an array, and then spam queries, trimming the last element off each time until I get a match. Surely there's a way to do this on the server side...
You could create a list function to accomplish that.
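A rough sketch of such a design document (untested; the view emits each ACL by its _id, and the list function scans the rows for the longest _id that is a prefix of the path passed as a ?path= query parameter):
{
  "_id": "_design/acl",
  "views": {
    "by_path": {
      "map": "function (doc) { emit(doc._id, doc.type); }"
    }
  },
  "lists": {
    "best_match": "function (head, req) { var best = null, row; while ((row = getRow())) { if (req.query.path.indexOf(row.key) === 0 && (!best || row.key.length > best.key.length)) { best = row; } } send(JSON.stringify(best)); }"
  }
}
A request like GET /db/_design/acl/_list/best_match/by_path?path=/protected/group1/private would then return the /protected/group1 row. Note that this scans the whole view on every request; startkey/endkey parameters can be used to narrow the range first.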
