Is there a way to set different indexes for different contracts in CouchDB with Hyperledger Fabric?

I'm using Hyperledger Fabric 1.4 with CouchDB 2.3.1 and two contracts, but I'm having trouble setting up the indexes for the contracts and I don't understand how to upload the indexes to CouchDB.
These are my indexes:
META-INF/statedb/couchdb/indexes/carIndex.json
{
  "index": {
    "fields": [
      "idCar",
      "date"
    ]
  },
  "ddoc": "indexIdCarDoc",
  "name": "indexIdCar",
  "type": "json"
}
META-INF/statedb/couchdb/indexes/bikeIndex.json
{
  "index": {
    "fields": [
      "idBike",
      "date"
    ]
  },
  "ddoc": "indexIdBikeDoc",
  "name": "indexIdBike",
  "type": "json"
}
How can I tell Hyperledger to apply the first index to the mychannel_carchaincode state database and the second index to the mychannel_bikechaincode state database?
Also, my chaincode is written in TypeScript. Should my META-INF/statedb/couchdb/indexes folder be in the dist folder? Is that why I can't see the indexes in CouchDB after I upgrade, or can indexes only be uploaded on instantiate?
Thanks

I had the same problem. My Hyperledger version is 1.4.7 and I'm using the IBM Blockchain extension for VS Code.
I solved it by putting the folder inside the lib folder of my project.
lib/META-INF/statedb/couchdb/indexes/index.json
After upgrading the smart contract, to make sure it worked, go to the terminal and run this docker command:
docker logs 39f4adec6057 2>&1 | grep "CouchDB index"
where 39f4adec6057 is the peer container
If it worked, this shows something like:
[couchdb] CreateIndex -> INFO 0fc Created CouchDB index [tipoAtivo] in state database [mychannel_integra-chaincode] using design document [_design/tipoAtivoDoc]
If you are using TypeScript, the project gets compiled, so make sure this folder is copied to the dist folder. For that you can add a postbuild script to your package.json:
"postbuild": "cp -av ./META-INF dist/lib/META-INF",

When you issue a chaincode rich query, use the "use_index" parameter in the query to tell CouchDB which index it should use for that command.
Link and example:
https://hyperledger-fabric.readthedocs.io/en/release-2.2/couchdb_tutorial.html#use-best-practices-for-queries-and-indexes
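As an example (the selector value is just an illustration), a rich query that uses the car index above would look like this, passed as the query string to ctx.stub.getQueryResult() in the TypeScript contract:
{
  "selector": {
    "idCar": "CAR001"
  },
  "use_index": ["_design/indexIdCarDoc", "indexIdCar"]
}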

Related

How to perform an In-Place-Update in Solr?

I am currently trying to perform an in-place update in Solr 8.11.1 (https://solr.apache.org/guide/8_11/updating-parts-of-documents.html#in-place-updates). However, the update does not seem to be successful, even though the field to be updated fulfills all the listed criteria, as does the version field, and there are no copy fields.
I recreated the problem with a minimal schema in a local Solr Docker container and still can't make it work.
started a new Solr container: docker run -d -p 8983:8983 --name my_solr -t solr:8.11.1-slim (this is the version used in our project)
created core: docker exec -it my_solr solr create_core -c gettingstarted
created a non-indexed, non-stored, single-valued, numeric docValued field popularity
this leads to a minimal schema with the following fields: http://localhost:8983/solr/gettingstarted/schema/fields
{
  "responseHeader":{
    "status":0,
    "QTime":0},
  "fields":[{
      "name":"_nest_path_",
      "type":"_nest_path_"},
    {
      "name":"_root_",
      "type":"string",
      "docValues":false,
      "indexed":true,
      "stored":false},
    {
      "name":"_text_",
      "type":"text_general",
      "multiValued":true,
      "indexed":true,
      "stored":false},
    {
      "name":"_version_",
      "type":"plong",
      "indexed":false,
      "stored":false},
    {
      "name":"id",
      "type":"string",
      "multiValued":false,
      "indexed":true,
      "required":true,
      "stored":true},
    {
      "name":"popularity",
      "type":"pint",
      "uninvertible":false,
      "docValues":true,
      "indexed":false,
      "stored":false}]}
I created a document with ID "1"
I perform the in-place update and enforce it as suggested in the documentation:
curl 'http://localhost:8983/solr/gettingstarted/update?commit=true&update.partial.requireInPlace=true' --data-binary '[{"id":"1", "popularity":{"set":99}}]'
with a successful response so I would expect that the in-place update was successful:
{"responseHeader":{"status":0,"QTime":1}}
However, the update was not applied, as I can't retrieve the popularity value via the field list (https://solr.apache.org/guide/6_6/docvalues.html#DocValues-RetrievingDocValuesDuringSearch):
{
  "responseHeader":{
    "status":0,
    "QTime":2,
    "params":{
      "q":"*:*",
      "indent":"true",
      "fl":"id,popularity",
      "q.op":"OR",
      "_":"1662731041895"}},
  "response":{"numFound":1,"start":0,"numFoundExact":true,"docs":[
      {
        "id":"1"}]
  }}
Can anyone explain this behavior? I would expect this in-place update to just work.
Best regards,
Jonas

Azure Function Binding Types - The binding type(s) 'webPubSubTrigger' were not found in the configured extension bundle

I'm trying to trigger an Azure Function when a Web PubSub message is published.
According to the example in this article, I should be able to use the following to trigger a function when a new message is sent to a specific hub...
{
  "disabled": false,
  "bindings": [
    {
      "type": "webPubSubTrigger",
      "direction": "in",
      "name": "data",
      "hub": "ABC",
      "eventName": "message",
      "eventType": "user"
    }
  ],
  "scriptFile": "../dist/WebPubSubTrigger/index.js"
}
However, I keep getting this error whenever I initialise the function app...
The 'WebPubSubTrigger' function is in error: The binding type(s) 'webPubSubTrigger' were not found in the configured extension bundle. Please ensure the type is correct and the correct version of extension bundle is configured.
Here's my extensionBundle config in host.json...
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
But this article does not have it listed as a supported binding type so I'm a little puzzled to say the least!
Can anyone point me in the right direction please?
I'm running my Functions in a Node.js environment, but I don't think that should make a difference.
I have also already tried installing manually, as per below, but it tells me it's already installed 🤷🏽‍♂️
Me | Tue 28 # 15:49 ~/Development $ func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub --version 1.0.0
No action performed. Extension bundle is configured in /Users/me/Development/host.json
According to the extension bundle versions available, setting the version to [3.3.0, 4.0.0) should do the trick. Do note that this will update other extensions as well, so it would be best to test that other functions are not breaking due to this change.
Another option would be to just install this extension explicitly with this command as mentioned in the Web PubSub docs.
func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub --version 1.0.0
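For completeness, the function body itself is just a plain handler; a minimal TypeScript sketch (the binding name "data" comes from the function.json above, everything else here is an assumption):
import { AzureFunction, Context } from "@azure/functions";

// Invoked whenever a "message" user event arrives on the configured hub.
const webPubSubTrigger: AzureFunction = async function (context: Context, data: any): Promise<void> {
    context.log("Web PubSub message received:", data);
};

export default webPubSubTrigger;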

Nagios Core Influxdb not showing nagios data

I have followed this guide:
https://support.nagios.com/kb/article/nagios-core-performance-graphs-using-influxdb-nagflux-grafana-histou-802.html#Nagflux_Config
I already have PNP4Nagios running on the server (Debian 9), but I can't get any further and have been busy for weeks trying to get this fixed.
I am stuck at this point:
Verify Nagflux Is Working
Execute the following query to verify that InfluxDB is being populated with Nagios performance data:
curl -G "http://localhost:8086/query?db=nagflux&pretty=true" --data-urlencode "q=show series"
When I execute that command I get this:
{
    "results": [
        {}
    ]
}
I have already done this on another distro (CentOS 8) and still got no results.
But when I execute this command (earlier in the documentation)
curl -G "http://localhost:8086/query?pretty=true" --data-urlencode "q=show databases"
This works:
{
    "results": [
        {
            "series": [
                {
                    "name": "databases",
                    "columns": [
                        "name"
                    ],
                    "values": [
                        [
                            "_internal"
                        ],
                        [
                            "nagflux"
                        ]
                    ]
                }
            ]
        }
    ]
}
I can add the InfluxDB datasource successfully in Grafana, but I cannot select any data when I try to pick it from the "FROM" field.
It's only showing:
Default
Autogen
So I am very curious what I am doing wrong; normally the documentation from Nagios support works very well.
Thank you big time for reading my issue :).
As you already have PNP4Nagios installed, https://support.nagios.com/kb/article/nagios-core-using-grafana-with-pnp4nagios-803.html would be a more appropriate solution for you.
/usr/local/nagios/etc/nagios.cfg needs a different host_perfdata_file_processing_command when you feed InfluxDB (via Nagflux) instead of using Grafana with PNP4Nagios.
You don't need another server; I have Nagios Core, InfluxDB, Nagflux, Histou and Grafana working on the same machine.
And you don't have to uninstall PNP4Nagios, just stop the service and disable it on boot: systemctl stop npcd.service && systemctl disable npcd.service.
After that you have to edit nagios.cfg according to https://support.nagios.com/kb/article/nagios-core-performance-graphs-using-influxdb-nagflux-grafana-histou-802.html#Nagios_Command_Config, changing the host_perfdata_file_processing_command value and the format of the *_perfdata_file_template settings.
Then define process-host-perfdata-file-nagflux & process-service-perfdata-file-nagflux commands in commands.cfg.
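The two command definitions in commands.cfg follow the usual perfdata-mover pattern; roughly like this (the target directory must match the NagiosSpoolfileFolder configured for Nagflux, so treat these paths as placeholders rather than exact values):
define command {
    command_name    process-host-perfdata-file-nagflux
    command_line    /bin/mv /usr/local/nagios/var/host-perfdata /usr/local/nagios/var/nagflux/$TIMET$.perfdata.host
}

define command {
    command_name    process-service-perfdata-file-nagflux
    command_line    /bin/mv /usr/local/nagios/var/service-perfdata /usr/local/nagios/var/nagflux/$TIMET$.perfdata.service
}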
If you did everything as described above, after a minute you should see changes in your nagflux database.
Install influxdb-client, then:
influx
use nagflux
SELECT * FROM METRICS
You should see your database loading :)

Azure Function blobTrigger not registered

As the title says, when I try to run my Node.js-based Azure Function I get the following error:
The following 1 functions are in error: [7/2/19 1:41:17 AM] ***: The binding type(s) 'blobTrigger' are not registered. Please ensure the type is correct and the binding extension is installed.
I tried func extensions install --force with no luck, any idea? My development environment is macOS and I tried both the npm-based azure-functions-core-tools install and the brew-based one; neither works.
The scariest part is that this used to work fine on the same machine; all of a sudden it just stopped working.
Basically, you can refer to the official tutorial for Linux, Create your first function hosted on Linux using Core Tools and the Azure CLI (preview), to start up your work.
Since macOS and Linux use the same bash shell, I will walk through my sample demo on Linux and avoid any incompatible operations. First of all, it is assumed that there is a usable Node.js runtime in your environment; my node and npm versions are v10.16.0 and 6.9.0.
First, install azure-functions-core-tools via npm and check that it works.
Next, initialize a project MyFunctionProj via func init.
Then create a new function with a blob trigger via func new; the commands are sketched below.
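The commands for those steps look roughly like this (template and option names are written from memory, so double-check them against func --help):
npm install -g azure-functions-core-tools
func --version
func init MyFunctionProj --worker-runtime node
cd MyFunctionProj
func new --template "Azure Blob Storage trigger" --name MyBlobTrigger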
func new complained about a missing .NET Core SDK, so I went to https://www.microsoft.com/net/download to install it. The page I used was for Linux rather than macOS, but you can easily adapt that yourself; I just followed the official installation instructions.
After installing the .NET Core SDK, run func new again.
This time it completes.
Then change two configuration files, MyFunctionProj/local.settings.json and MyFunctionProj/MyBlobTrigger/function.json, as below.
MyFunctionProj/local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsStorage": "<your real storage connection string, like `DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net`>"
  }
}
MyFunctionProj/MyBlobTrigger/function.json
{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "<the container name you want to monitor>/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
Then run func host start --build and it starts up without any errors.
Let's upload a test file named test.txt via Azure Storage Explorer to the container <the container name you want to monitor> that is configured in the function.json file. You will see that MyBlobTrigger is triggered and works fine.
Hope it helps.

How to add functions from custom JARs to EMR cluster?

I created an EMR cluster on AWS with Spark and Livy. I submitted a custom JAR with some additional libraries (e.g. datasources for custom formats) as a custom JAR step. However, the stuff from the custom JAR is not available when I try to access it from Livy.
What do I have to do to make the custom stuff available in the environment?
I am posting this as an answer to be able to accept it - I figured it out thanks to Yuval Itzchakov's comments and the AWS documentation on Custom Bootstrap Actions.
So here is what I did:
I put my library jar (a fat jar created with sbt assembly containing everything needed) into an S3 bucket
Created a script named copylib.sh which contains the following:
#!/bin/bash
mkdir -p /home/hadoop/mylib
aws s3 cp s3://mybucket/mylib.jar /home/hadoop/mylib
Created the following configuration JSON and put it into the same bucket, alongside mylib.jar and copylib.sh:
[{
    "configurations": [{
        "classification": "export",
        "properties": {
            "PYSPARK_PYTHON": "/usr/bin/python3"
        }
    }],
    "classification": "spark-env",
    "properties": {}
}, {
    "configurations": [{
        "classification": "export",
        "properties": {
            "PYSPARK_PYTHON": "/usr/bin/python3"
        }
    }],
    "classification": "yarn-env",
    "properties": {}
}, {
    "Classification": "spark-defaults",
    "Properties": {
        "spark.executor.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/mylib/mylib.jar",
        "spark.driver.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/mylib/mylib.jar"
    }
}]
The classifications for spark-env and yarn-env are needed for PySpark to work with Python 3 on EMR through Livy. And there is another catch: EMR already populates the two extraClassPath settings with a lot of libraries that are needed for EMR to function properly, so I had to run a cluster without my lib first, extract these settings from spark-defaults.conf, and adjust my classification afterwards. Otherwise, things like S3 access wouldn't work.
When creating the cluster, in Step 1 I referenced the configuration JSON file from above in Edit software settings, and in Step 3, I configured copylib.sh as a Custom Bootstrap Action.
I can now open the Jupyterhub of the cluster, start a notebook and work with my added functions.
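For anyone scripting this instead of clicking through the console, the equivalent CLI call looks roughly like this (cluster name, release label, instance settings and bucket paths are placeholders, not the values from my setup):
aws emr create-cluster \
  --name "spark-with-mylib" \
  --release-label emr-5.30.0 \
  --applications Name=Spark Name=Livy \
  --configurations https://s3.amazonaws.com/mybucket/myconfig.json \
  --bootstrap-actions Path=s3://mybucket/copylib.sh \
  --instance-type m5.xlarge --instance-count 3 \
  --use-default-roles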
I use an alternative way that does not use a bootstrap action.
Place the JARs in S3
Pass them in the --jars option of spark-submit, e.g. spark-submit --jars s3://my-bucket/extra-jars/*.jar. All the jars will be copied to the cluster.
This way we can use any jar from S3 even if we forgot to add a bootstrap action during cluster creation.
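If you are going through JupyterHub with sparkmagic rather than spark-submit directly, a session can pull in extra jars in a similar way; a sketch, assuming the %%configure magic is available in your notebook kernel and that the bucket path is a placeholder:
%%configure -f
{ "jars": ["s3://my-bucket/extra-jars/mylib.jar"] }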
