Query filter in composer rest server - hyperledger-fabric

I'm having problems with queries in composer rest server.
I'm building a filter like this:
{
  "where": {
    "and": [
      { "origin": "web" },
      { "affiliate": "resource:org.acme.affiliates.Affiliate#2" },
      { "createdAt": { "gte": "2018-01-01" } },
      { "createdAt": { "lte": "2018-06-07" } }
    ]
  }
}
request:
curl -X GET --header 'Accept: application/json' 'http://localhost:3000/api/User?filter=%7B%22where%22%3A%7B%22and%22%3A%5B%7B%22origin%22%3A%22web%22%7D%2C%7B%22affiliate%22%3A%22resource%3Aorg.acme.affiliates.Affiliate%232%22%7D%2C%7B%22createdAt%22%3A%7B%22gte%22%3A%222018-01-01%22%2C%22lte%22%3A%222018-06-07%22%7D%7D%5D%7D%7D'
response:
[
  {
    "$class": "org.acme.affiliates.User",
    "affiliate": "resource:org.acme.affiliates.Affiliate#2",
    "userId": "14",
    "email": "diego@duncan.com",
    "firstName": "diego",
    "lastName": "duncan",
    "createdAt": "2018-04-20T20:48:08.151Z",
    "origin": "web"
  },
  {
    "$class": "org.acme.affiliates.User",
    "affiliate": "resource:org.acme.affiliates.Affiliate#1",
    "userId": "15",
    "email": "diego@algo.com",
    "firstName": "diego",
    "lastName": "algo",
    "createdAt": "2018-04-20T20:53:40.720Z",
    "origin": "web"
  }
]
As you can see, the filter is not working, because a user from Affiliate#1 appears.
I tested without the createdAt clauses and it worked perfectly; then I tested without the affiliate clause and it worked too. I also tested createdAt with the range operator instead of gte and lte, with the same wrong result.
hlfv1
composer rest server v0.16.6

It's a LoopBack filter issue, most likely to do with the date-range comparison (the other comparisons are fine, as you wrote).
The suggestion here -> https://github.com/strongloop/loopback-connector-mongodb/issues/176 indicates that you need to use the between operator instead for DateTimes, e.g.
{"where":{"createdAt":{"between": ["2018-01-05 10:00", "2018-05-10 10:00"]}}}
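If you're stuck on the older stack, here is a small sketch of how such a between filter could be serialized and URL-encoded before being appended to the REST endpoint. The endpoint URL and field names are the ones from the question; this only builds the URL, it doesn't call the server:

```javascript
// Build a LoopBack-style filter that uses "between" for the date range,
// then URL-encode it for use as the ?filter= query parameter.
const filter = {
  where: {
    and: [
      { origin: "web" },
      { affiliate: "resource:org.acme.affiliates.Affiliate#2" },
      { createdAt: { between: ["2018-01-01", "2018-06-07"] } }
    ]
  }
};

// encodeURIComponent produces the percent-encoded form seen in the
// curl request above (%7B for "{", %22 for quotes, and so on).
const encoded = encodeURIComponent(JSON.stringify(filter));
const url = "http://localhost:3000/api/User?filter=" + encoded;
console.log(url);
```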

I'll answer my own question:
I was using hlfv1 and Composer 0.16.6.
After updating to hlfv11 and Composer 0.19.8, the bug is fixed.

Related

POST relation in strapi

I have a Strapi project in which a single user has a relation to a single profile. For example, if we use Postman to get a single user, the response is:
[
  {
    "id": 45,
    "username": "test",
    "email": "test@gmail.com",
    "provider": "local",
    "confirmed": true,
    "blocked": false,
    "createdAt": "2022-07-18T08:50:43.642Z",
    "updatedAt": "2022-07-18T08:50:43.642Z",
    "profile": null
  }
]
As you can see, the user has a profile field, but it has a null value. So I tried to POST a profile to the Strapi API at http://localhost:1337/api/profiles?populate=* with a JSON body like this:
{
  "data": {
    "name": "fgh",
    "address": "jksdjkdjs",
    "phone": "345345",
    "email": "test@gmail.com" // I tried to add this, but the response is 500 Internal Server Error
  }
}
It worked with status 200 OK, but it didn't pick up the relation to the user through the email:
{
  "data": {
    "id": 12,
    "attributes": {
      "name": "fghdff",
      "address": "jksdjkdjs",
      "phone": 34534235,
      "createdAt": "2022-08-05T13:48:29.548Z",
      "updatedAt": "2022-08-05T13:48:29.548Z",
      "publishedAt": "2022-08-05T13:48:29.547Z",
      "email": {
        "data": null
      }
    }
  },
  "meta": {}
}
Any idea how to POST a relation in Strapi? I tried reading the docs and the Strapi forum, but no one seems to have run into this problem.
For anyone wondering how to POST a relation in Strapi: use the ID of the user. For example, to post and connect the relation in my question:
{
  "data": {
    "name": "dayee",
    "address": "jksdjkdjs",
    "phone": "34534235",
    "email": 77 // Post the relation using the ID
  }
}
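To illustrate, here is a minimal sketch of how that request body could be built programmatically before sending it. The field values and the ID 77 are just the examples from above, and buildProfilePayload is a hypothetical helper, not a Strapi API:

```javascript
// Build the request body for POST /api/profiles, linking the "email"
// relation by the related user's numeric ID (77 is the example ID
// from the answer above).
function buildProfilePayload(userId) {
  return {
    data: {
      name: "dayee",
      address: "jksdjkdjs",
      phone: "34534235",
      email: userId // relation set by ID, as in the answer
    }
  };
}

const body = JSON.stringify(buildProfilePayload(77));
console.log(body);
```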

CouchDB indexes to connect the dots between documents

I have the following documents:
{ _id: "123", type: "project", worksite_id: "worksite_1" }
{ _id: "456", type: "document", project_id: "123" }
{ _id: "789", type: "signature", document_id: "456" }
My goal is to run a query and eventually do a filtered replication of all documents that have a connection with worksite_id: worksite_1.
Example:
Because this project has the worksite I am looking for
document has that project
signature has that document
I should be able to retrieve all of these documents if I want everything from that worksite.
Normally I would just add a worksite_id to my type:document and type:signature. However, worksites can change in a project for various reasons.
I was wondering if there is a way to create an index or do something I am not thinking about to show these resemblances.
This feels like it is on the right path, but the explanation puts documents inside other documents, whereas I want them to remain separate.
A map function only considers one document at a time, so unless that document knows about other documents, you can't link them together. Your structure implies a three-table join in SQL terms.
With your structure, the best you can hope for is a two-request solution. You can create a view that shows signed documents only:
function (doc) {
  if (doc && doc.type && doc.type === "signature" && doc.document_id) {
    emit(doc.document_id, {_id: doc.document_id})
  }
}
and using the same technique, link projects to documents -- but you can't get all three linked.
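To see what that view would return, you can simulate CouchDB's map phase locally with a stub emit over the three sample documents from the question. This is a plain Node sketch, not a CouchDB API:

```javascript
// The three sample documents from the question.
const docs = [
  { _id: "123", type: "project", worksite_id: "worksite_1" },
  { _id: "456", type: "document", project_id: "123" },
  { _id: "789", type: "signature", document_id: "456" }
];

// Stub emit() that collects rows, the way a CouchDB view index would.
const rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// The map function from the answer above.
function map(doc) {
  if (doc && doc.type && doc.type === "signature" && doc.document_id) {
    emit(doc.document_id, { _id: doc.document_id });
  }
}

docs.forEach(map);
console.log(rows);
// With ?include_docs=true, CouchDB would resolve each {_id: ...} value
// to the linked document ("456" here), which is the linking trick.
```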
I think I have what you are looking for.
Here's some data:
{
  "docs": [
    { "_id": "123", "type": "project", "code": "p001" },
    {
      "_id": "1234",
      "type": "worksitelog",
      "documents": [
        { "timestamp": "20180921091501", "project_id": "123", "document_id": "457", "signature_id": "789" },
        { "timestamp": "20180921091502", "project_id": "123", "document_id": "457", "signature_id": "791" },
        { "timestamp": "20180921091502", "project_id": "123", "document_id": "458", "signature_id": "791" },
        { "timestamp": "20180921091502", "project_id": "123", "document_id": "456", "signature_id": "790" }
      ],
      "worksite_id": "worksite_2"
    },
    {
      "_id": "1235",
      "type": "worksitelog",
      "documents": [
        { "timestamp": "20180913101502", "project_id": "125", "document_id": "459", "signature_id": "790" }
      ],
      "worksite_id": "worksite_1"
    },
    { "_id": "124", "type": "project", "code": "p002" },
    { "_id": "125", "type": "project", "code": "p003" },
    { "_id": "456", "type": "document", "code": "d001", "project_id": "123", "worksite_id": "worksite_2" },
    { "_id": "457", "type": "document", "code": "d002", "project_id": "123", "worksite_id": "worksite_2" },
    { "_id": "458", "type": "document", "code": "d003", "project_id": "123", "worksite_id": "worksite_2" },
    { "_id": "459", "type": "document", "code": "d001", "project_id": "125", "worksite_id": "worksite_1" },
    { "_id": "789", "type": "signature", "user": "alice", "pubkey": "65ab64c64ed64ef41a1bvc7d1b", "code": "s001" },
    { "_id": "790", "type": "signature", "user": "carol", "pubkey": "tlmg90834kmn90845kjndf98734", "code": "s002" },
    { "_id": "791", "type": "signature", "user": "bob", "pubkey": "asdf654asdf6854awer654awer654eqr654wra6354f", "code": "s003" },
    {
      "_id": "_design/projDocs",
      "views": {
        "docsPerWorkSite": {
          "map": "function (doc) {\n if (doc.type && ['worksitelog', 'document', 'project', 'signature'].indexOf(doc.type) > -1) {\n if (doc.type == 'worksitelog') {\n emit([doc.worksite_id, 0], null);\n for (var i in doc.documents) {\n emit([doc.worksite_id, Number(i)+1, 'p'], {_id: doc.documents[i].project_id});\n emit([doc.worksite_id, Number(i)+1, 'd'], {_id: doc.documents[i].document_id});\n emit([doc.worksite_id, Number(i)+1, 's'], {_id: doc.documents[i].signature_id});\n }\n }\n }\n}"
        }
      },
      "language": "javascript"
    }
  ]
}
Save that data to disk as stackoverflow_53752001.json.
Use Fauxton to create a database called stackoverflow_53752001.
Here's a bash script to load the data from the file stackoverflow_53752001.json into the database stackoverflow_53752001. You'll need to edit the first three parameters, obviously. Fix it, then paste it into a (Unix) terminal window:
USRID="you";
USRPWD="yourpwd";
HOST="yourdb.yourpublic.work";
COUCH_DATABASE="stackoverflow_53752001";
FILE="stackoverflow_53752001.json";
#
COUCH_URL="https://${USRID}:${USRPWD}@${HOST}";
FULL_URL="${COUCH_URL}/${COUCH_DATABASE}";
curl -H 'Content-type: application/json' -X POST "${FULL_URL}/_bulk_docs" -d @${FILE};
In Fauxton, select database stackoverflow_53752001 and then, in the left-hand menu select "Design Documents" >> "projDocs" >> "Views" >> "docsPerWorkSite".
You'll see data like this:
{"total_rows":17,"offset":0,"rows":[
{"id":"1235","key":["worksite_1",0],"value":null},
{"id":"1235","key":["worksite_1",1,"d"],"value":{"_id":"459"}},
: :
: :
{"id":"1234","key":["worksite_2",4,"p"],"value":{"_id":"123"}},
{"id":"1234","key":["worksite_2",4,"s"],"value":{"_id":"790"}}
]}
If you then click on the "Options" button, in the top right, you'll get an option sheet for modifying the raw query. Pick:
"Include Docs"
"Between Keys"
"Start key" : ["worksite_1", 0]
"End key" : ["worksite_1", 9999]
Hit "Run Query", and you should see:
{"total_rows":17,"offset":0,"rows":[
{"id":"1235","key":["worksite_1",0],"value":null,"doc":{"_id":"1235","_rev":"1-de2b919591c70f643ce1005c18da1c54","type":"worksitelog","documents":[{"timestamp":"20180913101502","project_id":"125","document_id":"459","signature_id":"790"}],"worksite_id":"worksite_1"}},
{"id":"1235","key":["worksite_1",1,"d"],"value":{"_id":"459"},"doc":{"_id":"459","_rev":"1-5422628e475bab0c14e5722a1340f561","type":"document","code":"d001","project_id":"125","worksite_id":"worksite_1"}},
{"id":"1235","key":["worksite_1",1,"p"],"value":{"_id":"125"},"doc":{"_id":"125","_rev":"1-312dd8a9dd432168d8608b7cd9eb92cd","type":"project","code":"p003"}},
{"id":"1235","key":["worksite_1",1,"s"],"value":{"_id":"790"},"doc":{"_id":"790","_rev":"1-be018df4ecdf2e6add68a2758b9bd12a","type":"signature","user":"carol","pubkey":"tlmg90834kmn90845kjndf98734","code":"s002"}}
]}
If you then change the start and end keys to ["worksite_2", 0] and ["worksite_2", 9999] you will see the data for the second work site.
For this to work, each time you have written a new document and signature to the database, you'll need to:
prepare an object:
{
  "timestamp": "20180921091502",
  "project_id": "123",
  "document_id": "457",
  "signature_id": "791"
}
get the corresponding work site log record
append the object to the documents array
put back the altered work site log record
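The get/append/put cycle above could be sketched like this. It is a pure in-memory sketch: the HTTP round-trips to CouchDB are only indicated in comments, appendEntry is a hypothetical helper, and the _rev value is a shortened sample:

```javascript
// Append a new log entry to a work site log document without mutating
// the original (CouchDB updates are whole-document PUTs against a _rev).
function appendEntry(logDoc, entry) {
  return Object.assign({}, logDoc, {
    documents: logDoc.documents.concat([entry])
  });
}

// 1. GET /stackoverflow_53752001/1235  -> current logDoc (with its _rev)
const logDoc = {
  _id: "1235",
  _rev: "1-de2b91", // shortened sample revision
  type: "worksitelog",
  documents: [
    { timestamp: "20180913101502", project_id: "125",
      document_id: "459", signature_id: "790" }
  ],
  worksite_id: "worksite_1"
};

// 2. Append the new document/signature pair.
const updated = appendEntry(logDoc, {
  timestamp: "20180921091502",
  project_id: "123",
  document_id: "457",
  signature_id: "791"
});

// 3. PUT /stackoverflow_53752001/1235 with `updated`; the carried-over
//    _rev lets CouchDB detect conflicting concurrent updates.
console.log(updated.documents.length);
```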
I assumed there are multiple signatures per document, so you'll have to write a log record for each and every one of them. If that grows too big, you can change worksite_id to something like worksite_1_201812, which would give one log per work site per month without breaking the query logic, I think.

Create a view to get multiple documents in CouchDb

CouchDb newbie here.
I have several documents in CouchDb with the same structure:
{
  "_id": "1170140286",
  "_rev": "1-79ffad4d4cbe24effc72f9ec519373ca",
  "data": [
    { "photo": "link_of_photo1", "userid": "34623", "username": "guest-user1" },
    { "photo": "link_of_photo2", "userid": "34623", "username": "guest-user1" },
    { "photo": "link_of_photo3", "userid": "34623", "username": "guest-user1" }
  ]
}
and
{
  "_id": "43573458",
  "_rev": "1-0ca5aa68590fcb58399fe059aa8fb881",
  "data": [
    { "photo": "link_of_photo1", "userid": "6334", "username": "guest-user2" },
    { "photo": "link_of_photo2", "userid": "6334", "username": "guest-user2" },
    { "photo": "link_of_photo3", "userid": "6334", "username": "guest-user2" }
  ]
}
I don't know whether what I want to do is possible, but I am trying to create a view that will combine the data elements of these documents into one single result:
[
  { "photo": "link_of_photo1", "userid": "34623", "username": "guest-user1" },
  { "photo": "link_of_photo2", "userid": "34623", "username": "guest-user1" },
  { "photo": "link_of_photo3", "userid": "34623", "username": "guest-user1" },
  { "photo": "link_of_photo1", "userid": "6334", "username": "guest-user2" },
  { "photo": "link_of_photo2", "userid": "6334", "username": "guest-user2" },
  { "photo": "link_of_photo3", "userid": "6334", "username": "guest-user2" }
]
I am pretty sure I haven't understood the logic of CouchDB correctly, so any help is highly appreciated.
What you can get is a view result with all links included. It will be an array, but it will look a bit different from your mocked result structure. The view logic can look like this (the row key is not used in the example; the user name or id might be a useful value to put there):
function (doc) {
  var data = doc.data
  if (!data) return
  for (var i = 0, link; link = data[i++];)
    emit(null, link)
}
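Running that map function locally over the two sample documents (with a stub emit) shows the flattened rows the view would produce. This is a plain Node simulation, not a CouchDB API:

```javascript
// The two sample documents from the question.
const docs = [
  { _id: "1170140286", data: [
    { photo: "link_of_photo1", userid: "34623", username: "guest-user1" },
    { photo: "link_of_photo2", userid: "34623", username: "guest-user1" },
    { photo: "link_of_photo3", userid: "34623", username: "guest-user1" }
  ]},
  { _id: "43573458", data: [
    { photo: "link_of_photo1", userid: "6334", username: "guest-user2" },
    { photo: "link_of_photo2", userid: "6334", username: "guest-user2" },
    { photo: "link_of_photo3", userid: "6334", username: "guest-user2" }
  ]}
];

// Stub emit() that collects rows, as the view index would.
const rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// The map function from the answer above.
function map(doc) {
  var data = doc.data;
  if (!data) return;
  for (var i = 0, link; (link = data[i++]); ) emit(null, link);
}

docs.forEach(map);
console.log(rows.length); // one row per photo link, across both docs
```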
For the sake of completeness, it should be mentioned that there is a way to "merge all docs" server-side: the view result can be manipulated by a CouchDB list function before the data is finally sent back to the requester. But it is not recommended! Please take that seriously and don't try it: it's a massive performance issue, and it was never the goal of CouchDB to support such use cases.
Please refer to these links:
http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views#Linked_documents
CouchDB "Join" two documents
combine multiple documents in a couchdb view
Sample Code:
function (doc) {
  if (doc.items)
    doc.items.forEach(function (item) {
      emit(doc._id, {_id: item});
    });
}
I hope these will solve your problem.

How to combine multiple CouchDB queries into a single request?

I'm trying to query documents in a Cloudant.com database (CouchDB). The two following query requests work fine separately:
{ "selector": { "some_field": "value_1" } }
{ "selector": { "some_field": "value_2" } }
Cloudant's documentation seems to indicate I should be able to combine those two queries into a single HTTP request as follows:
{ "selector": { "$or": [ { "some_field": "value_1" },
{ "some_field": "value_2" } ] } }
But when I try that I receive the following response:
{"error":"no_usable_index",
"reason":"There is no operator in this selector can used with an index."}
Can someone tell me what I need to do to get this to work?
There doesn't seem to be a way to achieve this with Cloudant Query at the moment. However, you can use a view query instead using the index created with Cloudant Query. Assuming the index is in a design document named ae97413b0892b3738572e05b2101cdd303701bb8:
curl -X POST \
  'https://youraccount.cloudant.com/db/_design/ae97413b0892b3738572e05b2101cdd303701bb8/_view/ae97413b0892b3738572e05b2101cdd303701bb8?reduce=false&include_docs=true' \
  -d '
{
  "keys": [
    ["value_1"],
    ["value_2"]
  ]
}'
This will give you a response like this:
{
  "total_rows": 3,
  "offset": 1,
  "rows": [
    {
      "id": "5fcec42ba5cad4fb48a676400dc8f127",
      "key": ["abc"],
      "value": null,
      "doc": {
        "_id": "5fcec42ba5cad4fb48a676400dc8f127",
        "_rev": "1-0042bf88a7d830e9fdb0326ae957e3bc",
        "some_field": "value_1"
      }
    },
    {
      "id": "955606432c9d3aaa48cab0c34dc2a9c8",
      "key": ["ghi"],
      "value": null,
      "doc": {
        "_id": "955606432c9d3aaa48cab0c34dc2a9c8",
        "_rev": "1-68fac0c180923a2bf133132301b1c15e",
        "some_field": "value_2"
      }
    }
  ]
}

indexing couchdb using elastic search

Hi, I have installed Elasticsearch version 0.18.7 and configured CouchDB according to these instructions. I am trying to create the index in the following way:
curl -XPUT '10.50.10.86:9200/_river/tasks/_meta' -d '{
  "type": "couchdb",
  "couchdb": {
    "host": "10.50.10.86",
    "port": 5984,
    "db": "tasks",
    "filter": null
  },
  "index": {
    "index": "tasks",
    "type": "tasks",
    "bulk_size": "100",
    "bulk_timeout": "10ms"
  }
}'
and got a message like:
{
  "ok": true,
  "_index": "_river",
  "_type": "tasks",
  "_id": "_meta",
  "_version": 2
}
But when I try to query the index like this:
curl -XGET 'http://10.50.10.86:9200/tasks/tasks?q=*&pretty=true'
I get:
{
  "error": "IndexMissingException[[tasks] missing]",
  "status": 404
}
Please guide me on how to index CouchDB using Elasticsearch.
I'm not sure where es_test_db2 is coming from. What's the output of this?
curl 10.50.10.86:9200/_river/tasks/_status\?pretty=1
