I need to insert items into SharePoint using the SP connector's Send HTTP Request action.
I send this body: "User": { "Key": "i:0#.f|membership|#{first(body('Get_by_mail')?['value'])['Email']}" },
Although the item is created successfully, SharePoint shows the field without a value. Do you have any idea what could be going on?
After reproducing this on my end, I was able to make it work using the JSON below in the body of the HTTP request.
{
  "__metadata": { "type": "SP.Data.<YOUR_LIST_NAME>ListItem" },
  "Title": "ccc",
  "UserId": 6
}
UserId is the key that represents the column in my SharePoint list named User: for person columns, append Id to the column name. So if Person is the column in your SharePoint list, make sure you set the key as PersonId.
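For reference, here is a minimal sketch of the same request made directly against the SharePoint REST API from Node.js; the site URL, list name, and token acquisition are assumptions for illustration, not part of the original flow:

// Hypothetical sketch: creating a list item with a person column via the
// SharePoint REST API. "MyList", the site URL, and accessToken are assumed.
const body = {
  __metadata: { type: 'SP.Data.MyListListItem' },
  Title: 'ccc',
  UserId: 6, // person column "User" -> key "UserId" (the user's lookup id)
};

const res = await fetch(
  "https://contoso.sharepoint.com/sites/demo/_api/web/lists/getbytitle('MyList')/items",
  {
    method: 'POST',
    headers: {
      Accept: 'application/json;odata=verbose',
      'Content-Type': 'application/json;odata=verbose',
      Authorization: `Bearer ${accessToken}`, // assumed to be acquired elsewhere
    },
    body: JSON.stringify(body),
  }
);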
Results:
If you look at your JSON:
"User":
{
"Key": "i:0#.f|membership|#{first(body('Get_by_mail')?['value'])['Email']}"
}
you'll notice that you're sending just a Key to a key/value pair target. The item inserts because a Key is provided, but nothing is displayed because you did not provide a Value to display. Try the following JSON instead:
"User":
{
"Key": "i:0#.f|membership|#{first(body('Get_by_mail')?['value'])['Email']}",
"Value": "i:0#.f|membership|#{first(body('Get_by_mail')?['value'])['Email']}"
}
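For illustration, once the Get_by_mail expression resolves, the User field payload would look something like this (the email address here is hypothetical):

"User": {
  "Key": "i:0#.f|membership|jane.doe@contoso.com",
  "Value": "i:0#.f|membership|jane.doe@contoso.com"
}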
I am trying to index a JSON document into Elasticsearch.
It seems to be working fine, as it does not give any error.
I have indexed the document as below.
await client.index({
  id: fieldId.toString(),
  index: 'project_documents_textfielddata',
  body: {
    FieldId: fieldId,
    DocumentId: documentId,
    Value: fieldData.fieldHTMLText,
  },
  routing: projectId.toString(),
});
But in Kibana it shows up as a Buffer type, as below (I have truncated the buffer, as it was very long).
{
  "_index": "documenttextfile.files",
  "_id": "6252ab411deaba21fd877c26",
  "_version": 1,
  "_score": 1,
  "_routing": "62505a765ff176cd491f1d1e",
  "_source": {
    "id": "6252ab411deaba21fd877c26",
    "Content": {
      "type": "Buffer",
      "data": [
        10,
        // some extra large binary content removed for convenience
        48,
        56,
        50
      ],
      "id": [
        "6252ab411deaba21fd877c26"
      ],
      "Content.type.keyword": [
        "Buffer"
      ]
    }
  }
}
So how can I see my data as-is (i.e. in JSON format) in Kibana? In the many Kibana tutorials I've seen, the data appears as plain text instead of a buffer.
Or am I doing something wrong while indexing? I basically want to see the data the way it appears in MongoDB Compass.
Your fieldData.fieldHTMLText field is probably of type Buffer, and you simply need to call fieldData.fieldHTMLText.toString() on it in order to transform the buffer into a string.
PS: the problem has nothing to do with Kibana, which shows you exactly what you're sending to Elasticsearch, i.e. a Buffer. So the problem is more related to your understanding of Node.js data structures (i.e. Buffer vs string) ;-)
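Applied to the indexing call above, a minimal sketch of the fix, assuming fieldData.fieldHTMLText is a Node.js Buffer holding UTF-8 text:

await client.index({
  id: fieldId.toString(),
  index: 'project_documents_textfielddata',
  body: {
    FieldId: fieldId,
    DocumentId: documentId,
    // Decode the Buffer to a UTF-8 string before indexing, so Elasticsearch
    // stores readable text instead of { type: "Buffer", data: [...] }.
    Value: fieldData.fieldHTMLText.toString('utf8'),
  },
  routing: projectId.toString(),
});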
I have a CouchDB database with documents that look like this:
{
  "_id": "000040cc-e3b4-47cc-b051-a5508efb8996",
  "_rev": "1-882d7f88cc2e1e767b55d0c82fb638d2",
  "state": "uploaded",
  "state_since": "2020-02-17T11:20:55.1450252Z",
  // more metadata ...
  "_attachments": {
    "large.jpg": {
      "content_type": "image/jpeg",
      "revpos": 1,
      "digest": "md5-NK7ejYjrErhMAs7tZ4+R8w==",
      "length": 87846,
      "stub": true
    },
    "medium.jpg": {
      ...
    },
    "small.jpg": {
      ...
    }
  }
}
Let's assume I want to query a set of images like this:
{
  "selector": {
    "state": "uploaded"
  },
  "sort": ["state_since"],
  "limit": 100
}
If I want to display the thumbnails of those 100 images, I'd have to iterate through the result list and download the corresponding attachments. This would be 101 requests in total.
I could also do it in one request by specifying that I want to fetch the documents with their attachments. But this would return all (potentially very large) attachments.
I know that I can set the fields property in my query to only return the fields I need. But can I apply this to attachments, too? And if yes: how?
No, there's no way to do what you're requesting. The only ways to fetch a subset of attachments are by fetching them one at a time, or by using the atts_since attribute when fetching a single document, which is intended for use in replication.
Perhaps consider re-designing your documents: you could store your thumbnails on separate documents that contain only the thumbnails.
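A rough sketch of that redesign: give every image document a companion document whose only attachment is the thumbnail, then fetch all of them in a single bulk request. The database name, document ID scheme, and attachment name below are assumptions for illustration:

// Hypothetical companion-document layout: image doc "<id>" has a
// thumbnail doc "thumb:<id>" whose only attachment is "thumb.jpg".
const ids = result.docs.map((doc) => `thumb:${doc._id}`);

// One request fetches all 100 thumbnail docs with their attachments
// inlined as base64 (include_docs + attachments query parameters).
const res = await fetch(
  'http://localhost:5984/images/_all_docs?include_docs=true&attachments=true',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ keys: ids }),
  }
);
const { rows } = await res.json();
// rows[i].doc._attachments['thumb.jpg'].data now holds the base64 thumbnail.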
I'm trying to figure out how to insert an image into CouchDB using the node-CouchDB library found here: https://www.npmjs.com/package/node-couchdb
Here's what I've done:
fs.readFile('download.jpeg', (err, data) => {
  const binary_data = new Buffer(data, 'binary');
  couch.insertAttachment("node_db", doc_number, "download.jpeg", binary_data, rev_number)
    .then(({ data, headers, status }) => {
      // success
    }, err => {
      console.log("ERROR" + err.code);
    });
});
The result is that CouchDB stores this in the document as follows:
{
  "_id": "2741d6f37d61d6bbdf63df3be5000504",
  "_rev": "22-bfdbe6db35c7d9873a2cc8a38afb2833",
  "_attachments": {
    "attachment": {
      "content_type": "application/json",
      "revpos": 22,
      "digest": "md5-on0A+d7045WPI6FyS1ut4g==",
      "length": 22482,
      "stub": true
    }
  }
}
// This is what the data looks like in CouchDB when using the View Attachment function in the interface:
{"type":"Buffer","data":[255,216,255,224,0,16,74,70,73,70,0,1,1,0,0,1,0,1,0,0,255,219,0,132,0,9,6,7,18,18,18,21,18,19,19,22,21,21,23,23,23,24,21,21,21,23,23,21,21,24,21,21,21,23,22,22,21,21,22,24,29,40,32,24,26,37,29,21,21,33,49,33,37,41,43,46,46,46,23,31,51,56,51,45,55,40,45,46,43,1,10,10,10,14,13,14,26,16,16,26,45,37,29,37,45,45,45,45,45,45,45,241,...]
I then tried changing the Content-Type attribute to "image/jpeg" in the request header, resulting in:
{
  "_id": "2741d6f37d61d6bbdf63df3be5000504",
  "_rev": "23-cf8c2076b43082fdfe605cad68ef2355",
  "_attachments": {
    "attachment": {
      "content_type": "image/jpeg",
      "revpos": 23,
      "digest": "md5-SaekQP37DCCeGX2M8UVeGQ==",
      "length": 22482,
      "stub": true
    }
  }
}
However, this still results in an image that isn't viewable from the CouchDB interface (clicking View Attachments). The image in this case is only 6,904 bytes, but it's being stored with a length of ~22k (inflating the size in CouchDB), so I'm assuming I'm not passing the correct representation (encoding) of the image to CouchDB.
You can encode your image data as a base64 string and save it, although I would not recommend that at all. What I would do instead is upload the file to an object storage service like AWS S3 or its open-source alternative MinIO, and then save just a reference to the file (e.g. an image URL) in the DB.
P.S.: I'm sorry about the lack of links and references in my answer; I'm writing it on my phone. I can edit it and include references as soon as I'm home.
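A rough sketch of that approach with the AWS SDK for JavaScript v3; the bucket name, region, and document shape are assumptions, and the node-couchdb insert call stands in for whatever document-update method you already use:

// Upload the raw image bytes to S3 and keep only a URL in CouchDB.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { promises as fs } from 'fs';

const s3 = new S3Client({ region: 'us-east-1' });
const data = await fs.readFile('download.jpeg');

await s3.send(new PutObjectCommand({
  Bucket: 'my-image-bucket', // hypothetical bucket
  Key: 'download.jpeg',
  Body: data,
  ContentType: 'image/jpeg',
}));

// Store just the reference in the CouchDB document.
await couch.insert('node_db', {
  _id: doc_number,
  imageUrl: 'https://my-image-bucket.s3.amazonaws.com/download.jpeg',
});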
When parsing exported Application Insights telemetry from Blob storage, the request data looks something like this:
{
  "request": [
    {
      "id": "3Pc0MZMBJgQ=",
      "name": "POST Blah",
      "count": 6,
      "responseCode": 201,
      "success": true,
      "url": "https://example.com/api/blah",
      "durationMetric": {
        "value": 66359508.0,
        "count": 6.0,
        "min": 11059918.0,
        "max": 11059918.0,
        "stdDev": 0.0,
        "sampledValue": 11059918.0
      },
      ...
    }
  ],
  ...
}
I am looking for the duration of the request, but I see that I am presented with a durationMetric object.
According to the documentation, the request[0].durationMetric.value field is described as
Time from request arriving to response. 1e7 == 1s
But if I query this using Analytics, the value doesn't match up to this field.
They do, however, match up to the min, max and sampledValue fields.
Which field should I use? And what does that "value": 66359508.0 value represent in the above example?
It doesn't match because you're seeing sampled data (meaning this event represents sampled data from multiple requests). I'd recommend starting with https://azure.microsoft.com/en-us/documentation/articles/app-insights-sampling/ to understand how sampling works.
In this case, the "matching" value would come from durationMetric.sampledValue (notice that value == count * sampledValue).
It's hard to compare exactly what you're seeing because you don't show the Kusto query you're using, but you do need to be aware of sampling when writing AI Analytics queries. See https://azure.microsoft.com/en-us/documentation/articles/app-insights-analytics-tour/#counting-sampled-data for more details on counting sampled data.
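As a quick sanity check of that relationship, using the numbers from the example payload above and the documented conversion (1e7 ticks == 1 second):

// durationMetric values copied from the example export above.
const durationMetric = { value: 66359508.0, count: 6.0, sampledValue: 11059918.0 };

// value is the aggregate duration across the requests this sampled event represents:
console.log(durationMetric.value === durationMetric.count * durationMetric.sampledValue); // true

// Per-request duration, converted from ticks to seconds (1e7 ticks == 1s):
console.log(durationMetric.sampledValue / 1e7); // ≈ 1.106 seconds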