Starting from the following kind of string:
const json = '{"list":"[{"additionalInformation": {"source": "5f645d7d94-c6ktd"}, "alarmName": "data", "description": "Validation Error. Fetching info has been skipped.", "eventTime": "2020-01-27T14:42:44.143200 UTC", "expires": 2784, "faultyResource": "Data", "name": "prisco", "severity": "Major"}]"}'
How can I handle this as JSON? The following approach doesn't work:
const obj = JSON.parse(json);
It gives an unexpected result.
How can I parse it correctly?
In conclusion, I need to extract the part corresponding to the first item of the list and then parse the JSON it contains.
Your JSON is invalid. Here is a valid version of it:
const json = {
"list": [ {
"additionalInformation": {
"source": "5f645d7d94-c6ktd"
},
"alarmName": "data",
"description": "Validation Error. Fetching info has been skipped.",
"eventTime": "2020-01-27T14:42:44.143200 UTC",
"expires": 2784,
"faultyResource": "Data",
"name": "prisco",
"severity": "Major"
}
]
}
The above is already a JavaScript object, and parsing it as JSON again throws an error.
JSON.parse() parses a string and turns it into a JavaScript object. The string must be in valid JSON format, or it will throw an error.
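For example, a minimal sketch of both cases:
// A string containing valid JSON parses cleanly:
const ok = JSON.parse('{"list":[{"name":"prisco"}]}');
console.log(ok.list[0].name); // "prisco"
// A malformed string (like the one in the question, where unescaped
// quotes wrap the array) throws a SyntaxError:
try {
  JSON.parse('{"list":"[{"name":"prisco"}]"}');
} catch (e) {
  console.log(e.name); // "SyntaxError"
}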
Update:
Create a function to clean your string and prepare it for JSON.parse():
function cleanString(str) {
  // Strip the stray quotes that wrap the array
  str = str.replace('"[', '[');
  str = str.replace(']"', ']');
  return str;
}
And use it like:
json = cleanString(json);
console.log(JSON.parse(json));
Demo:
let json = '{"list":"[{"additionalInformation": {"source": "5f645d7d94-c6ktd"}, "alarmName": "data", "description": "Validation Error. Fetching info has been skipped.", "eventTime": "2020-01-27T14:42:44.143200 UTC", "expires": 2784, "faultyResource": "Data", "name": "prisco", "severity": "Major"}]"}';
json = cleanString(json);
console.log(JSON.parse(json));
function cleanString(str) {
str = str.replace('"[', '[');
str = str.replace(']"', ']');
return str;
}
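One caveat worth noting (my addition, beyond the original answer): replace() with a string argument only swaps the first occurrence, which happens to be enough here because the payload contains a single quoted array. If the string could contain several, a global regex is safer:
function cleanStringGlobal(str) {
  // The /g flag replaces every occurrence, not just the first
  return str.replace(/"\[/g, '[').replace(/\]"/g, ']');
}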
Remove the double quotes from around the array brackets to make the JSON valid:
const json = '{"list":[{"additionalInformation": {"source": "5f645d7d94-c6ktd"}, "alarmName": "data", "description": "Validation Error. Fetching info has been skipped.", "eventTime": "2020-01-27T14:42:44.143200 UTC", "expires": 2784, "faultyResource": "Data", "name": "prisco", "severity": "Major"}]}'
Related
I want to ingest base64 encoded avro messages in druid. I am getting the following error -
Avro's unnecessary EOFException, detail: https://issues.apache.org/jira/browse/AVRO-813
Going through the code (line 88 of https://github.com/apache/druid/blob/master/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/InlineSchemaAvroBytesDecoder.java), it does not seem to decode the messages using a Base64 decoder. Am I missing something? How can we configure Druid to parse Base64-encoded Avro messages?
Spec used -
"inputFormat": {
"type": "avro_stream",
"avroBytesDecoder": {
"type": "schema_inline",
"schema": {
"namespace": "org.apache.druid.data",
"name": "User",
"type": "record",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "price",
"type": "int"
}
]
}
},
"flattenSpec": {
"useFieldDiscovery": true,
"fields": [
{
"type": "path",
"name": "someRecord_subInt",
"expr": "$.someRecord.subInt"
}
]
},
"binaryAsString": false
}
Thanks:)
I haven't used Avro ingestion, but from the Apache Druid docs here it seems like you need to set "binaryAsString" to true.
The extension returns bytes and fixed Avro types as base64 encoded strings by default. To decode these types as UTF-8 strings, enable the binaryAsString option on the Avro parser.
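Based on that, the fix should just be flipping the flag that is already present in the spec from the question (a sketch; everything else unchanged):
"inputFormat": {
  "type": "avro_stream",
  "avroBytesDecoder": { ... },
  "flattenSpec": { ... },
  "binaryAsString": true
}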
I take Avro bytes from Kafka and deserialize them.
But I get strange output because of a decimal value, and I cannot work with it afterwards (for example, turn it into JSON or insert it into a DB):
import io, json
import avro.schema
from avro.io import DatumReader, BinaryDecoder
# only needed part of schemaDict
schemaDict = {
"name": "ApplicationEvent",
"type": "record",
"fields": [
{
"name": "desiredCreditLimit",
"type": [
"null",
{
"type": "bytes",
"logicalType": "decimal",
"precision": 14,
"scale": 2
}
],
"default": null
}
]
}
schema_avro = avro.schema.parse(json.dumps(schemaDict))
reader = DatumReader(schema_avro)
# BinaryDecoder expects a file-like object; data is the raw bytes from Kafka
decoder = BinaryDecoder(io.BytesIO(data))
event_dict = reader.read(decoder)
print(event_dict)
#{'desiredCreditLimit': Decimal('100000.00')}
print (json.dumps(event_dict))
#TypeError: Object of type Decimal is not JSON serializable
I tried to use avro_json_serializer, but got the error: "AttributeError: 'decimal.Decimal' object has no attribute 'decode'".
And because of this Decimal in the dictionary I cannot insert the values into the DB either.
I also tried the fastavro library, but I could not deserialize the message, as I understand because the serialization was not done with fastavro.
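For reference, a minimal sketch of one common workaround for the json.dumps error (my assumption, not part of the original post): pass a default callable that converts Decimal values:
import json
from decimal import Decimal

event_dict = {'desiredCreditLimit': Decimal('100000.00')}

# json.dumps calls `default` for any object it cannot serialize
# natively, so Decimal can be rendered as a string (or float):
print(json.dumps(event_dict, default=str))
# {"desiredCreditLimit": "100000.00"}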
I am trying to read the api_key from the headers and pass it as a binding expression, as shown below:
{
"type": "table",
"direction": "in",
"name": "subscriptions",
"tableName": "subscriptions",
"partitionKey": "{api_key}",
"take": "50",
"connection": "learnbindingslab1_STORAGE"
}
What would be the correct expression to get the api_key from the request headers?
It seems you can use any value under context.bindingData, and you can access nested data with dot notation - so in this case given:
context.bindingData =
{
"headers": {
"some-header": "value"
}
}
You can use it as a binding expression in Id/PartitionKey/sqlQuery with {headers.some-header}.
{
"Id": "{headers.some-id}",
"PartitionKey": "{headers.some-key}",
"sqlQuery": "SELECT * FROM O WHERE O.key = {headers.some-value}"
}
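Applied to the binding from the question, and assuming the request header is literally named api_key, that would be:
{
  "type": "table",
  "direction": "in",
  "name": "subscriptions",
  "tableName": "subscriptions",
  "partitionKey": "{headers.api_key}",
  "take": "50",
  "connection": "learnbindingslab1_STORAGE"
}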
I'm using the Microsoft Graph API to interact with SharePoint.
First, I upload a file to SharePoint:
https://graph.microsoft.com/v1.0/sites/abc78c05-a77b-45bf-a1a1-51f09548b497/drive/root:/test1212123.txt:/content
Then we get the following response:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#sites('abc78c05-a77b-45bf-a1a1-51f09548b497')/drive/root/$entity",
"#microsoft.graph.downloadUrl": "https://yeeofficesg.sharepoint.com/sites/GdTest/_layouts/15/download.aspx?UniqueId=b9d25e13-c915-432f-b9fb-f2d36a188d9f&Translate=false&tempauth=eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0.eyJhdWQiOiIwMDAwMDAwMy0wMDAwLTBmZjEtY2UwMC0wMDAwMDAwMDAwMDAveWVlb2ZmaWNlc2cuc2hhcmVwb2ludC5jb21AMzgzMDNhNTQtMjUwMS00MDcwLTlkYjItYzNmNTY2OTc2NGUxIiwiaXNzIjoiMDAwMDAwMDMtMDAwMC0wZmYxLWNlMDAtMDAwMDAwMDAwMDAwIiwibmJmIjoiMTU4NDY4MjQ5OSIsImV4cCI6IjE1ODQ2ODYwOTkiLCJlbmRwb2ludHVybCI6InltcjVvWHhDU0FIaFhhV0tYVnZuVDVjK05ETnZsejhzcC9YeFp3MStQaHc9IiwiZW5kcG9pbnR1cmxMZW5ndGgiOiIxMzUiLCJpc2xvb3BiYWNrIjoiVHJ1ZSIsImNpZCI6IlpUUmhPVFk1WkdFdE5EQXlOQzAwWlRnMExUazFZelF0WkRkalpqRmpOR1UxTm1ZMCIsInZlciI6Imhhc2hlZHByb29mdG9rZW4iLCJzaXRlaWQiOiJZV0pqTnpoak1EVXRZVGMzWWkwME5XSm1MV0V4WVRFdE5URm1NRGsxTkRoaU5EazMiLCJhcHBfZGlzcGxheW5hbWUiOiJIdHRwUmVxdWVzdCBUZXN0IiwibmFtZWlkIjoiNTk3ZDQ4YmMtMDVmMy00MTU4LThhY2MtYWU1Y2M3YTljNmFkQDM4MzAzYTU0LTI1MDEtNDA3MC05ZGIyLWMzZjU2Njk3NjRlMSIsInJvbGVzIjoiYWxsc2l0ZXMud3JpdGUgYWxsZmlsZXMud3JpdGUiLCJ0dCI6IjEiLCJ1c2VQZXJzaXN0ZW50Q29va2llIjpudWxsfQ.aTVxeDdWNkowcWFDK0xYOHUvZGo3K0VVSEd1dU02MFVheEFJbnBWWUJHTT0&ApiVersion=2.0",
"createdDateTime": "2020-03-20T05:34:59Z",
"eTag": "\"{B9D25E13-C915-432F-B9FB-F2D36A188D9F},1\"",
"id": "016REKDTITL3JLSFOJF5B3T67S2NVBRDM7",
"lastModifiedDateTime": "2020-03-20T05:34:59Z",
"name": "test1212123.txt",
"webUrl": "https://yeeofficesg.sharepoint.com/sites/GdTest/Shared%20Documents/test1212123.txt",
"cTag": "\"c:{B9D25E13-C915-432F-B9FB-F2D36A188D9F},1\"",
"size": 12,
"createdBy": {
"application": {
"id": "597d48bc-05f3-4158-8acc-ae5cc7a9c6ad",
"displayName": "HttpRequest Test"
}
},
"lastModifiedBy": {
"application": {
"id": "597d48bc-05f3-4158-8acc-ae5cc7a9c6ad",
"displayName": "HttpRequest Test"
}
},
"parentReference": {
"driveId": "b!BYzHq3unv0WhoVHwlUi0l_EO2rYM2NNCptmOTvJ-EqeM9aeJ-zj_TZktSrctfA1S",
"driveType": "documentLibrary",
"id": "016REKDTN6Y2GOVW7725BZO354PWSELRRZ",
"path": "/drive/root:"
},
"file": {
"mimeType": "text/plain",
"hashes": {
"quickXorHash": "RBBCDGQwAxrUIARAFAEJSgAAAAA="
}
},
"fileSystemInfo": {
"createdDateTime": "2020-03-20T05:34:59Z",
"lastModifiedDateTime": "2020-03-20T05:34:59Z"
}
}
Then I want to update the customized column of this list:
https://graph.microsoft.com/v1.0/sites/abc78c05-a77b-45bf-a1a1-51f09548b497/lists/89a7f58c-38fb-4dff-992d-4ab72d7c0d52/items/80/fields
In step 3, I need the item id (in this example: 80),
but when I upload the file, I can't get the item id directly from the response.
Using this API: https://graph.microsoft.com/v1.0/sites/abc78c05-a77b-45bf-a1a1-51f09548b497/lists/89a7f58c-38fb-4dff-992d-4ab72d7c0d52/items/
I can get the items list, which includes the item id I need.
Finally, my question is: when I upload a file to SharePoint, how can I get the item id needed to update the item?
I ended up extracting the Item GUID from the response, i.e.
"#microsoft.graph.downloadUrl": "https://yeeofficesg.sharepoint.com/sites/GdTest/_layouts/15/download.aspx?UniqueId=b9d25e13-c915-432f-b9fb-f2d36a188d9f&Translate=false&tempauth=....
or
"eTag": ""{B9D25E13-C915-432F-B9FB-F2D36A188D9F},1""
or
"cTag": ""c:{B9D25E13-C915-432F-B9FB-F2D36A188D9F},1""
And then use that in the PATCH call where the item ID is required, i.e. https://graph.microsoft.com/v1.0/sites/abc78c05-a77b-45bf-a1a1-51f09548b497/lists/89a7f58c-38fb-4dff-992d-4ab72d7c0d52/items/B9D25E13-C915-432F-B9FB-F2D36A188D9F/fields
There might be a more elegant way to solve the problem; however, this worked for me.
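For illustration, a minimal sketch of the extraction (assuming a JavaScript client; the regex simply pulls the GUID out of the braces in the eTag):
// Extract the item GUID from the eTag returned by the upload response
const eTag = '"{B9D25E13-C915-432F-B9FB-F2D36A188D9F},1"';
const itemGuid = eTag.match(/\{([0-9A-Fa-f-]+)\}/)[1];
console.log(itemGuid); // B9D25E13-C915-432F-B9FB-F2D36A188D9F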
I have the following JSON response for my REST endpoint:
{
"response": {
"status": 200,
"startRow": 0,
"endRow": 1,
"totalRows": 1,
"next": "",
"data": {
"id": "workflow-1",
"name": "SampleWorkflow",
"tasks": [
{
"id": "task-0",
"name": "AWX",
"triggered_by": ["task-5"]
},
{
"id": "task-1",
"name": "BrainStorming",
"triggered_by": ["task-2", "task-5"]
},
{
"id": "task-2",
"name": "OnHold",
"triggered_by": ["task-0", "task-4", "task-7", "task-8", "task9"]
},
{
"id": "task-3",
"name": "InvestigateSuggestions",
"triggered_by": ["task-6"]
},
{
"id": "task-4",
"name": "Mistral",
"triggered_by": ["task-3"]
},
{
"id": "task-5",
"name": "Ansible",
"triggered_by": ["task-3"]
},
{
"id": "task-6",
"name": "Integration",
"triggered_by": []
},
{
"id": "task-7",
"name": "Tower",
"triggered_by": ["task-5"]
},
{
"id": "task-8",
"name": "Camunda",
"triggered_by": ["task-3"]
},
{
"id": "task-9",
"name": "HungOnMistral",
"triggered_by": ["task-0", "task-7"]
},
{
"id": "task-10",
"name": "MistralIsChosen",
"triggered_by": ["task-1"]
}
]
}
}
}
I am using REST Assured with a Groovy GPath expression for an extraction, as follows:
given()
.when()
.get("http://localhost:8080/workflow-1")
.then()
.extract()
.path("response.data.tasks.findAll{ it.triggered_by.contains('task-3') }.name")
which correctly gives me [Mistral, Ansible, Camunda]
What I am trying to achieve is to find the names of the tasks that are triggered by the InvestigateSuggestions task. But I don't know for sure that the task id I have to pass to contains() is task-3; I only know its name, InvestigateSuggestions. So I attempt to do:
given()
.when()
.get("http://localhost:8080/workflow-1")
.then()
.extract()
.path("response.data.tasks.findAll{
it.triggered_by.contains(response.data.tasks.find{
it.name.equals('InvestigateSuggestions')}.id) }.name")
which does not work and complains that the parameter "response" was used but not defined.
How do I iterate over the outer collection from inside the findAll closure to find the correct id to pass into contains()?
You can make use of a dirty secret, the restAssuredJsonRootObject. This is undocumented (and subject to change, although as far as I can remember it has never changed in the 7+ year lifespan of REST Assured).
This would allow you to write:
given()
.when()
.get("http://localhost:8080/workflow-1")
.then()
.extract()
.path("response.data.tasks.findAll{
it.triggered_by.contains(restAssuredJsonRootObject.response.data.tasks.find{
it.name.equals('InvestigateSuggestions')}.id) }.name")
If you don't want to use this "hack" then you need to do something similar to what Michael Easter proposed in his answer.
When it comes to generating matchers based on the response body, the story is better. See the docs here.
I'm not sure if this is idiomatic, but one approach is to find the id first and then substitute it into another query:
#Test
void testCase1() {
def json = given()
.when()
.get("http://localhost:5151/egg_minimal/stacko.json")
// e.g. id = 'task-3' for name 'InvestigateSuggestions'
def id = json
.then()
.extract()
.path("response.data.tasks.find { it.name == 'InvestigateSuggestions' }.id")
// e.g. names of tasks triggered by 'task-3'
def tasks = json
.then()
.extract()
.path("response.data.tasks.findAll{ it.triggered_by.contains('${id}') }.name")
assertEquals(['Mistral', 'Ansible', 'Camunda'], tasks)
}