From the file:
====================================
{
"id": "ffc131ff-1793-4109-940f-5b537f7061cf",
"securityResourceId": "48d0eeff-690d-4c2c-b6f9-9b25315f9ca3",
"name": "Dev-bpimdmgr-idev3-01",
"active": true,
"licensed": true,
"licenseType": "AUTHORIZED",
"status": "ONLINE",
"tags": []
},
{
"id": "82db2888-7a2f-48fe-bc25-26a5e28bb340",
"securityResourceId": "5a437865-6ced-402e-ac47-dd38191e5696",
"name": "obiee-cmixdmgr-nprd3-01",
"active": true,
"licensed": true,
"licenseType": "AUTHORIZED",
"status": "ONLINE",
"tags": [
{
"id": "fbf62944-a8a4-4a22-8e75-cd8d88eacaff",
"name": "obiee-tag",
"color": "#32cd32",
"description": "obiee tag for version import",
"objectType": "Agent"
}
]
},
I want to delete the tags[] block, including everything inside it, using a Perl or shell script.
Regards,
Kalaiyarasan
Just use the JSON module:
use JSON qw{ from_json to_json };

# Wrap the comma-separated objects in [] so they parse as an array,
# delete the tags key from each record, then re-serialize.
my $struct = from_json("[$input]");
delete $_->{tags} for @$struct;
print to_json($struct);
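If Perl isn't a hard requirement, the same approach works in Python. A minimal sketch, where the sample records are trimmed versions of the ones in the question:

```python
import json

# Trimmed stand-in for the question's file: comma-separated objects
# without surrounding [] (the real file also ends with a trailing comma).
raw = '''
{"name": "Dev-bpimdmgr-idev3-01", "tags": []},
{"name": "obiee-cmixdmgr-nprd3-01", "tags": [{"name": "obiee-tag"}]},
'''

# Drop any trailing comma, wrap in [] so it parses as a JSON array.
records = json.loads("[" + raw.strip().rstrip(",") + "]")

# Remove the tags block (and everything inside it) from each record.
for record in records:
    record.pop("tags", None)

print(json.dumps(records))
```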
The text below is part of a string I parsed from HTML (pre). Since I cannot place <> tags in this box, I have replaced the beginning and end tags with (pre) and (/pre).
(pre)(b)Below are the details for server SomeServerName from NetSystem.(Data Length - 1)(/b)
[
{
"askId": "Value1",
"billingCode": "99999999",
"clusterId": null,
"createdBy": "Mike",
"createdFromLegacy": null,
"createdOn": "2021-08-06T17:54:28.220Z",
"description": "Windows 2019",
"environment": "devops",
"hostId": null,
"id": "acd16582-b009-4667-aa95-5977603772sa",
"infrastructure": {
"apiId": "App2019_SA_1_v8-w2019_mike3_cc8f7e02-d426-423d-addb-b29bc7e163be",
"capacityId": "ODI",
"catalogManagementGroup": "Sales Marketing"
},
"legacyId": "XL143036181",
"location": {
"availabilityZone": "ny",
"code": "mx31",
"description": "uhg : mn053",
"region": "west",
"vendor": "apple"
},
"maintenance": {
"group": "3",
"status": "steady_state",
"window": {
"days": "Sunday",
"endTime": "06:00:00.000Z",
"startTime": "02:00:00.000Z"
}
},
"name": "SomeServer",
"network": {
"fqdn": "SomeServer.dom.tes.contoso.com",
"ipAddress": "xx.xx.xx.xx"
},
"os": {
"description": "Microsoft windows 2019",
"type": "windows",
"vendor": "Microsoft",
"version": "2019"
},
"owner": {
"id": "000111111",
"msid": "jtest"
},
"provision": {
"id": "ba424e42-a925-49a5-a4b7-5dcf41b69d4e",
"requestingApi": "mars Hub",
"system": "vRealize"
},
"specs": {
"cpuCount": 4,
"description": "Virtual Hardware",
"ram": 64384,
"serialNumber": null
},
"status": "ACTIVE",
"support": {
"group": "Support group"
},
"tags": {
"appTag": "minitab"
},
"updatedBy": "snir_agent",
"updatedOn": "2021-08-06T17:54:31.525Z"
}
](/pre)
As you can see, this is almost JSON data, but I cannot parse it as such because of the (b) (/b) tag that exists inside my (pre) (/pre) tag. How can I strip out this (b) tag along with its content, so that I am left with just the JSON data and can treat it as such, letting me select values with JSON functions more easily?
If your JSON always has [] brackets you can extract the content inside it and then parse it:
Python example:
import re
import json

text = '<b>asd</b>[{"a": "b", "c": "d"}] pre'  # your content
json_content = re.findall(r'\[[^\]]*\]', text)[0]  # '[{"a": "b", "c": "d"}]'
result = json.loads(json_content)  # [{'a': 'b', 'c': 'd'}]
You can do this either by using re as indicated here, or by using split:
cleaned = data.split("(b)")[0] + data.split("(/b)")[1]
The line above concatenates the content before (b) and after (/b), removing the b tag and its content.
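For the exact (pre)/(b) placeholders shown in the question, here is a small sketch (the sample string is a made-up, trimmed version of the question's content) that strips the (b)…(/b) block and the (pre) markers in one pass, then parses what remains:

```python
import json
import re

# Hypothetical, trimmed stand-in for the (pre) content from the question.
data = ('(pre)(b)Below are the details for server SomeServerName.'
        '(Data Length - 1)(/b)[{"name": "SomeServer"}](/pre)')

# Remove the (b)...(/b) block and its content (non-greedy, across newlines).
cleaned = re.sub(r'\(b\).*?\(/b\)', '', data, flags=re.DOTALL)
# Then drop the (pre)/(/pre) markers themselves.
cleaned = cleaned.replace('(pre)', '').replace('(/pre)', '')

# What remains is plain JSON.
result = json.loads(cleaned)
print(result[0]['name'])  # SomeServer
```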
Why does my database have a field __data that exactly copies the real data, but doesn't update when the data changes?
Here is the example of the data :
{
"id": ObjectId("600ffdf0317f9617960b7df6"),
"userId" : "bf959bb8-78a6-426b-b372-cf5a1f9ef731",
"name": "Product 1",
"isActive": true,
"createdAt": ISODate("2021-05-26 11:33:04.992Z"),
"updatedAt": null,
"__data": {
"id": "600ffdf0317f9617960b7df6",
"userId" : "bf959bb8-78a6-426b-b372-cf5a1f9ef731",
"name": "Product 1",
"isActive": true,
"createdAt": ISODate("2021-05-26 11:33:04.992Z"),
"updatedAt": null,
},
}
When I update the data with, say, {"name": "Product 1 New"}, "__data.name" is still "Product 1".
The problem is that when I get the data using find() or findById(), the result shows "Product 1", which comes from __data instead of the real data.
I'm using loopback v3 and using mongodb for the database.
Below is my schema for product.
product.json
{
"name": "products",
"base": "PersistedModel",
"plural": "products",
"idInjection": true,
"options": {
"validateUpsert": true
},
"properties": {
"id": {
"type": "string",
"id": true
},
"userId": {
"type": "string"
},
"name": {
"type": "object"
},
"isActive": {
"type": "boolean",
"default": false
},
"createdAt": {
"type": "date",
"default": "$now"
},
"updatedAt": {
"type": "date",
"default": "$now"
},
},
"validations": [],
"relations": {
"user": {
"type": "belongsTo",
"model": "reseller",
"foreignKey": "userId"
}
},
"acls": [
],
"scope": {
"order": ["createdAt DESC"]
},
"methods": {}
}
How to solve this?
I want to get the response is the real data which is the updated one, and how to avoid to have field __data?
In case somebody stumbles on this, even though LB3 is pretty much dead and IBM is trying to kill off LB4:
My problem was the difference between update() and updateAll(). updateAll() seems to add the __data object in MongoDB (I'm not sure why or how); after changing to update(), it should be fine.
I am doing some analysis on Spark SQL query execution plans. The execution plans that the explain() API prints are not very readable. In the Spark web UI, a DAG graph is created that is divided into jobs, stages and tasks, and is much more readable. Is there any way to create that graph from the execution plans, or any APIs in the code? If not, are there any APIs that can read that graph from the UI?
As far as I can see, this project (https://github.com/AbsaOSS/spline-spark-agent) is able to interpret the execution plan and generate it in a readable way.
The Spark job here reads a file, converts it to a CSV file, and writes it to local storage.
A sample output in JSON looks like:
{
"id": "3861a1a7-ca31-4fab-b0f5-6dbcb53387ca",
"operations": {
"write": {
"outputSource": "file:/output.csv",
"append": false,
"id": 0,
"childIds": [
1
],
"params": {
"path": "output.csv"
},
"extra": {
"name": "InsertIntoHadoopFsRelationCommand",
"destinationType": "csv"
}
},
"reads": [
{
"inputSources": [
"file:/Users/liajiang/Downloads/spark-onboarding-demo-application/src/main/resources/wikidata.csv"
],
"id": 2,
"schema": [
"6742cfd4-d8b6-4827-89f2-4b2f7e060c57",
"62c022d9-c506-4e6e-984a-ee0c48f9df11",
"26f1d7b5-74a4-459c-87f3-46a3df781400",
"6e4063cf-4fd0-465d-a0ee-0e5c53bd52b0",
"2e019926-3adf-4ece-8ea7-0e01befd296b"
],
"params": {
"inferschema": "true",
"header": "true"
},
"extra": {
"name": "LogicalRelation",
"sourceType": "csv"
}
}
],
"other": [
{
"id": 1,
"childIds": [
2
],
"params": {
"name": "`source`"
},
"extra": {
"name": "SubqueryAlias"
}
}
]
},
"systemInfo": {
"name": "spark",
"version": "2.4.2"
},
"agentInfo": {
"name": "spline",
"version": "0.5.5"
},
"extraInfo": {
"appName": "spark-spline-demo-application",
"dataTypes": [
{
"_typeHint": "dt.Simple",
"id": "f0dede5e-8fe1-4c22-ab24-98f7f44a9a5a",
"name": "timestamp",
"nullable": true
},
{
"_typeHint": "dt.Simple",
"id": "dbe1d206-3d87-442c-837d-dfa47c88b9c1",
"name": "string",
"nullable": true
},
{
"_typeHint": "dt.Simple",
"id": "0d786d1e-030b-4997-b005-b4603aa247d7",
"name": "integer",
"nullable": true
}
],
"attributes": [
{
"id": "6742cfd4-d8b6-4827-89f2-4b2f7e060c57",
"name": "date",
"dataTypeId": "f0dede5e-8fe1-4c22-ab24-98f7f44a9a5a"
},
{
"id": "62c022d9-c506-4e6e-984a-ee0c48f9df11",
"name": "domain_code",
"dataTypeId": "dbe1d206-3d87-442c-837d-dfa47c88b9c1"
},
{
"id": "26f1d7b5-74a4-459c-87f3-46a3df781400",
"name": "page_title",
"dataTypeId": "dbe1d206-3d87-442c-837d-dfa47c88b9c1"
},
{
"id": "6e4063cf-4fd0-465d-a0ee-0e5c53bd52b0",
"name": "count_views",
"dataTypeId": "0d786d1e-030b-4997-b005-b4603aa247d7"
},
{
"id": "2e019926-3adf-4ece-8ea7-0e01befd296b",
"name": "total_response_size",
"dataTypeId": "0d786d1e-030b-4997-b005-b4603aa247d7"
}
]
}
}
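If you only need a rough operator tree rather than the full Spark UI graph, the childIds in the Spline output above are already enough to reconstruct one. A minimal sketch, where the lineage below is a trimmed copy of the sample output:

```python
import json

# Trimmed version of the Spline lineage JSON shown above.
lineage = json.loads("""
{
  "operations": {
    "write": {"id": 0, "childIds": [1],
              "extra": {"name": "InsertIntoHadoopFsRelationCommand"}},
    "reads": [{"id": 2, "extra": {"name": "LogicalRelation"}}],
    "other": [{"id": 1, "childIds": [2], "extra": {"name": "SubqueryAlias"}}]
  }
}
""")

def build_dag(lineage):
    """Index every operation by id, keeping its name and child links."""
    ops = lineage["operations"]
    return {
        op["id"]: {"name": op["extra"]["name"], "children": op.get("childIds", [])}
        for op in [ops["write"]] + ops.get("reads", []) + ops.get("other", [])
    }

def render(nodes, node_id=0, depth=0):
    """Walk from the write node (id 0) down to the reads, indenting children."""
    lines = ["  " * depth + nodes[node_id]["name"]]
    for child in nodes[node_id]["children"]:
        lines += render(nodes, child, depth + 1)
    return lines

print("\n".join(render(build_dag(lineage))))
```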
I am able to create a message route in the Azure portal and route messages to a Service Bus queue when the query matches. I want to create the message route using the REST API instead of the portal, but I have seen many documents and am unable to find the proper one. Is creating a message route using the REST API possible? If yes, how can I achieve this? Please provide the respective links to refer to.
I haven't tried this through the REST API, but as Roman suggested,
you can check IotHubResource_CreateOrUpdate, which shows how to create or update the metadata of an IoT hub. The usual pattern to modify a property is to retrieve the IoT hub metadata and security metadata, then combine them with the modified values in a new body to update the IoT hub.
Sample Request:
PUT https://management.azure.com/subscriptions/91d12660-3dec-467a-be2a-213b5544ddc0/resourceGroups/myResourceGroup/providers/Microsoft.Devices/IotHubs/testHub?api-version=2018-04-01
Request Body:
{
"name": "iot-dps-cit-hub-1",
"type": "Microsoft.Devices/IotHubs",
"location": "centraluseuap",
"tags": {},
"etag": "AAAAAAFD6M4=",
"properties": {
"operationsMonitoringProperties": {
"events": {
"None": "None",
"Connections": "None",
"DeviceTelemetry": "None",
"C2DCommands": "None",
"DeviceIdentityOperations": "None",
"FileUploadOperations": "None",
"Routes": "None"
}
},
"state": "Active",
"provisioningState": "Succeeded",
"ipFilterRules": [],
"hostName": "iot-dps-cit-hub-1.azure-devices.net",
"eventHubEndpoints": {
"events": {
"retentionTimeInDays": 1,
"partitionCount": 2,
"partitionIds": [
"0",
"1"
],
"path": "iot-dps-cit-hub-1",
"endpoint": "sb://iothub-ns-iot-dps-ci-245306-76aca8e13b.servicebus.windows.net/"
},
"operationsMonitoringEvents": {
"retentionTimeInDays": 1,
"partitionCount": 2,
"partitionIds": [
"0",
"1"
],
"path": "iot-dps-cit-hub-1-operationmonitoring",
"endpoint": "sb://iothub-ns-iot-dps-ci-245306-76aca8e13b.servicebus.windows.net/"
}
},
"routing": {
"endpoints": {
"serviceBusQueues": [],
"serviceBusTopics": [],
"eventHubs": [],
"storageContainers": []
},
"routes": [],
"fallbackRoute": {
"name": "$fallback",
"source": "DeviceMessages",
"condition": "true",
"endpointNames": [
"events"
],
"isEnabled": true
}
},
"storageEndpoints": {
"$default": {
"sasTtlAsIso8601": "PT1H",
"connectionString": "",
"containerName": ""
}
},
"messagingEndpoints": {
"fileNotifications": {
"lockDurationAsIso8601": "PT1M",
"ttlAsIso8601": "PT1H",
"maxDeliveryCount": 10
}
},
"enableFileUploadNotifications": false,
"cloudToDevice": {
"maxDeliveryCount": 10,
"defaultTtlAsIso8601": "PT1H",
"feedback": {
"lockDurationAsIso8601": "PT1M",
"ttlAsIso8601": "PT1H",
"maxDeliveryCount": 10
}
},
"features": "None"
},
"sku": {
"name": "S1",
"tier": "Standard",
"capacity": 1
}
}
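I haven't verified this end to end, but following the retrieve-merge-PUT pattern above, the call could be sketched in Python like this (subscription, resource group, hub name, token, and all route values are placeholders):

```python
import json
import urllib.request

# Placeholders: use your own subscription, resource group, hub name,
# and an Azure AD bearer token for the management endpoint.
SUBSCRIPTION = "91d12660-3dec-467a-be2a-213b5544ddc0"
RESOURCE_GROUP = "myResourceGroup"
HUB = "testHub"
TOKEN = "your-bearer-token"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Devices"
    f"/IotHubs/{HUB}?api-version=2018-04-01"
)

# In practice you would GET the hub metadata first and merge this route
# into the existing properties.routing.routes array before the PUT.
new_route = {
    "name": "toServiceBusQueue",
    "source": "DeviceMessages",
    "condition": "true",
    "endpointNames": ["my-servicebus-endpoint"],
    "isEnabled": True,
}
body = {"properties": {"routing": {"routes": [new_route]}}}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode(),
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req) would send the update (not executed here).
```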
I am indexing data into an Azure Search Index that is produced by a custom skill. This custom skill produces complex data which I want to preserve into the Azure Search Index.
Source data is coming from blob storage and I am constrained to using the REST API without a very solid argument for using the .NET SDK.
Current code
The following is a brief rundown of what I currently have. I cannot change the index's fields or the format of the data produced by the endpoint used by the custom skill.
Complex data
The following is an example of complex data produced by the custom skill (in the correct value/recordId/etc. format):
{
"field1": 0.135412,
"field2": 0.123513,
"field3": 0.243655
}
Custom skill
Here is the custom skill which creates said data:
{
"#odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
"uri": "https://myfunction.azurewebsites.com/api",
"httpHeaders": {},
"httpMethod": "POST",
"timeout": "PT3M50S",
"batchSize": 1,
"degreeOfParallelism": 5,
"name": "MySkill",
"context": "/document/mycomplex",
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "field1",
"targetName": "field1"
},
{
"name": "field2",
"targetName": "field2"
},
{
"name": "field3",
"targetName": "field3"
}
]
}
I have attempted several variations, notably using the ShaperSkill with each field as an input and the output "targetName" as "mycomplex" (with the appropriate context).
Indexer
Here is the indexer's output field mapping for the skill:
{
"sourceFieldName": "/document/mycomplex",
"targetFieldName": "mycomplex"
}
I have tried several variations, such as "sourceFieldName": "/document/mycomplex/*".
Search index
And this is the targeted index field:
{
"name": "mycomplex",
"type": "Edm.ComplexType",
"fields": [
{
"name": "field1",
"type": "Edm.Double",
"retrievable": true,
"filterable": true,
"sortable": true,
"facetable": false,
"searchable": false
},
{
"name": "field2",
"type": "Edm.Double",
"retrievable": true,
"filterable": true,
"sortable": true,
"facetable": false,
"searchable": false
},
{
"name": "field3",
"type": "Edm.Double",
"retrievable": true,
"filterable": true,
"sortable": true,
"facetable": false,
"searchable": false
}
]
}
Result
My result is usually similar to: "Could not map output field 'mycomplex' to search index. Check your indexer's 'outputFieldMappings' property."
This may be a mistake with the context of your skill. Instead of setting the context to /document/mycomplex, try setting it to /document. You can then add a ShaperSkill, with the context also set to /document and the output field being mycomplex, to generate the expected complex type shape.
Example skills:
"skills":
[
{
"#odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
"uri": "https://myfunction.azurewebsites.com/api",
"httpHeaders": {},
"httpMethod": "POST",
"timeout": "PT3M50S",
"batchSize": 1,
"degreeOfParallelism": 5,
"name": "MySkill",
"context": "/document",
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "field1",
"targetName": "field1"
},
{
"name": "field2",
"targetName": "field2"
},
{
"name": "field3",
"targetName": "field3"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Util.ShaperSkill",
"context": "/document",
"inputs": [
{
"name": "field1",
"source": "/document/field1"
},
{
"name": "field2",
"source": "/document/field2"
},
{
"name": "field3",
"source": "/document/field3"
}
],
"outputs": [
{
"name": "output",
"targetName": "mycomplex"
}
]
}
]
Please refer to the documentation on shaper skill for specifics.
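Since you're constrained to the REST API, the combined skillset can be pushed with a plain PUT. A sketch (service name, skillset name, and admin key are placeholders, and the empty skills array stands in for the two skill definitions above):

```python
import json
import urllib.request

# Placeholders: use your own search service name, skillset name and admin key.
SERVICE = "my-search-service"
SKILLSET = "my-skillset"
API_KEY = "your-admin-key"

skillset = {
    "name": SKILLSET,
    "description": "Custom WebApiSkill plus ShaperSkill producing mycomplex",
    "skills": [],  # paste the WebApiSkill and ShaperSkill definitions here
}

req = urllib.request.Request(
    f"https://{SERVICE}.search.windows.net/skillsets/{SKILLSET}"
    "?api-version=2020-06-30",
    data=json.dumps(skillset).encode(),
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req) would create or update the skillset
# (not executed here).
```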