Chef Nested Data Bag - reading nested values

I'm new to Chef and I'm having some problems getting values from data bags with nested attributes. Here is my data bag item:
{
  "id": "bareos-fd",
  "description": "Client resource of the Director itself.",
  "address": "localhost",
  "job": {
    "backup-bareos-fd": {
      "jobdefs": "DefaultJob"
    },
    "BackupCatalog": {
      "description": "Backup the catalog database (after the nightly save)",
      "jobdefs": "DefaultJob",
      "level": "Full",
      "fileset": "Catalog",
      "schedule": "WeeklyCycleAfterBackup",
      "run_before": "/usr/lib/bareos/scripts/make_catalog_backup.pl MyCatalog",
      "run_after": "/usr/lib/bareos/scripts/delete_catalog_backup",
      "bootstrap": "|/usr/bin/bsmtp -h localhost -f \\\"\\(Bareos\\) \\\" -s \\\"Bootstrap for Job %j\\\" root@localhost",
      "priority": "11"
    },
    "RestoreFiles": {
      "type": "Restore",
      "fileset": "LinuxAll",
      "storage": "File",
      "pool": "Incremental",
      "messages": "Standard",
      "where": "/tmp/bareos-restores"
    }
  }
}
How can I write a foreach loop to get the nested values (like BackupCatalog and its values)?

The object returned from data_bag_item works like a hash:
bag = data_bag_item('something', 'bareos-fd')

bag['job']['BackupCatalog'].each do |key, value|
  # ...
end
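For example, to walk every job and all of its nested attributes (a minimal sketch, assuming it runs inside a recipe and that 'something' is the actual data bag name):

bag = data_bag_item('something', 'bareos-fd')

# Each top-level job (backup-bareos-fd, BackupCatalog, RestoreFiles) maps to a hash of its settings
bag['job'].each do |job_name, attributes|
  attributes.each do |key, value|
    Chef::Log.info("#{job_name}: #{key} = #{value}")
  end
end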


GitLab: Dependency scanning report is not shown on security dashboard

I am trying to create my own security scanner which will check dependencies. To test the functionality, I created a "mock scanner" which downloads a file from webhook.site and saves it as an artifact to be uploaded to the server.
The artifact is uploaded successfully and in the CI output I can see a 201 response code, but for some reason it does not show up on the security dashboard.
What am I doing wrong?
Thank you!
The CI job looks like this:
mysec_dependency_scanning:
  stage: test
  script:
    - curl https://webhook.site/XXXX -o gl-dependency-scanning-report.json
    - sleep 3
  allow_failure: true
  artifacts:
    reports:
      dependency_scanning: gl-dependency-scanning-report.json
The content of the JSON file is from the example provided by GitLab and is as follows:
{
"version": "2.0",
"vulnerabilities": [
{
"id": "51e83874-0ff6-4677-a4c5-249060554eae",
"category": "dependency_scanning",
"name": "alik alik",
"message": "Regular Expression Denial of Service in debug",
"description": "alik to regular expression denial of service when untrusted user input is passed into the `o` formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue.",
"severity": "Unknown",
"solution": "Upgrade to latest versions.",
"scanner": {
"id": "dadada",
"name": "dadada"
},
"location": {
"file": "yarn.lock",
"dependency": {
"package": {
"name": "debug"
},
"version": "1.0.5"
}
},
"identifiers": [
{
"type": "gemnasium",
"name": "Gemnasium-37283ed4-0380-40d7-ada7-2d994afcc62a",
"value": "37283ed4-0380-40d7-ada7-2d994afcc62a",
"url": "https://deps.sec.gitlab.com/packages/npm/debug/versions/1.0.5/advisories"
}
],
"links": [
{
"url": "https://nodesecurity.io/advisories/534"
},
{
"url": "https://github.com/visionmedia/debug/issues/501"
},
{
"url": "https://github.com/visionmedia/debug/pull/504"
}
]
},
{
"id": "5d681b13-e8fa-4668-957e-8d88f932ddc7",
"category": "dependency_scanning",
"name": "Authentication bypass via incorrect DOM traversal and canonicalization",
"message": "Authentication bypass via incorrect DOM traversal and canonicalization in saml2-js",
"description": "Some XML DOM traversal and canonicalization APIs may be inconsistent in handling of comments within XML nodes. Incorrect use of these APIs by some SAML libraries results in incorrect parsing of the inner text of XML nodes such that any inner text after the comment is lost prior to cryptographically signing the SAML message. Text after the comment, therefore, has no impact on the signature on the SAML message.\r\n\r\nA remote attacker can modify SAML content for a SAML service provider without invalidating the cryptographic signature, which may allow attackers to bypass primary authentication for the affected SAML service provider.",
"severity": "Unknown",
"solution": "Upgrade to fixed version.\r\n",
"scanner": {
"id": "dadada",
"name": "dadada"
},
"location": {
"file": "yarn.lock",
"dependency": {
"package": {
"name": "saml2-js"
},
"version": "1.5.0"
}
},
"identifiers": [
{
"type": "gemnasium",
"name": "Gemnasium-9952e574-7b5b-46fa-a270-aeb694198a98",
"value": "9952e574-7b5b-46fa-a270-aeb694198a98",
"url": "https://deps.sec.gitlab.com/packages/npm/saml2-js/versions/1.5.0/advisories"
},
{
"type": "cve",
"name": "CVE-2017-11429",
"value": "CVE-2017-11429",
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11429"
}
],
"links": [
{
"url": "https://github.com/Clever/saml2/commit/3546cb61fd541f219abda364c5b919633609ef3d#diff-af730f9f738de1c9ad87596df3f6de84R279"
},
{
"url": "https://github.com/Clever/saml2/issues/127"
},
{
"url": "https://www.kb.cert.org/vuls/id/475445"
}
]
}
],
"remediations": [
{
"fixes": [
{
"id": "5d681b13-e8fa-4668-957e-8d88f932ddc7",
}
],
"summary": "Upgrade saml2-js",
"diff": "ZGlmZiAtLWdpdCBhL...OR0d1ZUc2THh3UT09Cg==" // some content is omitted for brevity
}
]
}
I was able to fix the problem; the issue was an invalid JSON format (for instance, the trailing comma after the "id" field and the // comment in the remediations section above are not valid JSON).
It took a lot of trial and error, but I was able to create a working template for a dependency scanning report:
{
"version": "3.0.0",
"vulnerabilities": [
{
"id": "dfa1f7f3d56db6e1c3451a232de42f153e0335611de6f0344443d84e448ee2cf",
"category": "dddda",
"name": "dddda",
"message": "ddda",
"description": "dddda lack of validation in `index.js`.",
"cve": "dada",
"severity": "Critical",
"solution": "Upgrade to version 2.0.5 or above.",
"scanner": {
"id": "lalal",
"name": "Code_Analyzer"
},
"location": {
"file": "yarn.lock",
"dependency": {
"iid": 447,
"package": {
"name": "copy-props"
},
"version": "2.0.4"
}
},
"identifiers": [
{
"type": "dada",
"name": "dada-e9e12690-2e4d-4251-bef0-7357ddc05881",
"value": "e9e57890-5e4d-4832-bef2-7337ddc05889",
"url": "https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/blob/master/npm/copy-props/CVE-2219-28503.yml"
},
{
"type": "cve",
"name": "CVE-2237-28503",
"value": "CVE-2237-28503",
"url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2237-28503"
}
],
"links": [
{
"url": "https://nvd.nist.gov/vuln/detail/CVE-2237-28503"
}
]
}
],
"remediations": [],
"dependency_files": [
{
"path": "yarn.lock",
"package_manager": "yarn",
"dependencies": [
{
"iid": 447,
"dependency_path": [
{
"iid": 708
},
{
"iid": 707
}
],
"package": {
"name": "copy-props"
},
"version": "2.0.4"
}
]
}
],
"scan": {
"scanner": {
"id": "lalal",
"name": "Code_Analyzer",
"url": "https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium",
"vendor": {
"name": "lalal"
},
"version": "2.29.5"
},
"type": "dependency_scanning",
"start_time": "2021-05-03T06:47:29",
"end_time": "2021-05-03T06:47:30",
"status": "success"
}
}
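Since the root cause turned out to be malformed JSON, it can help to validate the report before uploading it. A sketch of the original job with such a check added (assuming python3 is available in the job image; any JSON validator would do):

mysec_dependency_scanning:
  stage: test
  script:
    - curl https://webhook.site/XXXX -o gl-dependency-scanning-report.json
    # Fail fast if the downloaded report is not valid JSON
    - python3 -m json.tool gl-dependency-scanning-report.json > /dev/null
  allow_failure: true
  artifacts:
    reports:
      dependency_scanning: gl-dependency-scanning-report.json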

Node.js: Sending Cucumber JSON results to Jira's Xray - jira-client-xray gives HTTP 405 error

I've been struggling for five days with sending test results to Jira. Our Jira has the latest Xray plugin.
I use Node.js for test automation. I took the easiest route to try Xray's capability of ingesting test automation results: the 'jira-client-xray' dependency plus a Cucumber test.
In Jira I have a Test Execution (id KELLO-2426), which includes 1 Test (id KELLO-2427) with Cucumber steps - Jira_Xray_Auto_Test.PNG.
I have 1 feature file - Feature_file.PNG.
After running the test/feature I got a JSON file with the results:
[
{
"keyword": "Feature",
"description": "",
"line": 1,
"name": "Sample Snippets test",
"uri": "Can not be determined",
"tags": [],
"elements": [
{
"keyword": "Scenario",
"description": "",
"name": "open URL",
"tags": [
{
"name": "#KELLO:2426",
"location": {
"line": 6,
"column": 5
}
}
],
"id": "sample-snippets-test;open-url",
"steps": [
{
"arguments": [],
"keyword": "Before",
"name": "Hook",
"result": {
"status": "passed",
"duration": 1301000000
},
"line": "",
"match": {
"location": "can not be determined with webdriver.io"
}
},
{
"arguments": [],
"keyword": "Given",
"name": "the page url is not \"http://webdriverjs.christian-bromann.com/\"",
"result": {
"status": "passed",
"duration": 257000000
},
"line": 8,
"match": {
"location": "can not be determined with webdriver.io"
}
},
{
"arguments": [],
"keyword": "And",
"name": "I open the url \"http://webdriverjs.christian-bromann.com/\"",
"result": {
"status": "passed",
"duration": 1221000000
},
"line": 9,
"match": {
"location": "can not be determined with webdriver.io"
}
},
{
"arguments": [],
"keyword": "Then",
"name": "I expect that the url is \"http://webdriverjs.christian-bromann.com/\"",
"result": {
"status": "passed",
"duration": 244000000
},
"line": 10,
"match": {
"location": "can not be determined with webdriver.io"
}
},
{
"arguments": [],
"keyword": "And",
"name": "I expect that the url is not \"http://google.com\"",
"result": {
"status": "passed",
"duration": 205000000
},
"line": 11,
"match": {
"location": "can not be determined with webdriver.io"
}
},
{
"arguments": [],
"keyword": "After",
"name": "Hook",
"result": {
"status": "passed",
"duration": 186000000
},
"line": "",
"match": {
"location": "can not be determined with webdriver.io"
}
}
]
}
],
"id": "sample-snippets-test",
"metadata": {
"browser": {
"name": "chrome",
"version": "72.0.3626.121"
},
"device": "Device name not known",
"platform": {
"name": "Platform name not known",
"version": "Version not known"
}
}
}
]
Next, I have a 'jira.client.xray.js' file, which sends the results:
var JiraApiWithXray = require('jira-client-xray');

// Initialize
var jiraXray = new JiraApiWithXray({
  strictSSL: false,
  protocol: 'https',
  username: 'your_username',
  password: 'your_password',
  host: 'your_host',
  apiVersion: '1.0' // Check version from DevTools -> Network tab
});

const testExecResults = './results/sample-snippets-test.1574077621820.json';

try {
  jiraXray.importExecResultsFromCucumber(testExecResults).then(function (testExecIssueId) {});
} catch (ex) {
  console.log('Error:');
  console.log(ex);
}
Initiating delivery of the test results with the command node jira.client.xray.js from the project's root directory gives me the following error:
UnhandledPromiseRejectionWarning: StatusCodeError: 405 - undefined
What is wrong? Any advice would be appreciated.
Yours sincerely,
JS comrade
This page describes the error you are receiving: https://restfulapi.net/http-status-codes/
A 405 (Method Not Allowed) would lead me to believe it is a problem with the request being sent. My initial thought is that testExecResults is in the wrong structure, based on the documentation: https://www.npmjs.com/package/jira-client-xray
Please post the request being sent.
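If the library expects the parsed Cucumber results rather than a path to the file (an assumption based on the linked npm documentation), a sketch of reading and parsing the report first, with a .catch() so the rejection is not left unhandled, would be:

var fs = require('fs');
var JiraApiWithXray = require('jira-client-xray');

var jiraXray = new JiraApiWithXray({
  strictSSL: false,
  protocol: 'https',
  username: 'your_username',
  password: 'your_password',
  host: 'your_host',
  apiVersion: '1.0'
});

// Read and parse the Cucumber JSON report instead of passing the path string
var testExecResults = JSON.parse(
  fs.readFileSync('./results/sample-snippets-test.1574077621820.json', 'utf8')
);

jiraXray.importExecResultsFromCucumber(testExecResults)
  .then(function (testExecIssueId) {
    console.log('Imported test execution:', testExecIssueId);
  })
  .catch(function (err) {
    // try/catch around .then() does not catch promise rejections
    console.error('Import failed:', err);
  });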

Missing Subscription Key field in Swagger API connector (through Azure API Management) in Logic App

I have created a REST API with a Swagger/OpenAPI specification which I would like to consume through an Azure API Management tenant in a Logic App.
When I download the specification it looks like this:
{
"swagger": "2.0",
"info": {
"title": "Leasing",
"version": "1.0"
},
"host": "ENDPOINT.azure-api.net",
"basePath": "/leasing",
"schemes": [
"http",
"https"
],
"securityDefinitions": {
"apiKeyHeader": {
"type": "apiKey",
"name": "Ocp-Apim-Subscription-Key",
"in": "header"
},
"apiKeyQuery": {
"type": "apiKey",
"name": "subscription-key",
"in": "query"
}
},
"security": [
{
"apiKeyHeader": []
},
{
"apiKeyQuery": []
}
],
"paths": {
"/{Brand}/groups": {
"get": {
"description": "Get a list of leasing groups on a brand",
"operationId": "GetGroups",
"parameters": [
{
"name": "Brand",
"in": "path",
"description": "Selection of possible brands",
"required": true,
"type": "string",
"enum": [
"Volkswagen",
"Audi",
"Seat",
"Skoda",
"VolkswagenErhverv",
"Porsche",
"Ducati"
]
}
],
"responses": {
"200": {
"description": "Returns a list of leasing groups",
"schema": {
"$ref": "#/definitions/GroupArray"
}
},
"400": {
"description": "If the brand is not valid",
"schema": {
"$ref": "#/definitions/Error"
}
}
},
"produces": [
"application/json"
]
}
}
},
"definitions": {
"Group": {
"type": "object",
"properties": {
"id": {
"format": "int32",
"type": "integer"
},
"name": {
"type": "string"
},
"description": {
"type": "string"
},
"leasingModelCount": {
"format": "int32",
"type": "integer"
},
"lowestMonthlyFee": {
"format": "int32",
"type": "integer"
}
}
},
"Error": {
"type": "object",
"properties": {
"code": {
"enum": [
"NotValidBrand",
"NotValidGroupId"
],
"type": "string",
"x-ms-enum": {
"name": "ErrorCode",
"modelAsString": true
}
},
"message": {
"type": "string"
}
}
},
"GroupArray": {
"type": "array",
"items": {
"$ref": "#/definitions/Group"
}
}
}
}
When I add this in a Logic App with the HTTP + Swagger connector, I only get to define the {Brand} path input, but not the various ways of passing the subscription key (header or query) defined in securityDefinitions.
The whole securityDefinitions and security sections are generated automatically by the Azure API Management service, but they are not recognized in the Logic App.
See image of missing subscription key field:
What am I doing wrong?
Update
I have tried the following:
Using the 'Authentication' field (but this field is limited to certain types of auth flows: https://learn.microsoft.com/en-us/azure/connectors/connectors-native-http#authentication)
Changing the Logic App 'HTTP + Swagger' action in code view to add the header parameter, but this converts the action to a simple 'HTTP' action and therefore loses the automatic schema generation from Swagger.
I think you need to specify this in the Authentication field in JSON format. Something like:
{
  "apiKeyHeader": "your Ocp-Apim-Subscription-Key",
  "apiKeyQuery": "your subscription key"
}
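For reference, the two schemes in securityDefinitions above correspond to requests like these (the key value is a placeholder):

GET https://ENDPOINT.azure-api.net/leasing/Audi/groups HTTP/1.1
Ocp-Apim-Subscription-Key: <your-subscription-key>

GET https://ENDPOINT.azure-api.net/leasing/Audi/groups?subscription-key=<your-subscription-key> HTTP/1.1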

How to get Private IP of HDI cluster using ARM template

I have created template1, which deploys the HDI cluster, and template2, which deploys the Azure VM, separately.
Now I want to get the head node's private IP from the cluster and pass it to the Azure VM template for processing, using an ARM template.
How can I do so?
Considering this is the object you get back from the HDInsight cluster:
{
"id": "xxx",
"name": "xxx",
"type": "Microsoft.HDInsight/clusters",
"location": "East US",
"etag": "xxx",
"tags": null,
"properties": {
"clusterVersion": "3.5.1000.0",
"osType": "Linux",
"clusterDefinition": {
"blueprint": "https://blueprints.azurehdinsight.net/spark-3.5.1000.0.9865375.json",
"kind": "SPARK",
"componentVersion": {
"Spark": "1.6"
}
},
"computeProfile": {
"roles": [
{
"name": "headnode",
"targetInstanceCount": 2,
"hardwareProfile": {
"vmSize": "ExtraLarge"
},
"osProfile": {
"linuxOperatingSystemProfile": {
"username": "sshuser"
}
}
},
{
"name": "workernode",
"targetInstanceCount": 1,
"hardwareProfile": {
"vmSize": "Large"
},
"osProfile": {
"linuxOperatingSystemProfile": {
"username": "sshuser"
}
}
},
{
"name": "zookeepernode",
"targetInstanceCount": 3,
"hardwareProfile": {
"vmSize": "Medium"
},
"osProfile": {
"linuxOperatingSystemProfile": {
"username": "sshuser"
}
}
}
]
},
"provisioningState": "Succeeded",
"clusterState": "Running",
"createdDate": "2017-04-11T09:07:44.68",
"quotaInfo": {
"coresUsed": 20
},
"connectivityEndpoints": [
{
"name": "SSH",
"protocol": "TCP",
"location": "xxx.azurehdinsight.net",
"port": 22
},
{
"name": "HTTPS",
"protocol": "TCP",
"location": "xxx.azurehdinsight.net",
"port": 443
}
],
"tier": "standard"
}
}
I'm guessing this is the best output you can get, so you can use something like:
"outputs": {
"test": {
"type": "Object",
"value": "[reference(parameters('clusterName'),'2015-03-01-preview').connectivityEndpoints[0].location]"
}
}
This will get you an output of xxx.azurehdinsight.net.
You can then either create a new deployment with this data or (as mentioned) add the RHEL VM to the same template, make it dependsOn the HDInsight cluster deployment, and reference the same expression as an input to the VM extension, as in the sketch below.
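As a sketch of that second option, the same reference expression can feed a Linux Custom Script extension on the VM in the same template (the parameter names, API versions, and configure.sh script are placeholders, not taken from the original templates):

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/passClusterEndpoint')]",
  "apiVersion": "2017-03-30",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.HDInsight/clusters', parameters('clusterName'))]",
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "commandToExecute": "[concat('sh configure.sh ', reference(parameters('clusterName'), '2015-03-01-preview').connectivityEndpoints[0].location)]"
    }
  }
}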

How to flatten JSON with nested structure before passing to @csv filter

I am trying to parse some JSON that is the output of an AWS CLI command to display snapshots. I want to load this data into a spreadsheet to be able to filter, group, and audit it.
I've been stumped on how to flatten the nested Tags array into the parent objects so that the intermediate result can then be passed to the @csv filter.
Here is the example:
Initial input JSON:
{
"Snapshots": [
{
"SnapshotId": "snap-fff",
"StartTime": "2014-04-01T06:00:13.000Z",
"VolumeId": "vol-fff",
"VolumeSize": 50,
"Description": "desc1",
"Tags": [
{
"Value": "/dev/sdf",
"Key": "device"
},
{
"Value": "a name",
"Key": "Name"
},
{
"Value": "Internal",
"Key": "Customer"
},
{
"Value": "Demo",
"Key": "Environment"
},
{
"Value": "Brand 1",
"Key": "Branding"
},
{
"Value": "i-fff",
"Key": "instance_id"
}
]
},
{
"SnapshotId": "snap-ccc",
"StartTime": "2014-07-01T05:59:14.000Z",
"VolumeId": "vol-ccc",
"VolumeSize": 8,
"Description": "B Desc",
"Tags": [
{
"Value": "/dev/sda1",
"Key": "device"
},
{
"Value": "External",
"Key": "Customer"
},
{
"Value": "Production",
"Key": "Environment"
},
{
"Value": "i-ccc",
"Key": "instance_id"
},
{
"Value": "B Brand",
"Key": "Branding"
},
{
"Value": "B Name",
"Key": "Name"
},
{
"Value": "AnotherValue",
"Key": "AnotherKey"
}
]
}
]
}
Desired Intermediate:
[
{
"SnapshotId": "snap-fff",
"StartTime": "2014-04-01T06:00:13.000Z",
"VolumeId": "vol-fff",
"VolumeSize": 50,
"Description": "desc1",
"device": "/dev/sdf",
"Name": "a name",
"Customer": "Internal",
"Environment": "Demo",
"Branding": "Brand 1",
"instance_id": "i-fff",
}
{
"SnapshotId": "snap-ccc",
"StartTime": "2014-07-01T05:59:14.000Z",
"VolumeId": "vol-ccc",
"VolumeSize": 8,
"Description": "B Desc",
"device": "/dev/sda1",
"Customer": "External",
"Environment": "Production",
"instance_id": "i-ccc",
"Branding": "B Brand",
"Name": "B Name",
"AnotherKey": "AnotherValue",
}
]
Final Output:
"SnapshotId","StartTime","VolumeId","VolumeSize","Description","device","Name","Customer","Environment","Branding","instance_id","AnotherKey"
"snap-fff","2014-04-01T06:00:13.000Z","vol-fff",50,"desc1","/dev/sdf","a name","Internal","Demo","Brand 1","i-fff",""
"snap-ccc","2014-07-01T05:59:14.000Z","vol-ccc",8,"B Desc","/dev/sda1","External","Production","i-ccc","B Brand","B Name","AnotherValue"
The following jq filter produces the requested intermediate output:
.Snapshots[] | (. + (.Tags|from_entries)) | del(.Tags)
Explanation: from_entries converts the array of key-value objects to an object with the given key-value pairs. This is added to the target object, and finally the "Tags" key is removed.
If the "target" object has a key that also appears in the "Tags" array, then the above filter will favor the value in the "Tags" array. You may accordingly wish to change the order of the operands of "+", or resolve the conflict in some other way.
