Given a massive shapefile of world countries.
Given I am using Topojson 1.0, installed locally via npm install topojson#1.0.
Given I have CSV data for many countries, such as:
FR,144145
EN,5643
DE,25667
ES,3567
US,83466
CN,34576
JA,69353
Given I want to bind that data to the TopoJSON + D3.js generated SVG.
Thus, I want a light yet precise world-id.topojson file with the right properties, so as to ease the CSV-SVG data binding via matching ids.
So, I go for:
# download GADM
curl \
-L -C - 'http://biogeo.ucdavis.edu/data/gadm2.8/gadm28_levels.shp.zip' \
-o ./gadm28_levels.shp.zip
unzip -n ./gadm28_levels.shp.zip -d ./
# Process data
node ./node_modules/topojson/bin/topojson -q 1e4 \
-p name=NAME_ENGLI,iso=ISO,iso2=ISO2 \
-o world-all.json \
-- ./gadm28_adm0.shp
But it fails with Aborted (core dumped). How should I proceed?
EDIT: solved. Output (Natural Earth): a clean, light world-id.json, 579.9 KB, with ISO 3166-1 alpha-2 codes, country names, and alpha-3 codes.
Currently via Topojson#1.0 (a version using Topojson#3.0 is welcome!).
Properties filtering
Add -p alone to keep all properties and their values, omit it to drop them all, or use -p ISO to carry only the pair "ISO": "FRA" into your topojson. See the Topojson v1 API.
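For instance (a sketch using hypothetical in.shp/out.json file names):
# keep all properties and their values
node ./node_modules/topojson/bin/topojson -p -o out.json -- in.shp
# drop all properties
node ./node_modules/topojson/bin/topojson -o out.json -- in.shp
# keep only the ISO property, yielding "ISO": "FRA"
node ./node_modules/topojson/bin/topojson -p ISO -o out.json -- in.shp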
What we want
Data sample / visual:
{
"type": "MultiPolygon",
"arcs": [
[ [4347,4348,4349] ],
[ [4350,4350,4351,4352,4353,4354] ],
[ [4355,4356,4357,4358,4358,4358,4359,4360,4361,-4350,4362,4363,-3047,-1961,-1960,-598], [4364], [4365] ]
],
"properties": {
"name": "Italy",
"iso2": "IT",
"iso3": "ITA"
}
},
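With ids like these in place, the CSV-SVG binding could look like the following sketch (assumptions: D3 v5+ and topojson-client are loaded, the TopoJSON object is named countries as in the commands below, and values.csv is a hypothetical file with iso2,value columns):
// Sketch: shade countries that have a CSV value, matched on the iso2 property.
Promise.all([
  d3.json("world-id.json"),
  d3.csv("values.csv") // hypothetical columns: iso2,value
]).then(([world, rows]) => {
  const valueByIso2 = new Map(rows.map(d => [d.iso2, +d.value]));
  const countries = topojson.feature(world, world.objects.countries).features;
  d3.select("svg").selectAll("path")
    .data(countries)
    .enter().append("path")
    .attr("d", d3.geoPath(d3.geoNaturalEarth1()))
    .attr("fill", d => valueByIso2.has(d.properties.iso2) ? "steelblue" : "#ccc");
});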
GADM data
# Install topojson v.1 locally
npm install topojson#1.0
# Run topojson
node --max_old_space_size=8000 ./node_modules/topojson/bin/topojson \
-q 1e4 \
-p name=NAME_ENGLI,iso=ISO,iso2=ISO2 \
-o world-id.json \
-- countries=./gadm28_adm0.shp
The shapefile's data is as follows:
{
"properties": {
"OBJECTID": 79,
"ID_0": 79,
"ISO": "FRA",
"NAME_ENGLI": "France",
"NAME_ISO": "FRANCE",
"NAME_FAO": "France",
"NAME_LOCAL": "France",
"NAME_OBSOL": null,
"NAME_VARIA": null,
"NAME_NONLA": null,
"NAME_FRENC": "France",
"NAME_SPANI": "Francia",
"NAME_RUSSI": "ФÑанÑиÑ",
"NAME_ARABI": "ÙرÙسا",
"NAME_CHINE": "æ³å½",
"WASPARTOF": null,
"CONTAINS": null,
"SOVEREIGN": "France",
"ISO2": "FR",
"WWW": null,
"FIPS": "FR",
"ISON": 250,
"VALIDFR": "1944",
"VALIDTO": "Present",
"POP2000": 59237668,
"SQKM": 546728.875,
"POPSQKM": 108.349258122,
"UNREGION1": "Western Europe",
"UNREGION2": "Europe",
"DEVELOPING": 2,
"CIS": 0,
"Transition": 0,
"OECD": 1,
"WBREGION": null,
"WBINCOME": "High income: OECD",
"WBDEBT": "Debt not classified",
"WBOTHER": "EMU",
"CEEAC": 0,
"CEMAC": 0,
"CEPLG": 0,
"COMESA": 0,
"EAC": 0,
"ECOWAS": 0,
"IGAD": 0,
"IOC": 0,
"MRU": 0,
"SACU": 0,
"UEMOA": 0,
"UMA": 0,
"PALOP": 0,
"PARTA": 0,
"CACM": 0,
"EurAsEC": 0,
"Agadir": 0,
"SAARC": 0,
"ASEAN": 0,
"NAFTA": 0,
"GCC": 0,
"CSN": 0,
"CARICOM": 0,
"EU": 1,
"CAN": 0,
"ACP": 0,
"Landlocked": 0,
"AOSIS": 0,
"SIDS": 0,
"Islands": 0,
"LDC": 0,
"Shape_Leng": 130.51585694,
"Shape_Area": 64.5133204963
}
}
Natural Earth Data
Download: 1.3 GB.
Input: the actual source shapefile is just 5 MB, so it doesn't crash due to size.
Output: an elegant world-id.json, 579.9 KB.
Command
# download NaturalEarthData
curl \
-L -C - 'https://github.com/nvkelso/natural-earth-vector/archive/v4.0.0.zip' \
-o ./ne.shp.zip
unzip -n ./ne.shp.zip -d ./
# Install topojson v.1 locally
npm install topojson#1.0
# Run topojson
node ./node_modules/topojson/bin/topojson -q 1e3 --bbox \
-p name=ADMIN,iso2=WB_A2,iso3=WB_A3 \
-o world-id.json \
-- countries=./natural-earth-vector-4.0.0/10m_cultural/ne_10m_admin_0_countries.shp
Note: NE v4.0 data is:
{
"properties": {
"scalerank": 0,
"featurecla": "Admin-0 country",
"LABELRANK": 2,
"SOVEREIGNT": "France",
"SOV_A3": "FR1",
"ADM0_DIF": 1,
"LEVEL": 2,
"TYPE": "Country",
"ADMIN": "France",
"ADM0_A3": "FRA",
"GEOU_DIF": 0,
"GEOUNIT": "France",
"GU_A3": "FRA",
"SU_DIF": 0,
"SUBUNIT": "France",
"SU_A3": "FRA",
"BRK_DIFF": 0,
"NAME": "France",
"NAME_LONG": "France",
"BRK_A3": "FRA",
"BRK_NAME": "France",
"BRK_GROUP": null,
"ABBREV": "Fr.",
"POSTAL": "F",
"FORMAL_EN": "French Republic",
"FORMAL_FR": null,
"NAME_CIAWF": "France",
"NOTE_ADM0": null,
"NOTE_BRK": null,
"NAME_SORT": "France",
"NAME_ALT": null,
"MAPCOLOR7": 7,
"MAPCOLOR8": 5,
"MAPCOLOR9": 9,
"MAPCOLOR13": 11,
"POP_EST": 67106161,
"POP_RANK": 16,
"GDP_MD_EST": 2699000,
"POP_YEAR": 2017,
"LASTCENSUS": -99,
"GDP_YEAR": 2016,
"ECONOMY": "1. Developed region: G7",
"INCOME_GRP": "1. High income: OECD",
"WIKIPEDIA": -99,
"FIPS_10_": "FR",
"ISO_A2": "-99",
"ISO_A3": "-99",
"ISO_A3_EH": "-99",
"ISO_N3": "250",
"UN_A3": "250",
"WB_A2": "FR",
"WB_A3": "FRA",
"WOE_ID": -90,
"WOE_ID_EH": 23424819,
"WOE_NOTE": "Includes only Metropolitan France (including Corsica)",
"ADM0_A3_IS": "FRA",
"ADM0_A3_US": "FRA",
"ADM0_A3_UN": -99,
"ADM0_A3_WB": -99,
"CONTINENT": "Europe",
"REGION_UN": "Europe",
"SUBREGION": "Western Europe",
"REGION_WB": "Europe & Central Asia",
"NAME_LEN": 6,
"LONG_LEN": 6,
"ABBREV_LEN": 3,
"TINY": -99,
"HOMEPART": 1,
"MIN_ZOOM": 0,
"MIN_LABEL": 1.7,
"MAX_LABEL": 6.7
}
}
Related
I am looking to create a table in memory. I am making a web request to the TDA API and getting a JSON-formatted return covering a number of strike prices for a stock/ticker. I am unable to get at the individual strike price details, which are in a JSON array. Below is the output I am getting.
What I am trying to accomplish is taking the data from each strike (e.g. 13.5, 14.0, ...) and creating an in-memory columnar table/array so that I can use the data to evaluate/assess potential trades/executions; one possible approach is sketched after the JSON below.
Any help would be greatly appreciated!
{
"symbol": "F",
"status": "SUCCESS",
"underlying": null,
"strategy": "SINGLE",
"interval": 0,
"isDelayed": true,
"isIndex": false,
"interestRate": 0.1,
"underlyingPrice": 13.08,
"volatility": 29,
"daysToExpiration": 0,
"numberOfContracts": 19,
"putExpDateMap": {},
"callExpDateMap": {
"2021-09-03:2": {
"13.5": [
{
"putCall": "CALL",
"symbol": "F_090321C13.5",
"description": "F Sep 3 2021 13.5 Call (Weekly)",
"exchangeName": "OPR",
"bid": 0.03,
"ask": 0.04,
"last": 0.04,
"mark": 0.04,
"bidSize": 207,
"askSize": 655,
"bidAskSize": "207X655",
"lastSize": 0,
"highPrice": 0.05,
"lowPrice": 0.02,
"openPrice": 0,
"closePrice": 0.03,
"totalVolume": 47477,
"tradeDate": null,
"tradeTimeInLong": 1630526399010,
"quoteTimeInLong": 1630526399727,
"netChange": 0.01,
"volatility": 35.069,
"delta": 0.169,
"gamma": 0.64,
"theta": -0.019,
"vega": 0.003,
"rho": 0,
"openInterest": 73416,
"timeValue": 0.04,
"theoreticalOptionValue": 0.035,
"theoreticalVolatility": 29,
"optionDeliverablesList": null,
"strikePrice": 13.5,
"expirationDate": 1630699200000,
"daysToExpiration": 2,
"expirationType": "S",
"lastTradingDay": 1630713600000,
"multiplier": 100,
"settlementType": " ",
"deliverableNote": "",
"isIndexOption": null,
"percentChange": 20.12,
"markChange": 0,
"markPercentChange": 5.11,
"inTheMoney": false,
"mini": false,
"nonStandard": false
}
],
"14.0": [
{
"putCall": "CALL",
"symbol": "F_090321C14",
"description": "F Sep 3 2021 14 Call (Weekly)",
"exchangeName": "OPR",
"bid": 0.01,
"ask": 0.02,
"last": 0.01,
"mark": 0.02,
"bidSize": 66,
"askSize": 1468,
"bidAskSize": "66X1468",
"lastSize": 0,
"highPrice": 0.02,
"lowPrice": 0.01,
"openPrice": 0,
"closePrice": 0.01,
"totalVolume": 3453,
"tradeDate": null,
"tradeTimeInLong": 1630526389748,
"quoteTimeInLong": 1630526395218,
"netChange": 0,
"volatility": 49.446,
"delta": 0.063,
"gamma": 0.224,
"theta": -0.013,
"vega": 0.001,
"rho": 0,
"openInterest": 31282,
"timeValue": 0.01,
"theoreticalOptionValue": 0.015,
"theoreticalVolatility": 29,
"optionDeliverablesList": null,
"strikePrice": 14,
"expirationDate": 1630699200000,
"daysToExpiration": 2,
"expirationType": "S",
"lastTradingDay": 1630713600000,
"multiplier": 100,
"settlementType": " ",
"deliverableNote": "",
"isIndexOption": null,
"percentChange": -33.33,
"markChange": 0,
"markPercentChange": 0,
"inTheMoney": false,
"mini": false,
"nonStandard": false
}
],
"14.5": [
{
"putCall": "CALL",
"symbol": "F_090321C14.5",
"description": "F Sep 3 2021 14.5 Call (Weekly)",
"exchangeName": "OPR",
"bid": 0,
"ask": 0.01,
"last": 0.01,
"mark": 0.01,
"bidSize": 0,
"askSize": 386,
"bidAskSize": "0X386",
"lastSize": 0,
"highPrice": 0.01,
"lowPrice": 0.01,
"openPrice": 0,
"closePrice": 0.01,
"totalVolume": 70,
"tradeDate": null,
"tradeTimeInLong": 1630520505930,
"quoteTimeInLong": 1630526227626,
"netChange": 0,
"volatility": 60.163,
"delta": 0.027,
"gamma": 0.092,
"theta": -0.008,
"vega": 0.001,
"rho": 0,
"openInterest": 5529,
"timeValue": 0.01,
"theoreticalOptionValue": 0.007,
"theoreticalVolatility": 29,
"optionDeliverablesList": null,
"strikePrice": 14.5,
"expirationDate": 1630699200000,
"daysToExpiration": 2,
"expirationType": "S",
"lastTradingDay": 1630713600000,
"multiplier": 100,
"settlementType": " ",
"deliverableNote": "",
"isIndexOption": null,
"percentChange": 40.85,
"markChange": 0,
"markPercentChange": -4.23,
"inTheMoney": false,
"mini": false,
"nonStandard": false
}
],
"15.0": [
{
"putCall": "CALL",
"symbol": "F_090321C15",
"description": "F Sep 3 2021 15 Call (Weekly)",
"exchangeName": "OPR",
"bid": 0,
"ask": 0.01,
"last": 0.01,
"mark": 0,
"bidSize": 0,
"askSize": 594,
"bidAskSize": "0X594",
"lastSize": 0,
"highPrice": 0.01,
"lowPrice": 0.01,
"openPrice": 0,
"closePrice": 0,
"totalVolume": 184,
"tradeDate": null,
"tradeTimeInLong": 1630524777537,
"quoteTimeInLong": 1630526370053,
"netChange": 0.01,
"volatility": 68.916,
"delta": 0.012,
"gamma": 0.041,
"theta": -0.005,
"vega": 0,
"rho": 0,
"openInterest": 4867,
"timeValue": 0.01,
"theoreticalOptionValue": 0.003,
"theoreticalVolatility": 29,
"optionDeliverablesList": null,
"strikePrice": 15,
"expirationDate": 1630699200000,
"daysToExpiration": 2,
"expirationType": "S",
"lastTradingDay": 1630713600000,
"multiplier": 100,
"settlementType": " ",
"deliverableNote": "",
"isIndexOption": null,
"percentChange": 185.71,
"markChange": 0,
"markPercentChange": -8.57,
"inTheMoney": false,
"mini": false,
"nonStandard": false
}
]
}
}
}
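One way to build that flat table (a sketch in Node.js; chain stands for the parsed response above, and the column selection is illustrative):
// Sketch: flatten the expiry -> strike -> [contract] nesting into flat row objects.
const rows = [];
for (const [expiry, strikes] of Object.entries(chain.callExpDateMap)) {
  for (const [strike, contracts] of Object.entries(strikes)) {
    for (const c of contracts) {
      rows.push({
        expiry,                     // e.g. "2021-09-03:2"
        strike: parseFloat(strike), // e.g. 13.5
        bid: c.bid,
        ask: c.ask,
        last: c.last,
        delta: c.delta,
        openInterest: c.openInterest
      });
    }
  }
}
console.table(rows); // quick columnar view for evaluating trades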
I'm trying to implement a prefix search using a field analyzed with an edge ngram analyzer.
However, whenever I do a search, it returns similar matches that do not contain the searched term.
The following query
POST /indexes/resources/docs/search?api-version=2020-06-30
{
"queryType": "full",
"searchMode": "all",
"search": "short_text_prefix:7024032"
}
Returns
{
"#odata.context": ".../indexes('resources')/$metadata#docs(*)",
"#search.nextPageParameters": {
"queryType": "full",
"searchMode": "all",
"search": "short_text_prefix:7024032",
"skip": 50
},
"value": [
{
"#search.score": 4.669537,
"short_text_prefix": "7024032 "
},
{
"#search.score": 4.6333756,
"short_text_prefix": "7024030 "
},
{
"#search.score": 4.6333756,
"short_text_prefix": "7024034 "
},
{
"#search.score": 4.6333756,
"short_text_prefix": "7024031 "
},
{
"#search.score": 4.6319494,
"short_text_prefix": "7024033 "
},
... omitted for brevity ...
],
"#odata.nextLink": ".../indexes('resources')/docs/search.post.search?api-version=2020-06-30"
}
This includes a bunch of documents which almost match my term, with the "correct" document on top with the highest score.
The custom analyzer tokenizes "7024032 " like this:
{
"@odata.context": "/$metadata#Microsoft.Azure.Search.V2020_06_30.AnalyzeResult",
"tokens": [
{
"token": "7",
"startOffset": 0,
"endOffset": 7,
"position": 0
},
{
"token": "70",
"startOffset": 0,
"endOffset": 7,
"position": 0
},
{
"token": "702",
"startOffset": 0,
"endOffset": 7,
"position": 0
},
{
"token": "7024",
"startOffset": 0,
"endOffset": 7,
"position": 0
},
{
"token": "70240",
"startOffset": 0,
"endOffset": 7,
"position": 0
},
{
"token": "702403",
"startOffset": 0,
"endOffset": 7,
"position": 0
},
{
"token": "7024032",
"startOffset": 0,
"endOffset": 7,
"position": 0
}
]
}
How do I exclude the documents which did not match the term exactly?
Ngram is not the right approach in this case, as the prefix '702403' appears in all those documents. You can use it only if you set the minimum gram length to the length of the term you're searching for.
Here's an example:
token length: 3
sample content:
234
1234
2345
3456
001234
99234345
searching for '234'
it would return items 1 (234), 2 (1234), 3 (2345), 5 (001234) and 6 (99234345).
Another option, if you're 100% sure the content is stored the way you presented it, is to use a regular expression to retrieve what you want:
/.*7024032\s+/
I figured out the problem:
I had created the field with the "analyzer" property referring to my custom analyzer ("edge_nGram_analyzer"). Setting this property means strings are tokenized both at indexing time and at search time. So searching for "7024032" meant I was searching for all of its edge n-gram tokens: "7", "70", "702", "7024", "70240", "702403", "7024032".
The indexAnalyzer and searchAnalyzer properties can be used instead, to handle index-time tokenizing separately from search-time tokenizing. When I used them separately:
{ "indexAnalyzer": "edge_nGram_analyzer", "searchAnalyzer": "whitespace" }
everything worked as expected.
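For reference, the relevant part of the field definition then looks something like this (a sketch; the field and analyzer names are the ones used above, and other field attributes are omitted):
{
  "name": "short_text_prefix",
  "type": "Edm.String",
  "searchable": true,
  "indexAnalyzer": "edge_nGram_analyzer",
  "searchAnalyzer": "whitespace"
}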
I feel like the documentation on loading JSON files into Cassandra with dsbulk is really lacking.
Here is part of the JSON file that I'm trying to load:
[
{
"tags": [
"r"
],
"owner": {
"reputation": 23,
"user_id": 12235281,
"user_type": "registered",
"profile_image": "https://www.gravatar.com/avatar/60e28f52215bff12adb9758fc2cf86dd?s=128&d=identicon&r=PG&f=1",
"display_name": "Me28",
"link": "https://stackoverflow.com/users/12235281/me28"
},
"is_answered": false,
"view_count": 3,
"answer_count": 0,
"score": 0,
"last_activity_date": 1589053659,
"creation_date": 1589053659,
"question_id": 61702762,
"link": "https://stackoverflow.com/questions/61702762/merge-dataframes-in-r-with-different-size-and-condition",
"title": "Merge dataframes in R with different size and condition"
},
{
"tags": [
"python",
"location",
"pyautogui"
],
"owner": {
"reputation": 1,
"user_id": 13507535,
"user_type": "registered",
"profile_image": "https://lh3.googleusercontent.com/a-/AOh14GgtdM9KrbH3X5Z33RCtz6xm_TJUSQS_S31deNYUcA=k-s128",
"display_name": "lowhatex",
"link": "https://stackoverflow.com/users/13507535/lowhatex"
},
"is_answered": false,
"view_count": 2,
"answer_count": 0,
"score": 0,
"last_activity_date": 1589053657,
"creation_date": 1589053657,
"question_id": 61702761,
"link": "https://stackoverflow.com/questions/61702761/want-to-get-a-grip-of-this-pyautogui-command",
"title": "Want to get a grip of this pyautogui command"
}
]
The way I have been trying to load this is the following:
dsbulk load -url ./data_so1.json -k stackoverflow_t -t staging_t -h '182.14.0.1' -header false -u username -p password
This is the closest I get; it pushes the values into Cassandra row by row, like this:
data
-------------------------------------------------------------------------------------------------------------------------------
"title": "'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine giving exception on client"
"profile_image": "https://www.gravatar.com/avatar/05085ede54486bdaebefcf8363e081e2?s=128&d=identicon&r=PG&f=1",
"view_count": 422,
"question_id": 61702768,
"user_id": 12235281,
This just takes the rows as they are (including the commas). I've tried the -m key for mapping but didn't really get anywhere with it.
What would be the right way to get these values into their own respective columns?
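For what it's worth, a sketch of the direction that seems intended (assumptions: dsbulk's JSON connector is selected with -c json, staging_t has columns named after the top-level JSON keys, and the field names in -m are taken from the sample above; the nested owner object would additionally need a matching UDT or its own mapping):
dsbulk load -c json -url ./data_so1.json \
  -k stackoverflow_t -t staging_t \
  -h '182.14.0.1' -u username -p password \
  -m 'question_id = question_id, title = title, view_count = view_count, score = score'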
The content is on stdout
I'm trying to get the result of this package using Node.js. I have been trying to use spawn and exec, and to log the child_process object to debug it, but I cannot see the value on stdout, even though the stderr data is fine.
When I redirect the terminal output, I'm able to log stderr, but stdout is just empty if I log it to a file, even though it does show up in the terminal.
Then I tried running just the tool to check the result, and concluded it's a problem with the tool, not the Node.js code.
EDIT: Adding the terminal content as text:
Macbooks-MacBook-Pro:query macos$ lola --formula="EF DEADLOCK" input.lola --quiet --json
{"analysis": {"formula": {"parsed": "EF (DEADLOCK)", "parsed_size": 13, "type": "deadlock"}, "result": true, "stats": {"edges": 3, "states": 4}}, "call": {"architecture": 64, "assertions": false, "build_system": "x86_64-apple-darwin17.7.0", "error": null, "hostname": "Macbooks-MacBook-Pro.local", "optimizations": true, "package_version": "2.0", "parameters": ["--formula=EF DEADLOCK", "input.lola", "--quiet", "--json"], "signal": null, "svn_version": "Unversioned directory"}, "files": {"net": {"filename": "input.lola"}}, "limits": {"markings": null, "time": null}, "net": {"conflict_sets": 6, "filename": "input.lola", "places": 8, "places_significant": 6, "transitions": 7}, "store": {"bucketing": 16, "encoder": "bit-perfect", "threads": 1, "type": "prefix"}}
Macbooks-MacBook-Pro:query macos$ lola --formula="EF DEADLOCK" input.lola --quiet --json 2> aaa.txt
{"analysis": {"formula": {"parsed": "EF (DEADLOCK)", "parsed_size": 13, "type": "deadlock"}, "result": true, "stats": {"edges": 3, "states": 4}}, "call": {"architecture": 64, "assertions": false, "build_system": "x86_64-apple-darwin17.7.0", "error": null, "hostname": "Macbooks-MacBook-Pro.local", "optimizations": true, "package_version": "2.0", "parameters": ["--formula=EF DEADLOCK", "input.lola", "--quiet", "--json"], "signal": null, "svn_version": "Unversioned directory"}, "files": {"net": {"filename": "input.lola"}}, "limits": {"markings": null, "time": null}, "net": {"conflict_sets": 6, "filename": "input.lola", "places": 8, "places_significant": 6, "transitions": 7}, "store": {"bucketing": 16, "encoder": "bit-perfect", "threads": 1, "type": "prefix"}}
Macbooks-MacBook-Pro:query macos$
It sounds to me like the issue isn't really Node.js-related. I did some quick googling on this Lola tool, and it looks like it might have some custom stdout/stderr handling, so it is possible for it to behave differently when used directly from the terminal than when you redirect stdout and stderr.
One possible fix (if you can't figure out the stdout/stderr issue) would be to use the --json option and specify a temporary filename, then have your Node.js code read the result from the temporary file created.
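A sketch of that workaround (assumptions: lola accepts a filename via --json=FILE, which is not verified here; the other flags are as in the terminal transcript above):
const { execFile } = require("child_process");
const { readFile } = require("fs/promises");

// Run lola, then read the JSON result from a temp file instead of stdout.
execFile(
  "lola",
  ["--formula=EF DEADLOCK", "input.lola", "--quiet", "--json=result.json"], // --json=FILE is an assumption
  async (err) => {
    if (err) throw err;
    const result = JSON.parse(await readFile("result.json", "utf8"));
    console.log(result.analysis.result); // true when EF DEADLOCK holds
  }
);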
Is there any way to fetch the live availability of a host/hostgroup from the Nagios monitoring tool (where hosts/hostgroups are already configured) so that it can be redirected/captured to an external webpage?
Are there any exposed APIs to do that? I couldn't find a way.
Nagios is on a Linux host.
Any help or info is appreciated.
EDIT1:
I have a hostgroup, say for example 'All_prod'. In this hostgroup I have around 20 Linux hosts, and for all of those hosts some metrics/checks are defined (for example availability, CPU load, free memory, etc.). Here I want a report on only the availability metric for all the hosts (for example: if, in a 24-hour window, availability is down for 10 minutes, it should report that it was down for 10 minutes out of 24 hours, or just give me any related info which I can evaluate through data evaluation).
It would be great if there are any APIs to fetch that information that return the data as JSON/XML.
You can use the Nagios JSON API. You can use the query builder here http://NAGIOSURL/jsonquery.html.
But, to answer your specific question, the queries for hosts would look like this:
http://NAGIOSURL/cgi-bin/statusjson.cgi?query=host&hostname=localhost
Which will output something similar to the following:
{
"format_version": 0,
"result": {
"query_time": 1497384499000,
"cgi": "statusjson.cgi",
"user": "nagiosadmin",
"query": "host",
"query_status": "released",
"program_start": 1497368240000,
"last_data_update": 1497384489000,
"type_code": 0,
"type_text": "Success",
"message": ""
},
"data": {
"host": {
"name": "localhost",
"plugin_output": "egsdda",
"long_plugin_output": "",
"perf_data": "",
"status": 8,
"last_update": 1497384489000,
"has_been_checked": true,
"should_be_scheduled": false,
"current_attempt": 10,
"max_attempts": 10,
"last_check": 1496158536000,
"next_check": 0,
"check_options": 0,
"check_type": 1,
"last_state_change": 1496158536000,
"last_hard_state_change": 1496158536000,
"last_hard_state": 1,
"last_time_up": 1496158009000,
"last_time_down": 1496158536000,
"last_time_unreachable": 1480459504000,
"state_type": 1,
"last_notification": 1496158536000,
"next_notification": 1496165736000,
"no_more_notifications": false,
"notifications_enabled": true,
"problem_has_been_acknowledged": false,
"acknowledgement_type": 0,
"current_notification_number": 2,
"accept_passive_checks": true,
"event_handler_enabled": true,
"checks_enabled": false,
"flap_detection_enabled": true,
"is_flapping": false,
"percent_state_change": 0,
"latency": 0.49,
"execution_time": 0,
"scheduled_downtime_depth": 0,
"process_performance_data": true,
"obsess": true
}
}
}
And for hostgroups:
http://NAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=hostlist&hostgroup=linux-servers
Which will output something similar to the following:
{
"format_version": 0,
"result": {
"query_time": 1497384613000,
"cgi": "statusjson.cgi",
"user": "nagiosadmin",
"query": "hostlist",
"query_status": "released",
"program_start": 1497368240000,
"last_data_update": 1497384609000,
"type_code": 0,
"type_text": "Success",
"message": ""
},
"data": {
"selectors": {
"hostgroup": "linux-servers"
},
"hostlist": {
"localhost": 8
}
}
}
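A quick way to consume that from Node.js (a sketch: assumes Node 18+ for the global fetch, basic-auth credentials, and that the host state codes are 1 = pending, 2 = up, 4 = down, 8 = unreachable, which should be verified against your statusjson.cgi docs):
// Sketch: query the hostlist for a hostgroup and print each host's availability.
const url = "http://NAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=hostlist&hostgroup=linux-servers";
const auth = "Basic " + Buffer.from("nagiosadmin:password").toString("base64");

fetch(url, { headers: { Authorization: auth } })
  .then(res => res.json())
  .then(({ data }) => {
    for (const [host, state] of Object.entries(data.hostlist)) {
      console.log(host, state === 2 ? "UP" : `NOT UP (state ${state})`);
    }
  });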
Hope this helps!
EDIT 1 (To correspond with the question's EDIT 1):
What you're asking for isn't built in by default. You can use the above methods to grab the data for each host (but it sounds like you want it for each service), so again we will use the JSON API found at http://YOURNAGIOSURL/jsonquery.html to grab service data.
http://YOURNAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=service&hostname=localhost&servicedescription=Current+Load
We'll get the following output (something similar, anyway):
{
"format_version": 0,
"result": {
"query_time": 1497875258000,
"cgi": "statusjson.cgi",
"user": "nagiosadmin",
"query": "service",
"query_status": "released",
"program_start": 1497800686000,
"last_data_update": 1497875255000,
"type_code": 0,
"type_text": "Success",
"message": ""
},
"data": {
"service": {
"host_name": "localhost",
"description": "Current Load",
"plugin_output": "OK - load average: 0.00, 0.00, 0.00",
"long_plugin_output": "",
"perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
"max_attempts": 4,
"current_attempt": 1,
"status": 2,
"last_update": 1497875255000,
"has_been_checked": true,
"should_be_scheduled": true,
"last_check": 1497875014000,
"check_options": 0,
"check_type": 0,
"checks_enabled": true,
"last_state_change": 1497019191000,
"last_hard_state_change": 1497019191000,
"last_hard_state": 0,
"last_time_ok": 1497875014000,
"last_time_warning": 1497019191000,
"last_time_unknown": 0,
"last_time_critical": 1497018891000,
"state_type": 1,
"last_notification": 0,
"next_notification": 0,
"next_check": 1497875314000,
"no_more_notifications": false,
"notifications_enabled": true,
"problem_has_been_acknowledged": false,
"acknowledgement_type": 0,
"current_notification_number": 0,
"accept_passive_checks": true,
"event_handler_enabled": true,
"flap_detection_enabled": true,
"is_flapping": false,
"percent_state_change": 0,
"latency": 0,
"execution_time": 0,
"scheduled_downtime_depth": 0,
"process_performance_data": true,
"obsess": true
}
}
}
The most important line for what you're trying to do (as far as I understand it) is the perfdata line:
"perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
This is the data you'd use to generate whatever custom metrics report you're trying to generate.
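For example, a small parser for that line (a sketch; the label=value[;warn[;crit[;min[;max]]]] layout is the standard Nagios perfdata convention, and units appended to values or quoted labels are not handled here):
// Sketch: parse a Nagios perf_data string into structured metrics.
function parsePerfData(perfData) {
  return perfData.trim().split(/\s+/).map(entry => {
    const [label, rest] = entry.split("=");
    const [value, warn, crit, min, max] = rest.split(";");
    return { label, value: parseFloat(value), warn, crit, min, max };
  });
}

parsePerfData("load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0;");
// → [ { label: "load1", value: 0, warn: "5.000", crit: "10.000", min: "0", max: "" }, ... ]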
Keep in mind this is something that is sort of built into Nagios XI (not in an exportable format like you're requesting), but the metrics component does allow you to easily drill down and take a look at some metric-specific data.
Hope this helps!