String difference in jsondiffpatch (Node.js)

I am using the jsondiffpatch npm module to compute the difference between two JSON documents.
The left JSON contains:
"inclusionParams": "{\"internalCode\":{\"isIncluded\":true,\"type\":\"string\",\"searchStr\":\"320150,320285,321887,322866,322867,323007,323009,323011,323037,323051,323907,323914\"}}",
The right JSON contains:
"inclusionParams": "{\"country\":{\"isIncluded\":true,\"type\":\"string\",\"searchStr\":\"US\",\"expr\":null,\"ids\":null},\"ext.dmaCode\":{\"isIncluded\":false,\"type\":\"string\",\"searchStr\":\"868, 801, 641, 597, 504\",\"expr\":null,\"ids\":null},\"status\":{\"isIncluded\":true,\"type\":\"string\",\"searchStr\":\"Active\",\"expr\":null,\"ids\":null}}",
The difference I get looks something like:
inclusionParams=["## -80,76 +80,32 ##\n ,855\n-,%22%7D,%22status%22:%7B%22isIncluded%22:true,%22type%22:%22string%22,%22searchStr%22:%22Active%22\n+%22,%22expr%22:null,%22ids%22:null\n %7D,%22c\n## -163,10 +163,33 ##\n tr%22:%22US%22\n+,%22expr%22:null,%22ids%22:null\n %7D%7D\n",0,2];
but I want the diff to contain the right JSON value itself instead of the character-wise text diff shown above.
How can I achieve this? Are there any configuration options for it?

const jsondiffpatch = require('jsondiffpatch');

const differ = jsondiffpatch.create({
  textDiff: {
    // default 60: minimum string length (left and right sides) required to use the
    // text diff algorithm (google-diff-match-patch)
    minLength: 10000
  },
});
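For reference, a minimal, self-contained sketch of what the documented textDiff.minLength option does (the left/right objects below are shortened placeholders, not the full payloads above): once minLength is raised above the length of the stringified inclusionParams value, jsondiffpatch skips the text-diff algorithm and reports the change as a plain [oldValue, newValue] pair, i.e. the second element is the whole right-hand value.

const jsondiffpatch = require('jsondiffpatch');

// Raise minLength above the length of the inclusionParams strings so the
// text-diff algorithm is never applied to them.
const differ = jsondiffpatch.create({
  textDiff: { minLength: 10000 }
});

// Shortened placeholder values, standing in for the payloads in the question.
const left  = { inclusionParams: '{"internalCode":{"isIncluded":true}}' };
const right = { inclusionParams: '{"country":{"isIncluded":true}}' };

const delta = differ.diff(left, right);
// Expected shape: { inclusionParams: [ '<old string>', '<new string>' ] }
console.log(delta);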

Related

How to force gem that converts all bins to 93k multibins to output 93k native bins?

My need is to get good old-fashioned 93k native bad bins defined in my test flow. My Ruby file compiles, but it looks like the gem is converting all bins to multibins. Is there a way to force this from my Ruby file instead of hacking the gem files? If yes, going ahead with this, I couldn't find how to specify the hardbin and softbin descriptions in Origen. That is something I would like to add in the Ruby code instead of on the ATE.
On a side note, I am also trying to force the output file name to something I want. In the sample code below I want the output file to be test.tf, but the gem is adding some string and an underscore in front of "test". I don't need that either.
Sample code:
Flow.create interface: 'MyTester::Interface', params: :room, unique_test_names: nil,
            flow_name: :test, file_name: :test, insertion: :prb do
  test_info1 = {
    "key_1" => [{ :testname => "t1", :sbin => 100, :patternname => "p1" }],
    "key_2" => [{ :testname => "t2", :sbin => 200, :patternname => "t3" }]
  }
  testnum = 100000
  test_info1.each do |key, val|
    puts key
    val.each do |info|
      tname, sb, pname = info.values_at(:testname, :sbin, :patternname)
      puts "#{tname} : #{sb} : #{pname}"
      test_suites.add("#{tname}", pattern: "#{pname}", tim_spec_set: 1, timset: 1,
                      lev_equ_set: 1, lev_spec_set: 10, levset: 1,
                      test_method: test_methods.ac_tml.ac_test.functional_test)
      testnum = testnum + 100
      test :"#{tname}", bin: 10, softbin: "#{sb}", tnum: testnum
    end
  end
end

How do I declare and use a variable in a YAML file formatted for pyresttest?

So, a brief description of what I want, what my issue is, and what I have tried.
I want to declare and use a dictionary variable for my tests in pyresttest, specifically for the url and body sections, so that I can run my POST tests against a specific endpoint with a preformatted body.
Here is how my mytest.yml file is structured:
- data:
    - id: 63
    - rate: 25
    # ... a sizable set of fields, for reasons ...
    - type: lab_test_authorization
    - modified_at: ansible_date_time.datetime  # Useful way to generate
- test:
    - url: "some-valid-url/{the_url_question}"   # data['special_key']
    - method: 'POST'
    - headers: {etc..etc}
    - body: '{ "data": ${the_body_question} }'   # data (the content)
Now the problem starts with my lack of understanding of why (if true) pyresttest does not support dictionary mappings. I understand YAML supports this feature, but I am not sure whether pyresttest can parse it. Knowing how to reference a dictionary variable in my url and body tags would be significantly helpful.
As of right now, if I try to convert my data sequence into a dictionary, I get an error stating:
yaml.parser.ParserError: while parsing a block mapping
in "<unicode string>", line 4, column 1:
data:
^
expected <block end>, but found '-'
in "<unicode string>", line 36, column 1:
- config:
I'm pretty sure there are gaps in my knowledge regarding how YAML and pyresttest interact with each other, so any insight would be greatly appreciated.
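A hedged sketch of one documented pyresttest approach: values are reused through variable binding plus templating rather than through arbitrary YAML dictionaries inside the test body. The names below (the_url_question, the_body_question, the endpoint path and body fields) are placeholders taken from the question, and the exact option names should be verified against the pyresttest docs:

# Sketch only: assumes pyresttest's variable_binds and template support.
- config:
    - testset: "POST with preformatted body"
    - variable_binds:
        the_url_question: 'special_key'
        the_body_question: '{"id": 63, "rate": 25, "type": "lab_test_authorization"}'
- test:
    - name: "Create lab test authorization"
    - url: {template: "/some-valid-url/$the_url_question"}
    - method: 'POST'
    - headers: {'Content-Type': 'application/json'}
    - body: {template: '{ "data": $the_body_question }'}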

Folium library error in choropleth

I am using the folium library with an open data set from Kaggle:
map.choropleth(geo_path=country_geo, data=plot_data,
columns=['CountryCode', 'Value'],
key_on='feature.id',
fill_color='YlGnBu', fill_opacity=0.7, line_opacity=0.2,
legend_name=hist_indicator
)
The above part of the code is giving me the following error:
TypeError: choropleth() got an unexpected keyword argument 'geo_path'
When I replace geo_path with geo_data I get this error:
JSONDecodeError: Expecting value: line 7 column 1 (char 6)
Is the issue related to "UCSanDiegoX: DSE200x Python for Data Science"? I took the advice of Cody and renamed geo_path to geo_data per the specification of map.choropleth.
At the GitHub repository, take care to use the RAW data, which is in fact a file structured in the GeoJSON format. The first two lines should start like the code provided below:
{"type":"FeatureCollection","features":[
{"type":"Feature","properties":{"name":"Afghanistan"},"geometry":
{"type":"Polygon","coordinates":[[[61.210817,35.650072],.....
geo_path doesn't work because it is not a parameter of choropleth; you are correct to replace it with geo_data.
Your second error is most likely due to a non-existent or incorrectly formatted GeoJSON file.
Per http://python-visualization.github.io/folium/docs-master/modules.html?highlight=chor#, the argument for geo_data needs to be a "URL, file path, or data (json, dict, geopandas, etc) to your GeoJSON geometries".
GeoJSON formatted files follow this structure from geojson.org:
{
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [125.6, 10.1]
  },
  "properties": {
    "name": "Dinagat Islands"
  }
}
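Putting it together, a minimal sketch of the corrected call, assuming the older folium Map.choropleth API used in the question (the file name, DataFrame contents, and legend text are placeholders; newer folium versions use folium.Choropleth(...).add_to(m) instead):

import folium
import pandas as pd

# Placeholders: substitute your Kaggle indicator data and a raw GeoJSON file
# of country boundaries downloaded from the course/GitHub repository.
country_geo = 'world-countries.json'   # must be raw GeoJSON, not an HTML page
plot_data = pd.DataFrame({'CountryCode': ['AFG', 'USA'], 'Value': [10.0, 42.0]})
hist_indicator = 'Some indicator name'

m = folium.Map(location=[0, 0], zoom_start=2)
m.choropleth(geo_data=country_geo,     # geo_data, not geo_path
             data=plot_data,
             columns=['CountryCode', 'Value'],
             key_on='feature.id',
             fill_color='YlGnBu', fill_opacity=0.7, line_opacity=0.2,
             legend_name=hist_indicator)
m.save('choropleth.html')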

Ignore the comment sign (%) in m-file within a string

In my code I have the following line:
fprintf(logfile,'Parameters: Size: %d\tH: %.4f\tF: %.1f\tI: %.3f\tR: %d\tSigma: %d\tDisp: %.1f\r\n',parameter_sets(ps,:));
which is too long, so I want to break it to:
fprintf(logfile,'Parameters: Size: %d\tH: %.4f\tF: %.1f\tI: %.3f\tR: ...
%d\tSigma: %d\tDisp: %.1f\r\n',parameter_sets(ps,:));
However, since the break is within a string, MATLAB sees the formatting %d sign on the second line as the start of a comment, ignores the rest of that line, and produces an error.
So I tried to make it clearer by wrapping the string in []:
fprintf(logfile,['Parameters: Size: %d\tH: %.4f\tF: %.1f\tI: %.3f\tR: ...
%d\tSigma: %d\tDisp: %.1f\r\n'],parameter_sets(ps,:));
but that did not help; it still interprets the second line as a comment. I also tried with and without the ellipsis (...) in different places, with no success.
So how can I break such a line to a reasonable length if it contains a % sign?
Split the format string into two strings and concatenate them like this:
fprintf(logfile,['Parameters: Size: %d\tH: %.4f\tF: %.1f\tI: %.3f\tR:', ...
    ' %d\tSigma: %d\tDisp: %.1f\r\n'],parameter_sets(ps,:));
% note the closing apostrophe and comma (',) before the ellipsis (...) at the end of the first line,
% the opening apostrophe (') at the start of the second line, and the leading space before %d so the
% concatenated string still reads 'R: %d'

Azure Stream Analytics: list inside a list

How do I get at the data in the list field when a JSON array is nested inside it? E.g.:
{"city":{"id":1851632,"name":"Shuzenji",
"coord":{"lon":138.933334,"lat":34.966671},
"country":"JP",
"cod":"200",
"message":0.0045,
"cnt":38,
"list":[{
"dt":1406106000,
"main":{
"temp":298.77,
"temp_min":298.77,
"temp_max":298.774,
"pressure":1005.93,
"sea_level":1018.18,
"grnd_level":1005.93,
"humidity":87
"temp_kf":0.26},
"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],
"clouds":{"all":88},
"wind":{"speed":5.71,"deg":229.501},
"sys":{"pod":"d"},
"dt_txt":"2014-07-23 09:00:00"}
]}
How do I get to weather.main?
First, your JSON format seems to have errors; let's fix those:
{"city":{"id":1851632,"name":"Shuzenji"},
"coord":{"lon":138.933334,"lat":34.966671},
"country":"JP",
"cod":"200",
"message":0.0045,
"cnt":38,
"list":[{
"dt":1406106000,
"main":{
"temp":298.77,
"temp_min":298.77,
"temp_max":298.774,
"pressure":1005.93,
"sea_level":1018.18,
"grnd_level":1005.93,
"humidity":87,
"temp_kf":0.26},
"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],
"clouds":{"all":88},
"wind":{"speed":5.71,"deg":229.501},
"sys":{"pod":"d"},
"dt_txt":"2014-07-23 09:00:00"}
]}
Note the comma added at the end of the humidity line, and the "city" record closed in the first line, right after "Shuzenji".
Now, in this JSON example, weather.main is actually list[0].weather[0].main. Here is the way to get precisely this value
out of the JSON structure using the Array and Record built-in functions:
SELECT
-- input.list[0].weather[0].main
WeatherMain = GetRecordPropertyValue(GetArrayElement(GetRecordPropertyValue(GetArrayElement(input.list, 0), 'weather'), 0), 'main')
FROM input
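If the list array can contain more than one element and you want one output row per element, a possible alternative is the GetArrayElements table-valued function with CROSS APPLY (a sketch; the listElement alias is arbitrary):

SELECT
    -- each element of input.list produces one row; take its first weather record's 'main'
    GetRecordPropertyValue(
        GetArrayElement(GetRecordPropertyValue(listElement.ArrayValue, 'weather'), 0),
        'main') AS WeatherMain
FROM input
CROSS APPLY GetArrayElements(input.list) AS listElement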
