I am building a React app in which I have to create a JSON structure and then add it to a JSON file.
Here is the structure I have to build:
{
  "name": {
    "label": "Name",
    "type": "text",
    "operators": ["equal", "not_equal"],
    "defaultOperator": "not_equal"
  },
  "age": {
    "label": "Age",
    "type": "number",
    "operators": [
      "equal",
      "not_equal",
      "less",
      "less_or_equal",
      "greater",
      "greater_or_equal",
      "between",
      "not_between",
      "is_empty",
      "is_not_empty"
    ]
  },
  "gender": {
    "label": "Gender",
    "type": "select",
    "listValues": {
      "male": "Male",
      "female": "Female"
    }
  }
}
After building the JSON structure, I want to write it to a JSON file, which is the configuration file for a React library (react-awesome-query-builder). How can I write to a JSON file using JS?
I know that I can use Node.js and its fs module for this, but I am not sure how to use that in React. Perhaps there is a library I can use for this? Can someone point me in the right direction?
You can write it exactly like a traditional text file (because JSON is basically text). Just don't forget to give your file the .json extension!
It is when you open it afterwards that you will have to parse it as JSON, which is well handled by plenty of libraries.
Well... I guess I figured it out. Instead of using an external JSON file, I defined the config inside the app as a variable, and it works. No need for Node.js read/write or any of that.
I am currently grabbing the entire swagger.json contents:
import requests
import json

swagger_link = "https://<SWAGGER_URL>/swagger/v1/swagger.json"
swagger_link_contents = requests.get(swagger_link)

# write the returned swagger to a file
swagger_return = swagger_link_contents.json()
with open("swagger_raw.json", "w") as f:
    f.write(json.dumps(swagger_return, indent=4))
The file is filled with beautiful JSON now. But there are a ton of objects in there, quite a few I don't need. Say I wanted to grab the command "UpdateCookieCommand" from my document:
"parameters": [
  {
    "name": "command",
    "in": "body",
    "required": true,
    "schema": {
      "$ref": "#/definitions/UpdateCookieCommand"
    },
    "x-nullable": false
  }
],
but it also has additional mentions later in the document...
"UpdateCookieCommand": {
  "allOf": [
    {
      "$ref": "#/definitions/BaseCommand"
    },
    {
      "type": "object",
      "required": [
        "command",
        "id"
      ],
The latter object is really what I want to take from the document. It tells me what's required for a nominal API command to Update a Cookie. The hope is I could have a script that will look at every release and be able to absorb changes of the swagger and build new commands.
What methodology would you all recommend to accomplish this? I considered shelling out to awk, sed, and grep via subprocess, but maybe there's a more elegant way? Also, I won't know the number of lines or the location of anything, since a new Swagger will be produced each release, and my script will need to run automatically in a CI/CD pipeline.
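Since Swagger 2.0 keeps every schema referenced as "#/definitions/X" under the top-level "definitions" key, the extraction can be a plain dictionary lookup in Python rather than awk/sed/grep, and it doesn't depend on line numbers or file position. A minimal sketch; the inline swagger dict is a stub standing in for the swagger_raw.json downloaded above:

```python
import json

# Stub standing in for the downloaded document; in the real pipeline you
# would do: swagger = json.load(open("swagger_raw.json"))
swagger = {
    "definitions": {
        "BaseCommand": {"type": "object"},
        "UpdateCookieCommand": {
            "allOf": [
                {"$ref": "#/definitions/BaseCommand"},
                {"type": "object", "required": ["command", "id"]},
            ]
        },
    }
}

# Every "#/definitions/..." $ref resolves to a key under "definitions",
# so pulling one command out is just a dict lookup.
command = swagger["definitions"]["UpdateCookieCommand"]
print(json.dumps(command, indent=2))
```

Because this is a key lookup rather than text matching, it keeps working when the schema moves around or grows between releases, which suits an unattended CI/CD run.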
I have a FastAPI application that I need to test using test data. I have a shell script that loads the data from a JSON file and runs the tests, and it works perfectly. I just need a way to automatically derive the JSON test file. The JSON file contains something like this:
{
  "name": "registration-test",
  "endpoint": "/user/register",
  "method": "post",
  "request": {
    "name": "hello ",
    "email": "foo334@gnuze.org",
    "password": "123456"
  },
  "expression": "email",
  "expected": "foo334@gnuze.org"
}
To get the request data, I need a way to find all the request classes in the project, assign dummy values to their fields, and return the result in JSON format.
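One way to derive the request payload automatically is to introspect each request class's type annotations and substitute a placeholder value per type. A minimal sketch, assuming a hypothetical RegisterRequest class; in a real FastAPI project these would be your pydantic models, whose fields are declared with the same annotations:

```python
import json
from typing import get_type_hints

# Hypothetical request model -- stands in for one of your project's
# pydantic request classes.
class RegisterRequest:
    name: str
    email: str
    password: str

# Placeholder value per annotated type; extend for ints, bools,
# nested models, etc.
DUMMY = {str: "dummy", int: 0, bool: False, float: 0.0}

def dummy_payload(model: type) -> dict:
    """Build a dict of dummy values from a class's type annotations."""
    return {name: DUMMY.get(hint) for name, hint in get_type_hints(model).items()}

# Assemble a test case in the same shape as the JSON file above.
test_case = {
    "name": "registration-test",
    "endpoint": "/user/register",
    "method": "post",
    "request": dummy_payload(RegisterRequest),
}
print(json.dumps(test_case, indent=2))
```

To cover the whole project you would loop this over every request class (e.g. everything exported from your schemas module) and write one test-case object per endpoint.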
I am using the Perspective API (you can check it out at http://perspectiveapi.com/) for my Discord application. I am sending an analyze request and the API returns this:
{
  "attributeScores": {
    "TOXICITY": {
      "spanScores": [
        {
          "begin": 0,
          "end": 22,
          "score": {
            "value": 0.9345592,
            "type": "PROBABILITY"
          }
        }
      ],
      "summaryScore": {
        "value": 0.9345592,
        "type": "PROBABILITY"
      }
    }
  },
  "languages": [
    "en"
  ],
  "detectedLanguages": [
    "en"
  ]
}
I need to get the "value" in "summaryScore" as an integer. I searched on Google, but I only found examples for reading values from flat or singly nested JSON files. How can I do that?
Note: sorry if I am asking something really easy or if I slaughtered the English. English is not my primary language and I am not very experienced with Node.js.
First you must make sure the object you have received is parsed by Node.js as a JSON object; look at this answer for how. Once the object is stored as a JSON object you can do the following:
Reading from nested objects or arrays is as easy as doing this:
object.attributeScores.TOXICITY.summaryScore.value
If you look closer to the object and its structure you can see that the root object (the first {}) contains 3 values: "attributeScores", "languages" and "detectedLanguages".
The field you are looking for exists inside the "summaryScore" object, which exists inside the "TOXICITY" object, and so on. Thus you need to traverse the object structure until you get to the value you need.
I am using Alteryx to take an Excel file and convert it to JSON. The JSON output I'm getting looks different from what I expected, and each object is wrapped in a "JSON" key, which I don't want. I would also like to know which components I should use to map fields to specific JSON fields, instead of key-value pairs, if I need to later in the flow.
I have attached my sample workflow and excel which are:
Excel screenshot
Alteryx test flow
JSON output I am seeing:
[
  {
    "JSON": "{\"email\":\"test123@test.com\",\"startdate\":\"2020-12-01\",\"isEnabled\":\"0\",\"status\":\"active\"}"
  },
  {
    "JSON": "{\"email\":\"myemail@emails.com\",\"startdate\":\"2020-12-02\",\"isEnabled\":\"1\",\"status\":\"active\"}"
  }
]
What I expected:
[
  {
    "email": "test123@test.com",
    "startdate": "2020-12-01",
    "isEnabled": "0",
    "status": "active"
  },
  {
    "email": "myemail@emails.com",
    "startdate": "2020-12-02",
    "isEnabled": "1",
    "status": "active"
  }
]
Also, what component would I use if I wanted to map the structure above to another JSON structure similar to this one:
[
  {
    "name": "MyName",
    "accountType": "array",
    "contactDetails": {
      "email": "test123@test.com",
      "startDate": "2020-12-01"
    }
  }
]
Thanks
In the workflow that you have built, you are essentially creating the JSON twice. The JSON Build tool creates the JSON structure, so if you then want to output it, select your file to output and change the dropdown to CSV with delimiter \0 and no headers.
Alternatively, try putting an Output tool straight after your Excel file and outputting to JSON; the Output Tool will build the JSON for you.
In answer to your second question, build the JSON for contact details first as a field (remember to rename JSON to contactDetails), then build from there with one of the above options.
In an Azure App Service Logic App, I have an AzureStorageBlobConnector that retrieves a file from storage. The file is retrieved as binary, without setting any ContentTransferEncoding. My connector definition (subscription details replaced with 'x') looks like this:
"azurestorageblobconnector": {
  "type": "ApiApp",
  "inputs": {
    "apiVersion": "2015-01-14",
    "host": {
      "id": "/subscriptions/x/providers/Microsoft.AppService/apiapps/azurestorageblobconnector",
      "gateway": "https://x.azurewebsites.net"
    },
    "operation": "GetBlob",
    "parameters": {
      "BlobPath": "#triggers().outputs.body.Properties['FilePath']",
      "FileType": "Binary"
    },
    "authentication": {
      "type": "Raw",
      "scheme": "Zumo",
      "parameter": "#parameters('/subscriptions/x/resourcegroups/x/providers/Microsoft.AppService/apiapps/azurestorageblobconnector/token')"
    }
  },
  "repeat": null,
  "conditions": []
},
I want to author a custom Api Connector to receive this file, make some changes to it, then return it for the next step in the workflow.
What form will the file be in when the storage blob connector passes it to the next connector as #body('azurestorageblobconnector').Content? Will it be HttpPostedFile or a Stream or Multipart content in the body, or something else?
It depends on how you configure the connector: if you choose "Binary", then the content will come through as a Base64-encoded string.
If you choose "Text", then it will be the raw text.
One way to deal with that in your API App is to try Convert.FromBase64String: if that succeeds, you have a byte array with the actual bytes; if it fails, you can assume you have the raw text content of the file.
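The Convert.FromBase64String suggestion is C#; the same try-Base64-else-raw-text logic can be sketched in Python for illustration (decode_blob_content is a hypothetical helper, not part of any Azure SDK):

```python
import base64
import binascii

def decode_blob_content(content: str):
    """Try the Base64 route first (FileType "Binary"); fall back to raw text."""
    try:
        # validate=True makes the decoder reject non-Base64 characters
        # instead of silently skipping them.
        return base64.b64decode(content, validate=True)  # -> bytes
    except (binascii.Error, ValueError):
        return content  # assume the connector sent raw text

print(decode_blob_content("aGVsbG8="))   # Base64 for b"hello"
print(decode_blob_content("plain text"))
```

One caveat with this guessing approach: a raw-text file whose entire content happens to be valid Base64 would be misclassified, so if you control the connector configuration it is safer to rely on the FileType setting than to sniff the payload.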