Excel table to JSON converter

I have an Excel table that contains the following data, in this format:
mainheading:
subheading:
mandatory:
fieldname:
type:
description:
function:
optional:
type:
description:
and then the cycle repeats. I need to loop through the Excel file and build the required JSON.
What is an efficient way to do this? Can anyone help with this Excel-to-JSON conversion?
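A minimal sketch in Python of how the repeating label/value cycle could be grouped into JSON records. The row layout, labels, and sample values are assumptions based on the field names listed above; in practice the rows would come from pandas (e.g. `rows = list(zip(df["label"], df["value"]))` after `pd.read_excel(..., header=None, names=["label", "value"])`):

```python
import json

# Assumed layout: the field label is in one column and its value in the
# next; a new record starts each time the first label of the cycle appears.
rows = [
    ("mainheading:", "Orders"),
    ("subheading:", "Header"),
    ("mandatory:", "yes"),
    ("fieldname:", "order_id"),
    ("type:", "string"),
    ("description:", "Unique id"),
    ("mainheading:", "Orders"),   # the cycle repeats here
    ("subheading:", "Items"),
]

def rows_to_records(rows, start_label="mainheading"):
    """Group repeating label/value rows into one dict per cycle."""
    records, current = [], {}
    for label, value in rows:
        key = str(label).rstrip(":").strip().lower()
        if key == start_label and current:  # new cycle begins
            records.append(current)
            current = {}
        # Note: duplicate labels within one cycle (e.g. a second "type:")
        # would overwrite each other; nest them under "optional" if needed.
        current[key] = value
    if current:
        records.append(current)
    return records

print(json.dumps(rows_to_records(rows), indent=2))
```

From there, `json.dumps` produces the final JSON string, or the list of dicts can be used directly.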


Reading an Excel file, extracting each cell value as a string

I have the following Excel file:
I need to save each value in a list, as strings.
I am new to Python; I have already read the file into a dataframe. Please help further.
df = pd.read_excel(userfile, converters={'Agentcode':str})
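A small sketch of extracting every cell as a string once the dataframe exists. The frame is constructed inline here because the actual file isn't available; in the question it would come from `pd.read_excel(userfile, converters={'Agentcode': str})`, and the column names below are assumptions:

```python
import pandas as pd

# Stand-in for the dataframe read from Excel (column names assumed)
df = pd.DataFrame({"Agentcode": ["007", "042"], "Name": ["A", "B"]})

# Convert every cell to a string and flatten row by row into one list
values = df.astype(str).values.flatten().tolist()
print(values)  # -> ['007', 'A', '042', 'B']
```

`astype(str)` handles the string conversion for every column at once, so the `converters` argument is only needed when a column (like a zero-padded code) must be read as text in the first place.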

How to get File Upload field properties from Microsoft Form in Power Automate?

Desired Behaviour
I am trying to get the file properties of a file uploaded via a Microsoft Form in Power Automate.
Research
I've tried numerous variations of suggestions from sources such as:
How to get data from JSON objects using expressions in Power Automate (video)
Working with files from the Forms "File Upload" question type
Reference guide to using functions in expressions for Azure Logic Apps and Power Automate
I am a fairly experienced developer and I am familiar with variable types (String, Array, Object etc) and how to reference items in an object with dot or bracket notation and accessing array items by index.
I'm also familiar with:
JSON.parse()
The JSON.parse() method parses a JSON string, constructing the JavaScript value or object described by the string.
JSON.stringify()
The JSON.stringify() method converts a JavaScript object or value to a JSON string.
And have read about Power Automate's Parse JSON action
To reference or access properties in JavaScript Object Notation (JSON) content, you can create user-friendly fields or tokens for those properties by using the Parse JSON action. That way, you can select those properties from the dynamic content list when you specify inputs for your logic app. For this action, you can either provide a JSON schema or generate a JSON schema from your sample JSON content or payload
But I am still having a lot of difficulty getting the values I need.
What I've Tried
01) Use Parse JSON to access properties of the Response body:
02) Run the flow and look at the Raw Outputs of Parse JSON:
{
  "body": {
    "responder": "me#domain.com",
    "submitDate": "7/5/2021 7:03:26 AM",
    "letters-and-numbers-here-1": "some text here",
    "letters-and-numbers-here-2": "[{\"name\":\"File 01 Name.docx\",\"link\":\"https://tenant.sharepoint.com/sites/MySiteName/_layouts/15/Doc.aspx?sourcedoc=%7Bletters-and-numbers%7D&file=File%2001%20Name%20_Uploader%20Name.docx&action=default&mobileredirect=true\",\"id\":\"id-is-here\",\"type\":null,\"size\":20411,\"referenceId\":\"reference-id-is-here\",\"driveId\":\"drive-id-is-here\",\"status\":1,\"uploadSessionUrl\":null}]",
    "letters-and-numbers-here-3": "[{\"name\":\"File 02 Name.docx\",\"link\":\"https://tenant.sharepoint.com/sites/MySiteName/_layouts/15/Doc.aspx?sourcedoc=%7Bletters-and-numbers%7D&file=File%2002%20Name%20_Uploader%20Name.docx&action=default&mobileredirect=true\",\"id\":\"id-is-here\",\"type\":null,\"size\":20411,\"referenceId\":\"reference-id-is-here\",\"driveId\":\"drive-id-is-here\",\"status\":1,\"uploadSessionUrl\":null}]",
    "letters-and-numbers-here-4": "some other text here"
  }
}
03) Try and get the name value of the first File Upload field.
I had assumed that the first Parse JSON would have recursively set all values as JSON objects; however, it looks like the value of the File Upload field is still a string. So I figure I have to run another Parse JSON action on the File Upload field.
Content:
body('Parse_JSON')?['letters-and-numbers-here-2']
Sample JSON Payload:
{
  "letters-and-numbers-here-2": "[{\"name\":\"File 01 Name.docx\",\"link\":\"https://tenant.sharepoint.com/sites/MySiteName/_layouts/15/Doc.aspx?sourcedoc=%7Bletters-and-numbers%7D&file=File%2001%20Name%20_Uploader%20Name.docx&action=default&mobileredirect=true\",\"id\":\"id-is-here\",\"type\":null,\"size\":20411,\"referenceId\":\"reference-id-is-here\",\"driveId\":\"drive-id-is-here\",\"status\":1,\"uploadSessionUrl\":null}]"
}
Which produces the Error:
[
  {
    "message": "Invalid type. Expected Object but got Array.",
    "lineNumber": 0,
    "linePosition": 0,
    "path": "",
    "schemaId": "#",
    "errorType": "type",
    "childErrors": []
  }
]
So I tried to target the first Object within the File Upload field Array, and I try the following values as the Content of Parse JSON 2:
body('Parse_JSON')?['letters-and-numbers-here-2']?[0]
body('Parse_JSON')['letters-and-numbers-here-2'][0]
Both produce the error:
Unable to process template language expressions in action 'Parse_JSON_2' inputs at line '1' and column '9206': 'The template language expression 'body('Parse_JSON')['letters-and-numbers-here-2'][0]' cannot be evaluated because property '0' cannot be selected. Property selection is not supported on values of type 'String'. Please see https://aka.ms/logicexpressions for usage details.'
Question
How do I access the File Upload field properties?
For reference, the Microsoft Form has 4 fields of type:
Text
File Upload (limited to 1 file only)
File Upload (limited to 1 file only)
Choice
The connectors used are:
When a new response is submitted
Get response details
The body returned by Get response details, in Raw outputs, is:
"body": {
  "responder": "me#domain.com",
  "submitDate": "7/5/2021 3:17:56 AM",
  "lots-of-letters-and-numbers-1": "text string here",
  "lots-of-letters-and-numbers-2": [{.....}],
  "lots-of-letters-and-numbers-3": [{.....}],
  "lots-of-letters-and-numbers-4": "text string here"
}
I tried a different approach, without the Parse JSON action, and could access the file name:
The expression values were:
`Compose` > `Inputs`: json(body('Get_response_details')?['lots-of-letters-and-numbers-2'])
`Initialize Variable` > `Value`: outputs('Compose')[0]['name']
Or, even more simply, using just a Compose or an Initialize variable action with this single expression, combining json() and body():
json(body('Get_response_details')?['lots-of-letters-and-numbers-2'])[0]['name']
I'm not sure if this is the best approach or if it comes with any 'gotchas'.
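Outside Power Automate, the root cause can be sketched in Python: the file-upload answer is a JSON array serialized as a *string* inside the response JSON, so one parse is not enough and a second parse of just that field is required. The field names below are placeholders mirroring the masked ones above:

```python
import json

# Simulated Forms response body: the file-upload answer is itself
# a JSON-encoded string, not a nested object (field names are placeholders)
raw_body = json.dumps({
    "responder": "me#domain.com",
    "letters-and-numbers-here-2":
        json.dumps([{"name": "File 01 Name.docx", "size": 20411}]),
})

body = json.loads(raw_body)                 # first parse ("Parse JSON")
field = body["letters-and-numbers-here-2"]
assert isinstance(field, str)               # still a string, as observed

files = json.loads(field)                   # second parse (json(...) in the flow)
print(files[0]["name"])                     # -> File 01 Name.docx
```

This is why indexing the field with `?[0]` fails before the second parse: property selection is not supported on a string, exactly as the flow error reports.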

Reading textfile and insert data into database

I have an exported text file that looks like this:
Text file
So it is built up like a table, and it is Unicode encoded. I don't create the export file, so please don't tell me to use CSV files.
I already have a MariaDB database in place, with a table that contains the respective headers (ID, Name, ...).
My goal is to read the data from the text file and insert it correctly into the database. I am using Node.js and would like to know what steps I need to follow in order to accomplish my goal.
Can I use this instruction URL? I already tried it this way, but I think the Unicode encoding caused some problems.
You should really consider using CSV files to import/export any column-oriented data; a tabular structure is more or less implied by them.
In this case, you'd have to write some sort of parser which reads your file one line at a time and then splits the data using multiple spaces as a delimiter; overall it's not really worth it.
Look into using CSVs; there are even npm modules available for working with CSV.
So the way you would approach this is with streams. You would read each line into a buffer, parse it, and save the data into the database.
Have a look at the csv-streamify library (https://github.com/klaemo/csv-streamify). It does the parsing for you, and you can configure it to use tab as the delimiter.
const csv = require('csv-streamify');
const fs = require('fs');

const parser = csv({
  delimiter: '\t',
  columns: true,    // use the first row as the object keys
  objectMode: true, // emit parsed objects rather than JSON strings
});

// Emits each line as an object with the headers as keys and the row values as properties
parser.on('data', (line) => {
  console.log(line);
  // { ID: '1', Name: 'test', Date: '2010', State: 'US' }
  // Insert the row into the DB here
  // ...
});

fs.createReadStream('data.txt').pipe(parser);

Python format incomplete date to YYYYMM

As a start, I am extremely new at Python.
I am receiving an Excel file where the date field is incomplete. The value displays as "190808" (YYMMDD) instead of "2019-08-08".
Part of my automation attempt is to move the file to a different location, where the file is renamed. I want to use the date field to change the file name to the file description and date (e.g. "Sales figures 201908").
The code I have only works if the date field is already a complete date:
str(df['Bank date'][0].strftime("%Y%m"))
I have tried dateparser with the following:
dateparser.parse(df['Bank date'][0].strftime("%Y.%m"))
The error I am receiving is 'numpy.int64' object has no attribute 'strftime'
Any help will do.
Thanks.
I modified it slightly and built my own date-string using slicing.
vOldDate = str(df['Bank date'][0])
vNewDate = '20' + vOldDate[:2] + '.' + vOldDate[2:4]
Numpy is interpreting the date as an integer. To use dateparser, you need to convert that value into a string first, then parse that string, and then format the result:
dateparser.parse(str(df['Bank date'][0])).strftime("%Y.%m")
Since the input format is known, you should specify it explicitly to ensure you get the right date:
>>> dateparser.parse(str(190808), date_formats=['%y%m%d']).strftime("%Y.%m")
'2019.08'
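As an alternative sketch using only the standard library, datetime.strptime can parse the YYMMDD value directly once it is cast to a string, avoiding the dateparser dependency entirely (the sample value mirrors the one in the question):

```python
from datetime import datetime

# The cell comes back from pandas as numpy.int64, so cast to str first,
# then parse the two-digit-year YYMMDD format
raw = 190808  # e.g. df['Bank date'][0]
new_date = datetime.strptime(str(raw), "%y%m%d").strftime("%Y%m")
print(new_date)  # -> 201908
```

One caveat: `%y` pivots two-digit years (69-99 become 19xx, 00-68 become 20xx), which matters if the data could contain pre-2000 dates.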

Format column as Zip Code when exporting from ag-grid

I am exporting to Excel using the built-in ag-Grid Enterprise solution. Whenever I export a zip code, any zip code that begins with 0 loses the leading zero in the Excel file. I know that Excel supports a special Zip Code format; however, I keep striking out in my attempts.
{
  headerName: 'ZIP',
  type: 'zip code',
  filter: 'number',
  unSortIcon: true,
  field: 'Zip',
  filterParams: {
    filterOptions: this.filterOption,
    clearButton: true,
    applyButton: true
  },
  minWidth: 120
}
That is how the column is currently defined within the columnDefs of the gridOptions.
Thank you in advance for any assistance or insight you may have.
Regards,
Youssef
You can format the required cells in your spreadsheet as "00000", which will show the required formatting: Format Cells -> Special -> Zip Code.
