My question is: is there a way to extract the FlexRay signals from an MF4 file using a DBC/ARXML file and save them to MDF format?
Not until somebody releases a reliable parser for the FIBEX and ARXML database formats for FlexRay.
I want to know how I can import data from a CSV file and then how to work with it.
I have loaded the file but do not know how to read it.
'',' fixdsv dat ] load '/Users/apple/Downloads/data'
Assuming that the file /Users/apple/Downloads/data is a CSV file, you should be able to load it into a J session as a boxed table like this:
load 'csv'
data=: readcsv '/Users/apple/Downloads/data'
If the file uses delimiters other than commas (e.g. Tabs) then you could use the tables/dsv addon.
load 'tables/dsv'
data=: TAB readdsv '/Users/apple/Downloads/data'
See the J wiki for more information on the tables/csv and tables/dsv addons.
Once you have the file, I think that I would start by reading it into a variable and then working with that.
data=: 1!:1 <'filepath/filename'  NB. the path and filename need to be a boxed string
http://www.jsoftware.com/help/dictionary/dx001.htm
You could also look at Jd, which is a relational database system, if you are more focused on file management than on data processing.
http://code.jsoftware.com/wiki/Jd/Index
I need to read an xlsx file in Node.js. The xlsx contains text with accents, apostrophes and so on. I then have to save the text to a JSON file.
What are the best practices for performing this task?
Stage 1 - take a look at the node-xlsx module, or at xlsx, which is more robust and possibly better for your needs.
Stage 2 - writing the file to JSON: if the module can return a JSON format then great. If you use xlsx, it has an option to output JSON --> take a look at its documentation.
Since you may need to strip and/or protect special accents and the like, you may need to validate the data that is returned before producing the JSON file.
As for actually writing a JSON file, there are a huge number of npm modules for the task.
I need a microcontroller to read a data file that's been stored in a Zip file (actually a custom OPC-based file format, but that's irrelevant). The data file has a known name, path, and size, and is guaranteed to be stored uncompressed. What is the simplest & fastest way to find the start of the file data within the Zip file without writing a complete Zip parser?
NOTE: I'm aware of the Zip comment section, but the data file in question is too big to fit in it.
I ended up writing a simple parser that finds the file data in question using the central directory.
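For reference, a minimal sketch of that approach, written in Python for readability rather than the C you would actually run on a microcontroller; it reads the whole archive into memory, assumes a plain ZIP (no ZIP64), and the file names in the usage line are made up:

import struct

def find_stored_entry(zip_path, target_name):
    # Read the whole archive; on a microcontroller you would seek instead.
    with open(zip_path, 'rb') as f:
        data = f.read()

    # 1. Find the End Of Central Directory record (signature PK\x05\x06),
    #    scanning backwards because an archive comment may follow it.
    eocd = data.rfind(b'PK\x05\x06')
    if eocd < 0:
        raise ValueError('EOCD record not found')
    cd_size, cd_offset = struct.unpack_from('<II', data, eocd + 12)

    # 2. Walk the central directory entries (signature PK\x01\x02)
    #    until we hit the entry with the known name.
    pos = cd_offset
    while pos < cd_offset + cd_size:
        if data[pos:pos + 4] != b'PK\x01\x02':
            raise ValueError('malformed central directory')
        method, = struct.unpack_from('<H', data, pos + 10)
        uncomp_size, = struct.unpack_from('<I', data, pos + 24)
        name_len, extra_len, comment_len = struct.unpack_from('<HHH', data, pos + 28)
        local_offset, = struct.unpack_from('<I', data, pos + 42)
        name = data[pos + 46:pos + 46 + name_len].decode('utf-8', 'replace')

        if name == target_name:
            if method != 0:
                raise ValueError('entry is not stored uncompressed')
            # 3. The local header's name/extra lengths can differ from the
            #    central directory's copy, so re-read them before computing
            #    the offset of the raw file data.
            lname_len, lextra_len = struct.unpack_from('<HH', data, local_offset + 26)
            data_start = local_offset + 30 + lname_len + lextra_len
            return data_start, uncomp_size

        pos += 46 + name_len + extra_len + comment_len

    raise FileNotFoundError(target_name)

# Hypothetical archive and entry names, purely for illustration.
offset, size = find_stored_entry('package.opc', 'data/measurements.bin')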
First post here so if I am doing something stupid, please let me know.
CentOS 7 (3.10.0-123.6.3.el7.x86_64), Perl 5 (revision 5, version 16, subversion 3), Spreadsheet::ParseXLSX v0.16
I have an .xlsx that has some embedded .pdfs and another .xlsx embedded within it. I am trying to read these and insert them into a CSV along with the original .xlsx. After some googling to no avail, my thought was to unzip the original .xlsx and read the files from the xl/embeddings directory, but alas there is something about those files that prevents Adobe Reader from opening the oleObject(X).bin files in that directory.
I have been able to successfully read all the worksheets that do not contain embedded docs using Spreadsheet::ParseXLSX; it works great. I have googled for a solution but am either not using the correct search terms or ...
If you know how to do this, can you point me to some instructions?
TIA,
JohnM
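One avenue that might help (not verified against these particular files): the oleObject(X).bin files under xl/embeddings are OLE2 compound containers wrapping the embedded documents, which is why Adobe Reader cannot open them directly. Below is a rough sketch of unpacking them, in Python with the olefile package rather than Perl (a Perl counterpart would be something like OLE::Storage_Lite); the 'CONTENTS' stream name is an assumption that holds for Acrobat-style embeddings but not necessarily for every object type.

import io
import os
import zipfile

import olefile  # assumption: the olefile package is installed (pip install olefile)

def dump_embedded_objects(xlsx_path, out_dir='.'):
    # List and extract whatever lives under xl/embeddings/ in an .xlsx.
    with zipfile.ZipFile(xlsx_path) as z:
        for name in z.namelist():
            if not name.startswith('xl/embeddings/'):
                continue
            raw = z.read(name)
            base = os.path.join(out_dir, os.path.basename(name))

            if not olefile.isOleFile(io.BytesIO(raw)):
                # Some embeddings (e.g. another .xlsx) are stored directly.
                with open(base, 'wb') as f:
                    f.write(raw)
                continue

            ole = olefile.OleFileIO(io.BytesIO(raw))
            for stream in ole.listdir():
                print(name, '->', '/'.join(stream))
            # For Acrobat-embedded PDFs the payload is typically in a
            # 'CONTENTS' stream; other object types use different streams.
            if ole.exists('CONTENTS'):
                with open(base + '.pdf', 'wb') as f:
                    f.write(ole.openstream('CONTENTS').read())
            ole.close()

dump_embedded_objects('original.xlsx')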
I am trying to work with Paradox files and convert them to an Excel file.
Does anyone know how to achieve such a conversion?
I wrote a small Python script to read Paradox .DB files.
But please be careful, it's not complete: some field types may not be converted (only memos AFAIK, but I'm not a Paradox expert).
https://gist.github.com/BertrandBordage/9892556
You can either read a .DB file as Python objects using paradox.read('your_file.DB') or convert it to a CSV file using paradox.to_csv('your_file.DB').
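A short usage sketch, assuming the gist is saved locally as paradox.py; the pandas step for getting to .xlsx is my addition to answer the original Excel question and is not part of the gist:

import paradox       # the gist above, saved next to this script as paradox.py
import pandas as pd  # only needed for the final Excel step; not part of the gist

# Work with the rows directly as Python objects...
rows = paradox.read('your_file.DB')

# ...or dump the table to CSV first.
paradox.to_csv('your_file.DB')

# Getting from CSV to Excel is then a one-liner (this assumes to_csv
# writes your_file.csv alongside the .DB, which I have not verified).
pd.read_csv('your_file.csv').to_excel('your_file.xlsx', index=False)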