I am new to Node.js. I want to print Excel data without using any library, since libraries add overhead and we have to deal with large files in memory. I tried reading a .xlsx file with fs.createReadStream() and logging the data to the console; it prints garbled characters instead of the actual data.
Node.js code:
const fs = require('fs');

const stream = fs.createReadStream('example.xlsx');
stream.on('data', function (data) {
  // data is already a Buffer of raw bytes; decoding it as text
  // prints the zip container's bytes, not the cell values
  console.log(data.toString('utf8'));
});
Can someone tell me how to get the actual data, or suggest a library that can read large Excel files?
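For context: a .xlsx file is not text at all, it is a ZIP archive of XML parts, so streaming its raw bytes will always print binary noise; some parser is unavoidable. Below is a minimal sketch using the SheetJS xlsx package (npm install xlsx), assuming the file fits in memory; for very large files a streaming parser is a better fit.

const XLSX = require('xlsx');

// readFile unzips the archive and parses the sheet XML
const workbook = XLSX.readFile('example.xlsx');
const firstSheet = workbook.Sheets[workbook.SheetNames[0]];

// sheet_to_json returns an array of row objects keyed by column header
console.log(XLSX.utils.sheet_to_json(firstSheet));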
let dataCer = '0�\u0007\u00060�\u0006��\u0003\u0002\u0001\u0002\u0002\u0010Q��\u0000����K��Z�Q��0\n\u0006\b*�\u0003\u0007\u0001\u0001\u0003\u00020�\u0001l1\u001e0\u001c\u0006\.............'
fs.writeFile('111.cer', dataCer, err => { if (err) throw err; });
let dataPdf = '%PDF-1.4\r\n1 0 obj\r\n<< \r\n/Length 9947\r\n/Filter /FlateDecode\r\n>>\r\nstream\r\nX��]�n#9p}���\u000f���\u0005\b\u0002X��<\'X \u001f�\u001b\u0010 \u0001���H�,6�R�Z�\u0014�N`�\n�T�t�ڼT\u0015���?ԋz��_�{IN_Bz�����O.............'
fs.writeFile('111.pdf', dataPdf, err => { if (err) throw err; });
I get dataCer and dataPdf from an application via GET requests, and I can only get the data in this encoding. Now I need to save them as files.
I will also need to save other kinds of data to files in the same way (zip, rar, png, jpeg, ...).
When I use fs.writeFile, I get files that do not open. fs.writeFile does not preserve the original binary data, and no choice of encoding gives me the desired result.
Please tell me how to get around this error. Or which library can save arbitrary binary data to a file in Node.js, regardless of encoding?
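The underlying issue is that once a binary payload has been decoded into a UTF-8 string, the � replacement characters mean bytes have already been lost; no write-side encoding can restore them. The fix is upstream: receive the response as Buffer chunks and write the Buffer untouched. A minimal sketch, assuming the file is fetched over HTTPS from a placeholder URL:

const https = require('https');
const fs = require('fs');

https.get('https://example.com/file.cer', res => {
  const chunks = [];
  // without res.setEncoding(), each chunk arrives as a raw Buffer
  res.on('data', chunk => chunks.push(chunk));
  res.on('end', () => {
    // Buffer.concat preserves the bytes exactly as received
    fs.writeFile('111.cer', Buffer.concat(chunks), err => {
      if (err) throw err;
    });
  });
});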
I need to read an xlsx file in Node.js. The xlsx contains text with accents, apostrophes, and so on. Then I have to save the text in a JSON file.
What are the best practices for performing that task?
Stage 1 - take a look at the node-xlsx module, or the more robust (and possibly better for your needs) xlsx.
Stage 2 - writing the file to JSON: if the module can return a JSON format, great. If you use xlsx, it has an option to convert to JSON --> take a look here.
Since you may need to strip and/or protect special accents etc., you may want to validate the returned data before producing the JSON file.
As for actually writing the JSON file, there is a huge number of npm modules for the task.
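For the xlsx route, a minimal sketch of both stages (filenames are placeholders); JSON.stringify emits proper UTF-8 text, so accents and apostrophes survive unchanged:

const XLSX = require('xlsx');
const fs = require('fs');

const workbook = XLSX.readFile('input.xlsx');
const sheet = workbook.Sheets[workbook.SheetNames[0]];
const rows = XLSX.utils.sheet_to_json(sheet); // Stage 1: array of row objects

// Stage 2: write as UTF-8 JSON, preserving accented characters
fs.writeFileSync('output.json', JSON.stringify(rows, null, 2), 'utf8');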
I have a Lambda script that retrieves an email from S3, parses it with MailParser (streaming), transforms attachments to CSV if necessary, and stores them in a different bucket. The script handles CSV files (no conversion) and zip files, but I can't figure out how to convert xls to CSV using streams.
ExcelJS looks really good for this, but I can't get it to work for some reason (and I'm really new to streams, so that's probably it).
var Excel = require('exceljs');

var workbook = new Excel.Workbook();
var xstream = workbook.xlsx.createInputStream();
xstream.on('done', function (data) {
  // convert to csv and s3.upload
});
attachment.stream.pipe(xstream);
I'm getting an error
Error: Unexpected xml node in parseOpen
before the 'done' event, so I'm not sure if I'm using createInputStream correctly.
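Two hedged observations. First, ExcelJS reads .xlsx only; piping a legacy binary .xls attachment into it is a plausible cause of the "Unexpected xml node in parseOpen" error, since the parser expects zipped XML. Second, recent ExcelJS versions document a streaming reader that may be easier to drive than createInputStream; a sketch, assuming ExcelJS v4+ and a genuine .xlsx stream:

const Excel = require('exceljs');

async function attachmentToCsv(attachmentStream) {
  const reader = new Excel.stream.xlsx.WorkbookReader(attachmentStream, {});
  const lines = [];
  for await (const worksheet of reader) {
    for await (const row of worksheet) {
      // row.values is 1-indexed; slot 0 is always empty
      lines.push(row.values.slice(1).join(','));
    }
  }
  return lines.join('\n'); // naive CSV: no quoting of embedded commas
}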
I am using Node.js to parse xlsx files with the module "jsxlsx_async", and the values will be stored in MongoDB.
My code:
var xlsx = require('jsxlsx_async');

xlsx(file, function (err, wb) {
  if (err) {
    // handle err
  }
  // get data array
  wb.getSheetDataByName('Sheet1', function (err, data) {
    if (err) {
      // handle err
    }
    // handle data
    console.log(data);
  });
});
Environment: Node.js v0.10.25, MongoDB v2.2.6, OS: Windows 8, RAM: 6 GB
My steps:
1. Read the uploaded xlsx file and save the values into a JS object.
2. Save those values into MongoDB collections by iterating over the JS object.
This works fine with smaller xlsx files, but I want to parse xlsx files larger than 50 MB.
My problem is that I am storing the entire contents of the xlsx in a single JS object.
Please suggest a better approach.
Is there a way to read the xlsx row by row and save the values as soon as each row is read?
I had a similar problem before: I needed to read a huge JSON object from a txt file, but the process was killed because it ran out of memory. My solution there was to split the huge file into two files.
For your problem, my suggestions are:
Try increasing the memory limit of the V8 engine: https://github.com/joyent/node/wiki/FAQ Example (8192 means 8 GB):
node --max-old-space-size=8192 server.js
If #1 does not work, try reading the xlsx file row by row with this lib: https://github.com/ffalt/xlsx-extract (see the sketch after this list).
If #1 and #2 do not work, try https://github.com/extrabacon/xlrd-parser
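For option #2, a minimal row-by-row sketch based on the xlsx-extract README (option names and events are taken from that README; `collection` is an assumed, already-open MongoDB collection):

var XLSX = require('xlsx-extract').XLSX;

new XLSX().extract('big.xlsx', { sheet_nr: 1 })
  .on('row', function (row) {
    // only one row is in memory at a time; insert it immediately
    collection.insert({ values: row });
  })
  .on('error', function (err) {
    console.error('parse error', err);
  })
  .on('end', function () {
    console.log('done');
  });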
I need to write a stored procedure/function that reads data from a worksheet of an Excel workbook. How do I do that in DB2? I am using the AIX OS.
I tried Read Excel from DB2, but it won't work on my OS.
I also tried
IMPORT FROM FileName.csv OF DEL COMMITCOUNT 1000 INSERT INTO TableName
but in vain.
You have several options; the cleanest is probably to write a Java stored procedure utilising the Apache POI library, if you intend to read Excel workbooks (.xls or .xlsx) rather than plain CSV-formatted text files.
Not as clean, but just as effective: you can write a Perl / Python / PHP script that reads the file and returns a line at a time, and invoke that script from a stored procedure; see: Making Operating System Calls from SQL.
It would be better to convert your Excel file to a flat file such as CSV if possible, because DB2 does not natively understand the Excel format. A CSV file can be processed natively using DB2's IMPORT, LOAD, or INGEST tools.