Fast access to Excel data in X++

Can someone give me a clue how I can get fast access to Excel data? The workbook currently contains more than 200K records, and retrieving them all from X++ code takes a lot of time.
These are the classes I am using to retrieve the data: SysExcelApplication, SysExcelWorksheet and SysExcelCells.
I am using the code below to retrieve the cells.
excelApp.workbooks().open(filename);
excelWorksheet = excelApp.worksheets().itemFromName(itemName);
excelCells = excelWorkSheet.cells();
// pseudo code
loop over all rows
    excelCells.item(rowCounter, column1);
    // similar item() calls for every other column
end of loop
If any special property needs to be set here, please tell me.

Overall performance will be a lot better (hugely so) if you can use CSV files. If you are forced to start from Excel files, it is easy and straightforward to convert the Excel file to a CSV file and then read the CSV file. If you can't work that way, you can read Excel files through ODBC (using a connection string, as when connecting to a database), which will perform better than the Office API.
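For what it's worth, the conversion step can also be scripted outside AX. This is only a minimal sketch, assuming Node.js and the SheetJS xlsx package are available and the workbook holds plain values on one sheet; the file names are placeholders.
// Hypothetical pre-processing step: convert data.xlsx to data.csv with SheetJS
var XLSX = require('xlsx');
var fs = require('fs');

var wb = XLSX.readFile('data.xlsx');              // load the workbook
var firstSheet = wb.Sheets[wb.SheetNames[0]];     // take the first sheet
var csv = XLSX.utils.sheet_to_csv(firstSheet);    // serialize it as CSV text
fs.writeFileSync('data.csv', csv);                // CommaIo can then read this file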

First things first: reading 200K records will take a while from an Excel file (or any other file).
You can read an Excel file using ExcelIo, but with no performance guarantees :)
As I see it, you have 3 options (best performance listed first):
Convert your Excel file to a CSV file, then read it with CommaIo.
Read the Excel file using C#, then call back into X++.
Accept the fact and take the time.

Use CSV, it is faster. Below is a code example:
/* Excel import */
#AviFiles
#define.CurrentVersion(1)
#define.Version1(1)
#localmacro.CurrentList
#endmacro

FilenameOpen            filename;
CommaIo                 file;
container               con;

/* File open dialog */
Dialog                  dialog;
DialogField             dialogFilename;
DialogField             dialogSiteID;
DialogField             dialogLocationId;
DialogButton            dialogButton;
InventSite              objInventSite;
InventLocation          objInventLocation;
InventSiteID            objInventSiteID;
InventLocationId        objInventLocationID;
int                     row;
str                     sSite;
NoYes                   isCountingFound;
int                     iQty;
Counter                 insertCounter;
Price                   itemPrice;
ItemId                  _itemId;
EcoResItemColorName     _inventColorId;
EcoResItemSizeName      _inventSizeId;

dialog           = new Dialog("Please select file");
dialogSiteID     = dialog.addField(extendedTypeStr(InventSiteId), objInventSiteID);
dialogLocationId = dialog.addField(extendedTypeStr(InventLocationId), objInventLocationID);
dialogFilename   = dialog.addField(extendedTypeStr(FilenameOpen));

dialog.filenameLookupFilter(["#SYS100852", "*.csv"]);
dialog.filenameLookupTitle("Please select file");
dialog.caption("Please select file");
dialogFilename.value(filename);

if (!dialog.run())
    return;

objInventSiteID     = dialogSiteID.value();
objInventLocationID = dialogLocationId.value();

// Validate that the selected warehouse belongs to the selected site
while select maxof(InventSiteId) from objInventLocation
    where objInventLocation.InventLocationId == objInventLocationID
{
    if (objInventLocation.InventSiteID != objInventSiteID)
    {
        warning("The warehouse does not belong to the selected site. Please select a valid warehouse.", "Counting lines import utility");
        return;
    }
}

filename = dialogFilename.value();
file     = new CommaIo(filename, 'r');
file.inFieldDelimiter(',');

try
{
    if (file)
    {
        ttsBegin;
        while (file.status() == IO_Status::OK)
        {
            con = file.read();
            if (con)
            {
                row++;
                if (row == 1)
                {
                    // The first row must contain the expected column headers
                    if (strUpr(strLTrim(strRTrim(conPeek(con, 1)))) != "ITEM"
                     || strUpr(strLTrim(strRTrim(conPeek(con, 2)))) != "COLOR"
                     || strUpr(strLTrim(strRTrim(conPeek(con, 3)))) != "SIZE"
                     || strUpr(strLTrim(strRTrim(conPeek(con, 4)))) != "PRICE")
                    {
                        error("The imported file does not match the expected format.");
                        ttsAbort;
                        return;
                    }
                }
                else
                {
                    // Data rows: pick the four columns apart
                    isCountingFound = NoYes::No;
                    _itemId         = "";
                    _inventColorId  = "";
                    _inventSizeId   = "";

                    _itemId         = strLTrim(strRTrim(conPeek(con, 1)));
                    _inventColorId  = strLTrim(strRTrim(conPeek(con, 2)));
                    _inventSizeId   = strLTrim(strRTrim(conPeek(con, 3)));
                    itemPrice       = any2real(strLTrim(strRTrim(conPeek(con, 4))));
                }
            }
        }
        if (row <= 1)
        {
            ttsAbort;
            warning("No data found in the CSV file");
        }
        else
        {
            ttsCommit;
        }
    }
}
catch
{
    ttsAbort;
    error("Upload failed");
}


How to Flatten / Recompile Excel Spreadsheet Using sheetjs or exceljs on Write

We use Excel as a configuration file for clients. However, our processes only run on Linux servers. We need to take a master file, update all the client workbooks with the new information, and commit to GitLab. The users then check it out, add their own changes, commit back to GitLab, and a process promotes the workbook to Server A.
This process works great using Node.js (exceljs).
Another process on a different server uses Perl to pick up the workbook and then saves each sheet as a CSV file.
The problem is that what gets written out is the data from the ORIGINAL worksheet and not the updated changes. This is true of both Perl and Node.js. Code for the Perl and Node.js xlsx-to-csv conversion is at the end of the post.
Modules Tried:
perl : Spreadsheet::ParseExcel; Spreadsheet::XLSX;
nodejs: node-xlsx, exceljs
I assume it has to do with Microsoft using XML inside the Excel wrapper: it keeps the old version as history, and since that carries the original sheet name, it gets pulled instead of the latest updated version.
When I manually open in Excel, everything is correct with the new info as expected.
When I use "Save as..." instead of "Save" then the perl process is able to correctly write out the updated worksheet as csv. So our workaround is having the users always "Save as.." before committing their extra changes to GitLab. We'd like to rely on training, but the sheer number of users and clients makes trusting that the user will "Save AS..." is not practical.
Is there a way to replicate a "Save As..." during my promotion to Server A or at least be able to tell if the file had been saved correctly? I'd like to stick with excelJS, but I'll use whatever is necessary to replicate the "Save as..." which seems to recompile the workbook.
In addition to nodejs, I can use perl, python, ruby - whatever it takes - to make sure the csv creation process picks up the new changes.
Thanks for your time and help.
#!/usr/bin/env perl
use strict;
use warnings;
use Carp;
use Getopt::Long;
use Pod::Usage;
use File::Basename qw/fileparse/;
use File::Spec;
use Spreadsheet::ParseExcel;
use Spreadsheet::XLSX;
use Getopt::Std;
my %args = ();
my $help = undef;
GetOptions(
\%args,
'excel=s',
'sheet=s',
'man|help'=>\$help,
) or die pod2usage(1);
pod2usage(1) if $help;
pod2usage(-verbose=>2, exitstatus=>0, output=>\*STDOUT) unless $args{excel} || $args{sheet};
pod2usage(3) if $help;
pod2usage(-verbose=>2, exitstatus=>3, output=>\*STDOUT) unless $args{excel};
if (_getSuffix($args{excel}) eq ".xls") {
my $file = File::Spec->rel2abs($args{excel});
if (-e $file) {
print _XLS(file=>$file, sheet=>$args{sheet});
} else {
exit 1;
die "Error: Can not find excel file. Please check for exact excel file name and location. \nError: This Program is CASE SENSITIVE. \n";
}
}
elsif (_getSuffix($args{excel}) eq ".xlsx") {
my $file = File::Spec->rel2abs($args{excel});
if (-e $file) {
print _XLSX(file=>$file, sheet=>$args{sheet});
}
else {
exit 1;
die "\nError: Can not find excel file. Please check for exact excel file name and location. \nError: This Program is CASE SENSITIVE.\n";
}
}
else {
exit 5;
}
sub _XLS {
my %opts = (
    file  => undef,
    sheet => undef,
    @_,
);
my $aggregated = ();
my $parser = Spreadsheet::ParseExcel->new();
my $workbook = $parser->parse($opts{file});
if (!defined $workbook) {
exit 3;
croak "Error: Workbook not found";
}
foreach my $worksheet ($workbook->worksheet($opts{sheet})) {
if (!defined $worksheet) {
exit 2;
croak "\nError: Worksheet name doesn't exist in the Excel File. Please check the WorkSheet Name. \nError: This program is CASE SENSITIVE.\n\n";
}
my ($row_min, $row_max) = $worksheet->row_range();
my ($col_min, $col_max) = $worksheet->col_range();
foreach my $row ($row_min .. $row_max){
foreach my $col ($col_min .. $col_max){
my $cell = $worksheet->get_cell($row, $col);
if ($cell) {
$aggregated .= $cell->value().',';
}
else {
$aggregated .= ',';
}
}
$aggregated .= "\n";
}
}
return $aggregated;
}
sub _XLSX {
eval {
my %opts = (
    file  => undef,
    sheet => undef,
    @_,
);
my $aggregated_x = ();
my $excel = Spreadsheet::XLSX->new($opts{file});
foreach my $sheet ($excel->worksheet($opts{sheet})) {
if (!defined $sheet) {
exit 2;
croak "Error: WorkSheet not found";
}
if ( $sheet->{Name} eq $opts{sheet}) {
$sheet->{MaxRow} ||= $sheet->{MinRow};
foreach my $row ($sheet->{MinRow} .. $sheet->{MaxRow}) {
$sheet->{MaxCol} ||= $sheet->{MinCol};
foreach my $col ($sheet->{MinCol} .. $sheet->{MaxCol}) {
my $cell = $sheet->{Cells}->[$row]->[$col];
if ($cell) {
$aggregated_x .= $cell->{Val}.',';
}
else {
$aggregated_x .= ',';
}
}
$aggregated_x .= "\n";
}
}
}
return $aggregated_x;
}
};
if ($@) {
    exit 3;
}
sub _getSuffix {
my $f = shift;
my ($basename, $dirname, $ext) = fileparse($f, qr/\.[^\.]*$/);
return $ext;
}
sub _convertlwr{
my $f = shift;
my ($basename, $dirname, $ext) = fileparse($f, qr/\.[^\.]*$/);
return $ext;
}
var xlsx = require('node-xlsx');
var fs = require('fs');

var obj = xlsx.parse(__dirname + '/test2.xlsx'); // parses a file
var rows = [];
var writeStr = "";

// looping through all sheets
for (var i = 0; i < obj.length; i++) {
    var sheet = obj[i];
    // loop through all rows in the sheet
    for (var j = 0; j < sheet['data'].length; j++) {
        // add the row to the rows array
        rows.push(sheet['data'][j]);
    }
}

// creates the csv string to write it to a file
for (var i = 0; i < rows.length; i++) {
    writeStr += rows[i].join(",") + "\n";
}

// writes to a file, but you will presumably send the csv as a
// response instead
fs.writeFile(__dirname + "/test2.csv", writeStr, function(err) {
    if (err) {
        return console.log(err);
    }
    console.log("test2.csv was saved in the current directory!");
});
The answer is: it's impossible. In order to update data inside a workbook that uses Excel functions, you must open it in Excel for the formulas to recalculate. It's that simple.
You could pull the workbook apart, create your own JavaScript functions, run the data through them and then write it out, but there are so many possible issues that it is not recommended.
Perhaps one day Microsoft will release a Linux Excel engine API. But it's still unlikely that such a thing would work via the command line without invoking the GUI.
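As a partial mitigation only, and purely as a sketch rather than a fix for the CSV export: exceljs exposes a workbook-level flag that asks Excel to recalculate everything the next time the file is opened, and it also surfaces the cached result that a non-Excel parser will see for each formula cell. The file names below are placeholders.
// Hedged exceljs sketch: flag the workbook for full recalculation on next open in Excel.
// A CSV exporter that never opens Excel still only sees the cached result stored per formula cell.
var ExcelJS = require('exceljs');

var workbook = new ExcelJS.Workbook();
workbook.xlsx.readFile('client.xlsx').then(function () {
    workbook.calcProperties.fullCalcOnLoad = true;    // ask Excel to recalc when the file is next opened
    var cell = workbook.worksheets[0].getCell('A1');  // a formula cell reads back as { formula, result }
    console.log(cell.value);                          // "result" is the cached value written into the file
    return workbook.xlsx.writeFile('client-updated.xlsx');
});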

How do I download data trees to CSV?

How can I export nested tree data as a CSV file when using Tabulator? I tried using the table.download("csv","data.csv") function; however, only the top-level data rows are exported.
It looks like a custom file formatter or another option may be necessary to achieve this. It seems silly to rewrite the CSV downloader, so while poking around the CSV downloader in the download.js module, it looks like adding a recursive call to the row parser when it finds a "_children" field might work.
I am having difficulty figuring out where to get started.
Ultimately, I need the parent-to-child relationship represented in the CSV data with a value in a parent ID field in the child rows (this field can be blank in the top-level parent rows because they have no parent). I think I would need to include an ID and a parent ID in the data table to achieve this, and perhaps enforce the validation of that key with some additional functions as data is inserted into the table.
Below is how I am currently exporting nested data tables to CSV; a sketch of the row data it expects follows the code. It inserts a new column at the end with a parent row identifier of your choice. It would be easy to take that out or make it conditional if you do not need it.
// Export CSV file to download
$("#export-csv").click(function(){
table.download(dataTreeCSVfileFormatter, "data.csv",{nested:true, nestedParentTitle:"Parent Name", nestedParentField:"name"});
});
// Modified CSV file formatter for nested data trees
// This is a copy of the CSV formatter in modules/download.js
// with additions to recursively loop through children arrays and add a Parent identifier column
// options: nested:true, nestedParentTitle:"Parent Name", nestedParentField:"name"
var dataTreeCSVfileFormatter = function(columns, data, options, setFileContents, config){
//columns - column definition array for table (with columns in current visible order);
//data - currently displayed table data
//options - the options object passed from the download function
//setFileContents - function to call to pass the formatted data to the downloader
var self = this,
titles = [],
fields = [],
delimiter = options && options.delimiter ? options.delimiter : ",",
nestedParentTitle = options && options.nestedParentTitle ? options.nestedParentTitle : "Parent",
nestedParentField = options && options.nestedParentField ? options.nestedParentField : "id",
fileContents,
output;
//build column headers
function parseSimpleTitles() {
columns.forEach(function (column) {
titles.push('"' + String(column.title).split('"').join('""') + '"');
fields.push(column.field);
});
if(options.nested) {
titles.push('"' + String(nestedParentTitle) + '"');
}
}
function parseColumnGroup(column, level) {
if (column.subGroups) {
column.subGroups.forEach(function (subGroup) {
parseColumnGroup(subGroup, level + 1);
});
} else {
titles.push('"' + String(column.title).split('"').join('""') + '"');
fields.push(column.definition.field);
}
}
if (config.columnGroups) {
console.warn("Download Warning - CSV downloader cannot process column groups");
columns.forEach(function (column) {
parseColumnGroup(column, 0);
});
} else {
parseSimpleTitles();
}
//generate header row
fileContents = [titles.join(delimiter)];
function parseRows(data,parentValue="") {
//generate each row of the table
data.forEach(function (row) {
var rowData = [];
fields.forEach(function (field) {
var value = self.getFieldValue(field, row);
switch (typeof value) {
case "object":
value = JSON.stringify(value);
break;
case "undefined":
case "null":
value = "";
break;
default:
value = value;
}
//escape quotation marks
rowData.push('"' + String(value).split('"').join('""') + '"');
});
if(options.nested) {
rowData.push('"' + String(parentValue).split('"').join('""') + '"');
}
fileContents.push(rowData.join(delimiter));
if(options.nested) {
if(row._children) {
parseRows(row._children, self.getFieldValue(nestedParentField, row));
}
}
});
}
function parseGroup(group) {
if (group.subGroups) {
group.subGroups.forEach(function (subGroup) {
parseGroup(subGroup);
});
} else {
parseRows(group.rows);
}
}
if (config.columnCalcs) {
console.warn("Download Warning - CSV downloader cannot process column calculations");
data = data.data;
}
if (config.rowGroups) {
console.warn("Download Warning - CSV downloader cannot process row groups");
data.forEach(function (group) {
parseGroup(group);
});
} else {
parseRows(data);
}
output = fileContents.join("\n");
if (options.bom) {
output = "\uFEFF" + output;
}
setFileContents(output, "text/csv");
};
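For reference, the nested row data this formatter is written against would look something like the sketch below. The field names (id, name, value) are illustrative, and _children is simply Tabulator's default tree child field (configurable via dataTreeChildField).
// Illustrative nested row data for a table created with dataTree: true.
// With the options above, each child row gets its parent's "name" in the extra "Parent Name" column.
var tableData = [
    {
        id: 1, name: "Parent A", value: 10,
        _children: [
            { id: 2, name: "Child A1", value: 4 },
            { id: 3, name: "Child A2", value: 6 }
        ]
    },
    { id: 4, name: "Parent B", value: 7 }
];

var table = new Tabulator("#example-table", {
    data: tableData,
    dataTree: true,                          // enable the tree structure
    columns: [
        { title: "Name", field: "name" },
        { title: "Value", field: "value" }
    ]
});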
As of version 4.2 it is currently not possible to include tree data in downloads; this will be coming in a later release.

Empty PHPExcel file using liuggio/ExcelBundle in Symfony

I have some code that iterates over the rows and columns of an Excel sheet and replaces text with other text. This is done with a service that takes the Excel file and a dictionary as parameters, like this:
$mappedTemplate = $this->get('app.entity.translate')->translate($phpExcelObject, $dictionary);
The service itself looks like this.
public function translate($template, $dictionary)
{
    foreach ($template->getWorksheetIterator() as $worksheet) {
        foreach ($worksheet->getRowIterator() as $row) {
            $cellIterator = $row->getCellIterator();
            $cellIterator->setIterateOnlyExistingCells(false); // Loop all cells, even if it is not set
            foreach ($cellIterator as $cell) {
                if (!is_null($cell)) {
                    if (!is_null($cell->getCalculatedValue())) {
                        if (array_key_exists((string)$cell->getCalculatedValue(), $dictionary)) {
                            $worksheet->setCellValue(
                                $cell->getCoordinate(),
                                $dictionary[$cell->getCalculatedValue()]
                            );
                        }
                    }
                }
            }
        }
    }
    return $template;
}
After some debugging I found out that the text actually is replaced and that the service works as it should. The problem is that when I return the new PHPExcel file as a response to download, the Excel file is empty.
This is the code I use to return the file.
// create the writer
$writer = $this->get('phpexcel')->createWriter($mappedTemplate, 'Excel5');

// create the response
$response = $this->get('phpexcel')->createStreamedResponse($writer);

// adding headers
$dispositionHeader = $response->headers->makeDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT,
    $file_name
);
$response->headers->set('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
$response->headers->set('Pragma', 'public');
$response->headers->set('Cache-Control', 'maxage=1');
$response->headers->set('Content-Disposition', $dispositionHeader);

return $response;
What am I missing?
Your code is missing the calls to the writer.
You only create the writer, but never use it, at least not in your shared code examples:
$objWriter = new PHPExcel_Writer_Excel2007($objPHPExcel);
$response = $this->get('phpexcel')->createStreamedResponse($objWriter);
Another thing is the content type: do you have the Apache content types set up correctly?
$response->headers->set('Content-Type', 'application/vnd.ms-excel; charset=utf-8');

Error Running jsx file from Indesign to export each text frame as a .txt file

Last year my colleague helped to build a script for InDesign.
Subsequently, after a system update we no longer have that script, as InDesign CS6 was reinstalled; all we have is the version below.
I am using this code in Adobe InDesign to export each text frame that begins with a particular paragraph style, "PRODUCT HEADING"; however, I get an error message when I run the script...
Script is based on the ExportAllStories.jsx bundled with InDesign, plus a few mods found online.
//ExportAllStories.jsx
//An InDesign CS6 JavaScript
/*
###BUILDINFO### "ExportAllStories.jsx" 3.0.0 15 December 2009
*/
//Exports all stories in an InDesign document in a specified text format.
//
//For more on InDesign scripting, go to http://www.adobe.com/products/indesign/scripting/index.html
//or visit the InDesign Scripting User to User forum at http://www.adobeforums.com
//
main();
function main(){
//Make certain that user interaction (display of dialogs, etc.) is turned on.
app.scriptPreferences.userInteractionLevel = UserInteractionLevels.interactWithAll;
if(app.documents.length != 0){
if (app.activeDocument.stories.length != 0){
myDisplayDialog();
}
else{
alert("The document does not contain any text. Please open a document containing text and try again.");
}
}
else{
alert("No documents are open. Please open a document and try again.");
}
}
function myDisplayDialog(){
with(myDialog = app.dialogs.add({name:"ExportAllStories"})){
//Add a dialog column.
myDialogColumn = dialogColumns.add()
with(myDialogColumn){
with(borderPanels.add()){
staticTexts.add({staticLabel:"Export as:"});
with(myExportFormatButtons = radiobuttonGroups.add()){
radiobuttonControls.add({staticLabel:"Text Only", checkedState:true});
radiobuttonControls.add({staticLabel:"RTF"});
radiobuttonControls.add({staticLabel:"InDesign Tagged Text"});
}
}
}
myReturn = myDialog.show();
if (myReturn == true){
//Get the values from the dialog box.
myExportFormat = myExportFormatButtons.selectedButton;
myDialog.destroy();
myFolder= Folder.selectDialog ("Choose a Folder");
if((myFolder != null)&&(app.activeDocument.stories.length !=0)){
myExportAllStories(myExportFormat, myFolder);
}
}
else{
myDialog.destroy();
}
}
}
//myExportStories function takes care of exporting the stories.
//myExportFormat is a number from 0-2, where 0 = text only, 1 = rtf, and 2 = tagged text.
//myFolder is a reference to the folder in which you want to save your files.
function myExportAllStories(myExportFormat, myFolder){
for(myCounter = 0; myCounter < app.activeDocument.stories.length; myCounter++){
myStory = app.activeDocument.stories.item(myCounter);
myID = myStory.id;
switch(myExportFormat){
case 0:
myFormat = ExportFormat.textType;
myExtension = ".txt"
break;
case 1:
myFormat = ExportFormat.RTF;
myExtension = ".rtf"
break;
case 2:
myFormat = ExportFormat.taggedText;
myExtension = ".txt"
break;
}
if (myStory.paragraphs[0].appliedParagraphStyle.name == "PRODUCT HEADING"){
myFileName = myFileName.replace(/\s*$/,' ');
myFileName2 = myFileName.replace(/\//g, ' ');
myFilePath = myFolder + "/" + myFileName2;
myFile = new File(myFilePath);
myStory.exportFile(myFormat, myFile);
}
}
}
This results in an error on
if (myStory.paragraphs[0].appliedParagraphStyle.name == "PRODUCT HEADING"){
Any advice would be appreciated.
There is definitely a block of text with the style PRODUCT HEADING (all caps) in the InDesign file. We run InDesign CS6 as before.
thanks!
Your problem is most likely with this part: myStory.paragraphs[0]. If the story has no paragraphs, this will give you an error.
You could add a condition before running this line, for example:
if(myStory.paragraphs.length){
    if (myStory.paragraphs[0].appliedParagraphStyle.name == "PRODUCT HEADING"){
        myFileName = myFileName.replace(/\s*$/,' ');
        myFileName2 = myFileName.replace(/\//g, ' ');
        myFilePath = myFolder + "/" + myFileName2;
        myFile = new File(myFilePath);
        myStory.exportFile(myFormat, myFile);
    }
}

Retrieve entire Word document in task pane app / office.js

Working in Word 2013 (desktop) and office.js, we see some functionality around the user's selection (GetSelectedDataAsync, SetSelectedDataAsync), but nothing that might let you view the entire (OpenXML) document. Am I missing something?
Office.context.document.getFileAsync will let you get the entire document in a choice of 3 formats:
compressed: returns the entire document (.pptx or .docx) in Office Open XML (OOXML) format as a byte array
pdf: returns the entire document in PDF format as a byte array
text: returns only the text of the document as a string. (Word only)
Here's the example taken from MSDN:
var i = 0;
var slices = 0;
var myFile;

function getDocumentAsPDF() {
    Office.context.document.getFileAsync("pdf", { sliceSize: 2097152 }, function (result) {
        if (result.status == "succeeded") {
            // If the getFileAsync call succeeded, then
            // result.value will return a valid File Object.
            myFile = result.value;
            slices = myFile.sliceCount;
            document.getElementById("result").innerText = " File size:" + myFile.size + " #Slices: " + slices;
            // Iterate over the file slices.
            for (i = 0; i < slices; i++) {
                var slice = myFile.getSliceAsync(i, function (result) {
                    if (result.status == "succeeded") {
                        doSomethingWithChunk(result.value.data);
                        if (slices == i) // Means it's done traversing...
                        {
                            SendFileComplete();
                        }
                    }
                    else {
                        document.getElementById("result").innerText = result.error.message;
                    }
                });
            }
            myFile.closeAsync();
        }
        else {
            document.getElementById("result2").innerText = result.error.message;
        }
    });
}
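The MSDN snippet leaves doSomethingWithChunk and SendFileComplete undefined. One possible, purely illustrative way to fill them in is to collect each slice's byte array and join them once everything has arrived; the helper names below simply match the snippet above.
// Hypothetical helpers for the example above.
// This simple version assumes slices arrive in order; a robust version would also
// track result.value.index and place each chunk at that position.
var chunks = [];
var receivedSlices = 0;

function doSomethingWithChunk(data) {
    chunks.push(data);                 // for "pdf"/"compressed", data is an array of bytes
    receivedSlices++;
}

function SendFileComplete() {
    if (receivedSlices === slices) {
        var allBytes = [].concat.apply([], chunks);  // flatten the slices into one byte array
        // ...upload allBytes, or base64-encode it before sending to a server
    }
}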
This is not exactly what you asked for (it is only the body of the document), but it helped me, so I am posting it here as it is where I landed when I googled my problem.
The documentation here: https://dev.office.com/reference/add-ins/word/body suggests that getOoxml() will get you the body of the document. There is also the text property, which will return the plain text content.
The way this API works is not overly straightforward; however, the examples in the online docs really help in getting started.
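A minimal sketch of that pattern, assuming the Word JavaScript API (WordApi 1.1 or later) is available to the add-in:
// Read the document body as OOXML via the Word JavaScript API.
Word.run(function (context) {
    var body = context.document.body;
    var ooxml = body.getOoxml();          // queued; populated after context.sync()
    return context.sync().then(function () {
        console.log(ooxml.value);         // OOXML markup for the document body
    });
}).catch(function (error) {
    console.log("Error: " + JSON.stringify(error));
});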
All the best,
