Laravel Excel: import a large Excel file in batches - excel

I have a large number of records in an Excel sheet, and importing them into three database tables takes too much time. To work around this, I want to split the import into batches by creating smaller Excel files with less data, so that I can run Laravel queue jobs on them. I am trying the code below, but it does not work: it throws an "Array to string conversion" error and the sub Excel files are never created. I am using Maatwebsite Excel (Laravel Excel), but I am handling the import in a controller.
Can someone guide me, please?
function importBatchFiles(Request $request)
{
    $this->validate($request, [
        'file' => 'required|mimes:xls,xlsx,csv'
    ]);

    $file = $request->file('file');
    //$fileName = 'orders_'.$request->get('company_id').'_'.date('Y-m-d').uniqid().'.xlsx';

    if ($file->isValid()) {
        //$file->move('order_list', $fileName);
        $data = Excel::toArray(new OrdersImport, request()->file('file'));

        foreach ($data as $key => $value) {
            foreach ($value as $row) {
                $inputs[] = $row;
            }
        }

        $data1 = array_slice($inputs, 1);
        $parts = array_chunk($data1, 500);

        foreach ($parts as $index => $part) {
            $filename = resource_path('pending-files/'.date('Y-m-d').$index.'.'.$request->file->extension());
            file_put_contents($filename, $part);
        }

        return Response::json(['success' => 'Orders Queued for importing.']);
    } else {
        return Response::json(['error' => 'Some error']);
    }
}

When using the WithChunkReading concern, you can execute each chunk as a queue job. You can do so by simply adding the ShouldQueue contract to your import class.
For more details please refer to this link: https://docs.laravel-excel.com/3.1/imports/queued.html
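As a rough sketch of that approach (the OrdersImport class name comes from the question; the Order model and the column mapping here are assumptions for illustration only), the import could look like this, with each chunk dispatched as its own queue job:

<?php

namespace App\Imports;

use App\Order; // hypothetical model; adjust the class/namespace to your application
use Illuminate\Contracts\Queue\ShouldQueue;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithChunkReading;

class OrdersImport implements ToModel, WithChunkReading, ShouldQueue
{
    // Map one spreadsheet row to one model (column order is an assumption).
    public function model(array $row)
    {
        return new Order([
            'company_id' => $row[0],
            'amount'     => $row[1],
        ]);
    }

    // Read the file in chunks of 500 rows; each chunk becomes a queued job.
    public function chunkSize(): int
    {
        return 500;
    }
}

In the controller you can then simply call Excel::import(new OrdersImport, $request->file('file')); because the import implements ShouldQueue, the chunks are processed on the queue instead of in the request, so there is no need to write intermediate sub files yourself.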

Related

Import excel file to database using Laravel

Why do I get an "Undefined offset" error when importing an Excel file into the database using Laravel?
UserImport.php
public function model(array $row)
{
    var_dump($row);

    return new User([
        'name'     => $row[0],
        'email'    => $row[1],
        'password' => Hash::make('password'),
    ]);
}
UserImportController
public function store(Request $request)
{
    $file = $request->file('file');
    Excel::import(new UsersImport, $file);

    return back()->withStatus('Successfully');
}
When uploading the Excel file, it displays like this (see the screenshot). I used var_dump() to inspect the array. I entered 4 rows in the Excel file, but 5 array entries are displayed. Why is that?
I believe it is because of the new line at the end of the file.
You should check whether the row contains only a new line and skip the import for it.
Set an if condition in UserImport.php:
if ($row[0] != "") {
    return new User([
        'name'     => $row[0],
        'email'    => $row[1],
        'password' => Hash::make('password'),
    ]);
}
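Alternatively, if your version of Laravel Excel ships the SkipsEmptyRows concern, you can let the package skip blank trailing rows for you. A minimal sketch, assuming that concern is available and the same column layout as above:

<?php

namespace App\Imports;

use App\User; // adjust to App\Models\User on newer Laravel versions
use Illuminate\Support\Facades\Hash;
use Maatwebsite\Excel\Concerns\SkipsEmptyRows;
use Maatwebsite\Excel\Concerns\ToModel;

// Rows whose cells are all empty (such as the trailing newline row) are skipped.
class UsersImport implements ToModel, SkipsEmptyRows
{
    public function model(array $row)
    {
        return new User([
            'name'     => $row[0],
            'email'    => $row[1],
            'password' => Hash::make('password'),
        ]);
    }
}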

How to Flatten / Recompile Excel Spreadsheet Using sheetjs or exceljs on Write

We use excel as a configuration file for clients. However, our processes only run on linux servers. We need to take a master file, update all the client workbooks with the new information, and commit to GitLab. The users then check it out, add their own changes, commit back to GitLab and a process promotes the workbook to Server A.
This process works great using nodeJS (exceljs)
Another process on a different server is using perl to pick up the workbook and then saves each sheet as a csv file.
The problem is, what gets written out is the data from the ORIGINAL worksheet and not the updated changes. This is true of both perl and nodejs. Code for perl and nodejs xlsx to csv is at the end of the post.
Modules Tried:
perl : Spreadsheet::ParseExcel; Spreadsheet::XLSX;
nodejs: node-xlsx, exceljs
I assume it has to do with Microsoft using XML inside the Excel wrapper: it keeps the old version as history, and since it has the original sheet name, that gets pulled instead of the updated, latest version.
When I manually open in Excel, everything is correct with the new info as expected.
When I use "Save as..." instead of "Save", the perl process is able to correctly write out the updated worksheet as csv. So our workaround is having the users always "Save as..." before committing their extra changes to GitLab. We'd like to rely on training, but with the sheer number of users and clients, trusting that every user will "Save As..." is not practical.
Is there a way to replicate a "Save As..." during my promotion to Server A or at least be able to tell if the file had been saved correctly? I'd like to stick with excelJS, but I'll use whatever is necessary to replicate the "Save as..." which seems to recompile the workbook.
In addition to nodejs, I can use perl, python, ruby - whatever it takes - to make sure the csv creation process picks up the new changes.
Thanks for your time and help.
#!/usr/bin/env perl
use strict;
use warnings;
use Carp;
use Getopt::Long;
use Pod::Usage;
use File::Basename qw/fileparse/;
use File::Spec;
use Spreadsheet::ParseExcel;
use Spreadsheet::XLSX;
use Getopt::Std;

my %args = ();
my $help = undef;

GetOptions(
    \%args,
    'excel=s',
    'sheet=s',
    'man|help' => \$help,
) or die pod2usage(1);

pod2usage(1) if $help;
pod2usage(-verbose => 2, exitstatus => 0, output => \*STDOUT) unless $args{excel} || $args{sheet};
pod2usage(3) if $help;
pod2usage(-verbose => 2, exitstatus => 3, output => \*STDOUT) unless $args{excel};

if (_getSuffix($args{excel}) eq ".xls") {
    my $file = File::Spec->rel2abs($args{excel});
    if (-e $file) {
        print _XLS(file => $file, sheet => $args{sheet});
    } else {
        warn "Error: Can not find excel file. Please check for exact excel file name and location.\nError: This Program is CASE SENSITIVE.\n";
        exit 1;
    }
}
elsif (_getSuffix($args{excel}) eq ".xlsx") {
    my $file = File::Spec->rel2abs($args{excel});
    if (-e $file) {
        print _XLSX(file => $file, sheet => $args{sheet});
    }
    else {
        warn "\nError: Can not find excel file. Please check for exact excel file name and location.\nError: This Program is CASE SENSITIVE.\n";
        exit 1;
    }
}
else {
    exit 5;
}
sub _XLS {
    my %opts = (
        file  => undef,
        sheet => undef,
        @_,
    );
    my $aggregated = '';

    my $parser   = Spreadsheet::ParseExcel->new();
    my $workbook = $parser->parse($opts{file});

    if (!defined $workbook) {
        carp "Error: Workbook not found";
        exit 3;
    }

    foreach my $worksheet ($workbook->worksheet($opts{sheet})) {
        if (!defined $worksheet) {
            carp "\nError: Worksheet name doesn't exist in the Excel File. Please check the WorkSheet Name. \nError: This program is CASE SENSITIVE.\n\n";
            exit 2;
        }

        my ($row_min, $row_max) = $worksheet->row_range();
        my ($col_min, $col_max) = $worksheet->col_range();

        foreach my $row ($row_min .. $row_max) {
            foreach my $col ($col_min .. $col_max) {
                my $cell = $worksheet->get_cell($row, $col);
                if ($cell) {
                    $aggregated .= $cell->value() . ',';
                }
                else {
                    $aggregated .= ',';
                }
            }
            $aggregated .= "\n";
        }
    }
    return $aggregated;
}
sub _XLSX {
    my %opts = (
        file  => undef,
        sheet => undef,
        @_,
    );
    my $aggregated_x = '';

    eval {
        my $excel = Spreadsheet::XLSX->new($opts{file});

        foreach my $sheet ($excel->worksheet($opts{sheet})) {
            if (!defined $sheet) {
                carp "Error: WorkSheet not found";
                exit 2;
            }
            if ($sheet->{Name} eq $opts{sheet}) {
                $sheet->{MaxRow} ||= $sheet->{MinRow};
                foreach my $row ($sheet->{MinRow} .. $sheet->{MaxRow}) {
                    $sheet->{MaxCol} ||= $sheet->{MinCol};
                    foreach my $col ($sheet->{MinCol} .. $sheet->{MaxCol}) {
                        my $cell = $sheet->{Cells}->[$row]->[$col];
                        if ($cell) {
                            $aggregated_x .= $cell->{Val} . ',';
                        }
                        else {
                            $aggregated_x .= ',';
                        }
                    }
                    $aggregated_x .= "\n";
                }
            }
        }
    };
    if ($@) {
        exit 3;
    }
    return $aggregated_x;
}
sub _getSuffix {
    my $f = shift;
    my ($basename, $dirname, $ext) = fileparse($f, qr/\.[^\.]*$/);
    return $ext;
}

sub _convertlwr {
    my $f = shift;
    my ($basename, $dirname, $ext) = fileparse($f, qr/\.[^\.]*$/);
    return $ext;
}
var xlsx = require('node-xlsx')
var fs = require('fs')

var obj = xlsx.parse(__dirname + '/test2.xlsx') // parses a file
var rows = []
var writeStr = ""

//looping through all sheets
for (var i = 0; i < obj.length; i++) {
    var sheet = obj[i]
    //loop through all rows in the sheet
    for (var j = 0; j < sheet['data'].length; j++) {
        //add the row to the rows array
        rows.push(sheet['data'][j])
    }
}

//creates the csv string to write it to a file
for (var i = 0; i < rows.length; i++) {
    writeStr += rows[i].join(",") + "\n"
}

//writes to a file, but you will presumably send the csv as a
//response instead
fs.writeFile(__dirname + "/test2.csv", writeStr, function (err) {
    if (err) {
        return console.log(err)
    }
    console.log("test.csv was saved in the current directory!")
})
The answer is: it's impossible. In order to update data inside a workbook that uses Excel functions, you must open it in Excel for the formulas to recalculate. It's that simple.
You could pull the workbook apart, create your own JavaScript functions, run the data through them and then write it out, but there are so many possible issues that it is not recommended.
Perhaps one day Microsoft will release an Excel engine API for Linux. But it is still unlikely that such a thing would work via the command line without invoking the GUI.

Empty PHPExcel file using liuggio/ExcelBundle in Symfony

I have some code that iterates over the rows and columns of an Excel sheet and replaces text with other text. This is done with a service that takes the Excel object and a dictionary as parameters, like this.
$mappedTemplate = $this->get('app.entity.translate')->translate($phpExcelObject, $dictionary);
The service itself looks like this.
public function translate($template, $dictionary)
{
    foreach ($template->getWorksheetIterator() as $worksheet) {
        foreach ($worksheet->getRowIterator() as $row) {
            $cellIterator = $row->getCellIterator();
            $cellIterator->setIterateOnlyExistingCells(false); // Loop over all cells, even if they are not set

            foreach ($cellIterator as $cell) {
                if (!is_null($cell)) {
                    if (!is_null($cell->getCalculatedValue())) {
                        if (array_key_exists((string) $cell->getCalculatedValue(), $dictionary)) {
                            $worksheet->setCellValue(
                                $cell->getCoordinate(),
                                $dictionary[$cell->getCalculatedValue()]
                            );
                        }
                    }
                }
            }
        }
    }

    return $template;
}
After some debugging I found out that the text actually is replaced and that the service works as it should. The problem is that when I return the new PHPExcel file as a download response, the Excel file is empty.
This is the code I use to return the file.
// create the writer
$writer = $this->get('phpexcel')->createWriter($mappedTemplate, 'Excel5');

// create the response
$response = $this->get('phpexcel')->createStreamedResponse($writer);

// adding headers
$dispositionHeader = $response->headers->makeDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT,
    $file_name
);
$response->headers->set('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
$response->headers->set('Pragma', 'public');
$response->headers->set('Cache-Control', 'maxage=1');
$response->headers->set('Content-Disposition', $dispositionHeader);

return $response;
What am I missing?
Your code is missing the calls to the writer.
You only create the writer, but never use it, at least not in your shared code examples:
$objWriter = new PHPExcel_Writer_Excel2007($objPHPExcel);
$response = $this->get('phpexcel')->createStreamedResponse($objWriter);
Another thing is the content type: do you have the Apache content types set up correctly?
$response->headers->set('Content-Type', 'application/vnd.ms-excel; charset=utf-8');
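Note also that the code in the question creates an 'Excel5' (.xls) writer but sends the .xlsx content type; whichever format you pick, the writer, the file extension and the Content-Type header should agree. A minimal sketch, assuming the liuggio/ExcelBundle 'phpexcel' service and a hypothetical file name:

// Assumes $mappedTemplate is the translated PHPExcel object from the service above.
$writer   = $this->get('phpexcel')->createWriter($mappedTemplate, 'Excel2007'); // .xlsx writer
$response = $this->get('phpexcel')->createStreamedResponse($writer);

// hypothetical file name; match the extension to the writer format
$dispositionHeader = $response->headers->makeDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT,
    'translated_template.xlsx'
);
$response->headers->set('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
$response->headers->set('Content-Disposition', $dispositionHeader);

return $response;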

Fast access to excel data in X++

Can someone give me a clue how I can get fast access to the Excel data? Currently the Excel file contains more than 200K records, and when I retrieve them from X++ code it takes a long time to read all the records.
I am using the following classes to retrieve the data: SysExcelApplication, SysExcelWorksheet and SysExcelCells.
I am using the code below to retrieve the cells.
excelApp.workbooks().open(filename);
excelWorksheet = excelApp.worksheets().itemFromName(itemName);
excelCells = excelWorkSheet.cells();

// pseudo code
loop
    excelCells.item(rowcounter, column1);
    // similar for all columns
end of loop
If any special property needs to be set here, please tell me.
Overall performance will be a lot better (hugely so!) if you can use CSV files. If you are forced to use Excel files, you can easily and straightforwardly convert the Excel file to a CSV file and then read the CSV file. If you can't work that way, you can read Excel files through ODBC (using a connection string as if connecting to a database), which will perform better than the Office API.
First things first: reading Excel files (or any other file format) will take a while for 200K records.
You can read an Excel file using ExcelIo, but with no performance guarantees :)
As I see it, you have 3 options (best performance listed first):
1. Convert your Excel file to a CSV file, then read it with CommaIo.
2. Read the Excel file using C#, then call back to X++.
3. Accept the fact and take the time.
Use CSV, it is faster. Below is a code example:
/* Excel Import*/
#AviFiles
#define.CurrentVersion(1)
#define.Version1(1)
#localmacro.CurrentList
#endmacro
FilenameOpen filename;
CommaIo file;
Container con;
/* File Open Dialog */
Dialog dialog;
dialogField dialogFilename;
dialogField dialogSiteID;
dialogField dialogLocationId;
DialogButton dialogButton;
InventSite objInventSite;
InventLocation objInventLocation;
InventSiteID objInventSiteID;
InventLocationId objInventLocationID;
int row;
str sSite;
NoYes IsCountingFound;
int iQty;
Counter insertCounter;
Price itemPrice;
ItemId _itemid;
EcoResItemColorName _inventColorID;
EcoResItemSizeName _inventSizeID;
dialog = new Dialog("Please select file");
dialogSiteID = dialog.addField(extendedTypeStr(InventSiteId), objInventSiteId);
dialogLocationId = dialog.addField(extendedTypeStr(InventLocationId), objInventLocationId);
dialogFilename = dialog.addField(extendedTypeStr(FilenameOpen));
dialog.filenameLookupFilter(["#SYS100852","*.csv"]);
dialog.filenameLookupTitle("Please select file");
dialog.caption("Please select file");
dialogFilename.value(filename);
if(!dialog.run())
return;
objInventSiteID = dialogSiteID.value();
objInventLocationID = dialogLocationId.value();
/*----- validating warehouse*/
while select maxof(InventSiteId) from objInventLocation
    where objInventLocation.InventLocationId == objInventLocationId
{
    if (objInventLocation.InventSiteID != objInventSiteID)
    {
        warning("Warehouse does not belong to the selected site. Please select a valid warehouse.", "Counting lines import utility");
        return;
    }
}
filename = dialogFilename.value();
file = new commaIo(filename,'r');
file.inFieldDelimiter(',');
try
{
    if (file)
    {
        ttsbegin;
        while (file.status() == IO_Status::OK)
        {
            con = file.read();
            if (con)
            {
                row++;
                if (row == 1)
                {
                    if (strUpr(strLtrim(strRtrim(conpeek(con,1)))) != "ITEM"
                        || strUpr(strLtrim(strRtrim(conpeek(con,2)))) != "COLOR"
                        || strUpr(strLtrim(strRtrim(conpeek(con,3)))) != "SIZE"
                        || strUpr(strLtrim(strRtrim(conpeek(con,4)))) != "PRICE")
                    {
                        error("Imported file is not according to the given format.");
                        ttsabort;
                        return;
                    }
                }
                else
                {
                    IsCountingFound = NoYes::No;
                    _itemid         = "";
                    _inventColorID  = "";
                    _inventSizeID   = "";
                    _itemid         = strLtrim(strRtrim(conpeek(con,1)));
                    _inventColorID  = strLtrim(strRtrim(conpeek(con,2)));
                    _inventSizeID   = strLtrim(strRtrim(conpeek(con,3)));
                    itemPrice       = any2real(strLtrim(strRtrim(conpeek(con,4))));
                }
            }
        }
        if (row <= 1)
        {
            ttsabort;
            warning("No data found in the CSV file");
        }
        else
        {
            ttscommit;
        }
    }
}
catch
{
    ttsabort;
    error('Upload Failed');
}

multi insert in kohana orm3

In my application I have a loop that executes about 1000 times; inside it I create an object and save it. This is the part of the application where I populate my database with data. In general it looks like this:
foreach (...) {
    ...
    try {
        $object = new Model_Whatever;
        $object->whatever = $whatever;
        $object->save();
    }
    catch (Exception $e) {
        ...
    }
}
This produces 1000 INSERT queries. Is it possible to somehow make Kohana produce multi-row inserts, splitting this into 10 inserts with 100 data sets each? Is it possible, and if yes, what is the way to do it?
Whilst the Kohana ORM doesn't support multi inserts, you can still use the query builder as follows:
$query = DB::insert('tablename', array('column1', 'column2', 'column3'));

foreach ($data as $d) {
    $query->values($d);
}

try {
    $result = $query->execute();
} catch (Database_Exception $e) {
    echo $e->getMessage();
}
You'll still need to split the data up so the above doesn't try to execute a query with 1000 inserts (see the sketch below).
$data is assumed to be an array of arrays, with the values corresponding to the order of the columns.
thanks Isaiah in #kohana
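A minimal sketch of that splitting, assuming $data is the array of row arrays described above; each chunk of 100 rows becomes one multi-row INSERT:

// Run one multi-row INSERT per chunk of 100 rows instead of a single
// 1000-row statement; table and column names follow the example above.
foreach (array_chunk($data, 100) as $chunk) {
    $query = DB::insert('tablename', array('column1', 'column2', 'column3'));

    foreach ($chunk as $d) {
        $query->values($d);
    }

    try {
        $query->execute();
    } catch (Database_Exception $e) {
        echo $e->getMessage();
    }
}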
PHP becomes very slow when you insert a very large multi-dimensional array this way (the ::values() method does an array_merge() per call), so this is faster:
class Database_Query_Builder_Batch_Insert extends Database_Query_Builder_Insert {
    public static function doExecute($table, $data) {
        $insertQuery = DB::insert($table, array_keys(current($data)));
        $insertQuery->_values = $data;
        $insertQuery->execute();
    }
}
call_user_func_array([$query, 'values'], $data);
