Get sheet name from the xlsx2csv Linux command

I want to extract text from an Excel file, so I used the xlsx2csv command, but the extracted text does not give me the sheet name.
I used the command:
/usr/bin/xlsx2csv #{excel_name}.xls >> #{excel_name}.txt
Can we get the sheet name using xlsx2csv?

Try:
xlsx2csv -s 0 ${excel_name}.xlsx >> ${excel_name}.txt
From man xlsx2csv:
-s SHEETID, --sheet=SHEETID
Sheet to convert (0 for all sheets).
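Note that with -s 0 every sheet goes into the same output and, at least in the xlsx2csv versions I've used (verify against your own man page), each sheet is preceded by a delimiter line that includes the sheet number and name, something like:
-------- 1 - Sheet1
col1,col2
-------- 2 - Sheet2
col1,col2
so the sheet names can be recovered from the converted text, for example with grep '^--------' ${excel_name}.txt. The delimiter string itself is configurable via -p/--sheetdelimiter.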

Related

Bash script with multiline heredoc doesn't output anything

I'm writing a script to send SQL output in an email, but it is not executing successfully and does not generate the output I want.
The query produces two columns with multiple rows. How can I generate the output in a table format?
Below is my code:
#!/bin/bash
ORACLE_HOME= **PATH
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin
export PATH
TNS_ADMIN= ** PATH
export TNS_ADMIN
today=$(date +%d-%m-%Y)
output=$(sqlplus -S user/pass@service <<EOF
set heading off;
SELECT distinct list_name ,max(captured_dttm) as Last_received FROM db.table1
group by list_name having max(captured_dttm) <= trunc(sysdate - interval '2' hour);
EOF)
if [ -z "$output" ];
then
echo"its fine"
exit
else
echo "
Dear All,
Kindly check we've not received the list for last 2 hour : $output
Regards,
Team" | mailx -S smtp=XX.XX.X.XX:XX -s "URGENT! Please check list FOR $today" user#abc.com
fi
When using a here document, the closing string can't be followed by anything but a newline. Move the closing parenthesis to the next line:
output=$(sqlplus -S user/pass@service <<EOF
...
EOF
)
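To see the rule in action with a simple stand-in for sqlplus (a minimal sketch using cat):
output=$(cat <<EOF
hello
EOF
)
echo "$output"
This prints hello. With EOF) on one line, bash never finds the delimiter; depending on the version it either warns that the here-document is delimited by end-of-file or reports an unexpected EOF, and the command substitution produces no useful output.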

Sort command for a .txt file not operating in a VBA Shell

I believe I've replicated the Stack Overflow-recommended code to sort a .txt file using VBA in Excel. The following is the relevant code from my program. It runs without error but does not produce the sorted output file. The IntualityInputRealTimeFIFOTable file is produced correctly. I can't figure out what I'm doing wrong.
Public MyCommand As String
.
.
IntualityInputRealTimeFIFOTable = WorkbookPath & "IntualityInputRealTimeFIFOTable.txt"
IntualityInputRealTimeFIFOTableTemp = WorkbookPath & "IntualityInputRealTimeFIFOTableTemp.txt"
.
.
Close #3
MyCommand = "Sort IntualityInputRealTimeFIFOTable /O IntualityInputRealTimeFIFOTableTemp"
Shell MyCommand, vbHide
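One likely culprit (an assumption, since only a fragment of the program is shown): MyCommand contains the literal variable names, so Windows sort is asked to sort a file literally named IntualityInputRealTimeFIFOTable rather than the path stored in the variable. The values need to be concatenated into the command string, quoted in case WorkbookPath contains spaces:
MyCommand = "sort """ & IntualityInputRealTimeFIFOTable & """ /O """ & IntualityInputRealTimeFIFOTableTemp & """"
Shell MyCommand, vbHide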

LFTP and SFTP not working using crontab

I am using the script below:
script_home=/home/insetl/ppp_scripts/prod/VSAVS
date_var=`date +'%Y%m%d' -d "0 days ago"`
date_var1=`date +'%Y%m%d' -d "1 days ago"`
lftp<<END_SCRIPT
open sftp://213.239.205.74
user sftpvuclip LW?T8e62
set xfer:clobber on
get Vuclip_C2B_TRX-${date_var}0030.csv
bye
END_SCRIPT
sudo grep -v '^ACC' $script_home/Vuclip_C2B_TRX-${date_var}0030.csv > /home/insetl/ppp_scripts/prod/VSAVS/Vuclip_C2B_TRX-${date_var}.csv
scp $script_home/Vuclip_C2B_TRX-${date_var}.csv $TARGET_HOST:/tmp/
psql -h $TARGET_HOST -d $TARGET_DB -p 5432 << ENDSQL
delete from vsavs_offline_revenue_dump where update_date like '${date_var2}%';
copy vsavs_offline_revenue_dump from '/tmp/Vuclip_C2B_TRX-${date_var}.csv' with delimiter as ',';
update fact_channel_daily_aggr a
set revenue_from_file = (select sum(substr(amount,3)::float) from vsavs_offline_revenue_dump where to_char(to_timestamp(update_date, 'dd-mm-yyyy' ),'YYYYMMDD') = ${date_var1})
where product_sk = 110030 and
a.date_id=${date_var1};
ENDSQL
The script runs completely fine when executed manually, but the LFTP/SFTP part does not run through crontab. Please advise.
Output :
/home/insetl/ppp_scripts/prod/VSAVS/sftp_pull_in.sh: line 9: 08/02/2017: No such file or directory
20170209
/home/insetl/ppp_scripts/prod/VSAVS
grep: /home/insetl/ppp_scripts/prod/VSAVS/Vuclip_C2B_TRX-201702090030.csv: No such file or directory
DELETE 0
COPY 0
UPDATE 20
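Two likely issues stand out (assumptions, since the crontab entry itself isn't shown): cron runs with a minimal environment, so lftp, psql, and scp may not be on its PATH and variables such as $TARGET_HOST and $TARGET_DB exported by the login profile will be empty; and ${date_var2} in the DELETE statement is never defined anywhere in the script. The usual fix is to set up the environment explicitly at the top of the script, e.g.:
#!/bin/bash
# cron provides a minimal environment; define what the script needs explicitly
export PATH=/usr/local/bin:/usr/bin:/bin
. /home/insetl/.bash_profile   # or export TARGET_HOST, TARGET_DB, etc. here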

How do I use awk under cygwin to print fields from an excel spreadsheet?

We seem to be seeing more and more questions about executing awk on Excel spreadsheets, so here is a Q/A on how to do that specific thing.
I have this information in an Excel spreadsheet "$D/staff.xlsx" (where "$D" is the path to my Desktop):
Name Position
Sue Manager
Bill Secretary
Pat Engineer
and I want to print the Position field for a given Name, e.g. output Secretary given the input Bill.
I can currently save as CSV from Excel to get:
$ cat "$D/staff.csv"
Name,Position
Sue,Manager
Bill,Secretary
Pat,Engineer
and then run:
$ awk -F, -v name="Bill" '$1==name{print $2}' "$D/staff.csv"
Secretary
but this is just a small part of a larger task and so I have to be able to do this automatically from a shell script without manually opening Excel to export the CSV file. How do I do that from a Windows PC running cygwin?
The combination of the following VBS and shell scripts creates a CSV file for each sheet in the Excel spreadsheet:
$ cat xls2csv.vbs
csv_format = 6
Dim strFilename
Dim objFSO
Set objFSO = CreateObject("scripting.filesystemobject")
strFilename = objFSO.GetAbsolutePathName(WScript.Arguments(0))
If objFSO.fileexists(strFilename) Then
    Call Writefile(strFilename)
Else
    wscript.echo "no such file!"
End If
Set objFSO = Nothing

Sub Writefile(ByVal strFilename)
    Dim objExcel
    Dim objWB
    Dim objws
    Set objExcel = CreateObject("Excel.Application")
    Set objWB = objExcel.Workbooks.Open(strFilename)
    For Each objws In objWB.Sheets
        objws.Copy
        objExcel.ActiveWorkbook.SaveAs objWB.Path & "\" & objws.Name & ".csv", csv_format
        objExcel.ActiveWorkbook.Close False
    Next
    objWB.Close False
    objExcel.Quit
    Set objExcel = Nothing
End Sub
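The VBS script can be tested on its own from a cmd.exe prompt before wiring it into the shell script that follows (the path here is just an example):
cscript //nologo xls2csv.vbs "C:\Users\me\Desktop\staff.xlsx"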
$ cat xls2csv
PATH="$HOME:$PATH"
# the original XLS input file path components
inXlsPath="$1"
inXlsDir=$(dirname "$inXlsPath")
xlsFile=$(basename "$inXlsPath")
xlsBase="${xlsFile%.*}"
# The tmp dir we'll copy the XLS to and run the tool on
# to get the CSVs generated
tmpXlsDir="/usr/tmp/${xlsBase}.$$"
tmpXlsPath="${tmpXlsDir}/${xlsFile}"
absXlsPath="C:/cygwin64/${tmpXlsPath}" # need an absolute path for VBS to work
mkdir -p "$tmpXlsDir"
trap 'rm -f "${tmpXlsDir}/${xlsFile}"; rmdir "$tmpXlsDir"; exit' 0
cp "$inXlsPath" "$tmpXlsDir"
cygstart "$HOME/xls2csv.vbs" "$absXlsPath"
printf "Waiting for \"${tmpXlsDir}/~\$${xlsFile}\" to be created:\n" >&2
while [ ! -f "${tmpXlsDir}/~\$${xlsFile}" ]
do
# VBS is done when this tmp file is created and later removed
printf "." >&2
sleep 1
done
printf " Done.\n" >&2
printf "Waiting for \"${tmpXlsDir}/~\$${xlsFile}\" to be removed:\n" >&2
while [ -f "${tmpXlsDir}/~\$${xlsFile}" ]
do
# VBS is done when this tmp file is removed
printf "." >&2
sleep 1
done
printf " Done.\n" >&2
numFiles=0
for file in "$tmpXlsDir"/*.csv
do
numFiles=$(( numFiles + 1 ))
done
if (( numFiles >= 1 ))
then
outCsvDir="${inXlsDir}/${xlsBase}.csvs"
mkdir -p "$outCsvDir"
mv "$tmpXlsDir"/*.csv "$outCsvDir"
fi
Now we execute the shell script, which internally calls cygstart to run the VBS script. That generates the CSV files (one per sheet) in a subdirectory alongside the Excel file, named after it (e.g. the Excel file staff.xlsx produces the CSVs directory staff.csvs):
$ ./xls2csv "$D/staff.xlsx"
Waiting for "/usr/tmp/staff.2700/~$staff.xlsx" to be created:
.. Done.
Waiting for "/usr/tmp/staff.2700/~$staff.xlsx" to be removed:
. Done.
There is only one sheet with the default name Sheet1 in the target Excel file "$D/staff.xlsx" so the output of the above is a file "$D/staff.csvs/Sheet1.csv":
$ cat "$D/staff.csvs/Sheet1.csv"
Name,Position
Sue,Manager
Bill,Secretary
Pat,Engineer
$ awk -F, -v name="Bill" '$1==name{print $2}' "$D/staff.csvs/Sheet1.csv"
Secretary
Also see What's the most robust way to efficiently parse CSV using awk? for how to then operate on those CSVs.
See also https://stackoverflow.com/a/58879683/1745001 for how to do the opposite, i.e. call a cygwin bash command from a Windows batch file.

Bash merge columned files into one file with rows

I have many data files in this format:
-1597.5421
-1909.6982
-1991.8743
-2033.5744
But I would like to merge them all into one data file, with each original data file taking up one row with spaces in between, so I can import it into Excel.
-1597.5421 -1909.6982 -1991.8743 -2033.5744
-1789.3324 -1234.5678 -9876.5433 -9999.4321
And so on. Each file is named ALL.ene and every directory in my working directory contains it. Can someone give me a quick fix? Thanks!
Edit: each file has 11 entries; those were just examples.
for i in */ALL.ene
do
echo $(<$i)
done > result.txt
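This works because the command substitution is unquoted: $(<$i) reads the whole file, word splitting then collapses the newlines, and echo joins the values with single spaces, producing one row per input file. (Quoting it as "$(<$i)" would preserve the newlines and defeat the purpose.)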
Assumptions:
I assume all your data files are of this format:
<something1><newline>
<something2><newline>
<something3><newline>
So for example, if the last newline is missing, the following script will miss the field corresponding to <something3>.
Usage: ./merge.bash -o <output file> <input file list or glob>
The script appends to any existing output file from previous runs. It makes no assumptions about how many fields of data each input file has: it blindly joins every line of an input file into one space-separated line of the output file.
#!/bin/bash
# set -o xtrace # uncomment to debug
declare output
[[ $1 =~ -o$ ]] && output="$2" && shift 2 || { \
    echo "The first argument should always be -o <output>";
    exit 1; }
declare -a files=("${@}") row
for file in "${files[@]}"
do
    while read -r data; do
        row+=("$data")
    done < "$file"
    echo "${row[@]}" >> "$output"
    row=()
done
Example:
$ cat data1
-1597.5421
-1909.6982
-1991.8743
-2033.5744
$ cat data2
-1789.3324
-1234.5678
-9876.5433
-9999.4321
$ ./merge.bash -o test data{1,2}
$ cat test
-1597.5421 -1909.6982 -1991.8743 -2033.5744
-1789.3324 -1234.5678 -9876.5433 -9999.4321
This is what coreutils paste is good at; try:
paste -s data_files*
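paste -s serializes each input file into a single row, which is exactly the layout wanted here. The fields are joined with tabs by default; to get the space-separated rows shown above, add a delimiter:
paste -sd' ' */ALL.ene > result.txt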
