So I have code that downloads the version number of each NuGet package, but it all stops after 50 entries in the list.
I run it from Jenkins with Groovy code and get out a list of versions.
import groovy.json.JsonSlurperClassic
import groovy.json.JsonBuilder
import wslite.rest.*
def data = new URL("http://nexus.xx.xx.se:8081/service/rest/v1/search?repository=xx-sx-nuget&name=XXXFrontend").getText()
println data
/**
 * Parse the JSON response and store it in a collection.
 */
Map convertedJSONMap = new JsonSlurperClassic().parseText(data)
// If there are items, print the first version only
if (convertedJSONMap."items") {
    println "Version : " + convertedJSONMap."items"[0]."version"
}
def list = convertedJSONMap.items.version
Collections.sort(list)
list
So the problem is that it only gets 50 of the versions. How can I get more than 50? I have read about a continuationToken, but I don't understand how to use it.
UPDATE
I have added this, but it still doesn't work:
while(convertedJSONMap."continuesToken" != null){
    def token = convertedJSONMap."continuationToken"
    def data2 = new URL("http://nexus.xxx.xxx.se:8081/service/rest/v1/search?repository=xxx-xx-nuget&name=xxxxxx&continuationToken=" + token).getText()
    convertedJSONMap = JsonSlurperClassic().parseText(data2)
}
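The loop above never runs: it tests convertedJSONMap."continuesToken" (a typo for continuationToken), never collects the items from each page, and is missing new before JsonSlurperClassic(). A minimal corrected sketch, assuming the same Nexus search endpoint (hostnames and repository names are placeholders):
import groovy.json.JsonSlurperClassic

def baseUrl = "http://nexus.xx.xx.se:8081/service/rest/v1/search?repository=xx-sx-nuget&name=XXXFrontend"
def slurper = new JsonSlurperClassic()
def versions = []

// First page
def page = slurper.parseText(new URL(baseUrl).getText())
versions.addAll(page.items.version)

// Nexus returns a continuationToken while more pages remain; pass it back
// as a query parameter to fetch the next page.
while (page.continuationToken != null) {
    page = slurper.parseText(new URL(baseUrl + "&continuationToken=" + page.continuationToken).getText())
    versions.addAll(page.items.version)
}
Collections.sort(versions)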
This is how I solved it for me. It is just a snippet of the code that I use:
def json = sendRequest(url)
addResultToMap(map2, json, release) // I do something here with the received result
def continuationToken = json.continuationToken
while (continuationToken != null) {
    json = sendRequest(url + "&continuationToken=" + continuationToken)
    addResultToMap(map2, json, release) // same processing as above
    continuationToken = json.continuationToken
}
And my sendRequest method looks like this:
def sendRequest(def url, String method = "GET") {
    // printBase64Binary comes from javax.xml.bind.DatatypeConverter
    String userPass = "${nexus.username}:${nexus.password}"
    String basicAuth = "Basic " + printBase64Binary(userPass.getBytes())
    def connection = new URL(url).openConnection() as HttpURLConnection
    connection.setRequestProperty('Accept', 'application/json')
    connection.setRequestProperty('Authorization', basicAuth)
    connection.setRequestMethod(method)
    try {
        if (connection.responseCode == 200) {
            return connection.inputStream.withCloseable { inStream ->
                new JsonSlurper().parse(inStream as InputStream)
            }
        } else if (connection.responseCode > 299) {
            // Error responses expose their body on errorStream, not inputStream
            displayAndLogError(connection.responseCode + ": " + connection.errorStream?.text, loglevel.DEBUG)
        }
    } catch (Exception exc) {
        displayAndLogError(exc.getMessage())
    }
}
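A hypothetical call, with placeholder host and repository names:
def json = sendRequest("http://nexus.example.se:8081/service/rest/v1/search?repository=my-repo&name=MyPackage")
println json.items*.version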
Here is an alternative:
import groovy.json.JsonSlurper
try {
    N_PAGES_MAX = 10
    List<String> versions = new ArrayList<String>()
    baseUrl = "http://nexus.zzz.local/service/rest/v1/components?repository=releases-super-project"
    artifactsUrl = baseUrl
    currentPage = 1
    while (true) {
        artifactsObjectRaw = ["curl", "-s", "-H", "accept: application/json", "-k", "--url", "${artifactsUrl}"].execute().text
        artifactsJsonObject = (new JsonSlurper()).parseText(artifactsObjectRaw)
        continuationToken = artifactsJsonObject.continuationToken
        if (continuationToken != null) {
            // Rebuild from the base URL each time so tokens don't accumulate
            artifactsUrl = baseUrl + "&continuationToken=$continuationToken"
        }
        def items = artifactsJsonObject.items
        for (item in items) {
            versions.add(item.name)
        }
        currentPage += 1
        if (continuationToken == null || currentPage > N_PAGES_MAX) break
    }
    return versions.sort().reverse()
}
catch (Exception e) {
    print "There was a problem fetching the versions"
}
How can I make only one point (processor or service) write the file, and make it work single-threaded? My workflow looks like this: ExecuteScript1 (a single-threaded processor with the write operations) -> UpdateAttribute -> InvokeHTTP -> back to ExecuteScript1 (the same processor, which also does the check operations). I have tried the code below, but it neither completes successfully nor throws an exception.
What should I change?
Here is my code:
File file = new File("C:/Users/Desktop/test/conf.xml");
String content = "";
BufferedReader s;
BufferedWriter w;
RandomAccessFile ini= new RandomAccessFile(file, "rwd");
FileLock lock= ini.getChannel().lock();
try {
def flowFile=session.get();
if(flowFile==null){
String sCurrentLine;
s = new BufferedReader(Channels.newReader(ini.getChannel(), "UTF-8"));
while ((sCurrentLine = s.readLine()) != null) {
content += sCurrentLine;
}
ini.seek(0);
def flowFile1=session.create()
flowFile1 = session.putAttribute(flowFile1, "filename", "conf.xml");
session.write(flowFile1, new StreamCallback() {
@Override
public void process(InputStream inputStream1, OutputStream outputStream) throws IOException {
outputStream.write(content.getBytes(StandardCharsets.UTF_8))
}
});
session.transfer(flowFile1,REL_SUCCESS);
def xml = new XmlParser().parseText(content);
xml.'**'.findAll{it.name() == 'run'}.each{ it.replaceBody 'false'}
def newxml=XmlUtil.serialize(xml);
String data =newxml;
if (!data.isEmpty()) {
ini.setLength(0);
w = new BufferedWriter(Channels.newWriter(ini.getChannel(), "UTF-8"));
w.write(data);
lock.release();
w.close();
}
}
else{
def serviceName=flowFile.getAttribute('serviceName');
def date=flowFile.getAttribute('filename').substring(0,10);
if(serviceName=='Decl'){
def xml = new XmlParser().parseText(content)
for(int i=0;i<names.size();i++) {
date = names.get(i).substring(0, 10);
xml.RS.Decl.details.findAll({ p ->
p.runAs[0].text() == "false" && p.start[0].text() == date.toString()
}).each({ p ->
p.start[0].value = addDays( p.start[0].text())
p.runAs[0].value = "true"
})
}
def newXml= groovy.xml.XmlUtil.serialize( xml )
data = newXml.toString()
if (!data.isEmpty()) {
ini.setLength(0);
w = new BufferedWriter(Channels.newWriter(ini.getChannel(), "UTF-8"));
w.write(data);
lock.release();
w.close();
}
}
else if(serviceName=='TaxyFee'){
def xml = new XmlParser().parseText(content)
for(int i=0;i<names.size();i++) {
date = names.get(i).substring(0, 10);
xml.RS.TaxyFee.details.findAll({ p ->
p.runAs[0].text() == "false" && p.start[0].text() == date.toString()
}).each({ p ->
p.start[0].value = addDays( p.start[0].text())
p.runAs[0].value = "true"
})
}
def newXml= groovy.xml.XmlUtil.serialize( xml )
data = newXml.toString()
if (!data.isEmpty()) {
ini.setLength(0);
w = new BufferedWriter(Channels.newWriter(ini.getChannel(), "UTF-8"));
w.write(data);
lock.release();
w.close();
}
}
}
} catch (FileNotFoundException e) {
//e.printStackTrace();
TimeUnit.SECONDS.sleep(50000);
} catch (IOException e) {
e.printStackTrace();
} catch (OverlappingFileLockException e) {
    TimeUnit.SECONDS.sleep(50000);
    lock.release();
} catch (Exception e) {
e.printStackTrace();
} finally {
//lock.release();
ini.close();
}
Set Concurrent Tasks = 1 on the "Scheduling" tab of the processor's configuration, so that only one instance of the processor runs at a time.
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#scheduling-tab
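Separately, the lock handling in the script above is fragile: the lock is released before the writer is closed (so buffered data is flushed after the lock is gone), and on some paths it is never released at all. A minimal sketch of the usual pattern, assuming the same RandomAccessFile:
def raf = new RandomAccessFile("C:/Users/Desktop/test/conf.xml", "rwd")
def lock = raf.channel.lock()
try {
    // read, modify, and rewrite the file here while the lock is held
} finally {
    lock.release()   // release on every path, after all writes are flushed
    raf.close()
}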
I'm using the method below to check for and create an Elasticsearch index. This script works perfectly with Elasticsearch 2.3 and 2.4, but I'm trying it with Elasticsearch 5.0 and it isn't working. All I'm trying to do is create an index and search it dynamically using a Groovy script.
static def checkOrCreateESIndex(String baseUrl, String path)
{
try
{
def res_GET = null
def res_PUT = null
def status_message = null
def http = new HTTPBuilder(baseUrl)
println "New ES Check Index : "+baseUrl+path
http.request(Method.GET, ContentType.JSON)
{
uri.path = path
requestContentType = ContentType.XML
headers.'Accept-Encoding' = 'gzip,deflate'
headers.Accept = 'application/json';
response.success = { resp ->
res_GET = 200
println "SUCCESS! ${resp.status}"
}
response.failure = { resp ->
res_GET = 400
println "Failure! ${resp.status}"
}
}
if (res_GET != 200)
{
String params = "{\"settings\":{\"number_of_shards\":2,\"number_of_replicas\":0},\"mappings\":{\"run\":{\"_timestamp\":{\"enabled\":true},\"properties\":{\"70 Percentile\":{\"type\":\"float\"},\"80 Percentile\":{\"type\":\"float\"},\"85 Percentile\":{\"type\":\"float\"},\"95 Percentile\":{\"type\":\"float\"},\"90 Percentile\":{\"type\":\"float\"},\"Average\":{\"type\":\"float\"},\"Fail\":{\"type\":\"string\"},\"Maximum\":{\"type\":\"float\"},\"Minimum\":{\"type\":\"float\"},\"Pass\":{\"type\":\"string\",\"index\":\"not_analyzed\"},\"ProjectName\":{\"type\":\"string\",\"index\":\"not_analyzed\"},\"RunID\":{\"type\":\"string\"},\"VirtualUsers\":{\"type\":\"string\"},\"Release\":{\"type\":\"string\"},\"BuildNumber\":{\"type\":\"string\"},\"StartTime\":{\"type\":\"string\"},\"EndTime\":{\"type\":\"string\"},\"StdDeviation\":{\"type\":\"string\"},\"TestName\":{\"type\":\"string\",\"index\":\"not_analyzed\"},\"TransactionName\":{\"type\":\"string\",\"index\":\"not_analyzed\"},\"Baseline\":{\"type\":\"string\",\"index\":\"not_analyzed\"},\"SLAviolationcount\":{\"type\":\"float\"}}}}}"
def bodyMap2 = new JsonSlurper().parseText(params)
def response_body = null
def response_header = null
def http2 = new HTTPBuilder(baseUrl)
println "New ES Create Index : "+baseUrl+path
println "New ES Mapping : "+params
http2.request(Method.PUT)
{
uri.path = path
requestContentType = ContentType.JSON
headers.'Accept' = 'application/json';
body = bodyMap2
headers.'Accept-Encoding' = 'gzip,deflate'
headers.'Cookie' = 'JSESSIONID=934ED773C47D81C74C63BEAFE1D6CA1B'
response.success = { resp ->
res_PUT = 200
println "SUCCESS! ${resp.status}"
}
response.failure = { resp ->
res_PUT = 400
println "Failure! ${resp.status}"
}
}
}
if (res_GET == 200)
{
status_message = "IDX_EXISTS"
}
else if (res_GET != 200 && res_PUT == 200)
{
status_message = "IDX_CREATED"
}
else
{
status_message = "IDX_FAIL"
}
return status_message
}
catch (groovyx.net.http.HttpResponseException ex)
{
ex.printStackTrace()
return null
}
catch (java.net.ConnectException ex)
{
ex.printStackTrace()
return null
}
}
static def postElasticSearchMessage(String baseUrl, String path,String params)
{
try
{
def res_ES = null
def bodyMap = new JsonSlurper().parseText(params)
def response_body = null
def response_header = null
def http = new HTTPBuilder(baseUrl)
http.request(Method.POST)
{
uri.path = path
requestContentType = ContentType.JSON
body = bodyMap
headers.'Accept-Encoding' = 'gzip,deflate'
headers.'Cookie' = 'JSESSIONID=934ED773C47D81C74C63BEAFE1D6CA1B'
response.success = { resp ->
res_ES = 'Y'
println "SUCCESS! ${resp.status}"
}
response.failure = { resp ->
res_ES = 'N'
println "FAILURE! ${resp.status}"
}
}
return res_ES
}
catch (groovyx.net.http.HttpResponseException ex)
{
ex.printStackTrace()
return 'N'
}
catch (java.net.ConnectException ex)
{
ex.printStackTrace()
return 'N'
}
}
Below is my index structure:
{
  "settings": {"number_of_shards": 2, "number_of_replicas": 0},
  "mappings": {
    "run": {
      "_timestamp": {"enabled": true},
      "properties": {
        "70 Percentile": {"type": "float"},
        "80 Percentile": {"type": "float"},
        "85 Percentile": {"type": "float"},
        "95 Percentile": {"type": "float"},
        "90 Percentile": {"type": "float"},
        "Average": {"type": "float"},
        "Fail": {"type": "string"},
        "Maximum": {"type": "float"},
        "Minimum": {"type": "float"},
        "Pass": {"type": "string", "index": "not_analyzed"},
        "ProjectName": {"type": "string", "index": "not_analyzed"},
        "RunID": {"type": "string"},
        "VirtualUsers": {"type": "string"},
        "Release": {"type": "string"},
        "BuildNumber": {"type": "string"},
        "StartTime": {"type": "string"},
        "EndTime": {"type": "string"},
        "StdDeviation": {"type": "string"},
        "TestName": {"type": "string", "index": "not_analyzed"},
        "TransactionName": {"type": "string", "index": "not_analyzed"},
        "Baseline": {"type": "string", "index": "not_analyzed"},
        "SLAviolationcount": {"type": "float"}
      }
    }
  }
}
How would I make this work for Elasticsearch 5.0? Help with the index structure and search query for 5.0 would be really appreciated. Thanks in advance.
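For what it's worth, the mapping itself is the likely culprit: Elasticsearch 5.0 removed the _timestamp meta-field (index creation requests that include it are rejected) and deprecated the string type in favor of text and keyword (keyword replacing "index": "not_analyzed"). A hedged Groovy sketch of the converted settings, with the field list abbreviated and not verified against a live 5.0 cluster:
import groovy.json.JsonOutput

// string/not_analyzed becomes keyword, analyzed string becomes text,
// and the removed _timestamp meta-field is dropped entirely.
String params = JsonOutput.toJson([
    settings: [number_of_shards: 2, number_of_replicas: 0],
    mappings: [
        run: [
            properties: [
                "Average"    : [type: "float"],
                "Pass"       : [type: "keyword"],  // was string / not_analyzed
                "ProjectName": [type: "keyword"],  // was string / not_analyzed
                "RunID"      : [type: "text"]      // was string
                // ...convert the remaining fields the same way
            ]
        ]
    ]
])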
I'm trying to use Groovy to open a file, search for a specific substring, and then grab a different substring that occurs below the first one.
For example, the substring I'm searching for is "Charger is enabled. Checking charge parameters...",
and if it is found I want to get a specific string that occurs after it.
Is the best way to do this to read the file into memory and search for the index of the first string?
// With Java
import java.io.File;
import java.io.BufferedReader;
import java.io.FileReader;
def localDirectory = "";
def fileName = "";
def searchKey = "Charger is enabled. Checking charge parameters";
def searchKey2 = "";
def errorMessage = "";
FileReader fr;
BufferedReader br;
try
{
File f1 = new File(localDirectory+"/"+fileName);
fr = new FileReader(f1);
br = new BufferedReader(fr);
def keyFound = false;
// Go through line by line.
def line;
while ((line = br.readLine()) != null)
{
// If first string is found, process the second string.
if(line.contains(searchKey))
{
while ((line = br.readLine()) != null)
{
// Do something with the second string.
if(line.contains(searchKey2))
{
keyFound=true;
break;
}
}
}
if(keyFound)
{
break;
}
}
}
catch (Exception e)
{
errorMessage += "\nUnexpected Exception: " + e.getMessage();
for (trace in e.getStackTrace())
{
errorMessage += "\n\t" + trace;
}
}
finally
{
    // Close the outermost wrapper first
    br?.close();
    fr?.close();
}
// With Groovy
def localDirectory = "";
def fileName = "";
def searchKey = "Charger is enabled. Checking charge parameters";
def searchKey2 = "";
def errorMessage = "";
try
{
File f1 = new File(localDirectory+"/"+fileName);
def keyFound = false;
// Go through line by line.
def line;
f1.withReader
{
reader ->
while((line = reader.readLine()) != null)
{
// If first string is found, process the second string.
if(line.contains(searchKey))
{
while((line = reader.readLine()) != null)
{
// Do something with the second string.
if(line.contains(searchKey2))
{
keyFound=true;
break;
}
}
}
if(keyFound)
{
break;
}
}
}
}
catch (Exception e)
{
errorMessage += "\nUnexpected Exception: " + e.getMessage();
for (trace in e.getStackTrace())
{
errorMessage += "\n\t" + trace;
}
}
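For files that fit in memory, the same search can also be written more compactly in Groovy (searchKey2 is left as a placeholder, as above):
def lines = new File("${localDirectory}/${fileName}").readLines()
def start = lines.findIndexOf { it.contains(searchKey) }
// Only look for the second marker in the lines after the first match
def match = start >= 0 ? lines.drop(start + 1).find { it.contains(searchKey2) } : null
def keyFound = (match != null)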
I have an array of files like this:
string[] unZippedFiles;
The idea is that I want to parse these files in parallel. As they are parsed, a record gets placed on a ConcurrentBag, and as records are being placed I want to kick off the update function.
Here is what I am doing in my Main():
foreach (var file in unZippedFiles)
{
    Parallel.Invoke
    (
        () => ImportFiles(file),
        () => UpdateTest()
    );
}
This is what the code of Update looks like:
static void UpdateTest()
{
    Console.WriteLine("Updating/Inserting merchant information.");
    while (!merchCollection.IsEmpty || producingRecords)
    {
        merchant x;
        if (merchCollection.TryTake(out x))
        {
            UPDATE_MERCHANT(x.m_id, x.mInfo, x.month, x.year);
        }
    }
}
This is what the import code looks like. It's pretty much a giant string parser.
System.IO.StreamReader SR = new System.IO.StreamReader(fileName);
long COUNTER = 0;
StringBuilder contents = new StringBuilder( );
string M_ID = "";
string BOF_DELIMITER = "%%MS_SKEY_0000_000_PDF:";
string EOF_DELIMITER = "%%EOF";
try
{
record_count = 0;
producingRecords = true;
for (COUNTER = 0; COUNTER <= SR.BaseStream.Length - 1; COUNTER++)
{
if (SR.EndOfStream)
{
break;
}
contents.AppendLine(Strings.Trim(SR.ReadLine()));
contents.AppendLine(System.Environment.NewLine);
//contents += Strings.Trim(SR.ReadLine());
//contents += Strings.Chr(10);
if (contents.ToString().IndexOf((EOF_DELIMITER)) > -1)
{
if (contents.ToString().StartsWith(BOF_DELIMITER) & contents.ToString().IndexOf(EOF_DELIMITER) > -1)
{
string data = contents.ToString();
M_ID = data.Substring(data.IndexOf("_M") + 2, data.Substring(data.IndexOf("_M") + 2).IndexOf("_"));
Console.WriteLine("Merchant: " + M_ID);
merchant newmerch;
newmerch.m_id = M_ID;
newmerch.mInfo = data.Substring(0, (data.IndexOf(EOF_DELIMITER) + 5));
newmerch.month = DateTime.Now.AddMonths(-1).Month;
newmerch.year = DateTime.Now.AddMonths(-1).Year;
//Update(newmerch);
merchCollection.Add(newmerch);
}
contents.Clear();
//GC.Collect();
}
}
SR.Close();
// UpdateTest();
}
catch (Exception ex)
{
producingRecords = false;
}
finally
{
producingRecords = false;
}
}
The problem I am having is that Update runs once and then the ImportFiles function just takes over and never yields back to the update function. Any ideas on what I am doing wrong would be a great help.
Here's my stab at fixing your thread synchronisation. Note that I haven't changed the code from a functional standpoint, with the exception of taking out the catch - swallowing exceptions is generally a bad idea; they need to be propagated.
Forgive me if something doesn't compile - I'm writing this based on incomplete snippets.
Main
foreach(var file in unZippedFiles)
{
using (var merchCollection = new BlockingCollection<merchant>())
{
Parallel.Invoke
(
() => ImportFiles(file, merchCollection),
() => UpdateTest(merchCollection)
);
}
}
Update
private void UpdateTest(BlockingCollection<merchant> merchCollection)
{
Console.WriteLine("Updating/Inserting merchant information.");
foreach (merchant x in merchCollection.GetConsumingEnumerable())
{
UPDATE_MERCHANT(x.m_id, x.mInfo, x.month, x.year);
}
}
Import
Don't forget to pass in merchCollection as a parameter - it should not be static.
System.IO.StreamReader SR = new System.IO.StreamReader(fileName);
long COUNTER = 0;
StringBuilder contents = new StringBuilder( );
string M_ID = "";
string BOF_DELIMITER = "%%MS_SKEY_0000_000_PDF:";
string EOF_DELIMITER = "%%EOF";
try
{
record_count = 0;
for (COUNTER = 0; COUNTER <= SR.BaseStream.Length - 1; COUNTER++)
{
if (SR.EndOfStream)
{
break;
}
contents.AppendLine(Strings.Trim(SR.ReadLine()));
contents.AppendLine(System.Environment.NewLine);
//contents += Strings.Trim(SR.ReadLine());
//contents += Strings.Chr(10);
if (contents.ToString().IndexOf((EOF_DELIMITER)) > -1)
{
if (contents.ToString().StartsWith(BOF_DELIMITER) & contents.ToString().IndexOf(EOF_DELIMITER) > -1)
{
string data = contents.ToString();
M_ID = data.Substring(data.IndexOf("_M") + 2, data.Substring(data.IndexOf("_M") + 2).IndexOf("_"));
Console.WriteLine("Merchant: " + M_ID);
merchant newmerch;
newmerch.m_id = M_ID;
newmerch.mInfo = data.Substring(0, (data.IndexOf(EOF_DELIMITER) + 5));
newmerch.month = DateTime.Now.AddMonths(-1).Month;
newmerch.year = DateTime.Now.AddMonths(-1).Year;
//Update(newmerch);
merchCollection.Add(newmerch);
}
contents.Clear();
//GC.Collect();
}
}
SR.Close();
// UpdateTest();
}
finally
{
merchCollection.CompleteAdding();
}
}
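For readers coming from the Groovy questions above, the same producer/consumer handshake can be sketched (purely as an illustration, not the author's code) with a plain LinkedBlockingQueue and a poison-pill sentinel playing the roles of CompleteAdding() and GetConsumingEnumerable():
import java.util.concurrent.LinkedBlockingQueue

def queue = new LinkedBlockingQueue()
def DONE = "__done__"  // poison pill marking the end of production

def producer = Thread.start {
    try {
        ["rec1", "rec2", "rec3"].each { queue.put(it) }  // stand-ins for parsed records
    } finally {
        queue.put(DONE)  // always signal completion, like CompleteAdding()
    }
}

def consumer = Thread.start {
    while (true) {
        def item = queue.take()
        if (item == DONE) break  // like GetConsumingEnumerable() running out
        println "Updating: $item"
    }
}

[producer, consumer]*.join()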
I'm new to Groovy, and I've used the ExcelBuilder code referenced below to iterate through an Excel spreadsheet and grab data. Is there an easy way to write data as I iterate?
For example, row 1 might have data like this (CSV):
value1,value2
And after I iterate, I want it to look like this:
value1,value2,value3
http://www.technipelago.se/content/technipelago/blog/44
Yes, this can be done! As I got into the guts of it, I realized that the problem I was trying to solve wasn't really reading and writing the same file at the same time: the Excel data is stored in an object that I can freely manipulate whenever I want. So I added methods specific to my needs, which may or may not meet the needs of anyone else, and I post them here for smarter people to pick apart. At the end of it all, it is now doing what I want it to do.
I added a cell method that takes an index (number or label) and a value, and updates that cell in the current row in context (specifically while using .eachLine()), and a .putRow() method that adds a whole row to the specified sheet. It also handles the Excel 2003, 2007, and 2010 formats. When a referenced file, sheet, or cell doesn't exist, it gets created. Since my source spreadsheets often have formulas and charts ready to display the data I'm entering, the .save() and .saveAs() methods call .evaluateAllFormulaCells() before saving.
To see the code I started with and examples of how it works, check out this blog entry.
Note that both the .save() and .saveAs() methods reload the workbook from the saved file immediately after saving. This is a workaround for a bug in Apache POI that doesn't seem to be fixed yet (see "Exception when writing to the xlsx document several times using apache poi").
import groovy.lang.Closure;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Map;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.ss.usermodel.DateUtil;
import org.apache.poi.xssf.usermodel.XSSFFormulaEvaluator;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFDateUtil;
import org.apache.poi.hssf.usermodel.HSSFFormulaEvaluator;
class Excel
{
def workbook;
def sheet;
def labels;
def row;
def infilename;
def outfilename;
Excel(String fileName)
{
HSSFRow.metaClass.getAt = {int index ->
def cell = delegate.getCell(index);
if(! cell)
{
return null;
}
def value;
switch (cell.cellType)
{
case HSSFCell.CELL_TYPE_NUMERIC:
if(HSSFDateUtil.isCellDateFormatted(cell))
{
value = cell.dateCellValue;
}
else
{
value = new DataFormatter().formatCellValue(cell);
}
break;
case HSSFCell.CELL_TYPE_BOOLEAN:
value = cell.booleanCellValue
break;
default:
value = new DataFormatter().formatCellValue(cell);
break;
}
return value
}
XSSFRow.metaClass.getAt = {int index ->
def cell = delegate.getCell(index);
if(! cell)
{
return null;
}
def value = new DataFormatter().formatCellValue(cell);
switch (cell.cellType)
{
case XSSFCell.CELL_TYPE_NUMERIC:
if (DateUtil.isCellDateFormatted(cell))
{
value = cell.dateCellValue;
}
else
{
value = new DataFormatter().formatCellValue(cell);
}
break;
case XSSFCell.CELL_TYPE_BOOLEAN:
value = cell.booleanCellValue
break;
default:
value = new DataFormatter().formatCellValue(cell);
break;
}
return value;
}
infilename = fileName;
outfilename = fileName;
try
{
workbook = WorkbookFactory.create(new FileInputStream(infilename));
}
catch (FileNotFoundException e)
{
workbook = (infilename =~ /(?is:\.xlsx)$/) ? new XSSFWorkbook() : new HSSFWorkbook();
}
catch (Exception e)
{
e.printStackTrace();
}
}
def getSheet(index)
{
def requested_sheet;
if(!index) index = 0;
if(index instanceof Number)
{
// getNumberOfSheets is a method, so it needs parentheses; sheets are 0-indexed
requested_sheet = (workbook.getNumberOfSheets() > index) ? workbook.getSheetAt(index) : workbook.createSheet();
}
else if (index ==~ /^\d+$/)
{
requested_sheet = (workbook.getNumberOfSheets() > Integer.valueOf(index)) ? workbook.getSheetAt(Integer.valueOf(index)) : workbook.createSheet();
}
else
{
requested_sheet = (workbook.getSheetIndex(index) > -1) ? workbook.getSheet(index) : workbook.createSheet(index);
}
return requested_sheet;
}
def cell(index)
{
if (labels && (index instanceof String))
{
index = labels.indexOf(index.toLowerCase());
}
if (row[index] == null)
{
row.createCell(index);
}
return row[index];
}
def cell(index, value)
{
if ((index instanceof String) && labels.indexOf(index.toLowerCase()) == -1)
{
labels.push(index.toLowerCase());
def frow = sheet.getRow(0);
def ncell = frow.createCell(labels.indexOf(index.toLowerCase()));
ncell.setCellValue(index.toString());
}
def cell = (labels && (index instanceof String)) ? row.getCell(labels.indexOf(index.toLowerCase())) : row.getCell(index);
if (cell == null)
{
cell = (index instanceof String) ? row.createCell(labels.indexOf(index.toLowerCase())) : row.createCell(index);
}
cell.setCellValue(value);
}
def putRow (sheetName, Map values = [:])
{
def requested_sheet = getSheet(sheetName);
if (requested_sheet)
{
def lrow;
if (requested_sheet.getPhysicalNumberOfRows() == 0)
{
lrow = requested_sheet.createRow(0);
def lcounter = 0;
values.each {entry->
def lcell = lrow.createCell(lcounter);
lcell.setCellValue(entry.key);
lcounter++;
}
}
else
{
lrow = requested_sheet.getRow(0);
}
def sheetLabels = lrow.collect{it.toString().toLowerCase()}
def vrow = requested_sheet.createRow(requested_sheet.getLastRowNum() + 1);
values.each {entry->
def vcell = vrow.createCell(sheetLabels.indexOf(entry.key.toLowerCase()));
vcell.setCellValue(entry.value);
}
}
}
def propertyMissing(String name)
{
cell(name);
}
def propertyMissing(String name, value)
{
cell(name, value);
}
def eachLine (Map params = [:], Closure closure)
{
/*
* Parameters:
* skiprows : The number of rows to skip before the first line of data and/or labels
* offset : The number of rows to skip (after labels) before returning rows
* max : The maximum number of rows to iterate
* sheet : The name (string) or index (integer) of the worksheet to use
* labels : A boolean to treat the first row as a header row (data can be reference by label)
*
*/
def skiprows = params.skiprows ?: 0;
def offset = params.offset ?: 0;
def max = params.max ?: 9999999;
sheet = getSheet(params.sheet);
def rowIterator = sheet.rowIterator();
def linesRead = 0;
skiprows.times{ rowIterator.next() }
if(params.labels)
{
labels = rowIterator.next().collect{it.toString().toLowerCase()}
}
offset.times{ rowIterator.next() }
closure.setDelegate(this);
while(rowIterator.hasNext() && linesRead++ < max)
{
row = rowIterator.next();
closure.call(row);
}
}
def save ()
{
if (workbook.getClass().toString().indexOf("XSSF") > -1)
{
XSSFFormulaEvaluator.evaluateAllFormulaCells((XSSFWorkbook) workbook);
}
else
{
HSSFFormulaEvaluator.evaluateAllFormulaCells((HSSFWorkbook) workbook);
}
if (outfilename != null)
{
try
{
FileOutputStream output = new FileOutputStream(outfilename);
workbook.write(output);
output.close();
workbook = null;
workbook = WorkbookFactory.create(new FileInputStream(outfilename));
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
def saveAs (String fileName)
{
if (workbook.getClass().toString().indexOf("XSSF") > -1)
{
XSSFFormulaEvaluator.evaluateAllFormulaCells((XSSFWorkbook) workbook);
}
else
{
HSSFFormulaEvaluator.evaluateAllFormulaCells((HSSFWorkbook) workbook);
}
try
{
FileOutputStream output = new FileOutputStream(fileName);
workbook.write(output);
output.close();
outfilename = fileName;
workbook = null;
workbook = WorkbookFactory.create(new FileInputStream(outfilename));
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
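A hypothetical usage example of the class above (file, sheet, and column names are placeholders):
def excel = new Excel("data.xlsx")
excel.eachLine(labels: true) {
    // "value1" and "value3" are header-row labels; "value3" is created if missing
    cell("value3", cell("value1").toString() + "-updated")
}
excel.putRow("Summary", [Name: "total", Count: 42])
excel.save()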
If you see any glaring errors or ways to improve it (other than style), I'd love to hear them. Again, Groovy is not a language I have much experience with, and I haven't done anything with Java in several years, so I'm sure there are better ways to do some things.