I am working with an ActivePivot instance that computes CVA (Credit Valuation Adjustment).
I have to apply a piece of logic to a large number of cells (20k for each counterparty), each associated with a float array of size 10k. Even though ActivePivot is massively multithreaded, an ABasicPostProcessor is applied in a single-threaded way to each range location. How can I make it evaluate my point locations in a multithreaded way?
I built the following class, which specializes ABasicPostProcessor (a core class enabling fast implementation of per-point post-processors) by simply dispatching the calls to doEvaluation in a multithreaded way.
Given an ABasicPostProcessor specialisation, one simply has to extend AParallelBasicPostProcessor in order to gain parallel evaluation!
/**
* Specialization of ABasicPostProcessor which will call doEvaluation in a
* multithreaded way
*
* @author BLA
*/
public abstract class AParallelBasicPostProcessor<OutputType> extends ABasicPostProcessor<OutputType> {
private static final long serialVersionUID = -3453966549173516186L;
public AParallelBasicPostProcessor(String name, IActivePivot pivot) {
super(name, pivot);
}
@Override
public void evaluate(ILocation location, final IAggregatesRetriever retriever) throws QuartetException {
// Retrieve required aggregates
final ICellSet cellSet = retriever.retrieveAggregates(Collections.singleton(location), Arrays.asList(prefetchMeasures));
// Prepare a List
List<ALocatedRecursiveTask<OutputType>> tasks = new ArrayList<ALocatedRecursiveTask<OutputType>>();
// Create the procedure to hold the parallel sub-tasks
final ICellsProcedure subTasksGeneration = makeSubTasksGenerationProcedure(tasks);
cellSet.forEachLocation(subTasksGeneration, underlyingMeasures);
ForkJoinTask.invokeAll(tasks);
for (ALocatedRecursiveTask<OutputType> task : tasks) {
OutputType returnValue;
try {
returnValue = task.get();
} catch (InterruptedException e) {
throw new RuntimeException(e);
} catch (ExecutionException e) {
// re-throw the root cause of the ExecutionException
throw new RuntimeException(e.getCause());
}
// We can write only non-null aggregates
if (null != returnValue) {
writeInRetriever(retriever, task.getLocation(), returnValue);
}
}
}
protected void writeInRetriever(IAggregatesRetriever retriever, ILocation location, OutputType returnValue) {
retriever.write(location, returnValue);
}
protected ICellsProcedure makeSubTasksGenerationProcedure(List<ALocatedRecursiveTask<OutputType>> futures) {
return new SubTasksGenerationProcedure(futures);
}
/**
* {@link ICellsProcedure} registering a {@link ALocatedRecursiveTask} per
* point location
*/
protected class SubTasksGenerationProcedure implements ICellsProcedure {
protected List<ALocatedRecursiveTask<OutputType>> futures;
public SubTasksGenerationProcedure(List<ALocatedRecursiveTask<OutputType>> futures) {
this.futures = futures;
}
@Override
public boolean execute(final ILocation pointLocation, int rowId, Object[] measures) {
// clone the array of measures as it is internally used as a buffer
final Object[] clone = measures.clone();
futures.add(makeLocatedFuture(pointLocation, clone));
return true;
}
}
protected ALocatedRecursiveTask<OutputType> makeLocatedFuture(ILocation pointLocation, Object[] measures) {
return new LocatedRecursiveTask(pointLocation, measures);
}
/**
* A specialization of RecursiveTask by associating it to a
* {@link ILocation}
*
* @author BLA
*
*/
protected static abstract class ALocatedRecursiveTask<T> extends RecursiveTask<T> {
private static final long serialVersionUID = -6014943980790547011L;
public abstract ILocation getLocation();
}
/**
* Default implementation of {@link ALocatedRecursiveTask}
*
* @author BLA
*
*/
protected class LocatedRecursiveTask extends ALocatedRecursiveTask<OutputType> {
private static final long serialVersionUID = 676859831679236794L;
protected ILocation pointLocation;
protected Object[] measures;
public LocatedRecursiveTask(ILocation pointLocation, Object[] measures) {
this.pointLocation = pointLocation;
this.measures = measures;
if (pointLocation.isRange()) {
throw new RuntimeException(this.getClass() + " accepts only point location: " + pointLocation);
}
}
@Override
protected OutputType compute() {
try {
// The custom evaluation will be computed in parallel
return AParallelBasicPostProcessor.this.doEvaluation(pointLocation, measures);
} catch (QuartetException e) {
throw new RuntimeException(e);
}
}
@Override
public ILocation getLocation() {
return pointLocation;
}
}
}
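For illustration, here is a minimal sketch of what such a specialization could look like. The class name, plugin key and vector-summing logic are hypothetical (they are not part of the code above); the point is that only doEvaluation has to be provided, while the parallel fan-out over point locations is inherited from AParallelBasicPostProcessor:
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.postprocessing.IPostProcessor", key = ParallelVectorSumPostProcessor.TYPE)
public class ParallelVectorSumPostProcessor extends AParallelBasicPostProcessor<Double> {
private static final long serialVersionUID = 1L;
/** Plugin type (hypothetical key) */
public static final String TYPE = "PARALLEL_VECTOR_SUM";
public ParallelVectorSumPostProcessor(String name, IActivePivot pivot) {
super(name, pivot);
}
@Override
public String getType() { return TYPE; }
@Override
protected Double doEvaluation(ILocation location, Object[] measures) throws QuartetException {
// Executed concurrently for each point location by the parent class.
// We assume here that the first prefetched measure is a double[] vector.
double[] vector = (double[]) measures[0];
double sum = 0d;
for (double value : vector) {
sum += value;
}
return sum;
}
}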
The ActivePivot query engine is heavily multithreaded: the invocation of several post processors within a single query is done in parallel (unless one depends on the result of another, of course). When the same post processor is executed several times over the locations involved in the query, that is also done in parallel. So before rolling up your sleeves, it is worth checking whether there isn't a more obvious bottleneck in your query plan.
Now the invocation of one post processor over one location is indeed an indivisible workload in the ActivePivot query engine. And when the aggregates are not just numbers that sum in nanoseconds, but large or structured objects like vectors, there may be room for a parallelism-driven performance boost.
The ActivePivot query engine is built on top of a fork/join pool (http://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html). It means that your post processor code is always called from within the fork/join pool, which makes it possible to fork your own sub-tasks and then join them. That is considered an expert trick; don't try it without a fair understanding of how the fork/join pool works.
Let's consider a post processor that for each evaluated location computes the maximum of several measures:
package com.quartetfs.pivot.sandbox.postprocessor.impl;
import com.quartetfs.biz.pivot.IActivePivot;
import com.quartetfs.biz.pivot.ILocation;
import com.quartetfs.biz.pivot.postprocessing.impl.ABasicPostProcessor;
import com.quartetfs.fwk.QuartetException;
import com.quartetfs.fwk.QuartetExtendedPluginValue;
/**
*
* Post processor that computes the MAX of several measures.
*
* @author Quartet FS
*
*/
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.postprocessing.IPostProcessor", key = MaxPostProcessor.TYPE)
public class MaxPostProcessor extends ABasicPostProcessor<Double> {
/** serialVersionUID */
private static final long serialVersionUID = -8886545079342151420L;
/** Plugin type */
public static final String TYPE = "MAX";
public MaxPostProcessor(String name, IActivePivot pivot) {
super(name, pivot);
}
@Override
public String getType() { return TYPE; }
@Override
protected Double doEvaluation(ILocation location, Object[] measures) throws QuartetException {
double max = ((Number) measures[0]).doubleValue();
for(int i = 1; i < measures.length; i++) {
max = Math.max(max, ((Number) measures[i]).doubleValue());
}
return max;
}
}
In that post processor the leaf locations resulting from the evaluated range location will be computed one after the other. You can decide to create tasks instead, and execute those tasks in parallel through the fork/join pool. I hope the following will get you started:
package com.quartetfs.pivot.sandbox.postprocessor.impl;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import jsr166y.ForkJoinTask;
import jsr166y.RecursiveTask;
import com.quartetfs.biz.pivot.IActivePivot;
import com.quartetfs.biz.pivot.ILocation;
import com.quartetfs.biz.pivot.cellset.ICellSet;
import com.quartetfs.biz.pivot.cellset.ICellsProcedure;
import com.quartetfs.biz.pivot.query.aggregates.IAggregatesRetriever;
import com.quartetfs.fwk.QuartetException;
import com.quartetfs.fwk.QuartetExtendedPluginValue;
/**
*
* Post processor that computes the MAX of several measures,
* evaluation of locations is performed in parallel.
*
* @author Quartet FS
*
*/
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.postprocessing.IPostProcessor", key = ParallelMaxPostProcessor.TYPE)
public class ParallelMaxPostProcessor extends MaxPostProcessor {
/** serialVersionUID */
private static final long serialVersionUID = -8886545079342151420L;
/** Plugin type */
public static final String TYPE = "PMAX";
public ParallelMaxPostProcessor(String name, IActivePivot pivot) {
super(name, pivot);
}
@Override
public String getType() { return TYPE; }
@Override
public void evaluate(ILocation location, IAggregatesRetriever retriever) throws QuartetException {
try {
// Retrieve required aggregates
ICellSet cellSet = retriever.retrieveAggregates(Collections.singleton(location), Arrays.asList(prefetchMeasures));
// Evaluate the cell set to create tasks
ParallelEvaluationProcedure evalProcedure = new ParallelEvaluationProcedure();
cellSet.forEachLocation(evalProcedure);
// Execute the tasks in parallel and write results
evalProcedure.writeResults(retriever);
} catch(Exception e) {
throw new QuartetException("Evaluation of " + this + " on location " + location + " failed.", e);
}
}
/**
* Procedure evaluated on the cell set.
*/
protected class ParallelEvaluationProcedure implements ICellsProcedure {
/** List of tasks */
protected final List<MaxComputation> tasks = new ArrayList<ParallelMaxPostProcessor.MaxComputation>();
@Override
public boolean execute(ILocation location, int rowId, Object[] measures) {
Object[] numbers = measures.clone();
tasks.add(new MaxComputation(location, numbers));
return true; // continue
}
/** Once all the tasks are executed, write results */
public void writeResults(IAggregatesRetriever retriever) throws Exception {
// Invoke all the tasks in parallel
// using the fork join pool that runs the post processor.
ForkJoinTask.invokeAll(tasks);
for(MaxComputation task : tasks) {
retriever.write(task.location, task.get());
}
}
}
/**
* Max computation task. It illustrates our example well
* but in real-life this would be too little
* of a workload to deserve parallel execution.
*/
protected class MaxComputation extends RecursiveTask<Double> {
/** serialVersionUID */
private static final long serialVersionUID = -5843737025175189495L;
final ILocation location;
final Object[] numbers;
public MaxComputation(ILocation location, Object[] numbers) {
this.location = location;
this.numbers = numbers;
}
@Override
protected Double compute() {
try {
return doEvaluation(location, numbers);
} catch (QuartetException e) {
completeExceptionally(e);
return null;
}
}
}
}
Related
I'm running a test where I take 1,000 Strings that represent an ID. I create an ExecutorService with 100 threads, loop over the 1,000 Strings, and create a Callable for each. This works fine in Java, but I found in Groovy that the anonymous inner class Callable is not storing the iterated value.
Here is my test:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class CallableTest {
public static void main( String[] args ) throws InterruptedException, ExecutionException {
CallableTest test = new CallableTest();
test.runTest();
}
public List<String> setupTest(){
List<String> ids = new ArrayList<String>();
for(int i = 0; i < 1000; i++) {
ids.add( "ID_" + i );
}
return ids;
}
public void runTest() throws InterruptedException, ExecutionException {
ExecutorService executor = Executors.newFixedThreadPool(100);
List<Future<String>> futures = new ArrayList<Future<String>>();
List<String> ids = setupTest();
for(final String id : ids) {
Future<String> future = executor.submit(new Callable<String>() {
@Override
public String call() throws Exception {
return doSomethingWithId(id);
}
});
futures.add( future );
}
for(Future<String> future : futures) {
String message = future.get();
System.out.println(message);
}
}
public String doSomethingWithId(String id) {
return "Doing something with ID: " + id;
}
}
The problem in Groovy is here:
for(final String id : ids) {
Future<String> future = executor.submit(new Callable<String>() {
@Override
public String call() throws Exception {
// The ID value here is not the ID value from when the Callable was created
return doSomethingWithId(id);
}
});
futures.add( future );
}
As you can see in my comment, the id value when calling the method doSomethingWithId is not the same value as it was when the Callable was created. This leads to "Doing something with ID: ID_999" being printed the majority of the time.
If I copy this over to a Java project and run it, it runs as expected. I get Doing something with ID: ID_x from 0-999 with no duplicates.
Why doesn't this work in Groovy? My understanding is that Groovy should support anonymous inner classes, but it appears that the anonymous inner class Callable is not capturing the outer variable id state when it creates the anonymous class. I'm using Java 7 with Groovy 2.3.10.
UPDATE
I found that adding the following made this work:
for(final String id : ids) {
final String myId = id // This makes it work
Future<String> future = executor.submit(new Callable<String>() {
@Override
public String call() throws Exception {
return doSomethingWithId(myId);
}
});
futures.add( future );
}
It seems like Groovy isn't actually treating the value returned from the iterator as final?
I am trying to execute Appium tests in parallel on multiple devices. The idea is to run a JUnit runner class as a parameterized test (e.g. over a deviceList) that creates one parallel thread per device. From the runner, the test suite is invoked via JUnitCore.run. The problem is that instantiating the driver in the test cases needs the device name (probably from the runner class), but JUnitCore doesn't provide such an option (to invoke the suite/test class by instantiating it). Any help?
The code goes like this: JUnitRunner.java
@RunWith(Parallelized.class) // extension of the Parameterized runner
public class JUTRunner {
private String device;
/**
* @param device
*/
public JUTRunner(String device) {
this.device = device;
}
/**
* @return
*/
@Parameters
public static Collection<Object[]> getParameters() {
List<String> deviceList = findAllDevices();
List<Object[]> parameters = new ArrayList<Object[]>(deviceList.size());
for (String device : deviceList) {
parameters.add(new Object[] { device });
}
return parameters;
}
/**
* @return
*/
private static List<String> findAllDevices() {
return DeviceList.getInstance().getDeviceList();
}
/**
* @throws InterruptedException
*/
@Test
public void testOnDevice() throws InterruptedException {
Result result = JUnitCore.runClasses(JUTSuite.class);
// Result result = new JUnitCore().run(suite); // alternative attempt: redeclares result and 'suite' is undefined
for (Failure failure : result.getFailures()) {
System.out.println(failure.toString());
}
if (result.wasSuccessful()) {
System.out.println("All tests finished successfully...");
}
}
}
And TestCase.java
public class MyTest extends TestCase {
protected String device;
protected AppiumDriver<MobileElement> driver;
private int deviceNum;
@Rule
public TestName testName = new TestName();
@Before
protected void setUp() throws Exception {
this.driver = new AndroidDriver(device).getDriver();
}
@Test
public void testLogin() {
System.out.println(testName.getMethodName() + device);
}
}
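One possible workaround, sketched below with hypothetical names (DeviceHolder is not part of JUnit or Appium), is to park the device name in a ThreadLocal before the runner thread invokes JUnitCore, and read it back in the test's setUp(). This relies on JUnitCore running the suite on the calling thread, and on the Parallelized runner giving each device its own thread:
// Hypothetical holder shared between the parameterized runner and the tests.
public class DeviceHolder {
private static final ThreadLocal<String> CURRENT_DEVICE = new ThreadLocal<String>();
public static void setDevice(String device) {
CURRENT_DEVICE.set(device);
}
public static String getDevice() {
return CURRENT_DEVICE.get();
}
}
With such a holder, testOnDevice() would call DeviceHolder.setDevice(device) just before JUnitCore.runClasses(JUTSuite.class), and MyTest.setUp() would build the AndroidDriver from DeviceHolder.getDevice() instead of an uninitialized field.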
What is the way to read a specific sheet from an Excel file using spring-batch-excel?
Specifically, I want to parse different sheets within an Excel file in a different manner, using a org.springframework.batch.item.excel.poi.PoiItemReader.
I can't see how to do this with the PoiItemReader, in that it appears to read each sheet in the document. Is there a way to handle sheets differently in the row mapper perhaps? Is it possible without writing a custom POI reader?
There is no way without writing a custom reader.
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.FormulaEvaluator;
import org.apache.poi.ss.usermodel.Row;
import org.springframework.batch.extensions.excel.Sheet;
import org.springframework.lang.Nullable;
public class PoiSheet implements Sheet {
private final DataFormatter dataFormatter = new DataFormatter();
private final org.apache.poi.ss.usermodel.Sheet delegate;
private final int numberOfRows;
private final String name;
private FormulaEvaluator evaluator;
/**
* Constructor which takes the delegate sheet.
* @param delegate the apache POI sheet
*/
PoiSheet(final org.apache.poi.ss.usermodel.Sheet delegate) {
super();
this.delegate = delegate;
this.numberOfRows = this.delegate.getLastRowNum() + 1;
this.name = this.delegate.getSheetName();
}
/**
* {@inheritDoc}
*/
@Override
public int getNumberOfRows() {
return this.numberOfRows;
}
/**
* {@inheritDoc}
*/
@Override
public String getName() {
return this.name;
}
/**
* {@inheritDoc}
*/
@Override
@Nullable
public String[] getRow(final int rowNumber) {
final Row row = this.delegate.getRow(rowNumber);
return map(row);
}
@Nullable
private String[] map(Row row) {
if (row == null) {
return null;
}
final List<String> cells = new LinkedList<>();
final int numberOfColumns = row.getLastCellNum();
for (int i = 0; i < numberOfColumns; i++) {
Cell cell = row.getCell(i);
CellType cellType = cell.getCellType();
if (cellType == CellType.FORMULA) {
cells.add(this.dataFormatter.formatCellValue(cell, getFormulaEvaluator()));
}
else {
cells.add(this.dataFormatter.formatCellValue(cell));
}
}
return cells.toArray(new String[0]);
}
/**
* Lazy getter for the {@code FormulaEvaluator}. Takes some time to create an
* instance, so if not necessary don't create it.
* @return the {@code FormulaEvaluator}
*/
private FormulaEvaluator getFormulaEvaluator() {
if (this.evaluator == null) {
this.evaluator = this.delegate.getWorkbook().getCreationHelper().createFormulaEvaluator();
}
return this.evaluator;
}
@Override
public Iterator<String[]> iterator() {
return new Iterator<String[]>() {
private final Iterator<Row> delegateIter = PoiSheet.this.delegate.iterator();
@Override
public boolean hasNext() {
return this.delegateIter.hasNext();
}
@Override
public String[] next() {
return map(this.delegateIter.next());
}
};
}
}
Excel Reader
import java.io.File;
import java.io.FileNotFoundException;
import java.io.InputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.springframework.batch.extensions.excel.AbstractExcelItemReader;
import org.springframework.batch.extensions.excel.Sheet;
import org.springframework.core.io.Resource;
public class ExcelSheetItemReader <T> extends AbstractExcelItemReader<T> {
private Workbook workbook;
private InputStream inputStream;
private int sheetIndex = 0;
@Override
protected Sheet getSheet(final int sheet) {
return new PoiSheet(this.workbook.getSheetAt(sheetIndex));
}
@Override
protected int getNumberOfSheets() {
return 1;
}
@Override
protected void doClose() throws Exception {
super.doClose();
if (this.inputStream != null) {
this.inputStream.close();
this.inputStream = null;
}
if (this.workbook != null) {
this.workbook.close();
this.workbook = null;
}
}
/**
* Open the underlying file using the {@code WorkbookFactory}. Prefer {@code File}
* based access over an {@code InputStream}. Using a file will use fewer resources
* compared to an input stream. The latter will need to cache the whole sheet
* in-memory.
* @param resource the {@code Resource} pointing to the Excel file.
* @param password the password for opening the file
* @throws Exception is thrown for any errors.
*/
@Override
protected void openExcelFile(final Resource resource, String password) throws Exception {
try {
File file = resource.getFile();
this.workbook = WorkbookFactory.create(file, password, false);
}
catch (FileNotFoundException ex) {
this.inputStream = resource.getInputStream();
this.workbook = WorkbookFactory.create(this.inputStream, password);
}
this.workbook.setMissingCellPolicy(Row.MissingCellPolicy.CREATE_NULL_AS_BLANK);
}
public int getSheetIndex() {
return sheetIndex;
}
public void setSheetIndex(int sheetIndex) {
this.sheetIndex = sheetIndex;
}
}
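A rough wiring sketch for the reader above; the MyRow/MyRowMapper types and the file name are made up for the example, and the setResource/setRowMapper/setLinesToSkip setters are assumed from the spring-batch-extensions AbstractExcelItemReader API:
// One reader instance per sheet that needs its own parsing logic.
ExcelSheetItemReader<MyRow> reader = new ExcelSheetItemReader<MyRow>();
reader.setResource(new FileSystemResource("orders.xlsx")); // sample file name
reader.setSheetIndex(1); // zero-based: parse only the second sheet
reader.setRowMapper(new MyRowMapper()); // mapping logic specific to that sheet
reader.setLinesToSkip(1); // skip the header row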
Another example can be found here
I initialize a variable in my startApp method, and after I do so I start a thread that, for the whole run time of the app, checks which screen (canvas) is the current one and activates its thread.
import java.io.IOException;
import java.io.InputStream;
import zuma.core.MyGameCanvas;
import zuma.core.Game;
import zuma.gui.LevelSelectionMenu;
import zuma.gui.MainMenu;
import javax.microedition.lcdui.Display;
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.midlet.*;
import zuma.util.AudioManager;
public class Midlet extends MIDlet implements Runnable{
private static Display display;
private static Game game;
private static MainMenu mainMenu;
private static LevelSelectionMenu levelSelectionMenu;
private static MyGameCanvas myGameCanvas;
private Thread t;
/**
* Specifies what happens when starting the application.
*/
public void startApp() {
display = Display.getDisplay(this);
display.setCurrent(getMainMenu());
t = new Thread(this);
t.start();
}
/**
* Specifies what happens when pausing the application.
*/
public void pauseApp() {
}
/**
* Specifies what happens when exiting the application.
* @param unconditional
*/
public void destroyApp(boolean unconditional) {
//game.getLevelsRecordStore().closeRecordStore();
notifyDestroyed();
}
/**
*
* @return the display.
*/
public static Display getDisplay() {
return display;
}
/**
*
* @return the game.
*/
public static Game getGame() {
if (game == null) { //|| game.getLevels() == null) {
game = new Game();
}
return game;
}
/**
*
* @return the mainMenu.
*/
public static synchronized MainMenu getMainMenu() {
if (mainMenu == null) {
mainMenu = new MainMenu();
}
return mainMenu;
}
/**
*
* @return the levelSelectionMenu.
*/
public static LevelSelectionMenu getLevelSelectionMenu() {
if (levelSelectionMenu == null) {
levelSelectionMenu = new LevelSelectionMenu(getGame());
}
return levelSelectionMenu;
}
/**
*
* @return the myGameCanvas.
*/
public static MyGameCanvas getMyGameCanvas() {
if (myGameCanvas == null) {
myGameCanvas = new MyGameCanvas();
}
return myGameCanvas;
}
/**
* Starts the thread of the current display.
*/
public synchronized void run() {
while (true) {
if (display.getCurrent().equals(mainMenu)) {
if (mainMenu.getT() == null || !mainMenu.getT().isAlive()) {
mainMenu.init();
}
if (getMainMenu().isQuitRequest()) {
destroyApp(true);
}
}
else if (display.getCurrent().equals(levelSelectionMenu)) {
if (levelSelectionMenu.getT() == null || !levelSelectionMenu.getT().isAlive()) {
levelSelectionMenu.init();
}
}
else if (display.getCurrent().equals(myGameCanvas)) {
if (myGameCanvas.getT() == null || !myGameCanvas.getT().isAlive()) {
myGameCanvas.init(game.getLevels()[game.getChosenLevel()]);
}
}
try {
Thread.sleep(10);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
}
Now, as you can see, I first initialize the variable mainMenu on this line:
display.setCurrent(getMainMenu());
and only then start the thread that runs the run method, but somehow SOMETIMES it gives me a NullPointerException on this line:
if (display.getCurrent().equals(mainMenu))
So somehow the thread is running even before mainMenu has been created, and I have no idea how. What I tried is making the run method and the getMainMenu method synchronized, but it didn't work. Can anyone help me?
The thread is not getting called before the main menu is created. After setCurrent is called, there is actually a delay before the displayable is made visible. The doc says: "The setCurrent() method returns immediately, without waiting for the change to take place. Because of this delay, a call to getCurrent() shortly after a call to setCurrent() is unlikely to return the value passed to setCurrent()."
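A simple way to cope with that window is to guard the loop: skip the iteration while getCurrent() still returns null, and compare against the lazy getters rather than the raw fields. This is only a sketch of how the existing run() loop could be hardened (Displayable comes from javax.microedition.lcdui), not tested on a device:
public synchronized void run() {
while (true) {
// getCurrent() may still return null (or the previous screen) shortly after setCurrent()
Displayable current = display.getCurrent();
if (current != null) {
if (current.equals(getMainMenu())) {
// ... existing main menu handling ...
} else if (current.equals(getLevelSelectionMenu())) {
// ... existing level selection handling ...
} else if (current.equals(getMyGameCanvas())) {
// ... existing game canvas handling ...
}
}
try {
Thread.sleep(10);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}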
I have a large 'order form' XPage that displays 99 rows, with 3 text input boxes on each row.
To capture changes, I have placed a call to a SSJS function in the 'onchange' event of each input box.
The call simply sends the product ID, the type of change (which column) and the quantity.
The SSJS function then preserves those changes in a sessionScope variable (java.util.HashMap).
There is no refresh associated with the change.
The changes are processed en masse when the user clicks the 'Submit' button.
That is another SSJS function that simply writes all of the changes to the back-end Domino database.
That all seems to work fine and has done for a couple of years.
However, it seems my users are becoming too efficient with the application, and are typing faster than it can keep up.
My debug code writes each change to the server's console, and I can see where some changes are simply ignored if the user makes changes in quick succession (they simply tab between the input boxes). It's almost as if the server is too busy processing the previous change and skips one to move on to another. At times, whole blocks of changes are missed, and then the application picks back up when it can.
Am I using the wrong technique to capture the changes? Is there something I can do to ensure the application initiates the onchange event each and every time?
I have tested this using IE8/9 & FF24.
I have looked at other such posts that propose using the 'onkeyup' event instead. I don't think that would work in my case, as the users may order double-digit quantities.
Any/all suggestions would be gratefully appreciated!
Terry,
you need to revisit the architecture. If the updates are processed on submit, why bother to send them individually to the server - as Tim nicely pointed out. What I would do:
create 2 Java classes: one "Order", one "LineItem"
Let the Order class implement the Map interface (Map<String, LineItem>)
Use the Order class for your repeat control (it will give you the key of each LineItem as the repeat variable)
Bind the fields inside the repeat to Order[RepeatKey].fieldName
Use Order in an object data source
Implement the save method in the Order class and call it in the save method of the object data source
Very rough outline, let me know if you need me to elaborate. The Java Collections Framework is your friend.
It is easier than it looks:
public class LineItem {
private String unid;
private String partno;
private int quantity;
private long unitprice;
/**
* Constructor for new items
*/
public LineItem() {
this.unid = null;
}
/**
* Constructor for existing items
*/
public LineItem(Document doc) {
this.unid = doc.getUniversalId();
// more here
}
/**
* @return the unid
*/
public String getUnid() {
return this.unid;
}
/**
* @return the partno
*/
public String getPartno() {
return this.partno;
}
/**
* @param partno the partno to set
*/
public void setPartno(String partno) {
this.partno = partno;
}
/**
* @return the quantity
*/
public int getQuantity() {
return this.quantity;
}
/**
* @param quantity the quantity to set
*/
public void setQuantity(int quantity) {
this.quantity = quantity;
}
/**
* @return the unitprice
*/
public long getUnitprice() {
return this.unitprice;
}
/**
* @param unitprice the unitprice to set
*/
public void setUnitprice(long unitprice) {
this.unitprice = unitprice;
}
public void save(Database db) {
Document doc = null;
if (this.unid == null) {
doc = db.createDocument();
doc.replaceItem("Form", "LineItem");
} else {
// Existing item: re-open its backing document (same accessor as used in Order.save below)
doc = db.getDocumentByUnid(this.unid);
}
doc.replaceItem("PartNo", this.partno);
// More here
doc.save();
}
}
And for the Order, presuming you load from a document collection:
public class Order implements Map<String, LineItem> {
// You might want to have a stack here to keep order
private final Map<String, LineItem> backingMap = new LinkedHashMap<String, LineItem>();
private final Set<String> deletedItemKeys = new HashSet<String>();
// The key we use for new items when unid is null
private int lastNewItemNumber = 0;
@Override
public int size() {
return this.backingMap.size();
}
@Override
public boolean isEmpty() {
return this.backingMap.isEmpty();
}
@Override
public boolean containsKey(Object key) {
return this.backingMap.containsKey(key);
}
@Override
public boolean containsValue(Object value) {
return this.backingMap.containsValue(value);
}
@Override
public LineItem get(Object key) {
return this.backingMap.get(key);
}
@Override
public LineItem put(String key, LineItem value) {
// Here it gets a little special
// We need to prevent null keys
if (key == null) {
key = String.valueOf(this.lastNewItemNumber);
lastNewItemNumber++;
}
this.deletedItemKeys.remove(key);
return this.backingMap.put(key, value);
}
@Override
public LineItem remove(Object key) {
this.deletedItemKeys.add(key.toString());
return this.backingMap.remove(key);
}
@Override
public void putAll(Map<? extends String, ? extends LineItem> m) {
for (Map.Entry<? extends String, ? extends LineItem> me : m.entrySet()) {
this.put(me.getKey(), me.getValue());
}
}
@Override
public void clear() {
this.deletedItemKeys.addAll(this.backingMap.keySet());
this.backingMap.clear();
}
@Override
public Set<String> keySet() {
return this.backingMap.keySet();
}
@Override
public Collection<LineItem> values() {
return this.backingMap.values();
}
@Override
public Set<java.util.Map.Entry<String, LineItem>> entrySet() {
return this.backingMap.entrySet();
}
public void load(NotesDocumentCollection dc) throws NotesException {
Document doc = dc.getFirstDocument();
Document nextDoc;
while (doc != null) {
nextDoc = dc.getNextDocument(doc);
LineItem li = new LineItem(doc);
this.put(doc.getUniversalId(), li);
doc.recycle();
doc = nextDoc;
}
}
public void save(Database db) {
for (LineItem item : this.backingMap.values()) {
item.save(db);
}
// Now kill the left overs - needs error handling
for (String morituri : this.deletedItemKeys) {
Document delDoc = db.getDocumentByUnid(morituri);
if (delDoc != null) {
delDoc.remove(true);
}
}
}
}
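As a rough usage sketch (the surrounding data source hooks are only indicated in comments, they are not exact XPages API calls), loading and saving the whole order then boils down to:
// Hypothetical glue code, e.g. behind an object data source (error handling omitted).
Order order = new Order();
order.load(lineItemCollection); // NotesDocumentCollection of the existing LineItem documents
// Fields in the repeat control bind to order[repeatKey].partno, order[repeatKey].quantity, ...
// When the user submits, the data source's save method simply delegates:
order.save(database); // saves every LineItem and removes the deleted ones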