I am using Selenium WebDriver to register in my application.
Now I want to take the values of username, password, confirm password, email, and telephone number. I have used a for loop to iterate over these, but as the app grows, more fields get added on the same page, and I have to keep changing the limit value in the for loop.
Can I use an Iterator to check a row, so that if a cell is empty the loop stops and control is returned to the test case?
Can anyone let me know how to use an Iterator for this?
package com.xcha.selenium.utitlity;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
public class ReadExcel extends GlobalVariables {
/**
* @param args
* @throws IOException
*/
public static ArrayList<String> readExcel(int rowcounter) throws IOException {
XSSFWorkbook srcBook = new XSSFWorkbook("./prop.xlsx");
XSSFSheet sourceSheet = srcBook.getSheetAt(0);
int rownum = rowcounter;
XSSFRow sourceRow = sourceSheet.getRow(rownum);
int lastcellNum = sourceRow.getLastCellNum() - 1;
int lastrowNum = sourceSheet.getLastRowNum() - 1;
ArrayList<String> rowValues = new ArrayList<String>();
for (int i = 0; i <= lastcellNum; i++) {
String val = (sourceRow.getCell(i)).getRichStringCellValue()
.getString();
System.out.println(val);
rowValues.add(val);
}
System.out.println("Printing Arraylist");
for (String strVal : rowValues) {
System.out.println(strVal);
}
return rowValues;
}
}
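One way to stop at the first empty cell is to walk the row with an Iterator and break when a blank value appears, so the loop needs no hard-coded limit. The sketch below shows the idea with plain strings; with POI you would iterate `sourceRow.cellIterator()` and test each cell for a blank value instead. The class and method names here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RowValues {
    // Collect values until the first empty cell is reached, so the loop
    // needs no hard-coded upper bound. With POI, wrap
    // sourceRow.cellIterator() and check for blank cells instead.
    public static List<String> readUntilBlank(Iterator<String> cells) {
        List<String> values = new ArrayList<>();
        while (cells.hasNext()) {
            String value = cells.next();
            if (value == null || value.trim().isEmpty()) {
                break; // empty cell: stop and return control to the test case
            }
            values.add(value);
        }
        return values;
    }

    public static void main(String[] args) {
        List<String> row = Arrays.asList("user", "secret", "secret", "", "ignored");
        System.out.println(readUntilBlank(row.iterator())); // [user, secret, secret]
    }
}
```

Because the loop keys off the data rather than a counter, adding more fields to the page (and more cells to the row) requires no change to the reading code.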
I don't understand what is wrong with the statement while((p = scan.next()) != null). Using a Scanner object, I want to print the lines that follow a match of if(p.startsWith("START")), but the Scanner object throws a java.util.NoSuchElementException in main.
import java.io.File;
import java.io.IOException;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.encryption.InvalidPasswordException;
import org.apache.pdfbox.text.PDFTextStripper;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
public class ReadWriteTest {
static int i;
/**
* @param args
* @throws IOException
* @throws InvalidPasswordException
*/
public static void main(String[] args) throws InvalidPasswordException, IOException {
POIFSFileSystem fs = null;
PDDocument pdDoc = null;
String target_dir = "E:\\TEST_pdfs";
File dir = new File(target_dir);
File[] files = dir.listFiles();
{
for ( int s=0;s<files.length;s++){
if(files[s].isFile()){
pdDoc = PDDocument.load(files[s]);
//fs = new POIFSFileSystem(new FileInputStream(files[s]));
PDFTextStripper Stripper = new PDFTextStripper();
String st = Stripper.getText(pdDoc);
String linesp = System.lineSeparator();
String[] paragraphs = st.split(linesp);
for(String p: paragraphs){
Scanner scan = new Scanner(p);
while((p=scan.next())!=null) {
if(p.startsWith("START"))
do{
i++;
String nextline = scan.next();
System.out.println(nextline);
}while(i<5);
}
}
}
}}}}
Error :
Exception in thread "main" java.util.NoSuchElementException
at java.util.Scanner.throwFor(Scanner.java:862)
at java.util.Scanner.next(Scanner.java:1371)
at ReadWriteTest.main(ReadWriteTest.java:35)
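Scanner.next() never returns null; when the input is exhausted it throws NoSuchElementException, which is exactly what the stack trace shows. Every call to next() should be guarded by hasNext(). A minimal sketch of that pattern (the class name and sample text are made up for illustration):

```java
import java.util.Scanner;

public class ScanAfterStart {
    // Returns up to `limit` tokens that follow the first token starting
    // with "START". Every next() call is guarded by hasNext(), so the
    // Scanner can never throw NoSuchElementException.
    public static String tokensAfterStart(String text, int limit) {
        StringBuilder sb = new StringBuilder();
        Scanner scan = new Scanner(text);
        while (scan.hasNext()) {
            String token = scan.next();
            if (token.startsWith("START")) {
                int taken = 0;
                while (scan.hasNext() && taken < limit) {
                    sb.append(scan.next()).append(' ');
                    taken++;
                }
                break;
            }
        }
        scan.close();
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(tokensAfterStart("ignore START one two three", 5));
        // prints: one two three
    }
}
```

The same guard applies to the original code: replacing `while((p=scan.next())!=null)` with `while(scan.hasNext())` and reading inside the loop removes the exception.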
The program reads an input.txt file and compares it with a target file.
If more than 3 words are the same as in the input file, the program should say "Plagiarized from" the source.
I used substring, so the program only compares the first 3 letters.
Should I tokenize the lines instead? How can the words be compared?
Here is my code
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Scanner;
public class CheckPlagiarism{
public static void main(String args[]) throws FileNotFoundException
{
//Init HashMap
HashMap<String, Integer> corpus = new HashMap<>();
String fileName = args[0];
String tragetName = args[1];
int matchCount = Integer.parseInt(args[2]);
Scanner scanner = new Scanner(new File(fileName));
while(scanner.hasNext())
{
String[] line = scanner.nextLine().split(":");
corpus.put(line[1], Integer.parseInt(line[0]));
}
boolean found = false;
scanner = new Scanner(new File(tragetName));
while(scanner.hasNext())
{
String line = scanner.nextLine();
line = line.substring(0, matchCount);
for(Entry<String, Integer> temp: corpus.entrySet()){
String key=temp.getKey();
if(key.contains(line))
{
System.out.println("Plagiarized from " + temp.getValue());
found = true;
break;
}
}
}
if(!found)
{
System.out.println("Not Plagiarized");
}
}
}
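To compare the first 3 words rather than the first 3 letters, tokenize each line with `split("\\s+")` and compare the resulting word arrays. A minimal sketch of that comparison (the class and method names are made up for illustration):

```java
import java.util.Arrays;

public class WordCompare {
    // True if the first n whitespace-separated words of both strings
    // match; substring(0, n) would only compare the first n characters.
    public static boolean firstWordsMatch(String a, String b, int n) {
        String[] wa = a.trim().split("\\s+");
        String[] wb = b.trim().split("\\s+");
        if (wa.length < n || wb.length < n) {
            return false; // too few words to compare
        }
        return Arrays.equals(Arrays.copyOfRange(wa, 0, n),
                             Arrays.copyOfRange(wb, 0, n));
    }

    public static void main(String[] args) {
        System.out.println(firstWordsMatch("the quick brown fox",
                                           "the quick brown dog", 3)); // true
        System.out.println(firstWordsMatch("the quick brown fox",
                                           "the quick brown dog", 4)); // false
    }
}
```

In the loop over the corpus, the `key.contains(line)` check on a 3-character substring could then be replaced by a word-level check along these lines.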
I'm using a pipeline to cluster text documents. The last stage in the pipeline is ml.clustering.KMeans, which gives me a DataFrame with a column of cluster predictions. I would like to add the cluster centers as a column as well. I understand I can execute Vector[] clusterCenters = kmeansModel.clusterCenters(); and then convert the results into a DataFrame and join them to the other DataFrame, but I was hoping to find a way to accomplish this similar to the KMeans code below:
KMeans kMeans = new KMeans()
.setFeaturesCol("pca")
.setPredictionCol("kmeansclusterprediction")
.setK(5)
.setInitMode("random")
.setSeed(43L)
.setInitSteps(3)
.setMaxIter(15);
pipeline.setStages( ...
I was able to extend KMeans and call the fit method via a pipeline; however, I'm not having any luck extending KMeansModel. The constructor requires a String uid and a KMeansModel, but I don't know how to pass in the model when defining the stages and calling the setStages method.
I also looked into extending KMeans.scala; however, as a Java developer I only understand about half the code, so I'm hoping someone may have an easier solution before I tackle that. Ultimately I would like to end up with a DataFrame as follows:
+--------------------+-----------------------+--------------------+
| docid|kmeansclusterprediction|kmeansclustercenters|
+--------------------+-----------------------+--------------------+
|2bcbcd54-c11a-48c...| 2| [-0.04, -7.72]|
|0e644620-f5ff-40f...| 3| [0.23, 1.08]|
|665c1c2b-3065-4e8...| 3| [0.23, 1.08]|
|598c6268-e4b9-4c9...| 0| [-15.81, 0.01]|
+--------------------+-----------------------+--------------------+
Any help or hints is greatly appreciated.
Thank you
Answering my own question: this was actually easy. I extended KMeans and KMeansModel; the extended KMeans fit method must return the extended KMeansModel. For example:
public class AnalyticsKMeansModel extends KMeansModel ...
public class AnalyticsKMeans extends org.apache.spark.ml.clustering.KMeans { ...
public AnalyticsKMeansModel fit(DataFrame dataset) {
JavaRDD<Vector> javaRDD = dataset.select(this.getFeaturesCol()).toJavaRDD().map(new Function<Row, Vector>(){
private static final long serialVersionUID = -4588981547209486909L;
@Override
public Vector call(Row row) throws Exception {
Object point = row.getAs("pca");
Vector vector = (Vector)point;
return vector;
}
});
RDD<Vector> rdd = JavaRDD.toRDD(javaRDD);
org.apache.spark.mllib.clustering.KMeans algo = new org.apache.spark.mllib.clustering.KMeans()
        .setK(BoxesRunTime.unboxToInt(this.$((Param<?>) this.k())))
        .setInitializationMode((String) this.$(this.initMode()))
        .setInitializationSteps(BoxesRunTime.unboxToInt((Object) this.$((Param<?>) this.initSteps())))
        .setMaxIterations(BoxesRunTime.unboxToInt((Object) this.$((Param<?>) this.maxIter())))
        .setSeed(BoxesRunTime.unboxToLong((Object) this.$((Param<?>) this.seed())))
        .setEpsilon(BoxesRunTime.unboxToDouble((Object) this.$((Param<?>) this.tol())));
org.apache.spark.mllib.clustering.KMeansModel parentModel = algo.run(rdd);
AnalyticsKMeansModel model = new AnalyticsKMeansModel(this.uid(), parentModel);
return (AnalyticsKMeansModel) this.copyValues((Params)model, this.copyValues$default$2());
}
Once I changed the fit method to return my extended KMeansModel class everything worked as expected.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.ml.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import AnalyticsCluster;
public class AnalyticsKMeansModel extends KMeansModel {
private static final long serialVersionUID = -8893355418042946358L;
public AnalyticsKMeansModel(String uid, org.apache.spark.mllib.clustering.KMeansModel parentModel) {
super(uid, parentModel);
}
public DataFrame transform(DataFrame dataset) {
Vector[] clusterCenters = super.clusterCenters();
List<AnalyticsCluster> analyticsClusters = new ArrayList<AnalyticsCluster>();
for (int i=0; i<clusterCenters.length;i++){
Integer clusterId = super.predict(clusterCenters[i]);
Vector vector = clusterCenters[i];
double[] point = vector.toArray();
AnalyticsCluster analyticsCluster = new AnalyticsCluster(clusterId, point, 0L);
analyticsClusters.add(analyticsCluster);
}
JavaSparkContext jsc = JavaSparkContext.fromSparkContext(dataset.sqlContext().sparkContext());
JavaRDD<AnalyticsCluster> javaRDD = jsc.parallelize(analyticsClusters);
JavaRDD<Row> javaRDDRow = javaRDD.map(new Function<AnalyticsCluster, Row>() {
private static final long serialVersionUID = -2677295862916670965L;
@Override
public Row call(AnalyticsCluster cluster) throws Exception {
Row row = RowFactory.create(
String.valueOf(cluster.getID()),
String.valueOf(Arrays.toString(cluster.getCenter()))
);
return row;
}
});
List<StructField> schemaColumns = new ArrayList<StructField>();
schemaColumns.add(DataTypes.createStructField(this.getPredictionCol(), DataTypes.StringType, false));
schemaColumns.add(DataTypes.createStructField("clusterpoint", DataTypes.StringType, false));
StructType dataFrameSchema = DataTypes.createStructType(schemaColumns);
DataFrame clusterPointsDF = dataset.sqlContext().createDataFrame(javaRDDRow, dataFrameSchema);
//SOMETIMES "K" IS SET TO A VALUE GREATER THAN THE NUMBER OF ACTUAL ROWS OF DATA ... GET DISTINCT VALUES
clusterPointsDF.registerTempTable("clusterPoints");
DataFrame clustersDF = clusterPointsDF.sqlContext().sql("select distinct " + this.getPredictionCol()+ ", clusterpoint from clusterPoints");
clustersDF.cache();
clusterPointsDF.sqlContext().dropTempTable("clusterPoints");
DataFrame transformedDF = super.transform(dataset);
transformedDF.cache();
DataFrame df = transformedDF.join(clustersDF,
transformedDF.col(this.getPredictionCol()).equalTo(clustersDF.col(this.getPredictionCol())), "inner")
.drop(clustersDF.col(this.getPredictionCol()));
return df;
}
}
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.ml.param.Param;
import org.apache.spark.ml.param.Params;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import scala.runtime.BoxesRunTime;
public class AnalyticsKMeans extends org.apache.spark.ml.clustering.KMeans {
private static final long serialVersionUID = 8943702485821267996L;
private static String uid = null;
public AnalyticsKMeans(String uid){
AnalyticsKMeans.uid = uid;
}
public AnalyticsKMeansModel fit(DataFrame dataset) {
JavaRDD<Vector> javaRDD = dataset.select(this.getFeaturesCol()).toJavaRDD().map(new Function<Row, Vector>(){
private static final long serialVersionUID = -4588981547209486909L;
@Override
public Vector call(Row row) throws Exception {
Object point = row.getAs("pca");
Vector vector = (Vector)point;
return vector;
}
});
RDD<Vector> rdd = JavaRDD.toRDD(javaRDD);
org.apache.spark.mllib.clustering.KMeans algo = new org.apache.spark.mllib.clustering.KMeans()
        .setK(BoxesRunTime.unboxToInt(this.$((Param<?>) this.k())))
        .setInitializationMode((String) this.$(this.initMode()))
        .setInitializationSteps(BoxesRunTime.unboxToInt((Object) this.$((Param<?>) this.initSteps())))
        .setMaxIterations(BoxesRunTime.unboxToInt((Object) this.$((Param<?>) this.maxIter())))
        .setSeed(BoxesRunTime.unboxToLong((Object) this.$((Param<?>) this.seed())))
        .setEpsilon(BoxesRunTime.unboxToDouble((Object) this.$((Param<?>) this.tol())));
org.apache.spark.mllib.clustering.KMeansModel parentModel = algo.run(rdd);
AnalyticsKMeansModel model = new AnalyticsKMeansModel(this.uid(), parentModel);
return (AnalyticsKMeansModel) this.copyValues((Params)model, this.copyValues$default$2());
}
}
import java.io.Serializable;
import java.util.Arrays;
public class AnalyticsCluster implements Serializable {
private static final long serialVersionUID = 6535671221958712594L;
private final int id;
private volatile double[] center;
private volatile long count;
public AnalyticsCluster(int id, double[] center, long initialCount) {
// Preconditions.checkArgument(center.length > 0);
// Preconditions.checkArgument(initialCount >= 1);
this.id = id;
this.center = center;
this.count = initialCount;
}
public int getID() {
return id;
}
public double[] getCenter() {
return center;
}
public long getCount() {
return count;
}
public synchronized void update(double[] newPoint, long newCount) {
int length = center.length;
// Preconditions.checkArgument(length == newPoint.length);
double[] newCenter = new double[length];
long newTotalCount = newCount + count;
double newToTotal = (double) newCount / newTotalCount;
for (int i = 0; i < length; i++) {
double centerI = center[i];
newCenter[i] = centerI + newToTotal * (newPoint[i] - centerI);
}
center = newCenter;
count = newTotalCount;
}
@Override
public synchronized String toString() {
return id + " " + Arrays.toString(center) + " " + count;
}
// public static void main(String[] args) {
// double[] point = new double[2];
// point[0] = 0.10150532938119154;
// point[1] = -0.23734759238651829;
//
// Cluster cluster = new Cluster(1,point, 10L);
// System.out.println("cluster: " + cluster.toString());
// }
}
What is the way of reading a specific sheet from an Excel file using spring-batch-excel?
Specifically, I want to parse different sheets within an Excel file in different manners, using an org.springframework.batch.item.excel.poi.PoiItemReader.
I can't see how to do this with the PoiItemReader, in that it appears to read every sheet in the document. Is there a way to handle sheets differently in the row mapper, perhaps? Is it possible without writing a custom POI reader?
There is no way without writing a custom reader.
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.FormulaEvaluator;
import org.apache.poi.ss.usermodel.Row;
import org.springframework.batch.extensions.excel.Sheet;
import org.springframework.lang.Nullable;
public class PoiSheet implements Sheet {
private final DataFormatter dataFormatter = new DataFormatter();
private final org.apache.poi.ss.usermodel.Sheet delegate;
private final int numberOfRows;
private final String name;
private FormulaEvaluator evaluator;
/**
* Constructor which takes the delegate sheet.
* @param delegate the Apache POI sheet
*/
PoiSheet(final org.apache.poi.ss.usermodel.Sheet delegate) {
super();
this.delegate = delegate;
this.numberOfRows = this.delegate.getLastRowNum() + 1;
this.name = this.delegate.getSheetName();
}
/**
* {@inheritDoc}
*/
@Override
public int getNumberOfRows() {
return this.numberOfRows;
}
/**
* {@inheritDoc}
*/
@Override
public String getName() {
return this.name;
}
/**
* {@inheritDoc}
*/
@Override
@Nullable
public String[] getRow(final int rowNumber) {
final Row row = this.delegate.getRow(rowNumber);
return map(row);
}
@Nullable
private String[] map(Row row) {
if (row == null) {
return null;
}
final List<String> cells = new LinkedList<>();
final int numberOfColumns = row.getLastCellNum();
for (int i = 0; i < numberOfColumns; i++) {
Cell cell = row.getCell(i);
CellType cellType = cell.getCellType();
if (cellType == CellType.FORMULA) {
cells.add(this.dataFormatter.formatCellValue(cell, getFormulaEvaluator()));
}
else {
cells.add(this.dataFormatter.formatCellValue(cell));
}
}
return cells.toArray(new String[0]);
}
/**
* Lazy getter for the {@code FormulaEvaluator}. Takes some time to create an
* instance, so if not necessary don't create it.
* @return the {@code FormulaEvaluator}
*/
private FormulaEvaluator getFormulaEvaluator() {
if (this.evaluator == null) {
this.evaluator = this.delegate.getWorkbook().getCreationHelper().createFormulaEvaluator();
}
return this.evaluator;
}
@Override
public Iterator<String[]> iterator() {
return new Iterator<String[]>() {
private final Iterator<Row> delegateIter = PoiSheet.this.delegate.iterator();
@Override
public boolean hasNext() {
return this.delegateIter.hasNext();
}
@Override
public String[] next() {
return map(this.delegateIter.next());
}
};
}
}
Excel Reader
import java.io.File;
import java.io.FileNotFoundException;
import java.io.InputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.springframework.batch.extensions.excel.AbstractExcelItemReader;
import org.springframework.batch.extensions.excel.Sheet;
import org.springframework.core.io.Resource;
public class ExcelSheetItemReader <T> extends AbstractExcelItemReader<T> {
private Workbook workbook;
private InputStream inputStream;
private int sheetIndex = 0;
@Override
protected Sheet getSheet(final int sheet) {
return new PoiSheet(this.workbook.getSheetAt(sheetIndex));
}
@Override
protected int getNumberOfSheets() {
return 1;
}
@Override
protected void doClose() throws Exception {
super.doClose();
if (this.inputStream != null) {
this.inputStream.close();
this.inputStream = null;
}
if (this.workbook != null) {
this.workbook.close();
this.workbook = null;
}
}
/**
* Open the underlying file using the {@code WorkbookFactory}. Prefer {@code File}
* based access over an {@code InputStream}. Using a file will use fewer resources
* compared to an input stream. The latter will need to cache the whole sheet
* in-memory.
* @param resource the {@code Resource} pointing to the Excel file.
* @param password the password for opening the file
* @throws Exception is thrown for any errors.
*/
@Override
protected void openExcelFile(final Resource resource, String password) throws Exception {
try {
File file = resource.getFile();
this.workbook = WorkbookFactory.create(file, password, false);
}
catch (FileNotFoundException ex) {
this.inputStream = resource.getInputStream();
this.workbook = WorkbookFactory.create(this.inputStream, password);
}
this.workbook.setMissingCellPolicy(Row.MissingCellPolicy.CREATE_NULL_AS_BLANK);
}
public int getSheetIndex() {
return sheetIndex;
}
public void setSheetIndex(int sheetIndex) {
this.sheetIndex = sheetIndex;
}
}
Another example can be found here
I need to change the style of arbitrary cells in a TableView which has an variable number of columns. The code below shows the basic problem.
The ExampleRow class is a proxy for the real data, which comes from a spreadsheet; its other function is to hold the highlighting information. Since I can't know how many columns there will be, I just hold a list of columns that should be highlighted (column re-arrangement won't be supported). The ExampleTableCell class just sets the text for the cell and applies the highlight if needed.
If I set a highlight before the table gets drawn [cell (2,2)] then the cell correctly gets displayed with red text when the application starts. The problem is clicking the button sets cell (1,1) to be highlighted but the table doesn't change. If I resize the application window to nothing then open it back up again the highlighting of cell (1,1) is correctly drawn - presumably because this process forces a full redraw.
What I would like to know is how can I trigger the table to redraw newly highlighted cells (or all visible cells) so the styling is correct?
TIA
package example;
import java.util.HashSet;
import java.util.Set;
import javafx.application.Application;
import javafx.beans.property.SimpleIntegerProperty;
import javafx.beans.property.SimpleObjectProperty;
import javafx.beans.value.ObservableValue;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.TableCell;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.scene.layout.BorderPane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.util.Callback;
public class CellHighlightExample extends Application {
private final int columnCount = 4;
private final int rowCount = 5;
private TableView<ExampleRow> table = new TableView<>();
@Override
public void start(Stage stage) {
BorderPane root = new BorderPane();
Scene scene = new Scene(root);
Callback<TableColumn.CellDataFeatures<ExampleRow, String>, ObservableValue<String>> cellValueFactory = new Callback<TableColumn.CellDataFeatures<ExampleRow, String>, ObservableValue<String>>() {
@Override
public ObservableValue<String> call(TableColumn.CellDataFeatures<ExampleRow, String> p) {
int row = p.getValue().getRow();
int col = p.getTableView().getColumns().indexOf(p.getTableColumn());
return new SimpleObjectProperty<>("(" + row + ", " + col + ")");
}
};
Callback<TableColumn<ExampleRow, String>, TableCell<ExampleRow, String>> cellFactory = new Callback<TableColumn<ExampleRow, String>, TableCell<ExampleRow, String>>() {
@Override
public TableCell<ExampleRow, String> call(TableColumn<ExampleRow, String> p) {
return new ExampleTableCell<>();
}
};
for (int i = 0, n = columnCount; i < n; i++) {
TableColumn<ExampleRow, String> column = new TableColumn<>();
column.setCellValueFactory(cellValueFactory);
column.setCellFactory(cellFactory);
table.getColumns().add(column);
}
ObservableList<ExampleRow> rows = FXCollections.observableArrayList();
for (int i = 0, n = rowCount; i < n; i++) {
ExampleRow row = new ExampleRow(i);
//Force a cell to be highlighted to show that highlighting works.
if (i == 2) { row.addHighlightedColumn(2); }
rows.add(row);
}
table.setItems(rows);
Button b = new Button("Click to Highlight");
b.setOnAction(new EventHandler<ActionEvent>() {
@Override
public void handle(ActionEvent t) {
ExampleRow row = table.getItems().get(1);
row.addHighlightedColumn(1);
//How to trigger a redraw of the table or cell to reflect the new highlighting?
}
});
root.setTop(b);
root.setCenter(table);
stage.setScene(scene);
stage.show();
}
public static void main(String[] args) {
launch(args);
}
private class ExampleTableCell<S extends ExampleRow, T extends String> extends TableCell<S, T> {
@Override
public void updateItem(T item, boolean empty) {
super.updateItem(item, empty);
if (item == null) {
setText(null);
setGraphic(null);
} else {
setText(item);
int colIndex = getTableView().getColumns().indexOf(getTableColumn());
ExampleRow row = getTableView().getItems().get(getIndex());
if (row.isHighlighted(colIndex)) {
setTextFill(Color.RED);
}
}
}
}
private class ExampleRow {
private SimpleIntegerProperty row;
private Set<Integer> highlightedColumns = new HashSet<>();
public ExampleRow(int row) {
this.row = new SimpleIntegerProperty(row);
}
public int getRow() { return row.get(); }
public void setRow(int row) { this.row.set(row); }
public SimpleIntegerProperty rowProperty() { return row; }
public boolean isHighlighted(int col) {
if (highlightedColumns.contains(col)) {
return true;
}
return false;
}
public void addHighlightedColumn(int col) {
highlightedColumns.add(col);
}
}
}
There are lots of discussions about this problem, namely refreshing a TableView after altering its items.
See
JavaFX 2.1 TableView refresh items
Issues
http://javafx-jira.kenai.com/browse/RT-21822
http://javafx-jira.kenai.com/browse/RT-22463
http://javafx-jira.kenai.com/browse/RT-22599
The solution is to trigger the TableView's internal update mechanism. Some suggest removing the TableView items and adding them back again, but the simplest workaround for your case seems to be:
b.setOnAction(new EventHandler<ActionEvent>() {
@Override
public void handle(ActionEvent t) {
ExampleRow row = table.getItems().get(1);
row.addHighlightedColumn(1);
//How to trigger a redraw of the table or cell to reflect the new highlighting?
// Workaround
table.getColumns().get(0).setVisible(false);
table.getColumns().get(0).setVisible(true);
}
});
which was found in the issue comments linked above. Whether this is a real workaround or only the illusion of one is something you will need to dig into yourself.