How can I use "ssc.sparkContext()" in foreachRDD of Spark Streaming?
If I use "ssc.sparkContext()" as it is in foreachRDD (JAVA) (basically, something like ssc.sparkContext().broadcast(map)), then I get "Task not serializable" error.
If I use "(new JavaSparkContext(rdd.context())).broadcast(map)" then there is no problem.
So, basically is "ssc.sparkContext()" equivalent to "(new JavaSparkContext(rdd.context()))"?
And if I use "(new JavaSparkContext(rdd.context())).broadcast(map)" will the broadcast variable i.e. associated "map" get distributed to all executors in SparkContext.
The code is given below.
Here, "bcv.broadcastVar = (new JavaSparkContext(rdd.context())).broadcast(map);" works, but "bcv.broadcastVar = ssc.sparkContext().broadcast(map);" does not:
words.foreachRDD(new Function<JavaRDD<String>, Void>() {
@Override
public Void call(JavaRDD<String> rdd) throws Exception {
if (rdd != null) {
System.out.println("Hello World - words - SSC !!!"); // Gets printed on Driver
if (stat.data_changed == 1) {
stat.data_changed = 0;
bcv.broadcastVar.unpersist(); // Unpersist BC variable
bcv.broadcastVar = (new JavaSparkContext(rdd.context())).broadcast(map); // Re-broadcast same BC variable with NEW data
}
}
rdd.foreachPartition(new VoidFunction<Iterator<String>>() {
@Override
public void call(Iterator<String> items) throws Exception {
System.out.println("words.foreachRDD.foreachPartition: CALLED ..."); // Gets called on Worker/Executor
Integer index = 1;
String lastKey = "";
Integer lastValue = 0;
while (true) {
String key = "A" + Long.toString(index);
Integer value = bcv.broadcastVar.value().get(key); // Executor Consumes map
if (value == null) break;
lastKey = key;
lastValue = value;
index++;
}
System.out.println("Executor BC: key/value: " + lastKey + " = " + lastValue);
return;
}
});
return null;
}
});
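For completeness, the difference boils down to the stripped-down sketch below (not the full job; "map" and "bcv" are the same driver-side objects as above, and my understanding is that referencing "ssc" drags the enclosing, non-serializable streaming context into the closure, while "rdd.context()" is resolved on the driver inside the call):
// Stripped-down sketch of the two variants (same foreachRDD shape as above).
words.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        // Variant 1: fails with "Task not serializable"
        // bcv.broadcastVar = ssc.sparkContext().broadcast(map);

        // Variant 2: works - wraps the SparkContext that backs this RDD
        bcv.broadcastVar = (new JavaSparkContext(rdd.context())).broadcast(map);
        // Presumably equivalent: JavaSparkContext.fromSparkContext(rdd.context()).broadcast(map)

        return null;
    }
});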
I am new to Apache Spark and am trying to run a custom nearest-neighbor algorithm on an RDD that has been partitioned into 2 parts using a custom partitioner. The JavaPairRDD contains the graph details and the random objects created on the graph.
According to my logic, I am building a subgraph for each partition and running a custom algorithm on each subgraph. It seems to be working, although not properly. I am not sure if this is the correct way to apply an action to each partition. I am adding my code and the results as well. Comments and suggestions are highly appreciated.
// <Partition_Index_Key, Map<Source_vertex, Map<Destination Vertex, Tuple2<Edge_Length, ArrayList of Random Objects>>
JavaPairRDD<Object, Map<Object, Map<Object, Tuple2<Double, ArrayList<RoadObject>>>>> adjVertForSubgraphsRDD = jscontext
.parallelizePairs(adjacentVerticesForSubgraphs)
.partitionBy(new CustomPartitioner(CustomPartitionSize));
//applying foreachPartition action on JavaPairRDD
adjVertForSubgraphsRDD.foreachPartition(
new VoidFunction<Iterator<Tuple2<Object, Map<Object, Map<Object, Tuple2<Double, ArrayList<RoadObject>>>>>>>() {
/**
*
*/
private static final long serialVersionUID = 1L;
@Override
public void call(
Iterator<Tuple2<Object, Map<Object, Map<Object, Tuple2<Double, ArrayList<RoadObject>>>>>> tupleRow)
throws Exception {
int sourceVertex;
int destVertex;
double edgeLength;
int roadObjectId;
boolean roadObjectType;
double distanceFromStart;
CoreGraph subgraph0 = new CoreGraph();
CoreGraph subgraph1 = new CoreGraph();
while (tupleRow.hasNext()) {
Map<Object, Map<Object, Tuple2<Double, ArrayList<RoadObject>>>> newMap = tupleRow.next()
._2();
if ((Integer.parseInt(String.valueOf(tupleRow.next()._1())) == 0)) {
for (Object srcVertex : newMap.keySet()) {
for (Object dstVertex : newMap.get(srcVertex).keySet()) {
if (newMap.get(srcVertex).get(dstVertex)._2() != null) {
sourceVertex = Integer.parseInt(String.valueOf(srcVertex));
destVertex = Integer.parseInt(String.valueOf(dstVertex));
edgeLength = newMap.get(srcVertex).get(dstVertex)._1();
subgraph0.addEdge(sourceVertex, destVertex, edgeLength);
for (int i = 0; i < newMap.get(srcVertex).get(dstVertex)._2()
.size(); i++) {
int currentEdgeId = subgraph0.getEdgeId(sourceVertex, destVertex);
roadObjectId = newMap.get(srcVertex).get(dstVertex)._2().get(i)
.getObjectId();
roadObjectType = newMap.get(srcVertex).get(dstVertex)._2().get(i)
.getType();
distanceFromStart = newMap.get(srcVertex).get(dstVertex)._2().get(i)
.getDistanceFromStartNode();
RoadObject rn0 = new RoadObject();
rn0.setObjId(roadObjectId);
rn0.setType(roadObjectType);
rn0.setDistanceFromStartNode(distanceFromStart);
subgraph0.addObjectOnEdge(currentEdgeId, rn0);
}
} else {
sourceVertex = Integer.parseInt(String.valueOf(srcVertex));
destVertex = Integer.parseInt(String.valueOf(dstVertex));
edgeLength = newMap.get(srcVertex).get(dstVertex)._1();
subgraph0.addEdge(sourceVertex, destVertex, edgeLength);
}
}
}
} else if ((Integer.parseInt(String.valueOf(tupleRow.next()._1())) == 1)) {
for (Object srcVertex : newMap.keySet()) {
for (Object dstVertex : newMap.get(srcVertex).keySet()) {
if (newMap.get(srcVertex).get(dstVertex)._2() != null) {
sourceVertex = Integer.parseInt(String.valueOf(srcVertex));
destVertex = Integer.parseInt(String.valueOf(dstVertex));
edgeLength = newMap.get(srcVertex).get(dstVertex)._1();
subgraph1.addEdge(sourceVertex, destVertex, edgeLength);
for (int i = 0; i < newMap.get(srcVertex).get(dstVertex)._2()
.size(); i++) {
int currentEdgeId = subgraph1.getEdgeId(sourceVertex, destVertex);
roadObjectId = newMap.get(srcVertex).get(dstVertex)._2().get(i)
.getObjectId();
roadObjectType = newMap.get(srcVertex).get(dstVertex)._2().get(i)
.getType();
distanceFromStart = newMap.get(srcVertex).get(dstVertex)._2().get(i)
.getDistanceFromStartNode();
RoadObject rn1 = new RoadObject();
rn1.setObjId(roadObjectId);
rn1.setType(roadObjectType);
rn1.setDistanceFromStartNode(distanceFromStart);
subgraph1.addObjectOnEdge(currentEdgeId, rn1);
}
} else {
sourceVertex = Integer.parseInt(String.valueOf(srcVertex));
destVertex = Integer.parseInt(String.valueOf(dstVertex));
edgeLength = newMap.get(srcVertex).get(dstVertex)._1();
subgraph1.addEdge(sourceVertex, destVertex, edgeLength);
}
}
}
}
}
// Straightforward nearest neighbor algorithm from each true to false.
ANNNaive ann = new ANNNaive();
System.err.println("-------------------------------");
Map<Integer, Integer> nearestNeighorPairsSubg0 = ann.compute(subgraph0, true);
System.out.println("for subgraph0");
System.out.println(nearestNeighorPairsSubg0);
System.err.println("-------------------------------");
System.err.println("-------------------------------");
Map<Integer, Integer> nearestNeighorPairsSubg1 = ann.compute(subgraph1, true);
System.out.println("for subgraph1");
System.out.println(nearestNeighorPairsSubg1);
System.err.println("-------------------------------");
}
});
I have defined the following 2 classes: Person (with PersonKey) and Company (with companyId as key). PersonKey is affinity-collocated with companyId. Now I am trying to do a distributed SQL join (Person.companyId = Company.companyId) on 2 nodes connected in a grid. I repeated the same join with only a single node. With the distributed join on 2 nodes I should get a 2x performance improvement, but it performs worse than on the single node. Why is this happening? Are both nodes not participating in the computation (here, the select query)?
class PersonKey
{
// Person ID used to identify a person.
private int personId;
// Company ID which will be used for affinity.
@AffinityKeyMapped
private String companyId;
public PersonKey(int personId, String companyId)
{
this.personId = personId;
this.companyId = companyId;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result
+ ((companyId == null) ? 0 : companyId.hashCode());
result = prime * result + personId;
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
PersonKey other = (PersonKey) obj;
if (companyId == null) {
if (other.companyId != null)
return false;
} else if (!companyId.equals(other.companyId))
return false;
if (personId != other.personId)
return false;
return true;
}
}
class Person
{
@QuerySqlField(index = true)
int personId;
@QuerySqlField(index = true)
String companyId;
public Person(int personId, String companyId)
{
this.personId = personId;
this.companyId = companyId;
}
private PersonKey key;
public PersonKey key()
{
if(key == null)
key = new PersonKey(personId, companyId);
return key;
}
}
class Company
{
@QuerySqlField(index = true)
String companyId;
String company_name;
public Company(String CompanyId, String company_name)
{
this.companyId = CompanyId;
this.company_name = company_name;
}
public String key()
{
return companyId;
}
}
Adding a second node does not automatically mean that the query will become twice as fast. Moreover, it can easily become slower, because the network is added, while in a single-node deployment all the data is local.
To make the test fairer, you can run the query from a client node [1] and change the number of server nodes. In this case the result set will always be sent across the network, and you will see the real difference in performance with different numbers of servers.
[1] https://apacheignite.readme.io/docs/clients-vs-servers
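As an illustration, a client-side test could look roughly like the sketch below (a sketch only: the config file name, the cache names and the exact query text are placeholders for your setup, not taken from your code):
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ClientJoinBenchmark {
    public static void main(String[] args) {
        // Start this JVM as a client node: it stores no data, it only sends the query
        // to the server nodes and receives the result set back over the network.
        Ignition.setClientMode(true);
        try (Ignite ignite = Ignition.start("ignite-config.xml")) {                 // placeholder config path
            IgniteCache<Object, Object> personCache = ignite.cache("PersonCache");  // placeholder cache name
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "select p.personId, c.company_name " +
                "from Person p, \"CompanyCache\".Company c " +                      // placeholder cache name
                "where p.companyId = c.companyId");
            long start = System.currentTimeMillis();
            List<List<?>> rows = personCache.query(qry).getAll();
            System.out.println(rows.size() + " rows in " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}
The timing measured this way includes the network transfer of the result set on every run, so comparing it with 1, 2 or more server nodes gives a fairer picture of how the join itself scales.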
Note: This question may look like a repetition of several questions posted on the forum, but I have been stuck on this problem for quite some time and I am not able to solve it using the solutions posted for similar questions. I have posted my code here and need help to proceed further.
So, here is my issue:
I am writing a Java GUI application which loads a file before performing any processing. There is a waiting time of about 10-15 seconds on average, during which the file is parsed. After this waiting time, what I see on the GUI is:
The parsed file in the form of individual leaves in the JTree in a JPanel
Some header information (example: data range) in two individual JTextFields
A heat map, generated after parsing the data, in a different JPanel on the GUI
The program connects to R to parse the file and read the header information.
Now, I want to use SwingWorker to put the file-reading process on a different thread so that it does not block the EDT. I am not sure how I can build my SwingWorker class so that the processing is done in the background and the results for the 3 components are displayed when the process is complete. And during this file-reading process I want to display a JProgressBar.
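In outline, what I am trying to arrive at is something like the sketch below (the R parsing and the three GUI updates are placeholders; that is exactly the part I cannot figure out):
import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;
import java.io.File;
import javax.swing.JProgressBar;
import javax.swing.SwingWorker;

// Sketch only: the heavy parsing runs off the EDT in doInBackground(),
// and done() runs back on the EDT, so the three components would be updated there.
class LoadFileWorker extends SwingWorker<Object, Void> {

    private final File file;
    private final JProgressBar progressBar;

    LoadFileWorker(File file, final JProgressBar progressBar) {
        this.file = file;
        this.progressBar = progressBar;
        addPropertyChangeListener(new PropertyChangeListener() {
            public void propertyChange(PropertyChangeEvent evt) {
                if ("progress".equals(evt.getPropertyName())) {
                    progressBar.setValue((Integer) evt.getNewValue());
                }
            }
        });
    }

    @Override
    protected Object doInBackground() throws Exception {
        setProgress(10);
        // ... the RConnection calls would go here (importImagingFile, plotAverageSpectra, ...) ...
        setProgress(90);
        return null; // would return whatever the tree, the text fields and the heat map need
    }

    @Override
    protected void done() {
        try {
            Object result = get(); // rethrows any exception thrown in doInBackground()
            // update the JTree, the two JTextFields and the heat map JPanel here (on the EDT)
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            progressBar.setVisible(false);
        }
    }
}
I understand publish()/process() could stream intermediate results, but since most of the waiting is a single long R call, updating setProgress() at a few milestones is probably all I can do.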
Here is the code which does the whole process, starting from the selection of the file menu item. This is in the main GUI method.
JScrollPane spectralFilesScrollPane;
if ((e.getSource() == OpenImagingFileButton) || (e.getSource() == loadRawSpectraMenuItem)) {
int returnVal = fcImg.showOpenDialog(GUIMain.this);
// File chooser
if (returnVal == JFileChooser.APPROVE_OPTION) {
file = fcImg.getSelectedFile();
//JTree and treenode creation
DefaultMutableTreeNode root = new DefaultMutableTreeNode(file);
rawSpectraTree = new JTree(root);
DefaultTreeModel model = (DefaultTreeModel) rawSpectraTree.getModel();
try {
// R connection
rc = new RConnection();
final String inputFileDirectory = file.getParent();
System.out.println("Current path: " + currentPath);
rc.assign("importImagingFile", currentPath.concat("/importImagingFile.R"));
rc.eval("source(importImagingFile)");
rc.assign("currentWorkingDirectory", currentPath);
rc.assign("inputFileDirectory", inputFileDirectory);
rawSpectrumObjects = rc.eval("importImagingFile(inputFileDirectory,currentWorkingDirectory)");
rc.assign("plotAverageSpectra", currentPath.concat("/plotAverageSpectra.R"));
rc.eval("source(plotAverageSpectra)");
rc.assign("rawSpectrumObjects", rawSpectrumObjects);
REXP averageSpectraObject = rc.eval("plotAverageSpectra(rawSpectrumObjects)");
rc.assign("AverageMassSpecObjectToSpectra", currentPath.concat("/AverageMassSpecObjectToSpectra.R"));
rc.eval("source(AverageMassSpecObjectToSpectra)");
rc.assign("averageSpectraObject", averageSpectraObject);
REXP averageSpectra = rc.eval("AverageMassSpecObjectToSpectra(averageSpectraObject)");
averageSpectraMatrix = averageSpectra.asDoubleMatrix();
String[] spectrumName = new String[rawSpectrumObjects.asList().size()];
for (int i = 0; i < rawSpectrumObjects.asList().size(); i++) {
DefaultMutableTreeNode node = new DefaultMutableTreeNode("Spectrum_" + (i + 1));
model.insertNodeInto(node, root, i);
}
// Expand all the nodes of the JTree
for(int i=0;i< model.getChildCount(root);++i){
rawSpectraTree.expandRow(i);
}
DefaultMutableTreeNode firstLeaf = ((DefaultMutableTreeNode)rawSpectraTree.getModel().getRoot()).getFirstLeaf();
rawSpectraTree.setSelectionPath(new TreePath(firstLeaf.getPath()));
updateSpectralTableandChartRAW(firstLeaf);
// List the min and the max m/z of in the respective data fields
rc.assign("dataMassRange", currentPath.concat("/dataMassRange.R"));
rc.eval("source(dataMassRange)");
rc.assign("rawSpectrumObjects", rawSpectrumObjects);
REXP massRange = rc.eval("dataMassRange(rawSpectrumObjects)");
double[] massRangeValues = massRange.asDoubles();
minMzValue = (float)massRangeValues[0];
maxMzValue = (float)massRangeValues[1];
GlobalMinMz = minMzValue;
GlobalMaxMz = maxMzValue;
// Adds the range values to the jTextField
minMz.setText(Float.toString(minMzValue));
minMz.validate();
minMz.repaint();
maxMz.setText(Float.toString(maxMzValue));
maxMz.validate();
maxMz.repaint();
// Update status bar with the uploaded data details
statusLabel.setText("File name: " + file.getName() + " | " + "Total spectra: " + rawSpectrumObjects.asList().size() + " | " + "Mass range: " + GlobalMinMz + "-" + GlobalMaxMz);
// Generates a heatmap
rawIntensityMap = gim.generateIntensityMap(rawSpectrumObjects, currentPath, minMzValue, maxMzValue, Gradient.GRADIENT_Rainbow, "RAW");
rawIntensityMap.addMouseListener(this);
rawIntensityMap.addMouseMotionListener(this);
imagePanel.add(rawIntensityMap, BorderLayout.CENTER);
coordinates = new JLabel();
coordinates.setBounds(31, 31, rawIntensityMap.getWidth() - 31, rawIntensityMap.getHeight() - 31);
panelRefresh(imagePanel);
tabbedSpectralFiles.setEnabledAt(1, false);
rawSpectraTree.addTreeSelectionListener(new TreeSelectionListener() {
@Override
public void valueChanged(TreeSelectionEvent e) {
try {
DefaultMutableTreeNode selectedNode =
(DefaultMutableTreeNode) rawSpectraTree.getLastSelectedPathComponent();
int rowCount = listTableModel.getRowCount();
for (int l = 0; l < rowCount; l++) {
listTableModel.removeRow(0);
}
updateSpectralTableandChartRAW(selectedNode);
} catch (RserveException e2) {
e2.printStackTrace();
} catch (REXPMismatchException e1) {
e1.printStackTrace();
}
}
});
spectralFilesScrollPane = new JScrollPane();
spectralFilesScrollPane.setViewportView(rawSpectraTree);
spectralFilesScrollPane.setPreferredSize(rawFilesPanel.getSize());
rawFilesPanel.add(spectralFilesScrollPane);
tabbedSpectralFiles.validate();
tabbedSpectralFiles.repaint();
rawImage.setEnabled(true);
peakPickedImage.setEnabled(false);
loadPeakListMenuItem.setEnabled(true); //active now
loadPeaklistsButton.setEnabled(true); //active now
propertiesMenuItem.setEnabled(true); // active now
propertiesButton.setEnabled(true); //active now
} catch (RserveException e1) {
JOptionPane.showMessageDialog(this,
"There was an error in the R connection. Please try again!", "Error",
JOptionPane.ERROR_MESSAGE);
} catch (REXPMismatchException e1) {
JOptionPane.showMessageDialog(this,
"Operation requested is not supported by the given R object type. Please try again!", "Error",
JOptionPane.ERROR_MESSAGE);
}
// hideProgress();
}
}
I tried creating a SwingWorker class, but I am totally confused about how I can get all three outputs onto the GUI, plus have a progress bar. It is not complete, and I don't know how to proceed further.
public class FileReadWorker extends SwingWorker<REXP, String>{
private static void failIfInterrupted() throws InterruptedException {
if (Thread.currentThread().isInterrupted()) {
throw new InterruptedException("Interrupted while loading imaging file!");
}
}
// The file that is being read
private final File fileName;
private JTree rawSpectraTree;
private RConnection rc;
private REXP rawSpectrumObjects;
private double[][] averageSpectraMatrix;
private Path currentRelativePath = Paths.get("");
private final String currentPath = currentRelativePath.toAbsolutePath().toString();
final JProgressBar progressBar = new JProgressBar();
// public FileReadWorker(File fileName)
// {
// this.fileName = fileName;
// System.out.println("I am here");
// }
public FileReadWorker(final JProgressBar progressBar, File fileName) {
this.fileName = fileName;
addPropertyChangeListener(new PropertyChangeListener() {
public void propertyChange(PropertyChangeEvent evt) {
if ("progress".equals(evt.getPropertyName())) {
progressBar.setValue((Integer) evt.getNewValue());
}
}
});
progressBar.setVisible(true);
progressBar.setStringPainted(true);
progressBar.setValue(0);
setProgress(0);
}
@Override
protected REXP doInBackground() throws Exception {
System.out.println("I am here... in background");
DefaultMutableTreeNode root = new DefaultMutableTreeNode(fileName);
rawSpectraTree = new JTree(root);
DefaultTreeModel model = (DefaultTreeModel) rawSpectraTree.getModel();
rc = new RConnection();
final String inputFileDirectory = fileName.getParent();
rc.assign("importImagingFile", currentPath.concat("/importImagingFile.R"));
rc.eval("source(importImagingFile)");
rc.assign("currentWorkingDirectory", currentPath);
rc.assign("inputFileDirectory", inputFileDirectory);
rawSpectrumObjects = rc.eval("importImagingFile(inputFileDirectory,currentWorkingDirectory)");
rc.assign("plotAverageSpectra", currentPath.concat("/plotAverageSpectra.R"));
rc.eval("source(plotAverageSpectra)");
rc.assign("rawSpectrumObjects", rawSpectrumObjects);
REXP averageSpectraObject = rc.eval("plotAverageSpectra(rawSpectrumObjects)");
rc.assign("AverageMassSpecObjectToSpectra", currentPath.concat("/AverageMassSpecObjectToSpectra.R"));
rc.eval("source(AverageMassSpecObjectToSpectra)");
rc.assign("averageSpectraObject", averageSpectraObject);
REXP averageSpectra = rc.eval("AverageMassSpecObjectToSpectra(averageSpectraObject)");
averageSpectraMatrix = averageSpectra.asDoubleMatrix();
for (int i = 0; i < rawSpectrumObjects.asList().size(); i++) {
DefaultMutableTreeNode node = new DefaultMutableTreeNode("Spectrum_" + (i + 1));
model.insertNodeInto(node, root, i);
}
// Expand all the nodes of the JTree
for(int i=0;i< model.getChildCount(root);++i){
rawSpectraTree.expandRow(i);
}
return averageSpectra;
}
@Override
public void done() {
setProgress(100);
progressBar.setValue(100);
progressBar.setStringPainted(false);
progressBar.setVisible(false);
}
}
Any help would be very much appreciated.
I set up a Cassandra cluster on AWS. What I want to get is increased I/O throughput (number of reads/writes per second) as more nodes are added (as advertised). However, I got exactly the opposite. The performance is reduced as new nodes are added.
Do you know any typical issues that prevents it from scaling?
Here are some details:
I am adding a text file (15 MB) to the column family. Each line is a record, and there are 150000 records in total. With 1 node, it takes about 90 seconds to write; with 2 nodes, it takes 120 seconds. I can see that the data is spread across the 2 nodes. However, there is no increase in throughput.
The source code is below:
public class WordGenCAS {
static final String KEYSPACE = "text_ks";
static final String COLUMN_FAMILY = "text_table";
static final String COLUMN_NAME = "text_col";
public static void main(String[] args) throws Exception {
if (args.length < 2) {
System.out.println("Usage: WordGenCAS <input file> <host1,host2,...>");
System.exit(-1);
}
String[] contactPts = args[1].split(",");
Cluster cluster = Cluster.builder()
.addContactPoints(contactPts)
.build();
Session session = cluster.connect(KEYSPACE);
InputStream fis = new FileInputStream(args[0]);
InputStreamReader in = new InputStreamReader(fis, "UTF-8");
BufferedReader br = new BufferedReader(in);
String line;
int lineCount = 0;
while ( (line = br.readLine()) != null) {
line = line.replaceAll("'", " ");
line = line.trim();
if (line.isEmpty())
continue;
System.out.println("[" + line + "]");
String cqlStatement2 = String.format("insert into %s (id, %s) values (%d, '%s');",
COLUMN_FAMILY,
COLUMN_NAME,
lineCount,
line);
session.execute(cqlStatement2);
lineCount++;
}
System.out.println("Total lines written: " + lineCount);
}
}
The DB schema is the following:
CREATE KEYSPACE text_ks WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 2 };
USE text_ks;
CREATE TABLE text_table (
id int,
text_col text,
primary key (id)
) WITH COMPACT STORAGE;
Thanks!
Even if this is an old post, I think it's worth posting a solution for this (common) kind of problem.
As you've already discovered, loading data with a serial procedure is slow. What you've been suggested is the right thing to do.
However, issuing a lot of queries without applying some sort of back pressure is asking for trouble, and you are likely to lose data due to excessive overload on the server (and, to some extent, on the driver).
This solution loads the data with async calls and tries to apply some back pressure on the client to avoid data loss.
public class WordGenCAS {
static final String KEYSPACE = "text_ks";
static final String COLUMN_FAMILY = "text_table";
static final String COLUMN_NAME = "text_col";
public static void main(String[] args) throws Exception {
if (args.length < 2) {
System.out.println("Usage: WordGenCAS <input file> <host1,host2,...>");
System.exit(-1);
}
String[] contactPts = args[1].split(",");
Cluster cluster = Cluster.builder()
.addContactPoints(contactPts)
.build();
Session session = cluster.connect(KEYSPACE);
InputStream fis = new FileInputStream(args[0]);
InputStreamReader in = new InputStreamReader(fis, "UTF-8");
BufferedReader br = new BufferedReader(in);
String line;
int lineCount = 0;
// This is the futures list of our queries
List<Future<ResultSet>> futures = new ArrayList<>();
// Loop
while ( (line = br.readLine()) != null) {
line = line.replaceAll("'", " ");
line = line.trim();
if (line.isEmpty())
continue;
System.out.println("[" + line + "]");
String cqlStatement2 = String.format("insert into %s (id, %s) values (%d, '%s');",
COLUMN_FAMILY,
COLUMN_NAME,
lineCount,
line);
lineCount++;
// Add the "future" returned by async method the to the list
futures.add(session.executeAsync(cqlStatement2));
// Apply some back pressure if we have issued more than X queries.
// Change X to a value suitable for your cluster
while (futures.size() > 1000) {
Future<ResultSet> future = futures.remove(0);
try {
future.get();
} catch (Exception e) {
e.printStackTrace();
}
}
}
System.out.println("Total lines written: " + lineCount);
System.out.println("Waiting for writes to complete...");
// Wait until all writes are done.
while (futures.size() > 0) {
Future<ResultSet> future = futures.remove(0);
try {
future.get();
} catch (Exception e) {
e.printStackTrace();
}
}
System.out.println("Done!");
}
}
I have switched from PrimeFaces 3.5 to 4.0.
I am getting this error for "tieredMenu":
java.lang.IllegalArgumentException: component identifier's first character must be a letter or an underscore ('_')! But it is "0"
I have tracked this down to "BaseMenuModel.class"; the method is "generateUniqueIds":
public void generateUniqueIds()
{
this.generateUniqueIds(getElements(), null);
}
private void generateUniqueIds(List<MenuElement> elements, String seed) {
if(elements == null || elements.isEmpty()) {
return;
}
int counter = 0;
for(MenuElement element : elements) {
String id = (seed == null) ? String.valueOf(counter++) : seed + "_" + counter++;
element.setId(id);
if(element instanceof MenuGroup) {
generateUniqueIds(((MenuGroup) element).getElements(), id);
}
}
}
It seems that it generates IDs starting with a number because seed is null for the top-level elements, so the first element gets the id "0", which is rejected as a component identifier.
Is this expected?
Edit 1:
Bean code:
public String initActionMenus() throws Exception
{
ExpressionFactory factory = FacesContext.getCurrentInstance().getApplication().getExpressionFactory();
MethodExpression methodsExpressionDelete = factory.createMethodExpression(FacesContext.getCurrentInstance().getELContext(), "#{UserGroupMgmtBean.menuDeleteAction}", null, new Class[]{ActionEvent.class});
MethodExpressionActionListener actionListenerDelete = new MethodExpressionActionListener(methodsExpressionDelete);
model = new DynamicMenuModel();
UISubmenu smAction = new UISubmenu();
smAction.setLabel("Action");
UIMenuItem itemDelete = new UIMenuItem();
itemDelete.setValue("Delete");
itemDelete.setUpdate(UPDATE_AREA_ID);
itemDelete.setAjax(true);
itemDelete.addActionListener(actionListenerDelete);
smAction.getChildren().add(itemDelete);
model.addElement(smAction);
return "OK";
}
xhtml code:
<p:tieredMenu model="#{UserGroupMgmtBean.model}" id="userGroupMenu"/>