How to change all PK seed values in hybris? - groovy

I'm trying to change all generated PK values in a hybris system. I'm using an Oracle database with hybris 6.3+.

I used the Groovy script below to generate a new current value for each PK series:
import de.hybris.platform.core.Registry
import de.hybris.platform.core.DefaultPKCounterGenerator
import de.hybris.platform.persistence.numberseries.SerialNumberGenerator

def numberSeriesManager = spring.getBean("default.core.numberSeriesManager")
Collection<String> keys = numberSeriesManager.getAllNumberSeriesKeys()
for (String key : keys) {
    // only the PK counter series, whose names look like "pk_<typecode>"
    if (key.contains('pk_')) {
        String[] keyParts = key.split("_")
        int typecode = Integer.parseInt(keyParts[1])
        int current = new DefaultPKCounterGenerator().fetchNextCounter(typecode)
        SerialNumberGenerator generator = Registry.getCurrentTenant().getSerialNumberGenerator()
        // the key already carries the "pk_" prefix, so use it as-is;
        // prepending "pk_" again would target a non-existing series
        generator.removeSeries(key)
        generator.createSeries(key, 1, current * 3)
    }
}
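Before running that, a read-only variant of the same loop (a sketch built only from the APIs already used above) can list the current counter values, so the effect of current * 3 can be sanity-checked first:

import de.hybris.platform.core.DefaultPKCounterGenerator

def numberSeriesManager = spring.getBean("default.core.numberSeriesManager")
numberSeriesManager.getAllNumberSeriesKeys().each { key ->
    if (key.contains('pk_')) {
        int typecode = Integer.parseInt(key.split("_")[1])
        // note: fetchNextCounter consumes one value, which is
        // acceptable for a one-off inspection run
        println "${key} -> next counter: ${new DefaultPKCounterGenerator().fetchNextCounter(typecode)}"
    }
}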

Related

How to Add negative value for DataBar in apache poi?

I used this code to create a data bar:
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.apache.poi.xssf.usermodel.XSSFDataBarFormatting;
import org.apache.poi.ss.util.CellRangeAddress;
import java.io.FileOutputStream;
import java.lang.reflect.Field;

public class ConditionalFormattingDataBars {

  public static void applyDataBars(SheetConditionalFormatting sheetCF, String region, ExtendedColor color) throws Exception {
    CellRangeAddress[] regions = { CellRangeAddress.valueOf(region) };
    ConditionalFormattingRule rule = sheetCF.createConditionalFormattingRule(color);
    DataBarFormatting dbf = rule.getDataBarFormatting();

    dbf.getMinThreshold().setRangeType(ConditionalFormattingThreshold.RangeType.MIN);
    dbf.getMaxThreshold().setRangeType(ConditionalFormattingThreshold.RangeType.MAX);

    dbf.setWidthMin(0); //cannot work for XSSFDataBarFormatting, see https://svn.apache.org/viewvc/poi/tags/REL_4_0_1/src/ooxml/java/org/apache/poi/xssf/usermodel/XSSFDataBarFormatting.java?view=markup#l57
    dbf.setWidthMax(100); //cannot work for XSSFDataBarFormatting, see https://svn.apache.org/viewvc/poi/tags/REL_4_0_1/src/ooxml/java/org/apache/poi/xssf/usermodel/XSSFDataBarFormatting.java?view=markup#l64

    if (dbf instanceof XSSFDataBarFormatting) {
      Field _databar = XSSFDataBarFormatting.class.getDeclaredField("_databar");
      _databar.setAccessible(true);
      org.openxmlformats.schemas.spreadsheetml.x2006.main.CTDataBar ctDataBar =
       (org.openxmlformats.schemas.spreadsheetml.x2006.main.CTDataBar)_databar.get(dbf);
      ctDataBar.setMinLength(0);
      ctDataBar.setMaxLength(100);
    }

    sheetCF.addConditionalFormatting(regions, rule);
  }

  public static void main(String[] args) throws Exception {
    Workbook workbook = new XSSFWorkbook();
    Sheet sheet = workbook.createSheet("new sheet");

    java.util.List<Double> list = java.util.Arrays.asList(0.279, 0.252, 0.187, 0.128, 0.078, 0.043, 0.022, 0.012, 0.011, 0.0, 0.0);
    for (int i = 0; i < list.size(); i++) {
      sheet.createRow(i+1).createCell(1).setCellValue(list.get(i));
    }

    SheetConditionalFormatting sheetCF = sheet.getSheetConditionalFormatting();
    ExtendedColor color = workbook.getCreationHelper().createExtendedColor();
    color.setARGBHex("FF80C279");
    applyDataBars(sheetCF, "B2:B12", color);

    sheet.setColumnWidth(1, 50*256);

    FileOutputStream out = new FileOutputStream("ConditionalFormattingDataBars.xlsx");
    workbook.write(out);
    out.close();
    workbook.close();
  }
}
I found it on the question provided here, and it is working!
However, when I tried to modify it to add settings for negative values, I did not find any class or object to do this.
Is this even possible?
This is a similar problem to this one: How to make solid color for databar in apache.poi.
The settings for negative values in data bars were not available when Office Open XML was published; they are a later-added feature. Since Apache POI supports only the published features of Office Open XML, this feature is not available directly. It is also not available through the underlying org.openxmlformats.schemas.spreadsheetml.x2006.main.* classes, as those too are generated only from the published Office Open XML schemas.
How to see this? All Office Open XML files are ZIP archives. One can unzip them and have a look at the XML. So we set data bar settings as needed in Excel and save the *.xlsx file. Then we unzip the *.xlsx file and have a look at /xl/worksheets/sheet1.xml. There we find the settings for conditional formatting.
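For example, a small helper along these lines (a sketch; the workbook name Test.xlsx is an assumption) prints that sheet XML so the conditional-formatting elements can be inspected:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class DumpSheetXml {
  public static void main(String[] args) throws Exception {
    // "Test.xlsx" is a placeholder for the workbook saved from Excel
    try (ZipFile zip = new ZipFile("Test.xlsx")) {
      ZipEntry entry = zip.getEntry("xl/worksheets/sheet1.xml");
      try (BufferedReader reader = new BufferedReader(new InputStreamReader(zip.getInputStream(entry)))) {
        String line;
        while ((line = reader.readLine()) != null) System.out.println(line);
      }
    }
  }
}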
So we can only solve this problem the same way as in the linked answer above.
Complete example:
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFDataBarFormatting;
import org.apache.poi.xssf.usermodel.XSSFConditionalFormattingRule;
import org.apache.poi.ss.util.CellRangeAddress;
import java.io.FileOutputStream;
import java.lang.reflect.Field;

public class ConditionalFormattingDataBars {

  public static void applyDataBars(SheetConditionalFormatting sheetCF, String region, ExtendedColor colorPos, ExtendedColor colorNeg) throws Exception {
    CellRangeAddress[] regions = { CellRangeAddress.valueOf(region) };
    ConditionalFormattingRule rule = sheetCF.createConditionalFormattingRule(colorPos);
    DataBarFormatting dbf = rule.getDataBarFormatting();

    dbf.getMinThreshold().setRangeType(ConditionalFormattingThreshold.RangeType.MIN);
    dbf.getMaxThreshold().setRangeType(ConditionalFormattingThreshold.RangeType.MAX);

    dbf.setWidthMin(0); //cannot work for XSSFDataBarFormatting, see https://svn.apache.org/viewvc/poi/tags/REL_4_0_1/src/ooxml/java/org/apache/poi/xssf/usermodel/XSSFDataBarFormatting.java?view=markup#l57
    dbf.setWidthMax(100); //cannot work for XSSFDataBarFormatting, see https://svn.apache.org/viewvc/poi/tags/REL_4_0_1/src/ooxml/java/org/apache/poi/xssf/usermodel/XSSFDataBarFormatting.java?view=markup#l64

    if (dbf instanceof XSSFDataBarFormatting) {
      Field _databar = XSSFDataBarFormatting.class.getDeclaredField("_databar");
      _databar.setAccessible(true);
      org.openxmlformats.schemas.spreadsheetml.x2006.main.CTDataBar ctDataBar =
       (org.openxmlformats.schemas.spreadsheetml.x2006.main.CTDataBar)_databar.get(dbf);
      ctDataBar.setMinLength(0);
      ctDataBar.setMaxLength(100);
    }

    // use extension from x14 namespace to set data bars having setting for negative value
    if (rule instanceof XSSFConditionalFormattingRule) {
      Field _cfRule = XSSFConditionalFormattingRule.class.getDeclaredField("_cfRule");
      _cfRule.setAccessible(true);
      org.openxmlformats.schemas.spreadsheetml.x2006.main.CTCfRule ctRule =
       (org.openxmlformats.schemas.spreadsheetml.x2006.main.CTCfRule)_cfRule.get(rule);
      org.openxmlformats.schemas.spreadsheetml.x2006.main.CTExtensionList extList =
       ctRule.addNewExtLst();
      org.openxmlformats.schemas.spreadsheetml.x2006.main.CTExtension ext = extList.addNewExt();
      String extXML =
       "<x14:id"
       + " xmlns:x14=\"http://schemas.microsoft.com/office/spreadsheetml/2009/9/main\">"
       + "{00000000-000E-0000-0000-000001000000}"
       + "</x14:id>";
      org.apache.xmlbeans.XmlObject xlmObject = org.apache.xmlbeans.XmlObject.Factory.parse(extXML);
      ext.set(xlmObject);
      ext.setUri("{B025F937-C7B1-47D3-B67F-A62EFF666E3E}");

      Field _sh = XSSFConditionalFormattingRule.class.getDeclaredField("_sh");
      _sh.setAccessible(true);
      XSSFSheet sheet = (XSSFSheet)_sh.get(rule);
      extList = sheet.getCTWorksheet().addNewExtLst();
      ext = extList.addNewExt();
      extXML =
       "<x14:conditionalFormattings xmlns:x14=\"http://schemas.microsoft.com/office/spreadsheetml/2009/9/main\">"
       + "<x14:conditionalFormatting xmlns:xm=\"http://schemas.microsoft.com/office/excel/2006/main\">"
       + "<x14:cfRule type=\"dataBar\" id=\"{00000000-000E-0000-0000-000001000000}\">"
       + "<x14:dataBar minLength=\"" + 0 + "\" maxLength=\"" + 100 + "\" border=\"1\" negativeBarBorderColorSameAsPositive=\"0\">"
       + "<x14:cfvo type=\"min\"/>"
       + "<x14:cfvo type=\"max\"/>"
       + "<x14:borderColor rgb=\"" + colorPos.getARGBHex() + "\"/>"
       + "<x14:negativeFillColor rgb=\"" + colorNeg.getARGBHex() + "\"/>"
       + "<x14:negativeBorderColor rgb=\"" + colorNeg.getARGBHex() + "\"/>"
       + "<x14:axisColor theme=\"1\"/>"
       + "</x14:dataBar>"
       + "</x14:cfRule>"
       + "<xm:sqref>" + region + "</xm:sqref>"
       + "</x14:conditionalFormatting>"
       + "</x14:conditionalFormattings>";
      xlmObject = org.apache.xmlbeans.XmlObject.Factory.parse(extXML);
      ext.set(xlmObject);
      ext.setUri("{78C0D931-6437-407d-A8EE-F0AAD7539E65}");
    }

    sheetCF.addConditionalFormatting(regions, rule);
  }

  public static void main(String[] args) throws Exception {
    Workbook workbook = new XSSFWorkbook();
    Sheet sheet = workbook.createSheet("new sheet");

    java.util.List<Double> list = java.util.Arrays.asList(0.279, -0.252, 0.187, -0.128, 0.078, 0.043, -0.022, 0.012, 0.011, 0.0, 0.0);
    for (int i = 0; i < list.size(); i++) {
      sheet.createRow(i+1).createCell(1).setCellValue(list.get(i));
    }

    SheetConditionalFormatting sheetCF = sheet.getSheetConditionalFormatting();
    ExtendedColor colorPos = workbook.getCreationHelper().createExtendedColor();
    colorPos.setARGBHex("FF80C279");
    ExtendedColor colorNeg = workbook.getCreationHelper().createExtendedColor();
    colorNeg.setARGBHex("FFFF0000");
    applyDataBars(sheetCF, "B2:B12", colorPos, colorNeg);

    sheet.setColumnWidth(1, 50*256);

    FileOutputStream out = new FileOutputStream("ConditionalFormattingDataBars.xlsx");
    workbook.write(out);
    out.close();
    workbook.close();
  }
}
This code will only work reliably for a newly created XSSFWorkbook. If the XSSFWorkbook was created from an existing workbook, it could already contain org.openxmlformats.schemas.spreadsheetml.x2006.main.CTExtensionList entries for x14 extensions. If so, these must be taken into account. But that would be a much more complex and challenging project.
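A minimal guard for that case (a sketch, assuming the standard isSetExtLst/getExtLst accessors that XMLBeans generates for the optional extLst element) would reuse an existing extension list instead of blindly adding a new one:

// Sketch: reuse the worksheet's extension list if one already exists.
org.openxmlformats.schemas.spreadsheetml.x2006.main.CTWorksheet ctWorksheet = sheet.getCTWorksheet();
org.openxmlformats.schemas.spreadsheetml.x2006.main.CTExtensionList extList =
 ctWorksheet.isSetExtLst() ? ctWorksheet.getExtLst() : ctWorksheet.addNewExtLst();
org.openxmlformats.schemas.spreadsheetml.x2006.main.CTExtension ext = extList.addNewExt();
// ...then fill ext as shown above; one would additionally have to check
// whether an x14:conditionalFormattings extension is already present
// before appending a new one.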

Programmatically create a new Person having an Internal Login

I am trying to programmatically create Users with Internal Accounts as part of a testing system. The following code cannot create an InternalLogin because there is no password hash set at object creation time.
Can a person + internal account be created using metadata_* functions ?
data _null_;
  length uri_Person uri_PW uri_IL $256;
  call missing (of uri:);

  rc = metadata_getnobj ("Person?@Name='testbot02'", 1, uri_Person); msg=sysmsg();
  put 'NOTE: Get Person, ' rc= uri_Person= / msg;

  if rc = -4 then do;
    rc = metadata_newobj ('Person', uri_Person, 'testbot02'); msg=sysmsg();
    put 'NOTE: New Person, ' rc= uri_Person= / msg;
  end;

  rc = metadata_setattr (uri_Person, 'DisplayName', 'Test Bot #2'); msg=sysmsg();
  put 'NOTE: SetAttr, ' rc= / msg;

  rc = metadata_newobj ('InternalLogin', uri_IL, 'autobot - IL', 'Foundation', uri_Person, 'InternalLoginInfo'); msg=sysmsg();
  put 'NOTE: New InternalLogin, ' rc= / msg;
run;
Logs
NOTE: Get Person, rc=-4 uri_Person=
NOTE: New Person, rc=0 uri_Person=OMSOBJ:Person\A5OJU4RB.AP0000SX
NOTE: SetAttr, rc=0
NOTE: New InternalLogin, rc=-2
ERROR: AddMetadata of InternalLogin is missing required property PasswordHash.
NOTE: DATA statement used (Total process time):
real time 0.02 seconds
cpu time 0.01 seconds
The management console and metabrowse were used to find what objects might need to be created.
I ended up using Java to create new users.
An ISecurity_1_1 instance from MakeISecurityConnection() on the metadata connection was used to SetInternalPassword.
Java ended up being a lot clearer to code.
package sandbox.sas.metadata;

import com.sas.iom.SASIOMDefs.GenericError;
import com.sas.metadata.remote.*;
import com.sas.meta.SASOMI.*;
import java.rmi.RemoteException;
import java.util.List;

/**
 * @author Richard
 */
public class Main {

  public static final String TESTGROUPNAME = "Automated Test Users";

  /**
   * @param args the command line arguments
   */
  public static void main(String[] args) {
    String metaserver="################";
    String metaport="8561";
    String omrUser="sasadm@saspw";
    String omrPass="##########";

    try
    {
      MdFactoryImpl metadata = new MdFactoryImpl(false);
      metadata.makeOMRConnection(metaserver,metaport,omrUser,omrPass);

      MdOMIUtil util = metadata.getOMIUtil();
      ISecurity_1_1 security = metadata.getConnection().MakeISecurityConnection();

      MdObjectStore workunit = metadata.createObjectStore();

      String foundationID = util.getFoundationReposID();
      String foundationShortID = foundationID.substring(foundationID.indexOf(".") + 1);

      // Create group for test bot accounts
      List identityGroups = util.getMetadataObjects(foundationID, MetadataObjects.IDENTITYGROUP, MdOMIUtil.OMI_XMLSELECT,
          String.format("<XMLSelect search=\"IdentityGroup[@Name='%s']\"/>", TESTGROUPNAME));

      IdentityGroup identityGroup;
      if (identityGroups.isEmpty()) {
        identityGroup = (IdentityGroup) metadata.createComplexMetadataObject (workunit, TESTGROUPNAME, MetadataObjects.IDENTITYGROUP, foundationShortID);
        identityGroup.setDisplayName(TESTGROUPNAME);
        identityGroup.setDesc("Group for Automated Test Users performing concurrent Stored Process execution");
        identityGroup.updateMetadataAll();
      }

      identityGroups = util.getMetadataObjectsSubset(workunit, foundationID, MetadataObjects.IDENTITYGROUP, MdOMIUtil.OMI_XMLSELECT,
          String.format("<XMLSelect search=\"IdentityGroup[@Name='%s']\"/>", TESTGROUPNAME));
      identityGroup = (IdentityGroup) identityGroups.get(0);

      // Create (or Update) test bot accounts
      for (int index = 1; index <= 25; index++)
      {
        String token = String.format("%02d", index);
        String personName = String.format("testbot%s", token);
        String password = String.format("testbot%s%s", token, token); // simple dynamically generated password (for testing scripts only)

        String criteria = String.format("Person[@Name='%s']", personName);
        String xmlSelect = String.format("<XMLSelect search=\"%s\"/>", criteria);

        List persons = util.getMetadataObjectsSubset(workunit, foundationID, MetadataObjects.PERSON, MdOMIUtil.OMI_XMLSELECT | MdOMIUtil.OMI_GET_METADATA | MdOMIUtil.OMI_ALL_SIMPLE, xmlSelect);

        Person person;
        if (persons.size() == 1)
        {
          person = (Person) persons.get(0);
          System.out.println(String.format("Have %s %s", person.getName(), person.getDisplayName()));
        }
        else
        {
          person = (Person) metadata.createComplexMetadataObject (workunit, personName, MetadataObjects.PERSON, foundationShortID);
          person.setDisplayName(String.format("Test Bot #%s", token));
          person.setDesc("Internal account for testing purposes");
          System.out.println(String.format("Make %s, %s (%s)", person.getName(), person.getDisplayName(), person.getDesc()));
          person.updateMetadataAll();
        }

        security.SetInternalPassword(personName, password);

        AssociationList personGroups = person.getIdentityGroups();
        personGroups.add(identityGroup);
        person.setIdentityGroups(personGroups);
        person.updateMetadataAll();
      }

      workunit.dispose();
      metadata.closeOMRConnection();
      metadata.dispose();
    }
    catch (GenericError | MdException | RemoteException e)
    {
      System.out.println(e.getLocalizedMessage());
      System.exit(1);
    }
  }
}

Trying to apply GBT on a set of data getting ClassCastException

I am getting "Exception in thread "main" java.lang.ClassCastException: org.apache.spark.ml.attribute.UnresolvedAttribute$ cannot be cast to org.apache.spark.ml.attribute.NominalAttribute".
Source code
package com.spark.lograthmicregression;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.HashSet;
import java.util.Set;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.GBTClassificationModel;
import org.apache.spark.ml.classification.GBTClassifier;
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator;
import org.apache.spark.ml.feature.IndexToString;
import org.apache.spark.ml.feature.StringIndexer;
import org.apache.spark.ml.feature.StringIndexerModel;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.catalyst.expressions.AttributeReference;
import org.apache.spark.sql.catalyst.expressions.Expression;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;

import com.google.common.collect.ImmutableMap;

import scala.collection.mutable.Seq;

public class ClickThroughRateAnalytics {

  private static SimpleDateFormat sdf = new SimpleDateFormat("yyMMddHH");

  public static void main(String[] args) {
    final SparkConf sparkConf = new SparkConf().setAppName("Click Analysis").setMaster("local");
    try (JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf)) {
      SQLContext sqlContext = new SQLContext(javaSparkContext);
      DataFrame dataFrame = sqlContext.read().format("com.databricks.spark.csv").option("inferSchema", "true").option("header", "true")
          .load("/splits/sub-suaa");
      // This will keep data in memory
      dataFrame.cache();
      // This will describe the column
      // dataFrame.describe("hour").show();
      System.out.println("Rows before removing missing data : " + dataFrame.count());
      // This will describe column details
      // dataFrame.describe("click", "hour", "site_domain").show();
      // This will calculate variance between columns +ve one increases
      // second increases and -ve means one increases other decreases
      // double cov = dataFrame.stat().cov("click", "hour");
      // System.out.println("cov : " + cov);
      // It provides quantitative measurements of the statistical
      // dependence between two random variables
      // double corr = dataFrame.stat().corr("click", "hour");
      // System.out.println("corr : " + corr);
      // Cross Tabulation provides a table of the frequency distribution
      // for a set of variables
      // dataFrame.stat().crosstab("site_id", "site_domain").show();
      // For frequent items
      // System.out.println("Frequest Items : " +
      // dataFrame.stat().freqItems(new String[] { "site_id",
      // "site_domain" }, 0.3).collectAsList());
      // TODO we can also set maximum occurring item to categorical
      // values.
      // This will replace null values with average for numeric columns
      dataFrame = modifiyDatFrame(dataFrame);
      // Removing rows which have some missing values
      dataFrame = dataFrame.na().replace(dataFrame.columns(), ImmutableMap.of("", "NA"));
      dataFrame.na().fill(0.0);
      dataFrame = dataFrame.na().drop();
      System.out.println("Rows after removing missing data : " + dataFrame.count());
      // TODO Binning and bucketing
      // normalizer will take the column created by the VectorAssembler,
      // normalize it and produce a new column
      // Normalizer normalizer = new
      // Normalizer().setInputCol("features_index").setOutputCol("features");
      dataFrame = dataFrame.drop("app_category_index").drop("app_domain_index").drop("hour_index").drop("C20_index")
          .drop("device_connection_type_index").drop("C1_index").drop("id").drop("device_ip_index").drop("banner_pos_index");
      DataFrame[] splits = dataFrame.randomSplit(new double[] { 0.7, 0.3 });
      DataFrame trainingData = splits[0];
      DataFrame testData = splits[1];
      StringIndexerModel labelIndexer = new StringIndexer().setInputCol("click").setOutputCol("indexedclick").fit(dataFrame);
      // Here we will be sending all columns which will participate in
      // prediction
      VectorAssembler vectorAssembler = new VectorAssembler().setInputCols(findPredictionColumns("click", dataFrame))
          .setOutputCol("features_index");
      GBTClassifier gbt = new GBTClassifier().setLabelCol("indexedclick").setFeaturesCol("features_index").setMaxIter(10).setMaxBins(69000);
      IndexToString labelConverter = new IndexToString().setInputCol("prediction").setOutputCol("predictedLabel");
      Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] { labelIndexer, vectorAssembler, gbt, labelConverter });
      trainingData.show(1);
      PipelineModel model = pipeline.fit(trainingData);
      DataFrame predictions = model.transform(testData);
      predictions.select("predictedLabel", "label").show(5);
      MulticlassClassificationEvaluator evaluator = new MulticlassClassificationEvaluator().setLabelCol("indexedLabel")
          .setPredictionCol("prediction").setMetricName("precision");
      double accuracy = evaluator.evaluate(predictions);
      System.out.println("Test Error = " + (1.0 - accuracy));
      GBTClassificationModel gbtModel = (GBTClassificationModel) (model.stages()[2]);
      System.out.println("Learned classification GBT model:\n" + gbtModel.toDebugString());
    }
  }

  private static String[] findPredictionColumns(String outputCol, DataFrame dataFrame) {
    String columns[] = dataFrame.columns();
    String inputColumns[] = new String[columns.length - 1];
    int count = 0;
    for (String column : dataFrame.columns()) {
      if (!column.equalsIgnoreCase(outputCol)) {
        inputColumns[count++] = column;
      }
    }
    return inputColumns;
  }

  /**
   * This will replace empty values with mean.
   *
   * @param dataFrame
   * @return
   */
  private static DataFrame modifiyDatFrame(DataFrame dataFrame) {
    Set<String> numericColumns = new HashSet<String>();
    if (dataFrame.numericColumns() != null && dataFrame.numericColumns().length() > 0) {
      scala.collection.Iterator<Expression> iterator = ((Seq<Expression>) dataFrame.numericColumns()).toIterator();
      while (iterator.hasNext()) {
        Expression expression = iterator.next();
        Double avgAge = dataFrame.na().drop().groupBy(((AttributeReference) expression).name()).avg(((AttributeReference) expression).name())
            .first().getDouble(1);
        dataFrame = dataFrame.na().fill(avgAge, new String[] { ((AttributeReference) expression).name() });
        numericColumns.add(((AttributeReference) expression).name());
        DataType dataType = ((AttributeReference) expression).dataType();
        if (!"double".equalsIgnoreCase(dataType.simpleString())) {
          dataFrame = dataFrame.withColumn("temp", dataFrame.col(((AttributeReference) expression).name()).cast(DataTypes.DoubleType))
              .drop(((AttributeReference) expression).name()).withColumnRenamed("temp", ((AttributeReference) expression).name());
        }
      }
    }
    // Fit method of StringIndexer converts the column to StringType(if
    // it is not of StringType) and then counts the occurrence of each
    // word. It then sorts these words in descending order of their
    // frequency and assigns an index to each word. StringIndexer.fit()
    // method returns a StringIndexerModel which is a Transformer
    StringIndexer stringIndexer = new StringIndexer();
    String allCoumns[] = dataFrame.columns();
    for (String column : allCoumns) {
      if (!numericColumns.contains(column)) {
        dataFrame = stringIndexer.setInputCol(column).setOutputCol(column + "_index").fit(dataFrame).transform(dataFrame);
        dataFrame = dataFrame.drop(column);
      }
    }
    dataFrame.printSchema();
    return dataFrame;
  }

  @SuppressWarnings("unused")
  private static void copyFile(DataFrame dataFrame) {
    dataFrame
        .select("id", "click", "hour", "C1", "banner_pos", "site_id", "site_domain", "site_category", "app_id", "app_domain", "app_category",
            "device_id", "device_ip", "device_model", "device_type", "device_conn_type", "C14", "C15", "C16", "C17", "C18", "C19", "C20",
            "C21")
        .write().format("com.databricks.spark.csv").option("header", "true").option("codec", "org.apache.hadoop.io.compress.GzipCodec")
        .save("/splits/sub-splitaa-optmized");
  }

  @SuppressWarnings("unused")
  private static Integer parse(String sDate, int field) {
    try {
      if (sDate != null && !sDate.toString().equalsIgnoreCase("hour")) {
        Date date = sdf.parse(sDate.toString());
        Calendar cal = Calendar.getInstance();
        cal.setTime(date);
        return cal.get(field);
      }
    } catch (ParseException e) {
      e.printStackTrace();
    }
    return 0;
  }
}
I am using Spark with Java. A sample of the file:
id,click,hour,C1,banner_pos,site_id,site_domain,site_category,app_id,app_domain,app_category,device_id,device_ip,device_model,device_type,device_conn_type,C14,C15,C16,C17,C18,C19,C20,C21
1000009418151094273,0,14102100,1005,0,1fbe01fe,f3845767,28905ebd,ecad2386,7801e8d9,07d7df22,a99f214a,ddd2926e,44956a24,1,2,15706,320,50,1722,0,35,-1,79
10000169349117863715,0,14102100,1005,0,1fbe01fe,f3845767,28905ebd,ecad2386,7801e8d9,07d7df22,a99f214a,96809ac8,711ee120,1,0,15704,320,50,1722,0,35,100084,79
10000371904215119486,0,14102100,1005,0,1fbe01fe,f3845767,28905ebd,ecad2386,7801e8d9,07d7df22,a99f214a,b3cf8def,8a4875bd,1,0,15704,320,50,1722,0,35,100084,79
10000640724480838376,0,14102100,1005,0,1fbe01fe,f3845767,28905ebd,ecad2386,7801e8d9,07d7df22,a99f214a,e8275b8f,6332421a,1,0,15706,320,50,1722,0,35,100084,79
10000679056417042096,0,14102100,1005,1,fe8cc448,9166c161,0569f928,ecad2386,7801e8d9,07d7df22,a99f214a,9644d0bf,779d90c2,1,0,18993,320,50,2161,0,35,-1,157
10000720757801103869,0,14102100,1005,0,d6137915,bb1ef334,f028772b,ecad2386,7801e8d9,07d7df22,a99f214a,05241af0,8a4875bd,1,0,16920,320,50,1899,0,431,100077,117
10000724729988544911,0,14102100,1005,0,8fda644b,25d4cfcd,f028772b,ecad2386,7801e8d9,07d7df22,a99f214a,b264c159,be6db1d7,1,0,20362,320,50,2333,0,39,-1,157
I am late in replying, but I was also facing the same error while using GBT on a dataset from a CSV file.
I added .setLabels(labelIndexer.labels()) to the labelConverter and this solved the problem. Without explicit labels, IndexToString falls back to reading them from the metadata of its input column, and the prediction column produced by the classifier here carries no nominal metadata, which appears to be what surfaces as the UnresolvedAttribute-to-NominalAttribute ClassCastException.
IndexToString labelConverter = new IndexToString()
    .setInputCol("prediction")
    .setOutputCol("predictedLabel")
    .setLabels(labelIndexer.labels());

How to represent Dependency Triplets in .arff file format?

I'm developing a classifier for text categorisation using the Weka Java libraries. I've extracted a number of features using Stanford's CoreNLP package, including a dependency parse of the text, which returns a string "(rel, head, mod)".
I want to use the dependency triplets returned from this as features for classification, but I cannot figure out how to properly represent them in the ARFF file. Basically, I'm stumped; for each instance there is an arbitrary number of dependency triplets, so I can't define them explicitly in the attributes, for example:
@attribute entityCount numeric
@attribute depTriple_1 string
@attribute depTriple_2 string
.
.
@attribute depTriple_n string
Is there a particular way to go about this? I've spent the better part of the day searching and have not found anything yet.
Thanks a lot for reading.
Extracted from Weka Wiki:
import weka.core.Attribute;
import weka.core.FastVector;
import weka.core.Instance;
import weka.core.Instances;

/**
 * Generates a little ARFF file with different attribute types.
 *
 * @author FracPete
 */
public class SO_Test {

  public static void main(String[] args) throws Exception {
    FastVector atts;
    FastVector attsRel;
    FastVector attVals;
    FastVector attValsRel;
    Instances data;
    Instances dataRel;
    double[] vals;
    double[] valsRel;
    int i;

    // 1. set up attributes
    atts = new FastVector();
    // - numeric
    atts.addElement(new Attribute("att1"));
    // - nominal
    attVals = new FastVector();
    for (i = 0; i < 5; i++)
      attVals.addElement("val" + (i+1));
    atts.addElement(new Attribute("att2", attVals));
    // - string
    atts.addElement(new Attribute("att3", (FastVector) null));
    // - date
    atts.addElement(new Attribute("att4", "yyyy-MM-dd"));
    // - relational
    attsRel = new FastVector();
    // -- numeric
    attsRel.addElement(new Attribute("att5.1"));
    // -- nominal
    attValsRel = new FastVector();
    for (i = 0; i < 5; i++)
      attValsRel.addElement("val5." + (i+1));
    attsRel.addElement(new Attribute("att5.2", attValsRel));
    dataRel = new Instances("att5", attsRel, 0);
    atts.addElement(new Attribute("att5", dataRel, 0));

    // 2. create Instances object
    data = new Instances("MyRelation", atts, 0);

    // 3. fill with data
    // first instance
    vals = new double[data.numAttributes()];
    // - numeric
    vals[0] = Math.PI;
    // - nominal
    vals[1] = attVals.indexOf("val3");
    // - string
    vals[2] = data.attribute(2).addStringValue("This is a string!");
    // - date
    vals[3] = data.attribute(3).parseDate("2001-11-09");
    // - relational
    dataRel = new Instances(data.attribute(4).relation(), 0);
    // -- first instance
    valsRel = new double[2];
    valsRel[0] = Math.PI + 1;
    valsRel[1] = attValsRel.indexOf("val5.3");
    dataRel.add(new Instance(1.0, valsRel));
    // -- second instance
    valsRel = new double[2];
    valsRel[0] = Math.PI + 2;
    valsRel[1] = attValsRel.indexOf("val5.2");
    dataRel.add(new Instance(1.0, valsRel));
    vals[4] = data.attribute(4).addRelation(dataRel);
    // add
    data.add(new Instance(1.0, vals));

    // second instance
    vals = new double[data.numAttributes()]; // important: needs NEW array!
    // - numeric
    vals[0] = Math.E;
    // - nominal
    vals[1] = attVals.indexOf("val1");
    // - string
    vals[2] = data.attribute(2).addStringValue("And another one!");
    // - date
    vals[3] = data.attribute(3).parseDate("2000-12-01");
    // - relational
    dataRel = new Instances(data.attribute(4).relation(), 0);
    // -- first instance
    valsRel = new double[2];
    valsRel[0] = Math.E + 1;
    valsRel[1] = attValsRel.indexOf("val5.4");
    dataRel.add(new Instance(1.0, valsRel));
    // -- second instance
    valsRel = new double[2];
    valsRel[0] = Math.E + 2;
    valsRel[1] = attValsRel.indexOf("val5.1");
    dataRel.add(new Instance(1.0, valsRel));
    vals[4] = data.attribute(4).addRelation(dataRel);
    // add
    data.add(new Instance(1.0, vals));

    // 4. output data
    System.out.println(data);
  }
}
Your problem is in particular about a "relational" attribute. The code segment above shows how to handle such a relational attribute.
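For reference, the ARFF that such a relational attribute serializes to looks roughly like this (a sketch; the attribute names follow the code in the next answer, and Weka handles the exact quoting and escaping when it writes the file):

@relation MyRelation

@attribute entityCount numeric
@attribute depTriples relational
  @attribute depTriplet string
@end depTriples
@attribute postTxt string

@data
2,'(nsubj plays John)\n(dobj plays guitar)','some post text'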
Alright, I did it! Just posting this as an answer in case anyone else has a similar problem. Previously I was following the guide found on the Weka Wiki (as posted by Rushdi), but I had a lot of trouble following it, since the guide creates static instances of the relational attribute, whereas I needed to declare an arbitrary number of them dynamically. So I re-evaluated how I was generating the attributes, and I managed to get it to work with slight changes to the guide:
//1. Set up attributes
FastVector atts;
FastVector relAtts;
Instances relData;

atts = new FastVector();
//Entity Count - numeric
atts.addElement(new Attribute("entityCount"));

//Dependencies - Relational (Multi-Instance)
relAtts = new FastVector();
relAtts.addElement(new Attribute("depTriplet", (FastVector) null));
relData = new Instances("depTriples", relAtts, 0);
atts.addElement(new Attribute("depTriples", relData, 0));

atts.addElement(new Attribute("postTxt", (FastVector) null));

//2. Create Instances Object
Instances trainSet = new Instances("MyName", atts, 0);

/* 3. Fill with data:
   Loop through text docs to extract features
   and generate instance for train set */

//Holds the relational attribute instances
Instances relAttData;

for (Object doc : docList) {
    List<String> depTripleList = getDepTriples(doc);
    int entCount = getEntityCount(doc);
    String pt = getText(doc);

    //Create instance to be added to training set
    Instance tInst = new Instance(trainSet.numAttributes());

    //Entity count
    tInst.setValue( (Attribute) atts.elementAt(0), entCount);

    //Generate Instances for relational attribute
    relAttData = new Instances(trainSet.attribute(1).relation(), 0);

    //For each deplist entry, create an instance and add it to the dataset
    for (String depTriple : depTripleList) {
        Instance relAttInst = new Instance(1);
        relAttInst.setDataset(relAttData);
        relAttInst.setValue(0, depTriple);
        relAttData.add(relAttInst);
    }

    //Add the relational attribute (now filled with a number of instances) to the main instance
    tInst.setValue( (Attribute) atts.elementAt(1), trainSet.attribute(1).addRelation(relAttData));

    //Finally, add the instance to the training set
    trainSet.add(tInst);
}

//4. Output data
System.out.println(trainSet);
I realise this could probably be done differently, but it works well for my situation. Please keep in mind this is not my actual code, but an excerpt of multiple parts stitched together to demonstrate the process used to fix the problem.

How can I find which portlets are deployed on which pages in Liferay 6.1?

In other words, which database tables do I need to look into for the mapping of a portlet to a page in an organization, if there is such a thing? We are using Liferay 6.1.20.
Solutions apart from the marketplace portlet, please.
If you have access to the database, you can fire a simple query on the Layout table to find out on which pages your portlet is added:
SELECT * FROM Layout WHERE typeSettings LIKE '%my_WAR_myportlet%';
You can also build a FinderImpl with service-builder to execute this query through a portlet, and add that portlet to a page to display the results in whatever format you want.
Below is another solution that requires neither deploying a portlet nor direct access to the DB (tested on a MySQL DB):
Go to Server Administration.
Go to the Script tab.
Select Beanshell from the Language drop-down.
Paste the following script code into the textarea and press the Execute button:
import com.liferay.portal.kernel.dao.jdbc.DataAccess;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.util.StringBundler;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

Connection con = null;
PreparedStatement ps = null;
ResultSet rs = null;

try {
    con = DataAccess.getUpgradeOptimizedConnection();

    // your custom query here
    String sqlQuery = "SELECT * FROM Layout WHERE typeSettings LIKE '%my_WAR_myportlet%'";
    out.println("SQL Query: "+ sqlQuery);

    ps = con.prepareStatement(sqlQuery);
    rs = ps.executeQuery();

    ResultSetMetaData rsmd = rs.getMetaData();
    int columnCount = rsmd.getColumnCount();

    List rows = new ArrayList();
    List columnLabels = new ArrayList();
    columnLabels.add("Sr. No.");

    for (int c = 1; c <= columnCount; c++) {
        String cl = rsmd.getColumnLabel(c);
        columnLabels.add(cl);
    }

    while (rs.next()) {
        List row = new ArrayList(columnCount);
        for (int c = 1; c <= columnCount; c++) {
            Object value = rs.getObject(c);
            row.add(value);
        }
        rows.add(row);
    }

    // --- formatting the results ---
    out.append("<div class=\"lfr-search-container \"><div class=\"results-grid\">");
    out.append("<table class=\"taglib-search-iterator\">");
    out.append("<thead>");
    out.append("<tr class=\"portlet-section-header results-header\">");
    for (String value : columnLabels) {
        out.append("<th>");
        out.append(value);
        out.append("</th>");
    }
    out.append("</tr>");
    out.append("</thead>");
    out.append("<tbody>");

    boolean alt = false;
    long resultsCount = 1;
    for (List line : rows) {
        out.append("<tr class=\"portlet-section-alternate results-row " + (alt ? "alt" : "") + "\">");
        // for sr. no.
        out.append("<td>");
        out.append(resultsCount + "");
        out.append("</td>");
        resultsCount = resultsCount + 1;
        for (Iterator iterator = line.iterator(); iterator.hasNext();) {
            Object value = (Object) iterator.next();
            out.append("<td>");
            if (value != null) {
                out.append(value.toString());
            } else {
                out.append("<span style=\"font-style:italic\">null</span>");
            }
            out.append("</td>");
        }
        out.append("</tr>");
        alt = !alt;
    }

    out.append("</tbody>");
    out.append("</table>");
    out.append("</div></div>");
} catch (Exception exp) {
    out.println("ERROR: " + exp);
    throw exp;
}
finally {
    DataAccess.cleanUp(con, ps, rs);
}
Please remember to change the string my_WAR_myportlet to your portletId.
Needless to say, you can modify the script to adjust the styles and show only selected columns. You can also run any SQL query; just update String sqlQuery.
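For instance, a narrower variant of the query above (a sketch; the column names are those of the stock Liferay 6.1 Layout table) that lists just the pages instead of every column:

SELECT groupId, privateLayout, layoutId, friendlyURL
FROM Layout
WHERE typeSettings LIKE '%my_WAR_myportlet%';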
Hope this helps.
There is also a free marketplace plugin that resolves this issue:
http://www.liferay.com/it/marketplace/-/mp/application/27156990
Here is code to get all the portlets from the first column:
ThemeDisplay themeDisplay = (ThemeDisplay) request.getAttribute(WebKeys.THEME_DISPLAY);
Layout layout = themeDisplay.getLayout();
LayoutTypePortlet layoutTypePortlet = (LayoutTypePortlet) layout.getLayoutType();
List<Portlet> portlets = layoutTypePortlet.getAllPortlets("column-1");
for (Portlet portlet : portlets) {
    // print portlet.getPortletId() or do whatever you need
}
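If you need the portlets of every column rather than just column-1, LayoutTypePortlet also exposes a no-argument getAllPortlets() (a sketch continuing the snippet above):

// all portlets on the page, across all columns
List<Portlet> allPortlets = layoutTypePortlet.getAllPortlets();
for (Portlet portlet : allPortlets) {
    System.out.println(portlet.getPortletId());
}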
Layout is the table in which you can find out (via its typeSettings column) which portlets are deployed on which page.
