Storing objects in Hazelcast

I am trying to store my object in a Hazelcast map, but it's not working. It creates a new instance, which I can see in the console and in Mancenter, and it also creates the map, but the map has nothing inside. System.out.println(accountMap.get(1)) also prints nothing.
I even tried a put operation with a simple string, with the same result.
Here is my code:
userAccount user = new userAccount();
user.name = "pras";
user.pass = "12345";
HazelcastInstance instance = Hazelcast.newHazelcastInstance(new Config());
Map<Integer, userAccount> accountMap = instance.getMap("userMap");
accountMap.put(1, user);
System.out.println(accountMap.get(1));

Given:
package com.hazelcast;
import java.io.Serializable;
public class userAccount implements Serializable {
String name;
String pass;
}
And your code from above, I get the following output:
INFO: [192.168.1.70]:5701 [dev] [3.8.1]
Members [1] {
Member [192.168.1.70]:5701 - f8f3cf77-9b02-48b7-8a61-f353c40a6267 this
}
Apr 21, 2017 3:19:28 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.70]:5701 [dev] [3.8.1] [192.168.1.70]:5701 is STARTED
Apr 21, 2017 3:19:28 PM com.hazelcast.internal.partition.impl.PartitionStateManager
INFO: [192.168.1.70]:5701 [dev] [3.8.1] Initializing cluster partition table arrangement...
com.hazelcast.userAccount@70ab80e3
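That last line is just the default Object.toString() of the value read back from the map, so the put/get round trip is actually working. To see the field values instead of the class name and hash, override toString() in userAccount, e.g. (a minimal sketch):
@Override
public String toString() {
    // print the fields instead of the default ClassName@hashCode form
    return "userAccount{name=" + name + ", pass=" + pass + "}";
}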
Hope this helps

Related

Quarkus unable to load the cassandra custom retry policy class

I am working on a task to migrate Quarkus from 1.x to 2.x, and the Quarkus integration with embedded Cassandra fails in unit testing with this error:
Caused by: java.lang.IllegalArgumentException: Can't find class com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
(specified by advanced.retry-policy.class)
**Custom retry policy**
public class CassandraCustomRetryPolicy implements RetryPolicy {
public CassandraCustomRetryPolicy(DriverContext context, String profileName) {
}
//override methods
}
**Quarkus test:**
@QuarkusTest
@QuarkusTestResource(CassandraTestResource.class)
class Test {}
**The CassandraTestResource class starts the embedded Cassandra:**
public class CassandraTestResource implements QuarkusTestResourceLifecycleManager {
    private Cassandra cassandra;

    @Override
    public Map<String, String> start() {
        cassandra = new CassandraBuilder().version("3.11.9")
                .addEnvironmentVariable("JAVA_HOME", getJavaHome())
                .addJvmOptions("-Xms512M -Xmx512m").build();
        cassandra.start();
        return Collections.emptyMap(); // no config overrides needed here
    }
}
I have overridden the default Cassandra driver policy in application.conf inside the resources folder:
datastax-java-driver {
    basic.request {
        timeout = ****
        consistency = ***
        serial-consistency = ***
    }
    advanced.retry-policy {
        class = com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
    }
}
I have observed that my custom retry policy class is treated as a banned resource in QuarkusClassLoader.java:
String resourceName = sanitizeName(name).replace('.', '/') + ".class";
boolean parentFirst = parentFirst(resourceName, state);
if (state.bannedResources.contains(resourceName)) {
throw new ClassNotFoundException(name);
}
I have captured the following logs -
java.lang.ClassNotFoundException: com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:438)
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:414)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at com.datastax.oss.driver.internal.core.util.Reflection.loadClass(Reflection.java:57)
at com.datastax.oss.driver.internal.core.util.Reflection.resolveClass(Reflection.java:288)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfig(Reflection.java:235)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfigProfiles(Reflection.java:194)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.buildRetryPolicies(DefaultDriverContext.java:359)
at com.datastax.oss.driver.internal.core.util.concurrent.LazyReference.get(LazyReference.java:55)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.getRetryPolicies(DefaultDriverContext.java:761)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.init(DefaultSession.java:339)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.access$1100(DefaultSession.java:300)
at com.datastax.oss.driver.internal.core.session.DefaultSession.lambda$init$0(DefaultSession.java:146)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I am using Quarkus 2.7.2.Final with Cassandra driver 4.14.0.
It's not a complete answer but I wanted to leave some notes here in case anybody else can get this over the finish line before I get back to it.
The underlying problem here is that in the Quarkus test case described above, the Java driver code is loaded by the QuarkusClassLoader, which (a) is more restrictive about where it loads code from and (b) doesn't appear to delegate to its parent when a class isn't found. So in this case, executing the following in the test will fail with a ClassNotFoundException:
CqlSession.class.getClassLoader().loadClass(customRetryPolicyClassName)
while the following works without issue:
CqlSession.class.getClassLoader().getParent().loadClass(customRetryPolicyClassName)
The class loader used to load CqlSession is the QuarkusClassLoader instance, while its parent is a stock JVM class loader.
The Java driver uses Class.forName() to load the classes specified for this policy. But since the Quarkus class loader is used to load the driver code itself, that is the loader used for these reflection ops... and as mentioned above, that loader has some specific characteristics that make loading external code harder.
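As an illustration, here is a minimal sketch of forcing a different loader with the three-argument Class.forName overload (the class name is the one from the stack trace; whether the context class loader is the right choice here is an assumption about the test setup):
// Class.forName(name) resolves against the caller's defining loader,
// here the QuarkusClassLoader that loaded the driver. The three-argument
// overload accepts an explicit loader instead:
String name = "com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy";
Class<?> policy = Class.forName(name, true, Thread.currentThread().getContextClassLoader());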
It worked after I initialized the CQL session like this:
CqlSession.builder()
    .addContactPoint(new InetSocketAddress(settings.getAddress(), settings.getPort()))
    .withLocalDatacenter("***")
    .withClassLoader(Thread.currentThread().getContextClassLoader())
    .build();

How to set "SAP-PASSPORT" Header in OData call with Cloud SDK Neo project template

We are going to gather some statistics in the SAP BTP Neo environment with FRun (CF is not supported). To implement tracing of outgoing connection calls, I need to update the "SAP-PASSPORT" header and forward it with each request.
I followed the official SAP documentation to implement it:
https://help.sap.com/viewer/ea72206b834e4ace9cd834feed6c0e09/Cloud/en-US/05a07108d34540d39b8a79e2caf96c8c.html
From my perspective, Step 2 could be skipped. The only thing I need to do is to get the updated SAP Passport header and set it as a request header.
Sample code:
1. Implement the ConnectionInfo interface:
public class ConnectionInfoNeo implements ConnectionInfo {
    @Override
    public byte[] getId() {
        UUID uuid = java.util.UUID.randomUUID();
        ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
        bb.putLong(uuid.getMostSignificantBits());
        bb.putLong(uuid.getLeastSignificantBits());
        return bb.array();
    }

    @Override
    public int getCounter() {
        return 1;
    }
}
2. Get the SapPassportHeader and set it as a request header:
public class MyPurchaseOrderService {
private static final Logger logger = LoggerFactory.getLogger(MyPurchaseOrderService.class);
private static final ConnectionInfo CONNECTION_INFO = new ConnectionInfoNeo();
public List<String> getPurchaseOrdersValueHelp(String purOrderStr) throws NamingException {
String destinationName = "ErpQueryEndpoint";
Context ctx = new InitialContext();
logger.info("Context: " + ctx);
// ConnectivityConfiguration configuration = (ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
// DestinationConfiguration destConfiguration = configuration.getConfiguration(destinationName);
// String destinationUrl = destConfiguration.getProperty("URL");
SapPassportHeader sapPassportHeader = updateSapPassportHeader(ctx);
HttpDestination destination = DestinationAccessor.getDestination(destinationName).asHttp();
List<PurchaseOrder> purchaseOrders = new DefaultPurchaseOrderService()
.getAllPurchaseOrder()
.withHeader("SAP-PASSPORT", sapPassportHeader.getValue())
.filter(PurchaseOrder.PURCHASE_ORDER.startsWith(purOrderStr))
//.top(20)
.executeRequest(destination);
List<String> purOrderNumList = new ArrayList<>();
purchaseOrders.forEach(purchaseOrder -> {
purOrderNumList.add(purchaseOrder.getPurchaseOrder());
});
return purOrderNumList;
}
private SapPassportHeader updateSapPassportHeader(Context ctx) throws NamingException {
SapPassportHeaderProvider sapPassportHeaderProvider = (SapPassportHeaderProvider) ctx.lookup("java:comp/env/SapPassportHeaderProvider");
return sapPassportHeaderProvider.getSapPassportHeader(CONNECTION_INFO);
}
}
But when I tested it in the Neo environment, I got an exception.
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na#javax.naming.NameNotFoundException: Name [SapPassportHeaderProvider] is not bound in this Context. Unable to find [SapPassportHeaderProvider]. |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.NamingContext.lookup(NamingContext.java:824) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.NamingContext.lookup(NamingContext.java:157) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.NamingContext.lookup(NamingContext.java:834) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.NamingContext.lookup(NamingContext.java:157) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.NamingContext.lookup(NamingContext.java:834) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.NamingContext.lookup(NamingContext.java:171) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at org.apache.naming.SelectorContext.lookup(SelectorContext.java:161) |
2021 11 04 03:05:11#+00#ERROR#java.lang.Throwable##ZJE8SZH#https-jsse-nio-8041-exec-6#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na# at javax.naming.InitialContext.lookup(InitialContext.java:417) |
How do I register SapPassportHeaderProvider in JNDI? Is there any simple way to get the header in a Cloud SDK Neo for Java 7 project?
================================================================
I added some resource configuration in web.xml and the above issue is resolved, but the SAP passport header is always null.
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>SapPassportHeaderProvider</res-ref-name>
<res-type>com.sap.core.connectivity.api.sappassport.SapPassportHeaderProvider</res-type>
</resource-ref>
2021 11 04 04:14:58#+00#ERROR#org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/PurchaseOrderNeo-application].[com.bosch.sbs.po.servet.PurchaseOrderValueHelpServlet]##ZJE8SZH#https-jsse-nio-8041-exec-7#na#s3td7fnnd5#purchaseorderneoapplication#web#s3td7fnnd5#na#na#na#na#Servlet.service() for servlet [com.bosch.sbs.po.servet.PurchaseOrderValueHelpServlet] in context with path [/PurchaseOrderNeo-application] threw exception com.sap.cloud.sdk.cloudplatform.exception.ShouldNotHappenException: com.sap.cloud.sdk.cloudplatform.thread.exception.ThreadContextExecutionException: java.lang.NullPointerException: while trying to invoke the method com.sap.core.connectivity.api.sappassport.SapPassportHeader.getValue() of a null object loaded from local variable 'sapPassportHeader'
=========================================================================
Since you tagged your question with sap-cloud-sdk, I will respond on behalf of the library.
It works independently of the API described in the official SAP documentation linked above. You would just need to add the SAP-internal dependency com.sap.cloud.sdk.cloudplatform:sap-passport to your project. As long as you are using the Destination API, the respective headers are added automatically to your outgoing requests.
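For reference, the dependency declaration would look roughly like this (a sketch; the version is assumed to be managed by your SDK BOM, otherwise pin one explicitly):
<dependency>
    <groupId>com.sap.cloud.sdk.cloudplatform</groupId>
    <artifactId>sap-passport</artifactId>
</dependency>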

GridGain with SpringBoot

I've built a Docker image of GridGain Pro and run it.
With Java I do the following:
Create the following @Configuration class:
@Configuration
@EnableCaching
public class CustomConfiguration extends CachingConfigurerSupport {
@Bean
@Override
public KeyGenerator keyGenerator() {
return (target, method, params) -> {
StringBuilder sb = new StringBuilder();
sb.append(target.getClass().getName());
sb.append(method.getName());
for (Object obj : params) {
sb.append("|");
sb.append(obj.toString());
}
return sb.toString();
};
}
#Bean("cacheManager")
public SpringCacheManager cacheManager(IgniteConfiguration igniteConfiguration){
try {
SpringCacheManager springCacheManager = new SpringCacheManager();
springCacheManager.setIgniteInstanceName("ignite");
springCacheManager.setConfiguration(igniteConfiguration);
springCacheManager.setDynamicCacheConfiguration(new CacheConfiguration<>().setCacheMode(CacheMode.REPLICATED));
return springCacheManager;
}
catch (Exception ex) {
    // note: the exception is swallowed here, so a misconfiguration
    // silently falls through to the "return null" below
}
return null;
}
@Bean
@Profile("!dev")
IgniteConfiguration igniteConfiguration() {
GridGainConfiguration gridGainConfiguration = new GridGainConfiguration();
gridGainConfiguration.setRollingUpdatesEnabled(true);
IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
.setPluginConfigurations(gridGainConfiguration)
.setClientMode(true)
.setPeerClassLoadingEnabled(false)
.setIgniteInstanceName("MyIgnite");
DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
DataRegionConfiguration dataRegionConfiguration = new DataRegionConfiguration();
dataRegionConfiguration.setInitialSize(20 * 1024 * 1024);
dataRegionConfiguration.setMaxSize(40 * 1024 * 1024);
dataRegionConfiguration.setMetricsEnabled(true);
dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder tcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder();
tcpDiscoveryVmIpFinder.setAddresses(Arrays.asList("192.168.99.100:47500..47502"));
tcpDiscoverySpi.setIpFinder(tcpDiscoveryVmIpFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
return igniteConfiguration;
}
}
Start spring and get the following error.
2018-04-18 12:27:29.277 WARN 12588 --- [ main] .GridEntDiscoveryNodeValidationProcessor : GridGain node cannot be in one cluster with Ignite node [locNodeAddrs=[server/0:0:0:0:0:0:0:1, server/10.29.96.164, server/127.0.0.1, /192.168.56.1, /192.168.99.1], rmtNodeAddrs=[172.17.0.1/0:0:0:0:0:0:0:1%lo, 192.168.99.100/10.0.2.15, 10.0.2.15/127.0.0.1, /172.17.0.1, /192.168.99.100]]
2018-04-18 12:27:29.283 ERROR 12588 --- [ main] o.a.i.internal.IgniteKernal%MyIgnite : Got exception while starting (will rollback startup routine).
I'm trying to use GridGain as a replacement for Redis and use the @Cacheable annotation.
Does anyone have a working gridgain example?
What is causing the error above?
G.
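For context, a minimal sketch of the @Cacheable usage this setup is aiming for (the service class and the "accounts" cache name are hypothetical):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class AccountService {
    // With the SpringCacheManager bean above, the cache is created on
    // first use via the dynamic cache configuration, and keys come from
    // the custom KeyGenerator bean.
    @Cacheable("accounts")
    public String findAccountName(long id) {
        // expensive lookup; repeated calls with the same id are served
        // from the GridGain cache instead of re-executing this method
        return "account-" + id;
    }
}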
1) Okay, it seems the issue was not providing H2 as a dependency.
2) Using GridGain Professional instead of GridGain Enterprise.
G.
"GridGain node cannot be in one cluster with Ignite node" is pretty self-explanatory.
Either you forgot to stop some local Apache Ignite instance from earlier experiments,
or you deliberately tried to make GridGain join an Ignite cluster,
or there is an instance of Apache Ignite running somewhere in your local network and you have configured multicast or some other too-broad discovery, so the nodes see each other.
Alternatively, the gridgain-core-x.x.x.jar may be missing from one of the nodes' classpaths. Check and add it if necessary.

Runtime Configuration of Cassandra connection in Phantom DSL

I'm using phantom to connect to Apache Cassandra and want to configure the connector at runtime, i.e. I want to parse some configuration file, extract a list of Cassandra databases and pass that somehow to my Database object.
I followed this guide to have an additional layer DatabaseProvider between Database and my service. Hence, I can provide a static DatabaseProvider like this:
object ProdConnector {
val connector = ContactPoints(Seq("dev-cassndr.containers"), 9042)
.keySpace("test")
}
object ProdDatabase extends MyDatabase(ProdConnector.connector)
trait ProdDatabaseProvider extends MyDatabaseProvider {
override def database: MyDatabase = ProdDatabase
}
and in my main function I do
val service = new MessageService with ProdDatabaseProvider {}
How can I achieve the same result at runtime without the singleton objects?
I made several attempts but always got NullPointerExceptions. My current approach is to have a Cassandra configuration object which is read by Jackson from a file:
case class CassandraConfigurator(
contactPoints: Seq[String],
keySpace: String,
port: Int = 9042,
) {
@JsonCreator
def this() = this(null, null, 9042)
val connection: CassandraConnection = {
val p = ContactPoints(contactPoints, port)
p.keySpace(keySpace)
}
}
My entry point then extends StreamApp from fs2:
object Main extends StreamApp[IO] {
override def stream(args: List[String], reqShutdown: IO[Unit])
: Stream[IO, ExitCode] = {
val conf: CassandraConfigurator = ???
val service = new MyService with MyDatabaseProvider {
override def database: MyDatabase = new MyDatabase(conf.connection)
}
service.database.create()
val api = ApiWithService(service).getApi
BlazeBuilder[IO].bindHttp(80, "0.0.0.0").mountService(api, "/").serve
}
}
This results in the following error:
12:44:03.436 [pool-10-thread-1] INFO com.datastax.driver.core.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
12:44:03.436 [pool-10-thread-1] INFO com.datastax.driver.core.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
12:44:03.463 [pool-10-thread-1] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
12:44:03.463 [pool-10-thread-1] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
12:44:03.477 [pool-10-thread-1] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
12:44:03.477 [pool-10-thread-1] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
java.lang.NullPointerException
at com.outworkers.phantom.connectors.ContactPoints$.$anonfun$apply$3(ContactPoint.scala:101)
at com.outworkers.phantom.connectors.DefaultSessionProvider.<init>(DefaultSessionProvider.scala:37)
at com.outworkers.phantom.connectors.CassandraConnection.provider$lzycompute(CassandraConnection.scala:46)
at com.outworkers.phantom.connectors.CassandraConnection.provider(CassandraConnection.scala:41)
at com.outworkers.phantom.connectors.CassandraConnection.session$lzycompute(CassandraConnection.scala:52)
at com.outworkers.phantom.connectors.CassandraConnection.session(CassandraConnection.scala:52)
at com.outworkers.phantom.database.Database.session$lzycompute(Database.scala:36)
at com.outworkers.phantom.database.Database.session(Database.scala:36)
at com.outworkers.phantom.ops.DbOps.$anonfun$createAsync$2(DbOps.scala:66)
at com.outworkers.phantom.builder.query.execution.ExecutionHelper$.$anonfun$sequencedTraverse$2(ExecutableStatements.scala:71)
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:304)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Exception: sbt.TrapExitSecurityException thrown from the UncaughtExceptionHandler in thread "run-main-0"
[error] java.lang.RuntimeException: Nonzero exit code: 1
[error] at sbt.Run$.executeTrapExit(Run.scala:124)
[error] at sbt.Run.run(Run.scala:77)
[error] at sbt.Defaults$.$anonfun$bgRunTask$5(Defaults.scala:1168)
[error] at sbt.Defaults$.$anonfun$bgRunTask$5$adapted(Defaults.scala:1163)
[error] at sbt.internal.BackgroundThreadPool.$anonfun$run$1(DefaultBackgroundJobService.scala:366)
[error] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
[error] at scala.util.Try$.apply(Try.scala:209)
[error] at sbt.internal.BackgroundThreadPool$BackgroundRunnable.run(DefaultBackgroundJobService.scala:289)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
In your case there's nothing implicit here, and the runtime part looks like it should be simple. Your problem is more than likely Jackson failing to read from the file. To test that, try the code below; if it successfully tries to connect to a local Cassandra, then that's your problem.
object Main extends StreamApp[IO] {
override def stream(args: List[String], reqShutdown: IO[Unit])
: Stream[IO, ExitCode] = {
val service = new MyService with MyDatabaseProvider {
override def database: MyDatabase = new MyDatabase(Connector.default)
}
service.database.create()
val api = ApiWithService(service).getApi
BlazeBuilder[IO].bindHttp(80, "0.0.0.0").mountService(api, "/").serve
}
}
Are you sure val conf: CassandraConfigurator = ??? is being properly initialised? If that is what you have in code, I am not surprised you are getting the NPE. Note that the @JsonCreator constructor defaults contactPoints and keySpace to null, so if Jackson never populates them, ContactPoints(contactPoints, port) dereferences a null Seq, which is consistent with the stack trace above.

How to create service builder for liferay plugin project with maven

I have already created a Liferay plugin project, and mvn install also completes successfully.
It gives:
------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:29.773s
[INFO] Finished at: Wed Jun 17 15:46:10 IST 2015
[INFO] Final Memory: 34M/151M
[INFO] ------------------------------------------------------------------------
But I am unable to add a service builder.
When I try to add a service builder via New -> Liferay Service Builder, it does not show any plugin project.
How then do I add a service builder?
Frankly speaking, the Liferay-Maven combination is not fully supported by Liferay IDE in Eclipse. Liferay IDE was originally created with only Ant support, and since Maven support was introduced there have been many things missing.
The issue you have raised is the same for the following options as well:
JSF Portlet
Layout
Theme
Service Builder
This is not an issue for the following options:
Hook
Portlet
Vaadin Portlet
So the best way is to generate a new service builder through the Maven archetypes supported for your specific Liferay version, as shown below.
E.g. com.liferay.maven.archetypes:liferay-servicebuilder-archetype:6.2.1 for Liferay 6.2.1 GA2.
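A sketch of the corresponding generator invocation (your own project's groupId and artifactId are placeholders):
mvn archetype:generate \
    -DarchetypeGroupId=com.liferay.maven.archetypes \
    -DarchetypeArtifactId=liferay-servicebuilder-archetype \
    -DarchetypeVersion=6.2.1 \
    -DgroupId=com.example \
    -DartifactId=my-service-builder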
If you want to add a Vaadin application to an existing Liferay portlet, create a new Liferay-Vaadin project; there "You can continue to use ServiceBuilder as you always have, and retrieve data from your services using XXXXXServiceUtil (or XXXXXLocalServiceUtil)."
For example, here is a DatabaseUtil class from a Vaadin app:
// java.util.List (not java.awt.List) is required for the generic List<TREEVIEW>;
// TREEVIEW and TREEVIEWLocalServiceUtil are the Service Builder generated classes.
import java.util.ArrayList;
import java.util.List;
import com.vaadin.data.util.HierarchicalContainer;

public class DatabaseUtil {
    public static HierarchicalContainer fillTree_db() {
        HierarchicalContainer container = new HierarchicalContainer();
        ArrayList<ArrayList<String>> treeNodes = new ArrayList<ArrayList<String>>();
        try {
            List<TREEVIEW> nodes = TREEVIEWLocalServiceUtil.getAllNodes();
            for (TREEVIEW node : nodes) {
                String nodename = node.getNodename();
                ArrayList<String> row = new ArrayList<String>();
                row.add(String.valueOf(node.getNodeid()));
                row.add(node.getNodename());
                row.add(String.valueOf(node.getRootid()));
                container.addItem(nodename);
                treeNodes.add(row);
            }
            for (int i = 0; i < treeNodes.size(); i++) {
                int root = Integer.parseInt(treeNodes.get(i).get(2));
                if (root != 0)
                    container.setParent(treeNodes.get(i).get(1),
                            treeNodes.get(root - 1).get(1));
            }
        } catch (Exception e) {
            System.err.println("Exception: " + e.getMessage());
        }
        return container;
    }
}
