How to check the number of entries on the local member - Hazelcast

My prime member:
public static void main(String[] args) throws InterruptedException {
    Config config = new Config();
    config.setProperty(GroupProperty.ENABLE_JMX, "true");
    config.setProperty(GroupProperty.BACKPRESSURE_ENABLED, "true");
    config.setProperty(GroupProperty.SLOW_OPERATION_DETECTOR_ENABLED, "true");
    config.getSerializationConfig().addPortableFactory(1, new MyPortableFactory());

    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    IMap<Integer, Rule> ruleMap = hz.getMap("ruleMap");

    // TODO generate rule map data; more than 100,000 entries
    generateRuleMapData(ruleMap);
    logger.info("generate rule finished!");

    // TODO rule map index

    // health check
    PartitionService partitionService = hz.getPartitionService();
    LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
    while (true) {
        logger.info("isClusterSafe:{},isLocalMemberSafe:{},number of entries owned on this node = {}",
                partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(),
                mapStatistics.getOwnedEntryCount());
        Thread.sleep(1000);
    }
}
Logs:
2016-06-28 13:53:05,048 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
2016-06-28 13:53:06,049 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
2016-06-28 13:53:07,050 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
My slave member:
public static void main(String[] args) throws InterruptedException {
    Config config = new Config();
    config.setProperty(GroupProperty.ENABLE_JMX, "true");
    config.setProperty(GroupProperty.BACKPRESSURE_ENABLED, "true");
    config.setProperty(GroupProperty.SLOW_OPERATION_DETECTOR_ENABLED, "true");

    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    IMap<Integer, Rule> ruleMap = hz.getMap("ruleMap");

    PartitionService partitionService = hz.getPartitionService();
    LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
    while (true) {
        logger.info("isClusterSafe:{},isLocalMemberSafe:{},number of entries owned on this node = {}",
                partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(),
                mapStatistics.getOwnedEntryCount());
        Thread.sleep(1000);
    }
}
Logs:
2016-06-28 14:05:53,543 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:54,556 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:55,563 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:56,578 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
My question is: why does the number of entries owned on the prime member not change after the cluster adds a slave member?

You should fetch the statistics every second: getLocalMapStats() returns a snapshot taken at call time, so call it inside the loop.
while (true) {
    LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
    logger.info(
            "isClusterSafe:{},isLocalMemberSafe:{},rulemap.size:{}, number of entries owned on this node = {}",
            partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(), ruleMap.size(),
            mapStatistics.getOwnedEntryCount());
    Thread.sleep(1000);
}

Another option is to make use of localKeySet(), which returns the locally owned set of keys:
ruleMap.localKeySet().size()
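For illustration, a minimal sketch of a monitoring loop based on localKeySet(), assuming the same ruleMap and logger as in the code above:

while (true) {
    // localKeySet() returns only the keys owned by this member (no backups),
    // so its size reflects ownership changes as partitions migrate.
    int locallyOwned = ruleMap.localKeySet().size();
    logger.info("locally owned keys = {}", locallyOwned);
    Thread.sleep(1000);
}

Note that localKeySet() materializes the owned key set on every call, so for a map of this size the per-second LocalMapStats approach above is likely cheaper.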

Related

Multithreaded Kafka Consumer not processing all the partitions in parallel

I have created a multithreaded Kafka consumer in which one thread is assigned to each partition (I have 100 partitions in total). I followed the https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example example.
Below is the init method of my consumer.
consumer = kafka.consumer.Consumer.createJavaConsumerConnector(createConsumerConfig());
System.out.println("Kafka Consumer initialized.");
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(topicName, 100);
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topicName);
executor = Executors.newFixedThreadPool(100);
In the above init method, I get the list of Kafka streams (100 in total), each of which should be connected to one partition (which is happening as expected).
Then I submit each of the streams to a different thread using the snippet below.
public Object call() {
    for (final KafkaStream stream : streams) {
        executor.execute(new StreamWiseConsumer(stream));
    }
    return true;
}
Below is the StreamWiseConsumer class.
public class StreamWiseConsumer extends Thread {
    ConsumerIterator<byte[], byte[]> consumerIterator;
    private KafkaStream m_stream;

    public StreamWiseConsumer(ConsumerIterator<byte[], byte[]> consumerIterator) {
        this.consumerIterator = consumerIterator;
    }

    public StreamWiseConsumer(KafkaStream kafkaStream) {
        this.m_stream = kafkaStream;
    }

    @Override
    public void run() {
        ConsumerIterator<byte[], byte[]> consumerIterator = m_stream.iterator();
        while (!Thread.currentThread().isInterrupted() && !interrupted) {
            try {
                if (consumerIterator.hasNext()) {
                    String reqId = UUID.randomUUID().toString();
                    System.out.println(reqId + " : Event received by threadId : " + Thread.currentThread().getId());
                    MessageAndMetadata<byte[], byte[]> messageAndMetaData = consumerIterator.next();
                    byte[] keyBytes = messageAndMetaData.key();
                    String key = null;
                    if (keyBytes != null) {
                        key = new String(keyBytes);
                    }
                    byte[] eventBytes = messageAndMetaData.message();
                    if (eventBytes == null) {
                        System.out.println("Topic: No event fetched for transaction Id:" + key);
                        continue;
                    }
                    String event = new String(eventBytes).trim();
                    // Some Processing code
                    System.out.println(reqId + " : Processing completed for threadId = " + Thread.currentThread().getId());
                    consumer.commitOffsets();
                }
            } catch (Exception ex) {
            }
        }
    }
}
Ideally, it should start processing all 100 partitions in parallel. But one thread picks some random number of events and processes them, then another thread starts processing from another partition. It looks like sequential processing, just spread across different threads. I was expecting processing to happen on all 100 threads at once. Am I missing something here?
Please find the log links below:
https://drive.google.com/file/d/14b7gqPmwUrzUWewsdhnW8q01T_cQ30ES/view?usp=sharing
https://drive.google.com/file/d/1PO_IEsOJFQuerW0y-M9wRUB-1YJuewhF/view?usp=sharing
I doubt whether this is the right approach for vertically scaling Kafka streams.
Kafka Streams inherently supports multi-threaded consumption.
Increase the number of threads used for processing with the num.stream.threads configuration.
If you want 100 threads to process the 100 partitions, set num.stream.threads to 100, as in the sketch below.
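For illustration, a minimal Kafka Streams sketch of that setting; the application id, topic name, and per-record processing are placeholders, not taken from the question:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class HundredThreadConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "hundred-partition-consumer"); // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // One stream thread per partition: 100 threads for 100 partitions.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 100);

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("myTopic")                 // placeholder topic name
               .foreach((key, value) -> { /* processing code */ });

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}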

log4j2 isThreadContextMapInheritable property usage

I am trying to log events of a Java application to separate log files based on a key set in the ThreadContext. But my key is not reaching the child thread (created on a MouseEvent) even after setting "log4j2.isThreadContextMapInheritable" to "true" in the system properties. Could someone please help me resolve this?
My main method:
public class Application {
    static {
        System.setProperty("log4j2.isThreadContextMapInheritable", "true");
    }

    private final static Logger LOGGER = LogManager.getLogger(Application.class);

    public static void main(String[] args) throws Exception {
        ThreadContext.put("cfg", "RLS");
        LOGGER.info("New window opening!!!" + ThreadContext.get("cfg"));
        newWindow();
    }

    private static void newWindow() throws Exception {
        ButtonFrame buttonFrame = new ButtonFrame("Button Demo");
        buttonFrame.setSize(350, 275);
        buttonFrame.setVisible(true);
    }
}
ButtonFrame class:
public class ButtonFrame extends JFrame {
    private final static Logger LOGGER = LogManager.getLogger(NewWindow.class);
    JButton bChange;
    JFrame frame = new JFrame("Our JButton listener example");

    public ButtonFrame(String title) {
        super(title);
        setLayout(new FlowLayout());
        bChange = new JButton("Click Me!");
        bChange.addMouseListener(new MouseListener() {
            @Override
            public void mouseClicked(MouseEvent e) {
                try {
                    LOGGER.info("Mouse clicked!!!" + ThreadContext.get("cfg"));
                    JDialog d = new JDialog(frame, "HI", true);
                    d.setLocationRelativeTo(frame);
                    d.setVisible(true);
                } catch (Exception e1) {
                    e1.printStackTrace();
                }
            }

            @Override
            public void mousePressed(MouseEvent e) {}

            @Override
            public void mouseReleased(MouseEvent e) {}

            @Override
            public void mouseEntered(MouseEvent e) {}

            @Override
            public void mouseExited(MouseEvent e) {}
        });
        add(bChange);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }
}
log4j2.properties file:
appenders = rls,otr,routing
appender.rls.type = RollingFile
appender.rls.name = RollingFile_Rls
appender.rls.fileName = D:\\RLS\\rls_%d{MMdd}.log
appender.rls.filePattern = D:\\RLS\\rls_%d{MMdd}.log
appender.rls.layout.type = PatternLayout
appender.rls.layout.pattern = %d{ABSOLUTE} %level{length=1} %markerSimpleName [%C{1}:%L] %m%n
appender.rls.policies.type = Policies
appender.rls.policies.time.type = TimeBasedTriggeringPolicy
appender.rls.policies.time.interval = 1
appender.rls.policies.time.modulate = true
appender.rls.policies.size.type = SizeBasedTriggeringPolicy
appender.rls.policies.size.size = 100MB
appender.rls.strategy.type = DefaultRolloverStrategy
appender.rls.strategy.max = 5
appender.otr.type = RollingFile
appender.otr.name = RollingFile_Otr
appender.otr.fileName = D:\\RLS\\otr_%d{MMdd}.log
appender.otr.filePattern = D:\\RLS\\otr_%d{MMdd}.log
appender.otr.layout.type = PatternLayout
appender.otr.layout.pattern = %d{ABSOLUTE} %level{length=1} %markerSimpleName [%C{1}:%L] %m%n
appender.otr.policies.type = Policies
appender.otr.policies.time.type = TimeBasedTriggeringPolicy
appender.otr.policies.time.interval = 1
appender.otr.policies.time.modulate = true
appender.otr.policies.size.type = SizeBasedTriggeringPolicy
appender.otr.policies.size.size = 100MB
appender.otr.strategy.type = DefaultRolloverStrategy
appender.otr.strategy.max = 5
appender.routing.type = Routing
appender.routing.name = Route_Finder
appender.routing.routes.type = Routes
appender.routing.routes.pattern = $${ctx:cfg}
appender.routing.routes.route1.type = Route
appender.routing.routes.route1.ref = RollingFile_Rls
appender.routing.routes.route1.key = RLS
appender.routing.routes.route2.type = Route
appender.routing.routes.route2.ref = RollingFile_Otr
appender.routing.routes.route2.key = $${ctx:cfg}
loggers = rls,otr
logger.rls.name = logging
logger.rls.level = info
logger.rls.additivity = false
logger.rls.appenderRefs=rls
logger.rls.appenderRef.rls.ref = Route_Finder
logger.otr.name = other
logger.otr.level = info
logger.otr.additivity = false
logger.otr.appenderRefs=otr
logger.otr.appenderRef.otr.ref = Route_Finder
rootLogger.level = trace
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = stdout
You can put a log4j2.component.properties file on the classpath to control various aspects of Log4j 2 behavior.
For example, the content of log4j2.component.properties:
# https://logging.apache.org/log4j/2.x/manual/configuration.html#SystemProperties
# If true use an InheritableThreadLocal to implement the ThreadContext map.
# Otherwise, use a plain ThreadLocal.
# (Maybe ignored if a custom ThreadContext map is specified.)
# Default is false
# Modern 2.10+
log4j2.isThreadContextMapInheritable=true
# Legacy for pre-2.10
isThreadContextMapInheritable=true
This has priority over system properties, but it can be overridden by the environment variable LOG4J_IS_THREAD_CONTEXT_MAP_INHERITABLE, as described in the documentation.
Adding the OP's comment as an answer:
The ThreadContext map can be configured to use an InheritableThreadLocal by setting the system property isThreadContextMapInheritable to true.
Set the system property as -DisThreadContextMapInheritable=true when starting the application, or in application code (before Log4j initializes) using: System.setProperty("isThreadContextMapInheritable", "true");
https://logging.apache.org/log4j/2.x/manual/thread-context.html
https://logging.apache.org/log4j/2.x/manual/configuration.html#SystemProperties
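As a quick sanity check, here is a minimal sketch (the class name is made up) of what the inheritance looks like once the property is picked up before Log4j initializes:

import org.apache.logging.log4j.ThreadContext;

public class InheritableContextDemo {
    // Run with -DisThreadContextMapInheritable=true (or -Dlog4j2.isThreadContextMapInheritable=true on 2.10+);
    // the property must be set before any Log4j class is loaded.
    public static void main(String[] args) throws Exception {
        ThreadContext.put("cfg", "RLS");
        Thread child = new Thread(() ->
                // With an InheritableThreadLocal-backed map this prints "RLS",
                // with a plain ThreadLocal it prints "null".
                System.out.println("cfg in child = " + ThreadContext.get("cfg")));
        child.start();
        child.join();
    }
}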

Converting UnixTimestamp to TIMEUUID for Cassandra

I'm learning all about Apache Cassandra 3.x.x and I'm trying to develop some stuff to play around with. The problem is that I want to store data in a Cassandra table which contains these columns:
id (UUID - Primary Key) | Message (TEXT) | REQ_Timestamp (TIMEUUID) | Now_Timestamp (TIMEUUID)
REQ_Timestamp has the time when the message left the client at frontend level. Now_Timestamp, on the other hand, is the time when the message is finally stored in Cassandra. I need both timestamps because I want to measure the amount of time it takes to handle the request from its origin until the data is safely stored.
Creating the Now_Timestamp is easy, I just use the now() function and it generates the TIMEUUID automatically. The problem arises with REQ_Timestamp. How can I convert that Unix Timestamp to a TIMEUUID so Cassandra can store it? Is this even possible?
The architecture of my backend is this: I get the data as JSON from the frontend into a web service that processes it and stores it in Kafka. Then, a Spark Streaming job takes that Kafka log and puts it in Cassandra.
This is my WebService that puts the data in Kafka.
#Path("/")
public class MemoIn {
#POST
#Path("/in")
#Consumes(MediaType.APPLICATION_JSON)
#Produces(MediaType.TEXT_PLAIN)
public Response goInKafka(InputStream incomingData){
StringBuilder bld = new StringBuilder();
try {
BufferedReader in = new BufferedReader(new InputStreamReader(incomingData));
String line = null;
while ((line = in.readLine()) != null) {
bld.append(line);
}
} catch (Exception e) {
System.out.println("Error Parsing: - ");
}
System.out.println("Data Received: " + bld.toString());
JSONObject obj = new JSONObject(bld.toString());
String line = obj.getString("id_memo") + "|" + obj.getString("id_writer") +
"|" + obj.getString("id_diseased")
+ "|" + obj.getString("memo") + "|" + obj.getLong("req_timestamp");
try {
KafkaLogWriter.addToLog(line);
} catch (Exception e) {
e.printStackTrace();
}
return Response.status(200).entity(line).build();
}
}
Here's my Kafka Writer
package main.java.vcemetery.webservice;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;
import org.apache.kafka.clients.producer.Producer;

public class KafkaLogWriter {

    public static void addToLog(String memo) throws Exception {
        // private static Scanner in;
        String topicName = "MemosLog";
        /*
         * First, we set the properties of the Kafka log
         */
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // We create the producer
        Producer<String, String> producer = new KafkaProducer<>(props);
        // We send the line to the producer
        producer.send(new ProducerRecord<>(topicName, memo));
        // We close the producer
        producer.close();
    }
}
And finally here's what I have of my Spark Streaming job
public class MemoStream {

    public static void main(String[] args) throws Exception {
        Logger.getLogger("org").setLevel(Level.ERROR);
        Logger.getLogger("akka").setLevel(Level.ERROR);

        // Create the context with a 1 second batch size
        SparkConf sparkConf = new SparkConf().setAppName("KafkaSparkExample").setMaster("local[2]");
        JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);

        /* Create a collection with the topics to consume; in this case only one topic */
        Collection<String> topics = Arrays.asList("MemosLog");

        final JavaInputDStream<ConsumerRecord<String, String>> kafkaStream =
                KafkaUtils.createDirectStream(
                        ssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
                );

        kafkaStream.mapToPair(record -> new Tuple2<>(record.key(), record.value()));

        // Split each batch of Kafka data into memos, as a splittable stream
        JavaDStream<String> stream = kafkaStream.map(record -> (record.value().toString()));
        // Then, we split each stream into lines or memos
        JavaDStream<String> memos = stream.flatMap(x -> Arrays.asList(x.split("\n")).iterator());
        /*
         * To split each memo into sections of ids and messages, we have to escape the pipe character with \\
         */
        JavaDStream<String> sections = memos.flatMap(y -> Arrays.asList(y.split("\\|")).iterator());
        sections.print();

        sections.foreachRDD(rdd -> {
            rdd.foreachPartition(partitionOfRecords -> {
                // We establish the connection with Cassandra
                Cluster cluster = null;
                try {
                    cluster = Cluster.builder()
                            .withClusterName("VCemeteryMemos") // cluster name
                            .addContactPoint("127.0.0.1")      // host IP
                            .build();
                } finally {
                    if (cluster != null) cluster.close();
                }
                while (partitionOfRecords.hasNext()) {
                }
            });
        });

        ssc.start();
        ssc.awaitTermination();
    }
}
Thank you in advance.
Cassandra has no function to convert a Unix timestamp into a TIMEUUID. You have to do the conversion on the client side; a sketch follows below.
Ref: https://docs.datastax.com/en/cql/3.3/cql/cql_reference/timeuuid_functions_r.html
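For illustration, a minimal client-side sketch, assuming the DataStax Java driver (com.datastax.driver.core.utils.UUIDs); the timestamp value and class name are placeholders:

import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

public class TimeuuidConversion {
    public static void main(String[] args) {
        long reqTimestampMillis = 1467093600000L;  // placeholder Unix timestamp in milliseconds
        // startOf() builds a deterministic "smallest" TIMEUUID for that instant;
        // it is fine for storing and ordering by time, but not guaranteed unique.
        UUID reqTimeuuid = UUIDs.startOf(reqTimestampMillis);
        // timeBased() builds a unique TIMEUUID for "now", e.g. for the Now_Timestamp column.
        UUID nowTimeuuid = UUIDs.timeBased();
        System.out.println(reqTimeuuid + " / " + nowTimeuuid);
    }
}

If the incoming req_timestamp is in seconds rather than milliseconds, multiply it by 1000 before the conversion.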

Union a List of Flume Receivers in Spark Streaming

In order to increase parallelism, as recommended in the Spark Streaming Programming Guide, I'm setting up multiple receivers and trying to union a list of them. This code works as expected:
private JavaDStream<SparkFlumeEvent> getEventsWorking(JavaStreamingContext jssc, List<String> hosts, List<String> ports) {
    List<JavaReceiverInputDStream<SparkFlumeEvent>> receivers = new ArrayList<>();
    for (String host : hosts) {
        for (String port : ports) {
            receivers.add(FlumeUtils.createStream(jssc, host, Integer.parseInt(port)));
        }
    }
    JavaDStream<SparkFlumeEvent> unionStreams = receivers.get(0)
            .union(receivers.get(1))
            .union(receivers.get(2))
            .union(receivers.get(3))
            .union(receivers.get(4))
            .union(receivers.get(5));
    return unionStreams;
}
But I don't actually know how many receivers my cluster will have until runtime. When I try to do this in a loop I get an NPE.
private JavaDStream<SparkFlumeEvent> getEventsNotWorking(JavaStreamingContext jssc, List<String> hosts, List<String> ports) {
    List<JavaReceiverInputDStream<SparkFlumeEvent>> receivers = new ArrayList<>();
    for (String host : hosts) {
        for (String port : ports) {
            receivers.add(FlumeUtils.createStream(jssc, host, Integer.parseInt(port)));
        }
    }
    JavaDStream<SparkFlumeEvent> unionStreams = null;
    for (JavaReceiverInputDStream<SparkFlumeEvent> receiver : receivers) {
        if (unionStreams == null) {
            unionStreams = receiver;
        } else {
            unionStreams.union(receiver);
        }
    }
    return unionStreams;
}
ERROR:
16/09/15 17:05:25 ERROR JobScheduler: Error in job generator
java.lang.NullPointerException
at org.apache.spark.streaming.DStreamGraph$$anonfun$getMaxInputStreamRememberDuration$2.apply(DStreamGraph.scala:172)
at org.apache.spark.streaming.DStreamGraph$$anonfun$getMaxInputStreamRememberDuration$2.apply(DStreamGraph.scala:172)
at scala.collection.TraversableOnce$$anonfun$maxBy$1.apply(TraversableOnce.scala:225)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
at scala.collection.IndexedSeqOptimized$class.reduceLeft(IndexedSeqOptimized.scala:68)
at scala.collection.mutable.ArrayBuffer.reduceLeft(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:225)
at scala.collection.AbstractTraversable.maxBy(Traversable.scala:105)
at org.apache.spark.streaming.DStreamGraph.getMaxInputStreamRememberDuration(DStreamGraph.scala:172)
at org.apache.spark.streaming.scheduler.JobGenerator.clearMetadata(JobGenerator.scala:270)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:182)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:86)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
16/09/15 17:05:25 INFO MemoryStore: ensureFreeSpace(15128) called with curMem=520144, maxMem=555755765
16/09/15 17:05:25 INFO MemoryStore: Block broadcast_24 stored as values in memory (estimated size 14.8 KB, free 529.5 MB)
Exception in thread "main" java.lang.NullPointerException
at org.apache.spark.streaming.DStreamGraph$$anonfun$getMaxInputStreamRememberDuration$2.apply(DStreamGraph.scala:172)
at org.apache.spark.streaming.DStreamGraph$$anonfun$getMaxInputStreamRememberDuration$2.apply(DStreamGraph.scala:172)
at scala.collection.TraversableOnce$$anonfun$maxBy$1.apply(TraversableOnce.scala:225)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
at scala.collection.IndexedSeqOptimized$class.reduceLeft(IndexedSeqOptimized.scala:68)
at scala.collection.mutable.ArrayBuffer.reduceLeft(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:225)
at scala.collection.AbstractTraversable.maxBy(Traversable.scala:105)
at org.apache.spark.streaming.DStreamGraph.getMaxInputStreamRememberDuration(DStreamGraph.scala:172)
at org.apache.spark.streaming.scheduler.JobGenerator.clearMetadata(JobGenerator.scala:270)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:182)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:86)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
What's the correct way to do this?
Can you please try out the code below? It should solve your problem:
private JavaDStream<SparkFlumeEvent> getEventsNotWorking(JavaStreamingContext jssc, List<String> hosts, List<String> ports) {
    List<JavaDStream<SparkFlumeEvent>> receivers = new ArrayList<JavaDStream<SparkFlumeEvent>>();
    for (String host : hosts) {
        for (String port : ports) {
            receivers.add(FlumeUtils.createStream(jssc, host, Integer.parseInt(port)));
        }
    }
    return jssc.union(receivers.get(0), receivers.subList(1, receivers.size()));
}
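For completeness, a hypothetical driver showing how the helper could be wired into a job; the hosts, ports, batch interval, and the assumption that getEventsNotWorking is a static method of the same class are all placeholders:

public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("FlumeUnionExample").setMaster("local[*]");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

    List<String> hosts = Arrays.asList("flume-host-1", "flume-host-2");   // placeholder hosts
    List<String> ports = Arrays.asList("41414", "41415");                 // placeholder ports

    JavaDStream<SparkFlumeEvent> events = getEventsNotWorking(jssc, hosts, ports);
    events.count().print();   // simple sanity check on the unioned stream

    jssc.start();
    jssc.awaitTermination();
}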

CmisInvalidArgumentException bad request exception

I am getting the error below when I run this program.
I am using SharePoint Server 2010 and recently installed the Danish language pack on the SharePoint server for a client environment. Since then, whenever I run the code below,
I get the following exception:
org.apache.chemistry.opencmis.commons.exceptions.CmisInvalidArgumentException: Bad Request
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.AbstractAtomPubService.convertStatusCode(AbstractAtomPubService.java:453)
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.AbstractAtomPubService.read(AbstractAtomPubService.java:601)
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.NavigationServiceImpl.getChildren(NavigationServiceImpl.java:86)
at org.apache.chemistry.opencmis.client.runtime.FolderImpl$2.fetchPage(FolderImpl.java:285)
at org.apache.chemistry.opencmis.client.runtime.util.AbstractIterator.getCurrentPage(AbstractIterator.java:132)
at org.apache.chemistry.opencmis.client.runtime.util.AbstractIterator.getTotalNumItems(AbstractIterator.java:70)
at org.apache.chemistry.opencmis.client.runtime.util.AbstractIterable.getTotalNumItems(AbstractIterable.java:94)
at ShareTest1.main(ShareTest1.java:188)
public class ShareTest {

    static Session session = null;
    static Map<String, Map<String, String>> allPropMap = new HashMap<String, Map<String, String>>();

    static void getSubTypes(Tree tree) {
        ObjectType objType = (ObjectType) tree.getItem();
        if (objType instanceof DocumentType) {
            System.out.println("\n\nType name " + objType.getDisplayName());
            System.out.println("Type Id " + objType.getId());
            ObjectType typeDoc = session.getTypeDefinition(objType.getId());
            Map<String, PropertyDefinition<?>> mp = typeDoc.getPropertyDefinitions();
            for (String key : mp.keySet()) {
                PropertyDefinition<?> propdef = mp.get(key);
                HashMap<String, String> propMap = new HashMap<String, String>();
                propMap.put("id", propdef.getId());
                propMap.put("displayName", propdef.getDisplayName());
                System.out.println("\nId=" + propMap.get("id") + " DisplayName=" + propMap.get("displayName"));
                System.out.println("Property Type = " + propdef.getPropertyType().toString());
                System.out.println("Property Name = " + propdef.getPropertyType().name());
                System.out.println("Property Local Namespace = " + propdef.getLocalNamespace());
                if (propdef.getChoices() != null) {
                    System.out.println("Choices size " + propdef.getChoices().size());
                }
                if (propdef.getExtensions() != null) {
                    System.out.println("Extensions " + propdef.getExtensions().size());
                }
                allPropMap.put(propdef.getId(), propMap);
            }
            List lstc = tree.getChildren();
            System.out.println("\nSize of list " + lstc.size());
            for (int i = 0; i < lstc.size(); i++) {
                getSubTypes((Tree) lstc.get(i));
            }
        }
    }

    public static void main(String[] args) {
        /**
         * Get a CMIS session.
         */
        String user = "parag.patel";
        String pwd = "Admin123";
        /* Repository : Abc */
        String url = "http://sharepointind1:34326/sites/DanishTest/_vti_bin/cmis/rest/6B4D3830-65E5-49C9-9A02-5D67DB1FE87B?getRepositoryInfo";
        String repositoryId = "6B4D3830-65E5-49C9-9A02-5D67DB1FE87B";

        // Default factory implementation of client runtime.
        SessionFactory factory = SessionFactoryImpl.newInstance();
        Map<String, String> parameter = new HashMap<String, String>();
        // user credentials
        parameter.put(SessionParameter.USER, "parag.patel");
        parameter.put(SessionParameter.PASSWORD, "Admin123");
        // connection settings
        parameter.put(SessionParameter.ATOMPUB_URL, "http://sharepointind1:34326/sites/DanishTest/_vti_bin/cmis/rest/6B4D3830-65E5-49C9-9A02-5D67DB1FE87B?getRepositoryInfo");
        parameter.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        parameter.put(SessionParameter.REPOSITORY_ID, "6B4D3830-65E5-49C9-9A02-5D67DB1FE87B");
        parameter.put(SessionParameter.LOCALE_ISO3166_COUNTRY, "DK");
        parameter.put(SessionParameter.LOCALE_ISO639_LANGUAGE, "da");
        parameter.put(SessionParameter.LOCALE_VARIANT, "");
        parameter.put(SessionParameter.AUTHENTICATION_PROVIDER_CLASS, CmisBindingFactory.STANDARD_AUTHENTICATION_PROVIDER);

        // create session
        Session session = factory.createSession(parameter);
        if (repositoryId != null) {
            parameter.put(SessionParameter.REPOSITORY_ID, repositoryId);
            session = factory.createSession(parameter);
            RepositoryInfo repInfo = session.getRepositoryInfo();
            System.out.println("Repository Id " + repInfo.getId());
            System.out.println("Repository Name " + repInfo.getName());
            System.out.println("Repository cmis version supported " + repInfo.getCmisVersionSupported());
            System.out.println("Sharepoint product " + repInfo.getProductName());
            System.out.println("Sharepoint version " + repInfo.getProductVersion());
            System.out.println("Root folder id " + repInfo.getRootFolderId());
            try {
                AclCapabilities cap = session.getRepositoryInfo().getAclCapabilities();
                OperationContext operationContext = session.createOperationContext();
                int maxItemsPerPage = 5;
                // operationContext.setMaxItemsPerPage(maxItemsPerPage);
                int documentCount = 0;
                session.setDefaultContext(operationContext);
                CmisObject object = session.getObject(new ObjectIdImpl(repInfo.getRootFolderId()));
                Folder folder = (Folder) object;
                System.out.println("======================= Root folder " + folder.getName());
                ItemIterable<CmisObject> children = folder.getChildren();
                long to = folder.getChildren().getTotalNumItems();
                System.out.println("Total Children " + to);
                Iterator<CmisObject> iterator = children.iterator();
                while (iterator.hasNext()) {
                    CmisObject child = iterator.next();
                    System.out.println("\n\nChild Id " + child.getId());
                    System.out.println("Child Name " + child.getName());
                    if (child.getBaseTypeId().value().equals(ObjectType.FOLDER_BASETYPE_ID)) {
                        System.out.println("Type : Folder");
                        Folder ftemp = (Folder) child;
                        long tot = ftemp.getChildren().getTotalNumItems();
                        System.out.println("Total Children " + tot);
                        ItemIterable<CmisObject> ftempchildren = ftemp.getChildren();
                        Iterator<CmisObject> ftempIt = ftempchildren.iterator();
                        int folderDoc = 0;
                        while (ftempIt.hasNext()) {
                            CmisObject subchild = ftempIt.next();
                            if (subchild.getBaseTypeId().value().equals(ObjectType.DOCUMENT_BASETYPE_ID)) {
                                System.out.println("============ SubDoc " + subchild.getName());
                                folderDoc++;
                                documentCount++;
                            }
                        }
                        System.out.println("Folder " + child.getName() + " No of documents=" + (folderDoc));
                    } else {
                        System.out.println("Type : Document " + child.getName());
                        documentCount++;
                    }
                }
                System.out.println("\n\nTotal no of documents " + documentCount);
            } catch (CmisPermissionDeniedException pd) {
                System.out.println("Error ********** Permission Denied ***************** ");
                pd.printStackTrace();
            } catch (CmisObjectNotFoundException co) {
                System.out.println("Error ******** Root folder not found ***************");
                co.printStackTrace();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } else {
            System.out.println("Else");
            Repository soleRepository = factory.getRepositories(parameter).get(0);
            session = soleRepository.createSession();
        }
    }
}
Here are the libraries I used in the above code:
chemistry-opencmis-client-api-0.9.0
chemistry-opencmis-client-bindings-0.9.0
chemistry-opencmis-client-impl-0.9.0
chemistry-opencmis-commons-api-0.9.0
chemistry-opencmis-commons-impl-0.9.0
log4j-1.2.14
slf4j-api-1.6.1
slf4j-log4j12-1.6.1
It works fine when I connect to a repository (URL) that was created in the English language,
but when I try to connect to the Danish repository I get the error.
The best thing you can do is to increase the SharePoint log level for CMIS. Sometimes the logs provide a clue.
The SharePoint 2010 CMIS implementation isn't 100% spec compliant. OpenCMIS 0.12.0 contains a few workarounds for SharePoint 2010 and 2013. Most of them are little things, like an extra required URL parameter that isn't in the spec. I wouldn't be surprised if this is something similar.
