I need to use WSO2 CEP 3.0 for a project, but I do not have a clue about it. My idea is to use the CEP engine as a trigger on a little Cassandra database I have created, to edit one field when another one is changed.
I have read the official documentation, searched the support forums (Stack Overflow included), and googled it, but I still do not know what steps to follow.
I would appreciate it if anyone could give some explanation or point me to documentation for this task.
Thanks in advance.
CEP is an engine that processes events in real time. To process events, they need to be sent to CEP in some manner. In your case, whenever a change occurs in the database, some other client or external party needs to send an event to CEP. There are several default adaptors available for receiving events; see links [1] & [2] for more info.
[1] http://docs.wso2.org/display/CEP300/Input+Event+Adaptors
[2] http://wso2.com/library/articles/2013/08/writing-custom-event-adaptors-for-cep-3.0.0/
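For illustration, here is a minimal sketch of such an external client pushing a change event to CEP over HTTP. The endpoint URL, adaptor name, and event fields are all placeholders; they depend entirely on how you configure your input event adaptor and stream definitions:

```typescript
// A minimal sketch, assuming an HTTP input event adaptor is configured in
// WSO2 CEP 3.0. The URL and payload shape are placeholders, not the real API.
async function notifyCep(rowKey: string, changedField: string, value: string): Promise<void> {
  const res = await fetch('http://cep-host:9763/endpoints/httpReceiver', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ rowKey, changedField, value }),
  });
  if (!res.ok) throw new Error(`CEP rejected event: HTTP ${res.status}`);
}

// Call this from whatever client writes to Cassandra, so CEP can react
// (e.g. with a query that computes and writes back the dependent field).
notifyCep('user42', 'status', 'active').catch(console.error);
```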
I'm developing a PolicyCenter integration with BillingCenter. I followed the initial step-by-step setup according to the documentation, but when some field of an account is changed in PolicyCenter, the change is not synchronized to BillingCenter.
I need to sync PolicyCenter account updates to BillingCenter, but I couldn't find anything specific in the documentation.
To resolve the issue with the PolicyCenter integration with BillingCenter not synchronizing properly, you can try the following steps:
Check the configuration settings: Ensure that the integration settings are configured correctly, including the mapping of fields between PolicyCenter and BillingCenter.
Verify data consistency: Make sure that the data in PolicyCenter and BillingCenter is consistent and meets the required format. This includes verifying the data types, length, and values of the fields.
Monitor system logs: Check the logs of both PolicyCenter and BillingCenter for any error messages or exceptions related to the integration. This can help you identify any issues with the data or communication between the two systems.
Test the integration: Run a test of the integration to see if it is working properly. This can help you identify any issues with the integration or data flow.
Contact support: If you are unable to resolve the issue, reach out to the Guidewire support team for assistance. They can provide guidance on how to troubleshoot the issue and resolve any issues that may be causing the synchronization problem.
If you need further guidance on any of these steps or have any other questions, you can consult the Guidewire documentation or reach out to the support team for help.
Try a preUpdate rule (see DemoPreUpdate.gs).
I have been reading docs and articles on PouchDB/CouchDB/Cloudant, but I am not able to work out this simple architecture in my head. I need help!
So there are many users on the app. Each user has a separate database (which I read is the usual approach in a pouch/couch/cloudant setup).
Now let's just focus on a single user. This user already has some remote data on our server (CouchDB). He has 3 separate docs stored.
He accesses doc 1 and doc 2 from browser 1, and doc 2 and doc 3 from browser 2.
Content in both the browsers must be in sync.
Should I be using the sync API of PouchDB? From what I read, it syncs the whole database. How can I use this API to sync only a subset of the central database? Is filtered replication the answer here?
Also, I don't want to push both docs in a single call; he can access docs as he needs them.
What is the correct approach to implementing this logic with pouch/couch databases? If you can explain with a little code, that would be great. I just need the basic ideas.
Is this kind of problem easily solvable in upcoming releases such as CouchDB 2.0 and pouchdb-find?
Thanks a lot!
If you take a look at the PouchDB documentation, you should see options.doc_ids. This parameter lets you set up replication on certain document IDs. In your scenario, this would solve your problem.
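For example, a minimal sketch (database names and the server URL are placeholders): each browser keeps a live two-way sync restricted to the documents it actually uses:

```typescript
// A minimal sketch using PouchDB's sync API with the doc_ids option.
// 'user-local' and the remote URL are placeholders for your setup.
import PouchDB from 'pouchdb';

const local = new PouchDB('user-local');
const remote = new PouchDB('https://myserver.example.com/userdb');

// Browser 1 would pass ['doc1', 'doc2']; browser 2 would pass ['doc2', 'doc3'].
const sync = local.sync(remote, {
  live: true,                 // keep syncing as changes arrive
  retry: true,                // reconnect after network errors
  doc_ids: ['doc1', 'doc2'],  // replicate only this subset of the database
});

sync
  .on('change', info => console.log('replicated', info))
  .on('error', err => console.error('sync error', err));
```

One way to let the user pull in additional docs on demand is to cancel the sync (sync.cancel()) and start a new one with an extended doc_ids list.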
As we all know, the mongooplog tool is going to be removed in upcoming releases. I need help with the following issue:
I was planning to create a listener using mongooplog that would read any kind of activity on MongoDB and generate a trigger according to the activity, which would hit another server. Now that mongooplog is going away, can anyone suggest an alternative I can use in this case, and how to use it?
I got this warning when trying to use mongooplog. Please let me know if you have any further questions.
warning: mongooplog is deprecated, and will be removed completely in a future release
PS: I am using the Node.js framework to implement the listener. I have not written any code yet, so I have no code to share.
The deprecation message you are quoting only refers to the mongooplog command-line tool, not the general approach of tailing the oplog. The mongooplog tool can be used for some types of data migrations, but isn't the right approach for a general-purpose listener or to wrap in your Node.js application.
You should continue to create a tailable cursor to follow oplog activity. Tailable cursors are supported directly by the MongoDB drivers. For an example using Node.js see: The MongoDB Oplog & Node.js.
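Here is a minimal sketch of that approach with the official Node.js driver. The connection string and the trigger call are placeholders, and the oplog format itself is internal, so treat this as an outline rather than a finished listener:

```typescript
// A minimal sketch of tailing the oplog with the official MongoDB Node.js
// driver. Requires a replica-set member; 'local.oplog.rs' is the oplog.
import { MongoClient, Timestamp } from 'mongodb';

async function tailOplog(): Promise<void> {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const oplog = client.db('local').collection('oplog.rs');

  // Start from "now"; persist the last seen ts elsewhere if you need resume.
  const start = new Timestamp({ t: Math.floor(Date.now() / 1000), i: 0 });

  const cursor = oplog.find(
    { ts: { $gt: start } },
    { tailable: true, awaitData: true, noCursorTimeout: true },
  );

  for await (const entry of cursor) {
    // entry.op: 'i' insert, 'u' update, 'd' delete; entry.ns: namespace.
    console.log(entry.ns, entry.op);
    // Here you would notify your other server (hypothetical endpoint).
  }
}

tailOplog().catch(console.error);
```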
You may also want to watch/upvote SERVER-13932: Change Notification Stream API in the MongoDB issue tracker, which is a feature suggestion for a formal API (rather than relying on the internal oplog format used by replication).
Context
I want to develop an automated script for broker (IIB9/10) resource monitoring, capturing information about broker running status, message flows deployed, JVM usage, number of threads running, etc.
The initial thought is to have a report generated using scripts and then displayed over a browser.
Question
Can this be done entirely using only Ant scripts (I am not sure, as I have not explored iterative processing in Ant in detail), or is a combination of Ant and batch/shell scripts the best bet?
I know the web user interface in IIB10 does most of this, but I want to add some features.
I suggest you take a look at message flow statistics and accounting:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac19100_.htm?lang=en
This is a feature of IIB by which it is capable of emitting resource statistics. The statistics are published to a topic in a well-defined XML format. I would try solving your requirement by writing an application that reads these messages and uses the data in them to generate your graphs or other reports.
There is a support pack, IS03, which can give you an idea of such an application.
This will not cover everything you mentioned (for example, monitoring which flows are deployed cannot be achieved like this), but it gives a comprehensive view of the load and performance of your applications:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj10440_.htm?lang=en
And there is a resource statistics feature as well for monitoring resources used by your applications:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj43310_.htm?lang=en
To get everything, you will need a variety of tools, I think. You can use Resource Stats and Accounting/Stats, as suggested by Attila, to get JVM and thread usage. The broker publishes updates to a topic, so you can create a simple subscriber to grab that info.
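For example, if your broker is set up to publish its statistics as JSON over MQTT (an option in IIB10), a bare-bones subscriber could look like this; the host, port, and topic string are assumptions that depend on your broker configuration:

```typescript
// A minimal sketch using the 'mqtt' npm package. The topic below is a
// guess at the statistics topic root; check your node's configuration.
import mqtt from 'mqtt';

const client = mqtt.connect('mqtt://iib-host:1883'); // placeholder host/port

client.on('connect', () => {
  // Hypothetical wildcard subscription to JSON resource statistics.
  client.subscribe('IBM/IntegrationBus/+/Statistics/JSON/#');
});

client.on('message', (_topic, payload) => {
  const stats = JSON.parse(payload.toString());
  // Pull out JVM heap / thread counts here and feed them into your report.
  console.log(stats);
});
```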
For deploy-related info, stop/start state, and so forth, I would look at building simple Integration API or REST API applications to call from Ant.
You can find documentation for these APIs here:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_10.0.0/com.ibm.etools.mft.doc/be43410_.htm?lang=en
and here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/SSMKHH_10.0.0/com.ibm.etools.mft.restapi.doc/index.html
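As a rough sketch of calling the IIB10 REST API from a small script (4414 is the default web administration port; take the exact resource paths from the documentation above, so treat the URL here as an assumption):

```typescript
// A minimal sketch against the IIB10 administrative REST API.
// Host and path are placeholders; see the linked documentation.
async function listIntegrationServers(): Promise<void> {
  const res = await fetch('http://iib-host:4414/apiv1/executiongroups');
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  // The response describes integration servers and their deployed resources.
  console.log(await res.json());
}

listIntegrationServers().catch(console.error);
```

The same calls can be driven from Ant via its get or exec tasks if you want to keep the orchestration in your existing scripts.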
We are finding it very hard to monitor the logs spread over a cluster of four managed servers. So, I am trying to build a simple log4j appender which uses the SolrJ API to store the logs in the Solr server. The idea is to leverage Solr's REST interface to build a better GUI which could help us:
search the logs and display the previous and next 50 lines or so, and
tail the logs.
Being awful at front ends, I am trying to cook up something with GWT (a prototype version). I am planning to host the project on Google Code under the ASL.
I would greatly appreciate it if you could share some insight on:
Whether it makes sense to create a project like this?
Is using Solr for this overkill?
Any suggestions on a web framework/tool that will help me build a tab-based front end for tailing.
You can use a combination of Logstash (for shipping and filtering logs) + Elasticsearch (for indexing and storage) + Kibana (for a pretty GUI).
The Loggly folks have also built Logstash, which can be backed by quite a few things, including Lucene via Elasticsearch. It can forward to Graylog as well.
Totally doable. Many folks have rolled their own. A couple of useful links: there is an online service, www.loggly.com, that does this. They are actually based on Solr as the core storage engine! Obviously they have built a proprietary interface.
Another option is http://www.graylog2.org/. It is open source. Not backed by Solr, but still very cool!