WSO2 Stream Processor Connecting to Cassandra gives error

I am trying to connect my Siddhi application to a Cassandra data store by following the instructions in the example program provided in the editor.
I downloaded the DataStax Java driver JAR (OSGi), placed it in the {WSO2_SP_HOME}/lib folder, and started the application. Now I get this error:
> [2019-03-10_16-41-17_549] ERROR {org.wso2.siddhi.core.table.Table} - Error on 'Store-cassandra'. . Error while connecting to Table 'SweetProductionTable'. (Encoded)
java.lang.NullPointerException
at org.wso2.extension.siddhi.store.cassandra.CassandraEventTable.connect(CassandraEventTable.java:443)
at org.wso2.siddhi.core.table.Table.connectWithRetry(Table.java:388)
at org.wso2.siddhi.core.SiddhiAppRuntime.startWithoutSources(SiddhiAppRuntime.java:401)
at org.wso2.siddhi.core.SiddhiAppRuntime.start(SiddhiAppRuntime.java:376)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:68)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:37)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:588)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$57(MSF4JHttpConnectorListener.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-03-10_16-41-17_551] ERROR {org.wso2.siddhi.core.SiddhiAppRuntime} - Error starting Siddhi App 'Store-cassandra', triggering shutdown process. null (Encoded)
Below is the corresponding code:
define stream SweetProductionStream (id int, name string);

@Store(type='cassandra', cassandra.host='localhost', keyspace='production')
@index('uid')
@primaryKey('uid')
define table SweetProductionTable (uid int, name string);

/* Inserting events into the Cassandra keyspace */
@info(name='query1')
from SweetProductionStream
select SweetProductionStream.id as uid, SweetProductionStream.name
insert into SweetProductionTable;
And below are the instructions given in the example:
Prerequisites:
1) Ensure that Cassandra version 3 or above is installed on your machine.
2) Add the DataStax Java driver into {WSO2_SP_HOME}/lib as follows:
a) Download the DataStax Java driver from: http://central.maven.org/maven2/com/datastax/cassandra/cassandra-driver-core/3.3.2/cassandra-driver-core-3.3.2.jar
b) Use the "jartobundle" tool in {WSO2_SP_Home}/bin to extract and convert the above JARs into OSGi bundles.
For Windows: <SP_HOME>/bin/jartobundle.bat <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
For Linux: <SP_HOME>/bin/jartobundle.sh <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
Note: The driver available at the above link is already an OSGi bundle. Please skip this step if the JAR is already OSGi bundled.
c) Copy the converted bundles to the {WSO2_SP_Home}/lib directory.
3) Create a keyspace named 'production' in the Cassandra store.
4) In the store configuration of this application, replace 'username' and 'password' values with your Cassandra credentials.
5) Save this sample.
Executing the Sample:
1) Start the Siddhi application by clicking on 'Run'.
2) If the Siddhi application starts successfully, the following message is shown on the console:
* Store-cassandra.siddhi - Started Successfully!
Note:
If you want to edit this application while it's running, stop the application, make your edits and save the application, and then start it again.
Testing the Sample:
1) Simulate single events:
a) Click on 'Event Simulator' (double arrows on left tab) and click 'Single Simulation'
b) Select 'Store-cassandra' as 'Siddhi App Name' and select 'searchSweetProductionStream' as 'Stream Name'.
c) Provide attribute values, and then click Send.
2) Send at least one event where the name matches a name value in the data you previously inserted into the SweetProductionTable. This will satisfy the 'on' condition of the join query.
3) Optionally, send events to the other corresponding streams to add, delete, update, insert, and search events.
Notes:
- After a change in the store, you can use the search stream to see whether the operation is successful.
- The Primary Key constraint in SweetProductionTable is disabled, because the name cannot be used as a PrimaryKey in a ProductionTable.
- You can use Siddhi functions to create a unique ID for the received events, which can then be used to apply the Primary Key constraint on the data store records. (http://wso2.github.io/siddhi/documentation/siddhi-4.0/#function)
Viewing the Results:
See the output for raw materials on the console. You can use searchSweetProductionStream to check for inserted, deleted, and updated events.
*/
Thanks in advance.

Please provide your Cassandra credentials (username and password).
E.g.:
@store(type='cassandra', cassandra.host='localhost', username='cassandra', password='cassandra', keyspace='production', column.family='SweetProductionTable')
Please refer to this sample:
Siddhi store cassandra
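For reference, a minimal sketch of the corrected table definition, assuming Cassandra's default cassandra/cassandra credentials (replace them with your own):

/* credentials below are Cassandra's defaults; use your actual ones */
@Store(type='cassandra', cassandra.host='localhost',
       username='cassandra', password='cassandra',
       keyspace='production', column.family='SweetProductionTable')
@index('uid')
@primaryKey('uid')
define table SweetProductionTable (uid int, name string);

With the credentials present, the extension no longer hits the NullPointerException in CassandraEventTable.connect shown in the stack trace above.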

Related

Getting this error while running a systemd service

[ERROR] 05:24:00+0100 [main] internal.NodeStartupLogging.invoke - Could not create the DataSource: liquibase.exception.DatabaseException: Error executing SQL UPDATE PUBLIC.DATABASECHANGELOGLOCK SET LOCKED = TRUE, LOCKEDBY = '172.18.0.1 (172.18.0.1)', LOCKGRANTED = '2019-04-03 05:23:18.603' WHERE ID = 1 AND LOCKED = FALSE: The database is read only; SQL statement:
Where should I run the update query to set LOCKED = FALSE on the server?
The error message says that your database is in read-only mode. To allow Liquibase to apply updates you have to enable write (and most likely delete) permissions. For H2 this is done by adding the ACCESS_MODE_DATA=rws parameter to the URL, like this: jdbc:h2:~/test;ACCESS_MODE_DATA=rws (see the H2 docs and Corda docs).
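If a stale lock is still held once the database is writable, the usual Liquibase unlock statement can be run directly against the node's database (for example from the H2 console). A sketch, assuming the standard Liquibase lock table named in the error message:

-- release a stale Liquibase changelog lock
UPDATE PUBLIC.DATABASECHANGELOGLOCK
SET LOCKED = FALSE, LOCKEDBY = NULL, LOCKGRANTED = NULL
WHERE ID = 1;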
However, this answer is very late. Maybe you copied CorDapp files from another location or previously ran the node as another user. In that case, delete these files and directories:
persistence.mv.db
persistence.trace.db
additional-node-infos
artemis
brokers
capsule
drivers

JHipster auditing: wrong modified_by name

I want to save 10k users from Active Directory and I am using auditing, but I have a bug.
The column modified_by does not work well.
The error is
2016-04-04 14:49:27,353 DEBUG [ForkJoinPool.commonPool-worker-3] StateServiceImpl: Request to save userAD, 77879
2016-04-04 14:49:27,354 DEBUG [http-nio-80-exec-6] StateServiceImpl: Request to save userAD, 96459
As you can see, it randomly uses different threads. When it runs on ForkJoinPool.commonPool-worker-X, the modified_by column is filled in with the name "system", and when it is called from http-nio-80-exec-X, it is filled with the name of the user who is logged in.
Thanks.
Please provide your .yo-rc.json file and JHipster version.
Also indicate how you load your 10k users: through the REST API or something else?
Did you write/modify this code or is it the code generated by JHipster unmodified?
SecurityContextHolder probably uses a ThreadLocal variable to store the current user, so if you start a new thread this context is not copied to the execution thread, which results in getting the default user: system.
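A minimal sketch of two common workarounds, assuming Spring Security is on the classpath (the class name and pool size are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.security.concurrent.DelegatingSecurityContextExecutorService;
import org.springframework.security.core.context.SecurityContextHolder;

public class AuditingExecutorConfig { // hypothetical class name

    // Option 1: let newly spawned threads inherit the SecurityContext.
    // Note: this does not help for pre-existing pooled threads such as
    // the ForkJoinPool.commonPool workers seen in the log above.
    static {
        SecurityContextHolder.setStrategyName(
                SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
    }

    // Option 2: wrap the executor so every submitted task runs with the
    // SecurityContext of the thread that submitted it (e.g. the HTTP thread).
    public static ExecutorService auditingExecutor() {
        return new DelegatingSecurityContextExecutorService(
                Executors.newFixedThreadPool(4)); // pool size is arbitrary
    }
}

With option 2, tasks submitted from the request thread should record the logged-in user in modified_by instead of "system".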

Azure SQL Data Warehouse: No catalog entry found for partition ID <id> in database <id>. The metadata is inconsistent. Run DBCC CHECKDB

I am working on moving stored procedures from an on-prem SQL Server database to an Azure SQL Data Warehouse (ASDW). Throughout the process I have had to work around a few missing features - time-consuming but not impossible. One thing I have had to do is replace CTEs followed by MERGE statements with temp tables followed by UPDATE/INSERT/DELETE statements (since CTEs cannot be followed by these statements). At the beginning of each SP I check for the temp tables and delete them if they exist.
Today, I created another stored procedure in the ASDW without any temp tables (no updates/inserts/deletes, so I left the CTEs in there). It "compiled", and I was able to run it without issue (it returned an empty result set, as there is no data yet). I created another SP after this, and when I went to execute it, I got the following error:
...No catalog entry found for partition ID (id) in database 26. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption...
I then went back to the first SP that I mentioned, and it gave me the same error, even though it had previously run without flaw.
I tried running DBCC CHECKDB as instructed but alas, it is not supported/doesn't work.
I dug around a lot, and what I ended up doing was scaling my database from 100 DWUs to 500 DWUs. I am at 0.16% of my database storage size limit, and there is barely any data anywhere (total DB size is <300MB).
Is there an explanation for this? If not, I can't in good conscience use this platform in a production environment.
Full error:
Msg 110802, Level 16, State 1, Line 1
110802;An internal DMS error occurred that caused this operation to fail.
Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Workers.DmsSqlNativeException,
Message: SqlNativeBufferReader.Run, error in OdbcExecuteQuery: SqlState: 42000, NativeError: 608,
'Error calling: SQLExecDirect(this->GetHstmt(), (SQLWCHAR *)statementText, SQL_NTS), SQL return code: -1
| SQL Error Info: SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]No catalog entry found for partition ID 72057594047758336 in database 36. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption.
| Error calling: pReadConn->ExecuteQuery(statementText, bufferFormat)
| state: FFFF, number: 134148, active connections: 100',
Connection String: Driver={pdwodbc};APP=TypeC01-DmsNativeReader:DB196\mpdwsvc (2504)- ODBC;Trusted_Connection=yes;AutoTranslate=no;Server=\\.\pipe\DB.196-bb5f9dd884cf\sql\query
I'm sorry to hear about your experience with Azure SQL Data Warehouse. I believe this is a defect related to BIT data type handling for NOT NULL columns. Can you confirm that you have a BIT NOT NULL column (e.g., CREATE TABLE t1 (IsTrue BIT NOT NULL);)?
If so, a fix has been coded and is in testing for release. To mitigate this now, you can either switch to TINYINT or remove the NOT NULL setting for the column.
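A minimal sketch of both mitigations, reusing the hypothetical table from the answer above:

-- definition that reportedly triggers the defect
CREATE TABLE t1 (IsTrue BIT NOT NULL);

-- mitigation 1: switch the column to TINYINT
CREATE TABLE t1_tinyint (IsTrue TINYINT NOT NULL);

-- mitigation 2: keep BIT but drop the NOT NULL constraint
CREATE TABLE t1_nullable (IsTrue BIT NULL);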

Unable to select from SQL Database tables using node-ibm_db

I created a new table in the Bluemix SQL Database service by uploading a csv (baseball.csv) and took the default table name of "baseball".
I created a simple app in Node.js which is just trying to select data from the table with select * from baseball, but I keep getting the following error:
[IBM][CLI Driver][DB2/NT] SQL0204N "USERxxxx.BASEBALL" is an undefined name
Why can't it find my database table?
This issue seems independent of Bluemix; rather, it is a usage error.
This error is possibly caused by the following:
The object identified by name is not defined in the database.
User response
Ensure that the object name (including any required qualifiers) is correctly specified in the SQL statement and it exists.
try running "list tables" from command prompt to check if your table spelling is correct or not.
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.messages.sql.doc/doc/msql00204n.html?cp=SSEPGG_9.7.0%2F2-6-27-0-130
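For example, from the DB2 command line processor (the connection values here are placeholders):

db2 connect to SQLDB user USERxxxx using yourPassword
db2 list tables

If the table is listed in lower case as "baseball", you will need to quote it in your query, as described in the next answer.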
I created the table from the SQL Database web UI in Bluemix and took the default name of baseball. It looks like this creates a case-sensitive table name.
Unfortunately for me, the sql_db library (and all DB2 clients, I believe) auto-capitalizes the SQL query into "SELECT * FROM BASEBALL".
The solution was to either
A. Explicitly name my table BASEBALL in the web UI; or
B. Modify my SQL query by quoting the table name:
select * from "baseball"
More info at http://www.ibm.com/developerworks/data/library/techarticle/0203adamache/0203adamache.html#N10121
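A minimal sketch of option B with node-ibm_db (the connection string is a placeholder; take the real one from your Bluemix service credentials):

var ibmdb = require('ibm_db');

// placeholder connection string - use your service credentials
var connStr = 'DATABASE=SQLDB;HOSTNAME=<host>;PORT=50000;PROTOCOL=TCPIP;UID=<user>;PWD=<password>;';

ibmdb.open(connStr, function (err, conn) {
  if (err) return console.error(err);
  // quoting the table name preserves its lower-case spelling,
  // so the driver does not fold it to BASEBALL
  conn.query('select * from "baseball"', function (err, rows) {
    if (err) console.error(err);
    else console.log(rows);
    conn.close(function () {});
  });
});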

In H2 database, the auto_increment field is incremented by 32?

I have this simple table (just for testing):
create table test
(
  key int not null primary key auto_increment,
  name varchar(30)
);
Then I execute the following requests:
insert into test values (null, 'one');   // key=1
insert into test values (null, 'two');   // key=2
At this stage all goes well. Then I close the H2 Console, re-open it, and execute this request:
insert into test values (null, 'three'); // key=33
Finally, here are all the results (keys 1, 2, and 33).
I do not know how to solve this problem, if it is a real problem...
The database uses a cache of 32 entries for sequences, and auto-increment is internally implemented as a sequence. If the system crashes without closing the database, at most that many numbers are lost. This is similar to how sequences work in other databases. Sequence values are not guaranteed to be generated without gaps in such cases.
So, did you really close the database? You should - it's not technically a problem if you don't, but closing the database will ensure such strange things will not occur. I can't reproduce the problem if I normally close the database (stop the H2 Console tool). Closing all connections will close the database, and the database is closed if the application is stopped normally (using a shutdown hook).
By the way, what is your exact database URL? It seems you are using jdbc:h2:tcp://... but I can't see the rest of the URL.
Don't close the terminal. The terminal is the parent process of the H2 TCP server; they are not detached. When you close the terminal, its process closes all child processes, which means an emergency server shutdown.
This happens when a database "thinks" it was forced to close (an accident or emergency, for example), and it's related to the identity cache.
In my case I was facing this issue while learning and playing with the H2 database in a Spring Boot application. The solution was to execute the SHUTDOWN; command in the h2-console when finished playing; after that you can safely stop your Spring Boot application without this tremendous jump in your auto-generated fields.
Personal note: this is usually not a problem if you create a new database on every application start. But when you persist the data you are playing with (for example via a data.sql file, with properties like the ones below), the H2 database survives restarts, and then this happens; so close it safely with the SHUTDOWN command.
spring.datasource.url=jdbc:h2:./src/main/resources/data;DB_CLOSE_ON_EXIT=FALSE;AUTO_RECONNECT=TRUE
spring.jpa.hibernate.ddl-auto=update
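A minimal sketch of the clean-shutdown workflow in the h2-console (the table name test is hypothetical):

-- work with the database as usual
insert into test values (null, 'four');

-- close the database cleanly before stopping the application,
-- so the unused identity-cache values are not discarded
SHUTDOWN;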
References:
- Solution: https://stackoverflow.com/a/40135657/10195307
- Learn about the identity cache: https://www.sqlshack.com/learn-to-avoid-an-identity-jump-issue-identity_cache-with-the-help-of-trace-command-t272/
