Openbravo foreign key not found issue while doing update.database

Issue while doing an update.database in Openbravo:
[java] ALTER TABLE AD_FIELD
[java] ADD CONSTRAINT AD_COLUMN_FIELD FOREIGN KEY (AD_COLUMN_ID) REFERENCES AD_COLUMN (AD_COLUMN_ID)
[java] 250661 ERROR - Not all the commands in the final update step were executed correctly. This likely means at least one foreign key was not activated successfully. Please review which one, and fix the missing references, or recover the backup of your sources.
[java] java.lang.Exception: There were serious problems while updating the database. Please review and fix them before continuing with the application rebuild
[java] at org.openbravo.ddlutils.task.AlterDatabaseDataAll.doExecute(AlterDatabaseDataAll.java:227)
[java] at org.openbravo.ddlutils.task.BaseDatabaseTask.execute(BaseDatabaseTask.java:86)
[java] at org.openbravo.ddlutils.task.AlterDatabaseJava.main(AlterDatabaseJava.java:38)

Please execute the query below in your database:
select AD_FIELD_ID from AD_FIELD where AD_COLUMN_ID not in (select AD_COLUMN_ID from AD_COLUMN);
The rows returned are the AD_FIELD records whose AD_COLUMN_ID no longer exists in AD_COLUMN, i.e. the references that break the foreign key.
Go to modulepath/src-db/database/sourcedata/AD_FIELD.xml, search for each AD_FIELD_ID from the query output, delete those records from AD_FIELD.xml, and run update.database again.
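As a side note (a hedged sketch, not part of the original answer): if any AD_FIELD.AD_COLUMN_ID is NULL, the NOT IN form returns no rows even when orphans exist, so an anti-join variant is safer for listing the offending records (the NAME column is assumed from the standard application dictionary):
-- List fields whose AD_COLUMN_ID no longer resolves to an AD_COLUMN row,
-- which is exactly what the failing foreign key complains about.
SELECT f.AD_FIELD_ID, f.NAME
FROM AD_FIELD f
WHERE f.AD_COLUMN_ID IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM AD_COLUMN c WHERE c.AD_COLUMN_ID = f.AD_COLUMN_ID);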

Related

Google Cloud Spanner not implementing check constraint

When trying to create a table with the gcloud CLI (using the Spanner emulator)
gcloud spanner databases ddl update db_name --instance=instance_name --ddl=$data
I'm attempting to add a check constraint, where $data evaluates to:
CREATE TABLE my_table (
  my_key STRING(36),
  some_column STRING(100),
  CONSTRAINT ck_my_table_some_column_enum CHECK (some_column = 'this' OR some_column = 'that'),
) PRIMARY KEY (my_key)
and keep getting the following error:
ERROR: (gcloud.spanner.databases.ddl.update) HttpError accessing <http://localhost:9020/v1/projects/spanner-emulator-test/instances/instance_name/databases/db_name/ddl?alt=json>: response: <{'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Tue, 31 Aug 2021 22:53:52 GMT', 'transfer-encoding': 'chunked', 'status': 501}>, content <{"error":"Check Constraint is not implemented.","code":12,"message":"Check Constraint is not implemented."}>
This may be due to network connectivity issues. Please check your network settings, and the status of the service you are trying to reach.
My first inclination was to verify that the emulator is running, given the last part of the message. It is. And I've tried immediately executing the same DDL without the check constraint, and it works perfectly: the table is created.
DBeaver
I've also tried creating this table using the DBeaver db tool and get the following error:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [12]: UNIMPLEMENTED: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:513)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:444)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:431)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:816)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3440)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4718)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcSqlExceptionImpl: UNIMPLEMENTED: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory.of(JdbcSqlExceptionFactory.java:222)
at com.google.cloud.spanner.jdbc.AbstractJdbcStatement.execute(AbstractJdbcStatement.java:259)
at com.google.cloud.spanner.jdbc.JdbcStatement.executeStatement(JdbcStatement.java:110)
at com.google.cloud.spanner.jdbc.JdbcStatement.execute(JdbcStatement.java:106)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
... 12 more
Caused by: com.google.cloud.spanner.SpannerException: UNIMPLEMENTED: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:284)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:61)
at com.google.cloud.spanner.SpannerExceptionFactory.fromApiException(SpannerExceptionFactory.java:299)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:174)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:110)
at com.google.cloud.spanner.DatabaseAdminClientImpl.lambda$updateDatabaseDdl$7(DatabaseAdminClientImpl.java:337)
at com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:223)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:211)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:124)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:724)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.addListener(FluentFuture.java:110)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:45)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.AbstractCatchingFuture.create(AbstractCatchingFuture.java:41)
at com.google.common.util.concurrent.Futures.catching(Futures.java:297)
at com.google.api.core.ApiFutures.catching(ApiFutures.java:99)
at com.google.api.gax.longrunning.OperationFutureImpl.<init>(OperationFutureImpl.java:97)
at com.google.cloud.spanner.DatabaseAdminClientImpl.updateDatabaseDdl(DatabaseAdminClientImpl.java:335)
at com.google.cloud.spanner.connection.DdlClient.executeDdl(DdlClient.java:89)
at com.google.cloud.spanner.connection.DdlClient.executeDdl(DdlClient.java:84)
at com.google.cloud.spanner.connection.SingleUseTransaction.lambda$executeDdlAsync$1(SingleUseTransaction.java:272)
at io.grpc.Context$2.call(Context.java:596)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Suppressed: com.google.cloud.spanner.connection.AbstractBaseUnitOfWork$SpannerAsyncExecutionException: Execution failed for statement: CREATE TABLE my_table (
my_key STRING(36),
some_column STRING(100),
CONSTRAINT ck_my_table_some_column_enum CHECK (some_column = 'this' OR some_column = 'that'),
) PRIMARY KEY (my_key)
at com.google.cloud.spanner.connection.AbstractBaseUnitOfWork.executeStatementAsync(AbstractBaseUnitOfWork.java:233)
at com.google.cloud.spanner.connection.AbstractBaseUnitOfWork.executeStatementAsync(AbstractBaseUnitOfWork.java:145)
at com.google.cloud.spanner.connection.SingleUseTransaction.executeDdlAsync(SingleUseTransaction.java:281)
at com.google.cloud.spanner.connection.ConnectionImpl.executeDdlAsync(ConnectionImpl.java:1157)
at com.google.cloud.spanner.connection.ConnectionImpl.execute(ConnectionImpl.java:803)
at com.google.cloud.spanner.jdbc.AbstractJdbcStatement.execute(AbstractJdbcStatement.java:250)
at com.google.cloud.spanner.jdbc.JdbcStatement.executeStatement(JdbcStatement.java:110)
at com.google.cloud.spanner.jdbc.JdbcStatement.execute(JdbcStatement.java:106)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:513)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:444)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:431)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:816)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3440)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4718)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at io.grpc.Status.asRuntimeException(Status.java:535)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
... 3 more
Check constraints are available in the latest version of the emulator. Can you please check and get back to us if there are any issues?
https://console.cloud.google.com/gcr/images/cloud-spanner-emulator/global/emulator#sha256:ef0fd2ec74bb17b6c31e5010fb699fd0009b9829721d5159e6a11b6a40f881f1/details
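If you run the emulator through Docker, a minimal sketch for picking up that build (the image path comes from the link above; the :latest tag and port mapping are assumptions, adjust to your setup):
# Pull the current emulator build and restart it; gRPC listens on 9010,
# REST on 9020 (the gcloud error above was hitting localhost:9020).
docker pull gcr.io/cloud-spanner-emulator/emulator:latest
docker run -d -p 9010:9010 -p 9020:9020 gcr.io/cloud-spanner-emulator/emulator:latest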
Well... it looks like check constraints don't work with the emulator. I tried this with a test DB in the console and it worked. If anyone comes up with a solution to this, however, or if there's something I need to configure, please let me know! Thank you.

WSO2 Stream Processor: connecting to Cassandra gives an error

I am trying to connect my Siddhi application to a Cassandra data store by following the instructions given in the example program provided in the editor.
I downloaded the DataStax Java driver JAR (OSGi) and placed it in the WSO2/lib folder, then started the application. Now I get an error:
> [2019-03-10_16-41-17_549] ERROR {org.wso2.siddhi.core.table.Table} - Error on 'Store-cassandra'. . Error while connecting to Table 'SweetProductionTable'. (Encoded)
java.lang.NullPointerException
at org.wso2.extension.siddhi.store.cassandra.CassandraEventTable.connect(CassandraEventTable.java:443)
at org.wso2.siddhi.core.table.Table.connectWithRetry(Table.java:388)
at org.wso2.siddhi.core.SiddhiAppRuntime.startWithoutSources(SiddhiAppRuntime.java:401)
at org.wso2.siddhi.core.SiddhiAppRuntime.start(SiddhiAppRuntime.java:376)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:68)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:37)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:588)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$57(MSF4JHttpConnectorListener.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-03-10_16-41-17_551] ERROR {org.wso2.siddhi.core.SiddhiAppRuntime} - Error starting Siddhi App 'Store-cassandra', triggering shutdown process. null (Encoded)
Below is the corresponding code
define stream SweetProductionStream (id int, name string);
@Store(type='cassandra', cassandra.host='localhost', keyspace='production')
@index('uid')
@primaryKey('uid')
define table SweetProductionTable (uid int, name string);
/* Inserting event into the cassandra keyspace */
@info(name='query1')
from SweetProductionStream
select SweetProductionStream.id as uid, SweetProductionStream.name
insert into SweetProductionTable;
and below are the instructions given in the example:
Prerequisites:
1) Ensure that Cassandra version 3 or above is installed on your machine.
2) Add the DataStax Java driver into {WSO2_SP_HOME}/lib as follows:
a) Download the DataStax Java driver from: http://central.maven.org/maven2/com/datastax/cassandra/cassandra-driver-core/3.3.2/cassandra-driver-core-3.3.2.jar
b) Use the "jartobundle" tool in {WSO2_SP_Home}/bin to extract and convert the above JARs into OSGi bundles.
For Windows: <SP_HOME>/bin/jartobundle.bat <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
For Linux: <SP_HOME>/bin/jartobundle.sh <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
Note: The driver given in the above link is an OSGi-bundled one. Please skip this step if the jar is already OSGi bundled.
c) Copy the converted bundles to the {WSO2_SP_Home}/lib directory.
3) Create a keyspace named 'production' in the Cassandra store.
4) In the store configuration of this application, replace 'username' and 'password' values with your Cassandra credentials.
5) Save this sample.
Executing the Sample:
1) Start the Siddhi application by clicking on 'Run'.
2) If the Siddhi application starts successfully, the following message is shown on the console
* Store-cassandra.siddhi - Started Successfully!
Note:
If you want to edit this application while it's running, stop the application, make your edits and save the application, and then start it again.
Testing the Sample:
1) Simulate single events:
a) Click on 'Event Simulator' (double arrows on left tab) and click 'Single Simulation'
b) Select 'Store-cassandra' as 'Siddhi App Name' and select 'searchSweetProductionStream' as 'Stream Name'.
c) Provide attribute values, and then click Send.
2) Send at least one event where the name matches a name value in the data you previously inserted into the SweetProductionTable. This will satisfy the 'on' condition of the join query.
3) Optionally, send events to the other corresponding streams to add, delete, update, insert, and search events.
Notes:
- After a change in the store, you can use the search stream to see whether the operation is successful.
- The Primary Key constraint in SweetProductionTable is disabled, because the name cannot be used as a PrimaryKey in a ProductionTable.
- You can use Siddhi functions to create a unique ID for the received events, which can then be used to apply the Primary Key constraint on the data store records. (http://wso2.github.io/siddhi/documentation/siddhi-4.0/#function)
Viewing the Results:
See the output for raw materials on the console. You can use searchSweetProductionStream to check for inserted, deleted, and updated events.
Thanks in advance.
Please provide your Cassandra credentials (username and password).
Eg:
@store(type='cassandra', cassandra.host='localhost', username='cassandra', password='cassandra', keyspace='production', column.family='SweetProductionTable')
Please refer to this sample:
Siddhi store cassandra
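Putting the answer together with the table definition from the question, a sketch of the corrected annotations (the cassandra/cassandra credentials are placeholders for your own):
@store(type='cassandra', cassandra.host='localhost', username='cassandra', password='cassandra', keyspace='production', column.family='SweetProductionTable')
@primaryKey('uid')
@index('uid')
define table SweetProductionTable (uid int, name string);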

What are datanucleus DELETEME* tables and when are they CREATED and DROPPED

I am getting the following exception while executing Spark jobs:
org.datanucleus.exceptions.NucleusDataStoreException: Exception thrown obtaining schema column information from datastore
This is caused by:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'hive_metastore.DELETEME1530184568175' doesn't exist
I had the same problem:
22/04/10 04:10:30 ERROR Datastore: Error thrown executing CREATE TABLE `DELETEME1649563830414`
(
    `UNUSED` INTEGER NOT NULL
) ENGINE=INNODB : CREATE command denied to user 'someuser'@'xx.xx.xx.xx' for table 'DELETEME1649563830414'
Since it was for development, I just granted all privileges:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'someuser'@'%';
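To the title question: as far as I know, DataNucleus creates these DELETEME<timestamp> tables as throwaway probes; it briefly creates a one-column table so it can read column metadata back through the JDBC driver, then drops it again, which is why the metastore user needs CREATE and DROP rights. A narrower sketch than granting everything (MySQL syntax; the schema name is taken from the error above, and the metastore user of course still needs its usual read/write grants):
mysql> GRANT CREATE, DROP ON hive_metastore.* TO 'someuser'@'%';
mysql> FLUSH PRIVILEGES;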

Spark Error message: Please consider a new data model based on the query pattern instead of using ALLOW FILTERING

My DSE OpsCenter sends me this message:
Please consider a new data model based on the query pattern instead of using ALLOW FILTERING.
After changing my Spark code, I already removed the column value below from my query, but the error message still keeps popping up and I don't know why. Also, the message only occurs in my OpsCenter, not in the actual table. Thanks for your help.
Query:
select * from dse_perf.node_slow_log
Column value / error message:
SELECT "XXX", "XXX", "XXX", "likes", "XXX" FROM "XXX"."axes" WHERE token("article") > ? AND token("article") <= ? ALLOW FILTERING
Please consider a new data model based on the query pattern instead of using ALLOW FILTERING.
OpsCenter is warning you that your request can be pretty expensive, and suggesting you review the use case.
"Allow Filtering" can be pretty expensive as described here:
http://www.datastax.com/dev/blog/allow-filtering-explained-2
It may be that your use case falls into the OK category, in which case you can ignore the warning. If not, it may be worth looking at other ways of modelling your data that let you query it more efficiently.
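For the "new data model" route, the usual Cassandra pattern is a table keyed by exactly what you filter on; a purely illustrative CQL sketch (table and column names are hypothetical, since the real ones are redacted above):
-- Denormalize into a table whose partition key matches the query,
-- so the lookup needs no ALLOW FILTERING at all.
CREATE TABLE my_keyspace.articles_by_category (
    category text,
    article text,
    likes int,
    PRIMARY KEY (category, article)
);
SELECT * FROM my_keyspace.articles_by_category WHERE category = 'sports';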

Exception of type 'Microsoft.WindowsAzure.StorageClient.StorageClientException' was thrown

Exception of type 'Microsoft.WindowsAzure.StorageClient.StorageClientException' was thrown.
Sometimes even if we have the fabric running and the role manager is up, we get an exception of this sort.
The code breaks at the line:
emailAddressClient.CreateTableIfNotExist("EmailAddress");
public EmailAddressDataContext(CloudStorageAccount account) :
    base(account.TableEndpoint.AbsoluteUri, account.Credentials)
{
    this.storageAccount = account;
    CloudTableClient emailAddressClient =
        new CloudTableClient(storageAccount.TableEndpoint.AbsoluteUri,
                             storageAccount.Credentials);
    emailAddressClient.CreateTableIfNotExist("EmailAddress");
}
I give Windows Azure tables camel-cased names all the time without issues.
I wonder if by chance you already used this table name and recently deleted it? For a time after deletion (when the table is still being deleted asynchronously), you won't be able to recreate it. I believe 409 Conflict is the error code to expect in that case.
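If that's the cause, a hedged retry sketch around the legacy StorageClient API used in the question (the StatusCode property is assumed from the old Microsoft.WindowsAzure.StorageClient exception surface):
// Retry creation while a just-deleted table finishes deleting server-side;
// requires System.Net for HttpStatusCode and System.Threading for Sleep.
for (int attempt = 0; attempt < 5; attempt++)
{
    try
    {
        emailAddressClient.CreateTableIfNotExist("EmailAddress");
        break;
    }
    catch (StorageClientException e)
    {
        if (e.StatusCode != HttpStatusCode.Conflict) throw; // not the 409 case
        Thread.Sleep(TimeSpan.FromSeconds(30)); // deletion still in progress
    }
}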
I agree with Steve Marx: casing does not seem to affect this issue. In fact, Microsoft's Azure diagnostics tables are created with unusual casing, e.g. WADPerformanceCounters. I get the problem even in the dev environment, so in my opinion it is something else entirely.
Error fixed in my case: the problem was an error with the connection string (or the lack of one) as defined in the web role or worker role project properties.
Fix:
1) Right-click the web role under the "Roles" folder in your cloud application and select "Properties" from the context menu.
2) Select the "Settings" tab.
3) Verify or add a setting for the connection string that you will use to initialize table storage.
Mine was a simple error: no setting for my connection string.
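For completeness, a sketch of reading that setting at startup (the "DataConnectionString" name is hypothetical; use whatever you added under Settings; RoleEnvironment comes from Microsoft.WindowsAzure.ServiceRuntime):
// Resolve the connection string configured on the role, then build the context.
CloudStorageAccount account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
EmailAddressDataContext context = new EmailAddressDataContext(account);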
An easy fix is to change "EmailAddress" to "Emailaddress". For some reason it would not allow camel casing. So please make sure you have just one capital letter in the table name, at the beginning. Since table names are case-insensitive, you can also name it "emailaddress".
