Connection with BigQuery on Google Cloud Platform using Spark with Scala - apache-spark

I am not able to connect to a BigQuery table from Spark on GCP.
https://cloud.google.com/hadoop/bigquery-connector
I have already tried the steps in the link above, providing the project ID, dataset name, and table name, but with no success. When I try to print the data, I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/cloud/hadoop/io/bigquery/BigQueryConfiguration
at Main.main(Main.scala:27)
at Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 14 more

You are probably missing the spark-bigquery-connector on your classpath; you can add it by passing the following parameter:
gcloud dataproc jobs submit spark --cluster "$MY_CLUSTER" --jars gs://spark-lib/bigquery/spark-bigquery-latest.jar ...
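Once the connector jar is on the classpath, reading a table goes through the DataSource API. A minimal sketch in Scala; the project/dataset/table reference is a placeholder:

import org.apache.spark.sql.SparkSession

// build (or reuse) a session; the connector registers the "bigquery" format
val spark = SparkSession.builder()
  .appName("bigquery-read")
  .getOrCreate()

// "my-project.my_dataset.my_table" is a placeholder; use your own table reference
val df = spark.read
  .format("bigquery")
  .option("table", "my-project.my_dataset.my_table")
  .load()

df.show()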

Related

Write file to GCS using PySpark

Trying to write to Google Cloud Storage using a local PySpark session, getting this error:
FileSystem: Failed to initialize fileystem gs://vta-delta-lake/test_table: java.io.IOException: toDerInputStream rejects tag type 123
Full stack trace:
Py4JJavaError: An error occurred while calling o51.save.
: java.io.IOException: toDerInputStream rejects tag type 123
at sun.security.util.DerValue.toDerInputStream(DerValue.java:874)
at sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1942)
at java.security.KeyStore.load(KeyStore.java:1445)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.util.SecurityUtils.loadKeyStore(SecurityUtils.java:80)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.util.SecurityUtils.loadPrivateKeyFromKeyStore(SecurityUtils.java:113)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.auth.oauth2.GoogleCredential$Builder.setServiceAccountPrivateKeyFromP12File(GoogleCredential.java:715)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.auth.oauth2.GoogleCredential$Builder.setServiceAccountPrivateKeyFromP12File(GoogleCredential.java:699)
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromPrivateKeyServiceAccount(CredentialFactory.java:276)
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.util.CredentialFactory.getCredential(CredentialFactory.java:401)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getCredential(GoogleHadoopFileSystemBase.java:1341)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.createGcsFs(GoogleHadoopFileSystemBase.java:1497)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1479)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:467)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:461)
at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:556)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:748)
I've configured the Google Cloud Storage Hadoop connector, and I can read from a GCS bucket in the same notebook without any issue. If I impersonate the service account, I can upload to the bucket without any issue. I just can't write to the bucket using Spark. Any ideas?
Found it: I had been messing with the Spark configuration. Originally I had set this property:
spark.hadoop.google.cloud.auth.service.account.keyfile
to point to my service account JSON file. This hadn't worked when trying to read from a bucket, so I set GOOGLE_APPLICATION_CREDENTIALS to the JSON file path, which got reads working. It seems, though, that when writing, the code looks for the config setting above first and errors out because it expects a P12 file there. I needed to use this property instead:
spark.hadoop.google.cloud.auth.service.account.json.keyfile
Having set that and restarted PySpark, I can now write to GCS buckets.
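For reference, here is a minimal sketch of setting the JSON-keyfile property when building the session (shown in Scala; the property names are identical from PySpark). The key path is a placeholder:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("gcs-write")
  // enable service-account auth and point the GCS connector at a JSON keyfile
  .config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
  .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "/path/to/service-account.json")
  .getOrCreate()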

Alfresco CMIS unauthorized

I'm doing an integration of Liferay and Alfresco; my goal is to use Alfresco 5.2 to store content created in Liferay DXP.
To do that, I have added this line to portal-ext.properties:
dl.store.impl=com.liferay.portal.store.cmis.CMISStore
and added the config file com.liferay.portal.store.cmis.configuration.CMISStoreConfiguration.cfg with this content:
repositoryUrl=http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom
credentialsUsername=admin
credentialsPassword=password
systemRootDir=Liferay
Everything worked fine, but suddenly I'm getting this error while rebooting Liferay:
21:19:37,536 ERROR [localhost-startStop-1][com_liferay_portal_store_cmis:97] [com.liferay.portal.store.cmis.CMISStore(1549)] The activate method has thrown an exception
org.apache.chemistry.opencmis.commons.exceptions.CmisUnauthorizedException: Unauthorized
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.AbstractAtomPubService.convertStatusCode(AbstractAtomPubService.java:477)
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.AbstractAtomPubService.read(AbstractAtomPubService.java:645)
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.AbstractAtomPubService.getRepositoriesInternal(AbstractAtomPubService.java:808)
at org.apache.chemistry.opencmis.client.bindings.spi.atompub.RepositoryServiceImpl.getRepositoryInfos(RepositoryServiceImpl.java:65)
at org.apache.chemistry.opencmis.client.bindings.impl.RepositoryServiceImpl.getRepositoryInfos(RepositoryServiceImpl.java:90)
at org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl.getRepositories(SessionFactoryImpl.java:135)
at org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl.getRepositories(SessionFactoryImpl.java:112)
at com.liferay.portal.store.cmis.CMISStore.createSession(CMISStore.java:591)
at com.liferay.portal.store.cmis.CMISStore.activate(CMISStore.java:475)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.felix.scr.impl.inject.BaseMethod.invokeMethod(BaseMethod.java:224)
at org.apache.felix.scr.impl.inject.BaseMethod.access$500(BaseMethod.java:39)
at org.apache.felix.scr.impl.inject.BaseMethod$Resolved.invoke(BaseMethod.java:617)
at org.apache.felix.scr.impl.inject.BaseMethod.invoke(BaseMethod.java:501)
at org.apache.felix.scr.impl.inject.ActivateMethod.invoke(ActivateMethod.java:302)
at org.apache.felix.scr.impl.inject.ActivateMethod.invoke(ActivateMethod.java:294)
at org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:297)
at org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:108)
at org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:906)
at org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:879)
at org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:823)
at org.eclipse.osgi.internal.serviceregistry.ServiceFactoryUse$1.run(ServiceFactoryUse.java:212)
at java.security.AccessController.doPrivileged(Native Method)
What does unauthorized mean in this context?

Spark on Amazon EMR: "Timeout waiting for connection from pool"

I'm running a Spark job on a small three-server Amazon EMR 5 (Spark 2.0) cluster. My job runs for an hour or so, then fails with the error below. I can manually restart it and it works, processes more data, and eventually fails again.
My Spark code is fairly simple and does not use any Amazon or S3 APIs directly; it just passes S3 path strings to Spark, and Spark uses S3 internally.
My Spark program just does the following in a loop: load data from S3 -> process -> write data to a different location on S3.
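For illustration, the loop shape is roughly this (a sketch in Scala; bucket names, paths, the batch list, and the processing step are all placeholders, and an existing SparkSession `spark` is assumed):

// load from S3, process, write back to a different S3 location, per batch
val batches = Seq("batch-001", "batch-002") // stand-in for the real work list
for (batch <- batches) {
  val df = spark.read.parquet(s"s3://input-bucket/data/$batch")
  val processed = df.filter(df("status") === "ok") // placeholder processing
  processed.write.parquet(s"s3://output-bucket/processed/$batch")
}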
My first suspicion is that some internal Amazon or Spark code is not properly disposing of connections and the connection pool becomes exhausted.
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.AmazonClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:618)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991)
at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212)
at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy44.retrieveMetadata(Unknown Source)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:85)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:487)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:226)
at com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:195)
at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.conn.$Proxy45.getConnection(Unknown Source)
at com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:423)
at com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at com.amazon.ws.emr.hadoop.fs.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
... 41 more
I encountered this issue with a very trivial program on EMR (read data from S3, filter, write to S3).
I could solve it by using the S3A filesystem implementation and setting fs.s3a.connection.maximum to 100 to get a bigger connection pool
(the default is 15; see "Hadoop-AWS module: Integration with Amazon Web Services" for more configuration properties).
This is how I set the configuration:
// in Scala
val hc = sc.hadoopConfiguration
# in Python (not tested)
hc = sc._jsc.hadoopConfiguration()
// setting the config is the same in both languages
hc.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hc.setInt("fs.s3a.connection.maximum", 100)
To make it work, the S3 URIs passed to Spark have to start with s3a://...
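For example (bucket and paths are placeholders):

val df = spark.read.text("s3a://my-bucket/input/")
df.write.text("s3a://my-bucket/output/")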
This issue may also be resolved while remaining on EMRFS by setting fs.s3.maxConnections to something larger than the default of 500 in the emrfs-site configuration:
https://aws.amazon.com/premiumsupport/knowledge-center/emr-timeout-connection-wait/
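For example, the cluster configuration JSON for the emrfs-site classification might look like this (the value 1000 is an arbitrary illustration):

[
  {
    "Classification": "emrfs-site",
    "Properties": {
      "fs.s3.maxConnections": "1000"
    }
  }
]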
If you are using the Java SDK to spin up an EMR cluster, you can set this using the withConfigurations method (which is much easier than doing it manually by modifying files). See also https://stackoverflow.com/a/52595058/1586965
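A sketch of that call, in Scala against the AWS Java SDK v1 (cluster name is a placeholder, and the other required request settings are elided):

import com.amazonaws.services.elasticmapreduce.model.{Configuration, RunJobFlowRequest}
import scala.collection.JavaConverters._

// emrfs-site classification raising the S3 connection pool size
val emrfsSite = new Configuration()
  .withClassification("emrfs-site")
  .withProperties(Map("fs.s3.maxConnections" -> "1000").asJava)

val request = new RunJobFlowRequest()
  .withName("my-cluster") // placeholder; release label, instances, roles, etc. omitted
  .withConfigurations(emrfsSite)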
You can check that this has been set correctly by looking at the Configurations tab in the EMR console.

Create a new JDBC driver with NetBeans to connect to Cassandra?

When I run the main class I get these errors:
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
at org.apache.cassandra.cql.jdbc.CassandraDriver.<clinit>(CassandraDriver.java:52)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:188)
at oracledbtest.CqlJdbcTestBasic.main(CqlJdbcTestBasic.java:19)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
How can I create a new driver connection to Cassandra?
What exactly are you trying to do? The error simply means that you do not have slf4j's jar on your classpath.
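The fix is to put the slf4j API jar (plus a binding) on the classpath when launching; for example (jar names and versions are illustrative, the main class is taken from the stack trace above):

java -cp .:slf4j-api-1.7.25.jar:slf4j-simple-1.7.25.jar:cassandra-jdbc-driver.jar oracledbtest.CqlJdbcTestBasic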

Grails 2.0.1 Hibernate exception in foreign worker thread

I have a Grails 2.0.1 app and I spawn worker threads. My worker threads use GORM to read/write domain objects from MySQL. Given that the code accessing the DB is not inside an HTTP request, I create a Hibernate session for the save() like this:
MyClass.withTransaction { status ->
    myClass.save()
}
I do that to resolve the "No Hibernate Session bound to thread" problem.
This worked fine on Grails 1.3.7. I am currently attempting to upgrade to Grails 2.0.1, and am seeing an exception I do not understand.
The following stack trace shows the exception.
Although I recognize the exception, and normally resolve it via the "withTransaction" technique shown above, the thing that baffles me about this one is that it is happening on a worker thread that I did not create.
Exception in thread "pool-12-thread-2" org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here
at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63)
at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:687)
at org.codehaus.groovy.grails.orm.hibernate.SessionFactoryProxy.getCurrentSession(SessionFactoryProxy.java:145)
at org.codehaus.groovy.grails.orm.hibernate.validation.HibernateDomainClassValidator.validate(HibernateDomainClassValidator.java:51)
at org.codehaus.groovy.grails.validation.GrailsDomainClassValidator.validate(GrailsDomainClassValidator.java:121)
at org.codehaus.groovy.grails.orm.hibernate.metaclass.ValidatePersistentMethod.doInvokeInternal(ValidatePersistentMethod.java:119)
at org.codehaus.groovy.grails.orm.hibernate.metaclass.AbstractDynamicPersistentMethod.invoke(AbstractDynamicPersistentMethod.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1231)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:189)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at org.codehaus.groovy.grails.orm.hibernate.HibernateGormValidationApi.validate(HibernateGormEnhancer.groovy:702)
at com......MyClass.validate(MyClass.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1231)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at org.codehaus.groovy.grails.orm.hibernate.support.ClosureEventListener$7.call(ClosureEventListener.java:282)
at org.codehaus.groovy.grails.orm.hibernate.support.ClosureEventListener$7.call(ClosureEventListener.java:267)
at org.codehaus.groovy.grails.orm.hibernate.support.ClosureEventListener.doWithManualSession(ClosureEventListener.java:302)
at org.codehaus.groovy.grails.orm.hibernate.support.ClosureEventListener.onPreUpdate(ClosureEventListener.java:267)
at org.codehaus.groovy.grails.orm.hibernate.EventTriggeringInterceptor.onPreUpdate(EventTriggeringInterceptor.java:164)
at org.codehaus.groovy.grails.orm.hibernate.EventTriggeringInterceptor.onPersistenceEvent(EventTriggeringInterceptor.java:89)
at org.grails.datastore.mapping.engine.event.AbstractPersistenceEventListener.onApplicationEvent(AbstractPersistenceEventListener.java:46)
at org.springframework.context.event.SimpleApplicationEventMulticaster$1.run(SimpleApplicationEventMulticaster.java:92)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
I am hoping someone can enlighten me regarding:
1. Where does this worker thread pool originate?
2. Why is it firing a preUpdate event, when I believe the save in my thread already did a successful validate and save?
3. Can I somehow configure my app so that these foreign worker threads are not there?
4. If #3 can't be pulled off, how can I do the equivalent of "withTransaction" in the worker thread code that I do not "own"?
