javax/xml/bind/ValidationEventHandler error - jooq

I am getting "javax/xml/bind/ValidationEventHandler" error with Java 11 and spring boot 2.1.6. I have these libraries in my classpath
classpath group: 'org.jooq', name: 'jooq', version: '3.10.7'
classpath group: 'org.jooq', name: 'jooq-meta', version: '3.10.7'
classpath group: 'org.jooq', name: 'jooq-codegen', version: '3.10.7'
And I have added these Gradle dependencies:
compile group: 'javax.xml.bind', name: 'jaxb-api', version: '2.3.2'
compile group: 'org.glassfish.jaxb', name: 'jaxb-runtime', version: '2.3.2'
compile group: 'com.sun.xml.bind', name: 'jaxb-impl', version: '2.3.2'
compile group: 'com.sun.xml.bind', name: 'jaxb-core', version: '2.3.2'
compile group: 'javax.activation', name: 'activation', version: '1.1.1'
May I know what I am missing?

Please upgrade to the latest version of jOOQ. These issues should be resolved starting from jOOQ 3.12. A few related issues:
https://github.com/jOOQ/jOOQ/issues/6477
https://github.com/jOOQ/jOOQ/issues/6610
https://github.com/jOOQ/jOOQ/issues/6650
https://github.com/jOOQ/jOOQ/issues/6879
https://github.com/jOOQ/jOOQ/issues/7047
https://github.com/jOOQ/jOOQ/issues/7499
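For reference, the upgrade is just a version bump of the Gradle coordinates already shown in the question; a minimal sketch, assuming a 3.12.x release such as 3.12.4 (use whatever the current release is):
classpath group: 'org.jooq', name: 'jooq', version: '3.12.4'
classpath group: 'org.jooq', name: 'jooq-meta', version: '3.12.4'
classpath group: 'org.jooq', name: 'jooq-codegen', version: '3.12.4'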

Related

Pre-commit suddenly fails to install Flake8

The pre-commit run suddenly fails without changes being made to the pre-commit configuration file. The log shows the following error message when attempting to initialise the environment for Flake8:
...
[INFO] Initializing environment for https://gitlab.com/pycqa/flake8.git.
An unexpected error has occurred: CalledProcessError: command: ('/usr/bin/git', 'fetch', 'origin', '--tags')
return code: 128
expected return code: 0
stdout: (none)
stderr:
fatal: could not read Username for 'https://gitlab.com': No such device or address
...
The pre-commit configuration file includes the following entry:
---
repos:
  - repo: https://gitlab.com/pycqa/flake8.git
    rev: 3.9.2
    hooks:
      - id: flake8
        exclude: 'tests|env|docs'
The Flake8 project has moved to GitHub, so the repository URL in the pre-commit configuration file needs to be updated:
---
repos:
  - repo: https://github.com/pycqa/flake8.git
    rev: 3.9.2
    hooks:
      - id: flake8
        exclude: 'tests|env|docs'
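A quick way to verify the fix, assuming pre-commit is installed and on PATH, is to drop the cached hook environments and re-run the hooks:
$ pre-commit clean              # remove cached hook environments
$ pre-commit run --all-files    # re-clone the hook repos and run all hooks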

Error: Platform 'nodejs' version '14.17.1' is unsupported

I am trying to deploy an Angular application to Azure. I created a static web app in Azure and connected it to my GitHub main branch. When I run the action to deploy the app, the Build and Deploy step fails with the error: Platform 'nodejs' version '14.17.1' is unsupported.
Here is the snippet of the error message:
Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
You can report issues at https://github.com/Microsoft/Oryx/issues
Oryx Version: 0.2.20220401.1, Commit: afed4f696a08bfd4c8e4109704a082bf9fb20515, ReleaseTagName: 20220401.1
Build Operation ID: |ZIeL+T275t4=.afcbd56a_
Repository Commit : 848b07de657ce2303e1469a882d68999bed7925a
Detecting platforms...
Error: Platform 'nodejs' version '14.17.1' is unsupported. Supported versions: 10.1.0, 10.10.0, 10.12.0, 10.14.2, 10.16.3, 10.18.0, 10.18.1, 10.19.0, 10.20.0, 10.20.1, 10.21.0, 10.22.0, 10.22.1, 10.23.0, 10.24.1, 12.11.0, 12.11.1, 12.12.0, 12.13.0, 12.14.0, 12.14.1, 12.16.0, 12.16.1, 12.16.2, 12.16.3, 12.18.0, 12.18.1, 12.18.2, 12.18.3, 12.18.4, 12.19.0, 12.20.0, 12.21.0, 12.22.0, 12.22.11, 12.22.4, 12.22.6, 12.22.9, 12.9.1, 13.9.0, 14.0.0, 14.1.0, 14.10.0, 14.10.1, 14.11.0, 14.12.0, 14.13.0, 14.13.1, 14.14.0, 14.15.0, 14.15.1, 14.16.0, 14.17.0, 14.17.4, 14.17.6, 14.18.3, 14.19.1, 14.2.0, 14.3.0, 14.4.0, 14.5.0, 14.6.0, 14.7.0, 14.8.0, 14.9.0, 15.0.0, 15.0.1, 15.1.0, 15.2.0, 15.2.1, 15.3.0, 16.13.1, 16.13.2, 16.14.0, 16.14.2, 16.5.0, 16.6.1, 16.8.0, 17.0.1, 17.1.0, 17.2.0, 17.3.1, 17.4.0, 17.5.0, 17.6.0, 4.4.7, 4.5.0, 4.8.0, 4.8.7, 4.9.0, 4.9.1, 6.10.3, 6.11.0, 6.11.5, 6.17.1, 6.2.2, 6.6.0, 6.9.3, 6.9.5, 8.0.0, 8.1.4, 8.11.2, 8.11.4, 8.12.0, 8.15.1, 8.16.1, 8.16.2, 8.17.0, 8.2.1, 8.8.1, 8.9.4, 9.4.0, 1.17.3, 1.18.0, 1.19.0, 1.19.1, 1.19.2, 1.20.0, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.4
---End of Oryx build logs---
Oryx has found build steps, but identified unsupported platform versions. Failing build.
I have tried to use the setup-node action by following this link: https://github.com/actions/setup-node.
I edited the YAML file and added this:
steps:
  - uses: actions/checkout@v2
  - uses: actions/setup-node@v3 ### Added this line
    with:
      submodules: true
      node-version: 16 ### Added this line
I have also looked at this issue: https://github.com/microsoft/Oryx/issues/650, but nothing there was helpful.
EDIT: I have also posted the question here https://github.com/microsoft/Oryx/issues/1504

Apache Spark + Parquet not Respecting Configuration to use “Partitioned” Staging S3A Committer

I am writing partitioned data (Parquet files) to AWS S3 using Apache Spark (3.0) from my local machine, without having Hadoop installed on my machine. I was getting a FileNotFoundException while writing to S3 when I had a lot of files to write across around 50 partitions (partitionBy = date).
Then I came across the new S3A committers, so I tried to configure the "partitioned" committer instead. But I can still see that Spark uses ParquetOutputCommitter instead of PartitionedStagingCommitter when the file format is "parquet", and I still get a FileNotFoundException when I have a lot of data to write.
My Configuration:
sparkSession.conf().set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", 2);
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.name", "partitioned");
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.magic.enabled ", false);
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append");
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.unique-filenames", true);
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", true);
sparkSession.conf().set("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a", "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
sparkSession.conf().set("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol");
sparkSession.conf().set("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter");
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.tmp.path", "tmp/staging");
What am I doing incorrectly? Could someone please help?
Note: I have created a JIRA for this in Spark, but no help so far: SPARK-31072
==============================================================
I tried the answer from @Rajadayalan, but it's still using FileOutputCommitter. I tried downgrading the Spark version to 2.4.5 without any luck.
20/04/06 12:44:52 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 12:44:52 WARN AbstractS3ACommitterFactory: **Using standard FileOutputCommitter to commit work**. This is slow and potentially unsafe.
20/04/06 12:44:52 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
20/04/06 12:44:52 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 12:44:52 INFO AbstractS3ACommitterFactory: Using Commmitter FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406124452_0000}; taskId=attempt_20200406124452_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@61deb03f}; outputPath=s3a://******/observation, workPath=s3a://******/observation/_temporary/0/_temporary/attempt_20200406124452_0000_m_000000_0, algorithmVersion=2, skipCleanup=false, ignoreCleanupFailures=false} for s3a://********/observation
20/04/06 12:44:53 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 81.077046 ms
20/04/06 12:44:54 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 31.993775 ms
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 9.967359 ms
Note: I don't have Spark installed on my local machine, so I added spark-hadoop-cloud_2.11 as a compile-time dependency.
My build.gradle looks as follows:
compile group: 'org.apache.spark', name: 'spark-hadoop-cloud_2.11', version: '2.4.2.3.1.3.0-79'
compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind
compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.10.0'
// https://mvnrepository.com/artifact/org.apache.parquet/parquet-column
compile group: 'org.apache.parquet', name: 'parquet-column', version: '1.10.1'
// https://mvnrepository.com/artifact/org.apache.parquet/parquet-hadoop
compile group: 'org.apache.parquet', name: 'parquet-hadoop', version: '1.10.1'
compile group: 'org.apache.parquet', name: 'parquet-avro', version: '1.10.1'
// https://mvnrepository.com/artifact/org.apache.spark/spark-sketch
compile group: 'org.apache.spark', name: 'spark-sketch_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-core
compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-catalyst
compile group: 'org.apache.spark', name: 'spark-catalyst_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-tags
compile group: 'org.apache.spark', name: 'spark-tags_2.11', version: '2.4.5'
compile group: 'org.apache.spark', name: 'spark-avro_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-hive
compile group: 'org.apache.spark', name: 'spark-hive_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.xbean/xbean-asm6-shaded
compile group: 'org.apache.xbean', name: 'xbean-asm7-shaded', version: '4.15'
compile group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.2.1'
// compile group: 'org.apache.hadoop', name: 'hadoop-s3guard', version: '3.2.1'
compile group: 'org.apache.hadoop', name: 'hadoop-aws', version: '3.2.1'
compile group: 'org.apache.hadoop', name: 'hadoop-client', version: '3.2.1'
compile group: 'com.amazonaws', name: 'aws-java-sdk-bundle', version: '1.11.271'
I had the same issue. The solution from How To Get Local Spark on AWS to Write to S3 worked to load the PartitionedStagingCommitter. You also have to download the spark-hadoop-cloud jar, as mentioned in that solution.
I also use Spark 3.0, and this version of the jar worked: https://repo.hortonworks.com/content/repositories/releases/org/apache/spark/spark-hadoop-cloud_2.11/2.4.2.3.1.3.0-79/
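If you prefer to resolve that jar through Gradle instead of downloading it manually, a rough sketch (the repository root is inferred from the link above, and the coordinate matches the build file in the question):
repositories {
    maven { url 'https://repo.hortonworks.com/content/repositories/releases/' }
}
dependencies {
    compile group: 'org.apache.spark', name: 'spark-hadoop-cloud_2.11', version: '2.4.2.3.1.3.0-79'
}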
Settings in my spark-defaults.conf
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.hadoop.fs.s3a.committer.name partitioned
spark.hadoop.fs.s3a.committer.magic.enabled false
spark.hadoop.fs.s3a.committer.staging.conflict-mode append
spark.hadoop.fs.s3a.committer.staging.unique-filenames true
spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads true
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
spark.sql.sources.commitProtocolClass org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.sql.parquet.output.committer.class org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
I got this working with a small change from what @Rajadayalan suggested. Apart from the sparkSession.conf().set() calls as in the initial question, I added the option() params on the DataFrame while writing the Parquet files:
df.distinct()
.withColumn("date", date_format(col(EFFECTIVE_PERIOD_START), "yyyy-MM-dd"))
.repartition(col("date"))
.write()
.format(fileFormat)
.partitionBy("date")
.mode(SaveMode.Append)
.option("fs.s3a.committer.name", "partitioned")
.option("fs.s3a.committer.staging.conflict-mode", "append")
.option("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
.option("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
.option("compression", compressionCodecName.name().toLowerCase())
.save(DOWNLOADS_NON_COMPACT_PATH);
This makes the difference, and the following log output shows that it is using PartitionedStagingCommitter.
I could also see that the _SUCCESS file in S3 is a JSON file instead of an empty touch file (_SUCCESS).
20/04/06 14:27:26 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 14:27:26 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/04/06 14:27:26 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using committer partitioned to output data to s3a://************/observation
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using Commmitter PartitionedStagingCommitter{StagingCommitter{AbstractS3ACommitter{role=Task committer attempt_20200406142726_0000_m_000000_0, name=partitioned, outputPath=s3a://*********/observation, workPath=file:/tmp/hadoop-**********/s3a/local-1586197641397/_temporary/0/_temporary/attempt_20200406142726_0000_m_000000_0}, conflictResolution=APPEND, wrappedCommitter=FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406142726_0000}; taskId=attempt_20200406142726_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@4494e88a}; outputPath=file:/Users/**********/Downloads/SparkParquetSample/tmp/staging/**********/local-1586197641397/staging-uploads, workPath=null, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}}} for s3a://parquet-uuid-test/device-metric-observation6
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 52.744811 ms
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 48.78277 ms
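As a quick sanity check that an S3A committer actually ran, you can read the _SUCCESS marker mentioned above; a minimal sketch using the Hadoop FileSystem API (the bucket and path are placeholders, the class name SuccessMarkerCheck is made up for this example, and the same S3A credentials/config as the Spark job are assumed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.nio.charset.StandardCharsets;

public class SuccessMarkerCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: point this at the job's output directory.
        Path success = new Path("s3a://my-bucket/observation/_SUCCESS");
        Configuration conf = new Configuration();
        try (FileSystem fs = success.getFileSystem(conf);
             FSDataInputStream in = fs.open(success)) {
            byte[] buf = new byte[64 * 1024];
            int n = in.read(buf);
            // Empty file: the classic FileOutputCommitter wrote it.
            // JSON body: one of the S3A committers (e.g. partitioned staging) wrote it.
            System.out.println(n <= 0
                    ? "_SUCCESS is empty (FileOutputCommitter)"
                    : new String(buf, 0, n, StandardCharsets.UTF_8));
        }
    }
}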

port isolation with bazel and linux

I am trying to test out port isolation with Bazel on Linux and it fails.
My environment is as follows (started on AWS):
$ cat /etc/*-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -mrs
Linux 3.16.0-4-amd64 x86_64
Bazel version
$ bazel version
Build label: 0.5.1
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Tue Jun 6 10:34:11 2017 (1496745251)
Build timestamp: 1496745251
Build timestamp as int: 1496745251
The repo: https://github.com/ittaiz/bazel-port-isolation
Running the test:
$ bazel test //...
____Loading package:
____Loading package: @bazel_tools//tools/cpp
____Loading package: @local_config_xcode//
____Loading package: @local_jdk//
____Loading package: @local_config_cc//
____Loading complete. Analyzing...
____Loading package: tools/defaults
____Loading package: @bazel_tools//tools/test
____Loading package: @junit_junit//jar
____Found 2 test targets...
____Building...
____[0 / 12] Expanding template SocketIsolation2Test
____[10 / 12] Building SocketIsolationTest.jar (1 source file)
FAIL: //:SocketIsolation2Test (see /home/builduser/.cache/bazel/_bazel_builduser/a589c0f8758972ab3aadcf172c468873/execroot/bazel-port-isolation/bazel-out/local-fastbuild/testlogs/SocketIsolation2Test/test.log)
PASS: //:SocketIsolationTest
____Elapsed time: 11.152s, Critical Path: 7.03s
//:SocketIsolationTest PASSED in 5.3s
//:SocketIsolation2Test FAILED in 0.3s
/home/builduser/.cache/bazel/_bazel_builduser/a589c0f8758972ab3aadcf172c468873/execroot/bazel-port-isolation/bazel-out/local-fastbuild/testlogs/SocketIsolation2Test/test.log
(As you can see, there are no sandbox failures or warnings.)
The test log shows "Address already in use", which means that there was no port isolation.
Why is it not working?
Found out why...
From here: https://bazel.build/versions/master/docs/bazel-user-manual.html#sandboxing
On some platforms such as Google Container Engine cluster nodes or
Debian, user namespaces are deactivated by default due to security
concerns. This can be checked by looking at the file
/proc/sys/kernel/unprivileged_userns_clone: if it exists and contains
a 0, then user namespaces can be activated with sudo sysctl
kernel.unprivileged_userns_clone=1.
The /proc/sys/kernel/unprivileged_userns_clone file existed and had 0 in it, so changing it according to the instructions solved it.
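In shell terms, the check and the fix from that documentation look like this (a sketch; the sysctl call needs root):
$ cat /proc/sys/kernel/unprivileged_userns_clone
0
$ sudo sysctl kernel.unprivileged_userns_clone=1
kernel.unprivileged_userns_clone = 1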

Jooq does not find log4j2 Logger

I'm using the jOOQ library to generate and perform SQL queries. For debugging reasons I want to see exactly what queries are performed, without looking at the changing database every time. I think the jOOQ framework should show lots of information about queries and their effects if the log level is set to debug. I use log4j2 for logging, and it is set to debug using the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="DEBUG" packages="ch.fhnw.ima.doggait">
<appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
</appenders>
<loggers>
<root level="DEBUG">
<appender-ref ref="Console"/>
</root>
</loggers>
</configuration>
Unfortunately, it looks like jOOQ uses another logger, which is set to info.
This is an example output:
2015-09-29 10:15:04,064 DEBUG Shutdown.. 2015-09-29 10:15:04,065 DEBUG
LoggerCo.. 2015-09-29 10:15:04,111 DEBUG Using d... 10:15:04.116 [Test
worker] DEBUG ch.fh.. Sep 29, 2015 10:15:04 AM
org.jooq.tools.JooqLogger info
INFORMATION:
######################################....
You can see that the first few lines are logged at debug level by log4j2, but jOOQ uses JooqLogger at level info.
I load both libraries using Gradle:
compile group: 'com.google.inject', name: 'guice-parent', version: '3.0'
compile group: 'com.google.inject.extensions', name: 'guice-persist', version: '3.0'
compile group: 'org.hibernate', name: 'hibernate-entitymanager', version: '4.3.8.Final'
compile group: 'com.h2database', name: 'h2', version: '1.4.185'
compile group: 'org.apache.commons', name: 'commons-math3', version: '3.2'
compile group: 'org.apache.logging.log4j', name: 'log4j-core', version: '2.0.2'
compile group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.0.2'
compile group: 'org.hamcrest', name: 'hamcrest-all', version: '1.3'
compile group: 'org.apache.pdfbox', name: 'pdfbox', version: '1.8.10'
compile group: 'postgresql', name:'postgresql', version:'9.0-801.jdbc4'
compile group: 'org.jooq', name:'jooq', version:'3.6.3'
compile group: 'org.jooq', name:'jooq-meta', version:'3.6.3'
compile group: 'org.jooq', name:'jooq-codegen', version:'3.6.3'
testCompile 'junit:junit:4.11'
Does somebody know why jOOQ does not use log4j2 as the default logger, or how I can set it to log at debug level?
jOOQ is still using log4j 1.2 as a default, internally. You should use that version (for now), or slf4j instead.
There are a couple of feature requests to improve these dependencies in jOOQ:
#1564 Replace JooqLogger by using slf4j, which already abstracts logger frameworks pretty well
#3280 Expose a new Log SPI, which can be implemented by popular loggers
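If you go the slf4j route, a rough sketch of the extra Gradle dependencies (the standard slf4j API plus the log4j2 binding, matched to the log4j 2.0.2 version already in the build; these artifact coordinates are an assumption, not taken from the question or the answer):
compile group: 'org.slf4j', name: 'slf4j-api', version: '1.7.12'
compile group: 'org.apache.logging.log4j', name: 'log4j-slf4j-impl', version: '2.0.2'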
I know it's a little late, but I hit the same problem and fixed it with the help of Apache itself.
You can include this adapter in your classpath; it bridges the log4j 1.2 API to log4j2:
log4j-1.2-api-2.6.1.jar
But you must not have log4j 1.2 jars or config files in your classpath. In short, do everything with log4j2 and just include the above jar.
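In Gradle terms, pulling in that bridge jar would look roughly like this (2.6.1 is the version named above; ideally match it to your other log4j2 artifacts):
compile group: 'org.apache.logging.log4j', name: 'log4j-1.2-api', version: '2.6.1'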
For further reading: http://logging.apache.org/log4j/2.x/faq.html#which_jars
This SO question helped me: Configuring log4j2 and log4j using a single log4j2 xml file
