jOOQ `myPojo` to `record`: `store` function is not available on the record

I am learning from the manual here: https://www.jooq.org/doc/latest/manual/sql-execution/fetching/pojos/#NA6504
I'm trying to create a Record from a POJO, but neither the record.store() method nor dslContext.executeInsert(newRecord) is available to me.
My code looks as follows:
// this is created by me
data class Dummy(
    val name: String,
    val id: Long,
    val thisDoesNotExistInTheDatabaseAndThatsFineBecauseMappingToMyBusinessClassStillWorks: String?,
)
// table -> Business Object works ✅
dslContext.select()
    .from(DummyTable.DUMMY_TABLE)
    .fetch()
    .into(Dummy::class.java)
// Business Object -> table does not work ❌
val newPojo = Dummy("string1", 1, null)
val newRecord: Record = dslContext.newRecord(DUMMY_TABLE, newPojo)
newRecord.store() // this throws compilation error: Unresolved reference: store
dslContext.executeInsert(newRecord) // this throws compilation error: Type mismatch: inferred type is Record but TableRecord<*>! was expected
DUMMY_TABLE is generated by jOOQ: open class DummyTable : TableImpl<Record>.
From the documentation, my understanding is that dslContext.newRecord(DUMMY_TABLE, newPojo).store() should just work.
I'm sure I'm missing something obvious here, but I'm not sure where to start looking.

The problem is that you're not using the most appropriate record type in your variable declaration:
// This
val newRecord: Record = dslContext.newRecord(DUMMY_TABLE, newPojo)
// ... should be this:
val newRecord: DummyTableRecord = dslContext.newRecord(DUMMY_TABLE, newPojo)
// ... or even this:
val newRecord = dslContext.newRecord(DUMMY_TABLE, newPojo)
In order to get the table records generated, please activate the relevant flag in your code generation configuration:
https://www.jooq.org/doc/latest/manual/code-generation/codegen-records/
It should be enabled by default.
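Once the records are generated (and the code regenerated, so that DUMMY_TABLE is typed as Table<DummyTableRecord> rather than TableImpl<Record>), the original snippet compiles. A minimal sketch, assuming DummyTableRecord is the generated name and DUMMY_TABLE has a primary key (store() lives on UpdatableRecord, which requires one):
val newPojo = Dummy("string1", 1, null)

// inferred as DummyTableRecord, so store() resolves
val newRecord = dslContext.newRecord(DUMMY_TABLE, newPojo)
newRecord.store()

// executeInsert() also compiles now, since it expects a TableRecord
dslContext.executeInsert(newRecord)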

NullPointer in broadcast value being passed as parameter

I'd like to say that I'm new to Spark, as many of these posts start... but the truth is I'm not that new. :)
Still, I'm facing this issue with broadcast variables.
When a variable is broadcast, each executor receives a copy of it. Later on, when this variable is referenced in the part of the code that runs on the executors (say, in a map or foreach), if the broadcast reference that was set in the driver is not passed along, the executor does not know what we are talking about, which I think is perfectly explained here.
My problem is that I am getting a NullPointerException even though I passed the broadcast reference to the executors.
class A {
  var broadcastVal: Broadcast[DataFrame] = _
  ...
  def method1() {
    broadcastVal = otherMethodWhichSendBroadcast
    doSomething(broadcastVal, others)
  }
}
class B {
  def doSomething(...) {
    foreachPartition { x => doSomethingElse(x, broadcastVal) }
  }
}
object C {
  def doSomethingElse(...) {
    broadcastVal.value.show() // --> Exception
  }
}
What am I missing?
Thanks in advance!
RDDs and DataFrames are already distributed structures, so there is no need to broadcast them as local variables. (The org.apache.spark.sql.functions.broadcast() function, which is used for broadcast joins, is not a local-variable broadcast.)
Syntax-wise the code compiles without error, but at runtime it throws a RuntimeException such as NullPointerException, which is entirely expected.
Example to explain the behavior:
package examples

import org.apache.log4j.Level
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.sql.{DataFrame, SparkSession}

object BroadCastCheck extends App {
  org.apache.log4j.Logger.getLogger("org").setLevel(Level.OFF)
  val spark = SparkSession.builder().appName(getClass.getName).master("local").getOrCreate()
  val sc = spark.sparkContext

  val df = spark.range(100).toDF()
  var broadcastVal: Broadcast[DataFrame] = sc.broadcast(df)

  val t1 = sc.parallelize(0 until 10)
  val t2 = sc.broadcast(2) // this is right: a broadcast variable can be a primitive, a map, or any local Scala collection
  val t3 = t1.filter(_ % t2.value == 0).persist() // this is the right way to use a broadcast value
  t3.foreach { x =>
    println(x)
    // broadcastVal.value.toDF().show // null pointer: wrong way
    // spark.range(100).toDF().show // null pointer: wrong way
  }
}
Result (if you uncomment broadcastVal.value.toDF().show or spark.range(100).toDF().show in the code above):
Caused by: java.lang.NullPointerException
  at org.apache.spark.sql.execution.SparkPlan.sparkContext(SparkPlan.scala:56)
  at org.apache.spark.sql.execution.WholeStageCodegenExec.metrics$lzycompute(WholeStageCodegenExec.scala:528)
  at org.apache.spark.sql.execution.WholeStageCodegenExec.metrics(WholeStageCodegenExec.scala:527)
Further reading on the difference between a broadcast variable and the broadcast() function: here...
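If the underlying goal is to look up values from a small DataFrame inside executor-side code, one workable pattern is to collect it on the driver and broadcast the resulting local collection instead of the DataFrame itself. A minimal sketch under that assumption (names are illustrative):
// broadcast a local Set derived from the DataFrame, not the DataFrame itself
// (assumes the lookup data is small enough to fit in driver memory)
val lookupDf = spark.range(100).toDF("id")
val lookupSet = lookupDf.collect().map(_.getLong(0)).toSet // plain Scala collection
val lookupBc = sc.broadcast(lookupSet)

val data = sc.parallelize(0L until 1000L)
data.foreach { x =>
  // safe on executors: only the broadcast local Set is referenced, no SparkContext
  if (lookupBc.value.contains(x)) println(x)
}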

Spotify API-Net ResumePlayback - Error With Code Example From Documentation

The documentation at SpotifyAPI-NET for ResumePlayback
gives the following example:
ErrorResponse error = _spotify.ResumePlayback(uris: new List<string> { "spotify:track:4iV5W9uYEdYUVa79Axb7Rh" });
When I try that code in C#, I get the following compile error, which prevents me from building:
Error CS0121 The call is ambiguous between the following methods or properties: 'SpotifyWebAPI.ResumePlayback(string, string, List<string>, int?)' and 'SpotifyWebAPI.ResumePlayback(string, string, List<string>, string)'
Can anyone tell me what is wrong with this?
Also, what is the simplest way to resume the existing player at the point where it was paused?
Edit
@rene answered the first part of my question.
In regard to the second part, how to resume the existing player at the point where it was paused, I got the answer through the library's Github site, it's simply:
_spotify.ResumePlayback(offset: "")
The ResumePlayback method has two overloads that take these parameters:
ErrorResponse ResumePlayback(string deviceId = "",
                             string contextUri = "",
                             List<string> uris = null,
                             int? offset = null)
and
ErrorResponse ResumePlayback(string deviceId = "",
                             string contextUri = "",
                             List<string> uris = null,
                             string offset = "")
When the compiler comes across this line
ErrorResponse error = _spotify.ResumePlayback(
    uris: new List<string> { "spotify:track:4iV5W9uYEdYUVa79Axb7Rh" });
it has to decide which ResumePlayback it is going to call, and it doesn't want to guess or roll a die.
It looks at which arguments are provided, and you only give it uris (the third parameter); the defaults apply for the other parameters. Both overloads are applicable with those defaults, so the compiler can't decide which method it should bind to, and it shows you an error.
Provide more arguments so the compiler can pick a unique overload.
ErrorResponse error = _spotify.ResumePlayback(
    uris: new List<string> { "spotify:track:4iV5W9uYEdYUVa79Axb7Rh" },
    offset: 0);
Adding that named parameter offset and setting it to the int value of 0 is enough for the compiler to pick this overload to bind to:
ResumePlayback(string deviceId, string contextUri, List<string> uris, int? offset)

Is this usage of a HashMap thread safe?

I happen to work on a project which uses the mapperdao library.
Occasionally it throws an exception which suggests that the library is not thread safe. I could not figure out why, though.
The issue happens in this code:
type CacheKey = (Class[_], LazyLoad)

private val classCache = new scala.collection.mutable.HashMap[CacheKey, (Class[_], Map[String, ColumnInfoRelationshipBase[_, Any, Any, Any]])]

def proxyFor[ID, T](constructed: T with Persisted, entity: EntityBase[ID, T], lazyLoad: LazyLoad, vm: ValuesMap): T with Persisted = {
  (...)
  val key = (clz, lazyLoad)
  // get cached proxy class or generate it
  val (proxyClz, methodToCI) = classCache.synchronized {
    classCache.get(key).getOrElse {
      val methods = lazyRelationships.map(ci =>
        ci.getterMethod.getOrElse(
          throw new IllegalStateException("please define getter method on entity %s . %s".format(entity.getClass.getName, ci.column))
        ).getterMethod
      ).toSet
      if (methods.isEmpty)
        throw new IllegalStateException("can't lazy load class that doesn't declare any getters for relationships. Entity: %s".format(clz))
      val proxyClz = createProxyClz(constructedClz, clz, methods)
      val methodToCI = lazyRelationships.map {
        ci =>
          (ci.getterMethod.get.getterMethod.getName, ci.asInstanceOf[ColumnInfoRelationshipBase[T, Any, Any, Any]])
      }.toMap
      val r = (proxyClz, methodToCI)
      classCache.put(key, r)
      r
    }
  }
  val instantiator = objenesis.getInstantiatorOf(proxyClz)
  val instance = instantiator.newInstance.asInstanceOf[DeclaredIds[ID] with T with MethodImplementation[T with Persisted]]
  (...)
}
instantiator.newInstance throws java.lang.NoClassDefFoundError for a class which is supposed to have been dynamically generated, with its name put into the map.
This code seems thread safe to me, as every operation on the map is performed inside the synchronized block. I cannot figure out a scenario where the map returns a class that has not been generated and compiled yet. Am I missing something?
One other explanation is that the class is compiled but not visible to the current class loader. I don't know how that could happen, though, or why it happens only occasionally.
UPDATE:
The stack trace looks like:
java.lang.NoClassDefFoundError: Could not initialize class com.mypackage.MyClass_$2
  at sun.reflect.GeneratedSerializationConstructorAccessor57.newInstance(Unknown Source)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator.newInstance(SunReflectionFactoryInstantiator.java:56)
  at com.googlecode.mapperdao.lazyload.LazyLoadManager.proxyFor(LazyLoadManager.scala:63)
  at com.googlecode.mapperdao.jdbc.impl.MapperDaoImpl.lazyLoadEntity(MapperDaoImpl.scala:338)
  at com.googlecode.mapperdao.jdbc.impl.MapperDaoImpl.$anonfun$toEntities$5(MapperDaoImpl.scala:301)
  at com.googlecode.mapperdao.internal.EntityMap.$anonfun$get$1(EntityMap.scala:46)
UPDATE 2: I finally found the root cause of my issue. It is indeed a concurrency problem, but not in the code I pasted; it's in the MImpl class.
What is happening is that the dynamically generated class is compiled correctly, but its initialization occasionally fails due to a concurrency issue in the MImpl class. The next time the code tries to instantiate the class, the JVM throws NoClassDefFoundError.
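That last symptom is standard JVM behavior and can be reproduced in isolation: when a class's static initialization throws, the first use fails with ExceptionInInitializerError (carrying the real cause), and every later use of the same class fails with NoClassDefFoundError: Could not initialize class .... A self-contained Scala sketch (illustrative only, not mapperdao code):
object InitFailureDemo extends App {
  object Flaky {
    // simulate an initializer that fails, e.g. because of a race on first use
    if (System.currentTimeMillis() > 0) throw new RuntimeException("boom in initializer")
    val value: Int = 42
  }

  // first access: ExceptionInInitializerError, with the real cause attached
  try println(Flaky.value) catch {
    case e: ExceptionInInitializerError => println(s"first access: $e")
  }

  // any later access: NoClassDefFoundError "Could not initialize class ...Flaky$"
  try println(Flaky.value) catch {
    case e: NoClassDefFoundError => println(s"second access: $e")
  }
}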

Trivial deserialization failing with YamlDotNet

What can possibly go wrong with this:
public void Main()
{
    var input = new StringReader(Document);
    var deserializer = new Deserializer(namingConvention: new CamelCaseNamingConvention());
    var p = deserializer.Deserialize<Person>(input);
    Console.WriteLine(p.Name);
}

public class Person
{
    public string Name { get; set; }
}

private const string Document = @"Name: Peter";
A serialization exception is thrown:
Property 'Name' not found on type 'YamlDotNet.Samples.DeserializeObjectGraph+Person'
The same happens if I first serialize a Person object using the Serializer.
While the online sample for deserialization works just fine, this trivial code does not. What am I missing? It must be a stupid little detail. (But it happened before with other data structures I tried.)
As it turns out, the problem is with the namingConvention parameter: if I don't set it to an instance of CamelCaseNamingConvention, all is fine.
Unfortunately the "canonical" example (https://dotnetfiddle.net/HD2JXM) uses it and thus suggests it is important.
For whatever reason, CamelCaseNamingConvention converts the property names of the class to camelCase (i.e. 'Name' becomes 'name'). Since the YAML key is 'Name' and not 'name', the deserialization fails. The online example uses lower-case keys, which is why it works.
I had the same problem....
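In other words, the fix is to make the YAML keys and the naming convention agree. A short sketch of both options, assuming the same old-style Deserializer constructor the question uses (newer YamlDotNet versions configure this through DeserializerBuilder instead):
// Option 1: drop the naming convention, so the YAML key 'Name' matches Person.Name
var plain = new Deserializer();
var p1 = plain.Deserialize<Person>(new StringReader("Name: Peter"));

// Option 2: keep CamelCaseNamingConvention, but use camelCase keys in the YAML
var camel = new Deserializer(namingConvention: new CamelCaseNamingConvention());
var p2 = camel.Deserialize<Person>(new StringReader("name: Peter"));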

System.Linq.Dynamic .Select("new ...") does not appear to be thread safe

I grabbed System.Linq.Dynamic.DynamicQueryable from here:
http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx
The issue that I am running into is in code that looks like this:
var results = dataContext.GetTable<MyClass>().Select("new (MyClassID, Name, Description)").Take(5);
It appears that if that line of code is executed by multiple threads near simultaneously, Microsoft's dynamic Linq code crashes in their ClassFactory.GetDynamicClass() method, which looks like this:
public Type GetDynamicClass(IEnumerable<DynamicProperty> properties)
{
    rwLock.AcquireReaderLock(Timeout.Infinite);
    try
    {
        Signature signature = new Signature(properties);
        Type type;
        if (!classes.TryGetValue(signature, out type))
        {
            type = CreateDynamicClass(signature.properties);
            classes.Add(signature, type); // <-- crashes over here!
        }
        return type;
    }
    finally
    {
        rwLock.ReleaseReaderLock();
    }
}
The crash is a simple dictionary error: "An item with the same key has already been added."
In Microsoft's code, the rwLock variable is a ReaderWriterLock, but holding only a reader lock does nothing to stop multiple threads from getting inside the classes.TryGetValue() if statement at the same time, so clearly the subsequent Add can fail.
I can replicate this error pretty easily with any code that creates two or more threads that execute the Select("new") statement.
Anyway, I'm wondering if anyone else has run into this issue, and whether there are fixes or workarounds I can implement.
Thanks.
I did the following (requires .NET 4 or later to use System.Collections.Concurrent):
changed the classes field to a ConcurrentDictionary<Signature, Type>,
removed the ReaderWriterLock rwLock field and all the code referring to it,
updated GetDynamicClass to:
public Type GetDynamicClass(IEnumerable<DynamicProperty> properties) {
    var signature = new Signature(properties);
    return classes.GetOrAdd(signature, sig => CreateDynamicClass(sig.properties));
}
removed the classCount field and updated CreateDynamicClass to use classes.Count instead:
Type CreateDynamicClass(DynamicProperty[] properties) {
    string typeName = "DynamicClass" + Guid.NewGuid().ToString("N");
    ...
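One caveat with this approach: ConcurrentDictionary.GetOrAdd may invoke the value factory more than once under contention (only one result is stored), so CreateDynamicClass can occasionally run twice for the same signature. If that is a concern, the usual remedy is to cache Lazy<Type> values so the expensive generation runs at most once per key; a sketch, assuming classes is declared as ConcurrentDictionary<Signature, Lazy<Type>>:
public Type GetDynamicClass(IEnumerable<DynamicProperty> properties) {
    var signature = new Signature(properties);
    // the Lazy wrapper guarantees CreateDynamicClass executes at most once per key
    return classes.GetOrAdd(signature,
        sig => new Lazy<Type>(() => CreateDynamicClass(sig.properties))).Value;
}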
