Slick 3.0.0 documentation Error

I'm following the Slick documentation that can be found at the following location:
http://slick.typesafe.com/doc/3.0.0/gettingstarted.html
In it I'm looking at the "Populating the Database" section. I'm not able to find the schema method defined on TableQuery, hence I'm not able to populate my H2 database with initial values!
Is there something wrong with the documentation? It is confusing the hell out of me! Please help!

Here is how to do it:
import com.typesafe.config.ConfigFactory
import scala.collection.JavaConverters._

val h2DbConfig = Map(
  "default.driver" -> "slick.driver.H2Driver$",
  "default.db.driver" -> "org.h2.Driver",
  "default.db.url" -> "jdbc:h2:yourDbName;DATABASE_TO_UPPER=false;DB_CLOSE_DELAY=-1"
)
// ConfigFactory.parseMap expects a java.util.Map, hence the .asJava conversion
val config = ConfigFactory.parseMap(h2DbConfig.asJava) // gives you a Typesafe Config
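From that config you can also derive the db handle used below; a minimal sketch, assuming Slick 3.0's DatabaseConfig and the "default" path from h2DbConfig:

import slick.backend.DatabaseConfig
import slick.driver.JdbcProfile

val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("default", config)
val db = dbConfig.db // the Database handle used by h2SchemaSetUp below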
Once you have the Typesafe Config object describing the H2 database, you create the tables as below. (The schema method the question asks about becomes available on a TableQuery through the driver's api import, e.g. import slick.driver.H2Driver.api._.)
import scala.concurrent.Await
import scala.concurrent.duration._

private def h2SchemaSetUp: Unit = {
  // combine the DDL of all tables and create them in a single action
  val schema = slick.dbio.DBIO.seq(
    (Table1.tbl1.schema ++ Table2.tbl2.schema).create
  )
  Await.result(db.run(schema), 5.seconds)
}
You then insert values into the created schema as per Slick's documentation; a sketch follows below.
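A minimal sketch of that insert step, assuming hypothetical row case classes Table1Row and Table2Row matching the two tables above (the values are made up):

val populate = slick.dbio.DBIO.seq(
  Table1.tbl1 += Table1Row(1, "first"),            // insert a single row
  Table2.tbl2 ++= Seq(Table2Row(2), Table2Row(3))  // batch insert
)
Await.result(db.run(populate), 5.seconds)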

Related

Multiple-key index leads to a PSQLException with Slick and Postgres on a Lagom project

I have a Lagom application and use PostgreSQL with Lagom JDBC for the read side.
The tables are created and work fine. But after a restart, with an already created table that has a multiple-column index, I always get an error:
org.postgresql.util.PSQLException: ERROR: relation "article_number_fulfiller_idx" already exists
My table looks like this:
class ArticleTable(tag: Tag) extends Table[ArticleTableData](tag, ArticleTable.TableName) {
  def entityId = column[UUID](ArticleTable.ColEntityId, O.PrimaryKey)
  def articleBaseNumber = column[String](ArticleTable.ColArticleBaseNumber)
  def articleSpecificationNumber = column[Option[String]](ArticleTable.ColArticleSpecificationNumber)
  def fulfillerVendorNumber = column[String](ArticleTable.ColFulfillerVendorNumber)
  def fulfillerName = column[String](ArticleTable.ColFulfillerName)
  def availability = column[String](ArticleTable.ColAvailability)
  def completeArticleNumber = column[String]("complete_article_number")
  def idxKey = index("article_number_fulfiller_idx", (completeArticleNumber, fulfillerVendorNumber), unique = true)
  def * = (entityId, articleBaseNumber, articleSpecificationNumber, fulfillerVendorNumber, fulfillerName, availability, completeArticleNumber) <> ((ArticleTableData.apply _).tupled, ArticleTableData.unapply)
}
And my build handler is here:
override def buildHandler(): ReadSideProcessor.ReadSideHandler[Article.Event] = readSide
  .builder[Article.Event](ArticleTable.TableName + "_offset")
  .setGlobalPrepare(table.schema.createIfNotExists)
  .setEventHandler[ArticleCreated](insert)
  .setEventHandler[DescriptionAdded](_ => DBIOAction.successful(Done))
  .setEventHandler[DescriptionRemoved](_ => DBIOAction.successful(Done))
  .build()
I updated my sbt dependencies to the latest versions. Instead of
lagomScaladslPersistenceJdbc
I now use
"com.lightbend.lagom" %% "lagom-scaladsl-persistence-jdbc" % "1.6.2",
"com.typesafe.slick" %% "slick" % "3.3.2"
The exception above is only ONE of the exceptions I get; there is one for every multiple-column index :(
On every restart, Lagom will try to create the tables and indexes again. To avoid these errors, do not force the creation of indexes and tables; use "create if not exists". If Slick does not allow you to do that, look at native queries (a sketch follows below).
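For example, a minimal sketch of the native-query route with Slick's plain SQL API (the table and column names are placeholders for whatever ArticleTable actually maps to; map the result to Done before handing it to setGlobalPrepare):

import slick.jdbc.PostgresProfile.api._

// plain SQL bypasses Slick's DDL builder, so Postgres' IF NOT EXISTS is available
val createIdx = sqlu"""
  CREATE UNIQUE INDEX IF NOT EXISTS article_number_fulfiller_idx
  ON articles (complete_article_number, fulfiller_vendor_number)
"""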
You are right about the Slick issue. The page describes that this is useful for dev and test environments. In this case, you can try:
- a "create if not exists" query, as @vladislav-kievski suggested;
- catching the exception with the appropriate error code, since some databases (SQL Server) do not support such queries (a sketch follows below).
I do not recommend using globalPrepare() for creating tables and indexes. The main difficulty with this approach is table changes: you need to think about versioning your database scripts (what will you do when an index or a column has to be added or removed?).
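A minimal sketch of the catch-the-exception variant, assuming Slick's asTry combinator, an implicit ExecutionContext, and Postgres, where SQLState 42P07 means "relation already exists" (adjust the code for your database):

import akka.Done
import scala.util.{Failure, Success}

val guardedCreate: DBIO[Done] = table.schema.create.asTry.flatMap {
  case Failure(e: java.sql.SQLException) if e.getSQLState == "42P07" =>
    DBIO.successful(Done) // the relation already exists: fine on restart
  case Failure(other) => DBIO.failed(other)
  case Success(_)     => DBIO.successful(Done)
}
// then: .setGlobalPrepare(guardedCreate)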

How to create an update statement where a UDT value needs to be updated using QueryBuilder

I have the following UDT type
CREATE TYPE tag_partitions(
year bigint,
month bigint);
and the following table
CREATE TABLE ${tableName} (
tag text,
partition_info set<FROZEN<tag_partitions>>,
PRIMARY KEY ((tag))
)
The table schema is mapped using the following model
case class TagPartitionsInfo(year:Long, month:Long)
case class TagPartitions(tag:String, partition_info:Set[TagPartitionsInfo])
I have written a function which should create an Update.IfExists query, but I don't know how to update the UDT value. I tried to use set but it isn't working.
def updateValues(tableName: String, model: TagPartitions, id: TagPartitionKeys): Update.IfExists = {
  val partitionInfoType: UserType = session.getCluster().getMetadata
    .getKeyspace("codingjedi").getUserType("tag_partitions")
  // create the UDT values
  // the logic below assumes that there is only one element in the set
  val partitionsInfoSet: Set[UDTValue] = model.partition_info.map((partitionInfo: TagPartitionsInfo) => {
    partitionInfoType.newValue()
      .setLong("year", partitionInfo.year)
      .setLong("month", partitionInfo.month)
  })
  println("partition info converted to UDTValue: " + partitionsInfoSet)
  QueryBuilder.update(tableName)
    .`with`(QueryBuilder.WHAT_TO_DO_HERE_TO_UPDATE_UDT("partition_info", partitionsInfoSet))
    .where(QueryBuilder.eq("tag", id.tag)).ifExists()
}
The mistake was that I was adding partitionsInfoSet to the table as a Scala Set; it needs to be a Java Set, converted using setAsJavaSet:
QueryBuilder.update(tableName)
  .`with`(QueryBuilder.set("partition_info", setAsJavaSet(partitionsInfoSet)))
  .where(QueryBuilder.eq("tag", id.tag))
  .ifExists()
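Note: setAsJavaSet comes from scala.collection.JavaConversions, which is deprecated in recent Scala versions; with scala.collection.JavaConverters the equivalent is partitionsInfoSet.asJava.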
Although it doesn't answer your exact question, wouldn't it be easier to use the Object Mapper for this? Something like this (I didn't modify it heavily to match your code):
@UDT(name = "scala_udt")
case class UdtCaseClass(id: Integer, @(Field @field)(name = "t") text: String) {
  def this() {
    this(0, "")
  }
}

@Table(name = "scala_test_udt")
case class TableObjectCaseClassWithUDT(@(PartitionKey @field) id: Integer,
                                       udts: java.util.Set[UdtCaseClass]) {
  def this() {
    this(0, new java.util.HashSet[UdtCaseClass]())
  }
}
and then just create a case class instance and call mapper.save on it. (Also note that you need to use Java collections unless you import the Scala codecs.)
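For completeness, a minimal sketch of the save step, assuming an open session from the DataStax Java driver 3.x:

import com.datastax.driver.mapping.MappingManager

val manager = new MappingManager(session)
val mapper = manager.mapper(classOf[TableObjectCaseClassWithUDT])
mapper.save(new TableObjectCaseClassWithUDT(1, new java.util.HashSet[UdtCaseClass]()))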
The primary reasons for using the Object Mapper are ease of use and better performance, because it uses prepared statements under the hood instead of built statements, which are much less efficient.
You can find more information about the Object Mapper with Scala in an article that I wrote recently.

How to extend Spark Catalyst optimizer with custom rules?

I want to use Catalyst rules to transform a star-schema (https://en.wikipedia.org/wiki/Star_schema) SQL query into a query against a denormalized star schema, where some fields from the dimension tables are represented in the fact table.
I tried to find extension points for adding my own rules to perform the transformation described above, but I didn't find any. So I have the following questions:
How can I add my own rules to the Catalyst optimizer?
Is there another solution for implementing the functionality described above?
Following @Ambling's advice, you can use sparkSession.experimental.extraStrategies to add your functionality to the SparkPlanner.
An example strategy that simply prints "Hello world!" to the console:
import org.apache.spark.sql.Strategy
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.SparkPlan

object MyStrategy extends Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = {
    println("Hello world!")
    Nil // return no physical plan, so planning falls through to the default strategies
  }
}
with an example run:
val spark = SparkSession.builder().master("local").getOrCreate()
spark.experimental.extraStrategies = Seq(MyStrategy)
val q = spark.catalog.listTables.filter(t => t.name == "five")
q.explain(true)
spark.stop()
You can find a working example project on a friend's GitHub: https://github.com/bartekkalinka/spark-custom-rule-executor
As a clue: in Spark 2.0 you can plug in both extraStrategies and extraOptimizations through SparkSession.experimental.
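Since the question asks about optimizer rules specifically, here is a minimal sketch of the extraOptimizations side (the rule is a do-nothing placeholder; a real star-schema denormalization would pattern-match on the join nodes it wants to rewrite):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

object MyOptimizationRule extends Rule[LogicalPlan] {
  def apply(plan: LogicalPlan): LogicalPlan =
    plan transformDown {
      // a real rule would match and rewrite specific subplans here
      case other => other
    }
}

val spark = SparkSession.builder().master("local").getOrCreate()
spark.experimental.extraOptimizations = Seq(MyOptimizationRule)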

How to compare only the date in LINQ to Entities with the Dynamic Query API?

I have downloaded the Microsoft Dynamic Query API and am using a dynamic query to filter data by date. I have written the following query:
Entities db = new Entities();
DateTime d = new DateTime(2014, 1, 17);
var lst = db.MSTPriorityS.Where("ModifiedOn == @0", d.Date.ToString()).ToList();
The result count I am getting is 0, while there is data in the database table.
Please advise: what am I doing wrong?
I think the problem is probably where you cast the DateTime to a String.
You can build your query step by step, and type-safe: follow 'Creating dynamic queries with entity framework'.
Alternatively, you can use a lambda expression:
var lst = db.MSTPriorityS.Where(u => u.ModifiedOn == System.Data.Objects.EntityFunctions.TruncateTime(d))

How to read the schema of a keyspace using Java?

I want to read the schema of a keyspace in Cassandra.
I know that in cassandra-cli we can execute the following command to get the schema:
show schema keyspace1;
But I want to read the schema from a remote machine using Java.
How can I solve this? Please help!
I solved this one by using the Thrift client:
KsDef keyspaceDefinition = _client.describe_keyspace(_keyspace);
List<CfDef> columnDefinition = keyspaceDefinition.getCf_defs();
Here the keyspace definition contains the whole schema details, so from that KsDef we can read whatever we want. In my case I want the metadata, so I am reading the column metadata from the above column definitions as shown below.
// walk every column family in the keyspace
for (int i = 0; i < columnDefinition.size(); i++) {
    List<ColumnDef> columnMetadata = columnDefinition.get(i).getColumn_metadata();
    for (int j = 0; j < columnMetadata.size(); j++) {
        // record the column family, column name and validation class side by side
        columnfamilyNames.add(columnDefinition.get(i).getName());
        columnNames.add(new String(columnMetadata.get(j).getName()));
        validationClasses.add(columnMetadata.get(j).getValidation_class());
    }
}
Here columnfamilyNames, columnNames, and validationClasses are ArrayLists.
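Thrift has since been deprecated in Cassandra; as an alternative, here is a minimal sketch (in Scala) of the same schema read through the CQL-native DataStax Java driver's metadata API (the contact point and keyspace name are placeholders):

import com.datastax.driver.core.Cluster
import scala.collection.JavaConverters._

val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
try {
  // KeyspaceMetadata plays the role of KsDef: it exposes tables, columns and types
  val ks = cluster.getMetadata.getKeyspace("keyspace1")
  for (table <- ks.getTables.asScala; col <- table.getColumns.asScala)
    println(s"${table.getName}\t${col.getName}\t${col.getType}")
} finally {
  cluster.close()
}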
