Generate SQLite database schema from Java code - jOOQ

Is there any way to generate an SQLite database schema from Java code using jOOQ?

You can generate DDL statements like CREATE TABLE .. or ALTER TABLE .. ADD CONSTRAINT .. using the DSLContext.ddl() API, for instance:
// SCHEMA is the generated schema that contains a reference to all generated tables
Queries ddl =
    DSL.using(configuration)
       .ddl(SCHEMA);

for (Query query : ddl.queries()) {
    System.out.println(query);
}
This is documented here:
https://www.jooq.org/doc/latest/manual/sql-building/ddl-statements/generating-ddl/
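Since the question is about SQLite specifically, here is a minimal sketch of how the generated DDL could be rendered and executed against an SQLite connection. It assumes jOOQ code generation has already produced the SCHEMA object referenced above; the JDBC URL is only an example:

import java.sql.Connection;
import java.sql.DriverManager;

import org.jooq.DSLContext;
import org.jooq.Queries;
import org.jooq.Query;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public class ExportSchemaToSqlite {
    public static void main(String[] args) throws Exception {
        // Example JDBC URL; point it at your SQLite database file
        try (Connection connection = DriverManager.getConnection("jdbc:sqlite:my.db")) {
            DSLContext ctx = DSL.using(connection, SQLDialect.SQLITE);

            // SCHEMA is the schema class emitted by jOOQ's code generator
            Queries ddl = ctx.ddl(SCHEMA);

            for (Query query : ddl.queries()) {
                System.out.println(query); // log the CREATE TABLE / ALTER TABLE statement
                ctx.execute(query);        // and run it to create the tables in SQLite
            }
        }
    }
}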

Related

How do I write this query without using a raw query in Sequelize?

I would like to do a bulk update in Sequelize. Unfortunately, it seems like Sequelize does not support bulk updates. I am using sequelize-typescript, if that helps, and PostgreSQL 14.
My query in raw SQL looks like this:
UPDATE feed_items
SET tags = val.tags
FROM (
    VALUES ('ddab8ce7-afa3-824f-7b65-edfb53a71764'::uuid, ARRAY[]::VARCHAR(255)[]),
           ('ece9f2fc-2a09-4a95-16ce-07293b0a14d2'::uuid, ARRAY[]::VARCHAR(255)[])
) AS val(id, tags)
WHERE feed_items.id = val.id
I would like to generate this query from a given array of string and array values. The tags column is implemented as a string array in my table.
Is there a way to generate the above query without using a raw query?
Or an SQL-injection-safe way of generating the above query?
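This is not a Sequelize answer, but as far as injection safety goes, the same statement can be built with one bind parameter per value, so nothing from the input ever lands in the SQL text. A minimal JDBC sketch, purely for illustration (feed_items and the column names come from the question; everything else is hypothetical, and the list of updates is assumed to be non-empty):

import java.sql.Array;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BulkTagUpdate {

    // Hypothetical value object: one feed item id plus its new tags
    record TagUpdate(String id, List<String> tags) {}

    // Builds a single UPDATE ... FROM (VALUES ...) statement with one placeholder
    // tuple per row; ids and tag arrays are bound as parameters, never concatenated
    static void updateTags(Connection conn, List<TagUpdate> updates) throws Exception {
        StringBuilder sql = new StringBuilder(
            "UPDATE feed_items SET tags = val.tags FROM (VALUES ");
        for (int i = 0; i < updates.size(); i++) {
            sql.append(i == 0 ? "" : ", ").append("(?::uuid, ?::varchar(255)[])");
        }
        sql.append(") AS val(id, tags) WHERE feed_items.id = val.id");

        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            int p = 1;
            for (TagUpdate u : updates) {
                ps.setString(p++, u.id());
                Array tags = conn.createArrayOf("varchar", u.tags().toArray());
                ps.setArray(p++, tags);
            }
            ps.executeUpdate();
        }
    }
}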

jOOQ MySQL query

select question.*,
question_option.id
from question
left join question_option on question_option.question_id = question.id;
How do I write question.* in jOOQ instead of specifying all the entity variables?
You can use the fields() or asterisk() methods on the jOOQ generated objects, which extend TableImpl.
For example, if you just want to query the fields of a record:
dsl.select(QUESTION.fields()).from...
If you need fields from the join too:
dsl.select(QUESTION.asterisk(), QUESTION_OPTION.ID).from...
I assume that you generate the metamodel, so you can use
dsl.select(QUESTION.fields()).select(QUESTION_OPTION.ID)...
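Putting it together, a minimal sketch of the original query with the generated metamodel (assuming the usual generated names QUESTION, QUESTION_OPTION and QUESTION_OPTION.QUESTION_ID) might look like this; the second select() call simply appends the extra column to the projection:

// question.* plus question_option.id, left-joined as in the original SQL
Result<Record> result = dsl
    .select(QUESTION.fields())
    .select(QUESTION_OPTION.ID)
    .from(QUESTION)
    .leftJoin(QUESTION_OPTION)
    .on(QUESTION_OPTION.QUESTION_ID.eq(QUESTION.ID))
    .fetch();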

How to fetch raw SQL insert/update from SQLAlchemy ORM

I am trying to dump my PostgreSQL database, created via SQLAlchemy, using a Python script. I have successfully created the database, and all the data from web parsing is being inserted via the ORM models I have mapped. But when I try to take a dump of all my insert queries using this:
tab = Table(table.__tablename__, MetaData())
x = tab.insert().compile(
    dialect=postgresql.dialect(),
    compile_kwargs={"literal_binds": True},
)
logging.info(f"{x}")
I am adding values using the ORM like this:
for value in vertex_type_values:
    data = table(
        Type=value["type"],
        Name=value["name"],
        SizeX=value["size_x"],
        SizeY=value["size_y"],
        SizeZ=value["size_z"],
    )
    session.add(data)
    session.commit()
Here table is the model which I have designed and imported from my local library, and vertex_type_values is the data which I have extracted and yielded in my script.
I am getting the output as:
INSERT INTO <tablename> DEFAULT VALUES
So my question is: how do I get rid of DEFAULT VALUES and get the actual values, so that I can directly use the insert commands if my DB ever crashes? I need to know the raw SQL for the insert commands.

Waterline - Postgres - DataTypes

I am having difficulties with Waterline models and creating the Postgres tables related to those models.
No matter what I do to create a varchar(n) in the table through a model, it converts the attribute to text. And bigint is also being converted to integer!
Should I change the ORM?
Is there a way to do that?
You can take a more pleasant approach: use Waterline for the "RUD" in "CRUD", but not for the "C" - create! This is because Waterline can be very bad at creating intermediary tables, primary keys (composite keys), etc. So what I do today is this:
Compose a full .sql file archive to create indexes and tables.
Create the database once. (Alter if needed).
Declare all the tables as models. Just insert the type, primary key (if it is a single one) and lifecycle callbacks.
Make sure that config/models.js is set to migrate : safe.
Conclusion: I can insert, read and delete rows with Waterline, but I don't trust it (performance-wise) to create my tables. Sequelize, on the other hand, is a much more mature ORM and can be used if you need it. For me, the hybrid Waterline + SQL approach is sufficient.
EDIT: My models don't have any aggregation (like my_pets: { model: pet }), just row names and types, as simple as possible.
Sails supported data types:
string, text, integer, float, date, datetime, boolean, binary, array, json, mediumtext, longtext, objectid
If you need to specify an exact length, such as varchar(n), you either have to use a supported data type as shown above, or use the query option Sails provides.
The Model.query() method lets you perform any kind of query you want:
var queryString = 'CREATE TABLE IF NOT EXISTS sailsusers.test (id INT NOT NULL, name VARCHAR(45) NULL, PRIMARY KEY (id))';
Test.query(queryString, function(err, result) {
    if (err)
        return console.log(err);
    console.log(result);
    res.ok();
});

Storing schema-less complex json object in Cassandra

I have a schema-less JSON object that I wish to store in Cassandra DB using spring-cassandra. I learned that Cassandra supports the Map type, but Cassandra doesn't accept Map<String, Object> as a data model.
I need to query on the fields of the JSON, so storing it as a blob is out of the question. Is there any way I can do this?
PS: I've looked at Storing JSON object in CASSANDRA; the answer didn't seem applicable to my use case, as my JSON could be very complex.
Did you look at UDTs (user-defined types)?
You can define a UDT like this:
CREATE TYPE my_json (
    property1 text,
    property2 int,
    property3 map<text, text>,
    property4 map<int, another_json_type>,
    ...
)
And then in Java use Map<String, UserType>
Note: UserType comes from the Java driver: https://github.com/datastax/java-driver/blob/2.1/driver-core/src/main/java/com/datastax/driver/core/UserType.java
You cannot create a user type in Java; you can only get it from the metadata of your table. See this: https://github.com/datastax/java-driver/blob/3.0/driver-core/src/test/java/com/datastax/driver/core/UserTypesTest.java#L62-L81
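For completeness, here is a minimal sketch with the DataStax Java driver 3.x, looking the type up from the keyspace metadata and binding a value of it. The keyspace, table and column names are made up for the example:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.UDTValue;
import com.datastax.driver.core.UserType;
import java.util.UUID;

public class UdtExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // The UDT cannot be instantiated directly; fetch it from the keyspace metadata
            UserType myJsonType = cluster.getMetadata()
                                         .getKeyspace("my_keyspace")
                                         .getUserType("my_json");

            UDTValue value = myJsonType.newValue()
                                       .setString("property1", "hello")
                                       .setInt("property2", 42);

            // Bind the UDT value like any other column value
            session.execute("INSERT INTO my_keyspace.documents (id, payload) VALUES (?, ?)",
                    UUID.randomUUID(), value);
        }
    }
}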
1) One solution from me is to integrate Solr search and index this table first.
2) Then write a Solr analyzer to parse the JSON and put it under various fields in Solr while indexing.
3) The next step is to use a Solr-supported query like select * from table where solr_query = '{search expression syntax}'
