Doctrine2 DQL: query by a property of a serialized object within an entity

I have an entity with an 'object' type column. I want to be able to retrieve the entity by a property (say, id) of that object. For example, the query would look something like this:
$em->createQuery('SELECT e FROM Entity_Class e SOME_MAGIC e.object o WHERE o.id = ?1');
The question is: is there *SOME_MAGIC* in DQL?

This is not possible: an object type column is serialized into a CLOB field using serialize(), so there is no way to query its sub-properties.
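For context, serialize() flattens the object into a plain string before it is stored. For a stdClass with an id property of 123, for example, the stored value looks like this:
O:8:"stdClass":1:{s:2:"id";i:123;}
Any query against that column can only treat it as opaque text, which is what motivates the LIKE workaround below.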

A possible (though not optimal) workaround is to use a LIKE expression:
// match against a fragment of the serialized text, e.g. '%s:2:"id";i:123;%'
$qb->add($qb->expr()->like("e.column", $qb->expr()->literal("%text_to_search%")));
This works because in Doctrine 2, depending on the RDBMS, objects are persisted in a LONGTEXT column rather than a CLOB or BLOB.


Is it possible for CQL to parse a JSON object to insert data?

From what I've seen so far, it seems impossible with Cassandra. But I thought I'd give it a shot:
How can I select the value of a JSON property, parsed from a JSON object string, and use it as part of an update/insert statement in Cassandra?
For example, I'm given the JSON object:
{
  "id": 123,
  "some_string": "hello there",
  "mytimestamp": "2019-09-02T22:02:24.355Z"
}
And this is the table definition:
CREATE TABLE IF NOT EXISTS myspace.mytable (
    id text,
    data blob,
    PRIMARY KEY (id)
);
The thing to know at this point is that, for a given reason, the data field has to be set to the JSON string itself. In other words, there is no 1:1 mapping between the given JSON and the table columns; the data field contains the whole JSON object as a kind of blob value.
Is it possible to parse the timestamp value of the given JSON object as part of an insert statement?
Pseudocode example of what I mean, which obviously doesn't work ($myJson is a placeholder for the JSON object string above):
INSERT INTO myspace.mytable (id, data)
VALUES (123, $myJson)
USING timestamp toTimeStamp($myJson.mytimestamp)
The quick answer is no, it's not possible to do that with CQL.
The norm is to parse the JSON object within your application, extract the corresponding values, and construct the CQL statement from them.
As a side note, I would discourage using the CQL blob type due to possible performance issues should the blob size exceed 1 MB. If it's JSON, consider storing it as the CQL text type instead. Cheers!
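For illustration, a minimal sketch of that client-side approach with the Node.js cassandra-driver, assuming the table from the question (the contact point and data center names are placeholders):
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['127.0.0.1'], localDataCenter: 'datacenter1' });

const myJson = '{"id":123,"some_string":"hello there","mytimestamp":"2019-09-02T22:02:24.355Z"}';
const parsed = JSON.parse(myJson);

// USING TIMESTAMP expects microseconds since the epoch, so convert the ISO 8601 string client-side
const writetimeMicros = Date.parse(parsed.mytimestamp) * 1000;

client.execute(
  'INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?',
  [String(parsed.id), Buffer.from(myJson, 'utf8'), writetimeMicros],
  { prepare: true }
);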
Worth mentioning that CQL can do a limited amount of JSON parsing on its own, albeit not as detailed as what you're asking for here (e.g., USING timestamp).
But something like this works:
> CREATE TABLE myjsontable (
... id TEXT,
... some_string TEXT,
... PRIMARY KEY (id));
> INSERT INTO myjsontable JSON '{"id":"123","some_string":"hello there"}';
> SELECT * FROM myjsontable WHERE id='123';
id | some_string
-----+-------------
123 | hello there
(1 rows)
In your case you'd either have to redesign the table or the JSON payload so that they match. But as Erick and Cédrick have mentioned, the USING timestamp part would have to happen client-side.
What you detailed is doable with Cassandra.
Timestamp
To insert a timestamp in a query, it should be formatted as an ISO 8601 string; sample examples can be found in the documentation. In your code, you may have to convert the incoming value to the expected type and format.
Blob
A blob is meant to store binary data, so a string cannot be dropped into it ad hoc in a CQL query. (You could use the TEXT type instead if you are willing to base64-encode the value.)
When you need to insert binary data, you also need to provide the proper type. For instance, if you are working with JavaScript, you need to provide a Buffer, as described in the documentation. Then, when you execute your query, you externalize your parameters:
const sampleId = '123'; // the id column is TEXT in the table definition above
const sampleData = Buffer.from('hello world', 'utf8');
// USING TIMESTAMP takes microseconds since the epoch, not a Date
const sampleTimestampMicros = Date.now() * 1000;
client.execute('INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?', [sampleId, sampleData, sampleTimestampMicros], { prepare: true });

What are the returned data types from a Knex select() statement?

Hi everyone,
I am currently using Knex.js for a project, and a question arose when I made a knex('table').select() call.
What are the returned types from the query? In particular, if I have a datetime column in my table, what is the returned value for this field?
I believe the query will return a value of type string for this column. But is that the case for any database (I use SQLite3)? Is it possible that the query returns a Date value?
EXAMPLE:
The user table has this schema:
knex.schema.createTable('user', function (table) {
  table.increments('id');
  table.string('username', 256).notNullable().unique();
  table.timestamps(true, true);
})
Since I use SQLite3, table.timestamps(true, true); produces two datetime columns: created_at & updated_at.
When I make a query knex('user').select(), it returns an array of objects with the attributes: id, username, created_at, updated_at.
id is of type number
username is of type string
what will be the types of created_at & updated_at?
Will they always be of string type? If I use another database like PostgreSQL, these columns will have the timestamptz SQL type. Will Knex also return a string in that case?
This is not, in fact, something that Knex is responsible for, but rather the underlying database library. So if you're using SQLite, it would be sqlite3; if you're using Postgres, pg is responsible, and you can find more documentation in that library's docs. Broadly, most libraries take the approach that types which have a direct JavaScript equivalent (booleans, strings, null, integers, etc.) are returned as those types; anything else is converted to a string.
Knex's job is to construct the SQL that the other libraries use to talk to the database, and receives the response that they return.
As far as I can tell, each returned row will be an object of strings or numbers.
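To check this empirically, you can inspect a returned row. A minimal sketch, assuming the user table above and a local SQLite file (the file name is a placeholder); under sqlite3 the timestamp columns come back as plain strings (or numbers), not Date objects:
const knex = require('knex')({
  client: 'sqlite3',
  connection: { filename: './dev.sqlite3' },
  useNullAsDefault: true,
});

knex('user').select().then(rows => {
  const row = rows[0];
  console.log(typeof row.id);         // 'number'
  console.log(typeof row.username);   // 'string'
  console.log(typeof row.created_at); // 'string' under sqlite3, not a Date
});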

Convert single UDT object to list of UDT in Cassandra table

I have two user-defined types in Cassandra. The first one uses the second as a frozen field inside it.
CREATE TYPE my_keyspace.test (
    testid text,
    testdate text,
    testdata frozen<testdata>
);
CREATE TYPE my_keyspace.testdata (
    subject text,
    metadata text
);
Now my requirement is to convert this single object to a list of UDTs. Something like this:
CREATE TYPE my_keyspace.test (
    testid text,
    testdate text,
    testdata list<frozen<testdata>>
);
Is it possible to change a single object to a list of objects in Cassandra? What options are available to update the schema?
The only way to do it is to add another field with the required type using ALTER TYPE, start using this new field, and migrate the existing data with some code. Cassandra doesn't allow you to change the type of existing fields, and you also can't drop a field from a UDT. So your type would end up something like this:
CREATE TYPE my_keyspace.test (
    testid text,
    testdate text,
    testdata frozen<testdata>,
    testdata_lst list<frozen<testdata>>
);
Existing data could be migrated into the list field, and the old field then set to null to free the space, as in the sketch below.
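For illustration, the schema change itself is a single statement (using the field name from the type above):
ALTER TYPE my_keyspace.test ADD testdata_lst list<frozen<testdata>>;
And a rough sketch of the data migration with the Node.js cassandra-driver, assuming a hypothetical table my_keyspace.mytable (id text PRIMARY KEY, t frozen<test>) that uses the UDT; the table and column names here are made up for illustration:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['127.0.0.1'], localDataCenter: 'datacenter1' });

async function migrate() {
  // page through all rows and copy the single UDT value into the new list field
  for await (const row of client.stream('SELECT id, t FROM my_keyspace.mytable')) {
    if (row.t && row.t.testdata) {
      const updated = { ...row.t, testdata_lst: [row.t.testdata], testdata: null };
      await client.execute(
        'UPDATE my_keyspace.mytable SET t = ? WHERE id = ?',
        [updated, row.id],
        { prepare: true }
      );
    }
  }
}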

Error binding OffsetDateTime [operator does not exist: timestamp with time zone <= character varying]

We are trying to execute DML which deletes records based on a ZonedDateTime. We are using the following code but are running into an error.
dsl.execute ("delete from fieldhistory where createddate <= ? and object = ?", beforeDate.toOffsetDateTime(), objName)
where beforeDate is a ZonedDateTime and objName is a String.
We are getting the following error from Postgres:
org.jooq.exception.DataAccessException: SQL [delete from fieldhistory where createddate <= ? and object = ?]; ERROR: operator does not exist: timestamp with time zone <= character varying
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Position: 56
at org.jooq_3.13.1.POSTGRES.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2751)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:755)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:385)
at org.jooq.impl.DefaultDSLContext.execute(DefaultDSLContext.java:1144)
The question is, how do we bind a datetime value in jOOQ?
For historic reasons, jOOQ binds all JSR-310 times as strings, not as the relevant object type. This is because until recently, JDBC drivers did not support the JSR-310 types natively, and as such, using a string was not a bad default.
Unfortunately, this leads to type ambiguities, which you would not have if either:
jOOQ didn't bind a string
you were using the code generator and thus type safe DSL API methods
As a workaround, you can do a few things, including:
Casting your bind variable explicitly
dsl.execute("delete from fieldhistory where createddate <= ?::timestamptz and object = ?",
beforeDate.toOffsetDateTime(),
objName)
Using the DSL API
dsl.deleteFrom(FIELDHISTORY)
.where(FIELDHISTORY.CREATEDDATE.le(beforeDate.toOffsetDateTime()))
.and(FIELDHISTORY.OBJECT.eq(objName))
.execute();
By writing your own binding
You can write your own data type binding and attach that to generated code, or to your plain SQL query, in case of which you would be in control of how the bind variable is sent to the JDBC driver. See:
https://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings/
For example:
// MyBinding is your custom Binding implementation, as described in the manual page above
DataType<OffsetDateTime> myType = SQLDataType.OFFSETDATETIME
    .asConvertedDataType(new MyBinding());
dsl.execute("delete from fieldhistory where createddate <= {0} and object = {1}",
    val(beforeDate.toOffsetDateTime(), myType),
    val(objName));
There will be a fix in the future for this, so this won't be necessary anymore: https://github.com/jOOQ/jOOQ/issues/9902

UPDATE prepared statement with Object

I have an Object that maps column names to values. The columns to be updated are not known beforehand and are decided at run-time.
e.g. map = {col1: "value1", col2: "value2"}.
I want to execute an UPDATE query, updating a table with those columns to the corresponding values. Can I do the following? If not, is there an elegant way of doing it without building the query manually?
db.none('UPDATE mytable SET $1 WHERE id = 99', map)
is there an elegant way of doing it without building the query manually?
Yes, there is, by using the helpers for SQL generation.
You can pre-declare a static object like this:
const cs = new pgp.helpers.ColumnSet(['col1', 'col2'], {table: 'mytable'});
And then use it like this, via helpers.update:
const sql = pgp.helpers.update(data, cs) + ' WHERE id = 99'; // append your update condition
// and then execute it:
db.none(sql).then(data => {}).catch(error => {})
This approach will work with both a single object and an array of objects, and you will just append the update condition accordingly.
See also: PostgreSQL multi-row updates in Node.js
What if the column names are not known beforehand?
For that see: Dynamic named parameters in pg-promise, and note that a proper answer would depend on how you intend to cast types of such columns.
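For a rough illustration of that dynamic case, the ColumnSet can be derived from the object's keys at run-time. A minimal sketch, assuming the mytable/id layout from the question (connection details are placeholders):
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb');

function dynamicUpdate(map, id) {
  // build the column set from whatever keys the object happens to have
  const cs = new pgp.helpers.ColumnSet(Object.keys(map), { table: 'mytable' });
  const sql = pgp.helpers.update(map, cs) + ' WHERE id = ' + pgp.as.number(id);
  return db.none(sql);
}

dynamicUpdate({ col1: 'value1', col2: 'value2' }, 99);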
Something like this:
map = {col1: "value1", col2: "value2", id: "existingId"}
db.none("UPDATE mytable SET col1=${col1}, col2=${col2} where id=${id}", map)
