I've written a Prolog project and now I have to convert it to Logtalk. In my code I can assert new facts into another Prolog file with:
:- dynamic(student/5).
:- use_module(library(persistency)).
:- persistent(student(id, name, surname, mark, studies)).
:- initialization(db_attach('student_database.pl', [])).
add_student(ID, Name, Surname, Mark, Studies) :-
    with_mutex(student_db, assert_student(ID, Name, Surname, Mark, Studies)).
Now I want to do something similar in Logtalk, but with objects instead of facts. Of course I know how to create a new object (using create_object/4 with a protocol), but I don't know how to save it to a file as a database.
The protocol looks like:
:- protocol(student).

    :- public([
        id/1,
        name/1,
        surname/1,
        studies/1,
        marks/1
    ]).

:- end_protocol.
Can anyone help me with saving these objects?
Serializing dynamic objects can be complex in general, depending on the objects' dependencies, but your case seems simpler, as each object depends only on a single protocol and contains only facts.
When using a backend Prolog system that supports saved states (e.g. SICStus Prolog, SWI-Prolog, or YAP), a simple solution is to create a saved state. As there is no standard for saved states, this solution is necessarily non-portable.
When saved states are not possible or a portable solution is sought, we need to define a format for the saved data so that we can interpret it when loaded and restore the objects. Let's assume that we want to restore the objects as dynamic objects (as they were originally created using the create_object/4 predicate) and use a simple representation, data/1, for the saved state. We can define a generic serializer object as follows (not tested):
:- object(serializer).

    :- public([
        save/2,
        restore/1
    ]).

    save(Protocol, File) :-
        protocol_property(Protocol, public(Predicates)),
        open(File, write, Stream),
        write_canonical(Stream, protocol(Protocol)), write(Stream, '.\n'),
        forall(
            conforms_to_protocol(Object, Protocol),
            save_object(Object, Predicates, Stream)
        ),
        close(Stream).

    save_object(Object, Predicates, Stream) :-
        object_data(Predicates, Object, [], Data),
        write_canonical(Stream, data(Data)), write(Stream, '.\n').

    object_data([], _, Data, Data).
    object_data([Functor/Arity| Predicates], Object, Data0, Data) :-
        functor(Fact, Functor, Arity),
        findall(Fact, Object::Fact, Data1, Data0),
        object_data(Predicates, Object, Data1, Data).

    restore(File) :-
        open(File, read, Stream),
        read_term(Stream, Term, []),
        restore_object(Term, _, Stream),
        close(Stream).

    restore_object(end_of_file, _, _).
    restore_object(protocol(Protocol), Protocol, Stream) :-
        read_term(Stream, Term, []),
        restore_object(Term, Protocol, Stream).
    restore_object(data(Data), Protocol, Stream) :-
        create_object(_, [implements(Protocol)], [], Data),
        read_term(Stream, Term, []),
        restore_object(Term, Protocol, Stream).

:- end_object.
This is just a starting point, however. It can be improved in several ways, but those improvements mainly require more details about the particular serialization scenario.
Update
Added a serialization example to the Logtalk git version based on the code above: https://github.com/LogtalkDotOrg/logtalk3/tree/master/examples/serialization
I’ve built a dynamic query generator to create my desired queries based on many factors, however, in rare cases it acted weird. After a day on reading logs I found a situation that can be simplified in this:
db.users.find({att: 'a', att: 'b'})
What I expected is that MongoDB by default uses AND, so the above query's result should be an empty array. However, it's not!
But when I use AND explicitly, the result is an empty array:
db.users.find({$and: [{att: 'a'}, {att: 'b'}]})
In JavaScript, an object's keys must be unique; otherwise the value is replaced by the latest one (the mongodb shell is based on JS, so it follows this rule):
const t = {att: 'a', att: 'b'};
console.log(t); // { att: 'b' }
So in your case, your query is acting like this:
db.users.find({att: 'b'})
You have to handle this situation in your own code if you want the result to be empty in the mentioned condition.
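One way to handle it is to build the filter from an array of [field, value] pairs instead of an object literal, and fall back to the explicit $and form whenever a field repeats. A minimal sketch (buildFilter is a hypothetical helper, not part of any MongoDB driver API):

```javascript
// Sketch: build a MongoDB filter from [field, value] pairs, wrapping
// repeated fields in $and so no condition is silently dropped by
// JavaScript's "last key wins" object semantics.
// buildFilter is a hypothetical helper, not a MongoDB API.
function buildFilter(pairs) {
  const fields = pairs.map(([field]) => field);
  const hasDuplicate = new Set(fields).size !== fields.length;
  if (!hasDuplicate) {
    // No repeated fields: a plain object is safe.
    return Object.fromEntries(pairs);
  }
  // Repeated fields: keep each condition as its own clause under $and.
  return { $and: pairs.map(([field, value]) => ({ [field]: value })) };
}
```

With [['att', 'a'], ['att', 'b']] this yields the explicit $and form from above instead of the collapsed {att: 'b'}.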
I have the following query:
SELECT *
FROM table
WHERE (id, other_id, status)
IN (
(1, 'XYZ', 'OK'),
(2, 'ZXY', 'OK') -- , ...
);
Is it possible to construct this query in a type-safe manner using jOOQ, preferably without generating composite keys? Is it possible to do this using jOOQ 3.11?
My apologies, it seems my Google-fu was not up to par. The opposite of this question can be found here: Use JOOQ to do a delete specifying multiple columns in a "not in" clause
For completeness' sake, so that other Google searches might be more immediately helpful, the solution is:
// can be populated using DSL.row(...); for each entry
Collection<? extends Row3<Long, String, String>> values = ...
dslContext.selectFrom(TABLE)
.where(DSL.row(ID, OTHER_ID, STATUS).in(values))
.fetch();
Relevant jOOQ documentation: https://www.jooq.org/doc/3.14/manual/sql-building/conditional-expressions/in-predicate-degree-n/
Your own answer already shows how to do this with a 1:1 translation from SQL to jOOQ using the IN predicate for degrees > 1.
Starting from jOOQ 3.14, there is also the option of using the new <embeddablePrimaryKeys/> flag in the code generator, which produces embeddable types for all primary keys (and the foreign keys referencing them). This helps ensure you never forget a key column in these queries, which is especially useful for joins.
Your query would look like this:
ctx.selectFrom(TABLE)
.where(TABLE.PK_NAME.in(
new PkNameRecord(1, "XYZ", "OK"),
new PkNameRecord(2, "ZXY", "OK")))
.fetch();
The query generated behind the scenes is the same as yours, using the 3 constraint columns for the predicate. If you add or remove a constraint from the key, the query will no longer compile. A join would look like this:
ctx.select()
.from(TABLE)
.join(OTHER_TABLE)
.on(TABLE.PK_NAME.eq(OTHER_TABLE.FK_NAME))
.fetch();
Or an implicit join would look like this:
ctx.select(OTHER_TABLE.table().fields(), OTHER_TABLE.fields())
.from(OTHER_TABLE)
.fetch();
I'm writing a REST API in Node.js that will execute a SQL query and send the results;
in the request I need to send the WHERE conditions; e.g.:
GET 127.0.0.1:5007/users          //gets the list of users
GET 127.0.0.1:5007/users
    id = 1                        //gets the user with id 1
Right now the conditions are passed from the client to the REST API in the request's headers.
In the API I'm using Sequelize, an ORM that needs to receive the WHERE conditions in a particular form (an object); e.g., given the condition:
(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)
this needs to be formatted as a nested object:
-- OR --|-- AND --|-- x=1
        |         |-- OR --|-- y=2
        |                  |-- z=3
        |
        |-- AND --|-- x=3
                  |-- y=1
so the object would be:
Sequelize.or(
    Sequelize.and(
        {x: 1},
        Sequelize.or(
            {y: 2},
            {z: 3}
        )
    ),
    Sequelize.and(
        {x: 3},
        {y: 1}
    )
)
For now I'm passing a simple string (like "(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)"), but then I need a function on the server that converts the string into the needed object. In my opinion this method has the advantage that the developer writing the client can express the WHERE conditions in a simple, SQL-like way, and it is also independent of the ORM used, with no need to change the client if we change the server or switch to a different ORM.
The function to read and convert the conditions string into an object is giving me a headache (I've been trying to write one without success, so some examples of how to do something like this would help).
What I would like is a single route capable of executing almost any kind of SQL query and returning the results.
Right now I have a different route for everything:
127.0.0.1:5007/users      //to get all users
127.0.0.1:5007/users/1    //to get a single user
127.0.0.1:5007/lastusers  //to get users registered in the last month
and so on for the other tables I need to query (one route for every kind of request the client needs);
instead I would like to have only one route, something like:
127.0.0.1:5007/request
(when calling this route I would pass the table name and the conditions string)
Do you think this would be a good solution, or do you generally use other ways to handle this kind of thing?
Do you have any ideas on how to write a function that converts the conditions string into the desired object?
Any suggestion would be appreciated ;)
I would strongly advise you not to expose any part of your database model to your clients. Doing so means you can't change anything you expose without risking breaking the clients. One suggestion, based on what you've supplied, is that you can and should use query parameters to cut down on the number of endpoints you've got.
GET /users //to get all users
GET /users?registeredInPastDays=30 //to get user registered in the last month
GET /users/1 //to get a single user
Obviously "registeredInPastDays" should be renamed to something less clumsy; it's just an example.
As for the conditions string, there ought to be plenty of parsers available online; the grammar looks very straightforward.
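To illustrate how small that grammar is, here is a minimal recursive-descent parser sketch in plain JavaScript. It produces generic {and: [...]} / {or: [...]} nodes (names chosen here for the sketch, not a Sequelize API) that the caller could then map onto Sequelize.and / Sequelize.or:

```javascript
// Minimal recursive-descent parser sketch for condition strings such as
// "(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)". Produces plain nodes
// {and: [...]}, {or: [...]} and leaves like {x: 1}. Illustrative only:
// the tokenizer assumes simple identifiers, integers and single-quoted
// strings, and would mis-split identifiers that start with AND/OR.
function parseCondition(input) {
  const tokens = input.match(/\(|\)|AND|OR|[A-Za-z_][A-Za-z0-9_]*|=|\d+|'[^']*'/g) || [];
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function parseOr() {        // or  := and ('OR' and)*
    let node = parseAnd();
    while (peek() === 'OR') {
      next();
      node = { or: [node, parseAnd()] };
    }
    return node;
  }

  function parseAnd() {       // and := primary ('AND' primary)*
    let node = parsePrimary();
    while (peek() === 'AND') {
      next();
      node = { and: [node, parsePrimary()] };
    }
    return node;
  }

  function parsePrimary() {   // primary := '(' or ')' | field '=' value
    if (peek() === '(') {
      next();
      const node = parseOr();
      if (next() !== ')') throw new Error('expected )');
      return node;
    }
    const field = next();
    if (next() !== '=') throw new Error('expected =');
    const raw = next();
    const value = /^\d+$/.test(raw) ? Number(raw) : raw.replace(/^'|'$/g, '');
    return { [field]: value };
  }

  const tree = parseOr();
  if (pos !== tokens.length) throw new Error('unexpected trailing input');
  return tree;
}
```

Binding AND tighter than OR (parseOr delegating to parseAnd) gives the usual SQL precedence without any explicit precedence table.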
IMHO the main disadvantage of your solution is that you are creating just another API for querying data. Why create something from scratch if it already exists? You should use an existing, mature query API and focus on your business logic rather than inventing something new.
For example, you can take the query syntax from OData. Many people have been developing that standard for a long time, and they have already considered different use cases and obstacles for a query API.
Resources are located with a URI. You can use or mix three ways to address them:
Hierarchically, with a sequence of path segments:
/users/john/posts/4711
Non-hierarchically, with query parameters:
/users/john/posts?minVotes=10&minViews=1000&tags=java
With matrix parameters, which affect only one path segment:
/users;country=ukraine/posts
This is normally sufficient, but it has limitations such as the maximum URI length. In your case the problem is that you can't easily describe AND and OR conjunctions with query parameters. But you can use a custom or standard query syntax. For instance, if you want to find all cars or vehicles from Ford except the Capri with a price between $10000 and $20000, Google uses the search parameter
q=cars+OR+vehicles+%22ford%22+-capri+%2410000..%2420000
(the %22 is an escaped ", the %24 an escaped $).
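That escaping is exactly what JavaScript's encodeURIComponent produces for these characters:

```javascript
// Reproducing the escaped Google query string with encodeURIComponent:
// '"' encodes to %22 and '$' to %24, while letters, digits, '-', '.'
// and the '..' range syntax pass through unchanged.
const terms = ['cars', 'OR', 'vehicles', '"ford"', '-capri', '$10000..$20000'];
const q = terms.map(encodeURIComponent).join('+');
// q is now 'cars+OR+vehicles+%22ford%22+-capri+%2410000..%2420000'
```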
If this does not work for your case and you want to pass data outside of the URI, the format is just a matter of taste. Adding a custom header like X-Filter may be a valid approach. I would tend to use a POST. Although you just want to query data, this is still RESTful if you treat your request as the creation of a search result resource:
POST /search HTTP/1.1
your query-data
Your server should return the newly created resource in the Location header:
HTTP/1.1 201 Created
Location: /search/3
The result can still be cached and you can bookmark it or send the link. The downside is that you need an additional POST.
I have an audited entity A. Entity A holds a field 'name' and a collection of entities B (annotated as a many-to-many relationship). I created an instance of A, set its name and its collection of B entities, and saved it all into the DB. This is revision #1. Then I changed the name of A and updated it in the DB. This is revision #2.
I use the following method to get all entities of class A at revision #2:
List<A> list = getAuditReader().createQuery().forEntitiesAtRevision(A.class, 2)
.add(AuditEntity.revisionNumber().eq((int) revisionId)).getResultList();
I get entity A at revision #2, but Envers also fetches the collection of entities B related to this A from revision #1. Here is an example of the query used by Envers:
SELECT a_b_aud.a_id, a_b_aud.b_id
FROM a_b_aud CROSS JOIN b_aud
WHERE a_b_aud.b_id = b_aud.id
  AND b_aud.rev = (SELECT max(b_aud2.rev) FROM b_aud AS b_aud2
                   WHERE b_aud2.rev <= 2 AND b_aud.id = b_aud2.id)
  AND a_b_aud.rev = (SELECT max(a_b_aud2.rev) FROM a_b_aud AS a_b_aud2
                     WHERE a_b_aud2.rev <= 2 AND a_b_aud.a_id = a_b_aud2.a_id
                       AND a_b_aud.b_id = a_b_aud2.b_id)
But I actually need NULL as the collection of entities B if there were no changes to it at revision #2 (because of a performance issue).
There are two subselects in this query, and if we have more than one collection of entities related to A (C, D, E, F) and about 100 thousand rows in each of b_aud and a_b_aud, the query above takes a lot of time.
I defined entity B as not audited (i.e. did not add the @Audited annotation to B) and defined the A-B relation as follows:
@ManyToMany
@Cascade({org.hibernate.annotations.CascadeType.SAVE_UPDATE})
@JoinTable(name = "a_b", joinColumns = @JoinColumn(name = "a_id"))
@Audited(targetAuditMode = RelationTargetAuditMode.NOT_AUDITED)
public Set<B> getBs();
This fixes the first subselect.
But I cannot find a standard solution that avoids querying the Bs if they do not exist for the requested revision (in my case, #2). The query should then look like:
SELECT a_b_aud.a_id, a_b_aud.b_id
FROM a_b_aud CROSS JOIN b_aud
WHERE a_b_aud.b_id = b_aud.id AND b_aud.rev = 2 AND a_b_aud.rev = 2
The only solution I found is to use a native SQL query and execute it via a Hibernate template, then convert the result values into entity A using a ResultTransformer.
Could anybody help with this issue? Is there a standard configuration/annotation I can add to avoid the second subselect?
There's no option in Envers to not load related entities when requested. Note, however, that the B entities are always loaded lazily (regardless of the annotations on the relation), so if you don't want to execute the query which loads them, simply do not access that field.
If you want better read performance, you may also want to look at the validity audit strategy, see http://docs.jboss.org/hibernate/core/4.1/devguide/en-US/html/ch15.html#d5e4085. It has faster reads, but slower writes.
In Groovy, I want to search text (which is typically an XML structure) and find occurrences of items from an ignore list.
For example:
My incoming search data requests look like this (reduced for clarity, but most are large):
<CustomerRQ field='a'></CustomerRQ>
<AddressRQ field='a'></AddressRQ>
My ignore list is:
CustomerRQ
CustomerRS
Based on the above two incoming requests, I want to ignore the "CustomerRQ" request since it's in my ignore list, but I want to identify the "AddressRQ" request as a hit.
The overall intent is to use this for logging: incoming requests that match my ignore list should not be logged, but all others will be.
Here's some pseudo-code that may be on the right track, but not really:
def list = ["CustomerRQ", "CustomerRS"]
println(list.contains("<CustomerRQ field='a'>"))
I'm not sure, but I think a closure will work in this case; I'm still learning the Groovy ropes here. Maybe a regexp would work as well. The important thing is to search the incoming string (indexOf, exists, ...) against every entry in my ignore list.
A quick solution:
shouldIgnore = list.inject(false) { bool, val -> bool || line.contains(val) }
Whether or not this is the best idea depends on information we don't have; it may be better to do something more XML-aware (e.g. parse the request and check the root element name) rather than checking against a raw string.