Is there some way in Haskell persistent to perform a SELECT DISTINCT (column_name_1, column_name_2) query? Note that I do not mean unique; I really want to select the distinct records for these columns. I can of course perform some filter magic afterwards, but I would like the database (in my case Postgres) to solve it, and I did not really find anything about this in the documentation.
Kasper
A search of the persistent repo shows that the DISTINCT keyword is not used in any meaningful context, meaning that persistent does not support DISTINCT queries at all.
The reason for this is that an explicit design goal of persistent is to be backend-agnostic, and many non-SQL backends do not natively support certain SQL features, such as DISTINCT queries and joins.
I opened a GitHub issue to ask about this, and Matt Parsons, a persistent maintainer, responded recommending the esqueleto package, which is written on top of persistent and aims to provide SQL-specific functionality.
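For reference, a DISTINCT projection over two columns in esqueleto can look roughly like the following. This is only a sketch using esqueleto's classic syntax, and the Person entity is hypothetical; depending on your esqueleto version the classic combinators may live in Database.Esqueleto.Legacy, with Database.Esqueleto.Experimental as the newer interface.

```haskell
{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleContexts, FlexibleInstances,
             GADTs, GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, StandaloneDeriving, TemplateHaskell,
             TypeFamilies, UndecidableInstances #-}

import Control.Monad.IO.Class (MonadIO)
import Database.Esqueleto
import Database.Persist.Sql (SqlPersistT)
import Database.Persist.TH

-- A hypothetical entity, defined only for this example.
share [mkPersist sqlSettings] [persistLowerCase|
Person
    firstName String
    lastName  String
    age       Int
    deriving Show
|]

-- Roughly: SELECT DISTINCT first_name, last_name FROM person
distinctNames :: MonadIO m => SqlPersistT m [(Value String, Value String)]
distinctNames =
  select $ distinct $
    from $ \person ->
      return (person ^. PersonFirstName, person ^. PersonLastName)
```

Since you are on Postgres, esqueleto's PostgreSQL-specific module also provides a distinctOn combinator if DISTINCT ON semantics are closer to what you need.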
I am looking at jOOQ queries now... I feel that native SQL queries look more readable and maintainable, so why do we need to use jOOQ instead of native SQL queries?
Can someone explain a few reasons for using it?
Thanks.
Here are the top value propositions that you will never get with native (string based) SQL:
Dynamic SQL is what jOOQ is really really good at. You can compose the most complex queries dynamically based on user input, configuration, etc. and still be sure that the query will run correctly.
An often underestimated effect of dynamic SQL is the fact that you will be able to think of SQL as an algebra, because instead of writing difficult-to-compose native SQL syntax (with all the keywords, weird parenthesis rules, etc.), you can think in terms of expression trees, because you're effectively building an expression tree for your queries. Not only will this allow you to implement more sophisticated features, such as SQL transformation for multi-tenancy or row-level security, but everyday things like transforming a set of values into a SQL set operation become much simpler as well.
Vendor agnosticity. As soon as you have to support more than one SQL dialect, writing SQL manually is close to impossible because of the many subtle differences between dialects. The jOOQ documentation illustrates this e.g. with the LIMIT clause. Once this is a problem you have, you have to use either JPA (with its much more restricted query language, JPQL) or jOOQ (almost no limitations with respect to SQL usage).
Type safety. Now, you will get type safety when you write views and stored procedures as well, but very often, you want to run ad-hoc queries from Java, and there is no guarantee about table names, column names, column data types, or syntax correctness when you do SQL in a string based fashion, e.g. using JDBC or JdbcTemplate, etc. By the way: jOOQ encourages you to use as many views and stored procedures as you want. They fit perfectly in the jOOQ paradigm.
Code generation. Which leads to more type safety. Your database schema becomes part of your client code. Your client code no longer compiles when your queries are incorrect. Imagine someone renaming a column and forgetting to refactor the 20 queries that use it. IDEs only provide some degree of safety when writing the query for the first time, they don't help you when you refactor your schema. With jOOQ, your build fails and you can fix the problem long before you go into production.
Documentation. The generated code also acts as documentation for your schema. Comments on your tables and columns turn into Javadoc, which you can introspect in your client language without having to look them up on the server.
Data type bindings are very easy with jOOQ. Imagine using a library of 100s of stored procedures. Not only will you be able to access them type safely (through code generation), as if they were actual Java code, but you don't have to worry about the tedious and useless activity of binding each single in and out parameter to a type and value.
There are a ton of more advanced features derived from the above, such as:
The availability of a parser and by consequence the possibility of translating SQL.
Schema management tools, such as diffing two schema versions
Basic ActiveRecord support, including some nice things like optimistic locking.
Synthetic SQL features like type safe implicit JOIN
Query By Example.
A nice integration in Java streams or reactive streams.
Some more advanced SQL transformations (this is work in progress).
Export and import functionality
Simple JDBC mocking functionality, including a file based database mock.
Diagnostics
And, if you occasionally think something is much simpler to do with plain native SQL, then just:
Use plain native SQL, also in jOOQ
Disclaimer: As I work for the vendor, I'm obviously biased.
I've been evaluating Cassandra to replace MySQL in our microservices environment, due to MySQL being the only portion of the infrastructure that is not distributed. Our needs are both write and read intensive as it's a platform for exchanging raw data. A type of "bus" for lack of better description. Our selects are fairly simple and should remain that way, but I'm already struggling to get past some basic filtering due to the extreme limitations of select queries.
For example, any field I need to filter on has to be part of the key, and at that point I can't change the data in those fields because they're part of the key. I can use a SASI index, but then I hit a wall if I need to filter by more than one field. The hope was that materialized views would help with this, but in another post I was told to avoid them due to some instability and problematic behavior.
It would seem that Cassandra is good at storage but realistically, not good as a standalone database platform for non-trivial applications beyond very basic filtering (i.e. a single field.) I'm guessing I'll have to accept the use of another front-end like Elastic, Solr, etc. The other option might be to accept the idea of filtering data within application logic, which is do-able, as long as the data sets coming back remain small enough.
Apache Cassandra is far more than just a storage engine. It is designed as a distributed database oriented towards high availability and partition tolerance, which can limit query capability if you want good and reliable performance.
It has a query language, CQL, which is quite powerful but deliberately limited in ways that guide users toward effective queries. To use it effectively you need to model your tables around your queries.
More often than not, you need to query your data in multiple ways, so users will often denormalize their data into multiple tables. Materialized views aim to make that user experience better, but they have had their share of bugs and limitations, as you indicated. At this point, if you consider using them you should be aware of their limitations, although that is generally a good idea when evaluating anything.
If you need advanced querying capabilities or do not know ahead of time what the queries will be, Cassandra may not be a good fit. You can build these capabilities using products like Spark and Solr on top of Cassandra (which is what DataStax Enterprise does), but it may be difficult to achieve using Cassandra alone.
On the other hand there are many use cases where Cassandra is a great fit, such as messaging, personalization, sensor data, and so on.
I am new to NoSQL databases. I am building a project and am stuck on whether to choose a SQL database or a NoSQL database for it.
The requirements of my project: a legal firm would have many clients, each client can have a different matter type such as Immigration, Conveyancing, Family, etc., and each matter type can have different fields, which are never constant and may well change in the future.
Because of this I thought NoSQL databases might be a good choice, as they are document based and I can add any new fields to the document structure instead of dynamically adding new columns to a SQL table, which is not a good approach (at least I think so).
Can anyone please suggest an approach, or refer me to an article that can help me decide?
To give some clarity to my question, let me explain with an example.
For a client named xyz with matter type Immigration, I may have fields such as firstName, lastName and dob at the moment, but later on, for the same client, I might have to add dependants and their details.
For a client named def with matter type Conveyancing, I would have different fields, and those fields should also be added dynamically depending on the matter type.
Thank you in advance
Regards
Anand
In my opinion, you shouldn't consider only this feature in order to decide between NoSQL and an RDBMS.
In fact, this flexibility sounds very good, but it can be dangerous: as systems grow, things can get out of hand.
I have a system where I use MongoDB, but even so, I decided to create a schema for my collections.
I would suggest you finish modeling first, and only then conclude whether it's really necessary to use NoSQL.
I would suggest you look into PostgreSQL if you are expecting large datasets. It offers some of the advantages of NoSQL databases, such as support for key-value data, while keeping the rigid data structure of SQL databases (see the sketch after the links below). The following articles may help you decide which approach to choose:
NoSql vs Sql
postgres vs mongodb
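To make the PostgreSQL route above a little more concrete, here is a rough sketch (the matter table, its columns, and the query function are all hypothetical, and the Haskell postgresql-simple client is used only for illustration): the fields every matter shares stay as ordinary columns, while the per-matter-type fields go into a jsonb column that can change without schema migrations.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value)
import Data.Text (Text)
import Database.PostgreSQL.Simple (Connection, Only (Only), query)

-- Hypothetical table: shared columns stay rigid, per-matter-type fields
-- live in a jsonb column and can change without schema migrations.
--
--   CREATE TABLE matter (
--       id          serial PRIMARY KEY,
--       client_name text   NOT NULL,
--       matter_type text   NOT NULL,
--       fields      jsonb  NOT NULL DEFAULT '{}'
--   );

-- Fetch every matter for a client, with its free-form fields as JSON.
mattersForClient :: Connection -> Text -> IO [(Text, Value)]
mattersForClient conn client =
  query conn
    "SELECT matter_type, fields FROM matter WHERE client_name = ?"
    (Only client)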
In http://www.yesodweb.com/book/persistent there is no mention of SQL views.
I have (even in imperative languages) been very fond of immutable, append-only database design, i.e. only INSERTs and SELECTs; UPDATEs and DELETEs are not used.
This has the advantage of preserving all history, at the expense of making the current 'state' a relatively expensive pure function of the history in the DB.
e.g. there is not a 'user' table, just 'user_created', 'user_password_updated' and 'user_deleted' tables, which are unified in a 'user' SQL VIEW, showing the current state of users.
How should I work with VIEWs in Persistent? Should I use Persistent at all - is it (ironically for Haskell) too tightly focused on a mutable DB for my use-case?
It's been a long time since the question was asked, but I thought it was worth responding, because seven years later the answer has not changed and I really like your idea of keeping the old versions of tables around and reading them through views! One drawback of this approach is that using Template Haskell in persistent will slow down compile times a lot. I once had a database of about 50 tables defined with persistent's Template Haskell, and it took over half an hour to compile whenever it was changed.
Yesod's persistent does not support SQL views, and I doubt it ever will, because it intends to be database agnostic. Currently it looks like it supports CouchDB, MongoDB, MySQL, PostgreSQL, Redis and SQLite. Not all of these databases support SQL-style views, so it would be hard to abstract over all of them.
Where persistent excels is in providing an easy way to create a set of Haskell types that serialize to and from different databases. It provides type class instances and functions for single-table queries, and these work really well. If you want to do join queries on a SQL database that you are interfacing with through persistent, you can use esqueleto, a type-safe EDSL for SQL join queries.
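For instance, an inner join in esqueleto's classic syntax looks roughly like this (a sketch only; the Person and BlogPost entities are hypothetical, and newer esqueleto releases may prefer the Database.Esqueleto.Experimental syntax):

```haskell
{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleContexts, FlexibleInstances,
             GADTs, GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, StandaloneDeriving, TemplateHaskell,
             TypeFamilies, UndecidableInstances #-}

import Control.Monad.IO.Class (MonadIO)
import Database.Esqueleto
import Database.Persist.Sql (Entity, SqlPersistT)
import Database.Persist.TH

-- Hypothetical entities, defined only for this example.
share [mkPersist sqlSettings] [persistLowerCase|
Person
    name String
    deriving Show
BlogPost
    title    String
    authorId PersonId
    deriving Show
|]

-- Roughly: SELECT ... FROM person
--          INNER JOIN blog_post ON person.id = blog_post.author_id
--          ORDER BY person.name
postsWithAuthors :: MonadIO m => SqlPersistT m [(Entity Person, Entity BlogPost)]
postsWithAuthors =
  select $
    from $ \(person `InnerJoin` post) -> do
      on (person ^. PersonId ==. post ^. BlogPostAuthorId)
      orderBy [asc (person ^. PersonName)]
      return (person, post)
```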
As far as handling SQL views in Haskell goes, I have not come across any tool yet. You can either use rawQuery, which will work but be harder to maintain, or you can build your own tool around one of the Haskell DB interfaces like postgresql-simple, which is what persistent does. In fact, you could even start from the persistent source code for whatever database you are using and build an SQL view EDSL as you need it. In a closed-source project I helped build a custom PostgreSQL interface based on some of persistent's ideas and types, but without Template Haskell, because the compile time was too slow.
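To sketch what the raw-SQL route can look like for a view: persistent will happily read an entity from a view if you declare an entity whose columns (and id) match it and point its table name at the view. Everything below (the user_view view and the UserView entity) is hypothetical:

```haskell
{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleContexts, FlexibleInstances,
             GADTs, GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, StandaloneDeriving, TemplateHaskell,
             TypeFamilies, UndecidableInstances #-}

import Control.Monad.IO.Class (MonadIO)
import Data.Text (Text)
import Database.Persist.Sql (Entity, PersistValue (PersistText), SqlPersistT, rawSql)
import Database.Persist.TH

-- A hypothetical entity whose backing relation is the user_view view
-- (set with sql=); persistent never needs to know it is a view, not a table.
-- The view must also expose an id column for Entity parsing to work.
share [mkPersist sqlSettings] [persistLowerCase|
UserView sql=user_view
    name  Text
    email Text
    deriving Show
|]

-- Read the current state of users straight from the view.
-- The ?? placeholder expands to the column list of the requested entity.
usersNamed :: MonadIO m => Text -> SqlPersistT m [Entity UserView]
usersNamed name =
  rawSql "SELECT ?? FROM user_view WHERE name = ?" [PersistText name]
```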
I am building a tool that searches people based on a number of attributes. The values for these attributes are scattered across several systems.
As an example, dateOfBirth is stored in a SQL Server database as part of system ABC. That person's sales region assignment is stored in some horrible legacy database. Other attributes are stored in a system only accessible over an XML web service.
To make matters worse, the legacy database and the web service can be really slow.
What strategies and tips should I consider for implementing a search across all these systems?
Note: Although I posted an answer, I'm not confident it's a great answer. I don't intend to accept my own answer unless no one else gives better insight.
You could consider using an indexing mechanism to retrieve and locally index the data across all the systems, and then perform your searches against the index. Searches would be an awful lot faster and more reliable.
Of course, this just shifts the problem from one part of your system to another - now your indexing mechanism has to handle failures and heterogeneous systems, but that may be an easier problem to solve.
Another factor is how often the data changes. If you have to query data in real-time that goes stale very quickly, then indexing may not be practical.
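As a toy illustration of the indexing idea above (all of the types and the input format here are made up), the local index can be as simple as an inverted map from attribute/value pairs to person ids, rebuilt periodically from the source systems and queried instead of them:

```haskell
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

type PersonId  = Int
type Attribute = String

-- An inverted index from (attribute, value) to the people that match it.
type Index = Map (Attribute, String) [PersonId]

-- Rebuild the index from rows pulled out of the various source systems.
buildIndex :: [(PersonId, Attribute, String)] -> Index
buildIndex rows =
  Map.fromListWith (++) [ ((attr, val), [pid]) | (pid, attr, val) <- rows ]

-- Answer a search from the local index without touching the slow systems.
search :: Attribute -> String -> Index -> [PersonId]
search attr val = Map.findWithDefault [] (attr, val)
```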
If you can get away with a restrictive search, start by returning a list based on the search criteria corresponding to the fastest data source. Then join up those records with the other systems and remove records which don't match the search criteria.
If you have to implement OR logic, this approach is not going to work.
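A rough sketch of this fastest-source-first strategy (all of the functions and types are hypothetical placeholders):

```haskell
import Control.Monad (filterM)

type PersonId = Int

-- Hypothetical search criteria.
data Criteria = Criteria { regionIs :: String, bornAfter :: String }

-- Get candidate ids from the fastest system first, then keep only the
-- records that the slower systems confirm against the remaining criteria.
searchPeople :: Criteria -> IO [PersonId]
searchPeople criteria = do
  candidates <- queryFastSource criteria
  filterM (matchesSlowSources criteria) candidates

-- Placeholder for the fast primary query (e.g. the SQL Server system).
queryFastSource :: Criteria -> IO [PersonId]
queryFastSource _ = pure []

-- Placeholder check against the legacy database and the web service.
matchesSlowSources :: Criteria -> PersonId -> IO Bool
matchesSlowSources _ _ = pure True
```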
While not an actual answer, this might at least get you partway to a workable solution. We had a similar situation at a previous employer - lots of data sources, different ways of accessing those data sources, different access permissions, military/government/civilian sources, etc. We used Mule, which is built around the Enterprise Service Bus concept, to connect these data sources to our application. My details are a bit sketchy, as I wasn't the actual implementor, just an integrator, but what we did was define a channel in Mule. You then write a simple integration piece to go between the channel and the data source, and between the application and the channel. The integration piece does the work of making the actual query and formatting the results. We had a generic SQL integration piece for accessing a database, and for things like web services we had some base classes that implemented common functionality, so the actual customization of the integration pieces was a lot less work than it sounds. The application could then query the channel, which would handle accessing the various data sources, transform the results into a normalized bit of XML, and return them to the application.
This had a lot of advantages for our situation. We could include new data sources for existing queries by simply connecting them to the channel - the application didn't have to know or care what data sources were there, as it only looked at the data from the channel. Since data can be pushed or pulled from the channel, we could have a data source update the application when, for example, it was updated.
It took a while to get it configured and working, but once we got it going, we were pretty successful with it. In our demo setup, we ended up with 4 or 5 applications acting as both producers and consumers of data, and connecting to maybe 10 data sources.
Have you thought of moving the data into a separate structure?
For example, Lucene stores data to be searched in a schema-less inverted index. You could have a separate program that retrieves data from all your different sources and puts it in a Lucene index. Your search could work against this index, and the search results could contain a unique identifier and the system it came from.
http://lucene.apache.org/java/docs/
(There are implementations in other languages as well)
Have you taken a look at YQL? It may not be the perfect solution, but it might give you a starting point to work from.
Well, for starters I'd parallelize the queries to the different systems. That way we can minimize the query time.
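For example, with the async package the different systems can be queried concurrently, so the total latency is roughly that of the slowest source rather than the sum of all of them (a sketch; the per-system fetch functions are hypothetical placeholders):

```haskell
import Control.Concurrent.Async (concurrently)

-- The combined view of one person, assembled from the three systems.
data Profile = Profile
  { dateOfBirth :: Maybe String
  , salesRegion :: Maybe String
  , otherAttrs  :: [(String, String)]
  }

-- Query all three systems at the same time instead of one after another.
fetchProfile :: Int -> IO Profile
fetchProfile personId = do
  (dob, (region, rest)) <-
    concurrently (fetchFromSqlServer personId)
                 (concurrently (fetchFromLegacyDb personId)
                               (fetchFromWebService personId))
  pure (Profile dob region rest)

-- Hypothetical per-system accessors; real ones would talk to SQL Server,
-- the legacy database, and the XML web service respectively.
fetchFromSqlServer  :: Int -> IO (Maybe String)
fetchFromSqlServer  _ = pure Nothing

fetchFromLegacyDb   :: Int -> IO (Maybe String)
fetchFromLegacyDb   _ = pure Nothing

fetchFromWebService :: Int -> IO [(String, String)]
fetchFromWebService _ = pure []
```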
You might also want to think about caching and aggregating the search attributes for subsequent queries in order to speed things up.
You have the option of creating an aggregation service or middleware that aggregates all the different systems so that you can provide a single interface for querying. If you do that, this is where I'd apply the previously mentioned caching and parallelization optimizations.
However, with all of that you will need to weigh up the development time, deployment time and long-term benefits of the effort against migrating the old legacy database to a faster, more modern one. You haven't said how tied into other systems those databases are, so it may not be a very viable option in the short term.
EDIT: in response to data going out of date: you can consider caching if you don't need the data to always match the source databases in real time. Also, if some data doesn't change very often (e.g. dates of birth) then you should cache it. If you employ caching, you could make your system configurable as to which tables/columns to include or exclude from the cache, and you could give each table/column a configurable cache timeout with an overall default.
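A minimal sketch of such a cache with per-attribute timeouts (everything here is hypothetical; a production cache would also need eviction and concurrency control):

```haskell
import Data.IORef (IORef, modifyIORef', readIORef)
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import Data.Time (NominalDiffTime, UTCTime, addUTCTime, getCurrentTime)

type Key   = (Int, String)               -- (person id, attribute name)
type Cache = Map Key (UTCTime, String)   -- expiry time and cached value

-- Return the cached value if it is still fresh; otherwise run the (slow)
-- fetch action, cache its result with the attribute's timeout, and return it.
cachedLookup :: IORef Cache -> NominalDiffTime -> Key -> IO String -> IO String
cachedLookup ref ttl key fetch = do
  now   <- getCurrentTime
  cache <- readIORef ref
  case Map.lookup key cache of
    Just (expiry, val) | expiry > now -> pure val
    _ -> do
      val <- fetch
      modifyIORef' ref (Map.insert key (addUTCTime ttl now, val))
      pure val
```

Rarely changing attributes like dates of birth would get a long timeout, volatile ones a short one, matching the per-table/column configuration described above.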
Use Pentaho/Kettle to copy all of the data fields that you can search on and display into a local MySQL database
http://www.pentaho.com/products/data_integration/
Create a batch script to run nightly and update your local copy. Maybe even every hour. Then, write your query against your local MySQL database and display the results.