A very basic question from a newbie to the wonderful jhipster ecosystem.
How do I add an additional page to my app that displays two plots based on queries on the database?
I have created the demo JHipster app that tracks authors and books in a Postgres database, as described in the tutorial.
I would now like to add a summary page to my sample JHipster app that displays two plots based on data retrieved from SQL queries:
1) barplot of total books published per author
select author_id, count(*)
from book
group by author_id
2) line plot showing number of publications for each author per year
select author_id, extract(year from publication_date) as year, count(*)
from book
group by author_id,year
I would normally use R/Shiny and Plotly to create this type of dashboard, but I would really appreciate any guidance on how to achieve the same using the JHipster framework.
Thanks
Iain
I see two tasks here. In general, you first prepare two API endpoints to deliver the data to your frontend (keep in mind that JHipster provides both server and client), and then use Plotly (the JS library) to draw your plots.
Preparing the API
First, translate your SQL queries to Spring Data JPA, like this:
public interface BookRepository extends JpaRepository<Book, Long> {

    // Each row comes back as an Object[] of {authorId, count}; these aggregate
    // queries do not return Book entities, so List<Book> would be wrong here.
    @Query("select b.author.id, count(b) from Book b group by b.author.id")
    List<Object[]> findAllGroupByAuthor();

    @Query("select b.author.id, year(b.publicationDate), count(b) from Book b group by b.author.id, year(b.publicationDate)")
    List<Object[]> findAllGroupByAuthorAndYear();
}
Then expose it through a REST controller. Here is an example:
@RestController
@RequestMapping("/api/books/query")
public class CustomBookQueryResource {

    private final BookRepository bookRepository;

    public CustomBookQueryResource(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @GetMapping("/group_by_author")
    public ResponseEntity<List<Object[]>> groupByAuthor() {
        return ResponseEntity.ok(bookRepository.findAllGroupByAuthor());
    }

    @GetMapping("/group_by_author_and_year")
    public ResponseEntity<List<Object[]>> groupByAuthorAndYear() {
        return ResponseEntity.ok(bookRepository.findAllGroupByAuthorAndYear());
    }
}
At this point you should already have API endpoints serving your data. Next, add a custom query service on the Angular side:
angular
    .module('<yourApp>')
    .factory('CustomBookQuery', CustomBookQuery);

CustomBookQuery.$inject = ['$http'];

function CustomBookQuery($http) {
    var resourceUrl = 'api/books/query/';
    return {
        findAllGroupByAuthor: function () {
            return $http.get(resourceUrl + 'group_by_author');
        },
        findAllGroupByAuthorAndYear: function () {
            return $http.get(resourceUrl + 'group_by_author_and_year');
        }
    };
}
Now you can simply inject the service and pass its resolved promise data to Plotly, which has both a plain JS API and an Angular implementation.
(I wrote the code above from memory, so it's not tested.)
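For the plotting step, here is a minimal controller sketch (the controller name and the books-per-author div id are my own inventions; it assumes plotly.js is loaded globally and that the endpoint returns rows of [authorId, count], as in the repository above):

angular
    .module('<yourApp>')
    .controller('BookDashboardController', BookDashboardController);

BookDashboardController.$inject = ['CustomBookQuery'];

function BookDashboardController(CustomBookQuery) {
    CustomBookQuery.findAllGroupByAuthor().then(function (response) {
        // each row is [authorId, count]
        var data = [{
            type: 'bar',
            x: response.data.map(function (row) { return row[0]; }),
            y: response.data.map(function (row) { return row[1]; })
        }];
        // renders into <div id="books-per-author"></div> in the template
        Plotly.newPlot('books-per-author', data);
    });
}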
For simple cases where your query only involves one entity, it might be as simple as adding a method (or methods) to the associated repository interface and having Spring Data handle the query generation.
Take a look at https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#repositories.query-methods.query-creation
and if you'd prefer to supply the query
https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#jpa.query-methods.at-query
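For instance, a repository for the single-entity case might look like this (the method names and the date-range example are illustrative; it assumes the Book entity from the JHipster demo with its author relation and publicationDate field):

import java.time.LocalDate;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

public interface BookRepository extends JpaRepository<Book, Long> {

    // Derived query: Spring Data builds "where book.author.id = ?1"
    // purely from the method name.
    List<Book> findByAuthorId(Long authorId);

    // The same idea with an explicit @Query, if you prefer to supply the JPQL.
    @Query("select b from Book b where b.publicationDate between ?1 and ?2")
    List<Book> findPublishedBetween(LocalDate from, LocalDate to);
}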
You could then add an HTML page with JS to render the plot of the results of the query.
For situations that require a join over entities, there are a few possible approaches:
You could create a view in PostgreSQL that effectively exposes this query as a table, and then create an entity for it (and then render it in your favorite JS lib). This is my preferred option. You would need to update the Liquibase changelog to create the view, create the entity, and create a Spring Data repository interface (which will result in the autogeneration of a REST service, etc.), and then render using HTML and JS.
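As a rough illustration of the view idea (the view name is made up; in JHipster you would create it through a Liquibase changeSet rather than running it by hand):

CREATE VIEW books_per_author AS
SELECT author_id, count(*) AS total_books
FROM book
GROUP BY author_id;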
Alternatively, JPA entities can be queried using JPQL. That will give you a collection of objects that you can render with Angular.js or whatever your favorite JS library is.
http://www.oracle.com/technetwork/articles/vasiliev-jpql-087123.html
You can also use JPA to map a native SQL query to a POJO (see especially the JPA 2.1 notes) and then create a REST service and render using your JS.
JPA : How to convert a native query result set to POJO class collection
http://www.thoughts-on-java.org/jpa-native-queries/
Related
What is the proper method to seed data into an Azure database? Currently, in development, I have a seeder method that inserts the first couple of users as well as products. The users' (including the admin user's) usernames and passwords are hardcoded into the Seed method; is this an acceptable practice?
As far as the products are concerned, I have a JSON file with the product names and descriptions, which the seeder method iterates through in development, inserting the data.
To answer your question, "The Users (including admin user) username and password are hardcoded into the Seed method, is this an acceptable practice?":
No, you should not keep passwords in cleartext. Store them in hashed (or otherwise encrypted) form and seed that value instead.
In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, and is eventually transformed into SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method.
Suppose the three classes in my model are Magazine, Article and Author. A magazine can have one or more articles and an article can have one author. There's also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up.
Let's take an example with a single entity type.
Let’s start by seeing what it looks like to provide seed data for a magazine—at its simplest.
The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method.
Here’s the structure of the Magazine type:
public class Magazine
{
    public int MagazineId { get; set; }
    public string Name { get; set; }
    public string Publisher { get; set; }
    public List<Article> Articles { get; set; }
}
It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Magazine>().HasData(
        new Magazine { MagazineId = 1, Name = "MSDN Magazine" });
}
A couple things to pay attention to here: First, I’m explicitly setting the key property, MagazineId. Second, I’m not supplying the Publisher string.
Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values:
migrationBuilder.InsertData(
table: "Magazines",
columns: new[] { "MagazineId", "Name", "Publisher" },
values: new object[] { 1, "MSDN Magazine", null });
Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted:
MagazineId = table.Column<int>(nullable: false)
.Annotation("Sqlite:Autoincrement", true)
Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column:
INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher")
VALUES (1, 'MSDN Magazine', NULL);
That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly?
I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved!
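The generated script looks roughly like this (reconstructed from memory rather than copied from real output):

SET IDENTITY_INSERT [Magazines] ON;
INSERT INTO [Magazines] ([MagazineId], [Name], [Publisher])
VALUES (1, N'MSDN Magazine', NULL);
SET IDENTITY_INSERT [Magazines] OFF;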
You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once:
modelBuilder.Entity<Magazine>()
.HasData(new Magazine{MagazineId=2, Name="New Yorker"},
new Magazine{MagazineId=3, Name="Scientific American"}
);
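If you do need related data, each entity type gets its own HasData call, and you wire the rows together with explicit foreign-key values. A sketch (the Article type's properties are assumptions, since its definition isn't shown here):

modelBuilder.Entity<Magazine>().HasData(
    new Magazine { MagazineId = 1, Name = "MSDN Magazine" });

// Article's shape is assumed; the point is supplying MagazineId explicitly
// so the seeded rows reference each other.
modelBuilder.Entity<Article>().HasData(
    new Article { ArticleId = 1, MagazineId = 1, Title = "EF Core 2.1 Query Types" });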
For a complete example, you can browse through this sample repo.
Hope it helps.
I recently started using Node.js + Express.js (generated with pug) + pg-promise for handling db.
My first target is to obtain data from Postgres (already set up) and display it pretty using render and pug. Let's say it is user list from Users table.
From this RESTful tutorial I learned how to get data and return it as JSON, and it worked.
Based on Mozilla's tutorial, I separated my code:
routes/users.js: where for '/' I call the user_controller.user_list method (using router.get)
controllers/userController.js: here I export user_list, where I would like to ask the model for data and call render if I have results
queries.js: which is kind of my model? But I'm not sure. It has an API: a connection to the db with promises, and one function for every query I am going to use in the controllers. I believe I should have one model file per table (or any logical entity), but where should I store the pgp connection?
This file is based on the first tutorial I mentioned:
// queries.js (connectionString is set properly to my postgres)
var pgp = require('pg-promise')(options);
var db = pgp(connectionString);

function getUsers(req, res, next) {
    db.any('SELECT (user_id, username) FROM public.users ORDER BY user_id ASC LIMIT 1000')
        .then(function (data) {
            res.json({ data: data });
        })
        .catch(function (err) {
            return next(err);
        });
}

module.exports = {
    getUsers: getUsers
};
Here my problem starts, as most tutorials use Mongoose, which is very model-db-schema-friendly, while what I have is a simple 'SELECT ...' string that I pass to pg-promise's any() function.
Therefore I have no model class like User.
In userController.js I don't know how to call getUsers() to handle its data. Returning a JS object from getUsers() would be nice.
Also: where should I call render? In controller or only in
db.any(...).then(function (data) { <--here--> })
Before, I also tried to embed whole Postgres handling into Controller but from db.any() I got this array for handling:
[{ row: '(1,John)' },{ row: '(2,Amy)' },{ row: '(50,Peter)' } ]
I didn't know how to go on from there, as I had probably lost my API functionality as well ;-)
I have browsed through multiple tutorials on how to handle MVC, but they usually cover MongoDB and satisfy readers with res.send(), not render().
I am not sure that I understand what your question is exactly about, but since I do not have enough reputation to comment, I'll do my best to help you with your questions. :)
First, regarding the queries.js file: it is IMO not exactly a model, but rather a DAO (Data Access Object) file. The DAO sits between your Model (which is actually your database) and your Controller layers. There is usually one DAO file per object (User, Pet, whatever you want) in your data model.
When the data model is rather complex, it can be useful to use an Object-Relational Mapping (ORM) such as Mongoose to map your database and execute complex processes on your objects. In such a case, you might need a specific file per object to describe your model and store your queries. But since you don't need an ORM, your DAO can interact directly with your database. That is why you do not have a User.js file.
Regarding the way the db object should be used, I think you should refer directly to pg-promise documentation on the matter.
IMPORTANT: For any given connection, you should only create a single
Database object in a separate module, to be shared in your application
(see the code example below). If instead you keep creating the
Database object dynamically, your application will suffer from loss in
performance, and will be getting a warning in a development
environment (when NODE_ENV = development)
As a matter of fact, a db object in pg-promise sort of represents the database itself and is actually designed for the simultaneous use of several databases, which does not seem to be your case for the moment.
Finally, when it comes to the render function, I believe it should be in the controller, as your DAO is not supposed to know how the data it has gathered is going to be used.
Modularity is always a time-saving choice on the long-term.
Furthermore, note that you might later need a Business Layer between your DAO and your controller, in order to preprocess and postprocess data you are going to persist or to display. In such a case, if you need for instance to ask for data from your database, you will need to render data after it is processed by the Business layer. If the render is made in the DAO layer, it will not be possible.
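To make this concrete, here is a minimal sketch of the separation (file layout and names follow the question; it assumes getUsers is changed to return the promise rather than sending the response itself):

// queries.js (the DAO): only fetches data, knows nothing about rendering
function getUsers() {
    return db.any('SELECT user_id, username FROM public.users ORDER BY user_id ASC LIMIT 1000');
}
module.exports = { getUsers: getUsers };

// controllers/userController.js: decides what to do with the data
var queries = require('../queries');

exports.user_list = function (req, res, next) {
    queries.getUsers()
        .then(function (users) {
            // render belongs here, in the controller
            res.render('user_list', { users: users });
        })
        .catch(next);
};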
In the link I provided earlier to pg-promise's db object connection, you will also find documentation on the any() method. You might already have looked it up.
It specifically states that it returns
A promise object that represents the query result:
When no rows are returned, it resolves with an empty array.
When 1 or more rows are returned, it resolves with the array of rows.
so your returned data is a JS array. Note that JSON.stringify(yourArray) produces a JSON string rather than a JS object, so only use it if you need a string representation before rendering in your controller.
In fact, I suspect Pug can use the array directly.
Also, if you cannot get any data out of your DAO, maybe you should check that your data object is not empty, as such a case is tolerated by the any() method. If you expect your query to always return something, you might want to consider using the many() or the one() methods.
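Incidentally, the { row: '(1,John)' } array you showed comes from the parentheses in your SELECT: SELECT (user_id, username) makes Postgres build a single composite (row-constructor) column per row. Listing the columns without parentheses gives you plain objects:

// Each resulting row is then a plain object, e.g. { user_id: 1, username: 'John' }
db.any('SELECT user_id, username FROM public.users ORDER BY user_id ASC LIMIT 1000');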
I hope this helps you.
I am building a REST API which connects to a Neo4j instance. I am using the koa-neo4j library as the basis (https://github.com/assister-ai/koa-neo4j-starter-kit). I am a beginner with all of these technologies, but thanks to some help from this forum I have the basic functionality working. For example, the code below allows me to create a new node with the label "metric" and set the name and dateAdded properties.
URL:
/metric?metricName=Test&dateAdded=2/21/2017
index.js
app.defineAPI({
    method: 'POST',
    route: '/api/v1/imm/metric',
    cypherQueryFile: './src/api/v1/imm/metric/createMetric.cyp'
});
createMetric.cyp:
CREATE (n:metric {
    name: $metricName,
    dateAdded: $dateAdded
})
RETURN ID(n) AS id
However, I am struggling to see how to approach more complicated examples. How can I handle situations where I don't know beforehand how many properties will be set when creating a new node, or where I want to create multiple nodes in a single POST? Ideally I would like to be able to pass something like JSON as part of the POST, containing all of the nodes, labels and properties that I want to create. Is something like this possible? I tried using the Cypher query below and passing a JSON string in the POST body, but it didn't work.
UNWIND $props AS properties
CREATE (n:metric)
SET n = properties
RETURN n
Would I be better off switching to the Neo4j REST API instead of the Bolt protocol and the koa-neo4j framework? From my research I thought it was better to use Bolt, but I want a REST API as the middle layer between my front and back end, so I am willing to change over if this will be easier in the long term.
Thanks for the help!
Your Cypher syntax is bad in a couple of ways.
UNWIND only accepts a collection as its argument, not a string.
SET n = properties is only legal if properties is a map, not a string.
This query should work for creating a single node (assuming that $props is a map containing all the properties you want to store with the newly created node):
CREATE (n:metric $props)
RETURN n
If you want to create multiple nodes, then this query (essentially the same as yours) should work (but only if $prop_collection is a collection of maps):
UNWIND $prop_collection AS props
CREATE (n:metric)
SET n = props
RETURN n
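For reference, here is how the multi-node version can be exercised directly with the official JavaScript driver (connection details and data are made up); it shows the shape $prop_collection needs to have, namely a collection of maps:

var neo4j = require('neo4j-driver').v1;
var driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('neo4j', 'password'));
var session = driver.session();

session.run(
    'UNWIND $prop_collection AS props CREATE (n:metric) SET n = props RETURN n',
    { prop_collection: [
        { name: 'Metric A', dateAdded: '2/21/2017' },
        { name: 'Metric B', dateAdded: '2/22/2017' }
    ] }
).then(function (result) {
    console.log(result.records.length + ' nodes created');
    session.close();
    driver.close();
});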
I too have faced difficulties when trying to pass complex types as arguments to Neo4j. This has to do with type conversions between JS and Cypher over Bolt, and there is not much one can do except file an issue in the official Neo4j JavaScript driver repo. koa-neo4j uses the official driver under the hood.
One way to go about such scenarios in koa-neo4j is using JavaScript to manipulate the arguments before sending to Cypher:
https://github.com/assister-ai/koa-neo4j#preprocess-lifecycle
It is also possible to further manipulate the results of a Cypher query using the postProcess lifecycle hook:
https://github.com/assister-ai/koa-neo4j#postprocess-lifecycle
Given the following code:
listView.ItemsSource =
App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script to, e.g., use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
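For example, forwarding those values to a raw mssql query might look something like this (the table and column names are made up, and the fallback page size is an assumption, so treat this as an untested sketch):

function read(query, user, request) {
    var components = query.getComponents();
    var top = components.take || 50;   // fallback page size: an assumption
    var skip = components.skip || 0;
    // OFFSET/FETCH requires SQL Server 2012 or later
    mssql.query(
        'SELECT * FROM OtherTable ORDER BY id ' +
        'OFFSET ' + skip + ' ROWS FETCH NEXT ' + top + ' ROWS ONLY',
        {
            success: function (results) {
                request.respond(statusCodes.OK, results);
            }
        });
}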
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.
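So with the untyped table you would have to page "by hand", appending the OData paging parameters yourself; a minimal sketch (page size and variable names are made up):

// Manual paging against the untyped table using raw OData query parameters.
var table = App.azureClient.GetTable("SomeTable");
int pageSize = 20;
int pageIndex = 0; // increment as the user scrolls
var page = await table.ReadAsync("$top=" + pageSize + "&$skip=" + (pageIndex * pageSize));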
I'm fairly new to ASP.NET MVC 3, and to coding in general, really.
I have a very very small application i want to upload to my webhosting domain.
I am using entity framework, and it works fine on my local machine.
I've entered a new connection string to use my remote database instead; however, it doesn't really work. First of all, I have a single MSSQL database which cannot be dropped and recreated, so I cannot use that strategy in my initializer. I tried to supply null as the strategy, but to no avail: my tables simply do not get created in the database, and that's the problem. I don't know how to do that with Entity Framework.
When I run the application, it tries to select the data from the database, and that part works fine; I just don't know how to create those tables in my database through code-first.
I could probably get it to work by manually recreating the tables, but I want to know the code-first solution.
This is my initializer class
public class EntityInit : DropCreateDatabaseIfModelChanges<NewsContext>
{
    private NewsContext _db = new NewsContext();

    protected override void Seed(NewsContext context)
    {
        new List<News>
        {
            new News { Author = "Michael Brandt", Title = "Test News 1", NewsBody = "Bblablabalblaaaaa1" },
            new News { Author = "Michael Brandt", Title = "Test News 2", NewsBody = "Bblablabalblaaaaa2" },
            new News { Author = "Michael Brandt", Title = "Test News 3", NewsBody = "Bblablabalblaaaaa3" },
            new News { Author = "Michael Brandt", Title = "Test News 4", NewsBody = "Bblablabalblaaaaa4" },
        }.ForEach(a => context.News.Add(a));

        base.Seed(context);
    }
}
As I said, I'm really new to all this, so excuse me if I'm failing to provide the information you need to answer my question; just let me know and I will add it.
Initialization strategies do not support upgrade scenarios at the moment.
Initialization strategies should be used to initialize a new database; all subsequent changes should currently be done using scripts.
The best practice as we speak is to modify the database with a script, and then adjust the code by hand to reflect the change.
In future releases, upgrade/migration strategies will be available.
Try executing the scripts statement by statement from a custom IDatabaseInitializer.
From there you can read the database version stored in the db and apply any missing scripts. Simply store a db version in a table, then level up with change scripts.
public class Initializer : IDatabaseInitializer<MyContext>
{
    public void InitializeDatabase(MyContext context)
    {
        if (!context.Database.Exists() || !context.Database.CompatibleWithModel(false))
        {
            context.Database.Delete();
            context.Database.Create();

            var jobInstanceStateList = EnumExtensions.ConvertEnumToDictionary<JobInstanceStateEnum>().ToList();
            jobInstanceStateList.ForEach(kvp => context.JobInstanceStateLookup.Add(
                new JobInstanceStateLookup()
                {
                    JobInstanceStateLookupId = kvp.Value,
                    Value = kvp.Key
                }));

            context.SaveChanges();
        }
    }
}
Have you tried using CreateDatabaseIfNotExists?
With it, every time the context is initialized, the database will be created if it does not exist.
The database initializer can be set using the SetInitializer method of the Database class. If nothing is specified, EF will use the CreateDatabaseIfNotExists class to initialize the database.
Database.SetInitializer<NewsContext>(null);
or
Database.SetInitializer<NewsContext>(new CreateDatabaseIfNotExists<NewsContext>());
I'm not sure if this is the exact syntax as I have not written this in a while. But it should be very similar.
If you have a very small application, you could maybe go for SQL CE 4.0.
Bin deployment should allow you to run SQL CE 4.0 even if your hosting provider doesn't have its binaries installed. You can read more here.
That way you can actually use whatever initializer you want, since you no longer have the problem of not being able to drop databases and delete tables.
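For reference, a typical web.config entry for SQL CE 4.0 looks something like this (the context name and .sdf file name are placeholders):

<connectionStrings>
  <add name="NewsContext"
       connectionString="Data Source=|DataDirectory|News.sdf"
       providerName="System.Data.SqlServerCe.4.0" />
</connectionStrings>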
could this be of any help?