Waterline.js Joins/Populate with existing database - node.js

I have an existing Postgres database which I am using to build a sails.js-driven website, utilising Waterline for the ORM.
I'm fine with using my database in its existing form for everything other than population, i.e. joining tables.
When working with a non-production database I'm comfortable with how Waterline can produce join tables for me, but I'm really unsure how to bypass this and work with the current tables I have and their foreign key relationships. To give an idea of the types of tables I would typically have, I've shown an example below:
Intel
| Column      | Type   |
|-------------|--------|
| id          | int PK |
| alliance_id | int FK |
| planet_id   | int FK |
| dist        | int    |
| bg          | string |
| amps        | int    |

Alliance
| Column | Type   |
|--------|--------|
| id     | int PK |
| name   | string |
| score  | int    |
| value  | int    |
| size   | int    |

Planet
| Column    | Type   |
|-----------|--------|
| id        | int PK |
| rulerName | string |
| score     | int    |
| value     | int    |
| size      | int    |
So in the above tables I would typically be able to join Intel --> Alliance and Intel --> Planet and access the data across each of these.
What would I need in my Waterline models for Intel, Alliance and Planet to access this easily?
I'd love to do a:
Intel.find({ 'alliance.name': 'test' })
or
Intel.find().populate('planet')
and then somehow be able to access intel.planet.score or intel.alliance.name, etc.
Thanks for any help. I can add more information if required; just let me know in the comments.

First create models for all of your database tables, as mentioned here. You can then populate the models and return joined results.
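To make that concrete, here is a minimal sketch of what the three models could look like, assuming Sails/Waterline 0.x-style definitions and that the existing Postgres tables are named intel, alliance and planet (adjust tableName and columnName to whatever your schema actually uses). Setting migrate: 'safe' in config/models.js stops Waterline from trying to alter the existing tables.

// api/models/Alliance.js
module.exports = {
  tableName: 'alliance',
  attributes: {
    id: { type: 'integer', primaryKey: true },
    name: 'string',
    score: 'integer',
    value: 'integer',
    size: 'integer',
    intel: { collection: 'intel', via: 'alliance' }
  }
};

// api/models/Planet.js
module.exports = {
  tableName: 'planet',
  attributes: {
    id: { type: 'integer', primaryKey: true },
    rulerName: 'string',
    score: 'integer',
    value: 'integer',
    size: 'integer',
    intel: { collection: 'intel', via: 'planet' }
  }
};

// api/models/Intel.js
module.exports = {
  tableName: 'intel',
  attributes: {
    id: { type: 'integer', primaryKey: true },
    dist: 'integer',
    bg: 'string',
    amps: 'integer',
    // map the associations onto the existing foreign key columns
    alliance: { model: 'alliance', columnName: 'alliance_id' },
    planet: { model: 'planet', columnName: 'planet_id' }
  }
};

With those associations in place, something like this should work:

Intel.find()
  .populate('alliance')
  .populate('planet')
  .exec(function (err, records) {
    if (err) { return console.error(err); }
    // each record now carries the joined rows
    console.log(records[0].alliance.name, records[0].planet.score);
  });

Filtering on a populated attribute inside find() itself (e.g. alliance.name = 'test') isn't supported by Waterline's criteria language in these versions; a common workaround is to look up the Alliance record first and then query Intel with { alliance: allianceId }.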

Related

How to create sketches for millions of records using Spark?

I have a data frame that looks something like this:
| UserID | Platform | Genre | Publisher  |
|--------|----------|-------|------------|
| 1      | PS2      | FPS   | Activision |
| 2      | PS1      | Race  | EA Sports  |
| 3      | PS2      | RTS   | Microsoft  |
| 4      | Xbox     | Race  | EA Sports  |
Now, from the above data frame, I want to build a Map whose keys combine the column name and value, and whose values are the set of user IDs.
For example:
Platform_PS2 = [1, 3]
Platform_Xbox = [4]
Platform_PS1 = [2]
Genre_Race = [2, 4]
Basically, I want to build sketches from these arrays at the end.
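To make the target structure concrete, here is a small sketch of that inversion on the example rows (plain JavaScript, since the excerpt contains no Spark code; on Spark itself you would typically explode each row into (column_value, userId) pairs and aggregate the ids per key, e.g. with groupBy and collect_set, before building a sketch per key).

// Example rows from the data frame above
const rows = [
  { UserID: 1, Platform: 'PS2',  Genre: 'FPS',  Publisher: 'Activision' },
  { UserID: 2, Platform: 'PS1',  Genre: 'Race', Publisher: 'EA Sports' },
  { UserID: 3, Platform: 'PS2',  Genre: 'RTS',  Publisher: 'Microsoft' },
  { UserID: 4, Platform: 'Xbox', Genre: 'Race', Publisher: 'EA Sports' },
];

// Build "Column_value" -> set of user ids
const index = new Map();
for (const row of rows) {
  for (const col of ['Platform', 'Genre', 'Publisher']) {
    const key = col + '_' + row[col];
    if (!index.has(key)) index.set(key, new Set());
    index.get(key).add(row.UserID);
  }
}

console.log([...index.get('Platform_PS2')]); // [ 1, 3 ]
console.log([...index.get('Genre_Race')]);   // [ 2, 4 ]

Each of those sets is then the input to the sketching step mentioned at the end of the question.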

Select rows from array of uuid when dealing with two tables

I have products and providers. Each product has a uuid, and each provider has a list of uuids of the products that they can provide.
How do I select all the products that a given (i.e. by provider uuid) provider can offer?
Products:
+------+------+------+
| uuid | date | name |
+------+------+------+
| 0 | - | - |
| 1 | - | - |
| 2 | - | - |
+------+------+------+
Providers:
+------+----------------+
| uuid | array_products |
+------+----------------+
| 0 | [...] |
| 1 | [...] |
| 2 | [...] |
+------+----------------+
select p.name, u.product_uuid
from products p
join (
    select unnest(array_products) as product_uuid
    from providers
    where uuid = :target_provider_uuid
) u on p.uuid = u.product_uuid;
Please note, however, that your data design is not efficient and is much harder to work with than a normalized one.

How to conditionally query tables based on an OData request in an Azure Logic App

I am a total newbie when it comes to both oData and Logic Apps.
My scenario is as follows:
I have an Azure SQL database with two tables (daily_stats, weekly_stats) for users
I have a Logic App that I managed to test successfully, but it targets one table. It is triggered by an HTTP request and initialises a variable using the following expression to get the filter query:
if(equals(coalesce(trigger()['outputs']?['queries']?['filter'],''),''),'1 eq 1',trigger()['outputs']?['queries']?['filter'])
The problem is how to query a different table based on what the user passes in the OData GET request.
I imagine I need a condition, and the pseudocode would be something like:
For daily stats the ODATA query URL would be
https://myproject.logic.azure.com/workflows/some-guid-here/triggers/manual/paths/invoke/daily_stats/api-version=2016-10-01/&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=my-key-here&filter=userid eq 'richard'
For weekly stats the ODATA query URL would be
https://myproject.logic.azure.com/workflows/some-guid-here/triggers/manual/paths/invoke/weekly_stats/api-version=2016-10-01/&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=my-sig-here&filter=userid eq 'richard'
If it is daily_stats, it queries the daily_stats stored procedure/table for the user = richard
If it is weekly_stats, it queries the weekly_stats stored procedure/table for the user = richard
Edit: Added an ASCII flow diagram
          +----------------------+
          |    HTTP OData GET    |
          |        Request       |
          +----------+-----------+
                     |
                     v
             +-------+---------+
             |                 |
             |   filter has    |
             |   daily_stats   |
             |                 |
             +-------+---------+
                     |
                     |
+-------------+      |       +--------------+
|             |      |       |              |
|             | YES  |  NO   |              |
|   query     +<-----+------>+   query      |
|   daily     |              |   weekly     |
|   stats     |              |   stats      |
|   table     |              |   table      |
|             |              |              |
+-------------+              +--------------+
There is a Switch action; for more information you can refer to Create switch statements that run workflow actions based on specific values in Azure Logic Apps.
Below is my sample. Note that switch statements support only equality operators; if you need other relational operators, such as "greater than", use a conditional statement instead.

How can I do multiple concurrent insert transactions against postgres without causing a deadlock?

I have a large dump file that I am processing in parallel and inserting into a Postgres 9.4.5 database. There are ~10 processes that each start a transaction, insert ~X000 objects, and then commit, repeating until their chunk of the file is done. Except they never finish, because the database locks up.
The dump contains 5 million or so objects, each object representing an album. An object has a title, a release date, a list of artists, a list of track names, etc. I have a release table for each one of these (whose primary key comes from the object in the dump) and then join tables with their own primary keys for things like release_artist and release_track.
The tables look like this:
Table: mdc_releases
  Column  |           Type           | Modifiers | Storage  | Stats target | Description
----------+--------------------------+-----------+----------+--------------+-------------
 id       | integer                  | not null  | plain    |              |
 title    | text                     |           | extended |              |
 released | timestamp with time zone |           | plain    |              |
Indexes:
    "mdc_releases_pkey" PRIMARY KEY, btree (id)
Table: mdc_release_artists
   Column   |  Type   |                            Modifiers                              | Storage | Stats target | Description
------------+---------+-------------------------------------------------------------------+---------+--------------+-------------
 id         | integer | not null default nextval('mdc_release_artists_id_seq'::regclass) | plain   |              |
 release_id | integer |                                                                   | plain   |              |
 artist_id  | integer |                                                                   | plain   |              |
Indexes:
    "mdc_release_artists_pkey" PRIMARY KEY, btree (id)
and inserting an object looks like this:
insert into release(...) values(...) returning id; -- refer to id below as $ID
insert into release_meta(release_id, ...) values ($ID, ...);
insert into release_artists(release_id, ...) values ($ID, ...), ($ID, ...), ...;
insert into release_tracks(release_id, ...) values ($ID, ...), ($ID, ...), ...;
So the transactions look like BEGIN, the above snippet repeated 5000 times, COMMIT. I've done some googling on this and I'm not sure why inserts that look independent to me are causing deadlocks.
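For reference, the per-process loop described above would look roughly like the following sketch. The question doesn't say which language or client library the workers use, so node-postgres ('pg') is only an assumption here, and the table/column names simply follow the pseudocode above rather than the real schema.

// Hypothetical sketch of one worker's loop, using node-postgres ('pg').
const { Client } = require('pg');

async function insertChunk(albums) {
  const client = new Client(); // connection settings taken from the environment
  await client.connect();
  try {
    await client.query('BEGIN');
    for (const album of albums) { // ~5000 objects per transaction
      const res = await client.query(
        'insert into release (id, title, released) values ($1, $2, $3) returning id',
        [album.id, album.title, album.released]
      );
      const releaseId = res.rows[0].id; // the $ID referred to above
      for (const artistId of album.artistIds) {
        await client.query(
          'insert into release_artists (release_id, artist_id) values ($1, $2)',
          [releaseId, artistId]
        );
      }
    }
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    await client.end();
  }
}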
This is what select * from pg_stat_activity shows:
|          state_change         | waiting |        state        | backend_xid | backend_xmin |              query
+-------------------------------+---------+---------------------+-------------+--------------+---------------------------------
| 2016-01-04 18:42:35.542629-08 | f       | active              |             |      2597876 | select * from pg_stat_activity;
| 2016-01-04 07:36:06.730736-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:37:36.066837-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:37:36.314909-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:37:49.491939-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:36:04.865133-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:38:39.344163-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:36:48.400621-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:34:37.802813-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:37:24.615981-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:37:10.887804-08 | f       | idle in transaction |             |              | BEGIN
| 2016-01-04 07:37:44.200148-08 | f       | idle in transaction |             |              | BEGIN

Re-sort key-value combination

The following example just shows the pattern; my data is much bigger.
I have a table like
| Variable | String |
|:---------|-------:|
| V1 | Hello |
| V2 | little |
| V3 | World |
I have another table where different arrangements are defined
| Arrangement1 | Arrangement2 |
|:-------------|-------------:|
| V3 | V2 |
| V2 | V1 |
| V1 | V3 |
My output, depending on the requested arrangement (e.g. Arrangement1), should be
| Variable | Value |
|:---------|------:|
| V3 | World |
| V2 | little |
| V1 | Hello |
Till now I have tried to realise an approach with .find and arrays, but I think there might be an easier way (maybe with a dictionary?). Does anyone have an idea with good performance?
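A dictionary-style lookup does keep this simple. The asker's language isn't shown in this excerpt, so this is only a minimal, language-agnostic sketch in plain JavaScript: build the Variable-to-String lookup once, then map the requested arrangement column through it.

// Variable -> String lookup, built once from the first table
const variableToString = new Map([
  ['V1', 'Hello'],
  ['V2', 'little'],
  ['V3', 'World'],
]);

// One arrangement (e.g. Arrangement1) is just an ordered list of variable names
const arrangement1 = ['V3', 'V2', 'V1'];

// Resolve the arrangement to (Variable, Value) rows in the requested order
const result = arrangement1.map(v => ({ variable: v, value: variableToString.get(v) }));

console.log(result);
// [ { variable: 'V3', value: 'World' },
//   { variable: 'V2', value: 'little' },
//   { variable: 'V1', value: 'Hello' } ]

Each lookup is O(1), so re-sorting is linear in the length of the arrangement, which should scale fine for much bigger data.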
