A while ago I asked how to perform "through associations" in Sails.js.
I have the following tables:
genres
+-----------+--------------+------+-----+
| Field | Type | Null | Key |
+-----------+--------------+------+-----+
| id | int(6) | NO | PRI |
| slug | varchar(255) | NO | |
| parent_id | int(11) | YES | MUL |
+-----------+--------------+------+-----+
genres_radios
+----------+--------+------+-----+
| Field | Type | Null | Key |
+----------+--------+------+-----+
| genre_id | int(6) | NO | MUL |
| radio_id | int(6) | NO | MUL |
+----------+--------+------+-----+
radios
+-----------+--------------+------+-----+
| Field | Type | Null | Key |
+-----------+--------------+------+-----+
| id | int(5) | NO | PRI |
| slug | varchar(100) | NO | |
| url | varchar(100) | NO | |
+-----------+--------------+------+-----+
The answer is here: Sails.js associations.
Now I was wondering: what if I had a new field in the genres_radios table, for example:
genres_radios
+----------+--------+------+-----+
| Field | Type | Null | Key |
+----------+--------+------+-----+
| genre_id | int(6) | NO | MUL |
| new_field| int(10)| NO | |
| radio_id | int(6) | NO | MUL |
+----------+--------+------+-----+
How would I get that attribute when performing the join?
It is not implemented yet. Quoting Waterline's documentation:
Many-to-Many Through Associations
Many-to-Many through associations behave the same way as many-to-many
associations with the exception of the join table being automatically
created for you. This allows you to attach additional attributes onto
the relationship inside of the join table.
Coming Soon
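Until through associations land, the usual workaround is to define the join table as a first-class model with its own attributes and two plain associations. As a conceptual sketch only (plain Python data classes rather than Waterline's JavaScript model syntax, with made-up sample values), the explicit join model is what carries the extra field:

```python
# Conceptual sketch: treat the join table as its own model so the
# relationship can carry extra attributes (here, new_field).
from dataclasses import dataclass

@dataclass
class Genre:
    id: int
    slug: str

@dataclass
class Radio:
    id: int
    slug: str
    url: str

@dataclass
class GenreRadio:       # explicit join model, like a genres_radios row
    genre_id: int
    radio_id: int
    new_field: int      # the extra attribute on the relationship

links = [GenreRadio(1, 10, 42), GenreRadio(1, 11, 7)]

# "Join" for genre 1, keeping the extra attribute from the join row:
rows = [(l.radio_id, l.new_field) for l in links if l.genre_id == 1]
print(rows)  # [(10, 42), (11, 7)]
```

In Waterline terms this means giving genres_radios its own model with `genre` and `radio` many-to-one attributes plus `new_field`, and querying it directly instead of relying on the auto-generated join table.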
Related
I have a user table as follows
|------------|-----------------|
| user_id | visited |
|------------|-----------------|
| 1 | 12-23-2021 |
| 1 | 11-23-2021 |
| 1 | 10-23-2021 |
| 2 | 01-21-2021 |
| 3 | 02-19-2021 |
| 3 | 02-25-2021 |
|------------|-----------------|
I'm trying to create an incremental model that gets each user's most recent visited date.
Since the incremental model needs a unique key, I'm concatenating user_id||visited -> unique_id.
DBT + Spark
{{ config(
materialized='incremental',
file_format='delta',
unique_key='unique_id',
incremental_strategy='merge'
) }}
with CTE as (
select user_id,
visited,
user_id||visited as unique_id
from my_table
{% if is_incremental() %}
where visited >= date_add(current_date, -1)
{% endif %}
)
select user_id,
unique_id,
max(visited) as recent_visited_date
from CTE
group by 1,2
The above model gives me the following result:
|------------|------------------|-----------------------|
| user_id | unique_id |recent_visited_date |
|------------|------------------|-----------------------|
| 1 | 112-23-2021 | 12-23-2021 |
| 1 | 111-23-2021 | 11-23-2021 |
| 1 | 110-23-2021 | 10-23-2021 |
| 2 | 201-21-2021 | 01-21-2021 |
| 3 | 302-19-2021 | 02-19-2021 |
| 3 | 302-25-2021 | 02-25-2021 |
|------------|------------------|-----------------------|
The output I want is:
|------------|------------------------|
| user_id | recent_visited_date |
|------------|------------------------|
| 1 | 12-23-2021 |
| 2 | 01-21-2021 |
| 3 | 02-25-2021 |
|------------|------------------------|
I know that for an incremental model with the merge strategy, the unique_key must exist in the final table so incoming rows can be compared against it,
but using this unique_id produces the wrong output.
Is there another way to get the max(visited) per user?
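One possible way around this (an assumption on my part, not something confirmed in the thread) is to merge on user_id alone: group by user_id with max(visited) in the select, and set unique_key='user_id'. Since a plain merge overwrites the matched row, the model's select must itself take the greatest of the old and new dates. A toy Python simulation of that keep-the-max merge:

```python
# Toy simulation of merging on user_id while keeping max(visited).
# "existing" stands in for the target table keyed by unique_key.
from datetime import date

existing = {}  # user_id -> most recent visited date

def merge_incremental(rows):
    """Merge a batch: dedupe to one row per user_id (max visited),
    then upsert, keeping the greater of old and new dates."""
    batch = {}
    for user_id, visited in rows:
        if user_id not in batch or visited > batch[user_id]:
            batch[user_id] = visited
    for user_id, visited in batch.items():
        if user_id not in existing or visited > existing[user_id]:
            existing[user_id] = visited

merge_incremental([(1, date(2021, 10, 23)), (1, date(2021, 11, 23)),
                   (2, date(2021, 1, 21))])
merge_incremental([(1, date(2021, 12, 23)), (3, date(2021, 2, 19)),
                   (3, date(2021, 2, 25))])

print(existing)  # user 1 -> 2021-12-23, user 2 -> 2021-01-21, user 3 -> 2021-02-25
```

In the dbt model this corresponds to selecting user_id, max(visited) grouped by user_id only, with unique_key='user_id' in the config.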
I have a Hive table with the structure below:
+---------------+--------------+----------------------+
| column_value | metric_name | key |
+---------------+--------------+----------------------+
| A37B | Mean | {0:"202006",1:"1"} |
| ACCOUNT_ID | Mean | {0:"202006",1:"2"} |
| ANB_200 | Mean | {0:"202006",1:"3"} |
| ANB_201 | Mean | {0:"202006",1:"4"} |
| AS82_RE | Mean | {0:"202006",1:"5"} |
| ATTR001 | Mean | {0:"202007",1:"2"} |
| ATTR001_RE | Mean | {0:"202007",1:"3"} |
| ATTR002 | Mean | {0:"202007",1:"4"} |
| ATTR002_RE | Mean | {0:"202007",1:"5"} |
| ATTR003 | Mean | {0:"202008",1:"3"} |
| ATTR004 | Mean | {0:"202008",1:"4"} |
| ATTR005 | Mean | {0:"202008",1:"5"} |
| ATTR006 | Mean | {0:"202009",1:"4"} |
| ATTR006 | Mean | {0:"202009",1:"5"} |
I need to write a Spark SQL query that filters on the key column with a NOT IN condition on the combination of both map keys.
The following query works fine in HiveQL via Beeline:
select * from your_data where key[0] between '202006' and '202009' and key NOT IN ( map(0,"202009",1,"5") );
But when I try the same query in Spark SQL, I get an error:
cannot resolve due to data type mismatch: map<int,string>
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:115)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:107)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:277)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
Please help!
I got the answer from a different question I asked earlier. The following query works fine:
select * from your_data where key[0] between 202006 and 202009 and NOT (key[0]="202009" and key[1]="5" );
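For reference, the logic of that working predicate, sketched in plain Python (dicts standing in for the map&lt;int,string&gt; column; the sample rows are made up):

```python
# Sketch of: key[0] between '202006' and '202009'
#            and NOT (key[0] = '202009' and key[1] = '5')
rows = [
    {0: "202006", 1: "1"},
    {0: "202009", 1: "4"},
    {0: "202009", 1: "5"},   # the combination to exclude
    {0: "202010", 1: "2"},   # outside the BETWEEN range
]

def keep(key):
    in_range = "202006" <= key[0] <= "202009"
    excluded = key[0] == "202009" and key[1] == "5"
    return in_range and not excluded

kept = [r for r in rows if keep(r)]
print(kept)  # keeps only the first two rows
```

Rewriting the map-valued NOT IN as a boolean condition on the individual map entries is what sidesteps Spark's map-comparison limitation.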
I have the following Hive table. The key column holds a map value (key-value pairs). I am executing a Spark SQL query with a BETWEEN condition on the key column, but it returns no records.
+---------------+--------------+----------------------+---------+
| column_value | metric_name | key |key[0] |
+---------------+--------------+----------------------+---------+
| A37B | Mean | {0:"202009",1:"12"} | 202009 |
| ACCOUNT_ID | Mean | {0:"202009",1:"12"} | 202009 |
| ANB_200 | Mean | {0:"202009",1:"12"} | 202009 |
| ANB_201 | Mean | {0:"202009",1:"12"} | 202009 |
| AS82_RE | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR001 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR001_RE | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR002 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR002_RE | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR003 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR004 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR005 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR006 | Mean | {0:"202009",1:"12"} | 202008 |
I am running the Spark SQL query below:
SELECT column_value, metric_name,key FROM table where metric_name = 'Mean' and column_value IN ('ATTR003','ATTR004','ATTR005') and key[0] between 202009 and 202003
The query does not return any records. If I use IN (202009,202007,202008,202006,202005,202004,202003) instead of BETWEEN, it returns results.
Need help!
Try the BETWEEN values the other way around, e.g. between 202003 and 202009.
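The reason: BETWEEN expects the lower bound first. `x BETWEEN lo AND hi` means `lo <= x <= hi`, so with the bounds reversed no value can ever match. A minimal Python check of that semantics:

```python
# SQL's x BETWEEN lo AND hi is equivalent to lo <= x <= hi.
def between(x, lo, hi):
    return lo <= x <= hi

print(between(202008, 202003, 202009))  # True: correct bound order
print(between(202008, 202009, 202003))  # False: swapped bounds match nothing
```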
I am having an issue with my React app where Formik will not compile due to a TypeScript error in the module. Does anyone have advice on how to get it to compile?
The error is:
D:/Projects/MyLibraryWebsite/MyLibrary.Website/ClientApp/node_modules/formik/dist/Form.d.ts
TypeScript error in D:/Projects/MyLibraryWebsite/MyLibrary.Website/ClientApp/node_modules/formik/dist/Form.d.ts(3,150):
Type '"hidden" | "color" | "style" | "title" | "key" | "children" | "name" | "translate" | "action" | "defaultChecked" | "defaultValue" | "suppressContentEditableWarning" | "suppressHydrationWarning" | ... 249 more ... | "noValidate"' does not satisfy the constraint '"hidden" | "color" | "style" | "title" | "key" | "ref" | "children" | "name" | "action" | "defaultChecked" | "defaultValue" | "suppressContentEditableWarning" | "suppressHydrationWarning" | ... 249 more ... | "noValidate"'.
Type '"translate"' is not assignable to type '"hidden" | "color" | "style" | "title" | "key" | "ref" | "children" | "name" | "action" | "defaultChecked" | "defaultValue" | "suppressContentEditableWarning" | "suppressHydrationWarning" | ... 249 more ... | "noValidate"'. TS2344
1 | import * as React from 'react';
2 | export declare type FormikFormProps = Pick<React.FormHTMLAttributes<HTMLFormElement>, Exclude<keyof React.FormHTMLAttributes<HTMLFormElement>, 'onReset' | 'onSubmit'>>;
> 3 | export declare const Form: React.ForwardRefExoticComponent<Pick<React.DetailedHTMLProps<React.FormHTMLAttributes<HTMLFormElement>, HTMLFormElement>, "acceptCharset" | "action" | "autoComplete" | "encType" | "method" | "name" | "noValidate" | "target" | "defaultChecked" | "defaultValue" | "suppressContentEditableWarning" | "suppressHydrationWarning" | "accessKey" | "className" | "contentEditable" | "contextMenu" | "dir" | "draggable" | "hidden" | "id" | "lang" | "placeholder" | "slot" | "spellCheck" | "style" | "tabIndex" | "title" | "translate" | "radioGroup" | "role" | "about" | "datatype" | "inlist" | "prefix" | "property" | "resource" | "typeof" | "vocab" | "autoCapitalize" | "autoCorrect" | "autoSave" | "color" | "itemProp" | "itemScope" | "itemType" | "itemID" | "itemRef" | "results" | "security" | "unselectable" | "inputMode" | "is" | "aria-activedescendant" | "aria-atomic" | "aria-autocomplete" | "aria-busy" | "aria-checked" | "aria-colcount" | "aria-colindex" | "aria-colspan" | "aria-controls" | "aria-current" | "aria-describedby" | "aria-details" | "aria-disabled" | "aria-dropeffect" | "aria-errormessage" | "aria-expanded" | "aria-flowto" | "aria-grabbed" | "aria-haspopup" | "aria-hidden" | "aria-invalid" | "aria-keyshortcuts" | "aria-label" | "aria-labelledby" | "aria-level" | "aria-live" | "aria-modal" | "aria-multiline" | "aria-multiselectable" | "aria-orientation" | "aria-owns" | "aria-placeholder" | "aria-posinset" | "aria-pressed" | "aria-readonly" | "aria-relevant" | "aria-required" | "aria-roledescription" | "aria-rowcount" | "aria-rowindex" | "aria-rowspan" | "aria-selected" | "aria-setsize" | "aria-sort" | "aria-valuemax" | "aria-valuemin" | "aria-valuenow" | "aria-valuetext" | "children" | "dangerouslySetInnerHTML" | "onCopy" | "onCopyCapture" | "onCut" | "onCutCapture" | "onPaste" | "onPasteCapture" | "onCompositionEnd" | "onCompositionEndCapture" | "onCompositionStart" | "onCompositionStartCapture" | "onCompositionUpdate" | 
"onCompositionUpdateCapture" | "onFocus" | "onFocusCapture" | "onBlur" | "onBlurCapture" | "onChange" | "onChangeCapture" | "onBeforeInput" | "onBeforeInputCapture" | "onInput" | "onInputCapture" | "onReset" | "onResetCapture" | "onSubmit" | "onSubmitCapture" | "onInvalid" | "onInvalidCapture" | "onLoad" | "onLoadCapture" | "onError" | "onErrorCapture" | "onKeyDown" | "onKeyDownCapture" | "onKeyPress" | "onKeyPressCapture" | "onKeyUp" | "onKeyUpCapture" | "onAbort" | "onAbortCapture" | "onCanPlay" | "onCanPlayCapture" | "onCanPlayThrough" | "onCanPlayThroughCapture" | "onDurationChange" | "onDurationChangeCapture" | "onEmptied" | "onEmptiedCapture" | "onEncrypted" | "onEncryptedCapture" | "onEnded" | "onEndedCapture" | "onLoadedData" | "onLoadedDataCapture" | "onLoadedMetadata" | "onLoadedMetadataCapture" | "onLoadStart" | "onLoadStartCapture" | "onPause" | "onPauseCapture" | "onPlay" | "onPlayCapture" | "onPlaying" | "onPlayingCapture" | "onProgress" | "onProgressCapture" | "onRateChange" | "onRateChangeCapture" | "onSeeked" | "onSeekedCapture" | "onSeeking" | "onSeekingCapture" | "onStalled" | "onStalledCapture" | "onSuspend" | "onSuspendCapture" | "onTimeUpdate" | "onTimeUpdateCapture" | "onVolumeChange" | "onVolumeChangeCapture" | "onWaiting" | "onWaitingCapture" | "onAuxClick" | "onAuxClickCapture" | "onClick" | "onClickCapture" | "onContextMenu" | "onContextMenuCapture" | "onDoubleClick" | "onDoubleClickCapture" | "onDrag" | "onDragCapture" | "onDragEnd" | "onDragEndCapture" | "onDragEnter" | "onDragEnterCapture" | "onDragExit" | "onDragExitCapture" | "onDragLeave" | "onDragLeaveCapture" | "onDragOver" | "onDragOverCapture" | "onDragStart" | "onDragStartCapture" | "onDrop" | "onDropCapture" | "onMouseDown" | "onMouseDownCapture" | "onMouseEnter" | "onMouseLeave" | "onMouseMove" | "onMouseMoveCapture" | "onMouseOut" | "onMouseOutCapture" | "onMouseOver" | "onMouseOverCapture" | "onMouseUp" | "onMouseUpCapture" | "onSelect" | "onSelectCapture" | "onTouchCancel" 
| "onTouchCancelCapture" | "onTouchEnd" | "onTouchEndCapture" | "onTouchMove" | "onTouchMoveCapture" | "onTouchStart" | "onTouchStartCapture" | "onPointerDown" | "onPointerDownCapture" | "onPointerMove" | "onPointerMoveCapture" | "onPointerUp" | "onPointerUpCapture" | "onPointerCancel" | "onPointerCancelCapture" | "onPointerEnter" | "onPointerEnterCapture" | "onPointerLeave" | "onPointerLeaveCapture" | "onPointerOver" | "onPointerOverCapture" | "onPointerOut" | "onPointerOutCapture" | "onGotPointerCapture" | "onGotPointerCaptureCapture" | "onLostPointerCapture" | "onLostPointerCaptureCapture" | "onScroll" | "onScrollCapture" | "onWheel" | "onWheelCapture" | "onAnimationStart" | "onAnimationStartCapture" | "onAnimationEnd" | "onAnimationEndCapture" | "onAnimationIteration" | "onAnimationIterationCapture" | "onTransitionEnd" | "onTransitionEndCapture" | "key"> & React.RefAttributes<HTMLFormElement>>;
After posting an issue on the module's GitHub, the creator suggested updating @types/react (e.g. npm install --save-dev @types/react), which fixed the issue.
I have a large dump file that I am processing in parallel and inserting into a Postgres 9.4.5 database. There are ~10 processes that each start a transaction, insert ~X000 objects, and then commit, repeating until their chunk of the file is done. Except they never finish, because the database locks up.
The dump contains 5 million or so objects, each representing an album. An object has a title, a release date, a list of artists, a list of track names, etc. I have a release table for each of these (whose primary key comes from the object in the dump) and then join tables with their own primary keys for things like release_artist and release_track.
The tables look like this:
Table: mdc_releases
Column | Type | Modifiers | Storage | Stats target | Description
-----------+--------------------------+-----------+----------+--------------+-------------
id | integer | not null | plain | |
title | text | | extended | |
released | timestamp with time zone | | plain | |
Indexes:
"mdc_releases_pkey" PRIMARY KEY, btree (id)
Table: mdc_release_artists
Column | Type | Modifiers | Storage | Stats target | Description
------------+---------+------------------------------------------------------------------+---------+--------------+-------------
id | integer | not null default nextval('mdc_release_artists_id_seq'::regclass) | plain | |
release_id | integer | | plain | |
artist_id | integer | | plain | |
Indexes:
"mdc_release_artists_pkey" PRIMARY KEY, btree (id)
and inserting an object looks like this:
insert into release(...) values(...) returning id; -- refer to the returned id below as $ID
insert into release_meta(release_id, ...) values ($ID, ...);
insert into release_artists(release_id, ...) values ($ID, ...), ($ID, ...), ...;
insert into release_tracks(release_id, ...) values ($ID, ...), ($ID, ...), ...;
So the transactions look like: BEGIN, the above snippet 5000 times, COMMIT. I've done some googling on this, and I'm not sure why what look to me like independent inserts are causing deadlocks.
This is what select * from pg_stat_activity shows:
| state_change | waiting | state | backend_xid | backend_xmin | query
+-------------------------------+---------+---------------------+-------------+--------------+---------------------------------
| 2016-01-04 18:42:35.542629-08 | f | active | | 2597876 | select * from pg_stat_activity;
| 2016-01-04 07:36:06.730736-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:37:36.066837-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:37:36.314909-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:37:49.491939-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:36:04.865133-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:38:39.344163-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:36:48.400621-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:34:37.802813-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:37:24.615981-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:37:10.887804-08 | f | idle in transaction | | | BEGIN
| 2016-01-04 07:37:44.200148-08 | f | idle in transaction | | | BEGIN
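Worth noting: "idle in transaction" means the backend has executed BEGIN and is now waiting on the client, not on a database lock, so the stall is likely in the worker processes themselves rather than in Postgres. A toy Python sketch (no database involved; the queue is just a stand-in for whatever the worker reads between statements) of how a client that blocks mid-transaction produces exactly this state:

```python
# Toy reproduction: a worker that opens a transaction and then blocks
# waiting for more input leaves its session "idle in transaction".
# The database isn't deadlocked; the client is stalled.
import queue
import threading

work = queue.Queue()                       # feeder never supplies an item
session_state = {"state": "idle"}

def worker():
    session_state["state"] = "idle in transaction"   # BEGIN was sent
    try:
        work.get(timeout=0.2)              # blocks, e.g. reading the dump
        session_state["state"] = "idle"    # COMMIT would go here
    except queue.Empty:
        pass                               # transaction still open

t = threading.Thread(target=worker)
t.start()
t.join()
print(session_state["state"])  # "idle in transaction"
```

If pg_stat_activity shows every session idle in transaction with BEGIN as the last query, the next place to look is what the ~10 worker processes are blocked on between statements.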