I would like to run multiple SQL statements in a single AWS Glue script using boto3.
The first query creates a table from an S3 bucket (Parquet files):
import boto3
client = boto3.client('athena')
config = {'OutputLocation': 's3://LOGS'}
client.start_query_execution(
    QueryString="""CREATE EXTERNAL TABLE IF NOT EXISTS my_database_name.my_table (
      `apples` string,
      `oranges` string,
      `price` int
    ) PARTITIONED BY (
      update_date string
    )
    STORED AS PARQUET
    LOCATION 's3://LOCATION'
    TBLPROPERTIES ('parquet.compression' = 'SNAPPY');""",
    QueryExecutionContext={'Database': 'my_database_name'},
    ResultConfiguration=config)
This only creates the table. Then I have to run the following query to load the partitions and make the data queryable.
client.start_query_execution(
    QueryString="MSCK REPAIR TABLE my_database_name.my_table;",
    QueryExecutionContext={'Database': 'my_database_name'},
    ResultConfiguration=config)
Unfortunately, when I run the above statements in a single Glue script, the partitions are not updated (only the table is created), so I have to separate them into two jobs.
Is it possible to have a single script that executes multiple queries in sequence?
Using Glue crawlers is not an option.
You should explore the alternative of using partition projection, which removes the need to load partitions via MSCK REPAIR TABLE or crawlers. See the docs: https://docs.aws.amazon.com/athena/latest/ug/partition-projection.html
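As a rough sketch (the date format, range, and storage template below are assumptions, since the question does not show the actual partition values or folder layout), projection can be switched on for the existing table with an ALTER TABLE statement, after which Athena computes partitions at query time and MSCK REPAIR TABLE is no longer needed:
-- assumes update_date values look like yyyy-MM-dd and the data sits under s3://LOCATION/<date>/
ALTER TABLE my_database_name.my_table SET TBLPROPERTIES (
  'projection.enabled' = 'true',
  'projection.update_date.type' = 'date',
  'projection.update_date.format' = 'yyyy-MM-dd',
  'projection.update_date.range' = '2020-01-01,NOW',
  'storage.location.template' = 's3://LOCATION/${update_date}/'
);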
Using Azure Synapse, Dedicated SQL pool.
How can I structure my tables underneath a button that represents the schema?
This is a small issue that really makes a big impact when many tables and schemas are used in the database and users need to navigate to the correct schema quickly.
I tried dragging the schema over the tables section, but nothing worked.
To structure the tables, we first need to create a schema in the dedicated SQL pool using the command below:
CREATE SCHEMA <schemaName>
Then we need to create the table under that schema, with the required columns and suitable data types.
Table creation:
CREATE TABLE <schemaName>.<tableName> (col1 dataType, col2 dataType)
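For instance, with hypothetical names, the pattern looks like this:
CREATE SCHEMA sales;
GO
CREATE TABLE sales.Orders
(
    OrderId INT,
    OrderDate DATE
);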
I created an external table in a dedicated SQL pool in Synapse following the steps below:
Schema creation: created the schema (using CREATE SCHEMA as above).
Created an external data source:
CREATE EXTERNAL DATA SOURCE [DATASOURCE] WITH
(
    LOCATION = '<location>'
)
Created an external file format:
CREATE EXTERNAL FILE FORMAT [FileFormat1] WITH
(
FORMAT_TYPE = DELIMITEDTEXT
)
I created the external table using the above data source and file format with the code below:
CREATE EXTERNAL TABLE [wwi].[information2]
(
[Id] INT
)
WITH
(
LOCATION = '<folder/file>',
DATA_SOURCE = [DATASOURCE],
FILE_FORMAT = [FileFormat1]
)
In this way we can structure the tables in a Synapse dedicated SQL pool.
I have some containers in ADLS (gen2) and have multiple folders within that container. I would like to have a mechanism to scan those folders to infer their schema and detect partitions and update them in the data catalog. How do I achieve this functionality in Azure?
Sample:
- container1
---table1-folder
-----10-12-1970
-------files1.parquet
-------files2.parquet
-------files3.parquet
-----10-13-1970
-------files1.parquet
-------files2.parquet
-------files3.parquet
-----10-14-1970
-------files1.parquet
-------files2.parquet
----table2-folder
-----zipcode1
-------files1.parquet
-------files2.parquet
-------files3.parquet
-----zipcode2
-------files1.parquet
-------files2.parquet
...
So, what I expect is that in the catalog it will create two tables (table1 & table2), where table1 will have date-based partitions (3 dates for this case) and the underlying data within that table. Same for table2, which will have two partitions and their underlying data.
In the AWS world, I can run a Glue crawler that crawls these files, infers schemas and partitions, and populates the Glue Data Catalog, and later I can query them through Athena. What's the Azure equivalent approach to achieve something similar?
I would recommend looking at Azure Synapse Analytics Serverless SQL. You can create a view which consumes the folders and does partition elimination if you follow this approach:
-- If you do not have a Master Key on your DW you will need to create one
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
GO
CREATE DATABASE SCOPED CREDENTIAL msi_cred
WITH IDENTITY = 'Managed Service Identity' ;
GO
CREATE EXTERNAL DATA SOURCE ds_container1
WITH
( TYPE = HADOOP ,
LOCATION = 'abfss://container1@mystorageaccount.dfs.core.windows.net' ,
CREDENTIAL = msi_cred
) ;
GO
CREATE VIEW Table2
AS SELECT *, f.filepath(1) AS [zipcode]
FROM
OPENROWSET(
BULK 'table2-folder/*/*.parquet',
DATA_SOURCE = 'ds_container1',
FORMAT='PARQUET'
) AS f
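A filter on the derived zipcode column then lets the serverless pool skip the folders that don't match (the folder name below is taken from the sample layout in the question):
SELECT COUNT(*) AS row_count
FROM Table2
WHERE zipcode = 'zipcode1';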
Then set up Azure Purview as your data catalog and have it index your Synapse serverless SQL pool.
I am using the Boto3 package in Python 3 to execute an Athena query. From the documentation of Boto3, I understand that I can specify a query execution context, i.e. a database name under which the query has to be executed. With a properly specified query execution context, we can omit the fully qualified table name (db_name.table_name) from the query and instead use just the table name.
So the query SELECT * FROM db1.tab1 can be converted to SELECT * FROM tab1 with QueryExecutionContext: {'database': 'db1'}.
The problem: I need to run a query on Athena from python which looks something like this
SELECT *
FROM (SELECT * FROM db1.tab1) AS temp1
INNER JOIN (SELECT * FROM db2.tab2) AS temp2
    ON temp1.id = temp2.id
As we can see, the query joins tables from two different databases. If I want to omit the database names from this query, how do I specify the QueryExecutionContext ?
QueryExecutionContext accepts only one database as an argument. So if you want to run a query across multiple databases, you have to pass the fully qualified table name (with its database) for any table outside that one database.
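For example, as a sketch assuming QueryExecutionContext is set to {'Database': 'db1'}, only the table from the other database needs to stay fully qualified:
SELECT *
FROM (SELECT * FROM tab1) AS temp1
INNER JOIN (SELECT * FROM db2.tab2) AS temp2
    ON temp1.id = temp2.id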
I already have a MySQL table on my local machine (Linux), and I have a Hive external table with the same schema as the MySQL table.
I want to sync my Hive external table whenever a new record is inserted or updated. A batch update is OK with me, say hourly.
What is the best possible approach to achieve this without using Sqoop?
Thanks,
Sumit
Without Sqoop, you can create a table STORED BY the JdbcStorageHandler. Project repository: https://github.com/qubole/Hive-JDBC-Storage-Handler. It will work like a usual Hive table, but queries will run on MySQL. Predicate pushdown will work (see the example query after the DDL below).
DROP TABLE HiveTable;
CREATE EXTERNAL TABLE HiveTable(
id INT,
id_double DOUBLE,
names STRING,
test INT
)
STORED BY 'org.apache.hadoop.hive.jdbc.storagehandler.JdbcStorageHandler'
TBLPROPERTIES (
"mapred.jdbc.driver.class"="com.mysql.jdbc.Driver",
"mapred.jdbc.url"="jdbc:mysql://localhost:3306/rstore",
"mapred.jdbc.username"="root",
"mapred.jdbc.input.table.name"="JDBCTable",
"mapred.jdbc.output.table.name"="JDBCTable",
"mapred.jdbc.password"="",
"mapred.jdbc.hive.lazy.split"= "false"
);
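For instance, a hypothetical filter like the one below should be pushed down to MySQL by the storage handler instead of being applied after a full read in Hive:
SELECT names, test
FROM HiveTable
WHERE id = 42;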
Using an EMR cluster, I created an external Hive table (over 800 million rows) that maps to a DynamoDB table. It works well and I can run queries and inserts through Hive.
If I run a query with a condition on the hash key in Hive, I get the results in seconds. But running the same query through spark-submit using SparkSQL and enableHiveSupport (accessing Hive), it doesn't finish. It seems that from Spark it's doing a full scan of the table.
I tried several configurations (a different hive-site.xml, for example) but it doesn't seem to work well from Spark. How should I do it through Spark? Any suggestions?
Thanks
Just make sure to use the DynamoDB connector open-sourced by AWS. By default it is available on EMR, AFAIK.
Syntax to create a table using the DynamoDBStorageHandler class:
CREATE EXTERNAL TABLE hive_tablename (
hive_column1_name column1_datatype,
hive_column2_name column2_datatype
)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
"dynamodb.table.name" = "dynamodb_tablename",
"dynamodb.column.mapping" =
"hive_column1_name:dynamodb_attribute1_name,hive_column2_name:dynamodb_attribute2_name"
);
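As the question observes, a Hive query that filters on the hash key comes back quickly; a hypothetical example against the table above (the names are the placeholders from the DDL):
SELECT *
FROM hive_tablename
WHERE hive_column1_name = 'some-hash-key-value';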
For any Spark job, you need the following confs:
$ spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar
...
import org.apache.hadoop.io.Text;
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.io.LongWritable
var jobConf = new JobConf(sc.hadoopConfiguration)
jobConf.set("dynamodb.input.tableName", "myDynamoDBTable")
jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
jobConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")
var orders = sc.hadoopRDD(jobConf, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])
orders.count()
References:
https://github.com/awslabs/emr-dynamodb-connector