Is it possible to create a view from external data? - apache-spark

I have some CSV files in my data lake which are updated quite frequently by another process. Ideally I would like to be able to query these files through spark-sql, without having to run an equally frequent batch process to load all the new files into a Spark table.
Looking at the documentation, I'm unsure, as all the examples show views that query existing tables or other views, rather than loose files stored in a data lake.

You can do something like this if your CSV files are in S3 under the location s3://bucket/folder:
spark.sql(
  """
  CREATE TABLE test2 (a string, b string)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION 's3://bucket/folder'
  """
)
You will have to adapt the field names, types, and the field separator to match your files, though.
To test it, you can first run:
Seq(("1","a"), ("2","b"), ("3","a"), ("4","b")).toDF("num", "char").repartition(1).write.mode("overwrite").csv("s3://bucket/folder")

Related

How can I manually update some table values in an Excel data model table imported from CSV with Power Query

I am using Excel power query to import csv files containing transactions from a directory. That way adding a new file to the directory automatically makes it available when refreshing the query/data model. I load the table from the csv files into the data model. I do some cleaning and data transformation in the query.
However, there are some things that I can't do in the query that loads the raw data:
- There may be missing data that I need to enter manually (a column missing some values)
- I may need to split a transaction/row into multiple transactions/rows to categorize the parts correctly
It seems like there should be a way to do this that allows me to make my changes and not have them overwritten when I refresh the query to import new transactions.
Currently I am experimenting with creating a column with a unique id for the transaction table as part of the query, then creating an aux table in Excel related to the raw transactions by that unique id. I make my changes in the aux table, and finally I create a new table that merges the raw transactions with the aux table to produce the working transaction table. This works for missing data or incorrect values, but it still doesn't allow me to split a row into multiple rows.
I would welcome any suggestions or references.

Add column to CSV File from another CSV File (Azure Data Factory)

For example:
Persons.csv
name, last_name
-----------------------
jack, jack_lastName
luc, luc_lastname
FileExample.csv
id
243
123
Result:
name, last_name, exampleId
-------------------------------
jack, jack_lastName, 243
luc, luc_lastname, 123
I want to append any number of columns from another data source and insert the final result into a file or a database table.
I have tried many approaches but can't get it to work.
You can try to make use of the merge-files copy behaviour in an Azure Data Factory pipeline to merge the two CSV files.
Select a Copy Data activity and, in the source settings, use the wildcard entry *.csv to pick up the CSV files in storage (configure the linked storage service for ADF as part of this process).
Then create an output CSV in the same container if required (as in my case), set the copy behaviour to merge files, and name the output something like examplemerge.csv.
Check 'First row as header'.
Validate the pipeline and run it in debug mode.
You should then be able to see the merged data in the resulting file in the output folder.
You can check this reference vlog Merge Multiple CSV files to single CSV for more details, and also this vlog on Load Multiple CSV Files to a Table in Azure Data Factory if required.
But if you want to join the files, there must be some common column to join on.
Also check this thread from Q&A: Azure Data Factory merge 2 csv files with different schema.

Cache Lookup Properties in Azure Data Factory

I have a requirement wherein I have a source file containing the table name(s), to be used in a Mapping Data Flow. Based on the table name in the file, there needs to be a dynamic query where column metadata, along with some other properties, is retrieved from the data dictionary tables and inserted into a different sink table. The table name from the file would be used as a where-condition filter.
Since there can be multiple tables listed in the input file (let's assume it's a csv with only one column containing the table names), if we decide to use a cache sink for the file:
1. Is it possible to use the results of that cached sink in the Source transformation query in the same mapping data flow, as a lookup (from where the column metadata is being retrieved), and if yes, how?
2. What would be the best way to restrict data from the metadata table query based on this table name?
3. I thought of alternatively achieving this with a pipeline using a For Each activity, passing the table name as a parameter to the data flow, but in this case if there are 100 tables in the file, there would be 100 iterations and 100 times the cluster would need to be spun up. Please advise if this is wrong or there are better ways to achieve this.
You would need to use option 3. Loop through the table names and pass each in as a parameter to the data flow to set the table name in the dataset.
ADF handles the cluster creation and teardown. All you have to worry about is whether you want to execute each iteration sequentially or in parallel, and with what degree of parallelism. There are concurrency limits in ADF, so you should consider a batch count of 20 if you run in parallel.

How to quickly migrate data from one table into another with a different structure, in the same/different Cassandra cluster?

I have a table with more than 10,000,000 records in Cassandra, and for some reason I want to build another Cassandra table with the same fields plus several additional fields, then migrate the previous data into it. For now, the two tables are in the same Cassandra cluster.
I want to ask how to finish this task in the shortest time?
And if my new table is in a different Cassandra cluster, how would I do it?
Any advice will be appreciated!
If you just need to add blank fields to a table, then the best thing to do is use the ALTER TABLE command to add the fields to the existing table (see the sketch just below). Then no copying of the data would be needed, and the new fields would show up as null in the existing rows until you set them to something.
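A minimal CQL sketch of that approach; the keyspace, table, and column names here are just placeholders:
-- placeholder keyspace/table/column names
ALTER TABLE my_keyspace.my_table ADD extra_field text;
ALTER TABLE my_keyspace.my_table ADD another_field int;
Existing rows simply return null for the new columns until they are written.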
If you want to change the structure of the data in the new table, or write it to a different cluster, then you'd probably need to write an application to read each row of the old table, transform the data as needed, and then write each row to the new location.
You could also do this by exporting the data to a CSV file, writing a program to restructure the CSV as needed, and then importing the CSV file into the new location.
Another possible method would be to use Apache Spark. You'd read the existing table into an RDD or DataFrame, transform and filter the data, then save the result to the new table (a sketch follows below). That would only work within the same cluster and would be fairly complex to set up.
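A minimal sketch of that Spark route, using the DataFrame API of the spark-cassandra-connector rather than raw RDDs; the contact point, keyspace, table, and column names are all placeholders, and the new table is assumed to already exist with the additional columns defined:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

// Contact point and names below are placeholders, not from the original question
val spark = SparkSession.builder()
  .appName("cassandra-table-migration")
  .config("spark.cassandra.connection.host", "127.0.0.1")
  .getOrCreate()

// Read the existing table
val oldDf = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_keyspace", "table" -> "old_table"))
  .load()

// Derive or initialise the additional fields, then append into the new table
val newDf = oldDf.withColumn("extra_field", lit(null).cast("string"))

newDf.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_keyspace", "table" -> "new_table"))
  .mode("append")
  .save()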

Query database columns using Excel/csv data

I have a case where I need to read an Excel/csv/text file containing two columns (say colA and colB) of values (around 1000 rows). I need to query the database using values in colA. The query will return an XMLType into which the respective colB value needs to be inserted. I have the XML query and the insert working but I am stuck on what approach I should take to read the data, query and update it on the fly.
I have tried using external tables but realized that I don't have access to the server root to host the data file. I have also considered creating a temporary table to load the data to using SQL Loader or something similar and run the query/update within the tables. But that would need some formal overhead to go through. I would appreciate suggestions on the approach. Examples would be greatly helpful.
e.g.
text or Excel file:
ColA,ColB
abc,123
def,456
ghi,789
XMLTypeVal e.g.
<node1><node2><node3><colA></colA><colB></colB></node3></node2></node1>
UPDATE TableA
SET XMLTypeVal = INSERTCHILDXML(XMLTypeVal,
        '/node1/node2/node3', 'colBval',
        XMLType('<colBval>123</colBval>'))
WHERE EXTRACTVALUE(TableA.XMLTypeVal, '/node1/node2/node3/ColA') = 'colAval';
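A rough PL/SQL sketch of the temporary-table route mentioned above, assuming the file has been loaded (for example with SQL*Loader) into a hypothetical staging table stage_ab(col_a, col_b); all names here are placeholders:
-- stage_ab is a hypothetical staging table holding the ColA/ColB pairs from the file
BEGIN
  FOR r IN (SELECT col_a, col_b FROM stage_ab) LOOP
    UPDATE TableA t
    SET    t.XMLTypeVal = INSERTCHILDXML(t.XMLTypeVal,
                            '/node1/node2/node3', 'colBval',
                            XMLType('<colBval>' || r.col_b || '</colBval>'))
    WHERE  EXTRACTVALUE(t.XMLTypeVal, '/node1/node2/node3/ColA') = r.col_a;
  END LOOP;
  COMMIT;
END;
/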
