I am copying comma-separated partition data files into ADLS using Azure Data Factory.
The requirement is to copy the comma-separated files to ORC format with SNAPPY compression.
Is it possible to achieve this with ADF? If yes, could you please help me?
Unfortunately, Data Factory can read ORC files compressed with ZLIB and SNAPPY, but it can only write ZLIB, which is the default for the ORC file format.
More info here: https://learn.microsoft.com/en-us/azure/data-factory/supported-file-formats-and-compression-codecs#orc-format
Hope this helped!!
Delta Lake is the default storage format. I understand how to convert a Parquet table to Delta.
My question is: is there any way to revert it back to Parquet? Any options?
What I need is a single Parquet file when writing; I do not need the extra log files!
If you run VACUUM on the table and then delete the _delta_log folder, you end up with regular Parquet files.
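A minimal sketch of that in a Databricks notebook, assuming a hypothetical table path (RETAIN 0 HOURS requires disabling the retention-duration check first):
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")
spark.sql("VACUUM delta.`/mnt/demo/my_delta_table` RETAIN 0 HOURS")  # remove files not referenced by the latest table version
dbutils.fs.rm("/mnt/demo/my_delta_table/_delta_log", True)  # without the log folder, only plain Parquet files remain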
I need to transfer the files from the DBFS path below:
%fs ls /FileStore/tables/26AS_report/customer_monthly_running_report/parts/
to the Azure Blob Storage location below:
dbutils.fs.ls("wasbs://"+blob.storage_account_container+"@"
+ blob.storage_account_name+".blob.core.windows.net/")
What series of steps should I follow? Please suggest.
The simplest way would be to load the data into a dataframe and then to write that dataframe into the target.
df = spark.read.format(format).load("dbfs:/FileStore/tables/26AS_report/customer_monthly_running_report/parts/*")
df.write.format(format).save("wasbs://"+blob.storage_account_container+"@" + blob.storage_account_name+".blob.core.windows.net/")
You will have to replace "format" with the source file format and the format you want in the target folder.
Keep in mind that if you do not want to do any transformations on the data but just move it, it will most likely be more efficient not to use PySpark but to use the azcopy command-line tool instead. You can also run it in Databricks with the %sh magic command if needed.
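For example, assuming the source files are CSV with a header row and you want Parquet in the target (substitute your actual formats; the target subfolder is just a placeholder):
df = spark.read.format("csv").option("header", "true").load("dbfs:/FileStore/tables/26AS_report/customer_monthly_running_report/parts/*")
df.write.format("parquet").save("wasbs://" + blob.storage_account_container + "@" + blob.storage_account_name + ".blob.core.windows.net/parts/")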
It seems ADF v2 does not support writing data to a TEXT file (.TXT).
After selecting File System, I don't see TextFormat on the next screen.
So is there any way to write data to a TEXT file?
Thanks,
Thai
Data Factory only supports these six file formats: Avro, Binary, Delimited text, JSON, ORC, and Parquet.
Please see: Supported file formats and compression codecs in Azure Data Factory.
If we want to write data to a txt file, the only format we can use is Delimited text; when the pipeline finishes, you will get a txt file.
Reference: Delimited text: Follow this article when you want to parse the delimited text files or write the data into delimited text format.
For example, I created a pipeline to copy data from Azure SQL to Blob and chose the DelimitedText format as the sink dataset.
The txt file then appears in Blob Storage.
Hope this helps
I think what you are looking for is the DelimitedText dataset. You can specify the extension as part of the file name.
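For reference, here is a rough sketch of what such a sink dataset could look like in JSON; the dataset name, linked service name, container, and file name below are placeholders:
{
    "name": "TxtOutputDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "output",
                "fileName": "result.txt"
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}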
I have some .xls data in Google Cloud Storage and want to use Airflow to load it into BigQuery. Can I export it directly to BigQuery, or do I need an additional library (such as pandas and xlrd) to convert the files and load them into BigQuery?
Thanks
BigQuery doesn't support the xls format. The easiest way is to convert the file to CSV and load that into BigQuery.
However, I don't know your xls layout: if it has multiple sheets, you will have to do some extra work on the file.
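A minimal sketch of that approach in Python (for example inside an Airflow PythonOperator); the bucket, dataset, and table names are placeholders, and reading a gs:// path with pandas assumes gcsfs and xlrd are installed:
import pandas as pd
from google.cloud import bigquery

# read the legacy .xls (single sheet assumed) and rewrite it as CSV
df = pd.read_excel("gs://my-bucket/reports/data.xls", sheet_name=0)
df.to_csv("/tmp/data.csv", index=False)

# load the CSV into BigQuery with schema autodetection
client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
with open("/tmp/data.csv", "rb") as f:
    load_job = client.load_table_from_file(f, "my_project.my_dataset.my_table", job_config=job_config)
load_job.result()  # wait for the load job to finish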
I have taken a snapshot of a Cassandra table. The following files were generated:
manifest.json mc-10-big-Filter.db mc-10-big-TOC.txt mc-11-big-Filter.db mc-11-big-TOC.txt mc-9-big-Filter.db mc-9-big-TOC.txt
mc-10-big-CompressionInfo.db mc-10-big-Index.db mc-11-big-CompressionInfo.db mc-11-big-Index.db mc-9-big-CompressionInfo.db mc-9-big-Index.db schema.cql
mc-10-big-Data.db mc-10-big-Statistics.db mc-11-big-Data.db mc-11-big-Statistics.db mc-9-big-Data.db mc-9-big-Statistics.db
mc-10-big-Digest.crc32 mc-10-big-Summary.db mc-11-big-Digest.crc32 mc-11-big-Summary.db mc-9-big-Digest.crc32 mc-9-big-Summary.db
Is there a way to use these files to extract the table's data into a CSV file?
Yes, you can do that with the sstable2json tool (the mc-* files indicate Cassandra 3.x, where the tool has been replaced by sstabledump).
Run the tool against the *-Data.db files.
It outputs JSON, which you then need to convert to CSV.
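For example, run from the snapshot directory (the output file name is just a placeholder):
sstabledump mc-10-big-Data.db > mc-10-big-Data.json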