Is there an equivalent to SAS sas7bdat table files in Python? - python-3.x

Is there a Python equivalent to reading and writing tabular files such as SAS sas7bdat files?
My team is moving away from SAS and we'd like to replicate the SAS process in Python with our methodology as follows:
1) Pull data from various sources i.e. Excel, CSV, DBs etc.
2) Update our Data Warehouse with the new information and export this data as a Python table file (to be used next)
3) Rather than pulling data from our warehouse (super slow) we'd like to read in those Python table files and then do some data matching on a bigger set of data.
We're trying to avoid using sas7bdat files (via SASPy) altogether, since we won't have SAS for much longer.
Any advice or insights are greatly appreciated!

Unlike SAS, Python doesn't have a native data format. However, there are modules that implement binary protocols for serializing and de-serializing Python objects. Consider using the HDF5 format to save and read files (https://www.h5py.org/). Another possibility is Pickle (https://docs.python.org/3/library/pickle.html).

Parquet is also worth considering.
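If the goal is the workflow described in the question (export a table once, read it back quickly later), a minimal pandas sketch might look like the following; the file names are placeholders, and Parquet/HDF5 require the optional pyarrow (or fastparquet) and tables packages.

```python
import pandas as pd

# Step 1: pull data from a source and build your table (placeholder file name)
df = pd.read_csv("warehouse_extract.csv")

# Step 2: persist it as a fast, typed file instead of sas7bdat
df.to_parquet("warehouse_extract.parquet")      # Parquet (columnar, cross-language)
df.to_hdf("warehouse_extract.h5", key="data")   # HDF5 (needs the tables package)
df.to_pickle("warehouse_extract.pkl")           # Pickle (Python-only)

# Step 3: read it back later without hitting the warehouse again
df2 = pd.read_parquet("warehouse_extract.parquet")
```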

Related

How to parse big XML in google cloud function efficiently?

I have to extract data from XML files that are several hundred MB in size in a Google Cloud Function, and I was wondering if there are any best practices?
Since I am used to Node.js, I looked at some popular libraries like fast-xml-parser, but it seems cumbersome if you only want specific data from a huge XML, and I am not sure whether there are performance issues when the XML is very big. Overall this does not feel like the best way to parse and extract data from huge XML files.
Then I was wondering if I could use BigQuery for this task, where I simply convert the XML to JSON and load it into a dataset, where I can then use a query to retrieve the data I want.
Another solution could be to use Python for the job, since it is good at parsing and extracting data from XML; even though I have no experience in Python, could this path still be the best solution?
If anything above does not make sense, or if one solution is preferable to the other, or if anyone can share any insights, I would highly appreciate it!
I suggest you check this article, in which they discuss how to load XML data into BigQuery using Python Dataflow. I think this approach may work in your situation.
Basically, what they suggest is:
1) Parse the XML into a Python dictionary using the xmltodict package (see the sketch after this list).
2) Specify a schema for the output table in BigQuery.
3) Use a Beam pipeline to take an XML file and use it to populate a BigQuery table.
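As a minimal illustration of the first step only, here is how xmltodict flattens an XML document into a Python dictionary; the file name is hypothetical, and the Beam/BigQuery steps from the article are not shown.

```python
import json
import xmltodict

# Parse the XML file into nested Python dictionaries mirroring the tree structure
with open("report.xml", "rb") as f:
    doc = xmltodict.parse(f)

# From here the dict can be reshaped to match the BigQuery schema and emitted
# as newline-delimited JSON records for a Beam/Dataflow pipeline.
print(json.dumps(doc, indent=2))
```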

Workflow for interpreting linked data in .ttl files with Python RDFLib

I am using turtle files containing biographical information for historical research. Those files are provided by a major library and most of the information in the files is not explicit. While people's professions, for instance, are sometimes stated alongside links to the library's URIs, I only have URIs in the majority of cases. This is why I will need to retrieve the information behind them at some point in my workflow, and I would appreciate some advice.
I want to use Python's RDFLib for parsing the .ttl files. What is your recommended workflow? Should I read the prefixes I am interested in first, then store the results in .txt (?) and then write a script to retrieve the actual information from the web, replacing the URIs?
I have also seen that there are ways to convert RDFs directly to CSV, but although CSV is nice to work with, I would get a lot of unwanted "background noise" by simply converting all the data.
What would you recommend?
RDFLib is all about working with RDF data. If you have RDF data, my suggestion is to do as much RDF-native work as you can, and only export to CSV when you want to do something like print tabular results or load them into a Pandas DataFrame. Of course, there is always more than one way to do things, so you could manipulate the data as CSV, but RDF by design carries far more information than a CSV file can, so when you manipulate RDF data you have more to get hold of.
most of the information in the files is not explicit
Better phrased: most of the information is indicated with objects identified by URIs, not given as literal values.
I want to use Python's RDFLib for parsing the .ttl files. What is your recommended workflow? Should I read the prefixes I am interested in first, then store the results in .txt (?) and then write a script to retrieve the actual information from the web, replacing the URIs?
No! You should store the ttl files you can get, and then you may indeed retrieve the other data referred to by URI. Presumably that data is also in RDF form, so you should download it into the same graph you loaded the initial ttl files into. Then you have the full graph, with links and literal values, at your disposal to manipulate with SPARQL queries.
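A minimal sketch of that approach with RDFLib, assuming the extra data dereferenced by URI is also available as RDF; the file name, URL and SPARQL pattern are placeholders:

```python
from rdflib import Graph

g = Graph()
g.parse("biographies.ttl", format="turtle")     # the library's turtle file (placeholder name)
g.parse("https://example.org/professions.ttl")  # additional RDF fetched by URI (placeholder)

# Query the combined graph directly with SPARQL instead of exporting to CSV first
results = g.query("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?subject ?label
    WHERE { ?subject rdfs:label ?label . }
""")
for subject, label in results:
    print(subject, label)
```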

PDF Crawler with Deep Analytics Skills

I am trying to build a PDF crawler for corporate annual reports - these reports are PDF documents with a lot of text and also a lot of tables.
I don't have any trouble converting the PDF into text, but my actual goal is to search for certain keywords (for example REVENUE, PROFIT) and extract data such as Revenue 1.000.000.000 € into a data frame.
I tried different libraries, especially tabula-py and PyPDF2, but I couldn't find a smart way to do that - can anyone please help with a strategy? It would be amazing!
Best Regards,
Robin
Extracting data from PDFs is tricky business. Although there are PDF standards, not all PDFs are created equal. If you can already extract the data you need in text form, you can use RegEx to pull out the data you require.
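For example, once you have the plain text, a regular expression sketch along these lines could pull keyword/value pairs into a data frame; the keywords and number format here are assumptions based on the question:

```python
import re
import pandas as pd

# Text already extracted from the PDF (example string for illustration)
text = "... Revenue 1.000.000.000 € ... Profit 250.000.000 € ..."

# Capture the keyword and the figure that follows it
pattern = re.compile(r"(Revenue|Profit)\s+([\d.]+)\s*€", re.IGNORECASE)
rows = [{"metric": m.group(1), "value": m.group(2)} for m in pattern.finditer(text)]

df = pd.DataFrame(rows)
print(df)
```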
Amazon has a machine learning service called Textract which you can use alongside their boto3 SDK in Python. However, this is a pay-per-use service. The main difference between Textract and regular expressions is that Textract can recognise and format data pairs and tables, which should mean that building your 'crawler' is quicker and less prone to breaking if your PDFs change going forward.
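A hedged sketch of calling the AWS Textract service through boto3 for a single page supplied as an image; the file name is a placeholder, AWS credentials must already be configured, and this is a paid service:

```python
import boto3

client = boto3.client("textract")

# Synchronous analysis of one page supplied as raw bytes (PNG/JPEG)
with open("annual_report_page.png", "rb") as f:
    response = client.analyze_document(
        Document={"Bytes": f.read()},
        FeatureTypes=["TABLES", "FORMS"],  # ask for table and key/value blocks
    )

# The response is a flat list of blocks; lines are printed here, while tables
# and key/value pairs can be reassembled from the blocks' relationships.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```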
There is also a Python package called textract, but it's not the same as the AWS service; rather, it's a wrapper that (for PDFs) uses pdftotext (by default) or pdfminer.six. It's worth checking out, as it may yield your data in a better format.

Reading DSL data file in python

Is there a way to read the data file generated in DSL format from a simulation and perform analysis on it in Python? Below I have attached a file in .dsl format from a simulator; I want to read the generated data in Python to perform data analysis, but have no clue how to extract the data from the file. Any help will be highly appreciated.
Link to dls file: simulated data in dls format

Does U-Sql support cursors for iterating through data sets and extracting more data based on row values?

Does Azure Data Lake Analytics and U-SQL support the notion of Cursors in scripts?
I have a data set that contains paths to further data sets I would like to extract and I want to output the results to separate files.
At the moment I can't seem to find a solution for dynamically extracting and outputting data based on values inside data sets.
U-SQL currently expects that files are known at compile time. Thus, you cannot extract from or output to locations that are provided inside a file.
You can specify filesets in the EXTRACT statement that will be somewhat data driven. We are currently working on adding the ability to use filesets on OUTPUT as well.
You can file feature requests at http://aka.ms/adlfeedback.
Cheers
Michael
You might be able to write a Processor to iterate over the rows in the primary dataset. However, you might not be able to access the additional datasets in the Processor.
Another workaround might be to concatenate all the additional datasets and perform a join with the primary dataset.
