I have a JSON file in which one of the columns is an XML string.
I tried extracting this field and writing it to a file in a first step, then reading that file in the next step. But each row has its own XML header tag, so the resulting file is not a valid XML file.
How can I use the PySpark XML parser ('com.databricks.spark.xml') to read this string and parse out the values?
The following doesn't work:
tr = spark.read.json("my-file-path")
trans_xml = sqlContext.read.format('com.databricks.spark.xml').options(rowTag='book').load(tr.select("trans_xml"))
Thanks,
Ram.
Try Hive XPath UDFs (LanguageManual XPathUDF):
>>> from pyspark.sql.functions import expr
>>> df.select(expr("xpath({0}, '{1}')".format(column_name, xpath_expression)))
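For the column in the question, a minimal sketch (assuming, hypothetically, that each trans_xml string has a <book> root with <title> children) would look like:
>>> tr.select(expr("xpath(trans_xml, 'book/title/text()')").alias("titles")).show()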
or Python UDF:
>>> from pyspark.sql.types import *
>>> from pyspark.sql.functions import udf
>>> import xml.etree.ElementTree as ET
>>> schema = ... # Define schema
>>> def parse(s):
...     root = ET.fromstring(s)
...     result = ...  # Select values
...     return result
>>> df.select(udf(parse, schema)(xml_column))
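A fuller sketch of the UDF approach, again assuming a hypothetical <book> element with <title> and <price> children (adjust the schema and the findtext calls to your real structure):
from pyspark.sql.types import StructType, StructField, StringType
from pyspark.sql.functions import udf
import xml.etree.ElementTree as ET

# Schema for the parsed values (hypothetical fields)
schema = StructType([
    StructField("title", StringType()),
    StructField("price", StringType()),
])

def parse(s):
    root = ET.fromstring(s)  # parse one XML string per row
    return (root.findtext("title"), root.findtext("price"))

parse_udf = udf(parse, schema)
parsed = tr.select(parse_udf("trans_xml").alias("book"))
parsed.select("book.title", "book.price").show()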
'''
Reads in a dataset using pandas.

Parameters
----------
file_path : string containing path to a file

Returns
-------
Pandas DataFrame with data read in from the file path
'''
I have defined the following UDF, but it doesn't work:
def read_data(file_path):
    pandas.read_csv('file_path')
Looks like you are missing the return statement, and the variable shouldn't have quotes:
import pandas as pd
def read_data(file_path: str) -> pd.DataFrame:
return pd.read_csv(file_path)
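Calling it then returns the DataFrame directly (the file name here is just a placeholder):
df = read_data("my_data.csv")  # placeholder path
print(df.head())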
I am trying to run the below script to add two columns to the left of a file; however, it keeps giving me
ValueError: header must be integer or list of integers
Below is my code:
import pandas as pd
import numpy as np
read_file = pd.read_csv("/home/ex.csv",header='true')
df=pd.DataFrame(read_file)
def add_col(x):
    df.insert(loc=0, column='Creation_DT', value=pd.to_datetime('today'))
    df.insert(loc=1, column='Creation_By', value="Sean")
    df.to_parquet("/home/sample.parquet")
add_col(df)
Also, is there any way to make the Creation_DT column a string?
According to the pandas docs, header is the row number(s) to use as the column names (and the start of the data), and it must be an int or a list of ints. So you have to pass header=0 to the read_csv method.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
Also, read_csv already returns a DataFrame, so you don't need to wrap it in pd.DataFrame. Just use:
df = pd.read_csv("/home/ex.csv", header=0)
You can try:
import pandas as pd
import numpy as np
read_file = pd.read_csv("/home/ex.csv")
df=pd.DataFrame(read_file)
def add_col(x):
    df.insert(loc=0, column='Creation_DT', value=str(pd.to_datetime('today')))
    df.insert(loc=1, column='Creation_By', value="Sean")
    df.to_parquet("/home/sample.parquet")
add_col(df)
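If you want a specific date format rather than the full timestamp text, a variant of the same script (same assumed paths as in the question) can format the value explicitly:
import pandas as pd

df = pd.read_csv("/home/ex.csv")
df.insert(loc=0, column='Creation_DT',
          value=pd.Timestamp('today').strftime('%Y-%m-%d'))  # formatted date string
df.insert(loc=1, column='Creation_By', value="Sean")
df.to_parquet("/home/sample.parquet")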
I am pretty new to Python (using Python 3) and am using pandas to import a dataset.
I need to import the dataset from the URL - https://newonlinecourses.science.psu.edu/stat501/sites/onlinecourses.science.psu.edu.stat501/files/data/leukemia_remission/index.txt
and convert it to a CSV file, but I am getting some special characters in the converted CSV -> ��
I am downloading the txt file and converting it to CSV; is this the right approach?
Also, the converted CSV puts the entire text into one column.
from urllib.request import urlretrieve
import pandas as pd
from pandas import DataFrame
url = 'https://newonlinecourses.science.psu.edu/stat501/sites/onlinecourses.science.psu.edu.stat501/files/data/leukemia_remission/index.txt'
urlretrieve(url, 'index.txt')
df = pd.read_csv('index.txt', sep='/t', engine='python', lineterminator='\r\n')
csv_file = df.to_csv('index.csv', sep='\t', index=False, header=True)
print(csv_file)
After a successful import, I also have to extract X as all columns except the first column, and Y as the first column.
I'll appreciate all your help.
from urllib.request import urlretrieve
import pandas as pd
url = 'https://newonlinecourses.science.psu.edu/stat501/sites/onlinecourses.science.psu.edu.stat501/files/data/leukemia_remission/index.txt'
urlretrieve(url, 'index.txt')
df = pd.read_csv('index.txt', sep='\t',encoding='utf-16')
Y = df[['REMISS']]
X = df.drop(['REMISS'],axis=1)
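If you still need the CSV file on disk, writing it back out avoids the �� characters, since to_csv defaults to UTF-8 (index.csv is the same file name used in the question):
df.to_csv('index.csv', index=False)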
I am trying to use the below code to get posts with specific keywords from my CSV file, but I keep getting KeyError "Tag1".
import re
import string
import pandas as pd
import openpyxl
import glob
import csv
import os
import xlsxwriter
import numpy as np
keywords = {"agile", "backlog"}  # all your keywords
df = pd.read_csv(r"C:\Users\ferr1982\Desktop\split1_out.csv",
error_bad_lines=False)#, sep="," ,
encoding="utf-8")
output = pd.DataFrame(columns=df.columns)
for i in range(len(df.index)):
    # if (df.loc[df['Tags'].isin(keywords)]):
    if any(x in ((df['Tags1'][i]), (df['Tags2'][i]), (df['Tags3'][i]),
                 (df['Tags4'][i]), (df['Tags5'][i])) for x in keywords):
        output.loc[len(output)] = [df[j][i] for j in df.columns]
output.to_csv("new_data5.csv", index=False)
Okay, it turned out that there is a little space before the "Tags" columns in my CSV file!
It is working now after I added the space to the names in the code above.
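A more robust fix, as a small sketch on the same DataFrame, is to strip whitespace from the column names right after reading, so the names used in the code keep working even if the CSV header has stray spaces:
df.columns = df.columns.str.strip()  # remove leading/trailing spaces from every column name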
I have an Excel file with a string stored in each cell:
rtypl srtyn OCVXZ srtyn
KPLNV KLNWZ bdfgh KLNWZ
xcvwh mvwhd WQKXM mvwhd
GYTR xvnm YTZN YTZN
ngws jklp PLNM jklp
I wanted to read the Excel file and write it to a CSV file, as you can see below:
import pandas as pd
import csv
df = pd.read_excel(file, encoding='utf-16')
words= open("words.csv",'wb')
wr = csv.writer(words, dialect='excel')
for item in df:
    wr.writerow(item)
But it writes each entry as separate letters and not as a string:
r,t,y,p,l
I am limited to writing the file as CSV, as I am going to use the result in a library that has lots of facilities for CSV files. Any advice on how I can keep all the rows as whole strings in the cells is appreciated.
You can try the easiest solution:
# -*- coding: utf-8 -*-
import pandas as pd
df = pd.read_excel(file, encoding='utf-16')
df.to_csv('words.csv', encoding='utf-16')
Adding to zipa's answer: if the Excel file has multiple sheets, you can also try
import pandas as pd
df = pd.read_excel(file, 'Sheet1')
df.to_csv('words.csv')
Refer to:
http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/
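If you need every sheet rather than just 'Sheet1', passing sheet_name=None returns a dict of DataFrames keyed by sheet name; a small sketch (one CSV per sheet, file names taken from the sheet names) could look like:
import pandas as pd

sheets = pd.read_excel(file, sheet_name=None)    # dict: {sheet name: DataFrame}
for name, sheet_df in sheets.items():
    sheet_df.to_csv(name + '.csv', index=False)  # one CSV per sheet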