I have a line of text with XML embedded in it, like:
1234^12^999^`<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>`^23232
We can parse a normal XML file easily using Scala's XML support, or even the Databricks spark-xml format, but how do I parse XML embedded inside text?
The XML portion alone can be extracted with something like the following (note that split takes a regex, so the caret must be escaped, and that the XML sits at index 3, still wrapped in backticks):
val top5duration = data.map(line => line.split("\\^")).filter(line => line(2) == "999").map(line => line(3))
But how do I proceed if I want to extract the value for each 'key'?
Question: how are the nested XML elements treated? How would I access
them?
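For illustration, the intended extraction can be sketched in plain Python with the stdlib ElementTree (logic only, outside Spark; note the backticks around the XML field must be stripped before parsing):

```python
import xml.etree.ElementTree as ET

line = '1234^12^999^`<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>`^23232'

fields = line.split("^")           # plain split; in Scala, split takes a regex so ^ must be escaped
xml_part = fields[3].strip("`")    # the embedded XML is the fourth field, wrapped in backticks
row = ET.fromstring(xml_part)

# collect every key/value attribute pair from the <ab> elements
pairs = {ab.get("key"): ab.get("value") for ab in row.findall("ab")}
print(pairs)  # {'someKey': 'someValue', 'someKey1': 'someValue1'}
```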
To flatten a nested structure you can use explode.
Example: let's say I want every title (String) / authors (WrappedArray) combination; that can be achieved with explode.
Schema:
root
|-- title: string (nullable = true)
|-- author: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- initial: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- lastName: string (nullable = true)
show()
+--------------------+--------------------+
| title| author|
+--------------------+--------------------+
|Proper Motions of...|[[WrappedArray(J,...|
|Catalogue of 2055...|[[WrappedArray(J,...|
| null| null|
|Katalog von 3356 ...|[[WrappedArray(J)...|
|Astrographic Cata...|[[WrappedArray(P)...|
|Astrographic Cata...|[[WrappedArray(P)...|
|Results of observ...|[[WrappedArray(H,...|
| AGK3 Catalogue|[[WrappedArray(W)...|
|Perth 70: A Catal...|[[WrappedArray(E)...|
import org.apache.spark.sql.functions;

DataFrame exploded = src.select(src.col("title"), functions.explode(src.col("author")).as("auth"))
                        .select("title", "auth.initial", "auth.lastName");
exploded = exploded.select(exploded.col("initial"),
                           exploded.col("title").as("title"),
                           exploded.col("lastName"));
exploded.printSchema();
exploded.show();
root
|-- initial: array (nullable = true)
| |-- element: string (containsNull = true)
|-- title: string (nullable = true)
|-- lastName: string (nullable = true)
+-------+--------------------+-------------+
|initial| title| lastName|
+-------+--------------------+-------------+
| [J, H]|Proper Motions of...| Spencer|
| [J]|Proper Motions of...| Jackson|
| [J, H]|Catalogue of 2055...| Spencer|
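In plain Python terms (with made-up stand-ins for the rows above), explode emits one output row per element of the array column, dropping rows whose array is null or empty:

```python
rows = [
    {"title": "Proper Motions of Stars...", "author": [
        {"initial": ["J", "H"], "lastName": "Spencer"},
        {"initial": ["J"], "lastName": "Jackson"},
    ]},
    {"title": "No authors", "author": None},
]

# explode: one output row per element of the "author" array; null arrays produce no rows
exploded = [
    {"initial": a["initial"], "title": r["title"], "lastName": a["lastName"]}
    for r in rows
    for a in (r["author"] or [])
]

for row in exploded:
    print(row)
```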
Sample XML file:
<?xml version='1.0' ?>
<!DOCTYPE datasets SYSTEM "http://www.cs.washington.edu/research/projects/xmltk/xmldata/data/nasa/dataset_053.dtd">
<datasets>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Proper Motions of Stars in the Zone Catalogue -40 to -52 degrees
of 20843 Stars for 1900</title>
<altname type="ADC">1005</altname>
<altname type="CDS">I/5</altname>
<altname type="brief">Proper Motions in Cape Zone Catalogue -40/-52</altname>
<reference>
<source>
<other>
<title>Proper Motions of Stars in the Zone Catalogue -40 to -52 degrees
of 20843 Stars for 1900</title>
<author>
<initial>J</initial>
<initial>H</initial>
<lastName>Spencer</lastName>
</author>
<author>
<initial>J</initial>
<lastName>Jackson</lastName>
</author>
<name>His Majesty's Stationery Office, London</name>
<publisher>???</publisher>
<city>???</city>
<date>
<year>1936</year>
</date>
</other>
</source>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Positional_data.html">Positional data</keyword>
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>This catalog, listing the proper motions of 20,843 stars
from the Cape Astrographic Zones, was compiled from three series of
photographic plates. The plates were taken at the Royal Observatory,
Cape of Good Hope, in the following years: 1892-1896, 1897-1910,
1923-1928. Data given include centennial proper motion, photographic
and visual magnitude, Harvard spectral type, Cape Photographic
Durchmusterung (CPD) identification, epoch, right ascension and
declination for 1900.</para>
</description>
<details/>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="czc.dat">
<title>The catalogue</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>---</name>
<definition>Number 5</definition>
<units>---</units>
</field>
<field>
<name>CZC</name>
<definition>Catalogue Identification Number</definition>
<units>---</units>
</field>
<field>
<name>Vmag</name>
<definition>Visual Magnitude</definition>
<units>mag</units>
</field>
<field>
<name>RAh</name>
<definition>Right Ascension for 1900 hours</definition>
<units>h</units>
</field>
<field>
<name>RAm</name>
<definition>Right Ascension for 1900 minutes</definition>
<units>min</units>
</field>
<field>
<name>RAcs</name>
<definition>Right Ascension seconds in 0.01sec 1900</definition>
<units>0.01s</units>
</field>
<field>
<name>DE-</name>
<definition>Declination Sign</definition>
<units>---</units>
</field>
<field>
<name>DEd</name>
<definition>Declination for 1900 degrees</definition>
<units>deg</units>
</field>
<field>
<name>DEm</name>
<definition>Declination for 1900 arcminutes</definition>
<units>arcmin</units>
</field>
<field>
<name>DEds</name>
<definition>Declination for 1900 arcseconds</definition>
<units>0.1arcsec</units>
</field>
<field>
<name>Ep-1900</name>
<definition>Epoch -1900</definition>
<units>cyr</units>
</field>
<field>
<name>CPDZone</name>
<definition>Cape Photographic
Durchmusterung Zone</definition>
<units>---</units>
</field>
<field>
<name>CPDNo</name>
<definition>Cape Photographic Durchmusterung Number</definition>
<units>---</units>
</field>
<field>
<name>Pmag</name>
<definition>Photographic Magnitude</definition>
<units>mag</units>
</field>
<field>
<name>Sp</name>
<definition>HD Spectral Type</definition>
<units>---</units>
</field>
<field>
<name>pmRAs</name>
<definition>Proper Motion in RA
<footnote>
<para>the relation is pmRA = 15 * pmRAs * cos(DE)
if pmRAs is expressed in s/yr and pmRA in arcsec/yr</para>
</footnote>
</definition>
<units>0.1ms/yr</units>
</field>
<field>
<name>pmRA</name>
<definition>Proper Motion in RA</definition>
<units>mas/yr</units>
</field>
<field>
<name>pmDE</name>
<definition>Proper Motion in Dec</definition>
<units>mas/yr</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Julie Anne Watko</lastName>
<affiliation>SSDOO/ADC</affiliation>
</creator>
<date>
<year>1995</year>
<month>Nov</month>
<day>03</day>
</date>
</ingest>
</history>
<identifier>I_5.xml</identifier>
</dataset>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Catalogue of 20554 Faint Stars in the Cape Astrographic Zone -40 to -52 Degrees
for the Equinox of 1900.0</title>
<altname type="ADC">1006</altname>
<altname type="CDS">I/6</altname>
<altname type="brief">Cape 20554 Faint Stars, -40 to -52, 1900.0</altname>
<reference>
<source>
<other>
<title>Catalogue of 20554 Faint Stars in the Cape Astrographic Zone -40 to -52 Degrees
for the Equinox of 1900.0</title>
<author>
<initial>J</initial>
<initial>H</initial>
<lastName>Spencer</lastName>
</author>
<author>
<initial>J</initial>
<lastName>Jackson</lastName>
</author>
<name>His Majesty's Stationery Office, London</name>
<publisher>???</publisher>
<city>???</city>
<date>
<year>1939</year>
</date>
<bibcode>1939HMSO..C......0S</bibcode>
</other>
</source>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Positional_data.html">Positional data</keyword>
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>This catalog contains positions, precessions, proper motions, and
photographic magnitudes for 20,554 stars. These were derived from
photographs taken at the Royal Observatory, Cape of Good Hope between 1923
and 1928. It covers the astrographic zones -40 degrees to -52 degrees of
declination. The positions are given for epoch 1900 (1900.0). It includes
spectral types for many of the stars listed. It extends the earlier
catalogs derived from the same plates to fainter magnitudes. The
computer-readable version consists of a single data table.</para>
<para>The stated probable error for the star positions is 0.024 seconds of time
(R.A.) and 0.25 seconds of arc (dec.) for stars with one determination,
0.017 seconds of time, and 0.18 seconds of arc for two determinations, and
0.014 / 0.15 for stars with three determinations.</para>
<para>The precession and secular variations were derived from Newcomb's constants.</para>
<para>The authors quote probable errors of the proper motions in both coordinates
of 0.008 seconds of arc for stars with one determination, 0.0055 seconds for
stars with two determinations, and 0.0044 for stars with three.</para>
<para>The photographic magnitudes were derived from the measured diameters on the
photographic plates and from the magnitudes given in the Cape Photographic
Durchmusterung.</para>
<para>The spectral classification of the cataloged stars was done with the
assistance of Annie Jump Cannon of the Harvard College Observatory.</para>
<para>The user should consult the source reference for more details of the
measurements and reductions. See also the notes in this document for
additional information on the interpretation of the entries.</para>
</description>
<details/>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="faint.dat">
<title>Data</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>ID</name>
<definition>Cape Number</definition>
<units>---</units>
</field>
<field>
<name>rem</name>
<definition>Remark
<footnote>
<para>A = Astrographic Star
F = Faint Proper Motion Star
N = Other Note</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>CPDZone</name>
<definition>Cape Phot. Durchmusterung (CPD) Zone
<footnote>
<para>All CPD Zones are negative. - signs are not included in data.
"0" in column 8 signifies Astrographic Plate instead of CPD.</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>CPD</name>
<definition>CPD Number or Astrographic Plate
<footnote>
<para>See also note on CPDZone.
Astrographic plate listed "is the more southerly on which the
star occurs." Thus, y-coordinate is positive wherever possible.</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>n_CPD</name>
<definition>[1234] Remarks
<footnote>
<para>A number from 1-4 appears in this byte for double stars where
the same CPD number applies to more than one star.</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>mpg</name>
<definition>Photographic Magnitude
<footnote>
<para>The Photographic Magnitude is "determined from the CPD Magnitude
and the diameter on the Cape Astrographic Plates by means of the
data given in the volume on the Magnitudes of Stars in the Cape
Zone Catalogue."
A null value (99.9) signifies a variable star.</para>
</footnote>
</definition>
<units>mag</units>
</field>
<field>
<name>RAh</name>
<definition>Mean Right Ascension hours 1900</definition>
<units>h</units>
</field>
<field>
<name>RAm</name>
<definition>Mean Right Ascension minutes 1900</definition>
<units>min</units>
</field>
<field>
<name>RAs</name>
<definition>Mean Right Ascension seconds 1900</definition>
<units>s</units>
</field>
<field>
<name>DEd</name>
<definition>Mean Declination degrees 1900</definition>
<units>deg</units>
</field>
<field>
<name>DEm</name>
<definition>Mean Declination arcminutes 1900</definition>
<units>arcmin</units>
</field>
<field>
<name>DEs</name>
<definition>Mean Declination arcseconds 1900</definition>
<units>arcsec</units>
</field>
<field>
<name>N</name>
<definition>Number of Observations</definition>
<units>---</units>
</field>
<field>
<name>Epoch</name>
<definition>Epoch +1900</definition>
<units>yr</units>
</field>
<field>
<name>pmRA</name>
<definition>Proper Motion in RA seconds of time</definition>
<units>s/a</units>
</field>
<field>
<name>pmRAas</name>
<definition>Proper Motion in RA arcseconds</definition>
<units>arcsec/a</units>
</field>
<field>
<name>pmDE</name>
<definition>Proper Motion in Dec arcseconds</definition>
<units>arcsec/a</units>
</field>
<field>
<name>Sp</name>
<definition>HD Spectral Type</definition>
<units>---</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Julie Anne Watko</lastName>
<affiliation>SSDOO/ADC</affiliation>
</creator>
<date>
<year>1996</year>
<month>Mar</month>
<day>26</day>
</date>
</ingest>
</history>
<identifier>I_6.xml</identifier>
</dataset>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Proper Motions of 1160 Late-Type Stars</title>
<altname type="ADC">1014</altname>
<altname type="CDS">I/14</altname>
<altname type="brief">Proper Motions of 1160 Late-Type Stars</altname>
<reference>
<source>
<journal>
<title>Proper Motions of 1160 Late-Type Stars</title>
<author>
<initial>H</initial>
<initial>J</initial>
<lastName>Fogh Olsen</lastName>
</author>
<name>Astron. Astrophys. Suppl. Ser.</name>
<volume>2</volume>
<pageno>69</pageno>
<date>
<year>1970</year>
</date>
<bibcode>1970A&AS....2...69O</bibcode>
</journal>
</source>
<related>
<holding role="similar">II/38 : Stars observed photoelectrically by Dickow et al.
<xlink:simple href="II/38"/>
</holding>Fogh Olsen H.J. 1970, Astron. Astrophys. Suppl. Ser., 2, 69.
Fogh Olsen H.J. 1970, Astron. Astrophys., Suppl. Ser., 1, 189.</related>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>Improved proper motions for the 1160 stars contained in the photometric
catalog by Dickow et al. (1970) are presented. Most of the proper motions
are from the GC, transferred to the system of FK4. For stars not included
in the GC, preliminary AGK or SAO proper motions are given. Fogh Olsen
(Astron. Astrophys. Suppl. Ser., 1, 189, 1970) describes the method of
improvement. The mean errors of the centennial proper motions increase with
increasing magnitude. In Right Ascension, these range from 0.0043/cos(dec)
for very bright stars to 0.096/cos(dec) for the faintest stars. In
Declination, the range is from 0.065 to 1.14.</para>
</description>
<details/>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="pmlate.dat">
<title>Proper motion data</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>No</name>
<definition>Number
<footnote>
<para>Henry Draper or Bonner Durchmusterung number</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>pmRA</name>
<definition>Centennial Proper Motion RA</definition>
<units>s/ca</units>
</field>
<field>
<name>pmDE</name>
<definition>Centennial Proper Motion Dec</definition>
<units>arcsec/ca</units>
</field>
<field>
<name>RV</name>
<definition>Radial Velocity</definition>
<units>km/s</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Julie Anne Watko</lastName>
<affiliation>ADC</affiliation>
</creator>
<date>
<year>1996</year>
<month>Jun</month>
<day>03</day>
</date>
</ingest>
</history>
<identifier>I_14.xml</identifier>
</dataset>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Katalog von 3356 Schwachen Sternen fuer das Aequinoktium 1950
+89 degrees</title>
<altname type="ADC">1016</altname>
<altname type="CDS">I/16</altname>
<altname type="brief">Catalog of 3356 Faint Stars, 1950</altname>
<reference>
<source>
<other>
<title>Katalog von 3356 Schwachen Sternen fuer das Aequinoktium 1950
+89 degrees</title>
<author>
<initial>J</initial>
<lastName>Larink</lastName>
</author>
<author>
<initial>A</initial>
<lastName>Bohrmann</lastName>
</author>
<author>
<initial>H</initial>
<lastName>Kox</lastName>
</author>
<author>
<initial>J</initial>
<lastName>Groeneveld</lastName>
</author>
<author>
<initial>H</initial>
<lastName>Klauder</lastName>
</author>
<name>Verlag der Sternwarte, Hamburg-Bergedorf</name>
<publisher>???</publisher>
<city>???</city>
<date>
<year>1955</year>
</date>
<bibcode>1955</bibcode>
</other>
</source>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Fundamental_catalog.html">Fundamental catalog</keyword>
<keyword xlink:href="Positional_data.html">Positional data</keyword>
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>This catalog of 3356 faint stars was derived from meridian circle
observations at the Bergedorf and Heidelberg Observatories. The
positions are given for the equinox 1950 on the FK3 system. The stars
are mainly between 8.0 and 10.0 visual magnitude. A few are brighter
than 8.0 mag. The lower limit in brightness resulted from the visibility
of the stars.</para>
</description>
<details>
<para>All stars were observed at both the Heidelberg and Bergedorf
Observatories. Normally, at each observatory, two observations were
obtained with the clamp east and two with the clamp west. The mean
errors are comparable for the two observatories with no significant
systematic difference in the positions between them. The mean errors of
the resulting positions are approximately 0.011s/cos(dec) in right
ascension and 0.023" in declination.</para>
<para>The proper motions were derived from a comparison with the catalog
positions with the positions in the AGK2 and AGK2A with a 19 year
baseline and from a comparison of new positions with those in Kuestner
1900 with about a fifty year baseline.</para>
<para>The magnitudes were taken from the AGK2. Most spectral types were
determined by A. N. Vyssotsky. A few are from the Bergedorfer
Spektraldurchmusterung.</para>
</details>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="catalog.dat">
<title>The catalog</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>ID</name>
<definition>Catalog number</definition>
<units>---</units>
</field>
<field>
<name>DMz</name>
<definition>BD zone</definition>
<units>---</units>
</field>
<field>
<name>DMn</name>
<definition>BD number</definition>
<units>---</units>
</field>
<field>
<name>mag</name>
<definition>Photographic magnitude</definition>
<units>mag</units>
</field>
<field>
<name>Sp</name>
<definition>Spectral class</definition>
<units>---</units>
</field>
<field>
<name>RAh</name>
<definition>Right Ascension hours (1950)</definition>
<units>h</units>
</field>
<field>
<name>RAm</name>
<definition>Right Ascension minutes (1950)</definition>
<units>min</units>
</field>
<field>
<name>RAs</name>
<definition>Right Ascension seconds (1950)</definition>
<units>s</units>
</field>
<field>
<name>Pr-RA1</name>
<definition>First order precession in RA per century</definition>
<units>0.01s/a</units>
</field>
<field>
<name>Pr-RA2</name>
<definition>Second order precession in RA per century</definition>
<units>0.0001s2/a2</units>
</field>
<field>
<name>pmRA</name>
<definition>Proper motion in RA from AGK2 positions</definition>
<units>0.01s/a</units>
</field>
<field>
<name>pmRA2</name>
<definition>Proper motion in RA from Kuestner positions</definition>
<units>0.01s/a</units>
</field>
<field>
<name>DE-</name>
<definition>Sign of declination (1950)</definition>
<units>---</units>
</field>
<field>
<name>DEd</name>
<definition>Declination degrees (1950)</definition>
<units>deg</units>
</field>
<field>
<name>DEm</name>
<definition>Declination minutes (1950)</definition>
<units>arcmin</units>
</field>
<field>
<name>DEs</name>
<definition>Declination seconds (1950)</definition>
<units>arcsec</units>
</field>
<field>
<name>Pr-de1</name>
<definition>First order precession in dec per century</definition>
<units>arcsec/ha</units>
</field>
<field>
<name>Pr-de2</name>
<definition>Second order precession in dec per century</definition>
<units>arcsec2/ha2</units>
</field>
<field>
<name>pmdec</name>
<definition>Proper motion in DE from AGK2 positions</definition>
<units>arcsec/ha</units>
</field>
<field>
<name>pmdec2</name>
<definition>Proper motion in DE from Kuestner positions</definition>
<units>arcsec/ha</units>
</field>
<field>
<name>epoch</name>
<definition>Epoch of observation - 1900.0</definition>
<units>yr</units>
</field>
<field>
<name>rem</name>
<definition>Note for star in printed catalog
<footnote>
<para>1 = ma (blend?)
3 = pr (preceding)
4 = seq (following)
5 = bor (northern)
6 = au (southern)
* = other note in printed volume (All notes in the printed volume have not
been indicated in this version.)
the printed volume sometimes has additional information on the systems with
numerical remarks.</para>
</footnote>
</definition>
<units>---</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Nancy Grace Roman</lastName>
<affiliation>ADC/SSDOO</affiliation>
</creator>
<date>
<year>1996</year>
<month>Feb</month>
<day>01</day>
</date>
</ingest>
</history>
<identifier>I_16.xml</identifier>
</dataset>
</datasets>
If you have the XML alone in RDD[String] format,
you can convert it to a DataFrame with the Databricks utility class:
com.databricks.spark.xml.XmlReader#xmlRdd
You could use SGML to parse your text file, relying on SGML's SHORTREF feature for parsing mixed CSV-like content such as yours, as well as Wiki syntaxes. With SHORTREF you declare text tokens to be replaced by other text (typically start- and end-element tags).
<!DOCTYPE data [
<!ELEMENT data O O (field+)>
<!ELEMENT field O O (#PCDATA|markup)>
<!ELEMENT markup O O (row)>
<!ELEMENT row - - (ab+)>
<!ELEMENT ab - - (#PCDATA)>
<!ENTITY start-field "<field>">
<!SHORTREF in-data "^" start-field>
<!USEMAP in-data data>
<!ENTITY start-markup "<markup>">
<!ENTITY end-markup "</markup>">
<!SHORTREF in-field "`" start-markup>
<!USEMAP in-field field>
<!SHORTREF in-markup "`" end-markup>
<!USEMAP in-markup markup>
]>
1234^12^999^`<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>`^23232
Parsing this using SGML will result in the following
<data>
<field>1234</field>
<field>12</field>
<field>999</field>
<field>
<markup>
<row>
<ab key="someKey" value="someValue"/>
<ab key="someKey1" value="someValue1"/>
</row>
</markup>
</field>
<field>23232</field>
</data>
The SHORTREF and USEMAP declarations tell SGML to treat a caret character as a start-element tag for <field> when in data child content, and to treat a backtick character as a start-element tag for <markup> when in field child content. When in markup child content, another backtick character ends the <markup> element.
SGML will also infer omitted start- and end-element tags based on O omission indicators and content model rules.
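Outside an SGML processor, the same token-to-tag substitution can be mimicked in a few lines of Python (a rough sketch, not SGML-conformant; tag names follow the DTD above):

```python
import xml.etree.ElementTree as ET

line = '1234^12^999^`<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>`^23232'

# emulate the SHORTREF maps: ^ separates <field> elements, backticks delimit <markup>
parts = []
for f in line.split("^"):
    if f.startswith("`") and f.endswith("`"):
        parts.append("<field><markup>%s</markup></field>" % f.strip("`"))
    else:
        parts.append("<field>%s</field>" % f)
doc = "<data>%s</data>" % "".join(parts)

tree = ET.fromstring(doc)
print([ab.get("value") for ab in tree.iter("ab")])  # ['someValue', 'someValue1']
```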
Edit: to make this work without changing your data file (datafile.csv, say), instead of including the content verbatim into the master SGML file, declare and place an entity reference to it like this:
<!DOCTYPE data [
<!-- ... same declarations as above ... -->
<!ENTITY datafile SYSTEM "datafile.csv">
]>
&datafile
SGML will pull the content of datafile.csv into the datafile entity and replace the &datafile entity reference with the file content.
I tried parsing the mentioned data without using explode (DataFrame), at the RDD level. Please suggest any improvements.
Read the data as a text file and define a schema.
Split each string using the delimiter ^.
Filter out bad records that don't conform to the schema.
Match the data against the schema defined earlier.
Now you will have data like the below in a tuple, and we are left to parse the middle XML data:
(1234, 12, 999, <row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>, 23232)
xml \\ "@key" (or node.attribute("key")) will return all of the key attributes.
If you need the value someValue and are not interested in someValue1, loop through this node sequence and apply a filter of contains("key") to eliminate the other keys; I have used the key Duration that was present in my data.
Apply the projection \ "@value" on the result of the previous step to get the value.
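The key-filtering step can be sketched with the stdlib ElementTree (the payload below is a hypothetical name-value sample, not the actual feed):

```python
import xml.etree.ElementTree as ET

# hypothetical payload in the same key/value shape as the question's data
xml_part = '<row><ab key="Duration" value="120"/><ab key="ChannelNumber" value="7"/></row>'
root = ET.fromstring(xml_part)

# keep only the node whose key attribute is Duration, then read its value attribute
duration = next(ab.get("value") for ab in root.iter("ab") if ab.get("key") == "Duration")
print(duration)  # 120
```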
There is a similar question on the Cloudera forums.
// define a case class for schema matching against the input data
case class stb(server_unique_id: Int, request_type: Int, event_id: Int, stb_timestamp: String, stb_xml: String, device_id: String, secondary_timestamp: String)

val data = spark.read.textFile(args(0)).rdd // read data from the path supplied on the CLI

// check for the ^ delimiter and 7 fields, else filter out
val clean_Data = data.filter(line => line.trim().contains("^"))
  .map(line => line.split("\\^"))
  .filter(line => line.length == 7)

// match the schema, keeping records with event_id 100 whose XML contains a Duration tag
val tup_Map = clean_Data.map(line => stb(line(0).toInt, line(1).toInt, line(2).toInt, line(3), line(4), line(5), line(6)))
  .filter(line => line.event_id == 100 && line.stb_xml.contains("Duration"))

// the XML is in name-value format, so the attributes are all the same (n, v);
// xmld \\ "nv" yields a NodeSeq of self-closing tags carrying data such as Duration and ChannelNumber
val xml_Map = tup_Map.map { line =>
  val xmld = XML.loadString(line.stb_xml)
  val xmlnv = xmld \\ "nv"
  var duration = 0
  for (i <- 0 until xmlnv.length if xmlnv(i).attributes.toString().contains("Duration")) duration = (xmlnv(i) \\ "@v").text.toInt
  var channelNum = 0
  for (i <- 0 until xmlnv.length if xmlnv(i).attributes.toString().contains("ChannelNumber")) channelNum = (xmlnv(i) \\ "@v").text.toInt
  var channelType = ""
  for (i <- 0 until xmlnv.length if xmlnv(i).attributes.toString().contains("ChannelType")) channelType = (xmlnv(i) \\ "@v").text
  (duration, channelNum, channelType, line.device_id)
}

// persist xml_Map for further operations
xml_Map.persist()
Related
I'm trying to set nextcall, based on a time zone, in a cron job so it runs on the 2nd next day.
Here's my code:
<field name="nextcall"
eval="datetime.now() + (datetime.now(pytz.timezone('Asia/Ho_Chi_Minh')).replace(day=26, hour=2, minute=00, second=00) - datetime.now(pytz.timezone('Asia/Ho_Chi_Minh'))) % timedelta(hours=24)"/>
This code changes nextcall to the "Asia/Ho_Chi_Minh" time zone.
But when using it, I can't set the day, such as the 2nd next day.
I've tried to set the day, but it's not working.
Please help, thanks.
Edit:
Full code:
<record id="ir_cron_check_qty_and_move_from_internal_customer_to_customer" model="ir.cron">
<field name="name">KiotViet: Check Qty And Move Product From Internal Customer To Customer</field>
<field name="model_id" ref="model_kiotviet_cron"/>
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">model.check_qty_and_move_from_internal_customer_to_customer()</field>
<!-- set cron will run after 1 days -->
<field name="interval_number">1</field>
<field name="interval_type">days</field>
<!-- set cron will run at 2am -->
<field name="nextcall"
eval="datetime.now() + (datetime.now(pytz.timezone('Asia/Ho_Chi_Minh')).replace(day=26, hour=2, minute=00, second=00) - datetime.now(pytz.timezone('Asia/Ho_Chi_Minh'))) % timedelta(hours=24)"/>
<field name="numbercall">-1</field>
</record>
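The modulo trick above only finds the next 2 AM within 24 hours; to also control the day, it may be simpler to compute the target datetime explicitly. A sketch using the stdlib zoneinfo instead of pytz (not Odoo-specific; note Odoo stores nextcall in UTC, so the value would still need converting before being written):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("Asia/Ho_Chi_Minh")

def next_run(now=None, days_ahead=2, hour=2):
    """Return `hour`:00 local time on the day `days_ahead` days from `now`."""
    now = now or datetime.now(tz)
    return (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)

print(next_run())
```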
I am totally new to Odoo. I am learning about building modules and I am working with Odoo 13. When I tried installing the school module, it gave me the following error:
File "/home/user/odoo/odoo13/odoo/fields.py", line 2338, in convert_to_cache
raise ValueError("Wrong value for %s: %r" % (self, value))
odoo.tools.convert.ParseError: "Wrong value for ir.ui.menu.action: 'form,189'" while parsing /home/user/odoo/odoo13/custom_addons/school/views/student_view.xml:2, near
<odoo>
<data>
<record id="student_menu_action" model="ir.actions.act_window">
<field name="name">Students</field>
<field name="res_model">student.student</field>
<field name="type">form</field>
<field name="view_mode">tree,form</field>
<field name="domain">[]</field>
<field name="help" type="html
<p class="oe_view_nocontent_create">Create The First Student
</p>
</field>
</record>
<menuitem id="school_menu" name="School"/>
<menuitem id="school_student_menu" parent="school_menu" name="Student" action="student_menu_action"/>
</data>
</odoo>
I would appreciate any help; feel free to ask for more information in the comment section if necessary.
You forgot to close the double quote and the tag near the type attribute:
<field name="help" type="html
<p class="oe_view_nocontent_create">Create The First Student</p>
</field>
It should be type="html">
In the snippet below I have two different records.
In the HEADLINE we see one is an Earnings Call, while the other is a message about an acquisition.
<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Record key="18AD026E657696BE1A7AE7C0D1CE94EF321EFD4C203B31A1F87DD27DEF345872" req_sym="OUT1V-FI">
<Fields>
<Field id="7000" name="HEADLINE" value="CORRECTED TRANSCRIPT: Outokumpu Oyj(OUT1V-FI), Q3 2019 Earnings Call, 31-October-2019 9:00 AM ET" />
<Field id="7001" name="SOURCE" value="FCST" />
<Field id="7003" name="ALL_IDS" value="OUT1V-FI" />
<Field id="7046" name="PRIMARY_IDS" value="OUT1V-FI" />
<Field id="7004" name="STORY_DATE" value="20191101" />
<Field id="7005" name="STORY_TIME" value="041606" />
<Field id="7007" name="CATEGORIES" value="CN:FI,DT:EARN,DT:ERNS,DT:ER_GEN,DT:EVTS,DT:EV_ME,DT:FILNS_TS_TR,IN:METAL,LN:EN,RN:EU,RN:NE,SB:ERNS,SB:ER_GEN,SB:EVTS,SB:EV_ME" />
<Field id="7002" name="SEARCH_IDS" value="OUT1V-FI" />
<Field id="7011" name="LINK1" value="https://datadirect.factset.com/services/docretrieval?report=feed&key=U2FsdGVkX1%2fEHwXn0zpAkqjR%2bJOkauoxw0LQ2BhLtraPMDZwyAwoN9WuYQ8PMM4ZKNAXx8VpWFsDe2T%2fZ7WNdQ%3d%3d&timezone=America/New_York" />
<Field id="7039" name="FILING_SIZE" value="NULL" />
<Field id="8000" name="EVENT_IDS" value="1201149455" />
<Field id="8001" name="REPORT_IDS" value="2314010" />
<Field id="8002" name="EVENTDATE-REPORTID-TRANSCRIPTTYPE" value="20191031-2314010-C" />
<Field id="8003" name="EVENT" value="E" />
<Field id="8004" name="UPLOAD_DATE_TIME" value="2019-11-01 22:36:48" />
<Field id="8005" name="VERSION_ID" value="4379596" />
</Fields>
</Record>
<Record key="0BB357A317B871E3ED0FD0ECBD210D771E8331097964E1D9223C9BEE844E68F2" req_sym="SUBC-NO">
<Fields>
<Field id="7000" name="HEADLINE" value="CORRECTED TRANSCRIPT: Subsea 7 SA(SUBC-NO), Acquisition of McDermott International,Inc by Subsea 7 S.A Call, 23-April-2018 9:00 AM ET" />
<Field id="7001" name="SOURCE" value="FCST" />
<Field id="7003" name="ALL_IDS" value="SUBC-NO" />
<Field id="7046" name="PRIMARY_IDS" value="SUBC-NO" />
<Field id="7004" name="STORY_DATE" value="20180423" />
<Field id="7005" name="STORY_TIME" value="142404" />
<Field id="7007" name="CATEGORIES" value="CN:GB,DT:CA_MNA_GEN,DT:CORPS,DT:FILNS_TS_TR,DT:MANDA,IN:OIL,LN:EN,RN:EU,SB:EVTS,SB:MANDA" />
<Field id="7002" name="SEARCH_IDS" value="SUBC-NO" />
<Field id="7011" name="LINK1" value="https://datadirect.factset.com/services/docretrieval?report=feed&key=U2FsdGVkX1%2bJsxYfwGoI5ggt7BF%2bBr8ttuTeQZmMIWBDSxPjFIksm%2bjEDqkK5hq4NDxszCncdCgA18qo3qN5SQ%3d%3d&timezone=America/New_York" />
<Field id="7039" name="FILING_SIZE" value="NULL" />
<Field id="8000" name="EVENT_IDS" value="6235691" />
<Field id="8001" name="REPORT_IDS" value="2081721" />
<Field id="8002" name="EVENTDATE-REPORTID-TRANSCRIPTTYPE" value="20180423-2081721-C" />
<Field id="8003" name="EVENT" value="SS" />
<Field id="8004" name="UPLOAD_DATE_TIME" value="2018-04-26 22:35:20" />
<Field id="8005" name="VERSION_ID" value="3453250" />
</Fields>
</Record>
</Response>
Right now my code cannot differentiate between the two records.
from bs4 import BeautifulSoup
import pandas as pd
import xml.etree.ElementTree as ET
import glob
import os
path = "/Users/User/Downloads/Thesis papers/links/"
for filename in glob.glob(os.path.join(path, "*")):
    with open(filename) as open_file:
        content = open_file.read()
    bs = BeautifulSoup(content, "xml")
    for individual_xml in bs.find_all("Response"):
        for link in individual_xml.find_all("Fields"):
            for fields in link.find_all("Field", {"id": "7000"}):
                print(fields)
How can I specify that I only want the records where the words "Earnings Call" are included, like the first record in the XML snippet?
Solved it myself.
Here's the solution. Just added these lines to the end:
word = "Earnings Call"
if word in fields["value"]:
    print(fields)
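For reference, the same filter can be sketched with the already-imported `xml.etree.ElementTree` instead of BeautifulSoup. This is a minimal sketch, not the poster's exact code; the inline `xml_snippet` stands in for the file contents and assumes the file is well-formed XML with a `Response` root:

```python
import xml.etree.ElementTree as ET

# Stand-in for the contents of one downloaded file (illustrative only).
xml_snippet = """
<Response>
  <Record>
    <Fields>
      <Field id="7000" name="HEADLINE" value="... Q3 2019 Earnings Call ..." />
      <Field id="7001" name="SOURCE" value="FCST" />
    </Fields>
  </Record>
</Response>
"""

root = ET.fromstring(xml_snippet)
# Keep only HEADLINE fields (id 7000) whose value mentions "Earnings Call".
headlines = [
    field.get("value")
    for field in root.iter("Field")
    if field.get("id") == "7000" and "Earnings Call" in field.get("value", "")
]
print(headlines)
```

`iter("Field")` walks every `Field` element regardless of nesting depth, so the same list comprehension works whether there is one `Record` per file or many.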
Now that you have an answer, here is another solution for reference only.
from simplified_scrapy import SimplifiedDoc, req, utils

html = ''' '''
doc = SimplifiedDoc(html)
field = doc.getElementByReg('Earnings Call', tag='Field')  # locate by value
print(field)
field = doc.select('Field#7000')  # get the first Field in the document with id = 7000
# field = doc.select('Response>Fields>Field#7000')
print(field)
Result:
{'id': '7000', 'name': 'HEADLINE', 'value': 'CORRECTED TRANSCRIPT: Outokumpu Oyj(OUT1V-FI), Q3 2019 Earnings Call, 31-October-2019 9:00 AM ET', 'tag': 'Field'}
{'id': '7000', 'name': 'HEADLINE', 'value': 'CORRECTED TRANSCRIPT: Outokumpu Oyj(OUT1V-FI), Q3 2019 Earnings Call, 31-October-2019 9:00 AM ET', 'tag': 'Field'}
I want the 'working_hours' field to be visible only to the employee, their manager, and the 'hr.group_hr_user' group.
How can I hide this field automatically, without editing the form or triggering a button?
class InheritHrEmployee(models.Model):
    _inherit = 'hr.employee'

    working_hours_view = fields.Boolean(compute='hide_working_hours')

    def hide_working_hours(self):
        if self.env.uid == self.user_id.id or self.env.uid == self.parent_id.user_id.id \
                or self.env.user.has_group('hr.group_hr_user'):
            self.working_hours_view = True
        else:
            self.working_hours_view = False
XML:
<record id="hide_working_hours_for_employees" model="ir.ui.view">
    <field name="name">Hide Working Hours Employees Form</field>
    <field name="model">hr.employee</field>
    <field name="inherit_id" ref="hr.view_employee_form"/>
    <field name="arch" type="xml">
        <xpath expr="//field[@name='resource_calendar_id']" position="before">
            <field name="working_hours_view" invisible="1"/>
        </xpath>
        <xpath expr="//field[@name='resource_calendar_id']" position="attributes">
            <attribute name="attrs">{'invisible': [('working_hours_view', '=', False)]}</attribute>
        </xpath>
    </field>
</record>
Try the code below to display the working hours field only for users in the hr.group_hr_user group.
XML:
<record id="hide_working_hours_for_employees" model="ir.ui.view">
    <field name="name">Hide Working Hours Employees Form</field>
    <field name="model">hr.employee</field>
    <field name="inherit_id" ref="hr.view_employee_form"/>
    <field name="arch" type="xml">
        <xpath expr="//field[@name='resource_calendar_id']" position="before">
            <field name="working_hours_view" invisible="1"/>
        </xpath>
        <xpath expr="//field[@name='resource_calendar_id']" position="attributes">
            <attribute name="groups">hr.group_hr_user</attribute>
        </xpath>
    </field>
</record>
You can add multiple attributes in the XML file, as in the code above.
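For instance, a single `position="attributes"` xpath can set several attributes at once by stacking `<attribute>` elements. This is a sketch combining the `groups` restriction with the computed-field `attrs` expression from the question; it assumes the `working_hours_view` boolean field exists on the model:

```xml
<xpath expr="//field[@name='resource_calendar_id']" position="attributes">
    <!-- Restrict the field to the HR officer group... -->
    <attribute name="groups">hr.group_hr_user</attribute>
    <!-- ...and additionally hide it when the computed flag is False. -->
    <attribute name="attrs">{'invisible': [('working_hours_view', '=', False)]}</attribute>
</xpath>
```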
I added the field before the function and now it works automatically:
class InheritHrEmployee(models.Model):
    _inherit = 'hr.employee'

    inv = fields.Boolean(string="Invisible", compute="c_inv", store=False)

    @api.one
    def c_inv(self):
        if self.env.uid == self.user_id.id or self.env.uid == self.parent_id.user_id.id \
                or self.env.user.has_group('hr.group_hr_user'):
            self.inv = False
        else:
            self.inv = True
...like this example: make fields visible to one user and invisible to others.
I am using Pentaho Data Integration. I created a new transformation with two steps: one is a CSV file of my data, the other is an Excel file with two columns, the state names and the short form of each state name, for example ("New York", "NY").
In my CSV file I have a state column with state names like "New York". I want to use my Excel file to map "New York" to "NY".
I have googled this all day with no clear answer. Can anyone help?
You can use Merge Join. With it you can merge both files and select the desired columns. Before merging, you have to sort both files on the fields you are using for the mapping; in your case, that is the state name.
I would recommend using a Stream lookup step for this task. Check the test transformation attached; it will do what you need.
<?xml version="1.0" encoding="UTF-8"?>
<transformation-steps>
<steps>
<step>
<name>EXCEL</name>
<type>DataGrid</type>
<description/>
<distribute>Y</distribute>
<custom_distribution/>
<copies>1</copies>
<partitioning>
<method>none</method>
<schema_name/>
</partitioning>
<fields>
<field>
<name>State</name>
<type>String</type>
<format/>
<currency/>
<decimal/>
<group/>
<length>-1</length>
<precision>-1</precision>
<set_empty_string>N</set_empty_string>
</field>
<field>
<name>Short_state</name>
<type>String</type>
<format/>
<currency/>
<decimal/>
<group/>
<length>-1</length>
<precision>-1</precision>
<set_empty_string>N</set_empty_string>
</field>
</fields>
<data>
<line> <item>New York</item><item>NY</item> </line>
<line> <item>Texas</item><item>TX</item> </line>
</data>
<cluster_schema/>
<remotesteps> <input> </input> <output> </output> </remotesteps> <GUI>
<xloc>392</xloc>
<yloc>80</yloc>
<draw>Y</draw>
</GUI>
</step>
<step>
<name>CSV</name>
<type>DataGrid</type>
<description/>
<distribute>Y</distribute>
<custom_distribution/>
<copies>1</copies>
<partitioning>
<method>none</method>
<schema_name/>
</partitioning>
<fields>
<field>
<name>Full_state_name</name>
<type>String</type>
<format/>
<currency/>
<decimal/>
<group/>
<length>-1</length>
<precision>-1</precision>
<set_empty_string>N</set_empty_string>
</field>
</fields>
<data>
<line> <item>New York</item> </line>
<line> <item>Texas</item> </line>
</data>
<cluster_schema/>
<remotesteps> <input> </input> <output> </output> </remotesteps> <GUI>
<xloc>511</xloc>
<yloc>169</yloc>
<draw>Y</draw>
</GUI>
</step>
<step>
<name>Stream lookup</name>
<type>StreamLookup</type>
<description/>
<distribute>Y</distribute>
<custom_distribution/>
<copies>1</copies>
<partitioning>
<method>none</method>
<schema_name/>
</partitioning>
<from>EXCEL</from>
<input_sorted>N</input_sorted>
<preserve_memory>Y</preserve_memory>
<sorted_list>N</sorted_list>
<integer_pair>N</integer_pair>
<lookup>
<key>
<name>Full_state_name</name>
<field>State</field>
</key>
<value>
<name>State</name>
<rename>State</rename>
<default/>
<type>String</type>
</value>
<value>
<name>Short_state</name>
<rename>Short_state</rename>
<default/>
<type>String</type>
</value>
</lookup>
<cluster_schema/>
<remotesteps> <input> </input> <output> </output> </remotesteps> <GUI>
<xloc>510</xloc>
<yloc>79</yloc>
<draw>Y</draw>
</GUI>
</step>
</steps>
<order>
<hop> <from>EXCEL</from><to>Stream lookup</to><enabled>Y</enabled> </hop>
<hop> <from>CSV</from><to>Stream lookup</to><enabled>Y</enabled> </hop>
</order>
<notepads>
</notepads>
<step_error_handling>
</step_error_handling>
</transformation-steps>
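Conceptually, the Stream lookup step above acts like an in-memory left join: it loads the lookup stream (the "EXCEL" grid) into memory keyed on `State`, then enriches each row of the main stream (the "CSV" grid) by matching `Full_state_name` against that key. A minimal Python sketch of that behavior, using the field names from the transformation (illustrative only, not Pentaho code):

```python
# Lookup stream (the "EXCEL" DataGrid): full state name -> short form.
lookup = {
    "New York": "NY",
    "Texas": "TX",
}

# Main stream (the "CSV" DataGrid), one dict per row.
rows = [{"Full_state_name": "New York"}, {"Full_state_name": "Texas"}]

# For each main-stream row, fetch Short_state via the key
# Full_state_name = State, as the Stream lookup step is configured.
enriched = [
    {**row, "Short_state": lookup.get(row["Full_state_name"])}
    for row in rows
]
print(enriched)
```

Unmatched rows would get `None` for `Short_state` (Pentaho's `<default/>` value), which is the usual left-join semantics of the step.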